repo stringlengths 10–39 | pull_number int64 74–29.2k | url stringlengths 37–68 | instance_id stringlengths 14–45 | issue_numbers stringclasses 26 values | base_commit stringlengths 40–40 | patch stringlengths 525–15.9k | test_patch stringlengths 606–17.8k | created_at timestamp[s] 2015-03-20 20:39:55 to 2025-01-02 13:53:18 | readmes stringclasses 2 values | files stringlengths 365–430k | non_py_patch stringlengths 0–7.21k | new_components stringlengths 142–6.63k | version stringclasses 37 values | FAIL_TO_PASS stringlengths 13–15.6k | PASS_TO_PASS stringlengths 2–65k | environment_setup_commit stringlengths 40–40 | problem_statement stringlengths 367–40.7k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
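Before the sample rows, a minimal sketch of how a dump with this schema might be inspected programmatically. It is not part of the dataset tooling: it assumes a locally built JSONL such as `feabench-data/FEA-Bench-v1.0-medium.jsonl` (the output path used by `feabench.get_dataset` in the README quoted inside the rows), and the published `microsoft/FEA-Bench` dataset on the Hub exposes only a subset of these columns, so not every field shown above may be present there.

```python
# Sketch: load a locally built FEA-Bench JSONL and inspect the columns listed above.
# The data_files path is illustrative; substitute the file produced by
# `python -m feabench.get_dataset ...` on your machine.
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files="feabench-data/FEA-Bench-v1.0-medium.jsonl",
    split="train",
)

print(ds.column_names)        # repo, pull_number, url, instance_id, ...

row = ds[0]
print(row["instance_id"])     # e.g. "EleutherAI__lm-evaluation-harness-1566"
print(row["FAIL_TO_PASS"])    # tests the gold patch is expected to make pass
print(row["patch"][:200])     # beginning of the gold (solution) diff
```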
EleutherAI/lm-evaluation-harness
| 1566
|
https://github.com/EleutherAI/lm-evaluation-harness/pull/1566
|
EleutherAI__lm-evaluation-harness-1566
|
[]
|
49695e8d94c3ab011b7ae8814d809de30b1b1182
|
diff --git a/lm_eval/__main__.py b/lm_eval/__main__.py
index 489c1662d41..18c243d431d 100644
--- a/lm_eval/__main__.py
+++ b/lm_eval/__main__.py
@@ -53,13 +53,30 @@ def parse_value(item):
return items
-def parse_eval_args() -> argparse.Namespace:
+def check_argument_types(parser: argparse.ArgumentParser):
+ """
+ Check to make sure all CLI args are typed, raises error if not
+ """
+ for action in parser._actions:
+ if action.dest != "help" and not action.const:
+ if action.type is None:
+ raise ValueError(
+ f"Argument '{action.dest}' doesn't have a type specified."
+ )
+ else:
+ continue
+
+
+def setup_parser() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter)
- parser.add_argument("--model", "-m", default="hf", help="Name of model e.g. `hf`")
+ parser.add_argument(
+ "--model", "-m", type=str, default="hf", help="Name of model e.g. `hf`"
+ )
parser.add_argument(
"--tasks",
"-t",
default=None,
+ type=str,
metavar="task1,task2",
help="To get full list of tasks, use the command lm-eval --tasks list",
)
@@ -67,6 +84,7 @@ def parse_eval_args() -> argparse.Namespace:
"--model_args",
"-a",
default="",
+ type=str,
help="Comma separated string arguments for model, e.g. `pretrained=EleutherAI/pythia-160m,dtype=float32`",
)
parser.add_argument(
@@ -164,6 +182,7 @@ def parse_eval_args() -> argparse.Namespace:
)
parser.add_argument(
"--gen_kwargs",
+ type=dict,
default=None,
help=(
"String arguments for model generation on greedy_until tasks,"
@@ -180,6 +199,7 @@ def parse_eval_args() -> argparse.Namespace:
)
parser.add_argument(
"--wandb_args",
+ type=str,
default="",
help="Comma separated string arguments passed to wandb.init, e.g. `project=lm-eval,job_type=eval",
)
@@ -209,13 +229,19 @@ def parse_eval_args() -> argparse.Namespace:
help="Sets trust_remote_code to True to execute code to create HF Datasets from the Hub",
)
+ return parser
+
+
+def parse_eval_args(parser: argparse.ArgumentParser) -> argparse.Namespace:
+ check_argument_types(parser)
return parser.parse_args()
def cli_evaluate(args: Union[argparse.Namespace, None] = None) -> None:
if not args:
# we allow for args to be passed externally, else we parse them ourselves
- args = parse_eval_args()
+ parser = setup_parser()
+ args = parse_eval_args(parser)
if args.wandb_args:
wandb_logger = WandbLogger(**simple_parse_args_string(args.wandb_args))
|
diff --git a/tests/test_cli.py b/tests/test_cli.py
new file mode 100644
index 00000000000..feaa7340d6a
--- /dev/null
+++ b/tests/test_cli.py
@@ -0,0 +1,43 @@
+import argparse
+
+import pytest
+
+import lm_eval.__main__
+
+
+def test_cli_parse_error():
+ """
+ Assert error raised if cli args argument doesn't have type
+ """
+ with pytest.raises(ValueError):
+ parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter)
+ parser.add_argument(
+ "--model", "-m", type=str, default="hf", help="Name of model e.g. `hf`"
+ )
+ parser.add_argument(
+ "--tasks",
+ "-t",
+ default=None,
+ metavar="task1,task2",
+ help="To get full list of tasks, use the command lm-eval --tasks list",
+ )
+ lm_eval.__main__.check_argument_types(parser)
+
+
+def test_cli_parse_no_error():
+ """
+ Assert typed arguments are parsed correctly
+ """
+ parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter)
+ parser.add_argument(
+ "--model", "-m", type=str, default="hf", help="Name of model e.g. `hf`"
+ )
+ parser.add_argument(
+ "--tasks",
+ "-t",
+ type=str,
+ default=None,
+ metavar="task1,task2",
+ help="To get full list of tasks, use the command lm-eval --tasks list",
+ )
+ lm_eval.__main__.check_argument_types(parser)
| 2024-03-12T17:35:39
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"lm_eval/__main__.py": "import argparse\nimport json\nimport logging\nimport os\nimport re\nimport sys\nfrom functools import partial\nfrom pathlib import Path\nfrom typing import Union\n\nimport numpy as np\n\nfrom lm_eval import evaluator, utils\nfrom lm_eval.evaluator import request_caching_arg_to_dict\nfrom lm_eval.logging_utils import WandbLogger\nfrom lm_eval.tasks import TaskManager, include_path, initialize_tasks\nfrom lm_eval.utils import make_table, simple_parse_args_string\n\n\nDEFAULT_RESULTS_FILE = \"results.json\"\n\n\ndef _handle_non_serializable(o):\n if isinstance(o, np.int64) or isinstance(o, np.int32):\n return int(o)\n elif isinstance(o, set):\n return list(o)\n else:\n return str(o)\n\n\ndef _int_or_none_list_arg_type(max_len: int, value: str, split_char: str = \",\"):\n def parse_value(item):\n item = item.strip().lower()\n if item == \"none\":\n return None\n try:\n return int(item)\n except ValueError:\n raise argparse.ArgumentTypeError(f\"{item} is not an integer or None\")\n\n items = [parse_value(v) for v in value.split(split_char)]\n num_items = len(items)\n\n if num_items == 1:\n # Makes downstream handling the same for single and multiple values\n items = items * max_len\n elif num_items != max_len:\n raise argparse.ArgumentTypeError(\n f\"Argument requires {max_len} integers or None, separated by '{split_char}'\"\n )\n\n return items\n\n\ndef parse_eval_args() -> argparse.Namespace:\n parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter)\n parser.add_argument(\"--model\", \"-m\", default=\"hf\", help=\"Name of model e.g. `hf`\")\n parser.add_argument(\n \"--tasks\",\n \"-t\",\n default=None,\n metavar=\"task1,task2\",\n help=\"To get full list of tasks, use the command lm-eval --tasks list\",\n )\n parser.add_argument(\n \"--model_args\",\n \"-a\",\n default=\"\",\n help=\"Comma separated string arguments for model, e.g. `pretrained=EleutherAI/pythia-160m,dtype=float32`\",\n )\n parser.add_argument(\n \"--num_fewshot\",\n \"-f\",\n type=int,\n default=None,\n metavar=\"N\",\n help=\"Number of examples in few-shot context\",\n )\n parser.add_argument(\n \"--batch_size\",\n \"-b\",\n type=str,\n default=1,\n metavar=\"auto|auto:N|N\",\n help=\"Acceptable values are 'auto', 'auto:N' or N, where N is an integer. Default 1.\",\n )\n parser.add_argument(\n \"--max_batch_size\",\n type=int,\n default=None,\n metavar=\"N\",\n help=\"Maximal batch size to try with --batch_size auto.\",\n )\n parser.add_argument(\n \"--device\",\n type=str,\n default=None,\n help=\"Device to use (e.g. cuda, cuda:0, cpu).\",\n )\n parser.add_argument(\n \"--output_path\",\n \"-o\",\n default=None,\n type=str,\n metavar=\"DIR|DIR/file.json\",\n help=\"The path to the output file where the result metrics will be saved. If the path is a directory and log_samples is true, the results will be saved in the directory. Else the parent directory will be used.\",\n )\n parser.add_argument(\n \"--limit\",\n \"-L\",\n type=float,\n default=None,\n metavar=\"N|0<N<1\",\n help=\"Limit the number of examples per task. \"\n \"If <1, limit is a percentage of the total number of examples.\",\n )\n parser.add_argument(\n \"--use_cache\",\n \"-c\",\n type=str,\n default=None,\n metavar=\"DIR\",\n help=\"A path to a sqlite db file for caching model responses. `None` if not caching.\",\n )\n parser.add_argument(\n \"--cache_requests\",\n type=str,\n default=None,\n choices=[\"true\", \"refresh\", \"delete\"],\n help=\"Speed up evaluation by caching the building of dataset requests. 
`None` if not caching.\",\n )\n parser.add_argument(\n \"--check_integrity\",\n action=\"store_true\",\n help=\"Whether to run the relevant part of the test suite for the tasks.\",\n )\n parser.add_argument(\n \"--write_out\",\n \"-w\",\n action=\"store_true\",\n default=False,\n help=\"Prints the prompt for the first few documents.\",\n )\n parser.add_argument(\n \"--log_samples\",\n \"-s\",\n action=\"store_true\",\n default=False,\n help=\"If True, write out all model outputs and documents for per-sample measurement and post-hoc analysis. Use with --output_path.\",\n )\n parser.add_argument(\n \"--show_config\",\n action=\"store_true\",\n default=False,\n help=\"If True, shows the the full config of all tasks at the end of the evaluation.\",\n )\n parser.add_argument(\n \"--include_path\",\n type=str,\n default=None,\n metavar=\"DIR\",\n help=\"Additional path to include if there are external tasks to include.\",\n )\n parser.add_argument(\n \"--gen_kwargs\",\n default=None,\n help=(\n \"String arguments for model generation on greedy_until tasks,\"\n \" e.g. `temperature=0,top_k=0,top_p=0`.\"\n ),\n )\n parser.add_argument(\n \"--verbosity\",\n \"-v\",\n type=str.upper,\n default=\"INFO\",\n metavar=\"CRITICAL|ERROR|WARNING|INFO|DEBUG\",\n help=\"Controls the reported logging error level. Set to DEBUG when testing + adding new task configurations for comprehensive log output.\",\n )\n parser.add_argument(\n \"--wandb_args\",\n default=\"\",\n help=\"Comma separated string arguments passed to wandb.init, e.g. `project=lm-eval,job_type=eval\",\n )\n parser.add_argument(\n \"--predict_only\",\n \"-x\",\n action=\"store_true\",\n default=False,\n help=\"Use with --log_samples. Only model outputs will be saved and metrics will not be evaluated.\",\n )\n parser.add_argument(\n \"--seed\",\n type=partial(_int_or_none_list_arg_type, 3),\n default=\"0,1234,1234\", # for backward compatibility\n help=(\n \"Set seed for python's random, numpy and torch.\\n\"\n \"Accepts a comma-separated list of 3 values for python's random, numpy, and torch seeds, respectively, \"\n \"or a single integer to set the same seed for all three.\\n\"\n \"The values are either an integer or 'None' to not set the seed. Default is `0,1234,1234` (for backward compatibility).\\n\"\n \"E.g. `--seed 0,None,8` sets `random.seed(0)` and `torch.manual_seed(8)`. 
Here numpy's seed is not set since the second value is `None`.\\n\"\n \"E.g, `--seed 42` sets all three seeds to 42.\"\n ),\n )\n parser.add_argument(\n \"--trust_remote_code\",\n action=\"store_true\",\n help=\"Sets trust_remote_code to True to execute code to create HF Datasets from the Hub\",\n )\n\n return parser.parse_args()\n\n\ndef cli_evaluate(args: Union[argparse.Namespace, None] = None) -> None:\n if not args:\n # we allow for args to be passed externally, else we parse them ourselves\n args = parse_eval_args()\n\n if args.wandb_args:\n wandb_logger = WandbLogger(**simple_parse_args_string(args.wandb_args))\n\n eval_logger = utils.eval_logger\n eval_logger.setLevel(getattr(logging, f\"{args.verbosity}\"))\n eval_logger.info(f\"Verbosity set to {args.verbosity}\")\n os.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\n\n if args.predict_only:\n args.log_samples = True\n if (args.log_samples or args.predict_only) and not args.output_path:\n raise ValueError(\n \"Specify --output_path if providing --log_samples or --predict_only\"\n )\n\n initialize_tasks(args.verbosity)\n task_manager = TaskManager(args.verbosity, include_path=args.include_path)\n\n if args.limit:\n eval_logger.warning(\n \" --limit SHOULD ONLY BE USED FOR TESTING.\"\n \"REAL METRICS SHOULD NOT BE COMPUTED USING LIMIT.\"\n )\n if args.include_path is not None:\n eval_logger.info(f\"Including path: {args.include_path}\")\n include_path(args.include_path)\n\n if args.tasks is None:\n eval_logger.error(\"Need to specify task to evaluate.\")\n sys.exit()\n elif args.tasks == \"list\":\n eval_logger.info(\n \"Available Tasks:\\n - {}\".format(\"\\n - \".join(task_manager.all_tasks))\n )\n sys.exit()\n else:\n if os.path.isdir(args.tasks):\n import glob\n\n task_names = []\n yaml_path = os.path.join(args.tasks, \"*.yaml\")\n for yaml_file in glob.glob(yaml_path):\n config = utils.load_yaml_config(yaml_file)\n task_names.append(config)\n else:\n task_list = args.tasks.split(\",\")\n task_names = task_manager.match_tasks(task_list)\n for task in [task for task in task_list if task not in task_names]:\n if os.path.isfile(task):\n config = utils.load_yaml_config(task)\n task_names.append(config)\n task_missing = [\n task for task in task_list if task not in task_names and \"*\" not in task\n ] # we don't want errors if a wildcard (\"*\") task name was used\n\n if task_missing:\n missing = \", \".join(task_missing)\n eval_logger.error(\n f\"Tasks were not found: {missing}\\n\"\n f\"{utils.SPACING}Try `lm-eval --tasks list` for list of available tasks\",\n )\n raise ValueError(\n f\"Tasks not found: {missing}. Try `lm-eval --tasks list` for list of available tasks, or '--verbosity DEBUG' to troubleshoot task registration issues.\"\n )\n\n if args.output_path:\n path = Path(args.output_path)\n # check if file or 'dir/results.json' exists\n if path.is_file():\n raise FileExistsError(f\"File already exists at {path}\")\n output_path_file = path.joinpath(DEFAULT_RESULTS_FILE)\n if output_path_file.is_file():\n eval_logger.warning(\n f\"File {output_path_file} already exists. 
Results will be overwritten.\"\n )\n # if path json then get parent dir\n elif path.suffix in (\".json\", \".jsonl\"):\n output_path_file = path\n path.parent.mkdir(parents=True, exist_ok=True)\n path = path.parent\n else:\n path.mkdir(parents=True, exist_ok=True)\n\n # Respect user's value passed in via CLI, otherwise default to True and add to comma-separated model args\n if args.trust_remote_code:\n os.environ[\"HF_DATASETS_TRUST_REMOTE_CODE\"] = str(args.trust_remote_code)\n args.model_args = (\n args.model_args\n + f\",trust_remote_code={os.environ['HF_DATASETS_TRUST_REMOTE_CODE']}\"\n )\n\n eval_logger.info(f\"Selected Tasks: {task_names}\")\n eval_logger.info(\"Loading selected tasks...\")\n\n request_caching_args = request_caching_arg_to_dict(\n cache_requests=args.cache_requests\n )\n\n results = evaluator.simple_evaluate(\n model=args.model,\n model_args=args.model_args,\n tasks=task_names,\n num_fewshot=args.num_fewshot,\n batch_size=args.batch_size,\n max_batch_size=args.max_batch_size,\n device=args.device,\n use_cache=args.use_cache,\n limit=args.limit,\n check_integrity=args.check_integrity,\n write_out=args.write_out,\n log_samples=args.log_samples,\n gen_kwargs=args.gen_kwargs,\n task_manager=task_manager,\n verbosity=args.verbosity,\n predict_only=args.predict_only,\n random_seed=args.seed[0],\n numpy_random_seed=args.seed[1],\n torch_random_seed=args.seed[2],\n **request_caching_args,\n )\n\n if results is not None:\n if args.log_samples:\n samples = results.pop(\"samples\")\n dumped = json.dumps(\n results, indent=2, default=_handle_non_serializable, ensure_ascii=False\n )\n if args.show_config:\n print(dumped)\n\n batch_sizes = \",\".join(map(str, results[\"config\"][\"batch_sizes\"]))\n\n # Add W&B logging\n if args.wandb_args:\n try:\n wandb_logger.post_init(results)\n wandb_logger.log_eval_result()\n if args.log_samples:\n wandb_logger.log_eval_samples(samples)\n except Exception as e:\n eval_logger.info(f\"Logging to Weights and Biases failed due to {e}\")\n\n if args.output_path:\n output_path_file.open(\"w\", encoding=\"utf-8\").write(dumped)\n\n if args.log_samples:\n for task_name, config in results[\"configs\"].items():\n output_name = \"{}_{}\".format(\n re.sub(\"/|=\", \"__\", args.model_args), task_name\n )\n filename = path.joinpath(f\"{output_name}.jsonl\")\n samples_dumped = json.dumps(\n samples[task_name],\n indent=2,\n default=_handle_non_serializable,\n ensure_ascii=False,\n )\n filename.write_text(samples_dumped, encoding=\"utf-8\")\n\n print(\n f\"{args.model} ({args.model_args}), gen_kwargs: ({args.gen_kwargs}), limit: {args.limit}, num_fewshot: {args.num_fewshot}, \"\n f\"batch_size: {args.batch_size}{f' ({batch_sizes})' if batch_sizes else ''}\"\n )\n print(make_table(results))\n if \"groups\" in results:\n print(make_table(results, \"groups\"))\n\n if args.wandb_args:\n # Tear down wandb run once all the logging is done.\n wandb_logger.run.finish()\n\n\nif __name__ == \"__main__\":\n cli_evaluate()\n"}
|
{"lm_eval/__main__.py": [{"type": "function", "name": "check_argument_types", "lines": [56, 67], "signature": "def check_argument_types(parser: argparse.ArgumentParser):", "doc": "Check to make sure all CLI args are typed, raises error if not"}, {"type": "function", "name": "setup_parser", "lines": [70, 232], "signature": "def setup_parser() -> argparse.ArgumentParser:", "doc": ""}]}
| null |
["tests/test_cli.py::test_cli_parse_error", "tests/test_cli.py::test_cli_parse_no_error"]
|
[]
|
decc533d02222f3b866d9a89263277fe0cc2fcb2
|
{"first_commit_time": 1710264318.0, "pr_title": "Proposed approach for testing CLI arg parsing", "pr_body": "See discussion here: https://github.com/EleutherAI/lm-evaluation-harness/issues/1518\r\n\r\nHere's an approach to start testing CLI argument parsing:\r\n\r\n1. Separate out setting up the argument parser in `parse_eval_args` into a separate method, `setup_parser` that gets called in `parse_eval_args`\r\n2. Create unit tests that call the parser for each of the command line arguments \r\n3. Adding specific TypeError exceptions at each argument entrypoint in the `cli_evaluate` method\r\n\r\nLet me know what you think about this approach. If it seems reasonable, I'll add the tests for the rest of the methods and exceptions where it's reasonable. \r\n\r\n@LSinev @haileyschoelkopf ", "pr_timeline": [{"time": 1710267420.0, "comment": "Combination of HFArgumentParser from transformers with args setup through dataclass like https://github.com/huggingface/transformers/blob/main/examples/research_projects/wav2vec2/run_asr.py#L343 and the `__post_init__` value check like in video (link with timecode) https://youtu.be/zN4VCb0LbQI?t=592\r\nBut this still may not solve points that follow.\r\n\r\nAs for the current code, testing the parser the way presented seems like testing the argument parser, not the code of this repo module. We put 5 to something that should be a number and it works. In this case it might be useful to check that it always fails if the input is like `--numshots five`. What are the cases, which will fail at new written tests, which will not fail inside ArgumentParser? \r\n\r\nThe `try... except' example here seems to be overreacting to an already solved case \u2014 no prevention of new failures. Some future failures may be prevented (though this hypothesis should be tested by turning on failed code and rechecking) after mypy checks are turned back on (even for tests)."}, {"time": 1710426575.0, "comment": "Thanks for the feedback @LSinev ! You're right that these cases don't necessarily cover what we'd like. After thinking about this and checking the videos and the links, I decided to take a different approach and unit test whether each CLI argument, with the exception of booleans, has a type. \r\n\r\nThat way, if you input one without a type unit tests won't pass and if it's a boolean you'll have to delcare a default anyway. Let me know what you think about this approach. "}, {"time": 1710428411.0, "comment": "This seems to be a much better approach.\r\nBy the way, some boolean cli arguments may be also set like\r\n```\r\n parser.add_argument(\r\n \"--some_boolean_arg\",\r\n type=bool,\r\n default=True,\r\n help=\"do something good\",\r\n action=argparse.BooleanOptionalAction, # type: ignore[attr-defined]\r\n )\r\n```\r\nwhich also adds `--no-some_boolean_arg`. Mentioning this way in case you want check those too.\r\n"}, {"time": 1710431811.0, "comment": "Thanks!\r\n\r\n> parser.add_argument(\r\n> \"--some_boolean_arg\",\r\n> type=bool,\r\n> default=True,\r\n> help=\"do something good\",\r\n> action=argparse.BooleanOptionalAction, # type: ignore[attr-defined]\r\n> )\r\n\r\nI checked these and decided not to add a test for them since we use the `store_true` pattern generally in all our arguments and it makes sense to standardize on this, what do you think? "}, {"time": 1710437845.0, "comment": "Standardization is good for future improvements and development. 
Even more, after reading the documentation I see that `BooleanOptionalAction` is only available since python 3.9, so it is of no use as this repo should support 3.8 as well. But I am not sure if this `store_true` pattern with `default=True` is OK:\r\n ```\r\n parser.add_argument(\r\n \"--trust_remote_code\",\r\n default=True,\r\n action=\"store_true\",\r\n help=\"Sets trust_remote_code to True to execute code to create HF Datasets from the Hub\",\r\n )\r\n```\r\nwith or without this argument, the code is trusted by default. I don't know if this pattern adds an option of `--no-trust_remote_code` (and also if it depends on the Python version)."}, {"time": 1710444571.0, "comment": "The behavior of `store_true` seems somewhat confusing in general. We override to true in the case of the default and respect the user's settings, but if we don't set the default to `True`, then it defaults to `False`, at least in `3.9`: https://gist.github.com/veekaybee/2c8769789a90f219dc83a9e681773000\r\n\r\nIronically this is the [default behavior of the module](https://github.com/python/cpython/blob/c432df6d56f3e02530132321b47dcc7b914a3660/Lib/argparse.py#L1008) \ud83d\ude05 . I figured from that perspective, it was better to explicitly set it (explicit is better than implicit, etc, zen of python) even though we handle it later downstream. I can also check `3.8` if that. helps"}, {"time": 1710448561.0, "comment": "I think, your gist example/test may be more insightful with parsing of same set of args (and also setup when no args is provided) by all three defined parsers.\r\n\r\nI am a bit confused here. As far as I understand now, after this PR (with `default=True` for some boolean `store_true` arguments) merged, calling `lm_eval` with some arguments from commandline, considering `--trust_remote_code` will have effect on datasets invocation like:\r\n\r\n| Command | Trust to remote code state |\r\n|--------|--------|\r\n| (some arguments but no `--trust_remote_code` at all)| `True`|\r\n| `--trust_remote_code` | `True` |\r\n| `--trust_remote_code false` | `False` |\r\n| `--trust_remote_code true` | `True` |\r\n| `--trust_remote_code 0` | `False` |\r\n| `--trust_remote_code 1` | `True` |\r\n\r\nAlso I suppose (in this case) user or any system calling from commandline consider equivalent all typical ways of setting `True` (`1`, `true`, `T`, `True`, `TRUE`, `on`, `On`, `ON`, `Y`, `y`, `yes`, `Yes`, `YES`) and `False` (`0`, `false`, `F`, `False`, `FALSE`, `off`, `Off`, `OFF`, `N`, `n`, `no`, `No`, `NO`). Is this implied somehow, or may be tested also?\r\n\r\nI checked one with ipython 3.8\r\n```\r\nIn [1]: import argparse\r\n\r\nIn [2]: import os\r\n\r\nIn [3]: parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter)\r\n\r\nIn [4]: parser.add_argument(\r\n ...: \"--trust_remote_code\",\r\n ...: default=True,\r\n ...: action=\"store_true\",\r\n ...: help=\"Sets trust_remote_code to True to execute code to create HF Datasets from the\r\n ...: Hub\",\r\n ...: )\r\nOut[4]: _StoreTrueAction(option_strings=['--trust_remote_code'], dest='trust_remote_code', nargs=0, const=True, default=True, type=None, choices=None, help='Sets trust_remote_code to True to execute code to create HF Datasets from the Hub', metavar=None)\r\n```\r\nand then\r\n```\r\nIn [20]: parser.parse_args([])\r\nOut[20]: Namespace(trust_remote_code=True)\r\n\r\nIn [21]: parser.parse_args(['--trust_remote_code'])\r\nOut[21]: Namespace(trust_remote_code=True)\r\n```\r\nSeems, there is no way to turn off trust in remote code. 
I tried some ways to set false and didn't find any.\r\n\r\nWithout `default=True` I thought is like\r\n| Command | Trust to remote code state |\r\n|--------|--------|\r\n| (some arguments but no `--trust_remote_code` at all)| `False`|\r\n| `--trust_remote_code` | `True` |\r\n\r\nNo confusion for user, just having key/argument set \u2014 turns something on, and no trust to remote code if not specified.\r\n\r\nFound big discussion with many ways to implement (still no pre-commit check which I was actually looking for): https://stackoverflow.com/questions/15008758/parsing-boolean-values-with-argparse"}, {"time": 1710449089.0, "comment": "It seems like you mention, we might want to test this flag specifically - I'll see what I can add from a testing perspective to cover `store_true` flags as they are currently implemented (with the intention of keeping code/behavior changes as minimal as possible) \r\n\r\nBased on the thread you posted, this looks like the easiest and most accepted answer https://stackoverflow.com/a/59579733.\r\n\r\nIn looking at how HF implements this, they take a similar approach:https://github.com/huggingface/transformers/blob/11bbb505c77a1d29370cf16a964cfe73b7a76340/src/transformers/hf_argparser.py#L34C5-L34C19\r\n\r\nso we could go this way too if we wanted. \r\n"}, {"time": 1710449786.0, "comment": "> the easiest and most accepted answer\r\n\r\nIf sorted by highest score, there are more interesting answers.\r\n\r\nLeaving argument like it was before, for me seems the best for now\r\n```\r\n parser.add_argument(\r\n \"--trust_remote_code\",\r\n action=\"store_true\",\r\n help=\"Sets trust_remote_code to True to execute code to create HF Datasets from the Hub\",\r\n )\r\n```\r\nno argument \u2014 no trust, and that's all."}, {"time": 1710450162.0, "comment": "\ud83d\udc4d Works for me, I just changed the two args that take it, but am keeping the ones added for args in cases where action is not `store_true`, such as `model`."}, {"time": 1710505357.0, "comment": "Hi @veekaybee ! This approach looks good to me.\r\n\r\nAnd agree we should leave the `store_true` args as is, as was decided here! The desired behavior is for passing `--trust_remote_code` to set it to True and if not provided to be False otherwise."}, {"time": 1710508492.0, "comment": "Thanks so much both for your discussion and comments! This PR is now ready for review. "}], "issues": {}}
|
|
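The row above shows the full set of fields for one instance (EleutherAI/lm-evaluation-harness PR 1566). As a rough, hedged illustration of how these fields fit together in SWE-bench-style evaluation (per the README quoted in the row, the actual harness is `swebench.harness.run_evaluation` running in Docker), the sketch below applies `test_patch` on top of `base_commit`, checks that the `FAIL_TO_PASS` tests fail, then applies `patch` and checks that they pass. Function names and error handling are simplified and are assumptions of this sketch, not part of the dataset tooling.

```python
# Illustrative only: the real evaluation uses the patched SWE-bench harness in Docker.
import json
import subprocess


def apply_diff(diff_text: str, repo_dir: str) -> None:
    # `patch` and `test_patch` are unified diffs against `base_commit`.
    subprocess.run(["git", "apply", "-"], cwd=repo_dir,
                   input=diff_text, text=True, check=True)


def run_tests(test_ids, repo_dir: str) -> bool:
    # FAIL_TO_PASS / PASS_TO_PASS hold pytest node IDs such as
    # "tests/test_cli.py::test_cli_parse_error".
    proc = subprocess.run(["python", "-m", "pytest", *test_ids], cwd=repo_dir)
    return proc.returncode == 0


def evaluate(row: dict, repo_dir: str) -> bool:
    tests = row["FAIL_TO_PASS"]
    if isinstance(tests, str):                # stored as a JSON string in some dumps
        tests = json.loads(tests)
    subprocess.run(["git", "checkout", "-f", row["base_commit"]],
                   cwd=repo_dir, check=True)
    apply_diff(row["test_patch"], repo_dir)   # add the new/updated tests
    assert not run_tests(tests, repo_dir)     # they should fail before the change
    apply_diff(row["patch"], repo_dir)        # gold patch (or a model's prediction)
    return run_tests(tests, repo_dir)         # "resolved" if they now pass
```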
Project-MONAI/MONAI
| 465
|
https://github.com/Project-MONAI/MONAI/pull/465
|
Project-MONAI__MONAI-465
|
["461"]
|
718d11abb2310ab74321256032a264488a7883b4
|
diff --git a/docs/source/data.rst b/docs/source/data.rst
index 73a6a698bc..d998605bf8 100644
--- a/docs/source/data.rst
+++ b/docs/source/data.rst
@@ -87,3 +87,7 @@ Utilities
.. automodule:: monai.data.utils
:members:
+
+Decathalon DataLoader
+~~~~~~~~~~~~~~~~~~~~~
+.. autofunction:: monai.data.load_decathalon_datalist
diff --git a/monai/data/__init__.py b/monai/data/__init__.py
index 46aec8a01d..2cf7515081 100644
--- a/monai/data/__init__.py
+++ b/monai/data/__init__.py
@@ -19,3 +19,4 @@
from .utils import *
from .png_saver import PNGSaver
from .png_writer import write_png
+from .decathalon_dataloader import load_decathalon_datalist
diff --git a/monai/data/decathalon_dataloader.py b/monai/data/decathalon_dataloader.py
new file mode 100644
index 0000000000..13c291b938
--- /dev/null
+++ b/monai/data/decathalon_dataloader.py
@@ -0,0 +1,75 @@
+# Copyright 2020 MONAI Consortium
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+# http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import json
+
+
+def _compute_path(base_dir, element):
+ if isinstance(element, str):
+ return os.path.normpath(os.path.join(base_dir, element))
+ elif isinstance(element, list):
+ for e in element:
+ if not isinstance(e, str):
+ raise ValueError("file path must be a string.")
+ return [os.path.normpath(os.path.join(base_dir, e)) for e in element]
+ else:
+ raise ValueError("file path must be a string or a list of string.")
+
+
+def _append_paths(base_dir, is_segmentation, items):
+ for item in items:
+ if not isinstance(item, dict):
+ raise ValueError("data item must be dict.")
+ for k, v in item.items():
+ if k == "image":
+ item[k] = _compute_path(base_dir, v)
+ elif is_segmentation and k == "label":
+ item[k] = _compute_path(base_dir, v)
+ return items
+
+
+def load_decathalon_datalist(data_list_file_path, is_segmentation=True, data_list_key="training", base_dir=None):
+ """Load image/label paths of decathalon challenge from JSON file
+
+ Json file is similar to what you get from http://medicaldecathlon.com/
+ Those dataset.json files
+
+ Args:
+ data_list_file_path (str): the path to the json file of datalist
+ is_segmentation (bool): whether the datalist is for segmentation task, default is True
+ data_list_key (str): the key to get a list of dictionary to be used, default is "training"
+ base_dir (str): the base directory of the dataset, if None, use the datalist directory
+
+ Returns a list of data items, each of which is a dict keyed by element names, for example:
+
+ .. code-block::
+
+ [
+ {'image': '/workspace/data/chest_19.nii.gz', 'label': 0},
+ {'image': '/workspace/data/chest_31.nii.gz', 'label': 1}
+ ]
+
+ """
+ if not os.path.isfile(data_list_file_path):
+ raise ValueError(f"data list file {data_list_file_path} does not exist.")
+ with open(data_list_file_path) as json_file:
+ json_data = json.load(json_file)
+ if data_list_key not in json_data:
+ raise ValueError(f"data list {data_list_key} not specified in '{data_list_file_path}'.")
+ expected_data = json_data[data_list_key]
+ if data_list_key == "test":
+ expected_data = [{"image": i} for i in expected_data]
+
+ if base_dir is None:
+ base_dir = os.path.dirname(data_list_file_path)
+
+ return _append_paths(base_dir, is_segmentation, expected_data)
|
diff --git a/tests/test_load_decathalon_datalist.py b/tests/test_load_decathalon_datalist.py
new file mode 100644
index 0000000000..4afe151482
--- /dev/null
+++ b/tests/test_load_decathalon_datalist.py
@@ -0,0 +1,104 @@
+# Copyright 2020 MONAI Consortium
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+# http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+import os
+import json
+import shutil
+import tempfile
+from monai.data import load_decathalon_datalist
+
+
+class TestLoadDecathalonDatalist(unittest.TestCase):
+ def test_seg_values(self):
+ tempdir = tempfile.mkdtemp()
+ test_data = {
+ "name": "Spleen",
+ "description": "Spleen Segmentation",
+ "labels": {"0": "background", "1": "spleen"},
+ "training": [
+ {"image": "spleen_19.nii.gz", "label": "spleen_19.nii.gz"},
+ {"image": "spleen_31.nii.gz", "label": "spleen_31.nii.gz"},
+ ],
+ "test": ["spleen_15.nii.gz", "spleen_23.nii.gz"],
+ }
+ json_str = json.dumps(test_data)
+ file_path = os.path.join(tempdir, "test_data.json")
+ with open(file_path, "w") as json_file:
+ json_file.write(json_str)
+ result = load_decathalon_datalist(file_path, True, "training", tempdir)
+ self.assertEqual(result[0]["image"], os.path.join(tempdir, "spleen_19.nii.gz"))
+ self.assertEqual(result[0]["label"], os.path.join(tempdir, "spleen_19.nii.gz"))
+ shutil.rmtree(tempdir)
+
+ def test_cls_values(self):
+ tempdir = tempfile.mkdtemp()
+ test_data = {
+ "name": "ChestXRay",
+ "description": "Chest X-ray classification",
+ "labels": {"0": "background", "1": "chest"},
+ "training": [{"image": "chest_19.nii.gz", "label": 0}, {"image": "chest_31.nii.gz", "label": 1}],
+ "test": ["chest_15.nii.gz", "chest_23.nii.gz"],
+ }
+ json_str = json.dumps(test_data)
+ file_path = os.path.join(tempdir, "test_data.json")
+ with open(file_path, "w") as json_file:
+ json_file.write(json_str)
+ result = load_decathalon_datalist(file_path, False, "training", tempdir)
+ self.assertEqual(result[0]["image"], os.path.join(tempdir, "chest_19.nii.gz"))
+ self.assertEqual(result[0]["label"], 0)
+ shutil.rmtree(tempdir)
+
+ def test_seg_no_basedir(self):
+ tempdir = tempfile.mkdtemp()
+ test_data = {
+ "name": "Spleen",
+ "description": "Spleen Segmentation",
+ "labels": {"0": "background", "1": "spleen"},
+ "training": [
+ {
+ "image": os.path.join(tempdir, "spleen_19.nii.gz"),
+ "label": os.path.join(tempdir, "spleen_19.nii.gz"),
+ },
+ {
+ "image": os.path.join(tempdir, "spleen_31.nii.gz"),
+ "label": os.path.join(tempdir, "spleen_31.nii.gz"),
+ },
+ ],
+ "test": [os.path.join(tempdir, "spleen_15.nii.gz"), os.path.join(tempdir, "spleen_23.nii.gz")],
+ }
+ json_str = json.dumps(test_data)
+ file_path = os.path.join(tempdir, "test_data.json")
+ with open(file_path, "w") as json_file:
+ json_file.write(json_str)
+ result = load_decathalon_datalist(file_path, True, "training", None)
+ self.assertEqual(result[0]["image"], os.path.join(tempdir, "spleen_19.nii.gz"))
+ self.assertEqual(result[0]["label"], os.path.join(tempdir, "spleen_19.nii.gz"))
+
+ def test_seg_no_labels(self):
+ tempdir = tempfile.mkdtemp()
+ test_data = {
+ "name": "Spleen",
+ "description": "Spleen Segmentation",
+ "labels": {"0": "background", "1": "spleen"},
+ "test": ["spleen_15.nii.gz", "spleen_23.nii.gz"],
+ }
+ json_str = json.dumps(test_data)
+ file_path = os.path.join(tempdir, "test_data.json")
+ with open(file_path, "w") as json_file:
+ json_file.write(json_str)
+ result = load_decathalon_datalist(file_path, True, "test", tempdir)
+ self.assertEqual(result[0]["image"], os.path.join(tempdir, "spleen_15.nii.gz"))
+ shutil.rmtree(tempdir)
+
+
+if __name__ == "__main__":
+ unittest.main()
| 2020-06-01T14:18:19
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"docs/source/data.rst": ":github_url: https://github.com/Project-MONAI/MONAI\n\n.. _data:\n\nData\n====\n\nGeneric Interfaces\n------------------\n.. currentmodule:: monai.data\n\n`Dataset`\n~~~~~~~~~\n.. autoclass:: Dataset\n :members:\n :special-members: __getitem__\n\n`PersistentDataset`\n~~~~~~~~~~~~~~~~~~~\n.. autoclass:: PersistentDataset\n :members:\n :special-members: __getitem__\n\n`CacheDataset`\n~~~~~~~~~~~~~~\n.. autoclass:: CacheDataset\n :members:\n :special-members: __getitem__\n\n`ZipDataset`\n~~~~~~~~~~~~\n.. autoclass:: ZipDataset\n :members:\n :special-members: __getitem__\n\n`ArrayDataset`\n~~~~~~~~~~~~~~\n.. autoclass:: ArrayDataset\n :members:\n :special-members: __getitem__\n\n\nPatch-based dataset\n-------------------\n\n`GridPatchDataset`\n~~~~~~~~~~~~~~~~~~\n.. autoclass:: GridPatchDataset\n :members:\n\n\nNifti format handling\n---------------------\n\nReading\n~~~~~~~\n.. autoclass:: monai.data.NiftiDataset\n :members:\n\nWriting Nifti\n~~~~~~~~~~~~~\n.. autoclass:: monai.data.NiftiSaver\n :members:\n\n.. autofunction:: monai.data.write_nifti\n\n\nPNG format handling\n-------------------\n\nWriting PNG\n~~~~~~~~~~~\n.. autoclass:: monai.data.PNGSaver\n :members:\n\n.. autofunction:: monai.data.write_png\n\n\nSynthetic\n---------\n.. automodule:: monai.data.synthetic\n :members:\n\n\nUtilities\n---------\n.. automodule:: monai.data.utils\n :members:\n\n", "monai/data/__init__.py": "# Copyright 2020 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .csv_saver import CSVSaver\nfrom .dataset import Dataset, PersistentDataset, CacheDataset, ZipDataset, ArrayDataset\nfrom .grid_dataset import GridPatchDataset\nfrom .nifti_reader import NiftiDataset\nfrom .nifti_saver import NiftiSaver\nfrom .nifti_writer import write_nifti\nfrom .synthetic import *\nfrom .utils import *\nfrom .png_saver import PNGSaver\nfrom .png_writer import write_png\n", "monai/data/decathalon_dataloader.py": null}
|
diff --git a/docs/source/data.rst b/docs/source/data.rst
index 73a6a698bc..d998605bf8 100644
--- a/docs/source/data.rst
+++ b/docs/source/data.rst
@@ -87,3 +87,7 @@ Utilities
.. automodule:: monai.data.utils
:members:
+
+Decathalon DataLoader
+~~~~~~~~~~~~~~~~~~~~~
+.. autofunction:: monai.data.load_decathalon_datalist
|
{"monai/data/decathalon_dataloader.py": [{"type": "function", "name": "_compute_path", "lines": [16, 25], "signature": "def _compute_path(base_dir, element):", "doc": ""}, {"type": "function", "name": "_append_paths", "lines": [28, 37], "signature": "def _append_paths(base_dir, is_segmentation, items):", "doc": ""}, {"type": "function", "name": "load_decathalon_datalist", "lines": [40, 75], "signature": "def load_decathalon_datalist(data_list_file_path, is_segmentation=True, data_list_key=\"training\", base_dir=None):", "doc": "Load image/label paths of decathalon challenge from JSON file\n\nJson file is similar to what you get from http://medicaldecathlon.com/\nThose dataset.json files\n\nArgs:\n data_list_file_path (str): the path to the json file of datalist\n is_segmentation (bool): whether the datalist is for segmentation task, default is True\n data_list_key (str): the key to get a list of dictionary to be used, default is \"training\"\n base_dir (str): the base directory of the dataset, if None, use the datalist directory\n\nReturns a list of data items, each of which is a dict keyed by element names, for example:\n\n.. code-block::\n\n [\n {'image': '/workspace/data/chest_19.nii.gz', 'label': 0}, \n {'image': '/workspace/data/chest_31.nii.gz', 'label': 1}\n ]"}]}
| null |
["tests/test_load_decathalon_datalist.py::TestLoadDecathalonDatalist::test_cls_values", "tests/test_load_decathalon_datalist.py::TestLoadDecathalonDatalist::test_seg_no_basedir", "tests/test_load_decathalon_datalist.py::TestLoadDecathalonDatalist::test_seg_no_labels", "tests/test_load_decathalon_datalist.py::TestLoadDecathalonDatalist::test_seg_values"]
|
[]
|
e73257caa79309dcce1e93abf1632f4bfd75b11f
|
{"first_commit_time": 1591001255.0, "pr_title": "461 add support to load decathalon datalist", "pr_body": "Fixes #461 .\r\n\r\n### Description\r\nAs Decathalon challenge dataset is very rich and very popular in medical domain, many researchers and students use Decathalon dataset to learn medical DL skills, and we also have notebooks and examples based on Decathalon dataset.\r\nSo this PR added support to load Decathalon datalist from the JSON config file.\r\nUsers can also convert their own datalist to Decathalon format and use this tool.\r\n\r\n### Status\r\n**Ready**\r\n\r\n### Types of changes\r\n<!--- Put an `x` in all the boxes that apply, and remove the not applicable items -->\r\n- [ ] Bug fix (non-breaking change which fixes an issue)\r\n- [x] Breaking change (fix or new feature that would cause existing functionality to change)\r\n- [x] New tests added to cover the changes\r\n- [x] Docstrings/Documentation updated\r\n", "pr_timeline": [{"time": 1591021110.0, "comment": "/black"}, {"time": 1591022261.0, "comment": "/black"}, {"time": 1591028667.0, "comment": "/black"}, {"time": 1591028697.0, "comment": "Hi @wyli ,\r\n\r\nThanks for your review, I updated according to the comments.\r\nCould you please help review it again?\r\nThanks.\r\n"}, {"time": 1591029057.0, "comment": "/black"}, {"time": 1591050663.0, "comment": "/black"}], "issues": {"461": {"issue_title": "Add utility function to load dataset from JSON file", "issue_body": "**Is your feature request related to a problem? Please describe.**\r\nMost of public datasets has a separate file to store `datalist`, let's add a JSON parser first based on the Decathalon dataset format first.\r\n", "issue_timeline": []}}}
|
PyThaiNLP/pythainlp
| 1,054
|
https://github.com/PyThaiNLP/pythainlp/pull/1054
|
PyThaiNLP__pythainlp-1054
|
[]
|
2252dee57bd7be9503242fa734bf0abc48c5ddf1
|
diff --git a/docs/api/lm.rst b/docs/api/lm.rst
index 471282fd3..063aecb2d 100644
--- a/docs/api/lm.rst
+++ b/docs/api/lm.rst
@@ -6,4 +6,5 @@ pythainlp.lm
Modules
-------
+.. autofunction:: calculate_ngram_counts
.. autofunction:: remove_repeated_ngrams
\ No newline at end of file
diff --git a/pythainlp/lm/__init__.py b/pythainlp/lm/__init__.py
index f3e43e801..9fe31c161 100644
--- a/pythainlp/lm/__init__.py
+++ b/pythainlp/lm/__init__.py
@@ -3,6 +3,9 @@
# SPDX-FileType: SOURCE
# SPDX-License-Identifier: Apache-2.0
-__all__ = ["remove_repeated_ngrams"]
+__all__ = [
+ "calculate_ngram_counts",
+ "remove_repeated_ngrams"
+]
-from pythainlp.lm.text_util import remove_repeated_ngrams
+from pythainlp.lm.text_util import calculate_ngram_counts, remove_repeated_ngrams
diff --git a/pythainlp/lm/text_util.py b/pythainlp/lm/text_util.py
index 668ded3c5..0d3181d2a 100644
--- a/pythainlp/lm/text_util.py
+++ b/pythainlp/lm/text_util.py
@@ -4,7 +4,32 @@
# SPDX-License-Identifier: Apache-2.0
# ruff: noqa: C901
-from typing import List
+from typing import List, Tuple, Dict
+
+
+def calculate_ngram_counts(
+ list_words: List[str],
+ n_min: int = 2,
+ n_max: int = 4) -> Dict[Tuple[str], int]:
+ """
+ Calculates the counts of n-grams in the list words for the specified range.
+
+ :param List[str] list_words: List of string
+ :param int n_min: The minimum n-gram size (default: 2).
+ :param int n_max: The maximum n-gram size (default: 4).
+
+ :return: A dictionary where keys are n-grams and values are their counts.
+ :rtype: Dict[Tuple[str], int]
+ """
+
+ ngram_counts = {}
+
+ for n in range(n_min, n_max + 1):
+ for i in range(len(list_words) - n + 1):
+ ngram = tuple(list_words[i:i + n])
+ ngram_counts[ngram] = ngram_counts.get(ngram, 0) + 1
+
+ return ngram_counts
def remove_repeated_ngrams(string_list: List[str], n: int = 2) -> List[str]:
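A short usage sketch of the `calculate_ngram_counts` helper added in this patch, mirroring the behaviour described in its docstring and exercised by the accompanying test; the Thai token list below is made up for illustration.

```python
# Illustrative use of the new helper; the token list is a made-up example.
from pythainlp.lm import calculate_ngram_counts, remove_repeated_ngrams

tokens = ["กิน", "ข้าว", "กิน", "ข้าว", "หรือ", "ยัง"]

# Count every 2- to 4-gram over the token list (the default range).
counts = calculate_ngram_counts(tokens, n_min=2, n_max=4)
print(counts[("กิน", "ข้าว")])  # 2 -- this bigram occurs twice

# It can also be combined with the existing helper, e.g. counting bigrams
# after repeated bigrams have been collapsed.
counts_deduped = calculate_ngram_counts(
    remove_repeated_ngrams(tokens, n=2), n_min=2, n_max=2
)
```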
|
diff --git a/tests/core/test_lm.py b/tests/core/test_lm.py
index 5d25cc124..9da213d31 100644
--- a/tests/core/test_lm.py
+++ b/tests/core/test_lm.py
@@ -5,10 +5,23 @@
import unittest
-from pythainlp.lm import remove_repeated_ngrams
+from pythainlp.lm import calculate_ngram_counts, remove_repeated_ngrams
class LMTestCase(unittest.TestCase):
+ def test_calculate_ngram_counts(self):
+ self.assertEqual(
+ calculate_ngram_counts(['1', '2', '3', '4']),
+ {
+ ('1', '2'): 1,
+ ('2', '3'): 1,
+ ('3', '4'): 1,
+ ('1', '2', '3'): 1,
+ ('2', '3', '4'): 1,
+ ('1', '2', '3', '4'): 1
+ }
+ )
+
def test_remove_repeated_ngrams(self):
texts = ['เอา', 'เอา', 'แบบ', 'แบบ', 'แบบ', 'ไหน']
self.assertEqual(
| 2025-01-02T13:53:18
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"docs/api/lm.rst": ".. currentmodule:: pythainlp.lm\n\npythainlp.lm\n============\n\nModules\n-------\n\n.. autofunction:: remove_repeated_ngrams", "pythainlp/lm/__init__.py": "# -*- coding: utf-8 -*-\n# SPDX-FileCopyrightText: 2016-2025 PyThaiNLP Project\n# SPDX-FileType: SOURCE\n# SPDX-License-Identifier: Apache-2.0\n\n__all__ = [\"remove_repeated_ngrams\"]\n\nfrom pythainlp.lm.text_util import remove_repeated_ngrams\n", "pythainlp/lm/text_util.py": "# -*- coding: utf-8 -*-\n# SPDX-FileCopyrightText: 2016-2025 PyThaiNLP Project\n# SPDX-FileType: SOURCE\n# SPDX-License-Identifier: Apache-2.0\n# ruff: noqa: C901\n\nfrom typing import List\n\n\ndef remove_repeated_ngrams(string_list: List[str], n: int = 2) -> List[str]:\n \"\"\"\n Remove repeated n-grams\n\n :param List[str] string_list: List of string\n :param int n: n-gram size\n :return: List of string\n :rtype: List[str]\n\n :Example:\n ::\n\n from pythainlp.lm import remove_repeated_ngrams\n\n remove_repeated_ngrams(['เอา', 'เอา', 'แบบ', 'ไหน'], n=1)\n # output: ['เอา', 'แบบ', 'ไหน']\n \"\"\"\n if not string_list or n <= 0:\n return string_list\n\n unique_ngrams = set()\n\n output_list = []\n\n for i in range(len(string_list)):\n if i + n <= len(string_list):\n ngram = tuple(string_list[i:i + n])\n\n if ngram not in unique_ngrams:\n unique_ngrams.add(ngram)\n\n if not output_list or output_list[-(n - 1):] != list(ngram[:-1]):\n output_list.extend(ngram)\n else:\n output_list.append(ngram[-1])\n else:\n for char in string_list[i:]:\n if not output_list or output_list[-1] != char:\n output_list.append(char)\n\n return output_list\n"}
|
diff --git a/docs/api/lm.rst b/docs/api/lm.rst
index 471282fd3..063aecb2d 100644
--- a/docs/api/lm.rst
+++ b/docs/api/lm.rst
@@ -6,4 +6,5 @@ pythainlp.lm
Modules
-------
+.. autofunction:: calculate_ngram_counts
.. autofunction:: remove_repeated_ngrams
\ No newline at end of file
|
{"pythainlp/lm/text_util.py": [{"type": "function", "name": "calculate_ngram_counts", "lines": [10, 32], "signature": "def calculate_ngram_counts( list_words: List[str], n_min: int = 2, n_max: int = 4) -> Dict[Tuple[str], int]:", "doc": "Calculates the counts of n-grams in the list words for the specified range.\n\n:param List[str] list_words: List of string\n:param int n_min: The minimum n-gram size (default: 2).\n:param int n_max: The maximum n-gram size (default: 4).\n\n:return: A dictionary where keys are n-grams and values are their counts.\n:rtype: Dict[Tuple[str], int]"}]}
| null |
["tests/core/test_lm.py::LMTestCase::test_calculate_ngram_counts", "tests/core/test_lm.py::LMTestCase::test_remove_repeated_ngrams"]
|
[]
|
2252dee57bd7be9503242fa734bf0abc48c5ddf1
|
{"first_commit_time": 1735825940.0, "pr_title": "Add pythainlp.lm.calculate_ngram_counts", "pr_body": "Calculates the counts of n-grams in the list words for the specified range.\r\n\r\n```\r\n>>> from pythainlp.lm import calculate_ngram_counts\r\n>>> a=[\"1\",\"2\",\"3\",\"4\"]\r\n>>> calculate_ngram_counts(a)\r\n{('1', '2'): 1, ('2', '3'): 1, ('3', '4'): 1, ('1', '2', '3'): 1, ('2', '3', '4'): 1, ('1', '2', '3', '4'): 1}\r\n>>> \r\n```", "pr_timeline": [{"time": 1735826259.0, "comment": "Hello @wannaphong! Thanks for updating this PR. We checked the lines you've touched for [PEP\u00a08](https://www.python.org/dev/peps/pep-0008) issues, and found:\n\n\n\n\n\n\n\nThere are currently no PEP 8 issues detected in this Pull Request. Cheers! :beers: \n\n##### Comment last updated at 2025-01-02 13:57:39 UTC"}, {"time": 1735826284.0, "comment": "## [](https://sonarcloud.io/dashboard?id=PyThaiNLP_pythainlp&pullRequest=1054) **Quality Gate passed** \nIssues \n [0 New issues](https://sonarcloud.io/project/issues?id=PyThaiNLP_pythainlp&pullRequest=1054&issueStatuses=OPEN,CONFIRMED&sinceLeakPeriod=true) \n [0 Accepted issues](https://sonarcloud.io/project/issues?id=PyThaiNLP_pythainlp&pullRequest=1054&issueStatuses=ACCEPTED)\n\nMeasures \n [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=PyThaiNLP_pythainlp&pullRequest=1054&issueStatuses=OPEN,CONFIRMED&sinceLeakPeriod=true) \n [0.0% Coverage on New Code](https://sonarcloud.io/component_measures?id=PyThaiNLP_pythainlp&pullRequest=1054&metric=new_coverage&view=list) \n [0.0% Duplication on New Code](https://sonarcloud.io/component_measures?id=PyThaiNLP_pythainlp&pullRequest=1054&metric=new_duplicated_lines_density&view=list) \n \n[See analysis details on SonarQube Cloud](https://sonarcloud.io/dashboard?id=PyThaiNLP_pythainlp&pullRequest=1054)\n\n"}, {"time": 1735826661.0, "comment": "\n[](https://coveralls.io/builds/71557780)\n\ncoverage: 52.728% (-0.06%) from 52.79%\nwhen pulling **ca9446d5018159cc83ffd28a10f441022dfe9fec on add-calculate_ngram_counts**\ninto **2252dee57bd7be9503242fa734bf0abc48c5ddf1 on dev**.\n"}], "issues": {}}
|
RDFLib/rdflib
| 1,968
|
https://github.com/RDFLib/rdflib/pull/1968
|
RDFLib__rdflib-1968
|
[]
|
131d9e66e8515aa81d776969d42f58c72bc68f86
|
diff --git a/CHANGELOG.md b/CHANGELOG.md
index a43d51694..27c9fc414 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -21,6 +21,23 @@ CHANGE BARRIER is intended to reduce the potential for merge conflicts
and will be removed for release.
-->
+
+<!-- -->
+<!-- -->
+<!-- CHANGE BARRIER: START -->
+<!-- -->
+<!-- -->
+
+- Add chunk serializer that facilitates the encoding of a graph into multiple
+ N-Triples encoded chunks.
+ [PR #1968](https://github.com/RDFLib/rdflib/pull/1968).
+
+<!-- -->
+<!-- -->
+<!-- CHANGE BARRIER: END -->
+<!-- -->
+<!-- -->
+
<!-- -->
<!-- -->
<!-- CHANGE BARRIER: START -->
diff --git a/rdflib/plugins/serializers/nt.py b/rdflib/plugins/serializers/nt.py
index 913dbedf1..8efa0af84 100644
--- a/rdflib/plugins/serializers/nt.py
+++ b/rdflib/plugins/serializers/nt.py
@@ -1,3 +1,5 @@
+from __future__ import annotations
+
"""
N-Triples RDF graph serializer for RDFLib.
See <http://www.w3.org/TR/rdf-testcases/#ntriples> for details about the
@@ -5,12 +7,15 @@
"""
import codecs
import warnings
-from typing import IO, Optional
+from typing import IO, TYPE_CHECKING, Optional, Tuple, Union
from rdflib.graph import Graph
from rdflib.serializer import Serializer
from rdflib.term import Literal
+if TYPE_CHECKING:
+ from rdflib.graph import _TripleType
+
__all__ = ["NTSerializer"]
@@ -52,18 +57,20 @@ def __init__(self, store: Graph):
Serializer.__init__(self, store) # default to utf-8
-def _nt_row(triple):
+def _nt_row(triple: _TripleType):
if isinstance(triple[2], Literal):
return "%s %s %s .\n" % (
- triple[0].n3(),
- triple[1].n3(),
+ # type error: "Node" has no attribute "n3"
+ triple[0].n3(), # type: ignore[attr-defined]
+ triple[1].n3(), # type: ignore[attr-defined]
_quoteLiteral(triple[2]),
)
else:
- return "%s %s %s .\n" % (triple[0].n3(), triple[1].n3(), triple[2].n3())
+ # type error: "Node" has no attribute "n3"
+ return "%s %s %s .\n" % (triple[0].n3(), triple[1].n3(), triple[2].n3()) # type: ignore[attr-defined]
-def _quoteLiteral(l_):
+def _quoteLiteral(l_: Literal) -> str: # noqa: N802
"""
a simpler version of term.Literal.n3()
"""
@@ -80,13 +87,15 @@ def _quoteLiteral(l_):
return "%s" % encoded
-def _quote_encode(l_):
+def _quote_encode(l_: str) -> str:
return '"%s"' % l_.replace("\\", "\\\\").replace("\n", "\\n").replace(
'"', '\\"'
).replace("\r", "\\r")
-def _nt_unicode_error_resolver(err):
+def _nt_unicode_error_resolver(
+ err: UnicodeError,
+) -> Tuple[Union[str, bytes], int]:
"""
Do unicode char replaces as defined in https://www.w3.org/TR/2004/REC-rdf-testcases-20040210/#ntrip_strings
"""
@@ -96,8 +105,12 @@ def _replace_single(c):
fmt = "\\u%04X" if c <= 0xFFFF else "\\U%08X"
return fmt % c
- string = err.object[err.start : err.end]
- return "".join(_replace_single(c) for c in string), err.end
+ # type error: "UnicodeError" has no attribute "object"
+ # type error: "UnicodeError" has no attribute "start"
+ # type error: "UnicodeError" has no attribute "end"
+ string = err.object[err.start : err.end] # type: ignore[attr-defined]
+ # type error: "UnicodeError" has no attribute "end"
+ return "".join(_replace_single(c) for c in string), err.end # type: ignore[attr-defined]
codecs.register_error("_rdflib_nt_escape", _nt_unicode_error_resolver)
diff --git a/rdflib/tools/chunk_serializer.py b/rdflib/tools/chunk_serializer.py
new file mode 100644
index 000000000..cb18d3991
--- /dev/null
+++ b/rdflib/tools/chunk_serializer.py
@@ -0,0 +1,132 @@
+"""
+This file provides a single function `serialize_in_chunks()` which can serialize a
+Graph into a number of NT files with a maximum number of triples or maximum file size.
+
+There is an option to preserve any prefixes declared for the original graph in the first
+file, which will be a Turtle file.
+"""
+
+from contextlib import ExitStack, contextmanager
+from pathlib import Path
+from typing import TYPE_CHECKING, BinaryIO, Generator, Optional, Tuple
+
+from rdflib.graph import Graph
+from rdflib.plugins.serializers.nt import _nt_row
+
+# from rdflib.term import Literal
+
+# if TYPE_CHECKING:
+# from rdflib.graph import _TriplePatternType
+
+__all__ = ["serialize_in_chunks"]
+
+
+def serialize_in_chunks(
+ g: Graph,
+ max_triples: int = 10000,
+ max_file_size_kb: Optional[int] = None,
+ file_name_stem: str = "chunk",
+ output_dir: Optional[Path] = None,
+ write_prefixes: bool = False,
+) -> None:
+ """
+ Serializes a given Graph into a series of n-triples with a given length.
+
+ :param g:
+ The graph to serialize.
+
+ :param max_file_size_kb:
+ Maximum size per NT file in kB (1,000 bytes)
+ Equivalent to ~6,000 triples, depending on Literal sizes.
+
+ :param max_triples:
+ Maximum size per NT file in triples
+ Equivalent to lines in file.
+
+ If both this parameter and max_file_size_kb are set, max_file_size_kb will be used.
+
+ :param file_name_stem:
+ Prefix of each file name.
+ e.g. "chunk" = chunk_000001.nt, chunk_000002.nt...
+
+ :param output_dir:
+ The directory you want the files to be written to.
+
+ :param write_prefixes:
+ The first file created is a Turtle file containing original graph prefixes.
+
+
+ See ``../test/test_tools/test_chunk_serializer.py`` for examples of this in use.
+ """
+
+ if output_dir is None:
+ output_dir = Path.cwd()
+
+ if not output_dir.is_dir():
+ raise ValueError(
+ "If you specify an output_dir, it must actually be a directory!"
+ )
+
+ @contextmanager
+ def _start_new_file(file_no: int) -> Generator[Tuple[Path, BinaryIO], None, None]:
+ if TYPE_CHECKING:
+ # this is here because mypy gets a bit confused
+ assert output_dir is not None
+ fp = Path(output_dir) / f"{file_name_stem}_{str(file_no).zfill(6)}.nt"
+ with open(fp, "wb") as fh:
+ yield fp, fh
+
+ def _serialize_prefixes(g: Graph) -> str:
+ pres = []
+ for k, v in g.namespace_manager.namespaces():
+ pres.append(f"PREFIX {k}: <{v}>")
+
+ return "\n".join(sorted(pres)) + "\n"
+
+ if write_prefixes:
+ with open(
+ Path(output_dir) / f"{file_name_stem}_000000.ttl", "w", encoding="utf-8"
+ ) as fh:
+ fh.write(_serialize_prefixes(g))
+
+ bytes_written = 0
+ with ExitStack() as xstack:
+ if max_file_size_kb is not None:
+ max_file_size = max_file_size_kb * 1000
+ file_no = 1 if write_prefixes else 0
+ for i, t in enumerate(g.triples((None, None, None))):
+ row_bytes = _nt_row(t).encode("utf-8")
+ if len(row_bytes) > max_file_size:
+ raise ValueError(
+                    f"cannot write triple {t!r} as its serialized size of {len(row_bytes) / 1000} kB exceeds max_file_size_kb = {max_file_size_kb}"
+ )
+ if i == 0:
+ fp, fhb = xstack.enter_context(_start_new_file(file_no))
+ bytes_written = 0
+ elif (bytes_written + len(row_bytes)) >= max_file_size:
+ file_no += 1
+ fp, fhb = xstack.enter_context(_start_new_file(file_no))
+ bytes_written = 0
+
+ bytes_written += fhb.write(row_bytes)
+
+ else:
+ # count the triples in the graph
+ graph_length = len(g)
+
+ if graph_length <= max_triples:
+ # the graph is less than max so just NT serialize the whole thing
+ g.serialize(
+ destination=Path(output_dir) / f"{file_name_stem}_all.nt",
+ format="nt",
+ )
+ else:
+ # graph_length is > max_lines, make enough files for all graph
+ # no_files = math.ceil(graph_length / max_triples)
+ file_no = 1 if write_prefixes else 0
+ for i, t in enumerate(g.triples((None, None, None))):
+ if i % max_triples == 0:
+ fp, fhb = xstack.enter_context(_start_new_file(file_no))
+ file_no += 1
+ fhb.write(_nt_row(t).encode("utf-8"))
+ return
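A hedged sketch of how the new `serialize_in_chunks` helper might be driven, based on its signature and docstring above; the example graph contents and the `chunks/` output directory are assumptions for illustration, not part of the PR.

```python
# Sketch only: build a graph large enough to need several chunks, then split it.
from pathlib import Path

from rdflib import RDFS, Graph, Literal, URIRef
from rdflib.tools.chunk_serializer import serialize_in_chunks

g = Graph()
for i in range(25_000):
    g.add((URIRef(f"https://example.org/item/{i}"), RDFS.label, Literal(f"item {i}")))

out_dir = Path("chunks")
out_dir.mkdir(exist_ok=True)

# Split into N-Triples files of at most 10,000 triples each (the default),
# writing the graph's declared prefixes to a leading chunk_000000.ttl file.
serialize_in_chunks(
    g,
    max_triples=10_000,
    file_name_stem="chunk",
    output_dir=out_dir,
    write_prefixes=True,
)

# Alternatively, cap each file at roughly 50 kB instead of a triple count.
serialize_in_chunks(g, max_file_size_kb=50, file_name_stem="chunk_50k", output_dir=out_dir)
```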
|
diff --git a/test/test_tools/test_chunk_serializer.py b/test/test_tools/test_chunk_serializer.py
new file mode 100644
index 000000000..4f582b192
--- /dev/null
+++ b/test/test_tools/test_chunk_serializer.py
@@ -0,0 +1,173 @@
+from __future__ import annotations
+
+import logging
+import os
+from contextlib import ExitStack
+from pathlib import Path
+from test.data import TEST_DATA_DIR
+from test.utils import GraphHelper
+from test.utils.graph import cached_graph
+from test.utils.namespace import MF
+from test.utils.path import ctx_chdir
+from typing import TYPE_CHECKING, Any, Dict, Optional, Tuple, Union
+
+import pytest
+
+from rdflib import Graph
+from rdflib.tools.chunk_serializer import serialize_in_chunks
+
+if TYPE_CHECKING:
+ from builtins import ellipsis
+
+logger = logging.getLogger(__name__)
+
+
+def test_chunk_by_triples(tmp_path: Path):
+ g = cached_graph((TEST_DATA_DIR / "suites/w3c/trig/manifest.ttl",))
+ # this graph has 2,848 triples
+
+ # serialize into chunks file with 100 triples each
+ serialize_in_chunks(
+ g, max_triples=100, file_name_stem="chunk_100", output_dir=tmp_path
+ )
+
+ # count the resultant .nt files, should be math.ceil(2848 / 100) = 25
+ assert len(list(tmp_path.glob("*.nt"))) == 25
+
+ # check, when a graph is made from the chunk files, it's isomorphic with original
+ g2 = Graph()
+ for f in tmp_path.glob("*.nt"):
+ g2.parse(f, format="nt")
+
+ assert g.isomorphic(g2), "Reconstructed graph is not isomorphic with original"
+
+
+def test_chunk_by_size(tmp_path: Path):
+ g = cached_graph((TEST_DATA_DIR / "suites/w3c/trig/manifest.ttl",))
+ # as an NT file, this graph is 323kb
+
+ # serialize into chunks file of > 50kb each
+ serialize_in_chunks(
+ g, max_file_size_kb=50, file_name_stem="chunk_50k", output_dir=tmp_path
+ )
+
+ # check all files are size < 50kb, with a margin up to 60kb
+ for f in Path(tmp_path).glob("*.nt"):
+ assert os.path.getsize(f) < 50000
+
+ # check, when a graph is made from the chunk files, it's isomorphic with original
+ g2 = Graph()
+ for f in Path(tmp_path).glob("*.nt"):
+ g2.parse(f, format="nt")
+
+ assert g.isomorphic(g2), "Reconstructed graph is not isomorphic with original"
+
+
[email protected](
+ [
+ "test_graph_path",
+ "max_triples",
+ "max_file_size_kb",
+ "write_prefixes",
+ "set_output_dir",
+ "expected_file_count",
+ ],
+ [
+ (TEST_DATA_DIR / "defined_namespaces/mf.ttl", ..., ..., False, True, 1),
+ (TEST_DATA_DIR / "defined_namespaces/mf.ttl", ..., ..., True, False, 2),
+ (TEST_DATA_DIR / "defined_namespaces/mf.ttl", ..., 5, True, False, (3, 7)),
+ (TEST_DATA_DIR / "defined_namespaces/mf.ttl", ..., 1, False, True, (15, 25)),
+ (TEST_DATA_DIR / "defined_namespaces/mf.ttl", 10000, 1, False, True, (15, 25)),
+ (TEST_DATA_DIR / "defined_namespaces/mf.ttl", 20, ..., False, True, 5),
+ (TEST_DATA_DIR / "defined_namespaces/mf.ttl", 20, ..., True, True, 6),
+ (TEST_DATA_DIR / "defined_namespaces/mf.ttl", 100, ..., True, True, 2),
+ (TEST_DATA_DIR / "defined_namespaces/mf.ttl", 100, ..., False, True, 1),
+ ],
+)
+def test_chuking(
+ tmp_path: Path,
+ test_graph_path: Path,
+ max_triples: Union[ellipsis, int],
+ max_file_size_kb: Union[ellipsis, int, None],
+ write_prefixes: bool,
+ set_output_dir: bool,
+ expected_file_count: Optional[Union[int, Tuple[Optional[int], Optional[int]]]],
+) -> None:
+ test_graph = cached_graph((test_graph_path,))
+ kwargs: Dict[str, Any] = {"write_prefixes": write_prefixes}
+ if max_triples is not ...:
+ kwargs["max_triples"] = max_triples
+ if max_file_size_kb is not ...:
+ kwargs["max_file_size_kb"] = max_file_size_kb
+ logger.debug("kwargs = %s", kwargs)
+ with ExitStack() as xstack:
+ if set_output_dir:
+ kwargs["output_dir"] = tmp_path
+ else:
+ xstack.enter_context(ctx_chdir(tmp_path))
+ serialize_in_chunks(test_graph, **kwargs)
+
+ # set the values to defaults if they were elided in test parameters.
+ if max_file_size_kb is ...:
+ max_file_size_kb = None
+ if max_triples is ...:
+ max_triples = 10000
+
+ stem = "chunk"
+
+ output_paths = set(item.relative_to(tmp_path) for item in tmp_path.glob("**/*"))
+ # output_filenames = set(f"{item}" for item in output_paths)
+
+ if logger.isEnabledFor(logging.DEBUG):
+ logger.debug("tmp_path = %s files = %s", tmp_path, output_paths)
+
+ if isinstance(expected_file_count, tuple):
+ if expected_file_count[0] is not None:
+ assert expected_file_count[0] <= len(output_paths)
+ if expected_file_count[1] is not None:
+ assert expected_file_count[1] >= len(output_paths)
+ elif isinstance(expected_file_count, int):
+ assert expected_file_count == len(output_paths)
+
+ recovered_graph = Graph(bind_namespaces="none")
+
+ if write_prefixes is True:
+ prefixes_path = Path(f"{stem}_000000.ttl")
+ assert prefixes_path in output_paths
+ output_paths.remove(prefixes_path)
+ recovered_graph.parse(tmp_path / prefixes_path, format="turtle")
+ namespaces_data = (tmp_path / prefixes_path).read_text("utf-8")
+ assert f"{MF}" in namespaces_data
+
+ if len(output_paths) == 1:
+ all_file = Path(f"{stem}_all.nt")
+ assert all_file in output_paths
+ all_file = tmp_path / all_file
+ file_bytes = all_file.read_bytes()
+ recovered_graph.parse(all_file, format="nt")
+ if isinstance(max_file_size_kb, int):
+ assert len(file_bytes) <= (max_file_size_kb * 1000)
+ elif isinstance(max_triples, int):
+ assert len(recovered_graph) <= max_triples
+
+ elif max_file_size_kb is not None:
+ assert isinstance(max_file_size_kb, int)
+ for output_path in output_paths:
+ output_path = tmp_path / output_path
+ file_bytes = output_path.read_bytes()
+ assert len(file_bytes) <= (max_file_size_kb * 1000)
+ logger.debug("reading %s", output_path)
+ recovered_graph.parse(output_path, format="nt")
+ else:
+ assert isinstance(max_triples, int)
+ for output_path in output_paths:
+ output_path = tmp_path / output_path
+ file_bytes = output_path.read_bytes()
+ triple_count = len(recovered_graph)
+ logger.debug("reading %s", output_path)
+ recovered_graph.parse(output_path, format="nt")
+ extra_triples = len(recovered_graph) - triple_count
+ assert extra_triples <= max_triples
+
+ logger.debug("checking isomorphism")
+ GraphHelper.assert_isomorphic(test_graph, recovered_graph)
diff --git a/test/utils/path.py b/test/utils/path.py
new file mode 100644
index 000000000..40717474f
--- /dev/null
+++ b/test/utils/path.py
@@ -0,0 +1,17 @@
+import os
+from contextlib import contextmanager
+from pathlib import PurePath
+from typing import Iterator, TypeVar, Union
+
+PathLike = Union[PurePath, str]
+PathLikeT = TypeVar("PathLikeT", bound=PathLike)
+
+
+@contextmanager
+def ctx_chdir(newdir: PathLikeT) -> Iterator[PathLikeT]:
+ cwd = os.getcwd()
+ try:
+ os.chdir(f"{newdir}")
+ yield newdir
+ finally:
+ os.chdir(cwd)
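A tiny sketch of the `ctx_chdir` test helper added above, assuming it is run from inside the rdflib source tree so that `test.utils.path` is importable; the `some_tmp_dir` placeholder directory is not part of the PR.

```python
# Hypothetical use of the ctx_chdir context manager from the test utilities.
import os
from pathlib import Path

from test.utils.path import ctx_chdir

some_tmp_dir = Path("some_tmp_dir")  # placeholder directory for the example
some_tmp_dir.mkdir(exist_ok=True)
expected = some_tmp_dir.resolve()

before = os.getcwd()
with ctx_chdir(some_tmp_dir):
    # Inside the block the working directory is the requested path ...
    assert Path(os.getcwd()) == expected
# ... and the previous working directory is restored on exit.
assert os.getcwd() == before
```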
| 2022-05-22T12:06:12
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"CHANGELOG.md": "# 2022-10-16 RELEASE MAJOR.MINOR.PATCH\n\n## User facing changes\n\nThis section lists changes that have a potential impact on users of RDFLib,\nchanges with no user impact are not included in this section.\n\n<!--\nPlease add an entry for user facing changes in this section.\n\nNew changes should go at the bottom of the list but the placeholder should\nremain.\n\nNon user-facing changes does not have to be recorded in the changelog. This\nincludes changes to CI, testing, etc. These changes will show up in the \"PRs\nmerged since last release\" section but they are somewhat irrelevant to users.\n-->\n\n<!--\nCHANGE BARRIER is intended to reduce the potential for merge conflicts\nand will be removed for release.\n-->\n\n <!-- --> \n <!-- --> \n <!-- CHANGE BARRIER: START --> \n <!-- --> \n <!-- --> \n\n - Fixes passing `NamespaceManager` in `ConjunctiveGraph`'s method `get_context()`. \n The `get_context()` method will now pass the `NamespaceManager` of `ConjunctiveGraph` to the `namespace_manager` attribute of the newly created context graph, instead of the `ConjunctiveGraph` object itself. This cleans up an old FIXME commment.\n [PR #2073](https://github.com/RDFLib/rdflib/pull/2073). \n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: END -->\n<!-- -->\n<!-- -->\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: START -->\n<!-- -->\n<!-- -->\n\n- InfixOWL fixes and cleanup.\n Closed [issue #2030](https://github.com/RDFLib/rdflib/issues/2030).\n [PR #2024](https://github.com/RDFLib/rdflib/pull/2024),\n and [PR #2033](https://github.com/RDFLib/rdflib/pull/2033).\n - `rdflib.extras.infixowl.Restriction.__init__` will now raise a `ValueError`\n if there is no restriction value instead of an `AssertionError`.\n - Fixed numerous issues with\n `rdflib.extras.infixowl.Restriction.restrictionKind` which was essentially\n not working at all.\n - Fixed how `rdflib.extras.infixowl.Property.__repr__` uses\n `rdflib.namespace.OWL`. 
\n - Removed `rdflib.extras.infixowl.Infix.__ror__` and\n `rdflib.extras.infixowl.Infix.__or__` as they were broken.\n - Removed unused `rdflib.extras.infixowl.termDeletionDecorator`.\n - Added `rdflib.extras.infixowl.MalformedClassError` which will replace\n `rdflib.extras.infixowl.MalformedClass` (which is an exception) in the next\n major version.\n - Eliminated the use of mutable data structures in some argument defaults.\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: END -->\n<!-- -->\n<!-- -->\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: START -->\n<!-- -->\n<!-- -->\n\n- Fixed some cross-referencing issues in RDFLib documentation.\n Closed [issue #1878](https://github.com/RDFLib/rdflib/issues/1878).\n [PR #2036](https://github.com/RDFLib/rdflib/pull/2036).\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: END -->\n<!-- -->\n<!-- -->\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: START -->\n<!-- -->\n<!-- -->\n\n- Fixed import of `xml.sax.handler` in `rdflib.plugins.parsers.trix` so that it\n no longer tries to import it from `xml.sax.saxutils`.\n [PR #2041](https://github.com/RDFLib/rdflib/pull/2041).\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: END -->\n<!-- -->\n<!-- -->\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: START -->\n<!-- -->\n<!-- -->\n\n- Removed a pre python 3.5 regex related workaround in the REPLACE SPARQL\n function.\n [PR #2042](https://github.com/RDFLib/rdflib/pull/2042).\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: END -->\n<!-- -->\n<!-- -->\n\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: START PR #2044 -->\n<!-- -->\n<!-- -->\n\n- Fixed some issues with SPARQL XML result parsing that caused problems with\n [`lxml`](https://lxml.de/). Closed [issue #2035](https://github.com/RDFLib/rdflib/issues/2035),\n [issue #1847](https://github.com/RDFLib/rdflib/issues/1847).\n [PR #2044](https://github.com/RDFLib/rdflib/pull/2044).\n - Result parsing from\n [`TextIO`](https://docs.python.org/3/library/typing.html#typing.TextIO)\n streams now work correctly with `lxml` installed and with XML documents that\n are not `utf-8` encoded.\n - Elements inside `<results>` that are not `<result>` are now ignored.\n - Elements inside `<result>` that are not `<binding>` are now ignored.\n - Also added type hints to `rdflib.plugins.sparql.results.xmlresults`.\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: END -->\n<!-- -->\n<!-- -->\n\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: START PR #2057 -->\n<!-- -->\n<!-- -->\n\n- Added type hints.\n [PR #2057](https://github.com/RDFLib/rdflib/pull/2057).\n - `rdflib.store` and builtin stores have mostly complete type hints.\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: END PR #2057 -->\n<!-- -->\n<!-- -->\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: START PR #2066 -->\n<!-- -->\n<!-- -->\n\n- Removed pre python 3.7 compatibility code.\n [PR #2066](https://github.com/RDFLib/rdflib/pull/2066).\n - Removed fallback in case the `shutil` module does not have the `move`\n function.\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: END PR #2066 -->\n<!-- -->\n<!-- -->\n\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: START #2068 -->\n<!-- -->\n<!-- -->\n\n- Improve file-URI and path handling in `Graph.serialize` and `Result.serialize` to\n address problems with windows path handling in `Result.serialize` and to make\n the behavior between `Graph.serialize` and `Result.serialie` more consistent.\n Closed [issue #2067](https://github.com/RDFLib/rdflib/issues/2067).\n [PR #2068](https://github.com/RDFLib/rdflib/pull/2068).\n - String values for the `destination` argument will now only be 
treated as\n file URIs if `urllib.parse.urlparse` returns their schema as `file`.\n - Simplified file writing to avoid a temporary file.\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: END #2068 -->\n<!-- -->\n<!-- -->\n\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: START PR #2069 -->\n<!-- -->\n<!-- -->\n\n- Narrow the type of context-identifiers/graph-names from `rdflib.term.Node` to\n `rdflib.term.IdentifiedNode` as no supported abstract syntax allows for other\n types of context-identifiers.\n [PR #2069](https://github.com/RDFLib/rdflib/pull/2069).\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: END PR #2069 -->\n<!-- -->\n<!-- -->\n\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: START PR #2070 -->\n<!-- -->\n<!-- -->\n\n- Always parse HexTuple files as utf-8. \n [PR #2070](https://github.com/RDFLib/rdflib/pull/2070).\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: END PR #2070 -->\n<!-- -->\n<!-- -->\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: START -->\n<!-- -->\n<!-- -->\n\n- PLACEHOLDER.\n Description of changes.\n Closed [issue #....](https://github.com/RDFLib/rdflib/issues/).\n [PR #....](https://ichard26.github.io/next-pr-number/?owner=RDFLib&name=rdflib).\n\n<!-- -->\n<!-- -->\n<!-- CHANGE BARRIER: END -->\n<!-- -->\n<!-- -->\n\n## PRs merged since last release\n\n<!-- This will be auto generated with:\n\ngh search prs --repo RDFLib/rdflib --merged --base master --json assignees,author,authorAssociation,body,closedAt,commentsCount,createdAt,id,isLocked,isPullRequest,labels,number,repository,state,title,updatedAt,url --limit 1000 --jq '[.[] | select(.closedAt >= \"2022-07-17T00:00:00Z\")]' | jq '(. |= sort_by(.closedAt)) | reverse' | tee /var/tmp/merged-prs.json\n\njq -r '.[] | [ .url, .title ] | @tsv' /var/tmp/merged-prs.json | sort -r | awk -F$'\\t' '(match($1, \"^.*/([^/]+)$\", matches)){printf(\"* %s\\n [PR #%s](%s)\\n\", $2, matches[1], $1)}'\n\n-->\n\n# 2022-07-16 RELEASE 6.2.0\n\nThis is a minor release that includes bug fixes and features.\n\n## User facing changes\n\nThis section lists changes that have a potential impact on users of RDFLib,\nchanges with no user impact are not included in this section.\n\n- SPARQL: Fixed handing of `HAVING` clause with variable composition. Closed\n [issue #936](https://github.com/RDFLib/rdflib/issues/936) and [issue\n #935](https://github.com/RDFLib/rdflib/pull/935), [PR\n #1093](https://github.com/RDFLib/rdflib/pull/1093).\n- JSON-LD parser: better support for content negotiation. Closed [issue\n #1423](https://github.com/RDFLib/rdflib/issues/1423), [PR\n #1436](https://github.com/RDFLib/rdflib/pull/1436).\n- Removed the following functions that were marked as deprecated and scheduled\n for removal in version 6.0.0: `Graph.load`, `Graph.seq`, `Graph.comment`,\n `Graph.label`. [PR #1527](https://github.com/RDFLib/rdflib/pull/1527).\n- Use `functools.total_ordering` to implement most comparison operations for\n `rdflib.paths.Path`. Closed [issue\n #685](https://github.com/RDFLib/rdflib/issues/685), [PR\n #1528](https://github.com/RDFLib/rdflib/pull/1528).\n- Fixed error handling for invalid URIs. Closed [issue\n #821](https://github.com/RDFLib/rdflib/issues/821), [PR\n #1529](https://github.com/RDFLib/rdflib/pull/1529).\n- InfixOWL: Fixed handling of cardinality 0. 
Closed [issue\n #1453](https://github.com/RDFLib/rdflib/issues/1453) and [issue\n #944](https://github.com/RDFLib/rdflib/pull/1530), [PR\n #1530](https://github.com/RDFLib/rdflib/pull/1530).\n- Added quad support to handling to `rdflib.graph.ReadOnlyGraphAggregate.quads`.\n Closed [issue #430](https://github.com/RDFLib/rdflib/issues/430), [PR\n #1590](https://github.com/RDFLib/rdflib/pull/1590)\n- Fixed base validation used when joining URIs. [PR\n #1607](https://github.com/RDFLib/rdflib/pull/1607).\n- Add GEO defined namespace for GeoSPARQL. Closed [issue\n #1371](https://github.com/RDFLib/rdflib/issues/1371), [PR\n #1622](https://github.com/RDFLib/rdflib/pull/1622).\n- Explicitly raise exception when\n `rdflib.plugins.stores.sparqlstore.SPARQLStore.update` is called. Closed\n [issue #1032](https://github.com/RDFLib/rdflib/issues/1032), [PR\n #1623](https://github.com/RDFLib/rdflib/pull/1623).\n- Added `rdflib.plugins.sparql.processor.prepareUpdate`. Closed [issue\n #272](https://github.com/RDFLib/rdflib/issues/272) and [discussion\n #1581](https://github.com/RDFLib/rdflib/discussions/1581), [PR\n #1624](https://github.com/RDFLib/rdflib/pull/1624).\n- Added `rdflib.namespace.DefinedNamespaceMeta.__dir__`. Closed [issue\n #1593](https://github.com/RDFLib/rdflib/issues/1593), [PR\n #1626](https://github.com/RDFLib/rdflib/pull/1626).\n- Removed `TypeCheckError`, `SubjectTypeError`, `PredicateTypeError`,\n `ObjectTypeError` and `ContextTypeError` as these exceptions are not raised by\n RDFLib and their existence will only confuse users which may expect them to be\n used. Also remove corresponding `check_context`, `check_subject`,\n `check_predicate`, `check_object`, `check_statement`, `check_pattern` that is\n unused. [PR #1640](https://github.com/RDFLib/rdflib/pull/1640).\n- Improved the population of the `Accept` HTTP header so that it is correctly\n populated for all formats. [PR\n #1643](https://github.com/RDFLib/rdflib/pull/1643).\n- Fixed some issues with SPARQL Algebra handling/translation. [PR\n #1645](https://github.com/RDFLib/rdflib/pull/1645).\n- Add `nquads` to recognized file extensions.\n [PR #1653](https://github.com/RDFLib/rdflib/pull/1653).\n- Fixed issues that prevented HexTuples roundtripping.\n [PR #1656](https://github.com/RDFLib/rdflib/pull/1656).\n- Make `rdflib.plugins.sparql.operators.unregister_custom_function` idempotent.\n Closed [issue #1492](https://github.com/RDFLib/rdflib/issues/1492),\n [PR #1659](https://github.com/RDFLib/rdflib/pull/1659).\n- Fixed the handling of escape sequences in the N-Triples and N-Quads parsers.\n These parsers will now correctly handle strings like `\"\\\\r\"`. The time it\n takes for these parsers to parse strings with escape sequences will be\n increased, and the increase will be correlated with the amount of escape\n sequences that occur in a string. For strings with many escape sequences the\n parsing speed seems to be almost 4 times slower. Closed [issue\n #1655](https://github.com/RDFLib/rdflib/issues/1655), [PR\n #1663](https://github.com/RDFLib/rdflib/pull/1663).\n - Also marked `rdflib.compat.decodeStringEscape` as deprecated as this\n function is not used anywhere in RDFLib anymore and the utility that it does\n provide is not implemented correctly. It will be removed in RDFLib 7.0.0\n- Added an abstract class `IdentifiedNode` as a superclass of `BNode` and\n `URIRef`. 
Closed [issue #1526](https://github.com/RDFLib/rdflib/issues/1526),\n [PR #1680](https://github.com/RDFLib/rdflib/pull/1680).\n- Fixed turtle serialization of `rdf:type` in subject, object. Closed [issue\n #1649](https://github.com/RDFLib/rdflib/issues/1649), [PR\n #1649](https://github.com/RDFLib/rdflib/pull/1684).\n- Fixed turtle serialization of PNames that contain brackets. Closed [issue\n #1661](https://github.com/RDFLib/rdflib/issues/1661), [PR\n #1678](https://github.com/RDFLib/rdflib/pull/1678).\n- Added support for selecting which namespace prefixes to bind. Closed [issue\n #1679](https://github.com/RDFLib/rdflib/issues/1679) and [issue #1880](https://github.com/RDFLib/rdflib/pull/1880), [PR\n #1686](https://github.com/RDFLib/rdflib/pull/1686), [PR\n #1845](https://github.com/RDFLib/rdflib/pull/1845) and [PR\n #2018](https://github.com/RDFLib/rdflib/pull/2018).\n - Also added `ConjunctiveGraph.get_graph`.\n - Also added an `override` argument to `Store.bind` which behaves similarly to\n the `override` parameter for `NamespaceManager.bind`.\n - Also fixed handing of support of the `override` parameter to\n `NamespaceManager.bind` by passing.\n- Eliminated a `DeprecationWarning` related to plugin loading [issue\n #1631](https://github.com/RDFLib/rdflib/issues/1631), [PR\n #1694](https://github.com/RDFLib/rdflib/pull/1694).\n- Removed the `rdflib.graph.ContextNode` and `rdflib.graph.DatasetQuad` type\n aliases. These were not being widely used in RDFLib and were also not correct.\n [PR #1695](https://github.com/RDFLib/rdflib/pull/1695).\n- Added `DefinedNamespace.as_jsonld_context`. [PR\n #1706](https://github.com/RDFLib/rdflib/pull/1706).\n- Added `rdflib.namespace.WGS` for WGS84. Closed [issue\n #1709](https://github.com/RDFLib/rdflib/issues/1709), [PR\n #1710](https://github.com/RDFLib/rdflib/pull/1710).\n- Improved performance of `DefinedNamespace` by caching attribute values. [PR\n #1718](https://github.com/RDFLib/rdflib/pull/1718).\n- Only configure python logging if `sys.stderr` has a `isatty` attribute. Closed\n [issue #1760](https://github.com/RDFLib/rdflib/issues/1760), [PR\n #1761](https://github.com/RDFLib/rdflib/pull/1761).\n- Removed unused `rdflib.compat.etree_register_namespace`. [PR\n #1768](https://github.com/RDFLib/rdflib/pull/1768).\n- Fixed numeric shortcut handling in `rdflib.util.from_n3`. Closed [issue\n #1769](https://github.com/RDFLib/rdflib/issues/1769), [PR\n #1771](https://github.com/RDFLib/rdflib/pull/1771).\n- Add ability to detect and mark ill-typed literals. Closed [issue\n #1757](https://github.com/RDFLib/rdflib/issues/1757) and [issue\n #848](https://github.com/RDFLib/rdflib/issues/848), [PR\n #1773](https://github.com/RDFLib/rdflib/pull/1773) and [PR\n #2003](https://github.com/RDFLib/rdflib/pull/2003).\n- Optimized `NamespaceManager.compute_qname` by caching validity. [PR\n #1779](https://github.com/RDFLib/rdflib/pull/1779).\n- SPARQL: Fixed the handling of `EXISTS` inside `BIND` for SPARQL. This was\n raising an exception during evaluation before but is now correctly handled.\n Closed [issue #1472](https://github.com/RDFLib/rdflib/issues/1472), [PR\n #1794](https://github.com/RDFLib/rdflib/pull/1794).\n- Propagate exceptions from SPARQL TSV result parser. Closed [issue\n #1477](https://github.com/RDFLib/rdflib/issues/1477), [PR\n #1809](https://github.com/RDFLib/rdflib/pull/1809)\n- Eliminate usage of `rdflib.term.RDFLibGenid` as a type as this caused issues\n with querying. 
Closed [issue\n #1808](https://github.com/RDFLib/rdflib/issues/1808), [PR\n #1821](https://github.com/RDFLib/rdflib/pull/1821)\n- Fixed handing of `DefinedNamespace` control attributes so that\n `inspect.signature` works correctly on defined namespaces. [PR\n #1825](https://github.com/RDFLib/rdflib/pull/1825).\n- Fixed namespace rebinding in `Memory`, `SimpleMemory` and `BerkelyDB` stores.\n Closed [issue #1826](https://github.com/RDFLib/rdflib/issues/1826), [PR\n #1843](https://github.com/RDFLib/rdflib/pull/1843).\n- Fixed issues with the N3 serializer. Closed [issue\n #1701](https://github.com/RDFLib/rdflib/issues/1701) and [issue\n #1807](https://github.com/RDFLib/rdflib/issues/1807), [PR\n #1858](https://github.com/RDFLib/rdflib/pull/1858):\n - The N3 serializer was incorrectly considers a subject as seralized if it is serialized in a quoted graph.\n - The N3 serializer does not consider that the predicate of a triple can also\nbe a graph.\n- Added `NamespaceManager.expand_curie`. Closed [issue\n #1868](https://github.com/RDFLib/rdflib/issues/1868), [PR\n #1869](https://github.com/RDFLib/rdflib/pull/1869).\n- Added `Literal.__sub__` and support for datetimes to both `Literal.__add__`\n and `Literal.__sub__`. [PR #1870](https://github.com/RDFLib/rdflib/pull/1870).\n- SPARQL: Fix `None`/undefined handing in `GROUP_CONCAT`. Closed [issue\n #1467](https://github.com/RDFLib/rdflib/issues/1467), [PR\n #1887](https://github.com/RDFLib/rdflib/pull/1887).\n- SPARQL: Fixed result handling for `SERVICE` directive. Closed [issue\n #1278](https://github.com/RDFLib/rdflib/issues/1278), [PR\n #1894](https://github.com/RDFLib/rdflib/pull/1894).\n- Change the skolem default authority for RDFLib from `http://rdlib.net/` to\n `https://rdflib.github.io` and also change other uses of `http://rdlib.net/`\n to `https://rdflib.github.io`. Closed [issue\n #1824](https://github.com/RDFLib/rdflib/issues/1824), [PR\n #1901](https://github.com/RDFLib/rdflib/pull/1901).\n- Fixes handling of non-ascii characters in IRIs. Closed [issue\n #1429](https://github.com/RDFLib/rdflib/issues/1429), [PR\n #1902](https://github.com/RDFLib/rdflib/pull/1902).\n- Pass `generate` to `NamespaceManager.compute_qname` from\n `NamespaceManager.compute_qname_strict` so it raises an error in the same\n case as the \"non-strict\" version. [PR\n #1934](https://github.com/RDFLib/rdflib/pull/1934).\n- Log warnings when encountering ill-typed literals.\n [PR #1944](https://github.com/RDFLib/rdflib/pull/1944).\n- Fixed error handling in TriX serializer. [PR\n #1945](https://github.com/RDFLib/rdflib/pull/1945).\n- Fixed QName generation in XML serializer.\n [PR #1951](https://github.com/RDFLib/rdflib/pull/1951)\n- Remove unnecessary hex expansion for PN_LOCAL in SPARQL parser. Closed [issue\n #1957](https://github.com/RDFLib/rdflib/issues/1957), \n [PR #1959](https://github.com/RDFLib/rdflib/pull/1959).\n- Changed the TriX parser to support both `trix` and `TriX` as root element. [PR\n #1966](https://github.com/RDFLib/rdflib/pull/1966).\n- Fix SPARQL CSV result serialization of blank nodes.\n [PR #1979](https://github.com/RDFLib/rdflib/pull/1979).\n- Added a `URIRef.fragment` property.\n [PR #1991](https://github.com/RDFLib/rdflib/pull/1991).\n- Remove superfluous newline from N-Triples output. Closed [issue\n #1998](https://github.com/RDFLib/rdflib/issues/1998), [PR\n #1999](https://github.com/RDFLib/rdflib/pull/1999).\n- Added a bunch of type hints. 
## PRs merged since last release

* Fallback to old `Store.bind` signature on `TypeError`
  [PR #2018](https://github.com/RDFLib/rdflib/pull/2018)
* Fix/ignore flake8 errors in `rdflib/parser.py`
  [PR #2016](https://github.com/RDFLib/rdflib/pull/2016)
* Update black to 22.6.0
  [PR #2015](https://github.com/RDFLib/rdflib/pull/2015)
* Fix for #1873 avoid AttributeError raised ...
  [PR #2013](https://github.com/RDFLib/rdflib/pull/2013)
* Change Literal.ill_formed to Literal.ill_typed
  [PR #2003](https://github.com/RDFLib/rdflib/pull/2003)
* Continuation of infixowl update and coverage improvement
  [PR #2001](https://github.com/RDFLib/rdflib/pull/2001)
* Update test README
  [PR #2000](https://github.com/RDFLib/rdflib/pull/2000)
* Remove extra newline from N-Triples output
  [PR #1999](https://github.com/RDFLib/rdflib/pull/1999)
* Infixowl cleanup
  [PR #1996](https://github.com/RDFLib/rdflib/pull/1996)
* Add line-specific # noqa to `infixowl.py`, remove exclusion from pyproject.toml
  [PR #1994](https://github.com/RDFLib/rdflib/pull/1994)
* Bump actions/setup-python from 3 to 4
  [PR #1992](https://github.com/RDFLib/rdflib/pull/1992)
* Add fragment property to URIRef
  [PR #1991](https://github.com/RDFLib/rdflib/pull/1991)
* test: run tests on python 3.11 also
  [PR #1989](https://github.com/RDFLib/rdflib/pull/1989)
* test: rework SPARQL test suite
  [PR #1988](https://github.com/RDFLib/rdflib/pull/1988)
* test: rework RDF/XML test suite
  [PR #1987](https://github.com/RDFLib/rdflib/pull/1987)
* Rework turtle-like test suites
  [PR #1986](https://github.com/RDFLib/rdflib/pull/1986)
* Improve docstring of `Graph.serialize`
  [PR #1984](https://github.com/RDFLib/rdflib/pull/1984)
* Add more tests for graph_diff
  [PR #1983](https://github.com/RDFLib/rdflib/pull/1983)
* Convert some more graph tests to pytest
  [PR #1982](https://github.com/RDFLib/rdflib/pull/1982)
* Fix SPARQL test data
  [PR #1981](https://github.com/RDFLib/rdflib/pull/1981)
* Add more namespaces to test utils
  [PR #1980](https://github.com/RDFLib/rdflib/pull/1980)
* Fix SPARQL CSV result serialization of blank nodes
  [PR #1979](https://github.com/RDFLib/rdflib/pull/1979)
* correct italic markup in plugin stores docs
  [PR #1977](https://github.com/RDFLib/rdflib/pull/1977)
* escape literal * symbol in `rdflib.paths` docs
  [PR #1976](https://github.com/RDFLib/rdflib/pull/1976)
* Update sphinx requirement from <5 to <6
  [PR #1975](https://github.com/RDFLib/rdflib/pull/1975)
* Remove `pytest-subtest`
  [PR #1973](https://github.com/RDFLib/rdflib/pull/1973)
* style: fix/ignore flake8 errors in store related code
  [PR #1971](https://github.com/RDFLib/rdflib/pull/1971)
* build: speed up flake8 by ignoring test data
  [PR #1970](https://github.com/RDFLib/rdflib/pull/1970)
* Fix trix parser
  [PR #1966](https://github.com/RDFLib/rdflib/pull/1966)
* Add more typing for SPARQL
  [PR #1965](https://github.com/RDFLib/rdflib/pull/1965)
* style: fix/ignore flake8 errors in `rdflib/plugins/sparql/`
  [PR #1964](https://github.com/RDFLib/rdflib/pull/1964)
* test: fix `None` comparisons
  [PR #1963](https://github.com/RDFLib/rdflib/pull/1963)
* style: fix/ingore some flake8 errors in `rdflib/graph.py`
  [PR #1962](https://github.com/RDFLib/rdflib/pull/1962)
* test: convert `test/jsonld/test_util.py` to pytest
  [PR #1961](https://github.com/RDFLib/rdflib/pull/1961)
* Fix for
issue1957 sparql parser percent encoded reserved chars\n [PR #1959](https://github.com/RDFLib/rdflib/pull/1959)\n* test: convert `test_graph_http.py` to pytest\n [PR #1956](https://github.com/RDFLib/rdflib/pull/1956)\n* edit tabs to spaces\n [PR #1952](https://github.com/RDFLib/rdflib/pull/1952)\n* fix sonarcloud-reported bug in xmlwriter, add test\n [PR #1951](https://github.com/RDFLib/rdflib/pull/1951)\n* test: convert test_literal.py to pytest\n [PR #1949](https://github.com/RDFLib/rdflib/pull/1949)\n* style: ignore flake8 name errors for existing names\n [PR #1948](https://github.com/RDFLib/rdflib/pull/1948)\n* test: remove unused imports in test code\n [PR #1947](https://github.com/RDFLib/rdflib/pull/1947)\n* test: fix `GraphHelper.quad_set` handling of Dataset\n [PR #1946](https://github.com/RDFLib/rdflib/pull/1946)\n* fix for sonarcloud-reported bug\n [PR #1945](https://github.com/RDFLib/rdflib/pull/1945)\n* Logging exceptions from Literal value converters\n [PR #1944](https://github.com/RDFLib/rdflib/pull/1944)\n* fix outmoded `x and x or y` idiom in `infixowl.py`\n [PR #1943](https://github.com/RDFLib/rdflib/pull/1943)\n* Address lingering instances of deprecated `tempfile.mktemp`\n [PR #1942](https://github.com/RDFLib/rdflib/pull/1942)\n* Add CODEOWNERS\n [PR #1941](https://github.com/RDFLib/rdflib/pull/1941)\n* Bump actions/setup-python from 2 to 3\n [PR #1940](https://github.com/RDFLib/rdflib/pull/1940)\n* Bump actions/checkout from 2 to 3\n [PR #1939](https://github.com/RDFLib/rdflib/pull/1939)\n* Bump actions/cache from 2 to 3\n [PR #1938](https://github.com/RDFLib/rdflib/pull/1938)\n* Bump actions/setup-java from 2 to 3\n [PR #1937](https://github.com/RDFLib/rdflib/pull/1937)\n* test: move rdfs.ttl into `test/data/defined_namespaces`\n [PR #1936](https://github.com/RDFLib/rdflib/pull/1936)\n* feat: add tests and typing for `rdflib.utils.{get_tree,find_roots}`\n [PR #1935](https://github.com/RDFLib/rdflib/pull/1935)\n* Passing \"generate\" option through in compute_qname_strict\n [PR #1934](https://github.com/RDFLib/rdflib/pull/1934)\n* build: add GitHub Actions to dependabot\n [PR #1933](https://github.com/RDFLib/rdflib/pull/1933)\n* test: move `EARL` and `RDFT` namespaces to separate files\n [PR #1931](https://github.com/RDFLib/rdflib/pull/1931)\n* Removed old and unused `test/data/suites/DAWG/data-r2`\n [PR #1930](https://github.com/RDFLib/rdflib/pull/1930)\n* Added SPARQL unicode numeric codepoint escape tests\n [PR #1929](https://github.com/RDFLib/rdflib/pull/1929)\n* style: enable and baseline flakeheaven\n [PR #1928](https://github.com/RDFLib/rdflib/pull/1928)\n* feat: add typing for `rdflib/plugins/sparql`\n [PR #1926](https://github.com/RDFLib/rdflib/pull/1926)\n* Switch to latest DAWG test suite\n [PR #1925](https://github.com/RDFLib/rdflib/pull/1925)\n* Move `test/data/suites/DAWG/rdflib`\n [PR #1924](https://github.com/RDFLib/rdflib/pull/1924)\n* style: normalize quoting with black\n [PR #1916](https://github.com/RDFLib/rdflib/pull/1916)\n* Added test for example at CBD definition. 
Fixes #1914.\n [PR #1915](https://github.com/RDFLib/rdflib/pull/1915)\n* Rename `test/data/suites/DAWG/data-r2-1.0`\n [PR #1908](https://github.com/RDFLib/rdflib/pull/1908)\n* Move `DAWG/data-sparql11` to `w3c/sparql11/data-sparql11`\n [PR #1907](https://github.com/RDFLib/rdflib/pull/1907)\n* Add n3 test suite runner\n [PR #1906](https://github.com/RDFLib/rdflib/pull/1906)\n* Migrated the various `test_*_w3c.py` test files into `test/test_w3c_spec/`\n [PR #1904](https://github.com/RDFLib/rdflib/pull/1904)\n* Fixes #1429, add `iri2uri`\n [PR #1902](https://github.com/RDFLib/rdflib/pull/1902)\n* Fix for #1824 `s,http://rdlib.net,http://rdflib.net,g`\n [PR #1901](https://github.com/RDFLib/rdflib/pull/1901)\n* test: Add more tests for Graph serialize\n [PR #1898](https://github.com/RDFLib/rdflib/pull/1898)\n* test: earlier assert rewrite for test utitlities\n [PR #1897](https://github.com/RDFLib/rdflib/pull/1897)\n* test: Add more tests for test utilities\n [PR #1896](https://github.com/RDFLib/rdflib/pull/1896)\n* test: add more graph variants highlighting bugs\n [PR #1895](https://github.com/RDFLib/rdflib/pull/1895)\n* Fix simple literals returned as NULL using SERVICE (issue #1278)\n [PR #1894](https://github.com/RDFLib/rdflib/pull/1894)\n* W3 test reorg\n [PR #1891](https://github.com/RDFLib/rdflib/pull/1891)\n* Improved mock HTTP Server\n [PR #1888](https://github.com/RDFLib/rdflib/pull/1888)\n* Fix `None`/undefined handing in GROUP_CONCAT\n [PR #1887](https://github.com/RDFLib/rdflib/pull/1887)\n* Move test utility modules into `test/utils/`\n [PR #1879](https://github.com/RDFLib/rdflib/pull/1879)\n* Move coveralls to GitHub Actions\n [PR #1877](https://github.com/RDFLib/rdflib/pull/1877)\n* test: run doctest on rst files in `docs/`\n [PR #1875](https://github.com/RDFLib/rdflib/pull/1875)\n* Add tests demonstrating forward-slash behaviors in Turtle, JSON-LD, and SPARQL\n [PR #1872](https://github.com/RDFLib/rdflib/pull/1872)\n* Literal datetime sub\n [PR #1870](https://github.com/RDFLib/rdflib/pull/1870)\n* resolve issue1868, add a method to expand qname to URI\n [PR #1869](https://github.com/RDFLib/rdflib/pull/1869)\n* build: add Taskfile with development tasks\n [PR #1867](https://github.com/RDFLib/rdflib/pull/1867)\n* Delete basically-unusable example\n [PR #1866](https://github.com/RDFLib/rdflib/pull/1866)\n* Move `test/translate_algebra` into `test/data`\n [PR #1864](https://github.com/RDFLib/rdflib/pull/1864)\n* test: move `test/variants` into `test/data`\n [PR #1862](https://github.com/RDFLib/rdflib/pull/1862)\n* test: convert `test/test_serializers/test_serializer.py` to pytest\n [PR #1861](https://github.com/RDFLib/rdflib/pull/1861)\n* Add remote file fetcher and N3 test suite\n [PR #1860](https://github.com/RDFLib/rdflib/pull/1860)\n* fix: two issues with the N3 serializer\n [PR #1858](https://github.com/RDFLib/rdflib/pull/1858)\n* Tell coveragepy to ignore type checking code and `...`\n [PR #1855](https://github.com/RDFLib/rdflib/pull/1855)\n* docs: switch to sphinx-autodoc-typehints\n [PR #1854](https://github.com/RDFLib/rdflib/pull/1854)\n* More type hints for `rdflib.graph` and related\n [PR #1853](https://github.com/RDFLib/rdflib/pull/1853)\n* Remove testing and debug code from rdflib\n [PR #1849](https://github.com/RDFLib/rdflib/pull/1849)\n* text: fix pytest config\n [PR #1846](https://github.com/RDFLib/rdflib/pull/1846)\n* fix: Raise ValueError for unsupported `bind_namespace` values\n [PR #1845](https://github.com/RDFLib/rdflib/pull/1845)\n* fix: namespace rebinding in 
`Memory`, `SimpleMemory` and `BerkelyDB` stores.\n [PR #1843](https://github.com/RDFLib/rdflib/pull/1843)\n* test re-org\n [PR #1838](https://github.com/RDFLib/rdflib/pull/1838)\n* fix: DefinedNamespace: fixed handling of control attributes.\n [PR #1825](https://github.com/RDFLib/rdflib/pull/1825)\n* docs: change term reference to italicized\n [PR #1823](https://github.com/RDFLib/rdflib/pull/1823)\n* Fix issue 1808\n [PR #1821](https://github.com/RDFLib/rdflib/pull/1821)\n* build: disable building of epub on readthedocs.org\n [PR #1820](https://github.com/RDFLib/rdflib/pull/1820)\n* docs: fix sphinx warnings\n [PR #1818](https://github.com/RDFLib/rdflib/pull/1818)\n* style: fix isort config\n [PR #1817](https://github.com/RDFLib/rdflib/pull/1817)\n* Migrate to pytest, relocate in subfolder\n [PR #1813](https://github.com/RDFLib/rdflib/pull/1813)\n* test: add a test for n3 serialization with formula\n [PR #1812](https://github.com/RDFLib/rdflib/pull/1812)\n* refactor: convert `test_n3.py` to pytest\n [PR #1811](https://github.com/RDFLib/rdflib/pull/1811)\n* test: Add tests for SPARQL parsing and serialization\n [PR #1810](https://github.com/RDFLib/rdflib/pull/1810)\n* fix: propagate exceptions from SPARQL TSV result parser\n [PR #1809](https://github.com/RDFLib/rdflib/pull/1809)\n* Migrate more tests to pytest\n [PR #1806](https://github.com/RDFLib/rdflib/pull/1806)\n* Convert `test_sparql/test_tsvresults.py` to pytest\n [PR #1805](https://github.com/RDFLib/rdflib/pull/1805)\n* Ignore pyparsing type hints\n [PR #1802](https://github.com/RDFLib/rdflib/pull/1802)\n* Add two xfails related to Example 2 from RDF 1.1 TriG specification\n [PR #1801](https://github.com/RDFLib/rdflib/pull/1801)\n* change pytest.skip to pytest.xfail\n [PR #1799](https://github.com/RDFLib/rdflib/pull/1799)\n* Black tests\n [PR #1798](https://github.com/RDFLib/rdflib/pull/1798)\n* Convert `test/test_util.py` to `pytest`\n [PR #1795](https://github.com/RDFLib/rdflib/pull/1795)\n* Fix handling of EXISTS inside BIND\n [PR #1794](https://github.com/RDFLib/rdflib/pull/1794)\n* update test_graph_generators to import from test.data\n [PR #1792](https://github.com/RDFLib/rdflib/pull/1792)\n* Test reorg (continued)\n [PR #1788](https://github.com/RDFLib/rdflib/pull/1788)\n* Edit readme\n [PR #1787](https://github.com/RDFLib/rdflib/pull/1787)\n* Add tests for computing qname on invalid URIs\n [PR #1783](https://github.com/RDFLib/rdflib/pull/1783)\n* Convert namespace tests to pytest\n [PR #1782](https://github.com/RDFLib/rdflib/pull/1782)\n* Update to black 22.3.0 because of issue with click\n [PR #1780](https://github.com/RDFLib/rdflib/pull/1780)\n* Isvaliduri optimization\n [PR #1779](https://github.com/RDFLib/rdflib/pull/1779)\n* Add tests for the parsing of literals for the turtle family of formats\n [PR #1778](https://github.com/RDFLib/rdflib/pull/1778)\n* Migrate some tests to pytest\n [PR #1774](https://github.com/RDFLib/rdflib/pull/1774)\n* Add ability to detect and mark ill-typed literals\n [PR #1773](https://github.com/RDFLib/rdflib/pull/1773)\n* Fix for issue1769\n [PR #1771](https://github.com/RDFLib/rdflib/pull/1771)\n* Remove unused compatability function\n [PR #1768](https://github.com/RDFLib/rdflib/pull/1768)\n* Add pull request guidelines and template.\n [PR #1767](https://github.com/RDFLib/rdflib/pull/1767)\n* Rename some tests\n [PR #1766](https://github.com/RDFLib/rdflib/pull/1766)\n* Add config for readthedocs.org\n [PR #1764](https://github.com/RDFLib/rdflib/pull/1764)\n* Fix black\n [PR 
#1763](https://github.com/RDFLib/rdflib/pull/1763)\n* Check if sys.stderr has isatty\n [PR #1761](https://github.com/RDFLib/rdflib/pull/1761)\n* Remove redundant type ignores and fix typing errors\n [PR #1759](https://github.com/RDFLib/rdflib/pull/1759)\n* Add documentation about type hints\n [PR #1751](https://github.com/RDFLib/rdflib/pull/1751)\n* Enable showing typehints in sphinx function/method signature and content\n [PR #1728](https://github.com/RDFLib/rdflib/pull/1728)\n* Update reference to black.toml\n [PR #1721](https://github.com/RDFLib/rdflib/pull/1721)\n* black formatting for rdflib/store.py\n [PR #1720](https://github.com/RDFLib/rdflib/pull/1720)\n* Use the correct warnings module\n [PR #1719](https://github.com/RDFLib/rdflib/pull/1719)\n* `DefinedNamespaceMeta.__getitem__` is slow\n [PR #1718](https://github.com/RDFLib/rdflib/pull/1718)\n* Introduce WGS84 DefinedNamespace\n [PR #1710](https://github.com/RDFLib/rdflib/pull/1710)\n* #1699 Document `Graph` behavior regarding context in constructor docstring\n [PR #1707](https://github.com/RDFLib/rdflib/pull/1707)\n* Generate JSON-LD context from a DefinedNamespace\n [PR #1706](https://github.com/RDFLib/rdflib/pull/1706)\n* Use the `property` built-in as a decorator\n [PR #1703](https://github.com/RDFLib/rdflib/pull/1703)\n* Apply IdentifiedNode to Graph iterators\n [PR #1697](https://github.com/RDFLib/rdflib/pull/1697)\n* Remove singly-used alias obviated by IdentifiedNode\n [PR #1695](https://github.com/RDFLib/rdflib/pull/1695)\n* Unify plugin loading\n [PR #1694](https://github.com/RDFLib/rdflib/pull/1694)\n* Rename black.toml to pyproject.toml\n [PR #1692](https://github.com/RDFLib/rdflib/pull/1692)\n* Improved tox config\n [PR #1691](https://github.com/RDFLib/rdflib/pull/1691)\n* Add isort\n [PR #1689](https://github.com/RDFLib/rdflib/pull/1689)\n* Fix black\n [PR #1688](https://github.com/RDFLib/rdflib/pull/1688)\n* Bind prefixes choices\n [PR #1686](https://github.com/RDFLib/rdflib/pull/1686)\n* Fix turtle serialization of `rdf:type` in subject, object\n [PR #1684](https://github.com/RDFLib/rdflib/pull/1684)\n* Add typing to rdflib.term\n [PR #1683](https://github.com/RDFLib/rdflib/pull/1683)\n* Add a class diagram for terms.\n [PR #1682](https://github.com/RDFLib/rdflib/pull/1682)\n* Add typing to rdflib.namespace\n [PR #1681](https://github.com/RDFLib/rdflib/pull/1681)\n* Add IdentifiedNode abstract intermediary class\n [PR #1680](https://github.com/RDFLib/rdflib/pull/1680)\n* Fix turtle serialization of PNames that contain brackets\n [PR #1678](https://github.com/RDFLib/rdflib/pull/1678)\n* Add a test case for a prefix followed by dot in Turtle format\n [PR #1677](https://github.com/RDFLib/rdflib/pull/1677)\n* Bump sphinx from 4.3.2 to 4.4.0\n [PR #1675](https://github.com/RDFLib/rdflib/pull/1675)\n* pre-commit and pre-commit-ci\n [PR #1672](https://github.com/RDFLib/rdflib/pull/1672)\n* Eliminate star import\n [PR #1667](https://github.com/RDFLib/rdflib/pull/1667)\n* Fixed the handling of escape sequences in the ntriples and nquads parsers\n [PR #1663](https://github.com/RDFLib/rdflib/pull/1663)\n* Remove narrow build detection\n [PR #1660](https://github.com/RDFLib/rdflib/pull/1660)\n* Make unregister_custom_function idempotent\n [PR #1659](https://github.com/RDFLib/rdflib/pull/1659)\n* Allow hext to participate in RDF format roundtripping\n [PR #1656](https://github.com/RDFLib/rdflib/pull/1656)\n* change tests to use urn:example\n [PR #1654](https://github.com/RDFLib/rdflib/pull/1654)\n* add nquads to recognised 
file extensions\n [PR #1653](https://github.com/RDFLib/rdflib/pull/1653)\n* Don't update `SUFFIX_FORMAT_MAP` in `plugins/parsers/jsonld.py`\n [PR #1652](https://github.com/RDFLib/rdflib/pull/1652)\n* Add Contributor Covenant Code of Conduct\n [PR #1651](https://github.com/RDFLib/rdflib/pull/1651)\n* add test of ConjunctiveGraph operators\n [PR #1647](https://github.com/RDFLib/rdflib/pull/1647)\n* added three tests to cover changes made by the pull request #1361\n [PR #1645](https://github.com/RDFLib/rdflib/pull/1645)\n* Fixed and refactored roundtrip, n3_suite and nt_suite tests\n [PR #1644](https://github.com/RDFLib/rdflib/pull/1644)\n* Allow parse of RDF from URL with all RDF Media Types\n [PR #1643](https://github.com/RDFLib/rdflib/pull/1643)\n* Black rdflib except for rdflib/namespace/_GEO.py\n [PR #1642](https://github.com/RDFLib/rdflib/pull/1642)\n* Remove `(TypeCheck|SubjectType|PredicateType|ObjectType)Error` and related\n [PR #1640](https://github.com/RDFLib/rdflib/pull/1640)\n* Rename `test/triple_store.py` so pytest picks it up\n [PR #1639](https://github.com/RDFLib/rdflib/pull/1639)\n* Convert translate_algebra tests to pytest\n [PR #1636](https://github.com/RDFLib/rdflib/pull/1636)\n* Add some type annotations to JSON-LD code\n [PR #1634](https://github.com/RDFLib/rdflib/pull/1634)\n* Add some typing for evaluation related functions in the SPARQL plugin.\n [PR #1633](https://github.com/RDFLib/rdflib/pull/1633)\n* Add classifier for python 3.10\n [PR #1630](https://github.com/RDFLib/rdflib/pull/1630)\n* Add tests for update method on `Graph(store=\"SPARQLStore\")`\n [PR #1629](https://github.com/RDFLib/rdflib/pull/1629)\n* Add __dir__ to DefinedNamespaceMeta.\n [PR #1626](https://github.com/RDFLib/rdflib/pull/1626)\n* Add `version` to docker-compose config for tests\n [PR #1625](https://github.com/RDFLib/rdflib/pull/1625)\n* Feature prepareupdate\n [PR #1624](https://github.com/RDFLib/rdflib/pull/1624)\n* Fix issue1032 error on sparqlstore update\n [PR #1623](https://github.com/RDFLib/rdflib/pull/1623)\n* Restore geosparql defined namespace\n [PR #1622](https://github.com/RDFLib/rdflib/pull/1622)\n* Fix typing errors in tests\n [PR #1621](https://github.com/RDFLib/rdflib/pull/1621)\n* Compile docs in GitHub Actions CI\n [PR #1620](https://github.com/RDFLib/rdflib/pull/1620)\n* Scale down CI checks\n [PR #1619](https://github.com/RDFLib/rdflib/pull/1619)\n* Revert error-raising change, enable Exception to be raised.\n [PR #1607](https://github.com/RDFLib/rdflib/pull/1607)\n* Fix for issue430\n [PR #1590](https://github.com/RDFLib/rdflib/pull/1590)\n* Fix for infixowl issues 1453 and 944\n [PR #1530](https://github.com/RDFLib/rdflib/pull/1530)\n* Fix `self.line` typos in call to BadSyntax.\n [PR #1529](https://github.com/RDFLib/rdflib/pull/1529)\n* Overdue restoration of functools total_order decorator.\n [PR #1528](https://github.com/RDFLib/rdflib/pull/1528)\n* Remove deprecated\n [PR #1527](https://github.com/RDFLib/rdflib/pull/1527)\n* Clean up documentation\n [PR #1525](https://github.com/RDFLib/rdflib/pull/1525)\n* TypeErrors from Results do not propagate through list creation\n [PR #1523](https://github.com/RDFLib/rdflib/pull/1523)\n* Add typing for parsers\n [PR #1522](https://github.com/RDFLib/rdflib/pull/1522)\n* Fix for issue #837. 
Graph.[subjects|objects|predicates] optionally return uniques.\n [PR #1520](https://github.com/RDFLib/rdflib/pull/1520)\n* Bump sphinx from 4.3.1 to 4.3.2\n [PR #1518](https://github.com/RDFLib/rdflib/pull/1518)\n* Start support for mypy --strict\n [PR #1515](https://github.com/RDFLib/rdflib/pull/1515)\n* Allow URLInputSource to get content-negotiation links from the Link headers\n [PR #1436](https://github.com/RDFLib/rdflib/pull/1436)\n* Fix issue #936 HAVING clause with variable comparison not correctly evaluated\n [PR #1093](https://github.com/RDFLib/rdflib/pull/1093)\n\n2021-12-20 RELEASE 6.1.1\n========================\nBetter testing and tidier code.\n\nThis is a semi-major release that:\n\n* add support for Python 3.10\n* updates the test suite to pytest (from nose) \n* tidies up a lot of continuous integration\n* gets more tests tested, not skipped\n* implements lots of mypy tests\n* updates several parsers and serializers\n * supports the new HexTuples format!\n* many bug fixes\n\nThis release contains many, many hours of updates from Iwan Aucamp, so thank you Iwan!\n\nPRs merged since last release: \n\n* Update the guidelines for writing tests\n [PR #1517](https://github.com/RDFLib/rdflib/pull/1517)\n* Updated tox config to run mypy in default environment\n [PR #1450](https://github.com/RDFLib/rdflib/pull/1450)\n* Add type annotations to constructor parameters in Literal\n [PR #1498](https://github.com/RDFLib/rdflib/pull/1498)\n* Increase fuseki start timeout from 15 to 30 seconds\n [PR #1516](https://github.com/RDFLib/rdflib/pull/1516)\n* Forbid truthy values for lang when initializing Literal\n [PR #1494](https://github.com/RDFLib/rdflib/pull/1494)\n* Add Py 3.10 to testing envs\n [PR #1473](https://github.com/RDFLib/rdflib/pull/1473)\n* Add mypy to GitHub actions validate workflow\n [PR #1512](https://github.com/RDFLib/rdflib/pull/1512)\n* Improve error messages from with-fuseki.sh\n [PR #1510](https://github.com/RDFLib/rdflib/pull/1510)\n* Fix pipeline triggers\n [PR #1511](https://github.com/RDFLib/rdflib/pull/1511)\n* Change python version used for mypy to 3.7\n [PR #1514](https://github.com/RDFLib/rdflib/pull/1514)\n* Quench nt test userwarn\n [PR #1500](https://github.com/RDFLib/rdflib/pull/1500)\n* Raise a more specific Exception when lang isn't valid\n [PR #1497](https://github.com/RDFLib/rdflib/pull/1497)\n* Fix for issue893\n [PR #1504](https://github.com/RDFLib/rdflib/pull/1504)\n* Fix for issue 893\n [PR #1501](https://github.com/RDFLib/rdflib/pull/1501)\n* Re-make of nicholascar's “Concise Bounded Description” PR #968 ...\n [PR #1502](https://github.com/RDFLib/rdflib/pull/1502)\n* Remove deprecated Statement class\n [PR #1496](https://github.com/RDFLib/rdflib/pull/1496)\n* Fix BNode.skolemize() returning a URIRef instead of an RDFLibGenid.\n [PR #1493](https://github.com/RDFLib/rdflib/pull/1493)\n* demo 980 resolution\n [PR #1495](https://github.com/RDFLib/rdflib/pull/1495)\n* Hextuples Serializer\n [PR #1489](https://github.com/RDFLib/rdflib/pull/1489)\n* Add bindings for rdflib namespaces. 
Import DCAM.\n [PR #1491](https://github.com/RDFLib/rdflib/pull/1491)\n* fix for issue 1484 raised and solved by Graham Klyne:\n [PR #1490](https://github.com/RDFLib/rdflib/pull/1490)\n* SDO HTTPS and DN creator script\n [PR #1485](https://github.com/RDFLib/rdflib/pull/1485)\n* Fix typing of create_input_source\n [PR #1487](https://github.com/RDFLib/rdflib/pull/1487)\n* guess_format() cater for JSON-LD files ending .json-ld\n [PR #1486](https://github.com/RDFLib/rdflib/pull/1486)\n* Add GitHub actions workflow for validation\n [PR #1461](https://github.com/RDFLib/rdflib/pull/1461)\n* Improved script for running with fuseki\n [PR #1476](https://github.com/RDFLib/rdflib/pull/1476)\n* RFC: Add PythonInputSource to create py-based graphs\n [PR #1463](https://github.com/RDFLib/rdflib/pull/1463)\n* Adapt for pytest and add back import of os in rdflib/parser.py\n [PR #1480](https://github.com/RDFLib/rdflib/pull/1480)\n* Make the test pass on windows\n [PR #1478](https://github.com/RDFLib/rdflib/pull/1478)\n* Add type hints\n [PR #1449](https://github.com/RDFLib/rdflib/pull/1449)\n* Fix shield for CI status\n [PR #1474](https://github.com/RDFLib/rdflib/pull/1474)\n* Fix test files with bare code\n [PR #1481](https://github.com/RDFLib/rdflib/pull/1481)\n* Remove some remaining nosetest import\n [PR #1482](https://github.com/RDFLib/rdflib/pull/1482)\n* Fix JSON-LD data import adds trailing slashes to IRIs (#1443)\n [PR #1456](https://github.com/RDFLib/rdflib/pull/1456)\n* Iwana 20211114 t1305 pytestx\n [PR #1460](https://github.com/RDFLib/rdflib/pull/1460)\n* Migrate from nosetest to pytest\n [PR #1452](https://github.com/RDFLib/rdflib/pull/1452)\n* Add import of os\n [PR #1464](https://github.com/RDFLib/rdflib/pull/1464)\n* replace pkg_resources with importlib.metadata\n [PR #1445](https://github.com/RDFLib/rdflib/pull/1445)\n* A new Turtle serializer\n [PR #1425](https://github.com/RDFLib/rdflib/pull/1425)\n* Fix typos discovered by codespell\n [PR #1446](https://github.com/RDFLib/rdflib/pull/1446)\n* Use assertTrue instead of assert_ for python 3.11 compatibility.\n [PR #1448](https://github.com/RDFLib/rdflib/pull/1448)\n* Undefined name: tmppath --> self.tmppath\n [PR #1438](https://github.com/RDFLib/rdflib/pull/1438)\n* Fix Graph.parse URL handling on windows\n [PR #1441](https://github.com/RDFLib/rdflib/pull/1441)\n* Make Store.namespaces an empty generator\n [PR #1432](https://github.com/RDFLib/rdflib/pull/1432)\n* Export DCMITYPE\n [PR #1433](https://github.com/RDFLib/rdflib/pull/1433)\n\n2021-12-20 RELEASE 6.1.0\n========================\nA slightly messed-up release of what is now 6.1.1. 
Do not use!

2021-10-10 RELEASE 6.0.2
========================
Minor release to add OWL.rational & OWL.real, which are needed to allow the
OWL-RL package to use only rdflib namespaces, not its own versions.

* Add owl:rational and owl:real to match standard.
  [PR #1428](https://github.com/RDFLib/rdflib/pull/1428)

A few other small things have been added, see the following merged PRs list:

* rename arg LOVE to ns in rdfpipe
  [PR #1426](https://github.com/RDFLib/rdflib/pull/1426)
* Remove Tox reference to Python 3.6
  [PR #1422](https://github.com/RDFLib/rdflib/pull/1422)
* Add Brick DefinedNamespace
  [PR #1419](https://github.com/RDFLib/rdflib/pull/1419)
* Use setName on TokenConverter to set the name property
  [PR #1409](https://github.com/RDFLib/rdflib/pull/1409)
* Add test for adding JSON-LD to guess_format()
  [PR #1408](https://github.com/RDFLib/rdflib/pull/1408)
* Fix mypy type errors and add mypy to .drone.yml
  [PR #1407](https://github.com/RDFLib/rdflib/pull/1407)


2021-09-17 RELEASE 6.0.1
========================
Minor release to fix a few small errors, in particular with the JSON-LD parsing &
serializing integration from rdflib-jsonld. Also, a few other niceties, such as
allowing graph `add()`, `remove()` etc. to be chainable.

* Add test for adding JSON-LD to guess_format()
  [PR #1408](https://github.com/RDFLib/rdflib/pull/1408)
* Add JSON-LD to guess_format()
  [PR #1403](https://github.com/RDFLib/rdflib/pull/1403)
* add dateTimeStamp, fundamental & constraining facets, 7-prop data model
  [PR #1399](https://github.com/RDFLib/rdflib/pull/1399)
* fix: remove log message on import
  [PR #1398](https://github.com/RDFLib/rdflib/pull/1398)
* Make graph and other methods chainable
  [PR #1394](https://github.com/RDFLib/rdflib/pull/1394)
* fix: use correct name for json-ld
  [PR #1388](https://github.com/RDFLib/rdflib/pull/1388)
* Allowing Container Membership Properties in RDF namespace (#873)
  [PR #1386](https://github.com/RDFLib/rdflib/pull/1386)
* Update intro_to_sparql.rst
  [PR #1384](https://github.com/RDFLib/rdflib/pull/1384)
* Iterate over dataset return quads
  [PR #1382](https://github.com/RDFLib/rdflib/pull/1382)

2021-07-20 RELEASE 6.0.0
========================
6.0.0 is a major stable release that drops support for Python 2 and Python 3 < 3.7.
Type hinting is now present in much of the toolkit as a result.

It includes the formerly independent JSON-LD parser/serializer, improvements to
Namespaces that allow for IDE namespace prompting, simplified use of
`g.serialize()` (Turtle by default, no need to `decode()`; see the short example
below) and many other updates to documentation, store backends and so on.

Performance of the in-memory store has also improved, thanks to Python 3.6+
dictionary improvements.

There are numerous supplementary improvements to the toolkit too, such as:

* inclusion of Docker files for easier CI/CD
* black config files for standardised code formatting
* improved testing with mock SPARQL stores, rather than a reliance on DBPedia etc.
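To make the `g.serialize()` change mentioned above concrete, here is a minimal sketch (the example triple and URIs are invented; the behaviour shown, Turtle as the default format and a `str` return value with no `.decode()` call, is the change described in these notes):

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF

g = Graph()
g.add((URIRef("http://example.org/alice"), FOAF.name, Literal("Alice")))

turtle_text = g.serialize()            # Turtle by default, returned as a str
xml_text = g.serialize(format="xml")   # other formats can still be selected
print(turtle_text)
```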
_**All PRs merged since 5.0.0:**_

* Fixes 1190 - pin major version of pyparsing
  [PR #1366](https://github.com/RDFLib/rdflib/pull/1366)
* Add __init__ for shared jsonld module
  [PR #1365](https://github.com/RDFLib/rdflib/pull/1365)
* Update README with chat info
  [PR #1363](https://github.com/RDFLib/rdflib/pull/1363)
* add xsd dayTimeDuration and yearMonthDuration
  [PR #1364](https://github.com/RDFLib/rdflib/pull/1364)
* Updated film.py
  [PR #1359](https://github.com/RDFLib/rdflib/pull/1359)
* Migration from ClosedNamespace to DeclaredNamespace
  [PR #1074](https://github.com/RDFLib/rdflib/pull/1074)
* Add @expectedFailure unit tests for #1294 and type annotations for compare.py
  [PR #1346](https://github.com/RDFLib/rdflib/pull/1346)
* JSON-LD Integration
  [PR #1354](https://github.com/RDFLib/rdflib/pull/1354)
* ENH: Make ClosedNamespace extend Namespace
  [PR #1213](https://github.com/RDFLib/rdflib/pull/1213)
* Add unit test for #919 and more type hints for sparqlconnector and sparqlstore
  [PR #1348](https://github.com/RDFLib/rdflib/pull/1348)
* fix #876 Updated term.py to add xsd:normalizedString and xsd:token support for Literals
  [PR #1102](https://github.com/RDFLib/rdflib/pull/1102)
* Dev stack update
  [PR #1355](https://github.com/RDFLib/rdflib/pull/1355)
* Add make coverage instructions to README
  [PR #1353](https://github.com/RDFLib/rdflib/pull/1353)
* Improve running tests locally
  [PR #1352](https://github.com/RDFLib/rdflib/pull/1352)
* support day, month and year function for date
  [PR #1154](https://github.com/RDFLib/rdflib/pull/1154)
* Prevent `from_n3` from unescaping `\xhh`
  [PR #1343](https://github.com/RDFLib/rdflib/pull/1343)
* Complete clean up of docs for 6.0.0
  [PR #1296](https://github.com/RDFLib/rdflib/pull/1296)
* pathname2url removal
  [PR #1288](https://github.com/RDFLib/rdflib/pull/1288)
* Replace Sleepycat with BerkeleyDB
  [PR #1347](https://github.com/RDFLib/rdflib/pull/1347)
* Replace use of DBPedia with the new SimpleHTTPMock
  [PR #1345](https://github.com/RDFLib/rdflib/pull/1345)
* Update graph operator overloading for subclasses
  [PR #1349](https://github.com/RDFLib/rdflib/pull/1349)
* Speedup Literal.__hash__ and Literal.__eq__ by accessing directly _da…
  [PR #1321](https://github.com/RDFLib/rdflib/pull/1321)
* Implemented function translateAlgebra.
This functions takes a SPARQL …\n [PR #1322](https://github.com/RDFLib/rdflib/pull/1322)\n* attempt at adding coveralls support to drone runs\n [PR #1337](https://github.com/RDFLib/rdflib/pull/1337)\n* Fix SPARQL update parsing to handle arbitrary amounts of triples in inserts\n [PR #1340](https://github.com/RDFLib/rdflib/pull/1340)\n* Add pathlib.PurePath support for Graph.serialize and Graph.parse\n [PR #1309](https://github.com/RDFLib/rdflib/pull/1309)\n* dataset examples file\n [PR #1289](https://github.com/RDFLib/rdflib/pull/1289)\n* Add handling for 308 (Permanent Redirect)\n [PR #1342](https://github.com/RDFLib/rdflib/pull/1342)\n* Speedup of __add_triple_context\n [PR #1320](https://github.com/RDFLib/rdflib/pull/1320)\n* Fix prov ns\n [PR #1318](https://github.com/RDFLib/rdflib/pull/1318)\n* Speedup __ctx_to_str.\n [PR #1319](https://github.com/RDFLib/rdflib/pull/1319)\n* Speedup decodeUnicodeEscape by avoiding useless string replace.\n [PR #1324](https://github.com/RDFLib/rdflib/pull/1324)\n* Fix errors reported by mypy\n [PR #1330](https://github.com/RDFLib/rdflib/pull/1330)\n* Require setuptools, rdflib/plugins/sparql/__init__.py and rdflib/plugin.py import pkg_resources\n [PR #1339](https://github.com/RDFLib/rdflib/pull/1339)\n* Fix tox config\n [PR #1313](https://github.com/RDFLib/rdflib/pull/1313)\n* Fix formatting of xsd:decimal\n [PR #1335](https://github.com/RDFLib/rdflib/pull/1335)\n* Add tests for issue #1299\n [PR #1328](https://github.com/RDFLib/rdflib/pull/1328)\n* Add special handling for gYear and gYearMonth\n [PR #1315](https://github.com/RDFLib/rdflib/pull/1315)\n* Replace incomplete example in intro_to_sparql.rst\n [PR #1331](https://github.com/RDFLib/rdflib/pull/1331)\n* Added unit test for issue #977.\n [PR #1112](https://github.com/RDFLib/rdflib/pull/1112)\n* Don't sort variables in TXTResultSerializer\n [PR #1310](https://github.com/RDFLib/rdflib/pull/1310)\n* handle encoding of base64Binary Literals\n [PR #1258](https://github.com/RDFLib/rdflib/pull/1258)\n* Add tests for Graph.transitive_{subjects,objects}\n [PR #1307](https://github.com/RDFLib/rdflib/pull/1307)\n* Changed to support passing fully qualified queries through the graph …\n [PR #1253](https://github.com/RDFLib/rdflib/pull/1253)\n* Upgrade to GitHub-native Dependabot\n [PR #1298](https://github.com/RDFLib/rdflib/pull/1298)\n* Fix transitive_objects/subjects docstrings and signatures\n [PR #1305](https://github.com/RDFLib/rdflib/pull/1305)\n* Fix typo in ClosedNamespace doc string\n [PR #1293](https://github.com/RDFLib/rdflib/pull/1293)\n* Allow parentheses in uri\n [PR #1280](https://github.com/RDFLib/rdflib/pull/1280)\n* Add notes about how to install from git\n [PR #1286](https://github.com/RDFLib/rdflib/pull/1286)\n* Feature/forward version to 6.0.0-alpha\n [PR #1285](https://github.com/RDFLib/rdflib/pull/1285)\n* speedup notation3/turtle parser\n [PR #1272](https://github.com/RDFLib/rdflib/pull/1272)\n* Correct behaviour of compute_qname for URNs\n [PR #1274](https://github.com/RDFLib/rdflib/pull/1274)\n* Speedup __add_triple_context.\n [PR #1271](https://github.com/RDFLib/rdflib/pull/1271)\n* Feature/coverage configuration\n [PR #1267](https://github.com/RDFLib/rdflib/pull/1267)\n* optimize sparql.Bindings\n [PR #1192](https://github.com/RDFLib/rdflib/pull/1192)\n* issue_771_add_key_error_if_spaces\n [PR #1070](https://github.com/RDFLib/rdflib/pull/1070)\n* Typo fix\n [PR #1254](https://github.com/RDFLib/rdflib/pull/1254)\n* Adding Namespace.__contains__()\n [PR 
#1237](https://github.com/RDFLib/rdflib/pull/1237)\n* Add a Drone config file.\n [PR #1247](https://github.com/RDFLib/rdflib/pull/1247)\n* Add sentence on names not valid as Python IDs.\n [PR #1234](https://github.com/RDFLib/rdflib/pull/1234)\n* Add trig mimetype\n [PR #1238](https://github.com/RDFLib/rdflib/pull/1238)\n* Move flake8 config\n [PR #1239](https://github.com/RDFLib/rdflib/pull/1239)\n* Update SPARQL tests since the DBpedia was updated\n [PR #1240](https://github.com/RDFLib/rdflib/pull/1240)\n* fix foaf ClosedNamespace\n [PR #1220](https://github.com/RDFLib/rdflib/pull/1220)\n* add GeoSPARQL ClosedNamespace\n [PR #1221](https://github.com/RDFLib/rdflib/pull/1221)\n* docs: fix simple typo, -> yield\n [PR #1223](https://github.com/RDFLib/rdflib/pull/1223)\n* do not use current time in sparql TIMEZONE\n [PR #1193](https://github.com/RDFLib/rdflib/pull/1193)\n* Reset graph on exit from context\n [PR #1206](https://github.com/RDFLib/rdflib/pull/1206)\n* Fix usage of default-graph for POST and introduce POST_FORM\n [PR #1185](https://github.com/RDFLib/rdflib/pull/1185)\n* Changes to graph.serialize()\n [PR #1183](https://github.com/RDFLib/rdflib/pull/1183)\n* rd2dot Escape HTML in node label and URI text\n [PR #1209](https://github.com/RDFLib/rdflib/pull/1209)\n* tests: retry on network error (CI)\n [PR #1203](https://github.com/RDFLib/rdflib/pull/1203)\n* Add documentation and type hints for rdflib.query.Result and rdflib.graph.Graph\n [PR #1211](https://github.com/RDFLib/rdflib/pull/1211)\n* fix typo\n [PR #1218](https://github.com/RDFLib/rdflib/pull/1218)\n* Add architecture ppc64le to travis build\n [PR #1212](https://github.com/RDFLib/rdflib/pull/1212)\n* small cleanups\n [PR #1191](https://github.com/RDFLib/rdflib/pull/1191)\n* Remove the usage of assert in the SPARQLConnector\n [PR #1186](https://github.com/RDFLib/rdflib/pull/1186)\n* Remove requests\n [PR #1175](https://github.com/RDFLib/rdflib/pull/1175)\n* Support parsing paths specified with pathlib\n [PR #1180](https://github.com/RDFLib/rdflib/pull/1180)\n* URI Validation Performance Improvements\n [PR #1177](https://github.com/RDFLib/rdflib/pull/1177)\n* Fix serialize with multiple disks on windows\n [PR #1172](https://github.com/RDFLib/rdflib/pull/1172)\n* Fix for issue #629 - Arithmetic Operations of DateTime in SPARQL\n [PR #1061](https://github.com/RDFLib/rdflib/pull/1061)\n* Fixes #1043.\n [PR #1054](https://github.com/RDFLib/rdflib/pull/1054)\n* N3 parser: do not create formulas if the Turtle mode is activated\n [PR #1142](https://github.com/RDFLib/rdflib/pull/1142)\n* Move to using graph.parse() rather than deprecated graph.load()\n [PR #1167](https://github.com/RDFLib/rdflib/pull/1167)\n* Small improvement to serialize docs\n [PR #1162](https://github.com/RDFLib/rdflib/pull/1162)\n* Issue 1160 missing url fragment\n [PR #1163](https://github.com/RDFLib/rdflib/pull/1163)\n* remove import side-effects\n [PR #1156](https://github.com/RDFLib/rdflib/pull/1156)\n* Docs update\n [PR #1161](https://github.com/RDFLib/rdflib/pull/1161)\n* replace cgi by html, fixes issue #1110\n [PR #1152](https://github.com/RDFLib/rdflib/pull/1152)\n* Deprecate some more Graph API surface\n [PR #1151](https://github.com/RDFLib/rdflib/pull/1151)\n* Add deprecation warning on graph.load()\n [PR #1150](https://github.com/RDFLib/rdflib/pull/1150)\n* Remove all remnants of Python2 compatibility\n [PR #1149](https://github.com/RDFLib/rdflib/pull/1149)\n* make csv2rdf work in py3\n [PR #1117](https://github.com/RDFLib/rdflib/pull/1117)\n* Add a 
__dir__ attribute to a closed namespace\n [PR #1134](https://github.com/RDFLib/rdflib/pull/1134)\n* improved Graph().parse()\n [PR #1140](https://github.com/RDFLib/rdflib/pull/1140)\n* Discussion around new dict-based store implementation\n [PR #1133](https://github.com/RDFLib/rdflib/pull/1133)\n* fix 913\n [PR #1139](https://github.com/RDFLib/rdflib/pull/1139)\n* Make parsers CharacterStream aware\n [PR #1145](https://github.com/RDFLib/rdflib/pull/1145)\n* More Black formatting changes\n [PR #1146](https://github.com/RDFLib/rdflib/pull/1146)\n* Fix comment\n [PR #1130](https://github.com/RDFLib/rdflib/pull/1130)\n* Updating namespace.py to solve issue #801\n [PR #1044](https://github.com/RDFLib/rdflib/pull/1044)\n* Fix namespaces for SOSA and SSN. Fix #1126.\n [PR #1128](https://github.com/RDFLib/rdflib/pull/1128)\n* Create pull request template\n [PR #1114](https://github.com/RDFLib/rdflib/pull/1114)\n* BNode context dicts for NT and N-Quads parsers\n [PR #1108](https://github.com/RDFLib/rdflib/pull/1108)\n* Allow distinct blank node contexts from one NTriples parser to the next (#980)\n [PR #1107](https://github.com/RDFLib/rdflib/pull/1107)\n* Autodetect parse() format\n [PR #1046](https://github.com/RDFLib/rdflib/pull/1046)\n* fix #910: Updated evaluate.py so that union includes results of both branches, even when identical.\n [PR #1057](https://github.com/RDFLib/rdflib/pull/1057)\n* Removal of six & styling\n [PR #1051](https://github.com/RDFLib/rdflib/pull/1051)\n* Add SERVICE clause to documentation\n [PR #1041](https://github.com/RDFLib/rdflib/pull/1041)\n* add test with ubuntu 20.04\n [PR #1038](https://github.com/RDFLib/rdflib/pull/1038)\n* Improved logo\n [PR #1037](https://github.com/RDFLib/rdflib/pull/1037)\n* Add requests to the tests_requirements\n [PR #1036](https://github.com/RDFLib/rdflib/pull/1036)\n* Set update endpoint similar to query endpoint for sparqlstore if only one is given\n [PR #1033](https://github.com/RDFLib/rdflib/pull/1033)\n* fix shebang typo\n [PR #1034](https://github.com/RDFLib/rdflib/pull/1034)\n* Add the content type 'application/sparql-update' when preparing a SPARQL update request\n [PR #1022](https://github.com/RDFLib/rdflib/pull/1022)\n* Fix typo in README.md\n [PR #1030](https://github.com/RDFLib/rdflib/pull/1030)\n* add Python 3.8\n [PR #1023](https://github.com/RDFLib/rdflib/pull/1023)\n* Fix n3 parser exponent syntax of floats with leading dot.\n [PR #1012](https://github.com/RDFLib/rdflib/pull/1012)\n* DOC: Use sphinxcontrib-apidoc and various cleanups\n [PR #1010](https://github.com/RDFLib/rdflib/pull/1010)\n* FIX: Change is comparison to == for tuple\n [PR #1009](https://github.com/RDFLib/rdflib/pull/1009)\n* Update copyright year in docs conf.py\n [PR #1006](https://github.com/RDFLib/rdflib/pull/1006)\n \n \n2020-04-18 RELEASE 5.0.0\n========================\n5.0.0 is a major stable release and is the last release to support Python 2 & 3.4. 5.0.0 is mostly backwards-\ncompatible with 4.2.2 and is intended for long-term, bug fix only support.\n\n5.0.0 comes two weeks after the 5.0.0RC1 and includes a small number of additional bug fixes. 
Note that \nrdflib-jsonld has released a version 0.5.0 to be compatible with rdflib 5.0.0.\n\n_**All PRs merged since 5.0.0RC1:**_\n\n### General Bugs Fixed:\n * Fix n3 parser exponent syntax of floats with leading dot.\n [PR #1012](https://github.com/RDFLib/rdflib/pull/1012)\n * FIX: Change is comparison to == for tuple\n [PR #1009](https://github.com/RDFLib/rdflib/pull/1009)\n * fix #913 : Added _parseBoolean function to enforce correct Lexical-to-value mapping\n [PR #995](https://github.com/RDFLib/rdflib/pull/995) \n \n### Enhanced Features:\n * Issue 1003\n [PR #1005](https://github.com/RDFLib/rdflib/pull/1005)\n \n### SPARQL Fixes:\n * CONSTRUCT resolve with initBindings fixes #1001\n [PR #1002](https://github.com/RDFLib/rdflib/pull/1002)\n\n### Documentation Fixes:\n * DOC: Use sphinxcontrib-apidoc and various cleanups\n [PR #1010](https://github.com/RDFLib/rdflib/pull/1010)\n * Update copyright year in docs conf.py\n [PR #1006](https://github.com/RDFLib/rdflib/pull/1006)\n * slightly improved styling, small index text changes\n [PR #1004](https://github.com/RDFLib/rdflib/pull/1004)\n \n\n2020-04-04 RELEASE 5.0.0RC1\n===========================\n\nAfter more than three years, RDFLib 5.0.0rc1 is finally released.\n\nThis is a rollup of all of the bugfixes merged, and features introduced to RDFLib since \nRDFLib 4.2.2 was released in Jan 2017.\n\nWhile all effort was taken to minimize breaking changes in this release, there are some.\n\nPlease see the upgrade4to5 document in the docs directory for more information on some specific differences from 4.2.2 to 5.0.0.\n\n_**All issues closed and PRs merged since 4.2.2:**_\n\n### General Bugs Fixed:\n * Pr 451 redux\n [PR #978](https://github.com/RDFLib/rdflib/pull/978)\n * NTriples fails to parse URIs with only a scheme\n [ISSUE #920](https://github.com/RDFLib/rdflib/issues/920), [PR #974](https://github.com/RDFLib/rdflib/pull/974)\n * Cannot clone on windows - Remove colons from test result files.\n [ISSUE #901](https://github.com/RDFLib/rdflib/issues/901), [PR #971](https://github.com/RDFLib/rdflib/pull/971) \n * Add requirement for requests to setup.py\n [PR #969](https://github.com/RDFLib/rdflib/pull/969)\n * fixed URIRef including native unicode characters\n [PR #961](https://github.com/RDFLib/rdflib/pull/961)\n * DCTERMS.format not working\n [ISSUE #932](https://github.com/RDFLib/rdflib/issues/932)\n * infixowl.manchesterSyntax do not encode strings\n [PR #906](https://github.com/RDFLib/rdflib/pull/906)\n * Fix blank node label to not contain '_:' during parsing\n [PR #886](https://github.com/RDFLib/rdflib/pull/886)\n * rename new SPARQLWrapper to SPARQLConnector\n [PR #872](https://github.com/RDFLib/rdflib/pull/872)\n * Fix #859. 
Unquote and Uriquote Literal Datatype.\n [PR #860](https://github.com/RDFLib/rdflib/pull/860)\n * Parsing nquads\n [ISSUE #786](https://github.com/RDFLib/rdflib/issues/786)\n * ntriples spec allows for upper-cased lang tag, fixes #782\n [PR #784](https://github.com/RDFLib/rdflib/pull/784), [ISSUE #782](https://github.com/RDFLib/rdflib/issues/782)\n * Adds escaped single quote to literal parser\n [PR #736](https://github.com/RDFLib/rdflib/pull/736)\n * N3 parse error on single quote within single quotes \n [ISSUE #732](https://github.com/RDFLib/rdflib/issues/732)\n * Fixed #725 \n [PR #730](https://github.com/RDFLib/rdflib/pull/730)\n * test for issue #725: canonicalization collapses BNodes\n [PR #726](https://github.com/RDFLib/rdflib/pull/726)\n * RGDA1 graph canonicalization sometimes still collapses distinct BNodes\n [ISSUE #725](https://github.com/RDFLib/rdflib/issues/725)\n * Accept header should use a q parameter\n [PR #720](https://github.com/RDFLib/rdflib/pull/720)\n * Added test for Issue #682 and fixed.\n [PR #718](https://github.com/RDFLib/rdflib/pull/718)\n * Incompatibility with Python3: unichr\n [ISSUE #687](https://github.com/RDFLib/rdflib/issues/687)\n * namespace.py include colon in ALLOWED_NAME_CHARS\n [PR #663](https://github.com/RDFLib/rdflib/pull/663)\n * namespace.py fix compute_qname missing namespaces\n [PR #649](https://github.com/RDFLib/rdflib/pull/649)\n * RDFa parsing Error! `__init__()` got an unexpected keyword argument 'encoding'\n [ISSUE #639](https://github.com/RDFLib/rdflib/issues/639)\n * Bugfix: `term.Literal.__add__`\n [PR #451](https://github.com/RDFLib/rdflib/pull/451)\n * fixup of #443\n [PR #445](https://github.com/RDFLib/rdflib/pull/445)\n * Microdata to rdf second edition bak\n [PR #444](https://github.com/RDFLib/rdflib/pull/444)\n\n### Enhanced Features:\n * Register additional serializer plugins for SPARQL mime types.\n [PR #987](https://github.com/RDFLib/rdflib/pull/987)\n * Pr 388 redux\n [PR #979](https://github.com/RDFLib/rdflib/pull/979)\n * Allows RDF terms introduced by JSON-LD 1.1\n [PR #970](https://github.com/RDFLib/rdflib/pull/970)\n * make SPARQLConnector work with DBpedia\n [PR #941](https://github.com/RDFLib/rdflib/pull/941)\n * ClosedNamespace returns right exception for way of access\n [PR #866](https://github.com/RDFLib/rdflib/pull/866)\n * Not adding all namespaces for n3 serializer\n [PR #832](https://github.com/RDFLib/rdflib/pull/832)\n * Adds basic support of xsd:duration\n [PR #808](https://github.com/RDFLib/rdflib/pull/808)\n * Add possibility to set authority and basepath to skolemize graph\n [PR #807](https://github.com/RDFLib/rdflib/pull/807)\n * Change notation3 list realization to non-recursive function.\n [PR #805](https://github.com/RDFLib/rdflib/pull/805)\n * Suppress warning for not using custom encoding.\n [PR #800](https://github.com/RDFLib/rdflib/pull/800)\n * Add support to parsing large xml inputs\n [ISSUE #749](https://github.com/RDFLib/rdflib/issues/749)\n [PR #750](https://github.com/RDFLib/rdflib/pull/750)\n * improve hash efficiency by directly using str/unicode hash\n [PR #746](https://github.com/RDFLib/rdflib/pull/746)\n * Added the csvw prefix to the RDFa initial context.\n [PR #594](https://github.com/RDFLib/rdflib/pull/594)\n * syncing changes from pyMicrodata\n [PR #587](https://github.com/RDFLib/rdflib/pull/587)\n * Microdata parser: updated the parser to the latest version of the microdata->rdf note (published in December 2014)\n [PR #443](https://github.com/RDFLib/rdflib/pull/443)\n * 
Literal.toPython() support for xsd:hexBinary\n [PR #388](https://github.com/RDFLib/rdflib/pull/388)\n \n### SPARQL Fixes:\n * Total order patch patch\n [PR #862](https://github.com/RDFLib/rdflib/pull/862)\n * use <<= instead of deprecated <<\n [PR #861](https://github.com/RDFLib/rdflib/pull/861)\n * Fix #847\n [PR #856](https://github.com/RDFLib/rdflib/pull/856)\n * RDF Literal `\"1\"^^xsd:boolean` should _not_ coerce to True\n [ISSUE #847](https://github.com/RDFLib/rdflib/issues/847)\n * Makes NOW() return an UTC date\n [PR #844](https://github.com/RDFLib/rdflib/pull/844)\n * NOW() SPARQL should return an xsd:dateTime with a timezone\n [ISSUE #843](https://github.com/RDFLib/rdflib/issues/843)\n * fix property paths bug: issue #715\n [PR #822](https://github.com/RDFLib/rdflib/pull/822), [ISSUE #715](https://github.com/RDFLib/rdflib/issues/715)\n * MulPath: correct behaviour of n3()\n [PR #820](https://github.com/RDFLib/rdflib/pull/820)\n * Literal total ordering\n [PR #793](https://github.com/RDFLib/rdflib/pull/793)\n * Remove SPARQLWrapper dependency\n [PR #744](https://github.com/RDFLib/rdflib/pull/744)\n * made UNION faster by not preventing duplicates\n [PR #741](https://github.com/RDFLib/rdflib/pull/741)\n * added a hook to add custom functions to SPARQL\n [PR #723](https://github.com/RDFLib/rdflib/pull/723)\n * Issue714\n [PR #717](https://github.com/RDFLib/rdflib/pull/717)\n * Use <<= instead of deprecated << in SPARQL parser\n [PR #417](https://github.com/RDFLib/rdflib/pull/417)\n * Custom FILTER function for SPARQL engine\n [ISSUE #274](https://github.com/RDFLib/rdflib/issues/274)\n \n### Code Quality and Cleanups:\n * a slightly opinionated autopep8 run\n [PR #870](https://github.com/RDFLib/rdflib/pull/870)\n * remove rdfa and microdata parsers from core RDFLib\n [PR #828](https://github.com/RDFLib/rdflib/pull/828)\n * ClosedNamespace KeyError -> AttributeError\n [PR #827](https://github.com/RDFLib/rdflib/pull/827)\n * typo in rdflib/plugins/sparql/update.py\n [ISSUE #760](https://github.com/RDFLib/rdflib/issues/760)\n * Fix logging in interactive mode\n [PR #731](https://github.com/RDFLib/rdflib/pull/731)\n * make namespace module flake8-compliant, change exceptions in that mod…\n [PR #711](https://github.com/RDFLib/rdflib/pull/711)\n * delete ez_setup.py? 
\n [ISSUE #669](https://github.com/RDFLib/rdflib/issues/669)\n * code duplication issue between rdflib and pymicrodata\n [ISSUE #582](https://github.com/RDFLib/rdflib/issues/582)\n * Transition from 2to3 to use of six.py to be merged in 5.0.0-dev\n [PR #519](https://github.com/RDFLib/rdflib/pull/519)\n * sparqlstore drop deprecated methods and args\n [PR #516](https://github.com/RDFLib/rdflib/pull/516)\n * python3 code seems shockingly inefficient\n [ISSUE #440](https://github.com/RDFLib/rdflib/issues/440)\n * removed md5_term_hash, fixes #240\n [PR #439](https://github.com/RDFLib/rdflib/pull/439), [ISSUE #240](https://github.com/RDFLib/rdflib/issues/240)\n \n### Testing:\n * 3.7 for travis\n [PR #864](https://github.com/RDFLib/rdflib/pull/864)\n * Added trig unit tests to highlight some current parsing/serializing issues\n [PR #431](https://github.com/RDFLib/rdflib/pull/431)\n\n### Documentation Fixes:\n * Fix a doc string in the query module\n [PR #976](https://github.com/RDFLib/rdflib/pull/976)\n * setup.py: Make the license field use an SPDX identifier\n [PR #789](https://github.com/RDFLib/rdflib/pull/789)\n * Update README.md\n [PR #764](https://github.com/RDFLib/rdflib/pull/764)\n * Update namespaces_and_bindings.rst\n [PR #757](https://github.com/RDFLib/rdflib/pull/757)\n * DOC: README.md: rdflib-jsonld, https uris\n [PR #712](https://github.com/RDFLib/rdflib/pull/712)\n * make doctest support py2/py3\n [ISSUE #707](https://github.com/RDFLib/rdflib/issues/707)\n * `pip install rdflib` (as per README.md) gets OSError on Mint 18.1\n [ISSUE #704](https://github.com/RDFLib/rdflib/issues/704)\n\n\n\n2017-01-29 RELEASE 4.2.2\n========================\n\nThis is a bug-fix release, and the last release in the 4.X.X series.\n\nBug fixes:\n----------\n* SPARQL bugs fixed:\n * Fix for filters in sub-queries\n [#693](https://github.com/RDFLib/rdflib/pull/693)\n * Fixed bind, initBindings and filter problems\n [#294](https://github.com/RDFLib/rdflib/issues/294)\n [#555](https://github.com/RDFLib/rdflib/pull/555)\n [#580](https://github.com/RDFLib/rdflib/issues/580)\n [#586](https://github.com/RDFLib/rdflib/issues/586)\n [#601](https://github.com/RDFLib/rdflib/pull/601)\n [#615](https://github.com/RDFLib/rdflib/issues/615)\n [#617](https://github.com/RDFLib/rdflib/issues/617)\n [#619](https://github.com/RDFLib/rdflib/issues/619)\n [#630](https://github.com/RDFLib/rdflib/issues/630)\n [#653](https://github.com/RDFLib/rdflib/issues/653)\n [#686](https://github.com/RDFLib/rdflib/issues/686)\n [#688](https://github.com/RDFLib/rdflib/pull/688)\n [#692](https://github.com/RDFLib/rdflib/pull/692)\n * Fixed unexpected None value in SPARQL-update\n [#633](https://github.com/RDFLib/rdflib/issues/633)\n [#634](https://github.com/RDFLib/rdflib/pull/634)\n * Fix sparql, group by and count of null values with `optional`\n [#631](https://github.com/RDFLib/rdflib/issues/631)\n * Fixed sparql sub-query and aggregation bugs\n [#607](https://github.com/RDFLib/rdflib/issues/607)\n [#610](https://github.com/RDFLib/rdflib/pull/610)\n [#628](https://github.com/RDFLib/rdflib/issues/628)\n [#694](https://github.com/RDFLib/rdflib/pull/694)\n * Fixed parsing Complex BGPs as triples\n [#622](https://github.com/RDFLib/rdflib/pull/622)\n [#623](https://github.com/RDFLib/rdflib/issues/623)\n * Fixed DISTINCT being ignored inside aggregate functions\n [#404](https://github.com/RDFLib/rdflib/issues/404)\n [#611](https://github.com/RDFLib/rdflib/pull/611)\n [#678](https://github.com/RDFLib/rdflib/pull/678)\n * Fix unicode 
encoding errors in sparql processor\n [#446](https://github.com/RDFLib/rdflib/issues/446)\n [#599](https://github.com/RDFLib/rdflib/pull/599)\n * Fixed SPARQL select nothing no longer returning a `None` row\n [#554](https://github.com/RDFLib/rdflib/issues/554)\n [#592](https://github.com/RDFLib/rdflib/pull/592)\n * Fixed aggregate operators COUNT and SAMPLE to ignore unbound / NULL values\n [#564](https://github.com/RDFLib/rdflib/pull/564)\n [#563](https://github.com/RDFLib/rdflib/issues/563)\n [#567](https://github.com/RDFLib/rdflib/pull/567)\n [#568](https://github.com/RDFLib/rdflib/pull/568)\n * Fix sparql relative uris\n [#523](https://github.com/RDFLib/rdflib/issues/523)\n [#524](https://github.com/RDFLib/rdflib/pull/524)\n * SPARQL can now compare xsd:date type as well, fixes #532\n [#532](https://github.com/RDFLib/rdflib/issues/532)\n [#533](https://github.com/RDFLib/rdflib/pull/533)\n * fix sparql path order on python3: \"TypeError: unorderable types: SequencePath() < SequencePath()\"\"\n [#492](https://github.com/RDFLib/rdflib/issues/492)\n [#525](https://github.com/RDFLib/rdflib/pull/525)\n * SPARQL parser now robust to spurious semicolon\n [#381](https://github.com/RDFLib/rdflib/issues/381)\n [#528](https://github.com/RDFLib/rdflib/pull/528)\n * Let paths be comparable against all nodes even in py3 (preparedQuery error)\n [#545](https://github.com/RDFLib/rdflib/issues/545)\n [#552](https://github.com/RDFLib/rdflib/pull/552)\n * Made behavior of `initN` in `update` and `query` more consistent\n [#579](https://github.com/RDFLib/rdflib/issues/579)\n [#600](https://github.com/RDFLib/rdflib/pull/600)\n* SparqlStore:\n * SparqlStore now closes underlying urllib response body\n [#638](https://github.com/RDFLib/rdflib/pull/638)\n [#683](https://github.com/RDFLib/rdflib/pull/683)\n * SparqlStore injectPrefixes only modifies query if prefixes present and if adds a newline in between\n [#521](https://github.com/RDFLib/rdflib/issues/521)\n [#522](https://github.com/RDFLib/rdflib/pull/522)\n* Fixes and tests for AuditableStore\n [#537](https://github.com/RDFLib/rdflib/pull/537)\n [#557](https://github.com/RDFLib/rdflib/pull/557)\n* Trig bugs fixed:\n * trig export of multiple graphs assigns wrong prefixes to prefixedNames\n [#679](https://github.com/RDFLib/rdflib/issues/679)\n * Trig serialiser writing empty named graph name for default graph\n [#433](https://github.com/RDFLib/rdflib/issues/433)\n * Trig parser can creating multiple contexts for the default graph\n [#432](https://github.com/RDFLib/rdflib/issues/432)\n * Trig serialisation handling prefixes incorrectly\n [#428](https://github.com/RDFLib/rdflib/issues/428)\n [#699](https://github.com/RDFLib/rdflib/pull/699)\n* Fixed Nquads parser handling of triples in default graph\n [#535](https://github.com/RDFLib/rdflib/issues/535)\n [#536](https://github.com/RDFLib/rdflib/pull/536)\n* Fixed TypeError in Turtle serializer (unorderable types: DocumentFragment() > DocumentFragment())\n [#613](https://github.com/RDFLib/rdflib/issues/613)\n [#648](https://github.com/RDFLib/rdflib/issues/648)\n [#666](https://github.com/RDFLib/rdflib/pull/666)\n [#676](https://github.com/RDFLib/rdflib/issues/676)\n* Fixed serialization and parsing of inf/nan\n [#655](https://github.com/RDFLib/rdflib/pull/655)\n [#658](https://github.com/RDFLib/rdflib/pull/658)\n* Fixed RDFa parser from failing on time elements with child nodes\n [#576](https://github.com/RDFLib/rdflib/issues/576)\n [#577](https://github.com/RDFLib/rdflib/pull/577)\n* Fix double reduction of 
\\\\ escapes in from_n3\n [#546](https://github.com/RDFLib/rdflib/issues/546)\n [#548](https://github.com/RDFLib/rdflib/pull/548)\n* Fixed handling of xsd:base64Binary\n [#646](https://github.com/RDFLib/rdflib/issues/646)\n [#674](https://github.com/RDFLib/rdflib/pull/674)\n* Fixed Collection.__setitem__ broken\n [#604](https://github.com/RDFLib/rdflib/issues/604)\n [#605](https://github.com/RDFLib/rdflib/pull/605)\n* Fix ImportError when __main__ already loaded\n [#616](https://github.com/RDFLib/rdflib/pull/616)\n* Fixed broken top_level.txt file in distribution\n [#571](https://github.com/RDFLib/rdflib/issues/571)\n [#572](https://github.com/RDFLib/rdflib/pull/572)\n [#573](https://github.com/RDFLib/rdflib/pull/573)\n\n\nEnhancements:\n-------------\n* Added support for Python 3.5+\n [#526](https://github.com/RDFLib/rdflib/pull/526)\n* More aliases for common formats (nt, turtle)\n [#701](https://github.com/RDFLib/rdflib/pull/701)\n* Improved RDF1.1 ntriples support\n [#695](https://github.com/RDFLib/rdflib/issues/695)\n [#700](https://github.com/RDFLib/rdflib/pull/700)\n* Dependencies updated and improved compatibility with pyparsing, html5lib, SPARQLWrapper and elementtree\n [#550](https://github.com/RDFLib/rdflib/pull/550)\n [#589](https://github.com/RDFLib/rdflib/issues/589)\n [#606](https://github.com/RDFLib/rdflib/issues/606)\n [#641](https://github.com/RDFLib/rdflib/pull/641)\n [#642](https://github.com/RDFLib/rdflib/issues/642)\n [#650](https://github.com/RDFLib/rdflib/pull/650)\n [#671](https://github.com/RDFLib/rdflib/issues/671)\n [#675](https://github.com/RDFLib/rdflib/pull/675)\n [#684](https://github.com/RDFLib/rdflib/pull/684)\n [#696](https://github.com/RDFLib/rdflib/pull/696)\n* Improved prefix for SPARQL namespace in XML serialization\n [#493](https://github.com/RDFLib/rdflib/issues/493)\n [#588](https://github.com/RDFLib/rdflib/pull/588)\n* Performance improvements:\n * SPARQL Aggregation functions don't build up memory for each row\n [#678](https://github.com/RDFLib/rdflib/pull/678)\n * Collections now support += (__iadd__), fixes slow creation of large lists\n [#609](https://github.com/RDFLib/rdflib/issues/609)\n [#612](https://github.com/RDFLib/rdflib/pull/612)\n [#691](https://github.com/RDFLib/rdflib/pull/691)\n * SPARQL Optimisation to expand BGPs in a smarter way\n [#547](https://github.com/RDFLib/rdflib/pull/547)\n* SPARQLStore improvements\n * improved SPARQLStore BNode customizability\n [#511](https://github.com/RDFLib/rdflib/issues/511)\n [#512](https://github.com/RDFLib/rdflib/pull/512)\n [#513](https://github.com/RDFLib/rdflib/pull/513)\n [#603](https://github.com/RDFLib/rdflib/pull/603)\n * Adding the option of using POST for long queries in SPARQLStore\n [#672](https://github.com/RDFLib/rdflib/issues/672)\n [#673](https://github.com/RDFLib/rdflib/pull/673)\n * Exposed the timeout of SPARQLWrapper\n [#531](https://github.com/RDFLib/rdflib/pull/531)\n* SPARQL prepared query now carries the original (unparsed) parameters\n [#565](https://github.com/RDFLib/rdflib/pull/565)\n* added .n3 methods for path objects\n [#553](https://github.com/RDFLib/rdflib/pull/553)\n* Added support for xsd:gYear and xsd:gYearMonth\n [#635](https://github.com/RDFLib/rdflib/issues/635)\n [#636](https://github.com/RDFLib/rdflib/pull/636)\n* Allow duplicates in rdf:List\n [#223](https://github.com/RDFLib/rdflib/issues/223)\n [#690](https://github.com/RDFLib/rdflib/pull/690)\n* Improved slicing of Resource objects\n 
[#529](https://github.com/RDFLib/rdflib/pull/529)\n\n\nCleanups:\n---------\n* cleanup: SPARQL Prologue and Query new style classes\n [#566](https://github.com/RDFLib/rdflib/pull/566)\n* Reduce amount of warnings, especially closing opened file pointers\n [#518](https://github.com/RDFLib/rdflib/pull/518)\n [#651](https://github.com/RDFLib/rdflib/issues/651)\n* Improved ntriples parsing exceptions to actually tell you what's wrong\n [#640](https://github.com/RDFLib/rdflib/pull/640)\n [#643](https://github.com/RDFLib/rdflib/pull/643)\n* remove ancient and broken 2.3 support code.\n [#680](https://github.com/RDFLib/rdflib/issues/680)\n [#681](https://github.com/RDFLib/rdflib/pull/681)\n* Logger output improved\n [#662](https://github.com/RDFLib/rdflib/pull/662)\n* properly cite RGDA1\n [#624](https://github.com/RDFLib/rdflib/pull/624)\n* Avoid class reference to imported function\n [#574](https://github.com/RDFLib/rdflib/issues/574)\n [#578](https://github.com/RDFLib/rdflib/pull/578)\n* Use find_packages for package discovery.\n [#590](https://github.com/RDFLib/rdflib/pull/590)\n* Prepared ClosedNamespace (and _RDFNamespace) to inherit from Namespace (5.0.0)\n [#551](https://github.com/RDFLib/rdflib/pull/551)\n [#595](https://github.com/RDFLib/rdflib/pull/595)\n* Avoid verbose build logging\n [#534](https://github.com/RDFLib/rdflib/pull/534)\n* (ultra petty) Remove an unused import\n [#593](https://github.com/RDFLib/rdflib/pull/593)\n\n\nTesting improvements:\n---------------------\n* updating deprecated testing syntax\n [#697](https://github.com/RDFLib/rdflib/pull/697)\n* make test 375 more portable (use sys.executable rather than python)\n [#664](https://github.com/RDFLib/rdflib/issues/664)\n [#668](https://github.com/RDFLib/rdflib/pull/668)\n* Removed outdated, skipped test for #130 that depended on content from the internet\n [#256](https://github.com/RDFLib/rdflib/issues/256)\n* enable all warnings during travis nosetests\n [#517](https://github.com/RDFLib/rdflib/pull/517)\n* travis updates\n [#659](https://github.com/RDFLib/rdflib/issues/659)\n* travis also builds release branches\n [#598](https://github.com/RDFLib/rdflib/pull/598)\n\n\nDoc improvements:\n-----------------\n* Update list of builtin serialisers in docstring\n [#621](https://github.com/RDFLib/rdflib/pull/621)\n* Update reference to \"Emulating container types\"\n [#575](https://github.com/RDFLib/rdflib/issues/575)\n [#581](https://github.com/RDFLib/rdflib/pull/581)\n [#583](https://github.com/RDFLib/rdflib/pull/583)\n [#584](https://github.com/RDFLib/rdflib/pull/584)\n* docs: clarify the use of an identifier when persisting a triplestore\n [#654](https://github.com/RDFLib/rdflib/pull/654)\n* DOC: fix simple typo, -> unnamed\n [#562](https://github.com/RDFLib/rdflib/pull/562)\n\n\n\n2015-08-12 RELEASE 4.2.1\n========================\n\nThis is a bug-fix release.\n\nMinor enhancements:\n-------------------\n* Added a Networkx connector\n [#471](https://github.com/RDFLib/rdflib/pull/471),\n [#507](https://github.com/RDFLib/rdflib/pull/507)\n* Added a graph_tool connector\n [#473](https://github.com/RDFLib/rdflib/pull/473)\n* Added a `graphs` method to the Dataset object\n [#504](https://github.com/RDFLib/rdflib/pull/504),\n [#495](https://github.com/RDFLib/rdflib/issues/495)\n* Batch commits for `SPARQLUpdateStore`\n [#486](https://github.com/RDFLib/rdflib/pull/486)\n\nBug fixes:\n----------\n* Fixed bnode collision bug\n [#506](https://github.com/RDFLib/rdflib/pull/506),\n 
[#496](https://github.com/RDFLib/rdflib/pull/496),\n [#494](https://github.com/RDFLib/rdflib/issues/494)\n* fix `util.from_n3()` parsing Literals with datatypes and Namespace support\n [#503](https://github.com/RDFLib/rdflib/pull/503),\n [#502](https://github.com/RDFLib/rdflib/issues/502)\n* make `Identifier.__hash__` stable wrt. multi processes\n [#501](https://github.com/RDFLib/rdflib/pull/501),\n [#500](https://github.com/RDFLib/rdflib/issues/500)\n* fix handling `URLInputSource` without content-type\n [#499](https://github.com/RDFLib/rdflib/pull/499),\n [#498](https://github.com/RDFLib/rdflib/pull/498)\n* no relative import in `algebra` when run as a script\n [#497](https://github.com/RDFLib/rdflib/pull/497)\n* Duplicate option in armstrong `theme.conf` removed\n [#491](https://github.com/RDFLib/rdflib/issues/491)\n* `Variable.__repr__` returns a python representation string, not n3\n [#488](https://github.com/RDFLib/rdflib/pull/488)\n* fixed broken example\n [#482](https://github.com/RDFLib/rdflib/pull/482)\n* trig output fixes\n [#480](https://github.com/RDFLib/rdflib/pull/480)\n* set PYTHONPATH to make rdfpipe tests use the right rdflib version\n [#477](https://github.com/RDFLib/rdflib/pull/477)\n* fix RDF/XML problem with unqualified use of `rdf:about`\n [#470](https://github.com/RDFLib/rdflib/pull/470),\n [#468](https://github.com/RDFLib/rdflib/issues/468)\n* `AuditableStore` improvements\n [#469](https://github.com/RDFLib/rdflib/pull/469),\n [#463](https://github.com/RDFLib/rdflib/pull/463)\n* added asserts for `graph.set([s,p,o])` so `s` and `p` aren't `None`\n [#467](https://github.com/RDFLib/rdflib/pull/467)\n* `threading.RLock` instances are context managers\n [#465](https://github.com/RDFLib/rdflib/pull/465)\n* SPARQLStore does not transform Literal('') into Literal('None') anymore\n [#459](https://github.com/RDFLib/rdflib/pull/459),\n [#457](https://github.com/RDFLib/rdflib/issues/457)\n* slight performance increase for graph.all_nodes()\n [#458](https://github.com/RDFLib/rdflib/pull/458)\n\nTesting improvements:\n---------------------\n* travis: migrate to docker container infrastructure\n [#508](https://github.com/RDFLib/rdflib/pull/508)\n* test for narrow python builds (chars > 0xFFFF) (related to\n [#453](https://github.com/RDFLib/rdflib/pull/453),\n [#454](https://github.com/RDFLib/rdflib/pull/454)\n )\n [#456](https://github.com/RDFLib/rdflib/issues/456),\n [#509](https://github.com/RDFLib/rdflib/pull/509)\n* dropped testing py3.2\n [#448](https://github.com/RDFLib/rdflib/issues/448)\n* Running a local fuseki server on travis and making it failsafe\n [#476](https://github.com/RDFLib/rdflib/pull/476),\n [#475](https://github.com/RDFLib/rdflib/issues/475),\n [#474](https://github.com/RDFLib/rdflib/pull/474),\n [#466](https://github.com/RDFLib/rdflib/pull/466),\n [#460](https://github.com/RDFLib/rdflib/issues/460)\n* exclude `def main():` functions from test coverage analysis\n [#472](https://github.com/RDFLib/rdflib/pull/472)\n\n\n2015-02-19 RELEASE 4.2.0\n========================\n\nThis is a new minor version of RDFLib including a handful of new features:\n\n* Supporting N-Triples 1.1 syntax using UTF-8 encoding\n [#447](https://github.com/RDFLib/rdflib/pull/447),\n [#449](https://github.com/RDFLib/rdflib/pull/449),\n [#400](https://github.com/RDFLib/rdflib/issues/400)\n* Graph comparison now really works using RGDA1 (RDF Graph Digest Algorithm 1)\n [#441](https://github.com/RDFLib/rdflib/pull/441)\n [#385](https://github.com/RDFLib/rdflib/issues/385)\n* More graceful 
degradation than simple crashing for unicode chars > 0xFFFF on\n narrow python builds. Parsing such characters will now work, but issue a\n UnicodeWarning. If you run `python -W all` you will already see a warning on\n `import rdflib` will show a warning (ImportWarning).\n [#453](https://github.com/RDFLib/rdflib/pull/453),\n [#454](https://github.com/RDFLib/rdflib/pull/454)\n* URLInputSource now supports json-ld\n [#425](https://github.com/RDFLib/rdflib/pull/425)\n* SPARQLStore is now graph aware\n [#401](https://github.com/RDFLib/rdflib/pull/401),\n [#402](https://github.com/RDFLib/rdflib/pull/402)\n* SPARQLStore now uses SPARQLWrapper for updates\n [#397](https://github.com/RDFLib/rdflib/pull/397)\n* Certain logging output is immediately shown in interactive mode\n [#414](https://github.com/RDFLib/rdflib/pull/414)\n* Python 3.4 fully supported\n [#418](https://github.com/RDFLib/rdflib/pull/418)\n\nMinor enhancements & bugs fixed:\n--------------------------------\n\n* Fixed double invocation of 2to3\n [#437](https://github.com/RDFLib/rdflib/pull/437)\n* PyRDFa parser missing brackets\n [#434](https://github.com/RDFLib/rdflib/pull/434)\n* Correctly handle \\uXXXX and \\UXXXXXXXX escapes in n3 files\n [#426](https://github.com/RDFLib/rdflib/pull/426)\n* Logging cleanups and keeping it on stderr\n [#420](https://github.com/RDFLib/rdflib/pull/420)\n [#414](https://github.com/RDFLib/rdflib/pull/414)\n [#413](https://github.com/RDFLib/rdflib/issues/413)\n* n3: allow @base URI to have a trailing '#'\n [#407](https://github.com/RDFLib/rdflib/pull/407)\n [#379](https://github.com/RDFLib/rdflib/issues/379)\n* microdata: add file:// to base if it's a filename so rdflib can parse its own\n output\n [#406](https://github.com/RDFLib/rdflib/pull/406)\n [#403](https://github.com/RDFLib/rdflib/issues/403)\n* TSV Results parse skips empty bindings in result\n [#390](https://github.com/RDFLib/rdflib/pull/390)\n* fixed accidental test run due to name\n [#389](https://github.com/RDFLib/rdflib/pull/389)\n* Bad boolean list serialization to Turtle & fixed ambiguity between\n Literal(False) and None\n [#387](https://github.com/RDFLib/rdflib/pull/387)\n [#382](https://github.com/RDFLib/rdflib/pull/382)\n* Current version number & PyPI link in README.md\n [#383](https://github.com/RDFLib/rdflib/pull/383)\n\n\n2014-04-15 RELEASE 4.1.2\n========================\n\nThis is a bug-fix release.\n\n* Fixed unicode/str bug in py3 for rdfpipe\n [#375](https://github.com/RDFLib/rdflib/issues/375)\n\n2014-03-03 RELEASE 4.1.1\n========================\n\nThis is a bug-fix release.\n\nThis will be the last RDFLib release to support python 2.5.\n\n* The RDF/XML Parser was made stricter, now raises exceptions for\n illegal repeated node-elements.\n [#363](https://github.com/RDFLib/rdflib/issues/363)\n\n* The SPARQLUpdateStore now supports non-ascii unicode in update\n statements\n [#356](https://github.com/RDFLib/rdflib/issues/356)\n\n* Fixed a bug in the NTriple/NQuad parser wrt. 
to unicode escape sequences\n [#352](https://github.com/RDFLib/rdflib/issues/352)\n\n* HTML5Lib is no longer pinned to 0.95\n [#355](https://github.com/RDFLib/rdflib/issues/360)\n\n* RDF/XML Serializer now uses parseType=Literal for well-formed XML literals\n\n* A bug in the manchester OWL syntax was fixed\n [#355](https://github.com/RDFLib/rdflib/issues/355)\n\n2013-12-31 RELEASE 4.1\n======================\n\nThis is a new minor version RDFLib, which includes a handful of new features:\n\n* A TriG parser was added (we already had a serializer) - it is\n up-to-date wrt. to the newest spec from: http://www.w3.org/TR/trig/\n\n* The Turtle parser was made up to date wrt. to the latest Turtle spec.\n\n* Many more tests have been added - RDFLib now has over 2000\n (passing!) tests. This is mainly thanks to the NT, Turtle, TriG,\n NQuads and SPARQL test-suites from W3C. This also included many\n fixes to the nt and nquad parsers.\n\n* ```ConjunctiveGraph``` and ```Dataset``` now support directly adding/removing\n quads with ```add/addN/remove``` methods.\n\n* ```rdfpipe``` command now supports datasets, and reading/writing context\n sensitive formats.\n\n* Optional graph-tracking was added to the Store interface, allowing\n empty graphs to be tracked for Datasets. The DataSet class also saw\n a general clean-up, see: [#309](https://github.com/RDFLib/rdflib/pull/309)\n\n* After long deprecation, ```BackwardCompatibleGraph``` was removed.\n\nMinor enhancements/bugs fixed:\n------------------------------\n\n* Many code samples in the documentation were fixed thanks to @PuckCh\n\n* The new ```IOMemory``` store was optimised a bit\n\n* ```SPARQL(Update)Store``` has been made more generic.\n\n* MD5 sums were never reinitialized in ```rdflib.compare```\n\n* Correct default value for empty prefix in N3\n [#312](https://github.com/RDFLib/rdflib/issues/312)\n\n* Fixed tests when running in a non UTF-8 locale\n [#344](https://github.com/RDFLib/rdflib/issues/344)\n\n* Prefix in the original turtle have an impact on SPARQL query\n resolution\n [#313](https://github.com/RDFLib/rdflib/issues/313)\n\n* Duplicate BNode IDs from N3 Parser\n [#305](https://github.com/RDFLib/rdflib/issues/305)\n\n* Use QNames for TriG graph names\n [#330](https://github.com/RDFLib/rdflib/issues/330)\n\n* \\uXXXX escapes in Turtle/N3 were fixed\n [#335](https://github.com/RDFLib/rdflib/issues/335)\n\n* A way to limit the number of triples retrieved from the\n ```SPARQLStore``` was added\n [#346](https://github.com/RDFLib/rdflib/pull/346)\n\n* Dots in localnames in Turtle\n [#345](https://github.com/RDFLib/rdflib/issues/345)\n [#336](https://github.com/RDFLib/rdflib/issues/336)\n\n* ```BNode``` as Graph's public ID\n [#300](https://github.com/RDFLib/rdflib/issues/300)\n\n* Introduced ordering of ```QuotedGraphs```\n [#291](https://github.com/RDFLib/rdflib/issues/291)\n\n2013-05-22 RELEASE 4.0.1\n========================\n\nFollowing RDFLib tradition, some bugs snuck into the 4.0 release.\nThis is a bug-fixing release:\n\n* the new URI validation caused lots of problems, but is\n necessary to avoid ''RDF injection'' vulnerabilities. 
In the\n spirit of ''be liberal in what you accept, but conservative in\n what you produce\", we moved validation to serialisation time.\n\n* the ```rdflib.tools``` package was missing from the\n ```setup.py``` script, and was therefore not included in the\n PYPI tarballs.\n\n* RDF parser choked on empty namespace URI\n [#288](https://github.com/RDFLib/rdflib/issues/288)\n\n* Parsing from ```sys.stdin``` was broken\n [#285](https://github.com/RDFLib/rdflib/issues/285)\n\n* The new IO store had problems with concurrent modifications if\n several graphs used the same store\n [#286](https://github.com/RDFLib/rdflib/issues/286)\n\n* Moved HTML5Lib dependency to the recently released 1.0b1 which\n support python3\n\n2013-05-16 RELEASE 4.0\n======================\n\nThis release includes several major changes:\n\n* The new SPARQL 1.1 engine (rdflib-sparql) has been included in\n the core distribution. SPARQL 1.1 queries and updates should\n work out of the box.\n\n * SPARQL paths are exposed as operators on ```URIRefs```, these can\n then be be used with graph.triples and friends:\n\n ```py\n # List names of friends of Bob:\n g.triples(( bob, FOAF.knows/FOAF.name , None ))\n\n # All super-classes:\n g.triples(( cls, RDFS.subClassOf * '+', None ))\n ```\n\n * a new ```graph.update``` method will apply SPARQL update statements\n\n* Several RDF 1.1 features are available:\n * A new ```DataSet``` class\n * ```XMLLiteral``` and ```HTMLLiterals```\n * ```BNode``` (de)skolemization is supported through ```BNode.skolemize```,\n ```URIRef.de_skolemize```, ```Graph.skolemize``` and ```Graph.de_skolemize```\n\n* Handled of Literal equality was split into lexical comparison\n (for normal ```==``` operator) and value space (using new ```Node.eq```\n methods). This introduces some slight backwards incompatible\n changes, but was necessary, as the old version had\n inconsistent hash and equality methods that could lead the\n literals not working correctly in dicts/sets.\n The new way is more in line with how SPARQL 1.1 works.\n For the full details, see:\n\n https://github.com/RDFLib/rdflib/wiki/Literal-reworking\n\n* Iterating over ```QueryResults``` will generate ```ResultRow``` objects,\n these allow access to variable bindings as attributes or as a\n dict. I.e.\n\n ```py\n for row in graph.query('select ... 
') :\n print row.age, row[\"name\"]\n ```\n\n* \"Slicing\" of Graphs and Resources as syntactic sugar:\n ([#271](https://github.com/RDFLib/rdflib/issues/271))\n\n ```py\n graph[bob : FOAF.knows/FOAF.name]\n -> generator over the names of Bobs friends\n ```\n\n* The ```SPARQLStore``` and ```SPARQLUpdateStore``` are now included\n in the RDFLib core\n\n* The documentation has been given a major overhaul, and examples\n for most features have been added.\n\n\nMinor Changes:\n--------------\n\n* String operations on URIRefs return new URIRefs: ([#258](https://github.com/RDFLib/rdflib/issues/258))\n ```py\n >>> URIRef('http://example.org/')+'test\n rdflib.term.URIRef('http://example.org/test')\n ```\n\n* Parser/Serializer plugins are also found by mime-type, not just\n by plugin name: ([#277](https://github.com/RDFLib/rdflib/issues/277))\n* ```Namespace``` is no longer a subclass of ```URIRef```\n* URIRefs and Literal language tags are validated on construction,\n avoiding some \"RDF-injection\" issues ([#266](https://github.com/RDFLib/rdflib/issues/266))\n* A new memory store needs much less memory when loading large\n graphs ([#268](https://github.com/RDFLib/rdflib/issues/268))\n* Turtle/N3 serializer now supports the base keyword correctly ([#248](https://github.com/RDFLib/rdflib/issues/248))\n* py2exe support was fixed ([#257](https://github.com/RDFLib/rdflib/issues/257))\n* Several bugs in the TriG serializer were fixed\n* Several bugs in the NQuads parser were fixed\n\n2013-03-01 RELEASE 3.4\n======================\n\nThis release introduced new parsers for structured data in HTML.\nIn particular formats: hturtle, rdfa, mdata and an auto-detecting\nhtml format were added. Thanks to Ivan Herman for this!\n\nThis release includes a lot of admin maintentance - correct\ndependencies for different python versions, etc. Several py3 bugs\nwere also fixed.\n\nThis release drops python 2.4 compatibility - it was just getting\ntoo expensive for us to maintain. 
It should however be compatible\nwith any cpython from 2.5 through 3.3.\n\n* ```node.md5_term``` is now deprecated, if you use it let us know.\n\n* Literal.datatype/language are now read-only properties ([#226](https://github.com/RDFLib/rdflib/issues/226))\n* Serializing to file fails in py3 ([#249](https://github.com/RDFLib/rdflib/issues/249))\n* TriX serializer places two xmlns attributes on same element ([#250](https://github.com/RDFLib/rdflib/issues/250))\n* RDF/XML parser fails on when XML namespace is not explicitly declared ([#247](https://github.com/RDFLib/rdflib/issues/247))\n* Resource class should \"unbox\" Resource instances on add ([#215](https://github.com/RDFLib/rdflib/issues/215))\n* Turtle/N3 does not encode final quote of a string ([#239](https://github.com/RDFLib/rdflib/issues/239))\n* float Literal precision lost when serializing graph to turtle or n3 ([#237](https://github.com/RDFLib/rdflib/issues/237))\n* plain-literal representation of xsd:decimals fixed\n* allow read-only sleepycat stores\n* language tag parsing in N3/Turtle fixes to allow several subtags.\n\n2012-10-10 RELEASE 3.2.3\n========================\n\nAlmost identical to 3.2.2\nA stupid bug snuck into 3.2.2, and querying graphs were broken.\n\n* Fixes broken querying ([#234](https://github.com/RDFLib/rdflib/issues/234))\n* graph.transitiveClosure now works with loops ([#206](https://github.com/RDFLib/rdflib/issues/206))\n\n2012-09-25 RELEASE 3.2.2\n========================\n\nThis is mainly a maintenance release.\n\nThis release should be compatible with python 2.4 through to 3.\n\nChanges:\n\n* Improved serialization/parsing roundtrip tests led to some fixes\n of obscure parser/serializer bugs. In particular complex string\n Literals in ntriples improved a lot.\n* The terms of a triple are now asserted to be RDFLib Node's in graph.add\n This should avoid getting strings and other things in the store. ([#200](https://github.com/RDFLib/rdflib/issues/200))\n* Added a specific TurtleParser that does not require the store to be\n non-formula aware. ([#214](https://github.com/RDFLib/rdflib/issues/214))\n* A trig-serializer was added, see:\n http://www4.wiwiss.fu-berlin.de/bizer/trig/\n* BNode generation was made thread-safe ([#209](https://github.com/RDFLib/rdflib/issues/209))\n (also fixed better by dzinxed)\n* Illegal BNode IDs removed from NT output: ([#212](https://github.com/RDFLib/rdflib/issues/212))\n* and more minor bug fixes that had no issues\n\n2012-04-24 RELEASE 3.2.1\n========================\n\nThis is mainly a maintenance release.\n\nChanges:\n\n* New setuptools entry points for query processors and results\n\n* Literals constructed from other literals copy datatype/lang ([#188](https://github.com/RDFLib/rdflib/issues/188))\n* Relative URIs are resolved incorrectly after redirects ([#130](https://github.com/RDFLib/rdflib/issues/130))\n* Illegal prefixes in turtle output ([#161](https://github.com/RDFLib/rdflib/issues/161))\n* Sleepcat store unstable prefixes ([#201](https://github.com/RDFLib/rdflib/issues/201))\n* Consistent toPyton() for all node objects ([#174](https://github.com/RDFLib/rdflib/issues/174))\n* Better random BNode ID in multi-thread environments ([#185](https://github.com/RDFLib/rdflib/issues/185))\n\n2012-01-19 RELEASE 3.2.0\n========================\n\nMajor changes:\n* Thanks to Thomas Kluyver, rdflib now works under python3,\n the setup.py script automatically runs 2to3.\n\n* Unit tests were updated and cleaned up. 
Now all tests should pass.\n* Documentation was updated and cleaned up.\n\n* A new resource oriented API was added:\n http://code.google.com/p/rdflib/issues/detail?id=166\n\n Fixed many minor issues:\n * http://code.google.com/p/rdflib/issues/detail?id=177\n http://code.google.com/p/rdflib/issues/detail?id=129\n Restored compatibility with Python 2.4\n * http://code.google.com/p/rdflib/issues/detail?id=158\n Reworking of Query result handling\n * http://code.google.com/p/rdflib/issues/detail?id=193\n generating xml:base attribute in RDF/XML output\n* http://code.google.com/p/rdflib/issues/detail?id=180\n serialize(format=\"pretty-xml\") fails on cyclic links\n\n\n2011-03-17 RELEASE 3.1.0\n========================\n\nFixed a range of minor issues:\n\n* http://code.google.com/p/rdflib/issues/detail?id=128\n\n Literal.__str__ does not behave like unicode\n\n* http://code.google.com/p/rdflib/issues/detail?id=141\n\n (RDFa Parser) Does not handle application/xhtml+xml\n\n* http://code.google.com/p/rdflib/issues/detail?id=142\n\n RDFa TC #117: Fragment identifiers stripped from BASE\n\n* http://code.google.com/p/rdflib/issues/detail?id=146\n\n Malformed literals produced when rdfa contains newlines\n\n* http://code.google.com/p/rdflib/issues/detail?id=152\n\n Namespaces beginning with _ are invalid\n\n* http://code.google.com/p/rdflib/issues/detail?id=156\n\n Turtle Files with a UTF-8 BOM fail to parse\n\n* http://code.google.com/p/rdflib/issues/detail?id=154\n\n ClosedNamespace.__str__ returns URIRef not str\n\n* http://code.google.com/p/rdflib/issues/detail?id=150\n\n IOMemory does not override open\n\n* http://code.google.com/p/rdflib/issues/detail?id=153\n\n Timestamps with microseconds *and* \"Z\" timezone are not parsed\n\n* http://code.google.com/p/rdflib/issues/detail?id=118\n\n DateTime literals with offsets fail to convert to Python\n\n* http://code.google.com/p/rdflib/issues/detail?id=157\n\n Timestamps with timezone information are not parsed\n\n* http://code.google.com/p/rdflib/issues/detail?id=151\n\n problem with unicode literals in rdflib.compare.graph_diff\n\n* http://code.google.com/p/rdflib/issues/detail?id=149\n\n BerkeleyDB Store broken with create=False\n\n* http://code.google.com/p/rdflib/issues/detail?id=134\n\n Would be useful if Graph.query could propagate kwargs to a\n\n plugin processor\n\n* http://code.google.com/p/rdflib/issues/detail?id=133\n\n Graph.connected exception when passed empty graph\n\n* http://code.google.com/p/rdflib/issues/detail?id=129\n\n Not compatible with Python 2.4\n\n* http://code.google.com/p/rdflib/issues/detail?id=119\n\n Support Python's set operations on Graph\n\n* http://code.google.com/p/rdflib/issues/detail?id=130\n\n NT output encoding to utf-8 broken as it goes through\n\n _xmlcharrefreplace\n\n* http://code.google.com/p/rdflib/issues/detail?id=121#c1\n\n Store SPARQL Support\n\n\n2010-05-13 RELEASE 3.0.0\n========================\n\nWorking test suite with all tests passing.\n\nRemoved dependency on setuptools.\n\n(Issue #43) Updated Package and Module Names to follow\nconventions outlined in\nhttp://www.python.org/dev/peps/pep-0008/\n\nRemoved SPARQL bits and non core plugins. 
They are mostly\nmoving to http://code.google.com/p/rdfextras/ at least until\nthey are stable.\n\nFixed datatype for Literal(True).\n\nFixed Literal to enforce constraint of having either a language\nor datatype but not both.\n\nFixed Literal's repr.\n\nFixed to Graph Add/Sub/Mul opterators.\n\nUpgraded RDFa parser to pyRdfa.\n\nUpgraded N3 parser to the one from CWM.\n\nFixed unicode encoding issue involving N3Parser.\n\nN3 serializer improvements.\n\nFixed HTTP content-negotiation\n\nFixed Store.namespaces method (which caused a few issues\ndepending on Store implementation being used.)\n\nFixed interoperability issue with plugin module.\n\nFixed use of Deprecated functionality.\n\n2009-03-30 RELEASE 2.4.1\n========================\n\nFixed Literal comparison case involving Literal's with\ndatatypes of XSD.base64Binary.\n\nFixed case where XSD.date was matching before XSD.dateTime for\ndatetime instances.\n\nFixed jython interoperability issue (issue #53).\n\nFixed Literal repr to handle apostrophes correctly (issue #28).\n\nFixed Literal's repr to be consistent with its ```__init__``` (issue #33).\n\n\n2007-04-04 RELEASE 2.4.0\n========================\n\nImproved Literal comparison / equality\n\nSparql cleanup.\n\ngetLiteralValue now returns the Literal object instead of the\nresult of toPython(). Now that Literals override a good\ncoverage of comparison operators, they should be passed around\nas first class objects in the SPARQL evaluation engine.\n\nAdded support for session bnodes re: sparql\n\nFixed prolog reduce/reduce conflict. Added Py_None IncRefs\nwhere they were being passed into Python method invocations\n(per drewp's patch)\n\nFixed sparql queries involving empty namespace prefix.\n\nFixed the selected variables sparql issue\n\nFixed <BASE> support in SPARQL queries.\n\nFixed involving multiple unions and queries are nested more\nthan one level (bug in _getAllVariables causing failure when\nparent.top is None)\n\nFixed test_sparql_equals.py.\n\nFixed sparql json result comma errors issue.\n\nFixed test_sparql_json_results.py (SELECT * variables out of\norder)\n\nAdded a 4Suite-based SPARQL XML Writer implementation. If\n4Suite is not installed, the fallback python saxutils is used\ninstead\n\napplied patch from\nhttp://rdflib.net/issues/2007/02/23/bugs_in_rdflib.sparql.queryresult/issue\n\nThe restriction on GRAPH patterns with variables has been\nrelieved a bit to allow such usage when the variable is\nprovided as an initial binding\n\nFix for OPTIONAL patterns. P1 OPT P2, where P1 and P2 shared\nvariables which were bound to BNodes were not unifying on\nthese BNode variable efficiently / correctly. The fix was to\nadd bindings for 'stored' BNodes so they aren't confused for\nwildcards\n\n\n\n\nAdded support to n3 parser for retaining namespace bindings.\n\nFixed several RDFaParser bugs.\n\nAdded serializer specific argument support.\n\nFixed a few PrettyXMLSerializer issues and added a max_depth\noption.\n\nFixed some TurtleSerializer issues.\n\nFixed some N3Serializer issues.\n\n\n\nAdded support easy_install\n\nadded link to long_descriptin for easy_install -U rdflib==dev\nto work; added download_url back\n\nadded continuous-releases-using-subversion bit\n\n\n\nAdded rdflib_tools package\n Added rdfpipe\n Added initial EARLPluging\n\n\n\nImproved test running... using nose... 
added tests\n\nExposed generated test cases for nose to find.\nadded bit to configure 'setup.py nosetests' to run doc tests\n\nadded nose test bits\n\n\n\nAdded md5_term_hash method to terms.\n\nAdded commit_pending_transaction argument to Graph's close\nmethod.\n\nAdded DeprecationWarning to rdflib.constants\n\nAdded a NamespaceDict class for those who want to avoid the\nNamespace as subclass of URIRef issues\n\nAdded bind function\n\nFixed type of Namespace re: URIRef vs. unicode\n\nImproved ValueError message\n\nChanged value method's any argument to default to True\n\nChanged ```__repr__``` to always reflect that it's an rdf.Literal --\nas this is the case even though we now have it acting like the\ncorresponding type in some casses\n\nA DISTINCT was added to the SELECT clause to ensure duplicate\ntriples are not returned (an RDF graph is a set of triples) -\nwhich can happen for certain join expressions.\n\nSupport for ConditionalAndExpressionList and\nRelationalExpressionList (|| and && operators in FILTER)\n\nFixed context column comparison. The hash integer was being\ncompared with 'F' causing a warning:Warning: Truncated\nincorrect DOUBLE value: 'F'\n\napplied patch in\nhttp://rdflib.net/issues/2006/12/13/typos_in_abstractsqlstore.py/issue\n\nfix for\nhttp://rdflib.net/issues/2006/12/07/problems_with_graph.seq()_when_sequences_contain_more_than_9_items./issue\n\n\n\n\n\nGeneral code cleanup (removing redundant imports, changing\nrelative imports to absolute imports etc)\n\nRemoved usage of deprecated bits.\n\nAdded a number of test cases.\n\nAdded DeprecationWarning for save method\n\nrefactoring of GraphPattern\n\nReadOnlyGraphAggregate uses Graph constructor properly to\nsetup (optionally) a common store\n\n\nFixed bug with . (fullstop) in localname parts.\n\nChanged Graph's value method to return None instead of raising\nan AssertionError.\n\nFixed conversion of (exiplicit) MySQL ports to integers.\n\nFixed MySQL store so it properly calculates ```__len__``` of\nindividual Graphs\n\nAligned with how BerkeleyDB is generating events (remove events\nare expressed in terms of interned strings)\n\nAdded code to catch unpickling related exceptions\n\nAdded BerkeleyDB store implementation.\n\nMerged TextIndex from michel-events branch.\n\n\n2006-10-15 RELEASE 2.3.3\n========================\n\nAdded TriXParser, N3Serializer and TurtleSerializer.\n\nAdded events to store interface: StoreCreated, TripleAdded and\nTripleRemoved.\n\nAdded Journal Reader and Writer.\n\nRemoved BerkeleyDB level journaling.\n\nAdded support for triple quoted Literal's.\n\nFixed some corner cases with Literal comparison.\n\nFixed PatternResolution for patterns that return contexts only.\n\nFixed NodePickler not to choke on unhashable objects.\n\nFixed Namespace's ```__getattr__``` hack to ignore names starting\nwith __\n\nAdded SPARQL != operator.\n\nFixed query result ```__len__``` (more efficient).\n\nFixed and improved RDFa parser.\n\nredland patches from\nhttp://rdflib.net/pipermail/dev/2006-September/000069.html\n\nvarious patches for the testsuite -\nhttp://rdflib.net/pipermail/dev/2006-September/000069.html\n\n\n2006-08-01 RELEASE 2.3.2\n========================\n\nAdded SPARQL query support.\n\nAdded XSD to/from Python datatype support to Literals.\n\nFixed ConjunctiveGraph so that it is a proper subclass of Graph.\n\nAdded Deprecation Warning when BackwardCompatGraph gets used.\n\nAdded RDFa parser.\n\nAdded Collection Class for working with RDF Collections.\n\nAdded method to Graph for testing 
connectedness\n\nFixed bug in N3 parser where identical BNodes were not being combined.\n\nFixed literal quoting in N3 serializer.\n\nFixed RDF/XML serializer to skip over N3 bits.\n\nChanged Literal and URIRef instantiation to catch\nUnicodeDecodeErrors - which were being thrown when the default\ndecoding method (ascii) was hitting certain characters.\n\nChanged Graph's bind method to also override the binding in\nthe case of an existing generated bindings.\n\nAdded FOPLRelationalModel - a set of utility classes that\nimplement a minimal Relational Model of FOPL implemented as a\nSQL database (uses identifier/value interning and integer\nhalf-md5-hashes for space and index efficiency).\n\nChanged MySQL store to use FOPLRelationalModel plus fixes and\nimprovements.\n\nAdded more test cases.\n\nCleaned up source code to follow pep8 / pep257.\n\n\n2006-02-27 RELEASE 2.3.1\n========================\n\nAdded save method to BackwardCompatibleGraph so that\nexample.py etc work again.\n\nApplied patch from Drew Perttula to add local_time_zone\nargument to util's date_time method.\n\nFixed a relativize bug in the rdf/xml serializer.\n\nFixed NameError: global name 'URIRef' is not defined error in\nBerkeleyDB.py by adding missing import.\n\nApplied patch for Seq to sort list by integer, added by Drew\nHess.\n\nAdded a preserve_bnode_ids option to rdf/xml parser.\n\nApplied assorted patches for tests (see\nhttp://tracker.asemantics.com/rdflib/ticket/8 )\n\nApplied redland.diff (see\nhttp://tracker.asemantics.com/rdflib/ticket/9 )\n\nApplied changes specified\nhttp://tracker.asemantics.com/rdflib/ticket/7\n\nAdded a set method to Graph.\n\nFixed RDF/XML serializer so that it does not choke on n3 bits\n(rather it'll just ignore them)\n\n\n2005-12-23 RELEASE 2.3.0\n========================\n\nSee http://rdflib.net/2.3.0/ for most up-to-date release notes\n\nAdded N3 support to Graph and Store.\n\nAdded Sean's n3p parser, and ntriples parser.\n\nBerkeleyDB implementation has been revamped in the process of\nexpanding it to support the new requirements n3\nrequirements. It also now persists a journal -- more to come.\n\ndetabified source files.\n\nLiteral and parsers now distinguish between datatype of None and datatype of \"\".\n\nStore-agnostic 'fallback' implementation of REGEX matching\n(inefficient but provides the capability to stores that don't\nsupport it natively). Implemented as a 'wrapper' around any\nStore which replaces REGEX terms with None (before dispatching\nto the store) and whittles out results that don't match the\ngiven REGEX term expression(s).\n\nStore-agnostic 'fallback' implementation of transactional\nrollbacks (also inefficient but provides the capability to\nstores that don't support it natively). Implemented as a\nwrapper that tracks a 'thread-safe' list of reversal\noperations (for every add, track the remove call that reverts\nthe store, and vice versa). Upon store.rollback(), execute the\nreverse operations. However, this doesn't guarantee\ndurability, since if the system fails before the rollbacks are\nall executed, the store will remain in an invalid state, but\nit provides Atomicity in the best case scenario.\n\n\n2005-10-10 RELEASE 2.2.3\n========================\n\nFixed BerkeleyDB backend to commit after an add and\nremove. This should help just a bit with those unclean\nshutdowns ;)\n\nFixed use of logging so that it does not mess with the root\nlogger. 
Thank you, Arve, for pointing this one out.\n\nFixed Graph's value method to have default for subject in\naddition to predicate and object.\n\nFixed Fourthought backend to be consistent with interface. It\nnow supports an empty constructor and an open method that\ntakes a configuration string.\n\n\n2005-09-10 RELEASE 2.2.2\n========================\n\nApplied patch from inkel to add encoding argument to all\nserialization related methods.\n\nFixed XMLSerializer bug regarding default namespace bindings.\n\nFixed namespace binding bug involving binding a second default\nnamespace.\n\nApplied patch from Gunnar AAstrand Grimnes to add context\nsupport to ```__iadd__``` on Graph. (Am considering the lack of\ncontext support a bug. Any users currently using ```__iadd__```, let\nme know if this breaks any of your code.)\n\nAdded Fourthought backend contributed by Chimezie Ogbuji.\n\nFixed a RDF/XML parser bug relating to XMLLiteral and\nescaping.\n\nFixed setup.py so that install does not try to uninstall\n(rename_old) before installing; there's now an uninstall\ncommand if one needs to uninstall.\n\n\n2005-08-25 RELEASE 2.2.1\n========================\n\nFixed issue regarding Python2.3 compatibility.\n\nFixed minor issue with URIRef's absolute method.\n\n\n2005-08-12 RELEASE 2.1.4\n========================\n\nAdded optional base argument to URIRef.\n\nFixed bug where load and parse had inconsistent behavior.\n\nAdded a FileInputSource.\n\nAdded skeleton sparql parser and test framework.\n\nIncluded pyparsing (pyparsing.sourceforge.net) for sparql parsing.\n\nAdded attribute support to namespaces.\n\n\n2005-06-28 RELEASE 2.1.3\n========================\n\nAdded Ivan's sparql-p implementation.\n\nLiteral is now picklable.\n\nAdded optional base argument to serialize methods about which to relativize.\n\nApplied patch to remove some dependencies on Python 2.4\nfeatures.\n\nFixed BNode's n3 serialization bug (recently introduced).\n\nFixed a collections related bug.\n\n\n2005-05-13 RELEASE 2.1.2\n========================\n\nAdded patch from Sidnei da Silva that adds a sqlobject based backend.\n\nFixed bug in PrettyXMLSerializer (rdf prefix decl was missing sometimes)\n\nFixed bug in RDF/XML parser where empty collections where\ncausing exceptions.\n\n\n2005-05-01 RELEASE 2.1.1\n========================\n\nFixed a number of bugs relating to 2.0 backward compatibility.\n\nFixed split_uri to handle URIs with _ in them properly.\n\nFixed bug in RDF/XML handler's absolutize that would cause some URIRefs to end in ##\n\nAdded check_context to Graph.\n\nAdded patch the improves IOMemory implementation.\n\n\n2005-04-12 RELEASE 2.1.0\n========================\n\nMerged TripleStore and InformationStore into Graph.\n\nAdded plugin support (or at least cleaned up, made consistent the\nplugin support that existed).\n\nAdded value and seq methods to Graph.\n\nRenamed prefix_mapping to bind.\n\nAdded namespaces method that is a generator over all prefix,\nnamespace bindings.\n\nAdded notion of NamespaceManager.\n\nAdded couple new backends, IOMemory and ZODB.\n\n\n2005-03-19 RELEASE 2.0.6\n========================\n\nAdded pretty-xml serializer (inlines BNodes where possible,\ntyped nodes, Collections).\n\nFixed bug in NTParser and n3 methods where not all characters\nwhere being escaped.\n\nChanged label and comment methods to return default passed in\nwhen there is no label or comment. Moved methods to Store\nClass. 
Store no longer inherits from Schema.\n\nFixed bug involving a case with rdf:about='#'\n\nChanged InMemoryBackend to update third index in the same style it\ndoes the first two.\n\n\n2005-01-08 RELEASE 2.0.5\n========================\n\nAdded publicID argument to Store's load method.\n\nAdded RDF and RDFS to top level rdflib package.\n\n\n2004-10-14 RELEASE 2.0.4\n========================\n\nRemoved unfinished functionality.\n\nFixed bug where another prefix other than rdf was getting\ndefined for the rdf namespace (causing an assertion to fail).\n\nFixed bug in serializer where nodeIDs were not valid NCNames.\n\n\n2004-04-21 RELEASE 2.0.3\n========================\n\nAdded missing \"from __future__ import generators\" statement to\nInformationStore.\n\nSimplified RDF/XML serializer fixing a few bugs involving\nBNodes.\n\nAdded a reset method to RDF/XML parser.\n\nChanged 'if foo' to \"if foo is not None\" in a few places in\nthe RDF/XML parser.\n\nFully qualified imports in rdflib.syntax {parser, serializer}.\n\nContext now goes through InformationStore (was bypassing it\ngoing directly to backend).\n\n\n2004-03-22 RELEASE 2.0.2\n========================\n\nImproved performance of Identifier equality tests.\n\nAdded missing \"from __future__ import generators\" statements\nneeded to run on Python2.2.\n\nAdded alternative to shlib.move() if it isn't present.\n\nFixed bug that occurred when specifying a backend to\nInformationStore's constructor.\n\nFixed bug recently introduced into InformationStore's remove\nmethod.\n\n\n2004-03-15 RELEASE 2.0.1\n========================\n\nFixed a bug in the SleepyCatBackend multi threaded concurrency\nsupport. (Tested fairly extensively under the following\nconditions: multi threaded, multi process, and both).\n\n> NOTE: fix involved change to database format -- so 2.0.1 will not be\n> able to open databases created with 2.0.0\n\nRemoved the use of the Concurrent wrapper around\nInMemoryBackend and modified InMemoryBackend to handle\nconcurrent requests. 
(Motivated by Concurrent's poor\nperformance on bigger TripleStores.)\n\nImproved the speed of len(store) by making backends\nresponsible for implementing ```__len__```.\n\nContext objects now have a identifier property.\n\n\n2004-03-10 RELEASE 2.0.0\n========================\n\nFixed a few bugs in the SleepyCatBackend multi process\nconcurrency support.\n\nRemoved rdflib.Resource\n\nChanged remove to now take a triple pattern and removed\nremove_triples method.\n\nAdded ```__iadd__``` method to Store in support of store +=\nanother_store.\n\n\n2004-01-04 RELEASE 1.3.2\n========================\n\nAdded a serialization dispatcher.\n\nAdded format arg to save method.\n\nStore now remembers prefix/namespace bindings.\n\nBackends are now more pluggable\n\n...\n\n2003-10-14 RELEASE 1.3.1\n========================\n\nFixed bug in serializer where triples where only getting\nserialized the first time.\n\nAdded type checking for contexts.\n\nFixed bug that caused comparisons with a Literal to fail when\nthe right hand side was not a string.\n\nAdded DB_INIT_CDB flag to SCBacked for supporting multiple\nreader/single writer access\n\nChanged rdf:RDF to be optional to conform with latest spec.\n\nFixed handling of XMLLiterals\n\n\n2003-04-40 RELEASE 1.3.0\n========================\n\nRemoved bag_id support and added it to OLD_TERMS.\n\nAdded a double hash for keys in SCBacked.\n\nFixed _HTTPClient so that it no longer removes metadata about\na context right after it adds it.\n\nAdded a KDTreeStore and RedlandStore backends.\n\nAdded a StoreTester.\n\n\n2003-02-28 RELEASE 1.2.4\n========================\n\nFixed bug in SCBackend where language and datatype information\nwhere being ignored.\n\nFixed bug in transitive_subjects.\n\nUpdated some of the test cases that where not up to date.\n\nasync_load now adds more http header and error information to\nthe InformationStore.\n\n\n2003-02-11 RELEASE 1.2.3\n========================\n\nFixed bug in load methods where relative URLs where not being\nabsolutized correctly on Windows.\n\nFixed serializer so that it throws an exception when trying to\nserialize a graph with a predicate that can not be split.\n\n\n2003-02-07 RELEASE 1.2.2\n========================\n\nAdded an exists method to the BackwardCompatibility mixin.\n\nAdded versions of remove, remove_triples and triples methods\nto the BackwardCompatility mixin for TripleStores that take an\ns, p, o as opposed to an (s, p, o).\n\n\n2003-02-03 RELEASE 1.2.1\n========================\n\nAdded support for parsing XMLLiterals.\n\nAdded support for proper charmod checking (only works in\nPython2.3).\n\nFixed remaining rdfcore test cases that where not passing.\n\nFixed windows bug in AbstractInformationStore's run method.\n\n\n2003-01-02 RELEASE 1.2.0\n========================\n\nAdded systemID, line #, and column # to error messages.\n\nBNode prefix is now composed of ascii_letters instead of letters.\n\nAdded a bsddb backed InformationStore.\n\nAdded an asynchronous load method, methods for scheduling context\nupdates, and a run method.\n\n\n2002-12-16 RELEASE 1.1.5\n========================\n\nIntroduction of InformationStore, a TripleStore with the\naddition of context support.\n\nResource ```__getitem__``` now returns object (no longer returns a\nResource for the object).\n\nFixed bug in parser that was introduced in last release\nregaurding unqualified names.\n\n\n2002-12-10 RELEASE 1.1.4\n========================\n\nInterface realigned with last stable release.\n\nSerializer now uses more of the 
abbreviated forms where\npossible.\n\nParser optimized and cleaned up.\n\nAdded third index to InMemoryStore.\n\nThe load and parse methods now take a single argument.\n\nAdded a StringInputSource for to support parsing from strings.\n\nRenamed rdflib.BTreeTripleStore.TripleStore to\nrdflib.BTreeTripleStore.BTreeTripleStore.\n\nMinor reorganization of mix-in classes.\n\n\n2002-12-03 RELEASE 1.1.3\n========================\n\nBNodes now created with a more unique identifier so BNodes\nfrom different sessions do not collide.\n\nAdded initial support for XML Literals (for now they are\nparsed into Literals).\n\nResource is no longer a special kind of URIRef.\n\nResource no longer looks at range to determine default return\ntype for ```__getitem__```. Instead there is now a get(predicate, default)\nmethod.\n\n\n2002-11-21 RELEASE 1.1.2\n========================\n\nFixed Literal's ```__eq__``` method so that Literal('foo')=='foo' etc.\n\nFixed Resource's ```__setitem__``` method so that it does not raise\na dictionary changed size while iterating exception.\n\n\n2002-11-09 RELEASE 1.1.1\n========================\n\nResource is now a special kind of URIRef\n\nResource's ```__getitem__``` now looks at rdfs:range to determine\nreturn type in default case.\n\n\n\n2002-11-05 RELEASE 1.1.0\n========================\n\n# A new development branch\n\nCleaned up interface and promoted it to SIR: Simple Interface\nfor RDF.\n\nUpdated parser to use SAX2 interfaces instead of using expat directly.\n\nAdded BTreeTripleStore, a ZODB BTree TripleStore backend. And\na default pre-mixed TripleStore that uses it.\n\nSynced with latest (Editor's draft) RDF/XML spec.\n\nAdded datatype support.\n\nCleaned up interfaces for load/parse: removed generate_path\nfrom loadsave andrenamed parse_URI to parse.\n\n\n2002-10-08 RELEASE 0.9.6\n========================\n\n\n# The end of a development branch\n\nBNode can now be created with specified value.\n\nLiteral now has a language attribute.\n\nParser now creates Literals with language attribute set\nappropriately as determined by xml:lang attributes.\n\n\nTODO: Serializer-Literals-language attribute\n\nTODO: Change ```__eq__``` so that Literal(\"foo\")==\"foo\" etc\n\nTripleStores now support \"in\" operator.\nFor example: if (s, p, o) in store: print \"Found \", s, p, o\n\nAdded APIs/object for working at level of a Resource. NOTE:\nThis functionality is still experimental\n\nConsecutive Collections now parse correctly.\n\n2002-08-06 RELEASE 0.9.5\n========================\n\n\nAdded support for rdf:parseType=\"Collection\"\n\nAdded items generator for getting items in a Collection\n\nRenamed rdflib.triple_store to rdflib.TripleStore to better follow\npython style conventions.\n\nAdded an Identifier Class\n\nMoved each node into its own Python module.\n\nAdded rdflib.util with a first and uniq function.\n\nAdded a little more to example.py\n\nRemoved generate_uri since we have BNodes now.\n\n\n2002-07-29 RELEASE 0.9.4\n========================\n\n\nAdded support for proposed rdf:nodeID to both the parser and\nserializer.\n\nReimplemented serializer which now nests things where\npossible.\n\nAdded partial support for XML Literal parseTypes.\n\n\n2002-07-16 RELEASE 0.9.3\n========================\n\n\nFixed bug where bNodes where being created for nested property\nelements when they where not supposed to be.\n\nAdded lax mode that will convert rdf/xml files that contain bare\nIDs etc. 
Also, lax mode will only report parse errors instead of\nraising exceptions.\n\nAdded missing check for valid attribute names in the case of\nproduction 5.18 of latest WD spec.\n\n\n2002-07-05 RELEASE 0.9.2\n========================\n\n\nAdded missing constants for SUBPROPERTYOF, ISDEFINEDBY.\n\nAdded test case for running all of the rdf/xml test cases.\n\nReimplemented rdf/xml parser to conform to latest WD.\n\n\n2002-06-10 RELEASE 0.9.1\n========================\n\n\nThere is now a remove and a remove_triples (no more overloaded\nremove).\n\nLayer 2 has been merged with layer 1 since there is no longer a\nneed for them to be separate layers.\n\nThe generate_uri method has moved to LoadSave since triple stores\ndo not have a notion of a uri. [Also, with proper bNode support on\nits way the need for a generate_uri might not be as high.]\n\nFixed bug in node's n3 function: URI -> URIRef.\n\nReplaced string based exceptions with class based exceptions.\n\nAdded PyUnit TestCase for parser.py\n\nAdded N-Triples parser.\n\nAdded ```__len__``` and ```__eq__``` methods to store interface.\n\n\n2002-06-04 RELEASE 0.9.0\n========================\n\nInitial release after being split from redfootlib.\n", "rdflib/plugins/serializers/nt.py": "\"\"\"\nN-Triples RDF graph serializer for RDFLib.\nSee <http://www.w3.org/TR/rdf-testcases/#ntriples> for details about the\nformat.\n\"\"\"\nimport codecs\nimport warnings\nfrom typing import IO, Optional\n\nfrom rdflib.graph import Graph\nfrom rdflib.serializer import Serializer\nfrom rdflib.term import Literal\n\n__all__ = [\"NTSerializer\"]\n\n\nclass NTSerializer(Serializer):\n \"\"\"\n Serializes RDF graphs to NTriples format.\n \"\"\"\n\n def __init__(self, store: Graph):\n Serializer.__init__(self, store)\n\n def serialize(\n self,\n stream: IO[bytes],\n base: Optional[str] = None,\n encoding: Optional[str] = \"utf-8\",\n **args,\n ):\n if base is not None:\n warnings.warn(\"NTSerializer does not support base.\")\n if encoding != \"utf-8\":\n warnings.warn(\n \"NTSerializer always uses UTF-8 encoding. 
\"\n f\"Given encoding was: {encoding}\"\n )\n\n for triple in self.store:\n stream.write(_nt_row(triple).encode())\n\n\nclass NT11Serializer(NTSerializer):\n \"\"\"\n Serializes RDF graphs to RDF 1.1 NTriples format.\n\n Exactly like nt - only utf8 encoded.\n \"\"\"\n\n def __init__(self, store: Graph):\n Serializer.__init__(self, store) # default to utf-8\n\n\ndef _nt_row(triple):\n if isinstance(triple[2], Literal):\n return \"%s %s %s .\\n\" % (\n triple[0].n3(),\n triple[1].n3(),\n _quoteLiteral(triple[2]),\n )\n else:\n return \"%s %s %s .\\n\" % (triple[0].n3(), triple[1].n3(), triple[2].n3())\n\n\ndef _quoteLiteral(l_):\n \"\"\"\n a simpler version of term.Literal.n3()\n \"\"\"\n\n encoded = _quote_encode(l_)\n\n if l_.language:\n if l_.datatype:\n raise Exception(\"Literal has datatype AND language!\")\n return \"%s@%s\" % (encoded, l_.language)\n elif l_.datatype:\n return \"%s^^<%s>\" % (encoded, l_.datatype)\n else:\n return \"%s\" % encoded\n\n\ndef _quote_encode(l_):\n return '\"%s\"' % l_.replace(\"\\\\\", \"\\\\\\\\\").replace(\"\\n\", \"\\\\n\").replace(\n '\"', '\\\\\"'\n ).replace(\"\\r\", \"\\\\r\")\n\n\ndef _nt_unicode_error_resolver(err):\n \"\"\"\n Do unicode char replaces as defined in https://www.w3.org/TR/2004/REC-rdf-testcases-20040210/#ntrip_strings\n \"\"\"\n\n def _replace_single(c):\n c = ord(c)\n fmt = \"\\\\u%04X\" if c <= 0xFFFF else \"\\\\U%08X\"\n return fmt % c\n\n string = err.object[err.start : err.end]\n return \"\".join(_replace_single(c) for c in string), err.end\n\n\ncodecs.register_error(\"_rdflib_nt_escape\", _nt_unicode_error_resolver)\n", "rdflib/tools/chunk_serializer.py": null}
|
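For readers skimming the `rdflib/plugins/serializers/nt.py` source captured in the files field above, here is a minimal usage sketch. It is not part of the dataset row: it assumes an installed rdflib that exposes that module, and the example triple is invented for illustration; only the `NTSerializer(store)` constructor and `serialize(stream)` call follow the signatures shown in the file contents above.

```py
# Illustrative sketch only; assumes rdflib is installed and provides the
# NTSerializer shown in rdflib/plugins/serializers/nt.py above.
from io import BytesIO

from rdflib import Graph, Literal, URIRef
from rdflib.plugins.serializers.nt import NTSerializer

g = Graph()
g.add(
    (
        URIRef("http://example.org/s"),
        URIRef("http://example.org/p"),
        Literal('line one\nline "two"'),
    )
)

# The serializer writes UTF-8 bytes to a binary stream (see its
# serialize() signature above), one "<s> <p> <o> ." row per triple.
buf = BytesIO()
NTSerializer(g).serialize(buf)
print(buf.getvalue().decode("utf-8"))
# _quote_encode() escapes backslashes, newlines, quotes and carriage
# returns, so the multi-line literal is emitted on a single N-Triples line.
```

In everyday use the same output is normally obtained via `Graph.serialize(format="nt")`, which recent rdflib releases dispatch to this class through the plugin registry.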
diff --git a/CHANGELOG.md b/CHANGELOG.md
index a43d51694..27c9fc414 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -21,6 +21,23 @@ CHANGE BARRIER is intended to reduce the potential for merge conflicts
and will be removed for release.
-->
+
+<!-- -->
+<!-- -->
+<!-- CHANGE BARRIER: START -->
+<!-- -->
+<!-- -->
+
+- Add chunk serializer that facilitates the encoding of a graph into multiple
+ N-Triples encoded chunks.
+ [PR #1968](https://github.com/RDFLib/rdflib/pull/1968).
+
+<!-- -->
+<!-- -->
+<!-- CHANGE BARRIER: END -->
+<!-- -->
+<!-- -->
+
<!-- -->
<!-- -->
<!-- CHANGE BARRIER: START -->
|
{"rdflib/tools/chunk_serializer.py": [{"type": "function", "name": "serialize_in_chunks", "lines": [24, 132], "signature": "def serialize_in_chunks( g: Graph, max_triples: int = 10000, max_file_size_kb: Optional[int] = None, file_name_stem: str = \"chunk\", output_dir: Optional[Path] = None, write_prefixes: bool = False, ) -> None:", "doc": "Serializes a given Graph into a series of n-triples with a given length.\n\n:param g:\n The graph to serialize.\n\n:param max_file_size_kb:\n Maximum size per NT file in kB (1,000 bytes)\n Equivalent to ~6,000 triples, depending on Literal sizes.\n\n:param max_triples:\n Maximum size per NT file in triples\n Equivalent to lines in file.\n\n If both this parameter and max_file_size_kb are set, max_file_size_kb will be used.\n\n:param file_name_stem:\n Prefix of each file name.\n e.g. \"chunk\" = chunk_000001.nt, chunk_000002.nt...\n\n:param output_dir:\n The directory you want the files to be written to.\n\n:param write_prefixes:\n The first file created is a Turtle file containing original graph prefixes.\n\n\nSee ``../test/test_tools/test_chunk_serializer.py`` for examples of this in use."}, {"type": "function", "name": "serialize_in_chunks._start_new_file", "lines": [71, 77], "signature": "def _start_new_file(file_no: int) -> Generator[Tuple[Path, BinaryIO], None, None]:", "doc": ""}, {"type": "function", "name": "serialize_in_chunks._serialize_prefixes", "lines": [79, 84], "signature": "def _serialize_prefixes(g: Graph) -> str:", "doc": ""}]}
| null |
["test/test_tools/test_chunk_serializer.py::test_chunk_by_triples", "test/test_tools/test_chunk_serializer.py::test_chunk_by_size", "test/test_tools/test_chunk_serializer.py::test_chuking[test_graph_path0-max_triples0-max_file_size_kb0-False-True-1]", "test/test_tools/test_chunk_serializer.py::test_chuking[test_graph_path1-max_triples1-max_file_size_kb1-True-False-2]", "test/test_tools/test_chunk_serializer.py::test_chuking[test_graph_path2-max_triples2-5-True-False-expected_file_count2]", "test/test_tools/test_chunk_serializer.py::test_chuking[test_graph_path3-max_triples3-1-False-True-expected_file_count3]", "test/test_tools/test_chunk_serializer.py::test_chuking[test_graph_path4-10000-1-False-True-expected_file_count4]", "test/test_tools/test_chunk_serializer.py::test_chuking[test_graph_path5-20-max_file_size_kb5-False-True-5]", "test/test_tools/test_chunk_serializer.py::test_chuking[test_graph_path6-20-max_file_size_kb6-True-True-6]", "test/test_tools/test_chunk_serializer.py::test_chuking[test_graph_path7-100-max_file_size_kb7-True-True-2]", "test/test_tools/test_chunk_serializer.py::test_chuking[test_graph_path8-100-max_file_size_kb8-False-True-1]"]
|
[]
|
0c11debb5178157baeac27b735e49a757916d2a6
|
{"first_commit_time": 1653220003.0, "pr_title": "add chunk serializer & tests", "pr_body": "# Summary of changes\r\n\r\nThis file provides a single function `serialize_in_chunks()` which can serialize a \r\nGraph into a number of NT files with a maximum number of triples or maximum file size.\r\n\r\nThere is an option to preserve any prefixes declared for the original graph in the first\r\nfile, which will be a Turtle file.\r\n\r\n# Checklist\r\n\r\n- [x] Checked that there aren't other open pull requests for\r\n the same change.\r\n- [x] Added tests for any changes that have a runtime impact.\r\n- [x] Checked that all tests and type checking passes.\r\n- For changes that have a potential impact on users of this project:\r\n - [x] Updated relevant documentation to avoid inaccuracies.\r\n - [ ] Considered adding additional documentation.\r\n - [ ] Considered adding an example in `./examples` for new features.\r\n - [x] Considered updating our changelog (`CHANGELOG.md`).\r\n- [x] Considered granting [push permissions to the PR branch](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork),\r\n so maintainers can fix minor issues and keep your PR up to date.\r\n\r\n", "pr_timeline": [{"time": 1653252036.0, "comment": "I will have a look at the windows test failures on Monday (CET)."}, {"time": 1653435833.0, "comment": "@gjhiggins @ashleysommer @aucampia what do you think of the approach here? \r\n\r\nThe need to chunk serialize files is a small one - a project I'm working on needs it - and I thought it interesting enough to make an RDFLib tool for, rather than just keeping the code within the project. \r\n\r\nI've tried to be efficient in terms of memory usage - no duplicate graph objects etc. - and to faithfully serialize the graph but there may be smarter approaches."}, {"time": 1653514787.0, "comment": "pre-commit.ci autofix"}, {"time": 1653519985.0, "comment": "> I've tried to be efficient in terms of memory usage - no duplicate graph objects etc. - and to faithfully serialize the graph but there may be smarter approaches.\r\n\r\nI think it is a fairly reasonable approach. It would have been nice if we had some way to do stream serialization to some text sink, but that is quite a big change and should probably be done with an abundance of caution and this seems like a reasonable approach in the interim.\r\n\r\nI will maybe add some more tests on your branch if that is okay with you. 
Also happy to address comments I made, just let me know on the comment if you agree or disagree."}, {"time": 1653520355.0, "comment": "> @gjhiggins @ashleysommer @aucampia what do you think of the approach here?\r\n\r\nMinimally, it should indicate clearly that it is restricted to `Graph` serialization because, as the test below shows, context information is not preserved:\r\n\r\n```py\r\[email protected](reason=\"Context information not preserved\")\r\ndef test_chunking_of_conjunctivegraph():\r\n nquads = \"\"\"\\\r\n<http://example.org/alice> <http://purl.org/dc/terms/publisher> \"Alice\" .\r\n<http://example.org/bob> <http://purl.org/dc/terms/publisher> \"Bob\" .\r\n_:harry <http://purl.org/dc/terms/publisher> \"Harry\" .\r\n_:harry <http://xmlns.com/foaf/0.1/name> \"Harry\" _:harry .\r\n_:harry <http://xmlns.com/foaf/0.1/mbox> <mailto:[email protected]> _:harry .\r\n_:alice <http://xmlns.com/foaf/0.1/name> \"Alice\" <http://example.org/alice> .\r\n_:alice <http://xmlns.com/foaf/0.1/mbox> <mailto:[email protected]> <http://example.org/alice> .\r\n_:bob <http://xmlns.com/foaf/0.1/name> \"Bob\" <http://example.org/bob> .\r\n_:bob <http://xmlns.com/foaf/0.1/mbox> <mailto:[email protected]> <http://example.org/bob> .\r\n_:bob <http://xmlns.com/foaf/0.1/knows> _:alice <http://example.org/bob> .\"\"\"\r\n g = ConjunctiveGraph()\r\n g.parse(data=nquads, format=\"nquads\")\r\n\r\n # make a temp dir to work with\r\n temp_dir_path = Path(tempfile.TemporaryDirectory().name)\r\n Path(temp_dir_path).mkdir()\r\n\r\n # serialize into chunks file with 100 triples each\r\n serialize_in_chunks(\r\n g, max_triples=100, file_name_stem=\"chunk_100\", output_dir=temp_dir_path\r\n )\r\n\r\n # check, when a graph is made from the chunk files, it's isomorphic with original\r\n g2 = ConjunctiveGraph()\r\n for f in Path(temp_dir_path).glob(\"*.nt\"):\r\n g2.parse(f, format=\"nt\")\r\n\r\n assert len(list(g.contexts())) == len(list(g2.contexts()))\r\n```\r\n\r\n> The need to chunk serialize files is a small one - a project I'm working on needs it - and I thought it interesting enough to make an RDFLib tool for, rather than just keeping the code within the project.\r\n\r\nRDFLib has traditionally been ambivalent about what's perceived as core vs non-core. Additional functionality appears to inevitably accrete, up to a point where it gets migrated out *en masse* into [a separate package](https://rdfextras.readthedocs.io/), the contents of which gradually become obsolete as they either fall out of use or are subsequently integrated into core library functionality.\r\n\r\nAdditional non-core functionality does have a regrettable tendency to languish in an untended and unkempt state. For instance, there's [`tools/graphisomorphism.py`](https://github.com/RDFLib/rdflib/blob/6ed2ef48ed38679bcdafe7cae250a2ef4b315e7b/rdflib/tools/graphisomorphism.py) which is\r\n1. currently broken (and [has been since 2018](https://github.com/RDFLib/rdflib/issues/812))\r\n2. long-obsolete, refererring as it does to `RDFa` as a supported format and based on [Sean B. Palmers's 2004 `rdfdiff.py` implementation](https://web.archive.org/web/20060522193057/http://www.w3.org/2001/sw/DataAccess/proto-tests/tools/)\r\n3. Is subject to the same triples-only limitation.\r\n4. 
Is obsoleted in functionality by both [`rdflib.compare.isomorphic`](https://github.com/RDFLib/rdflib/blob/a3a46114ccbeec873b77a8f746bfc0282a8545f0/rdflib/compare.py#L544) and the weaker [`Graph.isomorphic()`](https://github.com/RDFLib/rdflib/blob/a3a46114ccbeec873b77a8f746bfc0282a8545f0/rdflib/graph.py#L1382)\r\n\r\nIs it even worth bothering with a relatively trivial fix/update ...\r\n```diff\r\ndiff --git a/rdflib/tools/graphisomorphism.py b/rdflib/tools/graphisomorphism.py\r\nindex 004b567b..75462eb9 100644\r\n--- a/rdflib/tools/graphisomorphism.py\r\n+++ b/rdflib/tools/graphisomorphism.py\r\n@@ -27,6 +27,10 @@ class IsomorphicTestableGraph(Graph):\r\n \"\"\"\r\n return hash(tuple(sorted(self.hashtriples())))\r\n \r\n+ def __hash__(self): \r\n+ # return hash(tuple(sorted(self.hashtriples())))\r\n+ return self.internal_hash()\r\n+\r\n def hashtriples(self):\r\n for triple in self:\r\n g = ((isinstance(t, BNode) and self.vhash(t)) or t for t in triple)\r\n@@ -49,19 +53,19 @@ class IsomorphicTestableGraph(Graph):\r\n else:\r\n yield self.vhash(triple[p], done=True)\r\n \r\n- def __eq__(self, G):\r\n+ def __eq__(self, g):\r\n \"\"\"Graph isomorphism testing.\"\"\"\r\n- if not isinstance(G, IsomorphicTestableGraph):\r\n+ if not isinstance(g, IsomorphicTestableGraph):\r\n return False\r\n- elif len(self) != len(G):\r\n+ elif len(self) != len(g):\r\n return False\r\n- elif list.__eq__(list(self), list(G)):\r\n+ elif list.__eq__(list(self), list(g)):\r\n return True # @@\r\n- return self.internal_hash() == G.internal_hash()\r\n+ return self.internal_hash() == g.internal_hash()\r\n \r\n- def __ne__(self, G):\r\n+ def __ne__(self, g):\r\n \"\"\"Negative graph isomorphism testing.\"\"\"\r\n- return not self.__eq__(G)\r\n+ return not self.__eq__(g)\r\n \r\n \r\n def main():\r\n@@ -82,10 +86,10 @@ def main():\r\n default=\"xml\",\r\n dest=\"inputFormat\",\r\n metavar=\"RDF_FORMAT\",\r\n- choices=[\"xml\", \"trix\", \"n3\", \"nt\", \"rdfa\"],\r\n+ choices=[\"xml\", \"n3\", \"nt\", \"turtle\", \"trix\", \"trig\", \"nquads\", \"json-ld\", \"hext\"],\r\n help=\"The format of the RDF document(s) to compare\"\r\n- + \"One of 'xml','n3','trix', 'nt', \"\r\n- + \"or 'rdfa'. The default is %default\",\r\n+ + \"One of 'xml', 'turtle', 'n3', 'nt', 'trix', 'trig', 'nquads', 'json-ld'\"\r\n+ + \"or 'hext'. The default is %default\",\r\n )\r\n \r\n (options, args) = op.parse_args()\r\n```\r\n\r\nwhen its appearance in `tools` is unlikely to persist for much longer?\r\n\r\nThere are a few non-core contributions in the closed PRs which I'm recruiting for preservaition in the cookbook. 
I'm guessing that a command-line version of graphisomorphism will ultimately end up there."}, {"time": 1653520557.0, "comment": "NOTE: I still have not had time to look at windows issue, will try tomorrow."}, {"time": 1653563971.0, "comment": "@nicholascar made some changes to your branch to get the tests to pass on windows:\r\n\r\n- Use bytes written as size instead of using `os.path.size()`.\r\n The second option here is very dependent on OS behaviour and what is\r\n on disk.\r\n\r\n- Add all open files to an exit stack so they are closed by the time the\r\n function returns.\r\n\r\n- Set encoding explicitly to utf-8 on opened files."}, {"time": 1660092496.0, "comment": "\n[](https://coveralls.io/builds/51545335)\n\nCoverage increased (+0.01%) to 90.458% when pulling **8647eb06542a188d12b676e70d3d26190b459bed on chunk_serializer** into **131d9e66e8515aa81d776969d42f58c72bc68f86 on master**.\n"}, {"time": 1653565685.0, "comment": "Another commit:\r\n\r\n- Very that writing triple won't exceed max file size before writing\r\n instead of after writing.\r\n\r\n This also necesitates using binary mode for file IO so that an\r\n accurate byte count can be obtained."}, {"time": 1659903140.0, "comment": "@nicholascar will finish this up in W32"}, {"time": 1660073099.0, "comment": "pre-commit.ci autofix"}, {"time": 1660111574.0, "comment": "> I think this is good to merge now.\r\n\r\nI will merge this later this week if there is no further feedback."}], "issues": {}}
|
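The row above documents `serialize_in_chunks` (see the signature and docstring recorded under `new_components`, and the discussion in the PR body). A minimal usage sketch follows, assuming the `rdflib.tools.chunk_serializer` module added by PR #1968 is importable; the input file name and output directory are hypothetical.

```python
from pathlib import Path

from rdflib import Graph
from rdflib.tools.chunk_serializer import serialize_in_chunks

g = Graph()
g.parse("large_dataset.ttl", format="turtle")  # hypothetical input file

# Write the graph as N-Triples files of at most 1,000 triples each,
# named chunk_000001.nt, chunk_000002.nt, ... under ./chunks, with the
# original prefixes preserved in a leading Turtle file.
serialize_in_chunks(
    g,
    max_triples=1000,
    file_name_stem="chunk",
    output_dir=Path("chunks"),
    write_prefixes=True,
)
```

Per the docstring, if both `max_triples` and `max_file_size_kb` are given, the file-size limit takes precedence.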
RDFLib/rdflib
| 968
|
https://github.com/RDFLib/rdflib/pull/968
|
RDFLib__rdflib-968
|
[]
|
0e5efef78702575e4abff4d8076eac4e2bd9d5f0
|
diff --git a/rdflib/graph.py b/rdflib/graph.py
index 4a27e6de7..f68300cb1 100644
--- a/rdflib/graph.py
+++ b/rdflib/graph.py
@@ -1269,6 +1269,55 @@ def do_de_skolemize2(t):
return retval
+ def cbd(self, resource):
+ """Retrieves the Concise Bounded Description of a Resource from a Graph
+
+ Concise Bounded Description (CBD) is defined in [1] as:
+
+ Given a particular node (the starting node) in a particular RDF graph (the source graph), a subgraph of that
+ particular graph, taken to comprise a concise bounded description of the resource denoted by the starting node,
+ can be identified as follows:
+
+ 1. Include in the subgraph all statements in the source graph where the subject of the statement is the
+ starting node;
+ 2. Recursively, for all statements identified in the subgraph thus far having a blank node object, include
+ in the subgraph all statements in the source graph where the subject of the statement is the blank node
+ in question and which are not already included in the subgraph.
+ 3. Recursively, for all statements included in the subgraph thus far, for all reifications of each statement
+ in the source graph, include the concise bounded description beginning from the rdf:Statement node of
+ each reification.
+
+ This results in a subgraph where the object nodes are either URI references, literals, or blank nodes not
+ serving as the subject of any statement in the graph.
+
+ [1] https://www.w3.org/Submission/CBD/
+
+ :param resource: a URIRef object, of the Resource queried for
+ :return: a Graph, subgraph of self
+ """
+ subgraph = Graph()
+
+ def add_to_cbd(uri):
+ for s, p, o in self.triples((uri, None, None)):
+ subgraph.add((s, p, o))
+ # recurse 'down' through all Blank Nodes
+ if type(o) == BNode and not (o, None, None) in subgraph:
+ add_to_cbd(o)
+
+ # for Rule 3 (reification)
+ # for any rdf:Statement in the graph with the given URI as the object of rdf:subject,
+ # get all triples with that rdf:Statement instance as subject
+
+ # find any subject s where the predicate is rdf:subject and this uri is the object
+ # (these subjects are of type rdf:Statement, given the domain of rdf:subject)
+ for s, p, o in self.triples((None, RDF.subject, uri)):
+ # find all triples with s as the subject and add these to the subgraph
+ for s2, p2, o2 in self.triples((s, None, None)):
+ subgraph.add((s2, p2, o2))
+ add_to_cbd(resource)
+
+ return subgraph
+
class ConjunctiveGraph(Graph):
|
diff --git a/test/test_graph_cbd.py b/test/test_graph_cbd.py
new file mode 100644
index 000000000..aedc9dd07
--- /dev/null
+++ b/test/test_graph_cbd.py
@@ -0,0 +1,124 @@
+import unittest
+from rdflib import Graph, Namespace
+
+
+class CbdTestCase(unittest.TestCase):
+ """Tests the Graph class' cbd() function"""
+
+ def setUp(self):
+ self.g = Graph()
+ # adding example data for testing
+ self.g.parse(
+ data="""
+ PREFIX ex: <http://ex/>
+ PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
+
+ ex:R1
+ a rdf:Resource ;
+ ex:hasChild ex:R2 , ex:R3 .
+
+ ex:R2
+ ex:propOne ex:P1 ;
+ ex:propTwo ex:P2 .
+
+ ex:R3
+ ex:propOne ex:P3 ;
+ ex:propTwo ex:P4 ;
+ ex:propThree [
+ a rdf:Resource ;
+ ex:propFour "Some Literal" ;
+ ex:propFive ex:P5 ;
+ ex:propSix [
+ ex:propSeven ex:P7 ;
+ ] ;
+ ] .
+ """,
+ format="turtle",
+ )
+
+ self.EX = Namespace("http://ex/")
+ self.g.bind("ex", self.EX)
+
+ def testCbd(self):
+ self.assertEqual(
+ len(self.g.cbd(self.EX.R1)), 3, "cbd() for R1 should return 3 triples"
+ )
+
+ self.assertEqual(
+ len(self.g.cbd(self.EX.R2)), 2, "cbd() for R2 should return 2 triples"
+ )
+
+ self.assertEqual(
+ len(self.g.cbd(self.EX.R3)), 8, "cbd() for R3 should return 8 triples"
+ )
+
+ self.assertEqual(
+ len(self.g.cbd(self.EX.R4)), 0, "cbd() for R4 should return 0 triples"
+ )
+
+ def testCbdReified(self):
+ # add some reified triples to the testing graph
+ self.g.parse(
+ data="""
+ PREFIX ex: <http://ex/>
+ PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
+
+ ex:R5
+ ex:propOne ex:P1 ;
+ ex:propTwo ex:P2 ;
+ ex:propRei ex:Pre1 .
+
+ ex:S
+ a rdf:Statement ;
+ rdf:subject ex:R5 ;
+ rdf:predicate ex:propRei ;
+ rdf:object ex:Pre1 ;
+ ex:otherReiProp ex:Pre2 .
+ """,
+ format="turtle",
+ )
+
+ # this cbd() call should get the 3 basic triples with ex:R5 as subject as well as 5 more from the reified
+ # statement
+ self.assertEqual(
+ len(self.g.cbd(self.EX.R5)), (3 + 5), "cbd() for R5 should return 8 triples"
+ )
+
+ # add crazy reified triples to the testing graph
+ self.g.parse(
+ data="""
+ PREFIX ex: <http://ex/>
+ PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
+
+ ex:R6
+ ex:propOne ex:P1 ;
+ ex:propTwo ex:P2 ;
+ ex:propRei ex:Pre1 .
+
+ ex:S1
+ a rdf:Statement ;
+ rdf:subject ex:R6 ;
+ rdf:predicate ex:propRei ;
+ rdf:object ex:Pre1 ;
+ ex:otherReiProp ex:Pre3 .
+
+ ex:S2
+ rdf:subject ex:R6 ;
+ rdf:predicate ex:propRei2 ;
+ rdf:object ex:Pre2 ;
+ ex:otherReiProp ex:Pre4 ;
+ ex:otherReiProp ex:Pre5 .
+ """,
+ format="turtle",
+ )
+
+ self.assertEqual(
+ len(self.g.cbd(self.EX.R6)), (3 + 5 + 5), "cbd() for R6 should return 13 triples"
+ )
+
+ def tearDown(self):
+ self.g.close()
+
+
+if __name__ == "__main__":
+ unittest.main()
| 2020-03-13T11:56:00
|
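The patch and unit tests above add and exercise `Graph.cbd()`. A minimal sketch of calling it outside a test case, assuming an rdflib build that includes this patch; the example data is hypothetical.

```python
from rdflib import Graph, URIRef

g = Graph()
g.parse(
    data="""
    @prefix ex: <http://ex/> .
    ex:R1 ex:hasChild ex:R2 ;
          ex:note [ ex:value "nested blank node" ] .
    """,
    format="turtle",
)

# Concise Bounded Description of ex:R1: its own triples plus the triples
# of any blank-node objects reachable from them (and, per Rule 3, any
# reifications of those statements).
cbd_graph = g.cbd(URIRef("http://ex/R1"))
print(len(cbd_graph))  # expected: 3 triples for this small graph
```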
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"rdflib/graph.py": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom rdflib.term import Literal # required for doctests\nassert Literal # avoid warning\nfrom rdflib.namespace import Namespace # required for doctests\nassert Namespace # avoid warning\n\n\n__doc__ = \"\"\"\\\n\nRDFLib defines the following kinds of Graphs:\n\n* :class:`~rdflib.graph.Graph`\n* :class:`~rdflib.graph.QuotedGraph`\n* :class:`~rdflib.graph.ConjunctiveGraph`\n* :class:`~rdflib.graph.Dataset`\n\nGraph\n-----\n\nAn RDF graph is a set of RDF triples. Graphs support the python ``in``\noperator, as well as iteration and some operations like union,\ndifference and intersection.\n\nsee :class:`~rdflib.graph.Graph`\n\nConjunctive Graph\n-----------------\n\nA Conjunctive Graph is the most relevant collection of graphs that are\nconsidered to be the boundary for closed world assumptions. This\nboundary is equivalent to that of the store instance (which is itself\nuniquely identified and distinct from other instances of\n:class:`Store` that signify other Conjunctive Graphs). It is\nequivalent to all the named graphs within it and associated with a\n``_default_`` graph which is automatically assigned a :class:`BNode`\nfor an identifier - if one isn't given.\n\nsee :class:`~rdflib.graph.ConjunctiveGraph`\n\nQuoted graph\n------------\n\nThe notion of an RDF graph [14] is extended to include the concept of\na formula node. A formula node may occur wherever any other kind of\nnode can appear. Associated with a formula node is an RDF graph that\nis completely disjoint from all other graphs; i.e. has no nodes in\ncommon with any other graph. (It may contain the same labels as other\nRDF graphs; because this is, by definition, a separate graph,\nconsiderations of tidiness do not apply between the graph at a formula\nnode and any other graph.)\n\nThis is intended to map the idea of \"{ N3-expression }\" that is used\nby N3 into an RDF graph upon which RDF semantics is defined.\n\nsee :class:`~rdflib.graph.QuotedGraph`\n\nDataset\n-------\n\nThe RDF 1.1 Dataset, a small extension to the Conjunctive Graph. The\nprimary term is \"graphs in the datasets\" and not \"contexts with quads\"\nso there is a separate method to set/retrieve a graph in a dataset and\nto operate with dataset graphs. As a consequence of this approach,\ndataset graphs cannot be identified with blank nodes, a name is always\nrequired (RDFLib will automatically add a name if one is not provided\nat creation time). 
This implementation includes a convenience method\nto directly add a single quad to a dataset graph.\n\nsee :class:`~rdflib.graph.Dataset`\n\nWorking with graphs\n===================\n\nInstantiating Graphs with default store (IOMemory) and default identifier\n(a BNode):\n\n >>> g = Graph()\n >>> g.store.__class__\n <class 'rdflib.plugins.memory.IOMemory'>\n >>> g.identifier.__class__\n <class 'rdflib.term.BNode'>\n\nInstantiating Graphs with a IOMemory store and an identifier -\n<http://rdflib.net>:\n\n >>> g = Graph('IOMemory', URIRef(\"http://rdflib.net\"))\n >>> g.identifier\n rdflib.term.URIRef(u'http://rdflib.net')\n >>> str(g) # doctest: +NORMALIZE_WHITESPACE\n \"<http://rdflib.net> a rdfg:Graph;rdflib:storage\n [a rdflib:Store;rdfs:label 'IOMemory'].\"\n\nCreating a ConjunctiveGraph - The top level container for all named Graphs\nin a 'database':\n\n >>> g = ConjunctiveGraph()\n >>> str(g.default_context)\n \"[a rdfg:Graph;rdflib:storage [a rdflib:Store;rdfs:label 'IOMemory']].\"\n\nAdding / removing reified triples to Graph and iterating over it directly or\nvia triple pattern:\n\n >>> g = Graph()\n >>> statementId = BNode()\n >>> print(len(g))\n 0\n >>> g.add((statementId, RDF.type, RDF.Statement))\n >>> g.add((statementId, RDF.subject,\n ... URIRef(u'http://rdflib.net/store/ConjunctiveGraph')))\n >>> g.add((statementId, RDF.predicate, RDFS.label))\n >>> g.add((statementId, RDF.object, Literal(\"Conjunctive Graph\")))\n >>> print(len(g))\n 4\n >>> for s, p, o in g:\n ... print(type(s))\n ...\n <class 'rdflib.term.BNode'>\n <class 'rdflib.term.BNode'>\n <class 'rdflib.term.BNode'>\n <class 'rdflib.term.BNode'>\n\n >>> for s, p, o in g.triples((None, RDF.object, None)):\n ... print(o)\n ...\n Conjunctive Graph\n >>> g.remove((statementId, RDF.type, RDF.Statement))\n >>> print(len(g))\n 3\n\n``None`` terms in calls to :meth:`~rdflib.graph.Graph.triples` can be\nthought of as \"open variables\".\n\nGraph support set-theoretic operators, you can add/subtract graphs, as\nwell as intersection (with multiplication operator g1*g2) and xor (g1\n^ g2).\n\nNote that BNode IDs are kept when doing set-theoretic operations, this\nmay or may not be what you want. Two named graphs within the same\napplication probably want share BNode IDs, two graphs with data from\ndifferent sources probably not. If your BNode IDs are all generated\nby RDFLib they are UUIDs and unique.\n\n >>> g1 = Graph()\n >>> g2 = Graph()\n >>> u = URIRef(u'http://example.com/foo')\n >>> g1.add([u, RDFS.label, Literal('foo')])\n >>> g1.add([u, RDFS.label, Literal('bar')])\n >>> g2.add([u, RDFS.label, Literal('foo')])\n >>> g2.add([u, RDFS.label, Literal('bing')])\n >>> len(g1 + g2) # adds bing as label\n 3\n >>> len(g1 - g2) # removes foo\n 1\n >>> len(g1 * g2) # only foo\n 1\n >>> g1 += g2 # now g1 contains everything\n\n\nGraph Aggregation - ConjunctiveGraphs and ReadOnlyGraphAggregate within\nthe same store:\n\n >>> store = plugin.get('IOMemory', Store)()\n >>> g1 = Graph(store)\n >>> g2 = Graph(store)\n >>> g3 = Graph(store)\n >>> stmt1 = BNode()\n >>> stmt2 = BNode()\n >>> stmt3 = BNode()\n >>> g1.add((stmt1, RDF.type, RDF.Statement))\n >>> g1.add((stmt1, RDF.subject,\n ... URIRef(u'http://rdflib.net/store/ConjunctiveGraph')))\n >>> g1.add((stmt1, RDF.predicate, RDFS.label))\n >>> g1.add((stmt1, RDF.object, Literal(\"Conjunctive Graph\")))\n >>> g2.add((stmt2, RDF.type, RDF.Statement))\n >>> g2.add((stmt2, RDF.subject,\n ... 
URIRef(u'http://rdflib.net/store/ConjunctiveGraph')))\n >>> g2.add((stmt2, RDF.predicate, RDF.type))\n >>> g2.add((stmt2, RDF.object, RDFS.Class))\n >>> g3.add((stmt3, RDF.type, RDF.Statement))\n >>> g3.add((stmt3, RDF.subject,\n ... URIRef(u'http://rdflib.net/store/ConjunctiveGraph')))\n >>> g3.add((stmt3, RDF.predicate, RDFS.comment))\n >>> g3.add((stmt3, RDF.object, Literal(\n ... \"The top-level aggregate graph - The sum \" +\n ... \"of all named graphs within a Store\")))\n >>> len(list(ConjunctiveGraph(store).subjects(RDF.type, RDF.Statement)))\n 3\n >>> len(list(ReadOnlyGraphAggregate([g1,g2]).subjects(\n ... RDF.type, RDF.Statement)))\n 2\n\nConjunctiveGraphs have a :meth:`~rdflib.graph.ConjunctiveGraph.quads` method\nwhich returns quads instead of triples, where the fourth item is the Graph\n(or subclass thereof) instance in which the triple was asserted:\n\n >>> uniqueGraphNames = set(\n ... [graph.identifier for s, p, o, graph in ConjunctiveGraph(store\n ... ).quads((None, RDF.predicate, None))])\n >>> len(uniqueGraphNames)\n 3\n >>> unionGraph = ReadOnlyGraphAggregate([g1, g2])\n >>> uniqueGraphNames = set(\n ... [graph.identifier for s, p, o, graph in unionGraph.quads(\n ... (None, RDF.predicate, None))])\n >>> len(uniqueGraphNames)\n 2\n\nParsing N3 from a string\n\n >>> g2 = Graph()\n >>> src = '''\n ... @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .\n ... @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .\n ... [ a rdf:Statement ;\n ... rdf:subject <http://rdflib.net/store#ConjunctiveGraph>;\n ... rdf:predicate rdfs:label;\n ... rdf:object \"Conjunctive Graph\" ] .\n ... '''\n >>> g2 = g2.parse(data=src, format='n3')\n >>> print(len(g2))\n 4\n\nUsing Namespace class:\n\n >>> RDFLib = Namespace('http://rdflib.net/')\n >>> RDFLib.ConjunctiveGraph\n rdflib.term.URIRef(u'http://rdflib.net/ConjunctiveGraph')\n >>> RDFLib['Graph']\n rdflib.term.URIRef(u'http://rdflib.net/Graph')\n\n\"\"\"\n\nimport logging\nlogger = logging.getLogger(__name__)\n\n# import md5\nimport random\nimport warnings\n\nfrom hashlib import md5\n\n\nfrom rdflib.namespace import RDF, RDFS, SKOS\n\nfrom rdflib import plugin, exceptions, query\n\nfrom rdflib.term import Node, URIRef, Genid\nfrom rdflib.term import BNode\n\nimport rdflib.term\n\nfrom rdflib.paths import Path\n\nfrom rdflib.store import Store\nfrom rdflib.serializer import Serializer\nfrom rdflib.parser import Parser\nfrom rdflib.parser import create_input_source\nfrom rdflib.namespace import NamespaceManager\nfrom rdflib.resource import Resource\nfrom rdflib.collection import Collection\n\nimport os\nimport shutil\nimport tempfile\n\nfrom six import BytesIO\nfrom six import b\nfrom six.moves.urllib.parse import urlparse\n\n__all__ = [\n 'Graph', 'ConjunctiveGraph', 'QuotedGraph', 'Seq',\n 'ModificationException', 'Dataset',\n 'UnSupportedAggregateOperation', 'ReadOnlyGraphAggregate']\n\n\nclass Graph(Node):\n \"\"\"An RDF Graph\n\n The constructor accepts one argument, the 'store'\n that will be used to store the graph data (see the 'store'\n package for stores currently shipped with rdflib).\n\n Stores can be context-aware or unaware. Unaware stores take up\n (some) less space but cannot support features that require\n context, such as true merging/demerging of sub-graphs and\n provenance.\n\n The Graph constructor can take an identifier which identifies the Graph\n by name. 
If none is given, the graph is assigned a BNode for its\n identifier.\n For more on named graphs, see: http://www.w3.org/2004/03/trix/\n\n \"\"\"\n\n def __init__(self, store='default', identifier=None,\n namespace_manager=None):\n super(Graph, self).__init__()\n self.__identifier = identifier or BNode()\n\n if not isinstance(self.__identifier, Node):\n self.__identifier = URIRef(self.__identifier)\n\n if not isinstance(store, Store):\n # TODO: error handling\n self.__store = store = plugin.get(store, Store)()\n else:\n self.__store = store\n self.__namespace_manager = namespace_manager\n self.context_aware = False\n self.formula_aware = False\n self.default_union = False\n\n def __get_store(self):\n return self.__store\n store = property(__get_store) # read-only attr\n\n def __get_identifier(self):\n return self.__identifier\n identifier = property(__get_identifier) # read-only attr\n\n def _get_namespace_manager(self):\n if self.__namespace_manager is None:\n self.__namespace_manager = NamespaceManager(self)\n return self.__namespace_manager\n\n def _set_namespace_manager(self, nm):\n self.__namespace_manager = nm\n\n namespace_manager = property(_get_namespace_manager,\n _set_namespace_manager,\n doc=\"this graph's namespace-manager\")\n\n def __repr__(self):\n return \"<Graph identifier=%s (%s)>\" % (self.identifier, type(self))\n\n def __str__(self):\n if isinstance(self.identifier, URIRef):\n return (\"%s a rdfg:Graph;rdflib:storage \" +\n \"[a rdflib:Store;rdfs:label '%s'].\") % (\n self.identifier.n3(),\n self.store.__class__.__name__)\n else:\n return (\"[a rdfg:Graph;rdflib:storage \" +\n \"[a rdflib:Store;rdfs:label '%s']].\") % (\n self.store.__class__.__name__)\n\n def toPython(self):\n return self\n\n def destroy(self, configuration):\n \"\"\"Destroy the store identified by `configuration` if supported\"\"\"\n self.__store.destroy(configuration)\n\n # Transactional interfaces (optional)\n def commit(self):\n \"\"\"Commits active transactions\"\"\"\n self.__store.commit()\n\n def rollback(self):\n \"\"\"Rollback active transactions\"\"\"\n self.__store.rollback()\n\n def open(self, configuration, create=False):\n \"\"\"Open the graph store\n\n Might be necessary for stores that require opening a connection to a\n database or acquiring some resource.\n \"\"\"\n return self.__store.open(configuration, create)\n\n def close(self, commit_pending_transaction=False):\n \"\"\"Close the graph store\n\n Might be necessary for stores that require closing a connection to a\n database or releasing some resource.\n \"\"\"\n self.__store.close(\n commit_pending_transaction=commit_pending_transaction)\n\n def add(self, triple):\n \"\"\"Add a triple with self as context\"\"\"\n s, p, o = triple\n assert isinstance(s, Node), \\\n \"Subject %s must be an rdflib term\" % (s,)\n assert isinstance(p, Node), \\\n \"Predicate %s must be an rdflib term\" % (p,)\n assert isinstance(o, Node), \\\n \"Object %s must be an rdflib term\" % (o,)\n self.__store.add((s, p, o), self, quoted=False)\n\n def addN(self, quads):\n \"\"\"Add a sequence of triple with context\"\"\"\n\n self.__store.addN((s, p, o, c) for s, p, o, c in quads\n if isinstance(c, Graph) and\n c.identifier is self.identifier and\n _assertnode(s, p, o)\n )\n\n def remove(self, triple):\n \"\"\"Remove a triple from the graph\n\n If the triple does not provide a context attribute, removes the triple\n from all contexts.\n \"\"\"\n self.__store.remove(triple, context=self)\n\n def triples(self, triple):\n \"\"\"Generator over the triple 
store\n\n Returns triples that match the given triple pattern. If triple pattern\n does not provide a context, all contexts will be searched.\n \"\"\"\n s, p, o = triple\n if isinstance(p, Path):\n for _s, _o in p.eval(self, s, o):\n yield (_s, p, _o)\n else:\n for (s, p, o), cg in self.__store.triples((s, p, o), context=self):\n yield (s, p, o)\n\n def __getitem__(self, item):\n \"\"\"\n A graph can be \"sliced\" as a shortcut for the triples method\n The python slice syntax is (ab)used for specifying triples.\n A generator over matches is returned,\n the returned tuples include only the parts not given\n\n >>> import rdflib\n >>> g = rdflib.Graph()\n >>> g.add((rdflib.URIRef('urn:bob'), rdflib.RDFS.label, rdflib.Literal('Bob')))\n\n >>> list(g[rdflib.URIRef('urn:bob')]) # all triples about bob\n [(rdflib.term.URIRef(u'http://www.w3.org/2000/01/rdf-schema#label'), rdflib.term.Literal(u'Bob'))]\n\n >>> list(g[:rdflib.RDFS.label]) # all label triples\n [(rdflib.term.URIRef(u'urn:bob'), rdflib.term.Literal(u'Bob'))]\n\n >>> list(g[::rdflib.Literal('Bob')]) # all triples with bob as object\n [(rdflib.term.URIRef(u'urn:bob'), rdflib.term.URIRef(u'http://www.w3.org/2000/01/rdf-schema#label'))]\n\n Combined with SPARQL paths, more complex queries can be\n written concisely:\n\n Name of all Bobs friends:\n\n g[bob : FOAF.knows/FOAF.name ]\n\n Some label for Bob:\n\n g[bob : DC.title|FOAF.name|RDFS.label]\n\n All friends and friends of friends of Bob\n\n g[bob : FOAF.knows * '+']\n\n etc.\n\n .. versionadded:: 4.0\n\n \"\"\"\n\n if isinstance(item, slice):\n\n s, p, o = item.start, item.stop, item.step\n if s is None and p is None and o is None:\n return self.triples((s, p, o))\n elif s is None and p is None:\n return self.subject_predicates(o)\n elif s is None and o is None:\n return self.subject_objects(p)\n elif p is None and o is None:\n return self.predicate_objects(s)\n elif s is None:\n return self.subjects(p, o)\n elif p is None:\n return self.predicates(s, o)\n elif o is None:\n return self.objects(s, p)\n else:\n # all given\n return (s, p, o) in self\n\n elif isinstance(item, (Path, Node)):\n\n return self.predicate_objects(item)\n\n else:\n raise TypeError(\"You can only index a graph by a single rdflib term or path, or a slice of rdflib terms.\")\n\n def __len__(self):\n \"\"\"Returns the number of triples in the graph\n\n If context is specified then the number of triples in the context is\n returned instead.\n \"\"\"\n return self.__store.__len__(context=self)\n\n def __iter__(self):\n \"\"\"Iterates over all triples in the store\"\"\"\n return self.triples((None, None, None))\n\n def __contains__(self, triple):\n \"\"\"Support for 'triple in graph' syntax\"\"\"\n for triple in self.triples(triple):\n return True\n return False\n\n def __hash__(self):\n return hash(self.identifier)\n\n def __cmp__(self, other):\n if other is None:\n return -1\n elif isinstance(other, Graph):\n return cmp(self.identifier, other.identifier)\n else:\n # Note if None is considered equivalent to owl:Nothing\n # Then perhaps a graph with length 0 should be considered\n # equivalent to None (if compared to it)?\n return 1\n\n def __eq__(self, other):\n return isinstance(other, Graph) \\\n and self.identifier == other.identifier\n\n def __lt__(self, other):\n return (other is None) \\\n or (isinstance(other, Graph) and\n self.identifier < other.identifier)\n\n def __le__(self, other):\n return self < other or self == other\n\n def __gt__(self, other):\n return (isinstance(other, Graph) and\n self.identifier 
> other.identifier) \\\n or (other is not None)\n\n def __ge__(self, other):\n return self > other or self == other\n\n def __iadd__(self, other):\n \"\"\"Add all triples in Graph other to Graph.\n BNode IDs are not changed.\"\"\"\n self.addN((s, p, o, self) for s, p, o in other)\n return self\n\n def __isub__(self, other):\n \"\"\"Subtract all triples in Graph other from Graph.\n BNode IDs are not changed.\"\"\"\n for triple in other:\n self.remove(triple)\n return self\n\n def __add__(self, other):\n \"\"\"Set-theoretic union\n BNode IDs are not changed.\"\"\"\n retval = Graph()\n for (prefix, uri) in set(\n list(self.namespaces()) + list(other.namespaces())):\n retval.bind(prefix, uri)\n for x in self:\n retval.add(x)\n for y in other:\n retval.add(y)\n return retval\n\n def __mul__(self, other):\n \"\"\"Set-theoretic intersection.\n BNode IDs are not changed.\"\"\"\n retval = Graph()\n for x in other:\n if x in self:\n retval.add(x)\n return retval\n\n def __sub__(self, other):\n \"\"\"Set-theoretic difference.\n BNode IDs are not changed.\"\"\"\n retval = Graph()\n for x in self:\n if not x in other:\n retval.add(x)\n return retval\n\n def __xor__(self, other):\n \"\"\"Set-theoretic XOR.\n BNode IDs are not changed.\"\"\"\n return (self - other) + (other - self)\n\n __or__ = __add__\n __and__ = __mul__\n\n # Conv. methods\n\n def set(self, triple):\n \"\"\"Convenience method to update the value of object\n\n Remove any existing triples for subject and predicate before adding\n (subject, predicate, object).\n \"\"\"\n (subject, predicate, object_) = triple\n assert subject is not None, \\\n \"s can't be None in .set([s,p,o]), as it would remove (*, p, *)\"\n assert predicate is not None, \\\n \"p can't be None in .set([s,p,o]), as it would remove (s, *, *)\"\n self.remove((subject, predicate, None))\n self.add((subject, predicate, object_))\n\n def subjects(self, predicate=None, object=None):\n \"\"\"A generator of subjects with the given predicate and object\"\"\"\n for s, p, o in self.triples((None, predicate, object)):\n yield s\n\n def predicates(self, subject=None, object=None):\n \"\"\"A generator of predicates with the given subject and object\"\"\"\n for s, p, o in self.triples((subject, None, object)):\n yield p\n\n def objects(self, subject=None, predicate=None):\n \"\"\"A generator of objects with the given subject and predicate\"\"\"\n for s, p, o in self.triples((subject, predicate, None)):\n yield o\n\n def subject_predicates(self, object=None):\n \"\"\"A generator of (subject, predicate) tuples for the given object\"\"\"\n for s, p, o in self.triples((None, None, object)):\n yield s, p\n\n def subject_objects(self, predicate=None):\n \"\"\"A generator of (subject, object) tuples for the given predicate\"\"\"\n for s, p, o in self.triples((None, predicate, None)):\n yield s, o\n\n def predicate_objects(self, subject=None):\n \"\"\"A generator of (predicate, object) tuples for the given subject\"\"\"\n for s, p, o in self.triples((subject, None, None)):\n yield p, o\n\n def triples_choices(self, triple, context=None):\n subject, predicate, object_ = triple\n for (s, p, o), cg in self.store.triples_choices(\n (subject, predicate, object_), context=self):\n yield (s, p, o)\n\n def value(self, subject=None, predicate=RDF.value, object=None,\n default=None, any=True):\n \"\"\"Get a value for a pair of two criteria\n\n Exactly one of subject, predicate, object must be None. 
Useful if one\n knows that there may only be one value.\n\n It is one of those situations that occur a lot, hence this\n 'macro' like utility\n\n Parameters:\n subject, predicate, object -- exactly one must be None\n default -- value to be returned if no values found\n any -- if True, return any value in the case there is more than one,\n else, raise UniquenessError\n \"\"\"\n retval = default\n\n if (subject is None and predicate is None) or \\\n (subject is None and object is None) or \\\n (predicate is None and object is None):\n return None\n\n if object is None:\n values = self.objects(subject, predicate)\n if subject is None:\n values = self.subjects(predicate, object)\n if predicate is None:\n values = self.predicates(subject, object)\n\n try:\n retval = next(values)\n except StopIteration:\n retval = default\n else:\n if any is False:\n try:\n next(values)\n msg = (\"While trying to find a value for (%s, %s, %s) the\"\n \" following multiple values where found:\\n\" %\n (subject, predicate, object))\n triples = self.store.triples(\n (subject, predicate, object), None)\n for (s, p, o), contexts in triples:\n msg += \"(%s, %s, %s)\\n (contexts: %s)\\n\" % (\n s, p, o, list(contexts))\n raise exceptions.UniquenessError(msg)\n except StopIteration:\n pass\n return retval\n\n def label(self, subject, default=''):\n \"\"\"Query for the RDFS.label of the subject\n\n Return default if no label exists or any label if multiple exist.\n \"\"\"\n if subject is None:\n return default\n return self.value(subject, RDFS.label, default=default, any=True)\n\n def preferredLabel(self, subject, lang=None, default=None,\n labelProperties=(SKOS.prefLabel, RDFS.label)):\n \"\"\"\n Find the preferred label for subject.\n\n By default prefers skos:prefLabels over rdfs:labels. In case at least\n one prefLabel is found returns those, else returns labels. 
In case a\n language string (e.g., 'en', 'de' or even '' for no lang-tagged\n literals) is given, only such labels will be considered.\n\n Return a list of (labelProp, label) pairs, where labelProp is either\n skos:prefLabel or rdfs:label.\n\n >>> from rdflib import ConjunctiveGraph, URIRef, RDFS, Literal\n >>> from rdflib.namespace import SKOS\n >>> from pprint import pprint\n >>> g = ConjunctiveGraph()\n >>> u = URIRef(u'http://example.com/foo')\n >>> g.add([u, RDFS.label, Literal('foo')])\n >>> g.add([u, RDFS.label, Literal('bar')])\n >>> pprint(sorted(g.preferredLabel(u)))\n [(rdflib.term.URIRef(u'http://www.w3.org/2000/01/rdf-schema#label'),\n rdflib.term.Literal(u'bar')),\n (rdflib.term.URIRef(u'http://www.w3.org/2000/01/rdf-schema#label'),\n rdflib.term.Literal(u'foo'))]\n >>> g.add([u, SKOS.prefLabel, Literal('bla')])\n >>> pprint(g.preferredLabel(u))\n [(rdflib.term.URIRef(u'http://www.w3.org/2004/02/skos/core#prefLabel'),\n rdflib.term.Literal(u'bla'))]\n >>> g.add([u, SKOS.prefLabel, Literal('blubb', lang='en')])\n >>> sorted(g.preferredLabel(u)) #doctest: +NORMALIZE_WHITESPACE\n [(rdflib.term.URIRef(u'http://www.w3.org/2004/02/skos/core#prefLabel'),\n rdflib.term.Literal(u'bla')),\n (rdflib.term.URIRef(u'http://www.w3.org/2004/02/skos/core#prefLabel'),\n rdflib.term.Literal(u'blubb', lang='en'))]\n >>> g.preferredLabel(u, lang='') #doctest: +NORMALIZE_WHITESPACE\n [(rdflib.term.URIRef(u'http://www.w3.org/2004/02/skos/core#prefLabel'),\n rdflib.term.Literal(u'bla'))]\n >>> pprint(g.preferredLabel(u, lang='en'))\n [(rdflib.term.URIRef(u'http://www.w3.org/2004/02/skos/core#prefLabel'),\n rdflib.term.Literal(u'blubb', lang='en'))]\n \"\"\"\n\n if default is None:\n default = []\n\n # setup the language filtering\n if lang is not None:\n if lang == '': # we only want not language-tagged literals\n def langfilter(l): return l.language is None\n else:\n def langfilter(l): return l.language == lang\n else: # we don't care about language tags\n def langfilter(l): return True\n\n for labelProp in labelProperties:\n labels = list(filter(langfilter, self.objects(subject, labelProp)))\n if len(labels) == 0:\n continue\n else:\n return [(labelProp, l) for l in labels]\n return default\n\n def comment(self, subject, default=''):\n \"\"\"Query for the RDFS.comment of the subject\n\n Return default if no comment exists\n \"\"\"\n if subject is None:\n return default\n return self.value(subject, RDFS.comment, default=default, any=True)\n\n def items(self, list):\n \"\"\"Generator over all items in the resource specified by list\n\n list is an RDF collection.\n \"\"\"\n chain = set([list])\n while list:\n item = self.value(list, RDF.first)\n if item is not None:\n yield item\n list = self.value(list, RDF.rest)\n if list in chain:\n raise ValueError(\"List contains a recursive rdf:rest reference\")\n chain.add(list)\n\n def transitiveClosure(self, func, arg, seen=None):\n \"\"\"\n Generates transitive closure of a user-defined\n function against the graph\n\n >>> from rdflib.collection import Collection\n >>> g=Graph()\n >>> a=BNode('foo')\n >>> b=BNode('bar')\n >>> c=BNode('baz')\n >>> g.add((a,RDF.first,RDF.type))\n >>> g.add((a,RDF.rest,b))\n >>> g.add((b,RDF.first,RDFS.label))\n >>> g.add((b,RDF.rest,c))\n >>> g.add((c,RDF.first,RDFS.comment))\n >>> g.add((c,RDF.rest,RDF.nil))\n >>> def topList(node,g):\n ... for s in g.subjects(RDF.rest,node):\n ... yield s\n >>> def reverseList(node,g):\n ... for f in g.objects(node,RDF.first):\n ... print(f)\n ... for s in g.subjects(RDF.rest,node):\n ... 
yield s\n\n >>> [rt for rt in g.transitiveClosure(\n ... topList,RDF.nil)] # doctest: +NORMALIZE_WHITESPACE\n [rdflib.term.BNode('baz'),\n rdflib.term.BNode('bar'),\n rdflib.term.BNode('foo')]\n\n >>> [rt for rt in g.transitiveClosure(\n ... reverseList,RDF.nil)] # doctest: +NORMALIZE_WHITESPACE\n http://www.w3.org/2000/01/rdf-schema#comment\n http://www.w3.org/2000/01/rdf-schema#label\n http://www.w3.org/1999/02/22-rdf-syntax-ns#type\n [rdflib.term.BNode('baz'),\n rdflib.term.BNode('bar'),\n rdflib.term.BNode('foo')]\n\n \"\"\"\n if seen is None:\n seen = {}\n elif arg in seen:\n return\n seen[arg] = 1\n for rt in func(arg, self):\n yield rt\n for rt_2 in self.transitiveClosure(func, rt, seen):\n yield rt_2\n\n def transitive_objects(self, subject, property, remember=None):\n \"\"\"Transitively generate objects for the ``property`` relationship\n\n Generated objects belong to the depth first transitive closure of the\n ``property`` relationship starting at ``subject``.\n \"\"\"\n if remember is None:\n remember = {}\n if subject in remember:\n return\n remember[subject] = 1\n yield subject\n for object in self.objects(subject, property):\n for o in self.transitive_objects(object, property, remember):\n yield o\n\n def transitive_subjects(self, predicate, object, remember=None):\n \"\"\"Transitively generate objects for the ``property`` relationship\n\n Generated objects belong to the depth first transitive closure of the\n ``property`` relationship starting at ``subject``.\n \"\"\"\n if remember is None:\n remember = {}\n if object in remember:\n return\n remember[object] = 1\n yield object\n for subject in self.subjects(predicate, object):\n for s in self.transitive_subjects(predicate, subject, remember):\n yield s\n\n def seq(self, subject):\n \"\"\"Check if subject is an rdf:Seq\n\n If yes, it returns a Seq class instance, None otherwise.\n \"\"\"\n if (subject, RDF.type, RDF.Seq) in self:\n return Seq(self, subject)\n else:\n return None\n\n def qname(self, uri):\n return self.namespace_manager.qname(uri)\n\n def compute_qname(self, uri, generate=True):\n return self.namespace_manager.compute_qname(uri, generate)\n\n def bind(self, prefix, namespace, override=True, replace=False):\n \"\"\"Bind prefix to namespace\n\n If override is True will bind namespace to given prefix even\n if namespace was already bound to a different prefix.\n\n if replace, replace any existing prefix with the new namespace\n\n for example: graph.bind('foaf', 'http://xmlns.com/foaf/0.1/')\n\n \"\"\"\n return self.namespace_manager.bind(\n prefix, namespace, override=override, replace=replace)\n\n def namespaces(self):\n \"\"\"Generator over all the prefix, namespace tuples\"\"\"\n for prefix, namespace in self.namespace_manager.namespaces():\n yield prefix, namespace\n\n def absolutize(self, uri, defrag=1):\n \"\"\"Turn uri into an absolute URI if it's not one already\"\"\"\n return self.namespace_manager.absolutize(uri, defrag)\n\n def serialize(self, destination=None, format=\"xml\",\n base=None, encoding=None, **args):\n \"\"\"Serialize the Graph to destination\n\n If destination is None serialize method returns the serialization as a\n string. 
Format defaults to xml (AKA rdf/xml).\n\n Format support can be extended with plugins,\n but 'xml', 'n3', 'turtle', 'nt', 'pretty-xml', 'trix', 'trig' and 'nquads' are built in.\n \"\"\"\n serializer = plugin.get(format, Serializer)(self)\n if destination is None:\n stream = BytesIO()\n serializer.serialize(stream, base=base, encoding=encoding, **args)\n return stream.getvalue()\n if hasattr(destination, \"write\"):\n stream = destination\n serializer.serialize(stream, base=base, encoding=encoding, **args)\n else:\n location = destination\n scheme, netloc, path, params, _query, fragment = urlparse(location)\n if netloc != \"\":\n print(\"WARNING: not saving as location\" +\n \"is not a local file reference\")\n return\n fd, name = tempfile.mkstemp()\n stream = os.fdopen(fd, \"wb\")\n serializer.serialize(stream, base=base, encoding=encoding, **args)\n stream.close()\n if hasattr(shutil, \"move\"):\n shutil.move(name, path)\n else:\n shutil.copy(name, path)\n os.remove(name)\n\n def parse(self, source=None, publicID=None, format=None,\n location=None, file=None, data=None, **args):\n \"\"\"\n Parse source adding the resulting triples to the Graph.\n\n The source is specified using one of source, location, file or\n data.\n\n :Parameters:\n\n - `source`: An InputSource, file-like object, or string. In the case\n of a string the string is the location of the source.\n - `location`: A string indicating the relative or absolute URL of the\n source. Graph's absolutize method is used if a relative location\n is specified.\n - `file`: A file-like object.\n - `data`: A string containing the data to be parsed.\n - `format`: Used if format can not be determined from source.\n Defaults to rdf/xml. Format support can be extended with plugins,\n but 'xml', 'n3', 'nt', 'trix', 'rdfa' are built in.\n - `publicID`: the logical URI to use as the document base. If None\n specified the document location is used (at least in the case where\n there is a document location).\n\n :Returns:\n\n - self, the graph instance.\n\n Examples:\n\n >>> my_data = '''\n ... <rdf:RDF\n ... xmlns:rdf='http://www.w3.org/1999/02/22-rdf-syntax-ns#'\n ... xmlns:rdfs='http://www.w3.org/2000/01/rdf-schema#'\n ... >\n ... <rdf:Description>\n ... <rdfs:label>Example</rdfs:label>\n ... <rdfs:comment>This is really just an example.</rdfs:comment>\n ... </rdf:Description>\n ... </rdf:RDF>\n ... '''\n >>> import tempfile\n >>> fd, file_name = tempfile.mkstemp()\n >>> f = os.fdopen(fd, 'w')\n >>> dummy = f.write(my_data) # Returns num bytes written on py3\n >>> f.close()\n\n >>> g = Graph()\n >>> result = g.parse(data=my_data, format=\"application/rdf+xml\")\n >>> len(g)\n 2\n\n >>> g = Graph()\n >>> result = g.parse(location=file_name, format=\"application/rdf+xml\")\n >>> len(g)\n 2\n\n >>> g = Graph()\n >>> with open(file_name, \"r\") as f:\n ... result = g.parse(f, format=\"application/rdf+xml\")\n >>> len(g)\n 2\n\n >>> os.remove(file_name)\n\n \"\"\"\n\n source = create_input_source(source=source, publicID=publicID,\n location=location, file=file,\n data=data, format=format)\n if format is None:\n format = source.content_type\n if format is None:\n # raise Exception(\"Could not determine format for %r. 
You can\" + \\\n # \"expicitly specify one with the format argument.\" % source)\n format = \"application/rdf+xml\"\n parser = plugin.get(format, Parser)()\n try:\n parser.parse(source, self, **args)\n finally:\n if source.auto_close:\n source.close()\n return self\n\n def load(self, source, publicID=None, format=\"xml\"):\n self.parse(source, publicID, format)\n\n def query(self, query_object, processor='sparql',\n result='sparql', initNs=None, initBindings=None,\n use_store_provided=True, **kwargs):\n \"\"\"\n Query this graph.\n\n A type of 'prepared queries' can be realised by providing\n initial variable bindings with initBindings\n\n Initial namespaces are used to resolve prefixes used in the query,\n if none are given, the namespaces from the graph's namespace manager\n are used.\n\n :returntype: rdflib.query.QueryResult\n\n \"\"\"\n\n initBindings = initBindings or {}\n initNs = initNs or dict(self.namespaces())\n\n if hasattr(self.store, \"query\") and use_store_provided:\n try:\n return self.store.query(\n query_object, initNs, initBindings,\n self.default_union and\n '__UNION__' or\n self.identifier,\n **kwargs)\n except NotImplementedError:\n pass # store has no own implementation\n\n if not isinstance(result, query.Result):\n result = plugin.get(result, query.Result)\n if not isinstance(processor, query.Processor):\n processor = plugin.get(processor, query.Processor)(self)\n\n return result(processor.query(\n query_object, initBindings, initNs, **kwargs))\n\n def update(self, update_object, processor='sparql',\n initNs=None, initBindings=None,\n use_store_provided=True, **kwargs):\n \"\"\"Update this graph with the given update query.\"\"\"\n initBindings = initBindings or {}\n initNs = initNs or dict(self.namespaces())\n\n if hasattr(self.store, \"update\") and use_store_provided:\n try:\n return self.store.update(\n update_object, initNs, initBindings,\n self.default_union and\n '__UNION__' or\n self.identifier,\n **kwargs)\n except NotImplementedError:\n pass # store has no own implementation\n\n if not isinstance(processor, query.UpdateProcessor):\n processor = plugin.get(processor, query.UpdateProcessor)(self)\n\n return processor.update(update_object, initBindings, initNs, **kwargs)\n\n def n3(self):\n \"\"\"return an n3 identifier for the Graph\"\"\"\n return \"[%s]\" % self.identifier.n3()\n\n def __reduce__(self):\n return (Graph, (self.store, self.identifier,))\n\n def isomorphic(self, other):\n \"\"\"\n does a very basic check if these graphs are the same\n If no BNodes are involved, this is accurate.\n\n See rdflib.compare for a correct implementation of isomorphism checks\n \"\"\"\n # TODO: this is only an approximation.\n if len(self) != len(other):\n return False\n for s, p, o in self:\n if not isinstance(s, BNode) and not isinstance(o, BNode):\n if not (s, p, o) in other:\n return False\n for s, p, o in other:\n if not isinstance(s, BNode) and not isinstance(o, BNode):\n if not (s, p, o) in self:\n return False\n # TODO: very well could be a false positive at this point yet.\n return True\n\n def connected(self):\n \"\"\"Check if the Graph is connected\n\n The Graph is considered undirectional.\n\n Performs a search on the Graph, starting from a random node. Then\n iteratively goes depth-first through the triplets where the node is\n subject and object. 
Return True if all nodes have been visited and\n False if it cannot continue and there are still unvisited nodes left.\n \"\"\"\n all_nodes = list(self.all_nodes())\n discovered = []\n\n # take a random one, could also always take the first one, doesn't\n # really matter.\n if not all_nodes:\n return False\n\n visiting = [all_nodes[random.randrange(len(all_nodes))]]\n while visiting:\n x = visiting.pop()\n if x not in discovered:\n discovered.append(x)\n for new_x in self.objects(subject=x):\n if new_x not in discovered and new_x not in visiting:\n visiting.append(new_x)\n for new_x in self.subjects(object=x):\n if new_x not in discovered and new_x not in visiting:\n visiting.append(new_x)\n\n # optimisation by only considering length, since no new objects can\n # be introduced anywhere.\n if len(all_nodes) == len(discovered):\n return True\n else:\n return False\n\n def all_nodes(self):\n res = set(self.objects())\n res.update(self.subjects())\n return res\n\n def collection(self, identifier):\n \"\"\"Create a new ``Collection`` instance.\n\n Parameters:\n\n - ``identifier``: a URIRef or BNode instance.\n\n Example::\n\n >>> graph = Graph()\n >>> uri = URIRef(\"http://example.org/resource\")\n >>> collection = graph.collection(uri)\n >>> assert isinstance(collection, Collection)\n >>> assert collection.uri is uri\n >>> assert collection.graph is graph\n >>> collection += [ Literal(1), Literal(2) ]\n \"\"\"\n\n return Collection(self, identifier)\n\n def resource(self, identifier):\n \"\"\"Create a new ``Resource`` instance.\n\n Parameters:\n\n - ``identifier``: a URIRef or BNode instance.\n\n Example::\n\n >>> graph = Graph()\n >>> uri = URIRef(\"http://example.org/resource\")\n >>> resource = graph.resource(uri)\n >>> assert isinstance(resource, Resource)\n >>> assert resource.identifier is uri\n >>> assert resource.graph is graph\n\n \"\"\"\n if not isinstance(identifier, Node):\n identifier = URIRef(identifier)\n return Resource(self, identifier)\n\n def _process_skolem_tuples(self, target, func):\n for t in self.triples((None, None, None)):\n target.add(func(t))\n\n def skolemize(self, new_graph=None, bnode=None, authority=None, basepath=None):\n def do_skolemize(bnode, t):\n (s, p, o) = t\n if s == bnode:\n s = s.skolemize(authority=authority, basepath=basepath)\n if o == bnode:\n o = o.skolemize(authority=authority, basepath=basepath)\n return (s, p, o)\n\n def do_skolemize2(t):\n (s, p, o) = t\n if isinstance(s, BNode):\n s = s.skolemize(authority=authority, basepath=basepath)\n if isinstance(o, BNode):\n o = o.skolemize(authority=authority, basepath=basepath)\n return (s, p, o)\n\n retval = Graph() if new_graph is None else new_graph\n\n if bnode is None:\n self._process_skolem_tuples(retval, do_skolemize2)\n elif isinstance(bnode, BNode):\n self._process_skolem_tuples(\n retval, lambda t: do_skolemize(bnode, t))\n\n return retval\n\n def de_skolemize(self, new_graph=None, uriref=None):\n def do_de_skolemize(uriref, t):\n (s, p, o) = t\n if s == uriref:\n s = s.de_skolemize()\n if o == uriref:\n o = o.de_skolemize()\n return (s, p, o)\n\n def do_de_skolemize2(t):\n (s, p, o) = t\n if isinstance(s, Genid):\n s = s.de_skolemize()\n if isinstance(o, Genid):\n o = o.de_skolemize()\n return (s, p, o)\n\n retval = Graph() if new_graph is None else new_graph\n\n if uriref is None:\n self._process_skolem_tuples(retval, do_de_skolemize2)\n elif isinstance(uriref, Genid):\n self._process_skolem_tuples(\n retval, lambda t: do_de_skolemize(uriref, t))\n\n return retval\n\n\nclass 
ConjunctiveGraph(Graph):\n\n \"\"\"\n A ConjunctiveGraph is an (unnamed) aggregation of all the named\n graphs in a store.\n\n It has a ``default`` graph, whose name is associated with the\n graph throughout its life. :meth:`__init__` can take an identifier\n to use as the name of this default graph or it will assign a\n BNode.\n\n All methods that add triples work against this default graph.\n\n All queries are carried out against the union of all graphs.\n\n \"\"\"\n\n def __init__(self, store='default', identifier=None):\n super(ConjunctiveGraph, self).__init__(store, identifier=identifier)\n assert self.store.context_aware, (\"ConjunctiveGraph must be backed by\"\n \" a context aware store.\")\n self.context_aware = True\n self.default_union = True # Conjunctive!\n self.default_context = Graph(store=self.store,\n identifier=identifier or BNode())\n\n def __str__(self):\n pattern = (\"[a rdflib:ConjunctiveGraph;rdflib:storage \"\n \"[a rdflib:Store;rdfs:label '%s']]\")\n return pattern % self.store.__class__.__name__\n\n def _spoc(self, triple_or_quad, default=False):\n \"\"\"\n helper method for having methods that support\n either triples or quads\n \"\"\"\n if triple_or_quad is None:\n return (None, None, None, self.default_context if default else None)\n if len(triple_or_quad) == 3:\n c = self.default_context if default else None\n (s, p, o) = triple_or_quad\n elif len(triple_or_quad) == 4:\n (s, p, o, c) = triple_or_quad\n c = self._graph(c)\n return s, p, o, c\n\n def __contains__(self, triple_or_quad):\n \"\"\"Support for 'triple/quad in graph' syntax\"\"\"\n s, p, o, c = self._spoc(triple_or_quad)\n for t in self.triples((s, p, o), context=c):\n return True\n return False\n\n def add(self, triple_or_quad):\n \"\"\"\n Add a triple or quad to the store.\n\n if a triple is given it is added to the default context\n \"\"\"\n\n s, p, o, c = self._spoc(triple_or_quad, default=True)\n\n _assertnode(s, p, o)\n\n self.store.add((s, p, o), context=c, quoted=False)\n\n def _graph(self, c):\n if c is None:\n return None\n if not isinstance(c, Graph):\n return self.get_context(c)\n else:\n return c\n\n def addN(self, quads):\n \"\"\"Add a sequence of triples with context\"\"\"\n\n self.store.addN(\n (s, p, o, self._graph(c)) for s, p, o, c in quads if\n _assertnode(s, p, o)\n )\n\n def remove(self, triple_or_quad):\n \"\"\"\n Removes a triple or quads\n\n if a triple is given it is removed from all contexts\n\n a quad is removed from the given context only\n\n \"\"\"\n s, p, o, c = self._spoc(triple_or_quad)\n\n self.store.remove((s, p, o), context=c)\n\n def triples(self, triple_or_quad, context=None):\n \"\"\"\n Iterate over all the triples in the entire conjunctive graph\n\n For legacy reasons, this can take the context to query either\n as a fourth element of the quad, or as the explicit context\n keyword parameter. 
The kw param takes precedence.\n \"\"\"\n\n s, p, o, c = self._spoc(triple_or_quad)\n context = self._graph(context or c)\n\n if self.default_union:\n if context == self.default_context:\n context = None\n else:\n if context is None:\n context = self.default_context\n\n if isinstance(p, Path):\n if context is None:\n context = self\n\n for s, o in p.eval(context, s, o):\n yield (s, p, o)\n else:\n for (s, p, o), cg in self.store.triples((s, p, o), context=context):\n yield s, p, o\n\n def quads(self, triple_or_quad=None):\n \"\"\"Iterate over all the quads in the entire conjunctive graph\"\"\"\n\n s, p, o, c = self._spoc(triple_or_quad)\n\n for (s, p, o), cg in self.store.triples((s, p, o), context=c):\n for ctx in cg:\n yield s, p, o, ctx\n\n def triples_choices(self, triple, context=None):\n \"\"\"Iterate over all the triples in the entire conjunctive graph\"\"\"\n s, p, o = triple\n if context is None:\n if not self.default_union:\n context = self.default_context\n else:\n context = self._graph(context)\n\n for (s1, p1, o1), cg in self.store.triples_choices((s, p, o),\n context=context):\n yield (s1, p1, o1)\n\n def __len__(self):\n \"\"\"Number of triples in the entire conjunctive graph\"\"\"\n return self.store.__len__()\n\n def contexts(self, triple=None):\n \"\"\"Iterate over all contexts in the graph\n\n If triple is specified, iterate over all contexts the triple is in.\n \"\"\"\n for context in self.store.contexts(triple):\n if isinstance(context, Graph):\n # TODO: One of these should never happen and probably\n # should raise an exception rather than smoothing over\n # the weirdness - see #225\n yield context\n else:\n yield self.get_context(context)\n\n def get_context(self, identifier, quoted=False):\n \"\"\"Return a context graph for the given identifier\n\n identifier must be a URIRef or BNode.\n \"\"\"\n return Graph(store=self.store, identifier=identifier,\n namespace_manager=self)\n\n def remove_context(self, context):\n \"\"\"Removes the given context from the graph\"\"\"\n self.store.remove((None, None, None), context)\n\n def context_id(self, uri, context_id=None):\n \"\"\"URI#context\"\"\"\n uri = uri.split(\"#\", 1)[0]\n if context_id is None:\n context_id = \"#context\"\n return URIRef(context_id, base=uri)\n\n def parse(self, source=None, publicID=None, format=\"xml\",\n location=None, file=None, data=None, **args):\n \"\"\"\n Parse source adding the resulting triples to its own context\n (sub graph of this graph).\n\n See :meth:`rdflib.graph.Graph.parse` for documentation on arguments.\n\n :Returns:\n\n The graph into which the source was parsed. In the case of n3\n it returns the root context.\n \"\"\"\n\n source = create_input_source(\n source=source, publicID=publicID, location=location,\n file=file, data=data, format=format)\n\n g_id = publicID and publicID or source.getPublicId()\n if not isinstance(g_id, Node):\n g_id = URIRef(g_id)\n\n context = Graph(store=self.store, identifier=g_id)\n context.remove((None, None, None)) # hmm ?\n context.parse(source, publicID=publicID, format=format, **args)\n return context\n\n def __reduce__(self):\n return (ConjunctiveGraph, (self.store, self.identifier))\n\n\nDATASET_DEFAULT_GRAPH_ID = URIRef('urn:x-rdflib:default')\n\n\nclass Dataset(ConjunctiveGraph):\n __doc__ = \"\"\"\n RDF 1.1 Dataset. 
Small extension to the Conjunctive Graph:\n - the primary term is graphs in the datasets and not contexts with quads,\n so there is a separate method to set/retrieve a graph in a dataset and\n operate with graphs\n - graphs cannot be identified with blank nodes\n - added a method to directly add a single quad\n\n Examples of usage:\n\n >>> # Create a new Dataset\n >>> ds = Dataset()\n >>> # simple triples goes to default graph\n >>> ds.add((URIRef('http://example.org/a'),\n ... URIRef('http://www.example.org/b'),\n ... Literal('foo')))\n >>>\n >>> # Create a graph in the dataset, if the graph name has already been\n >>> # used, the corresponding graph will be returned\n >>> # (ie, the Dataset keeps track of the constituent graphs)\n >>> g = ds.graph(URIRef('http://www.example.com/gr'))\n >>>\n >>> # add triples to the new graph as usual\n >>> g.add(\n ... (URIRef('http://example.org/x'),\n ... URIRef('http://example.org/y'),\n ... Literal('bar')) )\n >>> # alternatively: add a quad to the dataset -> goes to the graph\n >>> ds.add(\n ... (URIRef('http://example.org/x'),\n ... URIRef('http://example.org/z'),\n ... Literal('foo-bar'),g) )\n >>>\n >>> # querying triples return them all regardless of the graph\n >>> for t in ds.triples((None,None,None)): # doctest: +SKIP\n ... print(t) # doctest: +NORMALIZE_WHITESPACE\n (rdflib.term.URIRef(u'http://example.org/a'),\n rdflib.term.URIRef(u'http://www.example.org/b'),\n rdflib.term.Literal(u'foo'))\n (rdflib.term.URIRef(u'http://example.org/x'),\n rdflib.term.URIRef(u'http://example.org/z'),\n rdflib.term.Literal(u'foo-bar'))\n (rdflib.term.URIRef(u'http://example.org/x'),\n rdflib.term.URIRef(u'http://example.org/y'),\n rdflib.term.Literal(u'bar'))\n >>>\n >>> # querying quads return quads; the fourth argument can be unrestricted\n >>> # or restricted to a graph\n >>> for q in ds.quads((None, None, None, None)): # doctest: +SKIP\n ... print(q) # doctest: +NORMALIZE_WHITESPACE\n (rdflib.term.URIRef(u'http://example.org/a'),\n rdflib.term.URIRef(u'http://www.example.org/b'),\n rdflib.term.Literal(u'foo'),\n None)\n (rdflib.term.URIRef(u'http://example.org/x'),\n rdflib.term.URIRef(u'http://example.org/y'),\n rdflib.term.Literal(u'bar'),\n rdflib.term.URIRef(u'http://www.example.com/gr'))\n (rdflib.term.URIRef(u'http://example.org/x'),\n rdflib.term.URIRef(u'http://example.org/z'),\n rdflib.term.Literal(u'foo-bar'),\n rdflib.term.URIRef(u'http://www.example.com/gr'))\n >>>\n >>> for q in ds.quads((None,None,None,g)): # doctest: +SKIP\n ... print(q) # doctest: +NORMALIZE_WHITESPACE\n (rdflib.term.URIRef(u'http://example.org/x'),\n rdflib.term.URIRef(u'http://example.org/y'),\n rdflib.term.Literal(u'bar'),\n rdflib.term.URIRef(u'http://www.example.com/gr'))\n (rdflib.term.URIRef(u'http://example.org/x'),\n rdflib.term.URIRef(u'http://example.org/z'),\n rdflib.term.Literal(u'foo-bar'),\n rdflib.term.URIRef(u'http://www.example.com/gr'))\n >>> # Note that in the call above -\n >>> # ds.quads((None,None,None,'http://www.example.com/gr'))\n >>> # would have been accepted, too\n >>>\n >>> # graph names in the dataset can be queried:\n >>> for c in ds.graphs(): # doctest: +SKIP\n ... print(c) # doctest:\n DEFAULT\n http://www.example.com/gr\n >>> # A graph can be created without specifying a name; a skolemized genid\n >>> # is created on the fly\n >>> h = ds.graph()\n >>> for c in ds.graphs(): # doctest: +SKIP\n ... 
print(c) # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS\n DEFAULT\n http://rdlib.net/.well-known/genid/rdflib/N...\n http://www.example.com/gr\n >>> # Note that the Dataset.graphs() call returns names of empty graphs,\n >>> # too. This can be restricted:\n >>> for c in ds.graphs(empty=False): # doctest: +SKIP\n ... print(c) # doctest: +NORMALIZE_WHITESPACE\n DEFAULT\n http://www.example.com/gr\n >>>\n >>> # a graph can also be removed from a dataset via ds.remove_graph(g)\n\n .. versionadded:: 4.0\n \"\"\"\n\n def __init__(self, store='default', default_union=False):\n super(Dataset, self).__init__(store=store, identifier=None)\n\n if not self.store.graph_aware:\n raise Exception(\"DataSet must be backed by a graph-aware store!\")\n self.default_context = Graph(store=self.store, identifier=DATASET_DEFAULT_GRAPH_ID)\n\n self.default_union = default_union\n\n def __str__(self):\n pattern = (\"[a rdflib:Dataset;rdflib:storage \"\n \"[a rdflib:Store;rdfs:label '%s']]\")\n return pattern % self.store.__class__.__name__\n\n def graph(self, identifier=None):\n if identifier is None:\n from rdflib.term import rdflib_skolem_genid\n self.bind(\n \"genid\", \"http://rdflib.net\" + rdflib_skolem_genid,\n override=False)\n identifier = BNode().skolemize()\n\n g = self._graph(identifier)\n\n self.store.add_graph(g)\n return g\n\n def parse(self, source=None, publicID=None, format=\"xml\",\n location=None, file=None, data=None, **args):\n c = ConjunctiveGraph.parse(self, source, publicID, format, location, file, data, **args)\n self.graph(c)\n return c\n\n def add_graph(self, g):\n \"\"\"alias of graph for consistency\"\"\"\n return self.graph(g)\n\n def remove_graph(self, g):\n if not isinstance(g, Graph):\n g = self.get_context(g)\n\n self.store.remove_graph(g)\n if g is None or g == self.default_context:\n # default graph cannot be removed\n # only triples deleted, so add it back in\n self.store.add_graph(self.default_context)\n\n def contexts(self, triple=None):\n default = False\n for c in super(Dataset, self).contexts(triple):\n default |= c.identifier == DATASET_DEFAULT_GRAPH_ID\n yield c\n if not default:\n yield self.graph(DATASET_DEFAULT_GRAPH_ID)\n\n graphs = contexts\n\n def quads(self, quad):\n for s, p, o, c in super(Dataset, self).quads(quad):\n if c.identifier == self.default_context:\n yield (s, p, o, None)\n else:\n yield (s, p, o, c.identifier)\n\n\nclass QuotedGraph(Graph):\n \"\"\"\n Quoted Graphs are intended to implement Notation 3 formulae. 
They are\n associated with a required identifier that the N3 parser *must* provide\n in order to maintain consistent formulae identification for scenarios\n such as implication and other such processing.\n \"\"\"\n\n def __init__(self, store, identifier):\n super(QuotedGraph, self).__init__(store, identifier)\n\n def add(self, triple):\n \"\"\"Add a triple with self as context\"\"\"\n s, p, o = triple\n assert isinstance(s, Node), \\\n \"Subject %s must be an rdflib term\" % (s,)\n assert isinstance(p, Node), \\\n \"Predicate %s must be an rdflib term\" % (p,)\n assert isinstance(o, Node), \\\n \"Object %s must be an rdflib term\" % (o,)\n\n self.store.add((s, p, o), self, quoted=True)\n\n def addN(self, quads):\n \"\"\"Add a sequence of triple with context\"\"\"\n\n self.store.addN(\n (s, p, o, c) for s, p, o, c in quads\n if isinstance(c, QuotedGraph) and\n c.identifier is self.identifier and\n _assertnode(s, p, o)\n )\n\n def n3(self):\n \"\"\"Return an n3 identifier for the Graph\"\"\"\n return \"{%s}\" % self.identifier.n3()\n\n def __str__(self):\n identifier = self.identifier.n3()\n label = self.store.__class__.__name__\n pattern = (\"{this rdflib.identifier %s;rdflib:storage \"\n \"[a rdflib:Store;rdfs:label '%s']}\")\n return pattern % (identifier, label)\n\n def __reduce__(self):\n return (QuotedGraph, (self.store, self.identifier))\n\n\n# Make sure QuotedGraph is ordered correctly\n# wrt to other Terms.\n# this must be done here, as the QuotedGraph cannot be\n# circularily imported in term.py\nrdflib.term._ORDERING[QuotedGraph] = 11\n\n\nclass Seq(object):\n \"\"\"Wrapper around an RDF Seq resource\n\n It implements a container type in Python with the order of the items\n returned corresponding to the Seq content. It is based on the natural\n ordering of the predicate names _1, _2, _3, etc, which is the\n 'implementation' of a sequence in RDF terms.\n \"\"\"\n\n def __init__(self, graph, subject):\n \"\"\"Parameters:\n\n - graph:\n the graph containing the Seq\n\n - subject:\n the subject of a Seq. Note that the init does not\n check whether this is a Seq, this is done in whoever\n creates this instance!\n \"\"\"\n\n _list = self._list = list()\n LI_INDEX = URIRef(str(RDF) + \"_\")\n for (p, o) in graph.predicate_objects(subject):\n if p.startswith(LI_INDEX): # != RDF.Seq: #\n i = int(p.replace(LI_INDEX, ''))\n _list.append((i, o))\n\n # here is the trick: the predicates are _1, _2, _3, etc. Ie,\n # by sorting the keys (by integer) we have what we want!\n _list.sort()\n\n def toPython(self):\n return self\n\n def __iter__(self):\n \"\"\"Generator over the items in the Seq\"\"\"\n for _, item in self._list:\n yield item\n\n def __len__(self):\n \"\"\"Length of the Seq\"\"\"\n return len(self._list)\n\n def __getitem__(self, index):\n \"\"\"Item given by index from the Seq\"\"\"\n index, item = self._list.__getitem__(index)\n return item\n\n\nclass ModificationException(Exception):\n\n def __init__(self):\n pass\n\n def __str__(self):\n return (\"Modifications and transactional operations not allowed on \"\n \"ReadOnlyGraphAggregate instances\")\n\n\nclass UnSupportedAggregateOperation(Exception):\n\n def __init__(self):\n pass\n\n def __str__(self):\n return (\"This operation is not supported by ReadOnlyGraphAggregate \"\n \"instances\")\n\n\nclass ReadOnlyGraphAggregate(ConjunctiveGraph):\n \"\"\"Utility class for treating a set of graphs as a single graph\n\n Only read operations are supported (hence the name). 
Essentially a\n ConjunctiveGraph over an explicit subset of the entire store.\n \"\"\"\n\n def __init__(self, graphs, store='default'):\n if store is not None:\n super(ReadOnlyGraphAggregate, self).__init__(store)\n Graph.__init__(self, store)\n self.__namespace_manager = None\n\n assert isinstance(graphs, list) \\\n and graphs \\\n and [g for g in graphs if isinstance(g, Graph)], \\\n \"graphs argument must be a list of Graphs!!\"\n self.graphs = graphs\n\n def __repr__(self):\n return \"<ReadOnlyGraphAggregate: %s graphs>\" % len(self.graphs)\n\n def destroy(self, configuration):\n raise ModificationException()\n\n # Transactional interfaces (optional)\n def commit(self):\n raise ModificationException()\n\n def rollback(self):\n raise ModificationException()\n\n def open(self, configuration, create=False):\n # TODO: is there a use case for this method?\n for graph in self.graphs:\n graph.open(self, configuration, create)\n\n def close(self):\n for graph in self.graphs:\n graph.close()\n\n def add(self, triple):\n raise ModificationException()\n\n def addN(self, quads):\n raise ModificationException()\n\n def remove(self, triple):\n raise ModificationException()\n\n def triples(self, triple):\n s, p, o = triple\n for graph in self.graphs:\n if isinstance(p, Path):\n for s, o in p.eval(self, s, o):\n yield s, p, o\n else:\n for s1, p1, o1 in graph.triples((s, p, o)):\n yield (s1, p1, o1)\n\n def __contains__(self, triple_or_quad):\n context = None\n if len(triple_or_quad) == 4:\n context = triple_or_quad[3]\n for graph in self.graphs:\n if context is None or graph.identifier == context.identifier:\n if triple_or_quad[:3] in graph:\n return True\n return False\n\n def quads(self, triple):\n \"\"\"Iterate over all the quads in the entire aggregate graph\"\"\"\n s, p, o = triple\n for graph in self.graphs:\n for s1, p1, o1 in graph.triples((s, p, o)):\n yield (s1, p1, o1, graph)\n\n def __len__(self):\n return sum(len(g) for g in self.graphs)\n\n def __hash__(self):\n raise UnSupportedAggregateOperation()\n\n def __cmp__(self, other):\n if other is None:\n return -1\n elif isinstance(other, Graph):\n return -1\n elif isinstance(other, ReadOnlyGraphAggregate):\n return cmp(self.graphs, other.graphs)\n else:\n return -1\n\n def __iadd__(self, other):\n raise ModificationException()\n\n def __isub__(self, other):\n raise ModificationException()\n\n # Conv. 
methods\n\n def triples_choices(self, triple, context=None):\n subject, predicate, object_ = triple\n for graph in self.graphs:\n choices = graph.triples_choices((subject, predicate, object_))\n for (s, p, o) in choices:\n yield (s, p, o)\n\n def qname(self, uri):\n if hasattr(self, 'namespace_manager') and self.namespace_manager:\n return self.namespace_manager.qname(uri)\n raise UnSupportedAggregateOperation()\n\n def compute_qname(self, uri, generate=True):\n if hasattr(self, 'namespace_manager') and self.namespace_manager:\n return self.namespace_manager.compute_qname(uri, generate)\n raise UnSupportedAggregateOperation()\n\n def bind(self, prefix, namespace, override=True):\n raise UnSupportedAggregateOperation()\n\n def namespaces(self):\n if hasattr(self, 'namespace_manager'):\n for prefix, namespace in self.namespace_manager.namespaces():\n yield prefix, namespace\n else:\n for graph in self.graphs:\n for prefix, namespace in graph.namespaces():\n yield prefix, namespace\n\n def absolutize(self, uri, defrag=1):\n raise UnSupportedAggregateOperation()\n\n def parse(self, source, publicID=None, format=\"xml\", **args):\n raise ModificationException()\n\n def n3(self):\n raise UnSupportedAggregateOperation()\n\n def __reduce__(self):\n raise UnSupportedAggregateOperation()\n\n\ndef _assertnode(*terms):\n for t in terms:\n assert isinstance(t, Node), \\\n 'Term %s must be an rdflib term' % (t,)\n return True\n\n\ndef test():\n import doctest\n doctest.testmod()\n\n\nif __name__ == '__main__':\n test()\n"}
|
{"rdflib/graph.py": [{"type": "function", "name": "Graph.cbd", "lines": [1272, 1319], "signature": "def cbd(self, resource):", "doc": "Retrieves the Concise Bounded Description of a Resource from a Graph\n\nConcise Bounded Description (CBD) is defined in [1] as:\n\nGiven a particular node (the starting node) in a particular RDF graph (the source graph), a subgraph of that\nparticular graph, taken to comprise a concise bounded description of the resource denoted by the starting node,\ncan be identified as follows:\n\n 1. Include in the subgraph all statements in the source graph where the subject of the statement is the\n starting node;\n 2. Recursively, for all statements identified in the subgraph thus far having a blank node object, include\n in the subgraph all statements in the source graph where the subject of the statement is the blank node\n in question and which are not already included in the subgraph.\n 3. Recursively, for all statements included in the subgraph thus far, for all reifications of each statement\n in the source graph, include the concise bounded description beginning from the rdf:Statement node of\n each reification.\n\nThis results in a subgraph where the object nodes are either URI references, literals, or blank nodes not\nserving as the subject of any statement in the graph.\n\n[1] https://www.w3.org/Submission/CBD/\n\n:param resource: a URIRef object, of the Resource for queried for\n:return: a Graph, subgraph of self"}, {"type": "function", "name": "Graph.cbd.add_to_cbd", "lines": [1300, 1316], "signature": "def add_to_cbd(uri):", "doc": ""}]}
| null |
["test/test_graph_cbd.py::CbdTestCase::testCbd", "test/test_graph_cbd.py::CbdTestCase::testCbdReified"]
|
[]
|
0c11debb5178157baeac27b735e49a757916d2a6
|
{"first_commit_time": 1584098175.0, "pr_title": "Concise Bounded Description", "pr_body": "This PR implements a Graph() function cbd() for 'Concise Bounded Description'. It extracts a subgraph from a source graph as per the rules in the W3C member submission [1].\r\n\r\nThis PR includes documentation for the method, tests and I want it mostly to be an example of a well-presented PR, not so much a rocket science improvement to the codebase!\r\n\r\nIt is also a precursor to implementing SPARQL's `DESCRIBE` queries though.\r\n\r\n~Please note this PR adds `black` to requirements.txt as I'd like `black` to be run on all code being submitted to rdfllib and, for rdflib 6.0.0, all existing code too.~\r\n\r\n[1] https://www.w3.org/Submission/CBD/", "pr_timeline": [{"time": 1584100678.0, "comment": "This PR should allow Issue #123 (Suggested method on Graph for BNode Closure) to be closed"}, {"time": 1584100712.0, "comment": "This PR assists with Issue #479 "}, {"time": 1584231483.0, "comment": "This feature is part of a series of features needed for SPARQL's `DESCRIBE` function, as discussed in Issue #479. This will likely be added to rdflib's 6.0.0 release in about July 2020 (see https://rdflib.github.io)."}, {"time": 1585449079.0, "comment": "As this is already merged, changing the milestone to \"rdflib 5.0.0\" for inclusion in the 5.0.0 changelog generation."}, {"time": 1585454346.0, "comment": "Actually, I stuffed this up by developing the function in a local `master` which then got merged into another branch of mine which was put up for a PR. Before pushing that branch to `origin/master` I removed this so when the CDB branch was merge, that other PR then removed it! See that the `cbd()` function is not actually in `origin/master`, see the place it should be: https://github.com/RDFLib/rdflib/blob/5e691870d242b152a3af09c540bde576ed3aa6c7/rdflib/graph.py#L1323\r\n\r\nSo I'll re-issue this PR's content in a new PR and remove the 5.0.0 label from it as it shouldn't be in the changelog."}, {"time": 1604226963.0, "comment": "Hi. Just curious, any idea where this code ended up? I cannot find it in any branch..."}], "issues": {}}
|
|
Textualize/rich
| 1,706
|
https://github.com/Textualize/rich/pull/1706
|
Textualize__rich-1706
|
[]
|
008854c40772f647dfcb873bc3489e8a1c02d598
|
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 77dd5a0a9b..2b1dc69d04 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -24,6 +24,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Allowed `__rich__` to work recursively
- Allowed Text classes to work with sep in print https://github.com/willmcgugan/rich/issues/1689
+### Added
+
+- Added a `rich.text.Text.from_ansi` helper method for handling pre-formatted input strings https://github.com/willmcgugan/rich/issues/1670
+
## [10.13.0] - 2021-11-07
### Added
diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
index 0e299be6e7..557f548e8d 100644
--- a/CONTRIBUTORS.md
+++ b/CONTRIBUTORS.md
@@ -22,4 +22,5 @@ The following people have contributed to the development of Rich:
- [Clément Robert](https://github.com/neutrinoceros)
- [Tushar Sadhwani](https://github.com/tusharsadhwani)
- [Tim Savage](https://github.com/timsavage)
+- [Nicolas Simonds](https://github.com/0xDEC0DE)
- [Gabriele N. Tornetta](https://github.com/p403n1x87)
diff --git a/docs/source/text.rst b/docs/source/text.rst
index b5f2eaf8f1..142b0639e1 100644
--- a/docs/source/text.rst
+++ b/docs/source/text.rst
@@ -26,6 +26,11 @@ Alternatively, you can construct styled text by calling :meth:`~rich.text.Text.a
text.append(" World!")
console.print(text)
+If you would like to use text that is already formatted with ANSI codes, call :meth:`~rich.text.Text.from_ansi` to convert it to a ``Text`` object:
+
+ text = Text.from_ansi("\033[1mHello, World!\033[0m")
+ console.print(text.spans)
+
Since building Text instances from parts is a common requirement, Rich offers :meth:`~rich.text.Text.assemble` which will combine strings or pairs of string and Style, and return a Text instance. The follow example is equivalent to the code above::
text = Text.assemble(("Hello", "bold magenta"), " World!")
diff --git a/rich/text.py b/rich/text.py
index 52ecd16042..c49e152b60 100644
--- a/rich/text.py
+++ b/rich/text.py
@@ -242,6 +242,40 @@ def from_markup(
rendered_text.overflow = overflow
return rendered_text
+ @classmethod
+ def from_ansi(
+ cls,
+ text: str,
+ *,
+ style: Union[str, Style] = "",
+ justify: Optional["JustifyMethod"] = None,
+ overflow: Optional["OverflowMethod"] = None,
+ no_wrap: Optional[bool] = None,
+ end: str = "\n",
+ tab_size: Optional[int] = 8,
+ ) -> "Text":
+ """Create a Text object from pre-formatted ANSI.
+
+ Args:
+ text (str): A string containing ANSI color codes.
+ style (Union[str, Style], optional): Base style for text. Defaults to "".
+ justify (str, optional): Justify method: "left", "center", "full", "right". Defaults to None.
+ overflow (str, optional): Overflow method: "crop", "fold", "ellipsis". Defaults to None.
+ no_wrap (bool, optional): Disable text wrapping, or None for default. Defaults to None.
+ end (str, optional): Character to end text with. Defaults to "\\\\n".
+ tab_size (int): Number of spaces per tab, or ``None`` to use ``console.tab_size``. Defaults to 8.
+ """
+ from .ansi import AnsiDecoder
+
+ decoded_text = AnsiDecoder().decode_line(text)
+ decoded_text.justify = justify
+ decoded_text.overflow = overflow
+ decoded_text.no_wrap = no_wrap
+ decoded_text.end = end
+ decoded_text.tab_size = tab_size
+ decoded_text.stylize(style)
+ return decoded_text
+
@classmethod
def styled(
cls,
|
diff --git a/tests/test_text.py b/tests/test_text.py
index 3727d1602c..6eecb9ee73 100644
--- a/tests/test_text.py
+++ b/tests/test_text.py
@@ -95,6 +95,15 @@ def test_from_markup():
assert text._spans == [Span(7, 13, "bold")]
+def test_from_ansi():
+ text = Text.from_ansi("Hello, \033[1mWorld!\033[0m")
+ text2 = Text.from_ansi("Hello, \033[1mWorld!\033[0m", style="red")
+ assert str(text) == "Hello, World!"
+ assert text._spans == [Span(7, 13, Style(bold=True))]
+ assert str(text2) == "Hello, World!"
+ assert text2._spans == [Span(7, 13, Style(bold=True)), Span(0, 13, "red")]
+
+
def test_copy():
test = Text()
test.append("Hello", "bold")
| 2021-11-17T21:21:55
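The patch and test patch above add and exercise the new `Text.from_ansi` classmethod. A brief usage sketch, assuming a Rich build that already contains this change (the variable names are illustrative):

```python
from rich.console import Console
from rich.text import Text

console = Console()

# A string that already carries ANSI escape codes, e.g. captured subprocess output
raw = "Hello, \033[1mWorld!\033[0m"

# Decode the escapes into Rich style spans instead of printing them verbatim
text = Text.from_ansi(raw, style="dim")
console.print(text)        # "World!" renders bold; the base style is layered on top
console.print(text.spans)  # inspect the decoded spans, as in the docs snippet above
```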
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"CHANGELOG.md": "# Changelog\n\nAll notable changes to this project will be documented in this file.\n\nThe format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),\nand this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).\n\n## [10.14.0] - 2021-11-16\n\n### Fixed\n\n- Fixed progress speed not updating when total doesn't change\n- Fixed superfluous new line in Status https://github.com/willmcgugan/rich/issues/1662\n- Fixed Windows legacy width again\n- Fixed infinite loop in set_cell_size https://github.com/willmcgugan/rich/issues/1682\n\n### Added\n\n- Added file protocol to URL highlighter https://github.com/willmcgugan/rich/issues/1681\n- Added rich.protocol.rich_cast\n\n### Changed\n\n- Allowed `__rich__` to work recursively\n- Allowed Text classes to work with sep in print https://github.com/willmcgugan/rich/issues/1689\n\n## [10.13.0] - 2021-11-07\n\n### Added\n\n- Added json.dumps parameters to print_json https://github.com/willmcgugan/rich/issues/1638\n\n### Fixed\n\n- Fixed an edge case bug when console module try to detect if they are in a tty at the end of a pytest run\n- Fixed a bug where logging handler raises an exception when running with pythonw (related to https://bugs.python.org/issue13807)\n- Fixed issue with TERM env vars that have more than one hyphen https://github.com/willmcgugan/rich/issues/1640\n- Fixed missing new line after progress bar when terminal is not interactive https://github.com/willmcgugan/rich/issues/1606\n- Fixed exception in IPython when disabling pprint with %pprint https://github.com/willmcgugan/rich/issues/1646\n- Fixed issue where values longer than the console width produced invalid JSON https://github.com/willmcgugan/rich/issues/1653\n- Fixes trailing comma when pretty printing dataclass with last field repr=False https://github.com/willmcgugan/rich/issues/1599\n\n## Changed\n\n- Markdown codeblocks now word-wrap https://github.com/willmcgugan/rich/issues/1515\n\n## [10.12.0] - 2021-10-06\n\n### Updated\n\n- Official Py3.10 release\n\n### Fixed\n\n- Fixed detection of custom repr when pretty printing dataclasses\n\n## [10.11.0] - 2021-09-24\n\n### Added\n\n- Added `suppress` parameter to tracebacks\n- Added `max_frames` parameter to tracebacks\n\n## [10.10.0] - 2021-09-18\n\n### Added\n\n- Added stdin support to `rich.json`\n\n### Fixed\n\n- Fixed pretty printing of objects with fo magic with **getattr** https://github.com/willmcgugan/rich/issues/1492\n\n## [10.9.0] - 2021-08-29\n\n### Added\n\n- Added data parameter to print_json method / function\n- Added an --indent parameter to python -m rich.json\n\n### Changed\n\n- Changed default indent of JSON to 2 (down from 4)\n- Changed highlighting of JSON keys to new style (bold blue)\n\n## [10.8.0] - 2021-08-28\n\n### Added\n\n- Added Panel.subtitle\n- Added Panel.subtitle_align\n- Added rich.json.JSON\n- Added rich.print_json and Console.print_json\n\n### Fixed\n\n- Fixed a bug where calling `rich.reconfigure` within a `pytest_configure` hook would lead to a crash\n- Fixed highlight not being passed through options https://github.com/willmcgugan/rich/issues/1404\n\n## [10.7.0] - 2021-08-05\n\n### Added\n\n- Added Text.apply_meta\n- Added meta argument to Text.assemble\n- Added Style.from_meta\n- Added Style.on\n- Added Text.on\n\n### Changed\n\n- Changed `RenderGroup` to `Group` and `render_group` to `group` (old names remain for compatibility but will be deprecated in the future)\n- Changed `rich.repr.RichReprResult` to `rich.repr.Result` (old 
names remain for compatibility but will be deprecated in the future)\n- Changed meta serialization to use pickle rather than marshal to permit callables\n\n## [10.6.0] - 2021-07-12\n\n### Deprecated\n\n- Added deprecation warning for tabulate_mapping which will be removed in v11.0.0\n\n### Added\n\n- Added precision argument to filesize.decimal\n- Added separator argument to filesize.decimal\n- Added \\_rich_traceback_guard to Traceback\n- Added emoji_variant to Console\n- Added -emoji and -text variant selectors to emoji code\n\n### Fixed\n\n- Fixed issue with adjoining color tags https://github.com/willmcgugan/rich/issues/1334\n\n### Changed\n\n- Changed Console.size to use unproxied stdin and stdout\n\n## [10.5.0] - 2021-07-05\n\n### Fixed\n\n- Fixed Pandas objects not pretty printing https://github.com/willmcgugan/rich/issues/1305\n- Fixed https://github.com/willmcgugan/rich/issues/1256\n- Fixed typing with rich.repr.auto decorator\n- Fixed repr error formatting https://github.com/willmcgugan/rich/issues/1326\n\n### Added\n\n- Added new_line_start argument to Console.print\n- Added Segment.divide method\n- Added Segment.split_cells method\n- Added segment.SegmentLines class\n\n## [10.4.0] - 2021-06-18\n\n### Added\n\n- Added Style.meta\n- Added rich.repr.auto decorator\n\n### Fixed\n\n- Fixed error pretty printing classes with special **rich_repr** method\n\n## [10.3.0] - 2021-06-09\n\n### Added\n\n- Added Console.size setter\n- Added Console.width setter\n- Added Console.height setter\n- Added angular style Rich reprs\n- Added an IPython extension. Load via `%load_ext rich`\n\n### Changed\n\n- Changed the logic for retrieving the calling frame in console logs to a faster one for the Python implementations that support it.\n\n## [10.2.2] - 2021-05-19\n\n### Fixed\n\n- Fixed status not rendering console markup https://github.com/willmcgugan/rich/issues/1244\n\n## [10.2.1] - 2021-05-17\n\n### Fixed\n\n- Fixed panel in Markdown exploding https://github.com/willmcgugan/rich/issues/1234\n\n## [10.2.0] - 2021-05-12\n\n### Added\n\n- Added syntax for call, i.e. \"Foo(bar)\"\n- Added Console.measure as a convenient alias for Measurement.get\n- Added support for pretty printing attrs objects\n- Added mappingproxy to pretty print\n- Added UserDict and UserList support to pretty printer\n\n### Changed\n\n- Changed colorama init to set strip=False\n- Changed highlighter for False, True, None to not match in the middle of a word. i.e. 
NoneType is no longer highlighted as None\n\n### Fixed\n\n- Fixed initial blank lines removed from Syntax https://github.com/willmcgugan/rich/issues/1214\n\n## [10.1.0] - 2021-04-03\n\n### Fixed\n\n- Fixed support for jupyter qtconsole and similar Jupyter environments\n\n## [10.0.1] - 2021-03-30\n\n### Fixed\n\n- Fixed race condition that duplicated lines in progress https://github.com/willmcgugan/rich/issues/1144\n\n## [10.0.0] - 2021-03-27\n\n### Changed\n\n- Made pydoc import lazy as at least one use found it slow to import https://github.com/willmcgugan/rich/issues/1104\n- Modified string highlighting to not match in the middle of a word, so that apostrophes are not considered strings\n- New way of encoding control codes in Segment\n- New signature for Control class\n- Changed Layout.split to use new Splitter class\n- Improved layout.tree\n- Changed default theme color for repr.number to cyan\n- `__rich_measure__` signature changed to accept ConsoleOptions rather than max_width\n- `text` parameter to rich.spinner.Spinner changed to RenderableType\n\n### Added\n\n- Added `__rich_repr__` protocol method to Pretty\n- Added rich.region.Region\n- Added ConsoleOptions.update_dimensions\n- Added rich.console.ScreenUpdate\n- Added Console.is_alt_screen\n- Added Control.segment, Control.bell, Control.home, Control.move_to, Control.clear, Control.show_cursor, Control.alt_screen\n- Added Console.update_screen and Console.update_screen_lines\n- Added Layout.add_split, Layout.split_column, Layout.split_row, layout.refresh\n- Added new Rich repr protocol `__rich_repr__`\n\n### Fixed\n\n- Fixed table style taking precedence over row style https://github.com/willmcgugan/rich/issues/1129\n- Fixed incorrect measurement of Text with new lines and whitespace https://github.com/willmcgugan/rich/issues/1133\n- Made type annotations consistent for various `total` keyword arguments in `rich.progress` and rich.`progress_bar`\n- Disabled Progress no longer displays itself when starting https://github.com/willmcgugan/rich/pull/1125\n- Animations no longer reset when updating rich.status.Status\n\n## [9.13.0] - 2021-03-06\n\n### Added\n\n- Pretty printer now supports dataclasses\n\n### Fixed\n\n- Fixed Syntax background https://github.com/willmcgugan/rich/issues/1088\n- Fix for double tracebacks when no formatter https://github.com/willmcgugan/rich/issues/1079\n\n### Changed\n\n- Added ws and wss to url highlighter\n\n## [9.12.4] - 2021-03-01\n\n### Fixed\n\n- Fixed custom formatters with rich tracebacks in RichHandler https://github.com/willmcgugan/rich/issues/1079\n\n### Changed\n\n- Allow highly compressed table cells to go to 0 width\n- Optimization to remove empty styles in various places\n\n## [9.12.3] - 2021-02-28\n\n### Changed\n\n- Optimized Padding\n\n## [9.12.2] - 2021-02-27\n\n### Added\n\n- Added ConsoleOptions.copy\n\n### Changed\n\n- Optimized ConsoleOptions.update\n\n## [9.12.1] - 2021-02-27\n\n### Fixed\n\n- Fixed deadlock in Progress https://github.com/willmcgugan/rich/issues/1061\n\n### Added\n\n- Added Task.finished_speed\n\n### Changed\n\n- Froze TransferSpeedColumn speed when task is finished\n- Added SIGINT handler to downloader.py example\n- Optimization for large tables\n\n## [9.12.0] - 2021-02-24\n\n### Fixed\n\n- Fixed issue with Syntax and missing lines in Layout https://github.com/willmcgugan/rich/issues/1050\n- Fixed issue with nested markdown elements https://github.com/willmcgugan/rich/issues/1036\n- Fixed new lines not invoking render hooks 
https://github.com/willmcgugan/rich/issues/1052\n- Fixed Align setting height to child https://github.com/willmcgugan/rich/issues/1057\n\n### Changed\n\n- Printing a table with no columns now result in a blank line https://github.com/willmcgugan/rich/issues/1044\n\n### Added\n\n- Added height to Panel\n\n## [9.11.1] - 2021-02-20\n\n### Fixed\n\n- Fixed table with expand=False not expanding when justify=\"center\"\n- Fixed single renderable in Layout not respecting height\n- Fixed COLUMNS and LINES env var https://github.com/willmcgugan/rich/issues/1019\n- Layout now respects minimum_size when fixes sizes are greater than available space\n- HTML export now changes link underline score to match terminal https://github.com/willmcgugan/rich/issues/1009\n\n### Changed\n\n- python -m rich.markdown and rich.syntax show usage with no file\n\n### Added\n\n- Added height parameter to Layout\n- Added python -m rich.segment\n\n## [9.11.0] - 2021-02-15\n\n### Fixed\n\n- Fixed error message for tracebacks with broken `__str__` https://github.com/willmcgugan/rich/issues/980\n- Fixed markup edge case https://github.com/willmcgugan/rich/issues/987\n\n### Added\n\n- Added cheeky sponsorship request to test card\n- Added `quiet` argument to Console constructor\n- Added support for a callback function to format timestamps (allows presentation of milliseconds)\n- Added Console.set_alt_screen and Console.screen\n- Added height to ConsoleOptions\n- Added `vertical` parameter to Align\n- Added Layout class\n\n### Changed\n\n- Pretty.overflow now defaults to None\n- Panel now respects options.height\n- Traceback lexer defaults to Python if no extension on source\n- Added ConsoleDimensions size attribute to ConsoleOptions so that size can't change mid-render\n\n## [9.10.0] - 2021-01-27\n\n### Changed\n\n- Some optimizations for Text\n- Further optimized Tracebacks by not tokenizing code more that necessary\n- Table Column.header_style and Column.footer_style are now added to Table header/footer style\n\n## [9.9.0] - 2021-01-23\n\n### Changed\n\n- Extended Windows palette to 16 colors\n- Modified windows palette to Windows 10 colors\n- Change regex for attrib_name to be more performant\n- Optimized traceback generation\n\n### Fixed\n\n- Fix double line tree guides on Windows\n- Fixed Tracebacks ignoring initial blank lines\n- Partial fix for tracebacks not finding source after chdir\n- Fixed error message when code in tracebacks doesn't have an extension https://github.com/willmcgugan/rich/issues/996\n\n### Added\n\n- Added post_style argument to Segment.apply_style\n\n## [9.8.2] - 2021-01-15\n\n### Fixed\n\n- Fixed deadlock in live https://github.com/willmcgugan/rich/issues/927\n\n## [9.8.1] - 2021-01-13\n\n### Fixed\n\n- Fixed rich.inspect failing with attributes that claim to be callable but aren't https://github.com/willmcgugan/rich/issues/916\n\n## [9.8.0] - 2021-01-11\n\n### Added\n\n- Added **rich_measure** for tree\n- Added rich.align.VerticalCenter\n\n### Changed\n\n- The `style` argument on Align now applies to background only\n- Changed display of progress bars in no_color mode for clarity\n- Console property `size` will fall back to getting the terminal size of stdout it stdin fails, this allows size to be correctly determined when piping\n\n### Fixed\n\n- Fixed panel cropping when shrunk too bar\n- Allow passing markdown over STDIN when using `python -m rich.markdown`\n- Fix printing MagicMock.mock_calls https://github.com/willmcgugan/rich/issues/903\n\n## [9.7.0] - 2021-01-09\n\n### Added\n\n- Added 
rich.tree\n- Added no_color argument to Console\n\n## [9.6.2] - 2021-01-07\n\n### Fixed\n\n- Fixed markup escaping edge case https://github.com/willmcgugan/rich/issues/878\n- Double tag escape, i.e. `\"\\\\[foo]\"` results in a backslash plus `[foo]` tag\n- Fixed header_style not applying to headers in positional args https://github.com/willmcgugan/rich/issues/953\n\n## [9.6.1] - 2020-12-31\n\n### Fixed\n\n- Fixed encoding error on Windows when loading code for Tracebacks\n\n## [9.6.0] - 2020-12-30\n\n### Changed\n\n- MarkupError exception raise from None to omit internal exception\n- Factored out RichHandler.render and RichHandler.render_message for easier extending\n- Display pretty printed value in rich.inspect\n\n### Added\n\n- Added Progress.TimeElapsedColumn\n- Added IPython support to pretty.install\n\n### Fixed\n\n- Fixed display of locals in Traceback for stdin\n\n## [9.5.1] - 2020-12-19\n\n### Fixed\n\n- Fixed terminal size detection on Windows https://github.com/willmcgugan/rich/issues/836\n- Fixed hex number highlighting\n\n## [9.5.0] - 2020-12-18\n\n### Changed\n\n- If file is not specified on Console then the Console.file will return the current sys.stdout. Prior to 9.5.0 sys.stdout was cached on the Console, which could break code that wrapped sys.stdout after the Console was constructed.\n- Changed `Color.__str__` to not include ansi codes\n- Changed Console.size to get the terminal dimensions via sys.stdin. This means that if you set file to be an io.StringIO file then the width will be set to the current terminal dimensions and not a default of 80.\n\n### Added\n\n- Added stderr parameter to Console\n- Added rich.reconfigure\n- Added `Color.__rich__`\n- Added Console.soft_wrap\n- Added Console.style parameter\n- Added Table.highlight parameter to enable highlighting of cells\n- Added Panel.highlight parameter to enable highlighting of panel title\n- Added highlight to ConsoleOptions\n\n### Fixed\n\n- Fixed double output in rich.live https://github.com/willmcgugan/rich/issues/485\n- Fixed Console.out highlighting not reflecting defaults https://github.com/willmcgugan/rich/issues/827\n- FileProxy now raises TypeError for empty non-str arguments https://github.com/willmcgugan/rich/issues/828\n\n## [9.4.0] - 2020-12-12\n\n### Added\n\n- Added rich.live https://github.com/willmcgugan/rich/pull/382\n- Added algin parameter to Rule and Console.rule\n- Added rich.Status class and Console.status\n- Added getitem to Text\n- Added style parameter to Console.log\n- Added rich.diagnose command\n\n### Changed\n\n- Table.add_row style argument now applies to entire line and not just cells\n- Added end_section parameter to Table.add_row to force a line underneath row\n\n## Fixed\n\n- Fixed suppressed traceback context https://github.com/willmcgugan/rich/issues/468\n\n## [9.3.0] - 2020-12-1\n\n### Added\n\n- Added get_datetime parameter to Console, to allow for repeatable tests\n- Added get_time parameter to Console\n- Added rich.abc.RichRenderable\n- Added expand_all to rich.pretty.install()\n- Added locals_max_length, and locals_max_string to Traceback and logging.RichHandler\n- Set defaults of max_length and max_string for Traceback to 10 and 80\n- Added disable argument to Progress\n\n### Changed\n\n- Reformatted test card (python -m rich)\n\n### Fixed\n\n- Fixed redirecting of stderr in Progress\n- Fixed broken expanded tuple of one https://github.com/willmcgugan/rich/issues/445\n- Fixed traceback message with `from` exceptions\n- Fixed justify argument not working in console.log 
https://github.com/willmcgugan/rich/issues/460\n\n## [9.2.0] - 2020-11-08\n\n### Added\n\n- Added tracebacks_show_locals parameter to RichHandler\n- Added max_string to Pretty\n- Added rich.ansi.AnsiDecoder\n- Added decoding of ansi codes to captured stdout in Progress\n- Added expand_all to rich.pretty.pprint\n\n### Changed\n\n- Applied dim=True to indent guide styles\n- Factored out RichHandler.get_style_and_level to allow for overriding in subclasses\n- Hid progress bars from html export\n- rich.pretty.pprint now soft wraps\n\n## [9.1.0] - 2020-10-23\n\n### Added\n\n- Added Text.with_indentation_guide\n- Added Text.detect_indentation\n- Added Pretty.indent_guides\n- Added Syntax.indent_guides\n- Added indent_guides parameter on pretty.install\n- Added rich.pretty.pprint\n- Added max_length to Pretty\n\n### Changed\n\n- Enabled indent guides on Tracebacks\n\n### Fixed\n\n- Fixed negative time remaining in Progress bars https://github.com/willmcgugan/rich/issues/378\n\n## [9.0.1] - 2020-10-19\n\n### Fixed\n\n- Fixed broken ANSI codes in input on windows legacy https://github.com/willmcgugan/rich/issues/393\n\n## [9.0.0] - 2020-10-18\n\n### Fixed\n\n- Progress download column now displays decimal units\n\n### Added\n\n- Support for Python 3.9\n- Added legacy_windows to ConsoleOptions\n- Added ascii_only to ConsoleOptions\n- Added box.SQUARE_DOUBLE_HEAD\n- Added highlighting of EUI-48 and EUI-64 (MAC addresses)\n- Added Console.pager\n- Added Console.out\n- Added binary_units in progress download column\n- Added Progress.reset\n- Added Style.background_style property\n- Added Bar renderable https://github.com/willmcgugan/rich/pull/361\n- Added Table.min_width\n- Added table.Column.min_width and table.Column.max_width, and same to Table.add_column\n\n### Changed\n\n- Dropped box.get_safe_box function in favor of Box.substitute\n- Changed default padding in Panel from 0 to (0, 1) https://github.com/willmcgugan/rich/issues/385\n- Table with row_styles will extend background color between cells if the box has no vertical dividerhttps://github.com/willmcgugan/rich/issues/383\n- Changed default of fit kwarg in render_group() from False to True\n- Renamed rich.bar to rich.progress_bar, and Bar class to ProgressBar, rich.bar is now the new solid bar class\n\n### Fixed\n\n- Fixed typo in `Style.transparent_background` method name.\n\n## [8.0.0] - 2020-10-03\n\n### Added\n\n- Added Console.bell method\n- Added Set to types that Console.print will automatically pretty print\n- Added show_locals to Traceback\n- Added theme stack mechanism, see Console.push_theme and Console.pop_theme\n\n### Changed\n\n- Changed Style.empty to Style.null to better reflect what it does\n- Optimized combining styles involving a null style\n- Change error messages in Style.parse to read better\n\n### Fixed\n\n- Fixed Table.\\_\\_rich_measure\\_\\_\n- Fixed incorrect calculation of fixed width columns\n\n## [7.1.0] - 2020-09-26\n\n### Added\n\n- Added Console.begin_capture, Console.end_capture and Console.capture\n- Added Table.title_justify and Table.caption_justify https://github.com/willmcgugan/rich/issues/301\n\n### Changed\n\n- Improved formatting of exceptions\n- Enabled Rich exceptions in logging https://github.com/taliraj\n- UTF-8 encoding is now mentioned in HTML head section\n\n### Removed\n\n- Removed line_numbers argument from traceback.install, which was undocumented and did nothing\n\n## [7.0.0] - 2020-09-18\n\n### Added\n\n- New ansi_dark and ansi_light themes\n- Added Text.append_tokens for fast appending of 
string + Style pairs\n- Added Text.remove_suffix\n- Added Text.append_tokens\n\n### Changed\n\n- Text.tabs_to_spaces was renamed to Text.expand_tabs, which works in place rather than returning a new instance\n- Renamed Column.index to Column.\\_index\n- Optimized Style.combine and Style.chain\n- Optimized text rendering by fixing internal cache mechanism\n- Optimized hash generation for Styles\n\n## [6.2.0] - 2020-09-13\n\n### Added\n\n- Added inline code highlighting to Markdown\n\n## [6.1.2] - 2020-09-11\n\n### Added\n\n- Added ipv4 and ipv6 to ReprHighlighter\n\n### Changed\n\n- The `#` sign is included in url highlighting\n\n### Fixed\n\n- Fixed force-color switch in rich.syntax and rich.markdown commands\n\n## [6.1.1] - 2020-09-07\n\n### Changed\n\n- Restored \"def\" in inspect signature\n\n## [6.1.0] - 2020-09-07\n\n### Added\n\n- New inspect module\n- Added os.\\_Environ to pretty print\n\n### Fixed\n\n- Prevented recursive renderables from getting stuck\n\n## Changed\n\n- force_terminal and force_jupyter can now be used to force the disabled state, or left as None to auto-detect.\n- Panel now expands to fit title if supplied\n\n## [6.0.0] - 2020-08-25\n\n### Fixed\n\n- Fixed use of `__rich__` cast\n\n### Changed\n\n- New algorithm to pretty print which fits more on a line if possible\n- Deprecated `character` parameter in Rule and Console.rule, in favor of `characters`\n- Optimized Syntax.from_path to avoid searching all lexers, which also speeds up tracebacks\n\n### Added\n\n- Added soft_wrap flag to Console.print\n\n## [5.2.1] - 2020-08-19\n\n### Fixed\n\n- Fixed underscore with display hook https://github.com/willmcgugan/rich/issues/235\n\n## [5.2.0] - 2020-08-14\n\n### Changed\n\n- Added crop argument to Console.print\n- Added \"ignore\" overflow method\n- Added multiple characters per rule @hedythedev https://github.com/willmcgugan/rich/pull/207\n\n## [5.1.2] - 2020-08-10\n\n### Fixed\n\n- Further optimized pretty printing ~5X.\n\n## [5.1.1] - 2020-08-09\n\n### Fixed\n\n- Optimized pretty printing ~3X faster\n\n## [5.1.0] - 2020-08-08\n\n### Added\n\n- Added Text.cell_len\n- Added helpful message regarding unicode decoding errors https://github.com/willmcgugan/rich/issues/212\n- Added display hook with pretty.install()\n\n### Fixed\n\n- Fixed deprecation warnings re backslash https://github.com/willmcgugan/rich/issues/210\n- Fixed repr highlighting of scientific notation, e.g. 1e100\n\n### Changed\n\n- Implemented pretty printing, and removed pprintpp from dependencies\n- Optimized Text.join\n\n## [5.0.0] - 2020-08-02\n\n### Changed\n\n- Change to console markup syntax to not parse Python structures as markup, i.e. `[1,2,3]` is treated as a literal, not a tag.\n- Standard color numbers syntax has changed to `\"color(<number>)\"` so that `[5]` (for example) is considered a literal.\n- Markup escape method has changed from double brackets to preceding with a backslash, so `foo[[]]` would be `foo\\[bar]`\n\n## [4.2.2] - 2020-07-30\n\n### Changed\n\n- Added thread to automatically call update() in progress.track(). 
Replacing previous adaptive algorithm.\n- Second attempt at working around https://bugs.python.org/issue37871\n\n## [4.2.1] - 2020-07-29\n\n### Added\n\n- Added show_time and show_level parameters to RichHandler https://github.com/willmcgugan/rich/pull/182\n\n### Fixed\n\n- Fixed progress.track iterator exiting early https://github.com/willmcgugan/rich/issues/189\n- Added workaround for Python bug https://bugs.python.org/issue37871, fixing https://github.com/willmcgugan/rich/issues/186\n\n### Changed\n\n- Set overflow=fold for log messages https://github.com/willmcgugan/rich/issues/190\n\n## [4.2.0] - 2020-07-27\n\n### Fixed\n\n- Fixed missing new lines https://github.com/willmcgugan/rich/issues/178\n- Fixed Progress.track https://github.com/willmcgugan/rich/issues/184\n- Remove control codes from exported text https://github.com/willmcgugan/rich/issues/181\n- Implemented auto-detection and color rendition of 16-color mode\n\n## [4.1.0] - 2020-07-26\n\n### Changed\n\n- Optimized progress.track for very quick iterations\n- Force default size of 80x25 if get_terminal_size reports size of 0,0\n\n## [4.0.0] - 2020-07-23\n\nMajor version bump for a breaking change to `Text.stylize signature`, which corrects a minor but irritating API wart. The style now comes first and the `start` and `end` offsets default to the entire text. This allows for `text.stylize_all(style)` to be replaced with `text.stylize(style)`. The `start` and `end` offsets now support negative indexing, so `text.stylize(\"bold\", -1)` makes the last character bold.\n\n### Added\n\n- Added markup switch to RichHandler https://github.com/willmcgugan/rich/issues/171\n\n### Changed\n\n- Change signature of Text.stylize to accept style first\n- Remove Text.stylize_all which is no longer necessary\n\n### Fixed\n\n- Fixed rendering of Confirm prompt https://github.com/willmcgugan/rich/issues/170\n\n## [3.4.1] - 2020-07-22\n\n### Fixed\n\n- Fixed incorrect default of expand in Table.grid\n\n## [3.4.0] - 2020-07-22\n\n### Added\n\n- Added stream parameter to Console.input\n- Added password parameter to Console.input\n- Added description parameter to Progress.update\n- Added rich.prompt\n- Added detecting 'dumb' terminals\n- Added Text.styled alternative constructor\n\n### Fixes\n\n- Fixed progress bars so that they are readable when color is disabled\n\n## [3.3.2] - 2020-07-14\n\n### Changed\n\n- Optimized Text.pad\n\n### Added\n\n- Added rich.scope\n- Change log_locals to use scope.render_scope\n- Added title parameter to Columns\n\n## [3.3.1] - 2020-07-13\n\n### Added\n\n- box.ASCII_DOUBLE_HEAD\n\n### Changed\n\n- Removed replace of -- --- ... 
from Markdown, as it made it impossible to include CLI info\n\n## [3.3.0] - 2020-07-12\n\n### Added\n\n- Added title and title_align options to Panel\n- Added pad and width parameters to Align\n- Added end parameter to Rule\n- Added Text.pad and Text.align methods\n- Added leading parameter to Table\n\n## [3.2.0] - 2020-07-10\n\n### Added\n\n- Added Align.left Align.center Align.right shortcuts\n- Added Panel.fit shortcut\n- Added align parameter to Columns\n\n### Fixed\n\n- Align class now pads to the right, like Text\n- ipywidgets added as an optional dependency\n- Issue with Panel and background color\n- Fixed missing `__bool__` on Segment\n\n### Changed\n\n- Added `border_style` argument to Panel (note, `style` now applies to interior of the panel)\n\n## [3.1.0] - 2020-07-09\n\n### Changed\n\n- Progress bars now work in Jupyter\n\n## Added\n\n- Added refresh_per_second to progress.track\n- Added styles to BarColumn and progress.track\n\n## [3.0.5] - 2020-07-07\n\n### Fixed\n\n- Fixed Windows version number require for truecolor\n\n## [3.0.4] - 2020-07-07\n\n### Changed\n\n- More precise detection of Windows console https://github.com/willmcgugan/rich/issues/140\n\n## [3.0.3] - 2020-07-03\n\n### Fixed\n\n- Fixed edge case with wrapped and overflowed text\n\n### Changed\n\n- New algorithm for compressing table that priorities smaller columns\n\n### Added\n\n- Added safe_box parameter to Console constructor\n\n## [3.0.2] - 2020-07-02\n\n### Added\n\n- Added rich.styled.Styled class to apply styles to renderable\n- Table.add_row now has an optional style parameter\n- Added table_movie.py to examples\n\n### Changed\n\n- Modified box options to use half line characters at edges\n- Non no_wrap columns will now shrink below minimum width if table is compressed\n\n## [3.0.1] - 2020-06-30\n\n### Added\n\n- Added box.ASCII2\n- Added markup argument to logging extra\n\n### Changed\n\n- Setting a non-None width now implies expand=True\n\n## [3.0.0] - 2020-06-28\n\n### Changed\n\n- Enabled supported box chars for legacy Windows, and introduce `safe_box` flag\n- Disable hyperlinks on legacy Windows\n- Constructors for Rule and Panel now have keyword only arguments (reason for major version bump)\n- Table.add_colum added keyword only arguments\n\n### Fixed\n\n- Fixed Table measure\n\n## [2.3.1] - 2020-06-26\n\n### Fixed\n\n- Disabled legacy_windows if jupyter is detected https://github.com/willmcgugan/rich/issues/125\n\n## [2.3.0] - 2020-06-26\n\n### Fixed\n\n- Fixed highlighting of paths / filenames\n- Corrected docs for RichHandler which erroneously said default console writes to stderr\n\n### Changed\n\n- Allowed `style` parameter for `highlight_regex` to be a callable that returns a style\n\n### Added\n\n- Added optional highlighter parameter to RichHandler\n\n## [2.2.6] - 2020-06-24\n\n### Changed\n\n- Store a \"link id\" on Style instance, so links containing different styles are highlighted together. 
(https://github.com/willmcgugan/rich/pull/123)\n\n## [2.2.5] - 2020-06-23\n\n### Fixed\n\n- Fixed justify of tables (https://github.com/willmcgugan/rich/issues/117)\n\n## [2.2.4] - 2020-06-21\n\n### Added\n\n- Added enable_link_path to RichHandler\n- Added legacy_windows switch to Console constructor\n\n## [2.2.3] - 2020-06-15\n\n### Fixed\n\n- Fixed console.log hyperlink not containing full path\n\n### Changed\n\n- Used random number for hyperlink id\n\n## [2.2.2] - 2020-06-14\n\n### Changed\n\n- Exposed RichHandler highlighter as a class var\n\n## [2.2.1] - 2020-06-14\n\n### Changed\n\n- Linked path in log render to file\n\n## [2.2.0] - 2020-06-14\n\n### Added\n\n- Added redirect_stdout and redirect_stderr to Progress\n\n### Changed\n\n- printing to console with an active Progress doesn't break visuals\n\n## [2.1.0] - 2020-06-11\n\n### Added\n\n- Added 'transient' option to Progress\n\n### Changed\n\n- Truncated overly long text in Rule with ellipsis overflow\n\n## [2.0.1] - 2020-06-10\n\n### Added\n\n- Added expand option to Padding\n\n### Changed\n\n- Some minor optimizations in Text\n\n### Fixed\n\n- Fixed broken rule with CJK text\n\n## [2.0.0] - 2020-06-06\n\n### Added\n\n- Added overflow methods\n- Added no_wrap option to print()\n- Added width option to print\n- Improved handling of compressed tables\n\n### Fixed\n\n- Fixed erroneous space at end of log\n- Fixed erroneous space at end of progress bar\n\n### Changed\n\n- Renamed \\_ratio.ratio_divide to \\_ratio.ratio_distribute\n- Renamed JustifyValues to JustifyMethod (backwards incompatible)\n- Optimized \\_trim_spans\n- Enforced keyword args in Console / Text interfaces (backwards incompatible)\n- Return self from text.append\n\n## [1.3.1] - 2020-06-01\n\n### Changed\n\n- Changed defaults of Table.grid\n- Polished listdir.py example\n\n### Added\n\n- Added width argument to Columns\n\n### Fixed\n\n- Fixed for `columns_first` argument in Columns\n- Fixed incorrect padding in columns with fixed width\n\n## [1.3.0] - 2020-05-31\n\n### Added\n\n- Added rich.get_console() function to get global console instance.\n- Added Columns class\n\n### Changed\n\n- Updated `markdown.Heading.create()` to work with subclassing.\n- Console now transparently works with Jupyter\n\n### Fixed\n\n- Fixed issue with broken table with show_edge=False and a non-None box arg\n\n## [1.2.3] - 2020-05-24\n\n### Added\n\n- Added `padding` parameter to Panel\n- Added 'indeterminate' state when progress bars aren't started\n\n### Fixed\n\n- Fixed Progress deadlock https://github.com/willmcgugan/rich/issues/90\n\n### Changed\n\n- Auto-detect \"truecolor\" color system when in Windows Terminal\n\n## [1.2.2] - 2020-05-22\n\n### Fixed\n\n- Issue with right aligned wrapped text adding extra spaces\n\n## [1.2.1] - 2020-05-22\n\n### Fixed\n\n- Issue with sum and Style\n\n## [1.2.0] - 2020-05-22\n\n### Added\n\n- Support for double underline, framed, encircled, and overlined attributes\n\n### Changed\n\n- Optimized Style\n- Changed methods `__console__` to `__rich_console__`, and `__measure__` to `__rich_measure__`\n\n## [1.1.9] - 2020-05-20\n\n### Fixed\n\n- Exception when BarColumn.bar_width == None\n\n## [1.1.8] - 2020-05-20\n\n### Changed\n\n- Optimizations for Segment, Console and Table\n\n### Added\n\n- Added Console.clear method\n- Added exporting of links to HTML\n\n## [1.1.7] - 2020-05-19\n\n### Added\n\n- Added collapse_padding option to Table.\n\n### Changed\n\n- Some style attributes may be abbreviated (b for bold, i for italic etc). 
Previously abbreviations worked in console markup but only one at a time, i.e. \"[b]Hello[/]\" but not \"[b i]Hello[/]\" -- now they work everywhere.\n- Renamed 'text' property on Text to 'plain'. i.e. text.plain returns a string version of the Text instance.\n\n### Fixed\n\n- Fixed zero division if total is 0 in progress bar\n\n## [1.1.6] - 2020-05-17\n\n### Added\n\n- Added rich.align.Align class\n- Added justify argument to Console.print and console.log\n\n## [1.1.5] - 2020-05-15\n\n### Changed\n\n- Changed progress bars to write to stdout on terminal and hide on non-terminal\n\n## [1.1.4] - 2020-05-15\n\n### Fixed\n\n- Fixed incorrect file and link in progress.log\n- Fixes for legacy windows: Bar, Panel, and Rule now use ASCII characters\n- show_cursor is now a no-op on legacy windows\n\n### Added\n\n- Added Console.input\n\n### Changed\n\n- Disable progress bars when not writing to a terminal\n\n## [1.1.3] - 2020-05-14\n\n### Fixed\n\n- Issue with progress of one line`\n\n## [1.1.2] - 2020-05-14\n\n### Added\n\n- Added -p switch to python -m rich.markdown to page output\n- Added Console.control to output control codes\n\n### Changed\n\n- Changed Console log_time_format to no longer require a space at the end\n- Added print and log to Progress to render terminal output when progress is active\n\n## [1.1.1] - 2020-05-12\n\n### Changed\n\n- Stripped cursor moving control codes from text\n\n## [1.1.0] - 2020-05-10\n\n### Added\n\n- Added hyperlinks to Style and markup\n- Added justify and code theme switches to markdown command\n\n## [1.0.3] - 2020-05-08\n\n### Added\n\n- Added `python -m rich.syntax` command\n\n## [1.0.2] - 2020-05-08\n\n### Fixed\n\n- Issue with Windows legacy support https://github.com/willmcgugan/rich/issues/59\n\n## [1.0.1] - 2020-05-08\n\n### Changed\n\n- Applied console markup after highlighting\n- Documented highlighting\n- Changed Markup parser to handle overlapping styles\n- Relaxed dependency on colorama\n- Allowed Theme to accept values as style definitions (str) as well as Style instances\n- Added a panel to emphasize code in Markdown\n\n### Added\n\n- Added markup.escape\n- Added `python -m rich.theme` command\n- Added `python -m rich.markdown` command\n- Added rendering of images in Readme (links only)\n\n### Fixed\n\n- Fixed Text.assemble not working with strings https://github.com/willmcgugan/rich/issues/57\n- Fixed table when column widths must be compressed to fit\n\n## [1.0.0] - 2020-05-03\n\n### Changed\n\n- Improvements to repr highlighter to highlight URLs\n\n## [0.8.13] - 2020-04-28\n\n### Fixed\n\n- Fixed incorrect markdown rendering for quotes and changed style\n\n## [0.8.12] - 2020-04-21\n\n### Fixed\n\n- Removed debug print from rich.progress\n\n## [0.8.11] - 2020-04-14\n\n### Added\n\n- Added Table.show_lines to render lines between rows\n\n### Changed\n\n- Added markup escape with double square brackets\n\n## [0.8.10] - 2020-04-12\n\n### Fixed\n\n- Fix row_styles applying to header\n\n## [0.8.9] - 2020-04-12\n\n### Changed\n\n- Added force_terminal option to `Console.__init__`\n\n### Added\n\n- Added Table.row_styles to enable zebra striping.\n\n## [0.8.8] - 2020-03-31\n\n### Fixed\n\n- Fixed background in Syntax\n\n## [0.8.7] - 2020-03-31\n\n### Fixed\n\n- Broken wrapping of long lines\n- Fixed wrapping in Syntax\n\n### Changed\n\n- Added word_wrap option to Syntax, which defaults to False.\n- Added word_wrap option to Traceback.\n\n## [0.8.6] - 2020-03-29\n\n### Added\n\n- Experimental Jupyter notebook support: from rich.jupyter import 
print\n\n## [0.8.5] - 2020-03-29\n\n### Changed\n\n- Smarter number parsing regex for repr highlighter\n\n### Added\n\n- uuid highlighter for repr\n\n## [0.8.4] - 2020-03-28\n\n### Added\n\n- Added 'test card', run python -m rich\n\n### Changed\n\n- Detected windows terminal, defaulting to colorama support\n\n### Fixed\n\n- Fixed table scaling issue\n\n## [0.8.3] - 2020-03-27\n\n### Fixed\n\n- CJK right align\n\n## [0.8.2] - 2020-03-27\n\n### Changed\n\n- Fixed issue with 0 speed resulting in zero division error\n- Changed signature of Progress.update\n- Made calling start() a second time a no-op\n\n## [0.8.1] - 2020-03-22\n\n### Added\n\n- Added progress.DownloadColumn\n\n## [0.8.0] - 2020-03-17\n\n### Added\n\n- CJK support\n- Console level highlight flag\n- Added encoding argument to Syntax.from_path\n\n### Changed\n\n- Dropped support for Windows command prompt (try https://www.microsoft.com/en-gb/p/windows-terminal-preview/)\n- Added task_id to Progress.track\n\n## [0.7.2] - 2020-03-15\n\n### Fixed\n\n- KeyError for missing pygments style\n\n## [0.7.1] - 2020-03-13\n\n### Fixed\n\n- Issue with control codes being used in length calculation\n\n### Changed\n\n- Remove current_style concept, which wasn't really used and was problematic for concurrency\n\n## [0.7.0] - 2020-03-12\n\n### Changed\n\n- Added width option to Panel\n- Change special method `__render_width__` to `__measure__`\n- Dropped the \"markdown style\" syntax in console markup\n- Optimized style rendering\n\n### Added\n\n- Added Console.show_cursor method\n- Added Progress bars\n\n### Fixed\n\n- Fixed wrapping when a single word was too large to fit in a line\n\n## [0.6.0] - 2020-03-03\n\n### Added\n\n- Added tab_size to Console and Text\n- Added protocol.is_renderable for runtime check\n- Added emoji switch to Console\n- Added inherit boolean to Theme\n- Made Console thread safe, with a thread local buffer\n\n### Changed\n\n- Console.markup attribute now effects Table\n- SeparatedConsoleRenderable and RichCast types\n\n### Fixed\n\n- Fixed tabs breaking rendering by converting to spaces\n\n## [0.5.0] - 2020-02-23\n\n### Changed\n\n- Replaced `__console_str__` with `__rich__`\n\n## [0.4.1] - 2020-02-22\n\n### Fixed\n\n- Readme links in PyPI\n\n## [0.4.0] - 2020-02-22\n\n### Added\n\n- Added Traceback rendering and handler\n- Added rich.constrain\n- Added rich.rule\n\n### Fixed\n\n- Fixed unnecessary padding\n\n## [0.3.3] - 2020-02-04\n\n### Fixed\n\n- Fixed Windows color support\n- Fixed line width on windows issue (https://github.com/willmcgugan/rich/issues/7)\n- Fixed Pretty print on Windows\n\n## [0.3.2] - 2020-01-26\n\n### Added\n\n- Added rich.logging\n\n## [0.3.1] - 2020-01-22\n\n### Added\n\n- Added colorama for Windows support\n\n## [0.3.0] - 2020-01-19\n\n### Added\n\n- First official release, API still to be stabilized\n", "CONTRIBUTORS.md": "# Contributors\n\nThe following people have contributed to the development of Rich:\n\n<!-- Add your name below, sort alphabetically by surname. Link to Github profile / your home page. 
-->\n\n- [Gregory Beauregard](https://github.com/GBeauregard/pyffstream)\n- [Pete Davison](https://github.com/pd93)\n- [James Estevez](https://github.com/jstvz)\n- [Oleksis Fraga](https://github.com/oleksis)\n- [Finn Hughes](https://github.com/finnhughes)\n- [Josh Karpel](https://github.com/JoshKarpel)\n- [Andrew Kettmann](https://github.com/akettmann)\n- [Hedy Li](https://github.com/hedythedev)\n- [Alexander Mancevice](https://github.com/amancevice)\n- [Will McGugan](https://github.com/willmcgugan)\n- [Nathan Page](https://github.com/nathanrpage97)\n- [Avi Perl](https://github.com/avi-perl)\n- [Laurent Peuch](https://github.com/psycojoker)\n- [Kylian Point](https://github.com/p0lux)\n- [Kyle Pollina](https://github.com/kylepollina)\n- [Clément Robert](https://github.com/neutrinoceros)\n- [Tushar Sadhwani](https://github.com/tusharsadhwani)\n- [Tim Savage](https://github.com/timsavage)\n- [Gabriele N. Tornetta](https://github.com/p403n1x87)\n", "docs/source/text.rst": ".. _rich_text:\n\nRich Text\n=========\n\nRich has a :class:`~rich.text.Text` class you can use to mark up strings with color and style attributes. You can use a Text instance anywhere a string is accepted, which gives you a lot of control over presentation.\n\nYou can consider this class to be like a string with marked up regions of text. Unlike a builtin ``str``, a Text instance is mutable, and most methods operate in-place rather than returning a new instance. \n\nOne way to add a style to Text is the :meth:`~rich.text.Text.stylize` method which applies a style to a start and end offset. Here is an example::\n\n from rich.console import Console\n from rich.text import Text\n\n console = Console()\n text = Text(\"Hello, World!\")\n text.stylize(\"bold magenta\", 0, 6)\n console.print(text)\n\nThis will print \"Hello, World!\" to the terminal, with the first word in bold magenta.\n\nAlternatively, you can construct styled text by calling :meth:`~rich.text.Text.append` to add a string and style to the end of the Text. Here's an example::\n\n text = Text()\n text.append(\"Hello\", style=\"bold magenta\")\n text.append(\" World!\")\n console.print(text)\n\nSince building Text instances from parts is a common requirement, Rich offers :meth:`~rich.text.Text.assemble` which will combine strings or pairs of string and Style, and return a Text instance. The follow example is equivalent to the code above::\n\n text = Text.assemble((\"Hello\", \"bold magenta\"), \" World!\")\n console.print(text)\n\nYou can apply a style to given words in the text with :meth:`~rich.text.Text.highlight_words` or for ultimate control call :meth:`~rich.text.Text.highlight_regex` to highlight text matching a *regular expression*. \n\n\nText attributes\n~~~~~~~~~~~~~~~\n\nThe Text class has a number of parameters you can set on the constructor to modify how the text is displayed.\n\n- ``justify`` should be \"left\", \"center\", \"right\", or \"full\", and will override default justify behavior.\n- ``overflow`` should be \"fold\", \"crop\", or \"ellipsis\", and will override default overflow.\n- ``no_wrap`` prevents wrapping if the text is longer then the available width.\n- ``tab_size`` Sets the number of characters in a tab.\n\nA Text instance may be used in place of a plain string virtually everywhere in the Rich API, which gives you a lot of control in how text renders within other Rich renderables. 
For instance, the following example right aligns text within a :class:`~rich.panel.Panel`::\n\n from rich import print\n from rich.panel import Panel\n from rich.text import Text\n panel = Panel(Text(\"Hello\", justify=\"right\"))\n print(panel)\n\n\n", "rich/text.py": "import re\nfrom functools import partial, reduce\nfrom math import gcd\nfrom operator import attrgetter, itemgetter\nfrom rich.emoji import EmojiVariant\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Dict,\n Iterable,\n List,\n NamedTuple,\n Optional,\n Tuple,\n Union,\n)\n\nfrom ._loop import loop_last\nfrom ._pick import pick_bool\nfrom ._wrap import divide_line\nfrom .align import AlignMethod\nfrom .cells import cell_len, set_cell_size\nfrom .containers import Lines\nfrom .control import strip_control_codes\nfrom .emoji import EmojiVariant\nfrom .jupyter import JupyterMixin\nfrom .measure import Measurement\nfrom .segment import Segment\nfrom .style import Style, StyleType\n\nif TYPE_CHECKING: # pragma: no cover\n from .console import Console, ConsoleOptions, JustifyMethod, OverflowMethod\n\nDEFAULT_JUSTIFY: \"JustifyMethod\" = \"default\"\nDEFAULT_OVERFLOW: \"OverflowMethod\" = \"fold\"\n\n\n_re_whitespace = re.compile(r\"\\s+$\")\n\nTextType = Union[str, \"Text\"]\n\nGetStyleCallable = Callable[[str], Optional[StyleType]]\n\n\nclass Span(NamedTuple):\n \"\"\"A marked up region in some text.\"\"\"\n\n start: int\n \"\"\"Span start index.\"\"\"\n end: int\n \"\"\"Span end index.\"\"\"\n style: Union[str, Style]\n \"\"\"Style associated with the span.\"\"\"\n\n def __repr__(self) -> str:\n return (\n f\"Span({self.start}, {self.end}, {self.style!r})\"\n if (isinstance(self.style, Style) and self.style._meta)\n else f\"Span({self.start}, {self.end}, {repr(self.style)})\"\n )\n\n def __bool__(self) -> bool:\n return self.end > self.start\n\n def split(self, offset: int) -> Tuple[\"Span\", Optional[\"Span\"]]:\n \"\"\"Split a span in to 2 from a given offset.\"\"\"\n\n if offset < self.start:\n return self, None\n if offset >= self.end:\n return self, None\n\n start, end, style = self\n span1 = Span(start, min(end, offset), style)\n span2 = Span(span1.end, end, style)\n return span1, span2\n\n def move(self, offset: int) -> \"Span\":\n \"\"\"Move start and end by a given offset.\n\n Args:\n offset (int): Number of characters to add to start and end.\n\n Returns:\n TextSpan: A new TextSpan with adjusted position.\n \"\"\"\n start, end, style = self\n return Span(start + offset, end + offset, style)\n\n def right_crop(self, offset: int) -> \"Span\":\n \"\"\"Crop the span at the given offset.\n\n Args:\n offset (int): A value between start and end.\n\n Returns:\n Span: A new (possibly smaller) span.\n \"\"\"\n start, end, style = self\n if offset >= end:\n return self\n return Span(start, min(offset, end), style)\n\n\nclass Text(JupyterMixin):\n \"\"\"Text with color / style.\n\n Args:\n text (str, optional): Default unstyled text. Defaults to \"\".\n style (Union[str, Style], optional): Base style for text. Defaults to \"\".\n justify (str, optional): Justify method: \"left\", \"center\", \"full\", \"right\". Defaults to None.\n overflow (str, optional): Overflow method: \"crop\", \"fold\", \"ellipsis\". Defaults to None.\n no_wrap (bool, optional): Disable text wrapping, or None for default. Defaults to None.\n end (str, optional): Character to end text with. Defaults to \"\\\\\\\\n\".\n tab_size (int): Number of spaces per tab, or ``None`` to use ``console.tab_size``. 
Defaults to 8.\n spans (List[Span], optional). A list of predefined style spans. Defaults to None.\n \"\"\"\n\n __slots__ = [\n \"_text\",\n \"style\",\n \"justify\",\n \"overflow\",\n \"no_wrap\",\n \"end\",\n \"tab_size\",\n \"_spans\",\n \"_length\",\n ]\n\n def __init__(\n self,\n text: str = \"\",\n style: Union[str, Style] = \"\",\n *,\n justify: Optional[\"JustifyMethod\"] = None,\n overflow: Optional[\"OverflowMethod\"] = None,\n no_wrap: Optional[bool] = None,\n end: str = \"\\n\",\n tab_size: Optional[int] = 8,\n spans: Optional[List[Span]] = None,\n ) -> None:\n self._text = [strip_control_codes(text)]\n self.style = style\n self.justify: Optional[\"JustifyMethod\"] = justify\n self.overflow: Optional[\"OverflowMethod\"] = overflow\n self.no_wrap = no_wrap\n self.end = end\n self.tab_size = tab_size\n self._spans: List[Span] = spans or []\n self._length: int = len(text)\n\n def __len__(self) -> int:\n return self._length\n\n def __bool__(self) -> bool:\n return bool(self._length)\n\n def __str__(self) -> str:\n return self.plain\n\n def __repr__(self) -> str:\n return f\"<text {self.plain!r} {self._spans!r}>\"\n\n def __add__(self, other: Any) -> \"Text\":\n if isinstance(other, (str, Text)):\n result = self.copy()\n result.append(other)\n return result\n return NotImplemented\n\n def __eq__(self, other: object) -> bool:\n if not isinstance(other, Text):\n return NotImplemented\n return self.plain == other.plain and self._spans == other._spans\n\n def __contains__(self, other: object) -> bool:\n if isinstance(other, str):\n return other in self.plain\n elif isinstance(other, Text):\n return other.plain in self.plain\n return False\n\n def __getitem__(self, slice: Union[int, slice]) -> \"Text\":\n def get_text_at(offset: int) -> \"Text\":\n _Span = Span\n text = Text(\n self.plain[offset],\n spans=[\n _Span(0, 1, style)\n for start, end, style in self._spans\n if end > offset >= start\n ],\n end=\"\",\n )\n return text\n\n if isinstance(slice, int):\n return get_text_at(slice)\n else:\n start, stop, step = slice.indices(len(self.plain))\n if step == 1:\n lines = self.divide([start, stop])\n return lines[1]\n else:\n # This would be a bit of work to implement efficiently\n # For now, its not required\n raise TypeError(\"slices with step!=1 are not supported\")\n\n @property\n def cell_len(self) -> int:\n \"\"\"Get the number of cells required to render this text.\"\"\"\n return cell_len(self.plain)\n\n @classmethod\n def from_markup(\n cls,\n text: str,\n *,\n style: Union[str, Style] = \"\",\n emoji: bool = True,\n emoji_variant: Optional[EmojiVariant] = None,\n justify: Optional[\"JustifyMethod\"] = None,\n overflow: Optional[\"OverflowMethod\"] = None,\n ) -> \"Text\":\n \"\"\"Create Text instance from markup.\n\n Args:\n text (str): A string containing console markup.\n emoji (bool, optional): Also render emoji code. Defaults to True.\n justify (str, optional): Justify method: \"left\", \"center\", \"full\", \"right\". Defaults to None.\n overflow (str, optional): Overflow method: \"crop\", \"fold\", \"ellipsis\". 
Defaults to None.\n\n Returns:\n Text: A Text instance with markup rendered.\n \"\"\"\n from .markup import render\n\n rendered_text = render(text, style, emoji=emoji, emoji_variant=emoji_variant)\n rendered_text.justify = justify\n rendered_text.overflow = overflow\n return rendered_text\n\n @classmethod\n def styled(\n cls,\n text: str,\n style: StyleType = \"\",\n *,\n justify: Optional[\"JustifyMethod\"] = None,\n overflow: Optional[\"OverflowMethod\"] = None,\n ) -> \"Text\":\n \"\"\"Construct a Text instance with a pre-applied styled. A style applied in this way won't be used\n to pad the text when it is justified.\n\n Args:\n text (str): A string containing console markup.\n style (Union[str, Style]): Style to apply to the text. Defaults to \"\".\n justify (str, optional): Justify method: \"left\", \"center\", \"full\", \"right\". Defaults to None.\n overflow (str, optional): Overflow method: \"crop\", \"fold\", \"ellipsis\". Defaults to None.\n\n Returns:\n Text: A text instance with a style applied to the entire string.\n \"\"\"\n styled_text = cls(text, justify=justify, overflow=overflow)\n styled_text.stylize(style)\n return styled_text\n\n @classmethod\n def assemble(\n cls,\n *parts: Union[str, \"Text\", Tuple[str, StyleType]],\n style: Union[str, Style] = \"\",\n justify: Optional[\"JustifyMethod\"] = None,\n overflow: Optional[\"OverflowMethod\"] = None,\n no_wrap: Optional[bool] = None,\n end: str = \"\\n\",\n tab_size: int = 8,\n meta: Optional[Dict[str, Any]] = None,\n ) -> \"Text\":\n \"\"\"Construct a text instance by combining a sequence of strings with optional styles.\n The positional arguments should be either strings, or a tuple of string + style.\n\n Args:\n style (Union[str, Style], optional): Base style for text. Defaults to \"\".\n justify (str, optional): Justify method: \"left\", \"center\", \"full\", \"right\". Defaults to None.\n overflow (str, optional): Overflow method: \"crop\", \"fold\", \"ellipsis\". Defaults to None.\n end (str, optional): Character to end text with. Defaults to \"\\\\\\\\n\".\n tab_size (int): Number of spaces per tab, or ``None`` to use ``console.tab_size``. Defaults to 8.\n meta (Dict[str, Any], optional). Meta data to apply to text, or None for no meta data. 
Default to None\n\n Returns:\n Text: A new text instance.\n \"\"\"\n text = cls(\n style=style,\n justify=justify,\n overflow=overflow,\n no_wrap=no_wrap,\n end=end,\n tab_size=tab_size,\n )\n append = text.append\n _Text = Text\n for part in parts:\n if isinstance(part, (_Text, str)):\n append(part)\n else:\n append(*part)\n if meta:\n text.apply_meta(meta)\n return text\n\n @property\n def plain(self) -> str:\n \"\"\"Get the text as a single string.\"\"\"\n if len(self._text) != 1:\n self._text[:] = [\"\".join(self._text)]\n return self._text[0]\n\n @plain.setter\n def plain(self, new_text: str) -> None:\n \"\"\"Set the text to a new value.\"\"\"\n if new_text != self.plain:\n self._text[:] = [new_text]\n old_length = self._length\n self._length = len(new_text)\n if old_length > self._length:\n self._trim_spans()\n\n @property\n def spans(self) -> List[Span]:\n \"\"\"Get a reference to the internal list of spans.\"\"\"\n return self._spans\n\n @spans.setter\n def spans(self, spans: List[Span]) -> None:\n \"\"\"Set spans.\"\"\"\n self._spans = spans[:]\n\n def blank_copy(self, plain: str = \"\") -> \"Text\":\n \"\"\"Return a new Text instance with copied meta data (but not the string or spans).\"\"\"\n copy_self = Text(\n plain,\n style=self.style,\n justify=self.justify,\n overflow=self.overflow,\n no_wrap=self.no_wrap,\n end=self.end,\n tab_size=self.tab_size,\n )\n return copy_self\n\n def copy(self) -> \"Text\":\n \"\"\"Return a copy of this instance.\"\"\"\n copy_self = Text(\n self.plain,\n style=self.style,\n justify=self.justify,\n overflow=self.overflow,\n no_wrap=self.no_wrap,\n end=self.end,\n tab_size=self.tab_size,\n )\n copy_self._spans[:] = self._spans\n return copy_self\n\n def stylize(\n self,\n style: Union[str, Style],\n start: int = 0,\n end: Optional[int] = None,\n ) -> None:\n \"\"\"Apply a style to the text, or a portion of the text.\n\n Args:\n style (Union[str, Style]): Style instance or style definition to apply.\n start (int): Start offset (negative indexing is supported). Defaults to 0.\n end (Optional[int], optional): End offset (negative indexing is supported), or None for end of text. Defaults to None.\n\n \"\"\"\n if style:\n length = len(self)\n if start < 0:\n start = length + start\n if end is None:\n end = length\n if end < 0:\n end = length + end\n if start >= length or end <= start:\n # Span not in text or not valid\n return\n self._spans.append(Span(start, min(length, end), style))\n\n def apply_meta(\n self, meta: Dict[str, Any], start: int = 0, end: Optional[int] = None\n ) -> None:\n \"\"\"Apply meta data to the text, or a portion of the text.\n\n Args:\n meta (Dict[str, Any]): A dict of meta information.\n start (int): Start offset (negative indexing is supported). Defaults to 0.\n end (Optional[int], optional): End offset (negative indexing is supported), or None for end of text. 
Defaults to None.\n\n \"\"\"\n style = Style.from_meta(meta)\n self.stylize(style, start=start, end=end)\n\n def on(self, meta: Optional[Dict[str, Any]] = None, **handlers: Any) -> \"Text\":\n \"\"\"Apply event handlers (used by Textual project).\n\n Example:\n >>> from rich.text import Text\n >>> text = Text(\"hello world\")\n >>> text.on(click=\"view.toggle('world')\")\n\n Args:\n meta (Dict[str, Any]): Mapping of meta information.\n **handlers: Keyword args are prefixed with \"@\" to defined handlers.\n\n Returns:\n Text: Self is returned to method may be chained.\n \"\"\"\n meta = {} if meta is None else meta\n meta.update({f\"@{key}\": value for key, value in handlers.items()})\n self.stylize(Style.from_meta(meta))\n return self\n\n def remove_suffix(self, suffix: str) -> None:\n \"\"\"Remove a suffix if it exists.\n\n Args:\n suffix (str): Suffix to remove.\n \"\"\"\n if self.plain.endswith(suffix):\n self.right_crop(len(suffix))\n\n def get_style_at_offset(self, console: \"Console\", offset: int) -> Style:\n \"\"\"Get the style of a character at give offset.\n\n Args:\n console (~Console): Console where text will be rendered.\n offset (int): Offset in to text (negative indexing supported)\n\n Returns:\n Style: A Style instance.\n \"\"\"\n # TODO: This is a little inefficient, it is only used by full justify\n if offset < 0:\n offset = len(self) + offset\n get_style = console.get_style\n style = get_style(self.style).copy()\n for start, end, span_style in self._spans:\n if end > offset >= start:\n style += get_style(span_style, default=\"\")\n return style\n\n def highlight_regex(\n self,\n re_highlight: str,\n style: Optional[Union[GetStyleCallable, StyleType]] = None,\n *,\n style_prefix: str = \"\",\n ) -> int:\n \"\"\"Highlight text with a regular expression, where group names are\n translated to styles.\n\n Args:\n re_highlight (str): A regular expression.\n style (Union[GetStyleCallable, StyleType]): Optional style to apply to whole match, or a callable\n which accepts the matched text and returns a style. Defaults to None.\n style_prefix (str, optional): Optional prefix to add to style group names.\n\n Returns:\n int: Number of regex matches\n \"\"\"\n count = 0\n append_span = self._spans.append\n _Span = Span\n plain = self.plain\n for match in re.finditer(re_highlight, plain):\n get_span = match.span\n if style:\n start, end = get_span()\n match_style = style(plain[start:end]) if callable(style) else style\n if match_style is not None and end > start:\n append_span(_Span(start, end, match_style))\n\n count += 1\n for name in match.groupdict().keys():\n start, end = get_span(name)\n if start != -1 and end > start:\n append_span(_Span(start, end, f\"{style_prefix}{name}\"))\n return count\n\n def highlight_words(\n self,\n words: Iterable[str],\n style: Union[str, Style],\n *,\n case_sensitive: bool = True,\n ) -> int:\n \"\"\"Highlight words with a style.\n\n Args:\n words (Iterable[str]): Worlds to highlight.\n style (Union[str, Style]): Style to apply.\n case_sensitive (bool, optional): Enable case sensitive matchings. 
Defaults to True.\n\n Returns:\n int: Number of words highlighted.\n \"\"\"\n re_words = \"|\".join(re.escape(word) for word in words)\n add_span = self._spans.append\n count = 0\n _Span = Span\n for match in re.finditer(\n re_words, self.plain, flags=0 if case_sensitive else re.IGNORECASE\n ):\n start, end = match.span(0)\n add_span(_Span(start, end, style))\n count += 1\n return count\n\n def rstrip(self) -> None:\n \"\"\"Strip whitespace from end of text.\"\"\"\n self.plain = self.plain.rstrip()\n\n def rstrip_end(self, size: int) -> None:\n \"\"\"Remove whitespace beyond a certain width at the end of the text.\n\n Args:\n size (int): The desired size of the text.\n \"\"\"\n text_length = len(self)\n if text_length > size:\n excess = text_length - size\n whitespace_match = _re_whitespace.search(self.plain)\n if whitespace_match is not None:\n whitespace_count = len(whitespace_match.group(0))\n self.right_crop(min(whitespace_count, excess))\n\n def set_length(self, new_length: int) -> None:\n \"\"\"Set new length of the text, clipping or padding is required.\"\"\"\n length = len(self)\n if length != new_length:\n if length < new_length:\n self.pad_right(new_length - length)\n else:\n self.right_crop(length - new_length)\n\n def __rich_console__(\n self, console: \"Console\", options: \"ConsoleOptions\"\n ) -> Iterable[Segment]:\n tab_size: int = console.tab_size or self.tab_size or 8\n justify = self.justify or options.justify or DEFAULT_JUSTIFY\n\n overflow = self.overflow or options.overflow or DEFAULT_OVERFLOW\n\n lines = self.wrap(\n console,\n options.max_width,\n justify=justify,\n overflow=overflow,\n tab_size=tab_size or 8,\n no_wrap=pick_bool(self.no_wrap, options.no_wrap, False),\n )\n all_lines = Text(\"\\n\").join(lines)\n yield from all_lines.render(console, end=self.end)\n\n def __rich_measure__(\n self, console: \"Console\", options: \"ConsoleOptions\"\n ) -> Measurement:\n text = self.plain\n lines = text.splitlines()\n max_text_width = max(cell_len(line) for line in lines) if lines else 0\n words = text.split()\n min_text_width = (\n max(cell_len(word) for word in words) if words else max_text_width\n )\n return Measurement(min_text_width, max_text_width)\n\n def render(self, console: \"Console\", end: str = \"\") -> Iterable[\"Segment\"]:\n \"\"\"Render the text as Segments.\n\n Args:\n console (Console): Console instance.\n end (Optional[str], optional): Optional end character.\n\n Returns:\n Iterable[Segment]: Result of render that may be written to the console.\n \"\"\"\n _Segment = Segment\n text = self.plain\n if not self._spans:\n yield Segment(text)\n if end:\n yield _Segment(end)\n return\n get_style = partial(console.get_style, default=Style.null())\n\n enumerated_spans = list(enumerate(self._spans, 1))\n style_map = {index: get_style(span.style) for index, span in enumerated_spans}\n style_map[0] = get_style(self.style)\n\n spans = [\n (0, False, 0),\n *((span.start, False, index) for index, span in enumerated_spans),\n *((span.end, True, index) for index, span in enumerated_spans),\n (len(text), True, 0),\n ]\n spans.sort(key=itemgetter(0, 1))\n\n stack: List[int] = []\n stack_append = stack.append\n stack_pop = stack.remove\n\n style_cache: Dict[Tuple[Style, ...], Style] = {}\n style_cache_get = style_cache.get\n combine = Style.combine\n\n def get_current_style() -> Style:\n \"\"\"Construct current style from stack.\"\"\"\n styles = tuple(style_map[_style_id] for _style_id in sorted(stack))\n cached_style = style_cache_get(styles)\n if cached_style is not 
None:\n return cached_style\n current_style = combine(styles)\n style_cache[styles] = current_style\n return current_style\n\n for (offset, leaving, style_id), (next_offset, _, _) in zip(spans, spans[1:]):\n if leaving:\n stack_pop(style_id)\n else:\n stack_append(style_id)\n if next_offset > offset:\n yield _Segment(text[offset:next_offset], get_current_style())\n if end:\n yield _Segment(end)\n\n def join(self, lines: Iterable[\"Text\"]) -> \"Text\":\n \"\"\"Join text together with this instance as the separator.\n\n Args:\n lines (Iterable[Text]): An iterable of Text instances to join.\n\n Returns:\n Text: A new text instance containing join text.\n \"\"\"\n\n new_text = self.blank_copy()\n\n def iter_text() -> Iterable[\"Text\"]:\n if self.plain:\n for last, line in loop_last(lines):\n yield line\n if not last:\n yield self\n else:\n yield from lines\n\n extend_text = new_text._text.extend\n append_span = new_text._spans.append\n extend_spans = new_text._spans.extend\n offset = 0\n _Span = Span\n\n for text in iter_text():\n extend_text(text._text)\n if text.style:\n append_span(_Span(offset, offset + len(text), text.style))\n extend_spans(\n _Span(offset + start, offset + end, style)\n for start, end, style in text._spans\n )\n offset += len(text)\n new_text._length = offset\n return new_text\n\n def expand_tabs(self, tab_size: Optional[int] = None) -> None:\n \"\"\"Converts tabs to spaces.\n\n Args:\n tab_size (int, optional): Size of tabs. Defaults to 8.\n\n \"\"\"\n if \"\\t\" not in self.plain:\n return\n pos = 0\n if tab_size is None:\n tab_size = self.tab_size\n assert tab_size is not None\n result = self.blank_copy()\n append = result.append\n\n _style = self.style\n for line in self.split(\"\\n\", include_separator=True):\n parts = line.split(\"\\t\", include_separator=True)\n for part in parts:\n if part.plain.endswith(\"\\t\"):\n part._text = [part.plain[:-1] + \" \"]\n append(part)\n pos += len(part)\n spaces = tab_size - ((pos - 1) % tab_size) - 1\n if spaces:\n append(\" \" * spaces, _style)\n pos += spaces\n else:\n append(part)\n self._text = [result.plain]\n self._length = len(self.plain)\n self._spans[:] = result._spans\n\n def truncate(\n self,\n max_width: int,\n *,\n overflow: Optional[\"OverflowMethod\"] = None,\n pad: bool = False,\n ) -> None:\n \"\"\"Truncate text if it is longer that a given width.\n\n Args:\n max_width (int): Maximum number of characters in text.\n overflow (str, optional): Overflow method: \"crop\", \"fold\", or \"ellipsis\". Defaults to None, to use self.overflow.\n pad (bool, optional): Pad with spaces if the length is less than max_width. 
Defaults to False.\n \"\"\"\n _overflow = overflow or self.overflow or DEFAULT_OVERFLOW\n if _overflow != \"ignore\":\n length = cell_len(self.plain)\n if length > max_width:\n if _overflow == \"ellipsis\":\n self.plain = set_cell_size(self.plain, max_width - 1) + \"…\"\n else:\n self.plain = set_cell_size(self.plain, max_width)\n if pad and length < max_width:\n spaces = max_width - length\n self._text = [f\"{self.plain}{' ' * spaces}\"]\n self._length = len(self.plain)\n\n def _trim_spans(self) -> None:\n \"\"\"Remove or modify any spans that are over the end of the text.\"\"\"\n max_offset = len(self.plain)\n _Span = Span\n self._spans[:] = [\n (\n span\n if span.end < max_offset\n else _Span(span.start, min(max_offset, span.end), span.style)\n )\n for span in self._spans\n if span.start < max_offset\n ]\n\n def pad(self, count: int, character: str = \" \") -> None:\n \"\"\"Pad left and right with a given number of characters.\n\n Args:\n count (int): Width of padding.\n \"\"\"\n assert len(character) == 1, \"Character must be a string of length 1\"\n if count:\n pad_characters = character * count\n self.plain = f\"{pad_characters}{self.plain}{pad_characters}\"\n _Span = Span\n self._spans[:] = [\n _Span(start + count, end + count, style)\n for start, end, style in self._spans\n ]\n\n def pad_left(self, count: int, character: str = \" \") -> None:\n \"\"\"Pad the left with a given character.\n\n Args:\n count (int): Number of characters to pad.\n character (str, optional): Character to pad with. Defaults to \" \".\n \"\"\"\n assert len(character) == 1, \"Character must be a string of length 1\"\n if count:\n self.plain = f\"{character * count}{self.plain}\"\n _Span = Span\n self._spans[:] = [\n _Span(start + count, end + count, style)\n for start, end, style in self._spans\n ]\n\n def pad_right(self, count: int, character: str = \" \") -> None:\n \"\"\"Pad the right with a given character.\n\n Args:\n count (int): Number of characters to pad.\n character (str, optional): Character to pad with. Defaults to \" \".\n \"\"\"\n assert len(character) == 1, \"Character must be a string of length 1\"\n if count:\n self.plain = f\"{self.plain}{character * count}\"\n\n def align(self, align: AlignMethod, width: int, character: str = \" \") -> None:\n \"\"\"Align text to a given width.\n\n Args:\n align (AlignMethod): One of \"left\", \"center\", or \"right\".\n width (int): Desired width.\n character (str, optional): Character to pad with. Defaults to \" \".\n \"\"\"\n self.truncate(width)\n excess_space = width - cell_len(self.plain)\n if excess_space:\n if align == \"left\":\n self.pad_right(excess_space, character)\n elif align == \"center\":\n left = excess_space // 2\n self.pad_left(left, character)\n self.pad_right(excess_space - left, character)\n else:\n self.pad_left(excess_space, character)\n\n def append(\n self, text: Union[\"Text\", str], style: Optional[Union[str, \"Style\"]] = None\n ) -> \"Text\":\n \"\"\"Add text with an optional style.\n\n Args:\n text (Union[Text, str]): A str or Text to append.\n style (str, optional): A style name. 
Defaults to None.\n\n Returns:\n Text: Returns self for chaining.\n \"\"\"\n\n if not isinstance(text, (str, Text)):\n raise TypeError(\"Only str or Text can be appended to Text\")\n\n if len(text):\n if isinstance(text, str):\n text = strip_control_codes(text)\n self._text.append(text)\n offset = len(self)\n text_length = len(text)\n if style is not None:\n self._spans.append(Span(offset, offset + text_length, style))\n self._length += text_length\n elif isinstance(text, Text):\n _Span = Span\n if style is not None:\n raise ValueError(\n \"style must not be set when appending Text instance\"\n )\n text_length = self._length\n if text.style is not None:\n self._spans.append(\n _Span(text_length, text_length + len(text), text.style)\n )\n self._text.append(text.plain)\n self._spans.extend(\n _Span(start + text_length, end + text_length, style)\n for start, end, style in text._spans\n )\n self._length += len(text)\n return self\n\n def append_text(self, text: \"Text\") -> \"Text\":\n \"\"\"Append another Text instance. This method is more performant that Text.append, but\n only works for Text.\n\n Returns:\n Text: Returns self for chaining.\n \"\"\"\n _Span = Span\n text_length = self._length\n if text.style is not None:\n self._spans.append(_Span(text_length, text_length + len(text), text.style))\n self._text.append(text.plain)\n self._spans.extend(\n _Span(start + text_length, end + text_length, style)\n for start, end, style in text._spans\n )\n self._length += len(text)\n return self\n\n def append_tokens(\n self, tokens: Iterable[Tuple[str, Optional[StyleType]]]\n ) -> \"Text\":\n \"\"\"Append iterable of str and style. Style may be a Style instance or a str style definition.\n\n Args:\n pairs (Iterable[Tuple[str, Optional[StyleType]]]): An iterable of tuples containing str content and style.\n\n Returns:\n Text: Returns self for chaining.\n \"\"\"\n append_text = self._text.append\n append_span = self._spans.append\n _Span = Span\n offset = len(self)\n for content, style in tokens:\n append_text(content)\n if style is not None:\n append_span(_Span(offset, offset + len(content), style))\n offset += len(content)\n self._length = offset\n return self\n\n def copy_styles(self, text: \"Text\") -> None:\n \"\"\"Copy styles from another Text instance.\n\n Args:\n text (Text): A Text instance to copy styles from, must be the same length.\n \"\"\"\n self._spans.extend(text._spans)\n\n def split(\n self,\n separator: str = \"\\n\",\n *,\n include_separator: bool = False,\n allow_blank: bool = False,\n ) -> Lines:\n \"\"\"Split rich text in to lines, preserving styles.\n\n Args:\n separator (str, optional): String to split on. Defaults to \"\\\\\\\\n\".\n include_separator (bool, optional): Include the separator in the lines. Defaults to False.\n allow_blank (bool, optional): Return a blank line if the text ends with a separator. 
Defaults to False.\n\n Returns:\n List[RichText]: A list of rich text, one per line of the original.\n \"\"\"\n assert separator, \"separator must not be empty\"\n\n text = self.plain\n if separator not in text:\n return Lines([self.copy()])\n\n if include_separator:\n lines = self.divide(\n match.end() for match in re.finditer(re.escape(separator), text)\n )\n else:\n\n def flatten_spans() -> Iterable[int]:\n for match in re.finditer(re.escape(separator), text):\n start, end = match.span()\n yield start\n yield end\n\n lines = Lines(\n line for line in self.divide(flatten_spans()) if line.plain != separator\n )\n\n if not allow_blank and text.endswith(separator):\n lines.pop()\n\n return lines\n\n def divide(self, offsets: Iterable[int]) -> Lines:\n \"\"\"Divide text in to a number of lines at given offsets.\n\n Args:\n offsets (Iterable[int]): Offsets used to divide text.\n\n Returns:\n Lines: New RichText instances between offsets.\n \"\"\"\n _offsets = list(offsets)\n if not _offsets:\n return Lines([self.copy()])\n\n text = self.plain\n text_length = len(text)\n divide_offsets = [0, *_offsets, text_length]\n line_ranges = list(zip(divide_offsets, divide_offsets[1:]))\n\n style = self.style\n justify = self.justify\n overflow = self.overflow\n _Text = Text\n new_lines = Lines(\n _Text(\n text[start:end],\n style=style,\n justify=justify,\n overflow=overflow,\n )\n for start, end in line_ranges\n )\n if not self._spans:\n return new_lines\n order = {span: span_index for span_index, span in enumerate(self._spans)}\n span_stack = sorted(self._spans, key=attrgetter(\"start\"), reverse=True)\n\n pop = span_stack.pop\n push = span_stack.append\n _Span = Span\n get_order = order.__getitem__\n\n for line, (start, end) in zip(new_lines, line_ranges):\n if not span_stack:\n break\n append_span = line._spans.append\n position = len(span_stack) - 1\n while span_stack[position].start < end:\n span = pop(position)\n add_span, remaining_span = span.split(end)\n if remaining_span:\n push(remaining_span)\n order[remaining_span] = order[span]\n span_start, span_end, span_style = add_span\n line_span = _Span(span_start - start, span_end - start, span_style)\n order[line_span] = order[span]\n append_span(line_span)\n position -= 1\n if position < 0 or not span_stack:\n break # pragma: no cover\n line._spans.sort(key=get_order)\n\n return new_lines\n\n def right_crop(self, amount: int = 1) -> None:\n \"\"\"Remove a number of characters from the end of the text.\"\"\"\n max_offset = len(self.plain) - amount\n _Span = Span\n self._spans[:] = [\n (\n span\n if span.end < max_offset\n else _Span(span.start, min(max_offset, span.end), span.style)\n )\n for span in self._spans\n if span.start < max_offset\n ]\n self._text = [self.plain[:-amount]]\n self._length -= amount\n\n def wrap(\n self,\n console: \"Console\",\n width: int,\n *,\n justify: Optional[\"JustifyMethod\"] = None,\n overflow: Optional[\"OverflowMethod\"] = None,\n tab_size: int = 8,\n no_wrap: Optional[bool] = None,\n ) -> Lines:\n \"\"\"Word wrap the text.\n\n Args:\n console (Console): Console instance.\n width (int): Number of characters per line.\n emoji (bool, optional): Also render emoji code. Defaults to True.\n justify (str, optional): Justify method: \"default\", \"left\", \"center\", \"full\", \"right\". Defaults to \"default\".\n overflow (str, optional): Overflow method: \"crop\", \"fold\", or \"ellipsis\". Defaults to None.\n tab_size (int, optional): Default tab size. 
Defaults to 8.\n no_wrap (bool, optional): Disable wrapping, Defaults to False.\n\n Returns:\n Lines: Number of lines.\n \"\"\"\n wrap_justify = justify or self.justify or DEFAULT_JUSTIFY\n wrap_overflow = overflow or self.overflow or DEFAULT_OVERFLOW\n\n no_wrap = pick_bool(no_wrap, self.no_wrap, False) or overflow == \"ignore\"\n\n lines = Lines()\n for line in self.split(allow_blank=True):\n if \"\\t\" in line:\n line.expand_tabs(tab_size)\n if no_wrap:\n new_lines = Lines([line])\n else:\n offsets = divide_line(str(line), width, fold=wrap_overflow == \"fold\")\n new_lines = line.divide(offsets)\n for line in new_lines:\n line.rstrip_end(width)\n if wrap_justify:\n new_lines.justify(\n console, width, justify=wrap_justify, overflow=wrap_overflow\n )\n for line in new_lines:\n line.truncate(width, overflow=wrap_overflow)\n lines.extend(new_lines)\n return lines\n\n def fit(self, width: int) -> Lines:\n \"\"\"Fit the text in to given width by chopping in to lines.\n\n Args:\n width (int): Maximum characters in a line.\n\n Returns:\n Lines: List of lines.\n \"\"\"\n lines: Lines = Lines()\n append = lines.append\n for line in self.split():\n line.set_length(width)\n append(line)\n return lines\n\n def detect_indentation(self) -> int:\n \"\"\"Auto-detect indentation of code.\n\n Returns:\n int: Number of spaces used to indent code.\n \"\"\"\n\n _indentations = {\n len(match.group(1))\n for match in re.finditer(r\"^( *)(.*)$\", self.plain, flags=re.MULTILINE)\n }\n\n try:\n indentation = (\n reduce(gcd, [indent for indent in _indentations if not indent % 2]) or 1\n )\n except TypeError:\n indentation = 1\n\n return indentation\n\n def with_indent_guides(\n self,\n indent_size: Optional[int] = None,\n *,\n character: str = \"│\",\n style: StyleType = \"dim green\",\n ) -> \"Text\":\n \"\"\"Adds indent guide lines to text.\n\n Args:\n indent_size (Optional[int]): Size of indentation, or None to auto detect. Defaults to None.\n character (str, optional): Character to use for indentation. Defaults to \"│\".\n style (Union[Style, str], optional): Style of indent guides.\n\n Returns:\n Text: New text with indentation guides.\n \"\"\"\n\n _indent_size = self.detect_indentation() if indent_size is None else indent_size\n\n text = self.copy()\n text.expand_tabs()\n indent_line = f\"{character}{' ' * (_indent_size - 1)}\"\n\n re_indent = re.compile(r\"^( *)(.*)$\")\n new_lines: List[Text] = []\n add_line = new_lines.append\n blank_lines = 0\n for line in text.split(allow_blank=True):\n match = re_indent.match(line.plain)\n if not match or not match.group(2):\n blank_lines += 1\n continue\n indent = match.group(1)\n full_indents, remaining_space = divmod(len(indent), _indent_size)\n new_indent = f\"{indent_line * full_indents}{' ' * remaining_space}\"\n line.plain = new_indent + line.plain[len(new_indent) :]\n line.stylize(style, 0, len(new_indent))\n if blank_lines:\n new_lines.extend([Text(new_indent, style=style)] * blank_lines)\n blank_lines = 0\n add_line(line)\n if blank_lines:\n new_lines.extend([Text(\"\", style=style)] * blank_lines)\n\n new_text = text.blank_copy(\"\\n\").join(new_lines)\n return new_text\n\n\nif __name__ == \"__main__\": # pragma: no cover\n from rich.console import Console\n\n text = Text(\n \"\"\"\\nLorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. 
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.\\n\"\"\"\n )\n text.highlight_words([\"Lorem\"], \"bold\")\n text.highlight_words([\"ipsum\"], \"italic\")\n\n console = Console()\n console.rule(\"justify='left'\")\n console.print(text, style=\"red\")\n console.print()\n\n console.rule(\"justify='center'\")\n console.print(text, style=\"green\", justify=\"center\")\n console.print()\n\n console.rule(\"justify='right'\")\n console.print(text, style=\"blue\", justify=\"right\")\n console.print()\n\n console.rule(\"justify='full'\")\n console.print(text, style=\"magenta\", justify=\"full\")\n console.print()\n"}
|
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 77dd5a0a9b..2b1dc69d04 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -24,6 +24,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Allowed `__rich__` to work recursively
- Allowed Text classes to work with sep in print https://github.com/willmcgugan/rich/issues/1689
+### Added
+
+- Added a `rich.text.Text.from_ansi` helper method for handling pre-formatted input strings https://github.com/willmcgugan/rich/issues/1670
+
## [10.13.0] - 2021-11-07
### Added
diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
index 0e299be6e7..557f548e8d 100644
--- a/CONTRIBUTORS.md
+++ b/CONTRIBUTORS.md
@@ -22,4 +22,5 @@ The following people have contributed to the development of Rich:
- [Clément Robert](https://github.com/neutrinoceros)
- [Tushar Sadhwani](https://github.com/tusharsadhwani)
- [Tim Savage](https://github.com/timsavage)
+- [Nicolas Simonds](https://github.com/0xDEC0DE)
- [Gabriele N. Tornetta](https://github.com/p403n1x87)
diff --git a/docs/source/text.rst b/docs/source/text.rst
index b5f2eaf8f1..142b0639e1 100644
--- a/docs/source/text.rst
+++ b/docs/source/text.rst
@@ -26,6 +26,11 @@ Alternatively, you can construct styled text by calling :meth:`~rich.text.Text.a
text.append(" World!")
console.print(text)
+If you would like to use text that is already formatted with ANSI codes, call :meth:`~rich.text.Text.from_ansi` to convert it to a ``Text`` object:
+
+ text = Text.from_ansi("\033[1mHello, World!\033[0m")
+ console.print(text.spans)
+
Since building Text instances from parts is a common requirement, Rich offers :meth:`~rich.text.Text.assemble` which will combine strings or pairs of string and Style, and return a Text instance. The follow example is equivalent to the code above::
text = Text.assemble(("Hello", "bold magenta"), " World!")
|
{"rich/text.py": [{"type": "function", "name": "Text.from_ansi", "lines": [246, 277], "signature": "def from_ansi( cls, text: str, *, style: Union[str, Style] = \"\", justify: Optional[\"JustifyMethod\"] = None, overflow: Optional[\"OverflowMethod\"] = None, no_wrap: Optional[bool] = None, end: str = \"\\n\", tab_size: Optional[int] = 8, ) -> \"Text\":", "doc": "Create a Text object from pre-formatted ANSI.\n\nArgs:\n text (str): A string containing ANSI color codes.\n style (Union[str, Style], optional): Base style for text. Defaults to \"\".\n justify (str, optional): Justify method: \"left\", \"center\", \"full\", \"right\". Defaults to None.\n overflow (str, optional): Overflow method: \"crop\", \"fold\", \"ellipsis\". Defaults to None.\n no_wrap (bool, optional): Disable text wrapping, or None for default. Defaults to None.\n end (str, optional): Character to end text with. Defaults to \"\\\\n\".\n tab_size (int): Number of spaces per tab, or ``None`` to use ``console.tab_size``. Defaults to 8."}]}
| null |
["tests/test_text.py::test_from_ansi"]
|
["tests/test_text.py::test_span", "tests/test_text.py::test_span_split", "tests/test_text.py::test_span_move", "tests/test_text.py::test_span_right_crop", "tests/test_text.py::test_len", "tests/test_text.py::test_cell_len", "tests/test_text.py::test_bool", "tests/test_text.py::test_str", "tests/test_text.py::test_repr", "tests/test_text.py::test_add", "tests/test_text.py::test_eq", "tests/test_text.py::test_contain", "tests/test_text.py::test_plain_property", "tests/test_text.py::test_plain_property_setter", "tests/test_text.py::test_from_markup", "tests/test_text.py::test_copy", "tests/test_text.py::test_rstrip", "tests/test_text.py::test_rstrip_end", "tests/test_text.py::test_stylize", "tests/test_text.py::test_stylize_negative_index", "tests/test_text.py::test_highlight_regex", "tests/test_text.py::test_highlight_regex_callable", "tests/test_text.py::test_highlight_words", "tests/test_text.py::test_set_length", "tests/test_text.py::test_console_width", "tests/test_text.py::test_join", "tests/test_text.py::test_trim_spans", "tests/test_text.py::test_pad_left", "tests/test_text.py::test_pad_right", "tests/test_text.py::test_append", "tests/test_text.py::test_append_text", "tests/test_text.py::test_split", "tests/test_text.py::test_split_spans", "tests/test_text.py::test_divide", "tests/test_text.py::test_right_crop", "tests/test_text.py::test_wrap_3", "tests/test_text.py::test_wrap_4", "tests/test_text.py::test_wrap_long", "tests/test_text.py::test_wrap_overflow", "tests/test_text.py::test_wrap_overflow_long", "tests/test_text.py::test_wrap_long_words", "tests/test_text.py::test_no_wrap_no_crop", "tests/test_text.py::test_fit", "tests/test_text.py::test_wrap_tabs", "tests/test_text.py::test_render", "tests/test_text.py::test_render_simple", "tests/test_text.py::test_print[.-.\\n]", "tests/test_text.py::test_print[print_text1-.", "tests/test_text.py::test_print[print_text2-Hello", "tests/test_text.py::test_print_sep_end[.-.X]", "tests/test_text.py::test_print_sep_end[print_text1-..X]", "tests/test_text.py::test_print_sep_end[print_text2-HelloWorld!X]", "tests/test_text.py::test_tabs_to_spaces", "tests/test_text.py::test_markup_switch", "tests/test_text.py::test_emoji", "tests/test_text.py::test_emoji_switch", "tests/test_text.py::test_assemble", "tests/test_text.py::test_assemble_meta", "tests/test_text.py::test_styled", "tests/test_text.py::test_strip_control_codes", "tests/test_text.py::test_get_style_at_offset", "tests/test_text.py::test_truncate_ellipsis[Hello-10-Hello]", "tests/test_text.py::test_truncate_ellipsis[Hello-5-Hello]", "tests/test_text.py::test_truncate_ellipsis[Hello-4-Hel\\u2026]", "tests/test_text.py::test_truncate_ellipsis[Hello-3-He\\u2026]", "tests/test_text.py::test_truncate_ellipsis[Hello-2-H\\u2026]", "tests/test_text.py::test_truncate_ellipsis[Hello-1-\\u2026]", "tests/test_text.py::test_truncate_ellipsis_pad[Hello-5-Hello]", "tests/test_text.py::test_truncate_ellipsis_pad[Hello-10-Hello", "tests/test_text.py::test_truncate_ellipsis_pad[Hello-3-He\\u2026]", "tests/test_text.py::test_pad", "tests/test_text.py::test_align_left", "tests/test_text.py::test_align_right", "tests/test_text.py::test_align_center", "tests/test_text.py::test_detect_indentation", "tests/test_text.py::test_indentation_guides", "tests/test_text.py::test_slice", "tests/test_text.py::test_wrap_invalid_style", "tests/test_text.py::test_apply_meta", "tests/test_text.py::test_on"]
|
b0661de34bab35af9b4b1d3ba8e28b186b225e84
|
{"first_commit_time": 1636564203.0, "pr_title": "Add a Text.from_ansi helper method", "pr_body": "Add a simple little helper to run `AnsiDecoder.decode_line` over \"pre-cooked\" inputs.\r\n\r\nFixes issue: #1670\r\n\r\n## Type of changes\r\n\r\n- [x] Bug fix\r\n- [x] New feature\r\n- [x] Documentation / docstrings\r\n- [x] Tests\r\n\r\n## Checklist\r\n\r\n- [x] I've run the latest [black](https://github.com/psf/black) with default args on new code.\r\n- [x] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.\r\n- [x] I've added tests for new code.\r\n- [x] I accept that @willmcgugan may be pedantic in the code review.\r\n", "pr_timeline": [{"time": 1637184658.0, "comment": "# [Codecov](https://codecov.io/gh/willmcgugan/rich/pull/1706?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Will+McGugan) Report\n> Merging [#1706](https://codecov.io/gh/willmcgugan/rich/pull/1706?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Will+McGugan) (62ead7e) into [master](https://codecov.io/gh/willmcgugan/rich/commit/3c811afa9143c41d5decf392acb09260c243174b?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Will+McGugan) (3c811af) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/willmcgugan/rich/pull/1706?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Will+McGugan)\n\n```diff\n@@ Coverage Diff @@\n## master #1706 +/- ##\n=======================================\n Coverage 99.68% 99.68% \n=======================================\n Files 71 71 \n Lines 6890 6901 +11 \n=======================================\n+ Hits 6868 6879 +11 \n Misses 22 22 \n```\n\n| Flag | Coverage \u0394 | |\n|---|---|---|\n| unittests | `99.68% <100.00%> (+<0.01%)` | :arrow_up: |\n\nFlags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Will+McGugan#carryforward-flags-in-the-pull-request-comment) to find out more.\n\n| [Impacted Files](https://codecov.io/gh/willmcgugan/rich/pull/1706?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Will+McGugan) | Coverage \u0394 | |\n|---|---|---|\n| [rich/text.py](https://codecov.io/gh/willmcgugan/rich/pull/1706/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Will+McGugan#diff-cmljaC90ZXh0LnB5) | `100.00% <100.00%> (\u00f8)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/willmcgugan/rich/pull/1706?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Will+McGugan).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Will+McGugan)\n> `\u0394 = absolute <relative> (impact)`, `\u00f8 = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/willmcgugan/rich/pull/1706?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Will+McGugan). 
Last update [008854c...62ead7e](https://codecov.io/gh/willmcgugan/rich/pull/1706?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Will+McGugan). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Will+McGugan).\n"}, {"time": 1638115439.0, "comment": "thanks"}], "issues": {}}
|
Textualize/rich
| 1,894
|
https://github.com/Textualize/rich/pull/1894
|
Textualize__rich-1894
|
[]
|
633faab16dc3a8c01a6562648cc2186c19a476e3
|
diff --git a/rich/progress.py b/rich/progress.py
index 1f670db438..fe35b6c175 100644
--- a/rich/progress.py
+++ b/rich/progress.py
@@ -588,12 +588,7 @@ def __init__(
refresh_per_second is None or refresh_per_second > 0
), "refresh_per_second must be > 0"
self._lock = RLock()
- self.columns = columns or (
- TextColumn("[progress.description]{task.description}"),
- BarColumn(),
- TextColumn("[progress.percentage]{task.percentage:>3.0f}%"),
- TimeRemainingColumn(),
- )
+ self.columns = columns or self.get_default_columns()
self.speed_estimate_period = speed_estimate_period
self.disable = disable
@@ -613,6 +608,37 @@ def __init__(
self.print = self.console.print
self.log = self.console.log
+ @classmethod
+ def get_default_columns(cls) -> Tuple[ProgressColumn, ...]:
+ """Get the default columns used for a new Progress instance:
+ - a text column for the description (TextColumn)
+ - the bar itself (BarColumn)
+ - a text column showing completion percentage (TextColumn)
+ - an estimated-time-remaining column (TimeRemainingColumn)
+ If the Progress instance is created without passing a columns argument,
+ the default columns defined here will be used.
+
+ You can also create a Progress instance using custom columns before
+ and/or after the defaults, as in this example:
+
+ progress = Progress(
+ SpinnerColumn(),
+ *Progress.get_default_columns(),
+ "Elapsed:",
+ TimeElapsedColumn(),
+ )
+
+ This code shows the creation of a Progress display, containing
+ a spinner to the left, the default columns, and a labeled elapsed
+ time column.
+ """
+ return (
+ TextColumn("[progress.description]{task.description}"),
+ BarColumn(),
+ TextColumn("[progress.percentage]{task.percentage:>3.0f}%"),
+ TimeRemainingColumn(),
+ )
+
@property
def console(self) -> Console:
return self.live.console
@@ -1015,10 +1041,7 @@ def remove_task(self, task_id: TaskID) -> None:
with Progress(
SpinnerColumn(),
- TextColumn("[progress.description]{task.description}"),
- BarColumn(),
- TextColumn("[progress.percentage]{task.percentage:>3.0f}%"),
- TimeRemainingColumn(),
+ *Progress.get_default_columns(),
TimeElapsedColumn(),
console=console,
transient=True,
|
diff --git a/tests/test_progress.py b/tests/test_progress.py
index 2020f91ffb..20b9d32ed4 100644
--- a/tests/test_progress.py
+++ b/tests/test_progress.py
@@ -334,6 +334,32 @@ def test_columns() -> None:
assert result == expected
+def test_using_default_columns() -> None:
+ # can only check types, as the instances do not '==' each other
+ expected_default_types = [
+ TextColumn,
+ BarColumn,
+ TextColumn,
+ TimeRemainingColumn,
+ ]
+
+ progress = Progress()
+ assert [type(c) for c in progress.columns] == expected_default_types
+
+ progress = Progress(
+ SpinnerColumn(),
+ *Progress.get_default_columns(),
+ "Elapsed:",
+ TimeElapsedColumn(),
+ )
+ assert [type(c) for c in progress.columns] == [
+ SpinnerColumn,
+ *expected_default_types,
+ str,
+ TimeElapsedColumn,
+ ]
+
+
def test_task_create() -> None:
task = Task(TaskID(1), "foo", 100, 0, _get_time=lambda: 1)
assert task.elapsed is None
| 2022-01-31T14:32:01
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
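The README above walks the pipeline from dataset construction through inference and evaluation. As a quick sanity check after the `feabench.get_dataset` step, something like the following sketch could be used to inspect a built split; note that it assumes the `--*_dataset_path` outputs are Hugging Face datasets written with `save_to_disk`, which the README does not state explicitly:

```python
# Hedged sketch: peek at a locally built FEA-Bench dataset.
# Assumption: the get_dataset step wrote a Hugging Face dataset via
# save_to_disk to the path below; adjust if the on-disk format differs.
from datasets import load_from_disk

ds = load_from_disk("feabench-data/FEA-Bench-v1.0-Oracle")
print(ds)  # overall structure, e.g. splits and the number of task instances
```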
{"rich/progress.py": "from abc import ABC, abstractmethod\nfrom collections import deque\nfrom collections.abc import Sized\nfrom dataclasses import dataclass, field\nfrom datetime import timedelta\nfrom math import ceil\nfrom threading import Event, RLock, Thread\nfrom types import TracebackType\nfrom typing import (\n Any,\n Callable,\n Deque,\n Dict,\n Iterable,\n List,\n NamedTuple,\n NewType,\n Optional,\n Sequence,\n Tuple,\n Type,\n TypeVar,\n Union,\n)\n\nfrom . import filesize, get_console\nfrom .console import Console, JustifyMethod, RenderableType, Group\nfrom .highlighter import Highlighter\nfrom .jupyter import JupyterMixin\nfrom .live import Live\nfrom .progress_bar import ProgressBar\nfrom .spinner import Spinner\nfrom .style import StyleType\nfrom .table import Column, Table\nfrom .text import Text, TextType\n\nTaskID = NewType(\"TaskID\", int)\n\nProgressType = TypeVar(\"ProgressType\")\n\nGetTimeCallable = Callable[[], float]\n\n\nclass _TrackThread(Thread):\n \"\"\"A thread to periodically update progress.\"\"\"\n\n def __init__(self, progress: \"Progress\", task_id: \"TaskID\", update_period: float):\n self.progress = progress\n self.task_id = task_id\n self.update_period = update_period\n self.done = Event()\n\n self.completed = 0\n super().__init__()\n\n def run(self) -> None:\n task_id = self.task_id\n advance = self.progress.advance\n update_period = self.update_period\n last_completed = 0\n wait = self.done.wait\n while not wait(update_period):\n completed = self.completed\n if last_completed != completed:\n advance(task_id, completed - last_completed)\n last_completed = completed\n\n self.progress.update(self.task_id, completed=self.completed, refresh=True)\n\n def __enter__(self) -> \"_TrackThread\":\n self.start()\n return self\n\n def __exit__(\n self,\n exc_type: Optional[Type[BaseException]],\n exc_val: Optional[BaseException],\n exc_tb: Optional[TracebackType],\n ) -> None:\n self.done.set()\n self.join()\n\n\ndef track(\n sequence: Union[Sequence[ProgressType], Iterable[ProgressType]],\n description: str = \"Working...\",\n total: Optional[float] = None,\n auto_refresh: bool = True,\n console: Optional[Console] = None,\n transient: bool = False,\n get_time: Optional[Callable[[], float]] = None,\n refresh_per_second: float = 10,\n style: StyleType = \"bar.back\",\n complete_style: StyleType = \"bar.complete\",\n finished_style: StyleType = \"bar.finished\",\n pulse_style: StyleType = \"bar.pulse\",\n update_period: float = 0.1,\n disable: bool = False,\n) -> Iterable[ProgressType]:\n \"\"\"Track progress by iterating over a sequence.\n\n Args:\n sequence (Iterable[ProgressType]): A sequence (must support \"len\") you wish to iterate over.\n description (str, optional): Description of task show next to progress bar. Defaults to \"Working\".\n total: (float, optional): Total number of steps. Default is len(sequence).\n auto_refresh (bool, optional): Automatic refresh, disable to force a refresh after each iteration. Default is True.\n transient: (bool, optional): Clear the progress on exit. Defaults to False.\n console (Console, optional): Console to write to. Default creates internal Console instance.\n refresh_per_second (float): Number of times per second to refresh the progress information. Defaults to 10.\n style (StyleType, optional): Style for the bar background. Defaults to \"bar.back\".\n complete_style (StyleType, optional): Style for the completed bar. Defaults to \"bar.complete\".\n finished_style (StyleType, optional): Style for a finished bar. 
Defaults to \"bar.done\".\n pulse_style (StyleType, optional): Style for pulsing bars. Defaults to \"bar.pulse\".\n update_period (float, optional): Minimum time (in seconds) between calls to update(). Defaults to 0.1.\n disable (bool, optional): Disable display of progress.\n Returns:\n Iterable[ProgressType]: An iterable of the values in the sequence.\n\n \"\"\"\n\n columns: List[\"ProgressColumn\"] = (\n [TextColumn(\"[progress.description]{task.description}\")] if description else []\n )\n columns.extend(\n (\n BarColumn(\n style=style,\n complete_style=complete_style,\n finished_style=finished_style,\n pulse_style=pulse_style,\n ),\n TextColumn(\"[progress.percentage]{task.percentage:>3.0f}%\"),\n TimeRemainingColumn(),\n )\n )\n progress = Progress(\n *columns,\n auto_refresh=auto_refresh,\n console=console,\n transient=transient,\n get_time=get_time,\n refresh_per_second=refresh_per_second or 10,\n disable=disable,\n )\n\n with progress:\n yield from progress.track(\n sequence, total=total, description=description, update_period=update_period\n )\n\n\nclass ProgressColumn(ABC):\n \"\"\"Base class for a widget to use in progress display.\"\"\"\n\n max_refresh: Optional[float] = None\n\n def __init__(self, table_column: Optional[Column] = None) -> None:\n self._table_column = table_column\n self._renderable_cache: Dict[TaskID, Tuple[float, RenderableType]] = {}\n self._update_time: Optional[float] = None\n\n def get_table_column(self) -> Column:\n \"\"\"Get a table column, used to build tasks table.\"\"\"\n return self._table_column or Column()\n\n def __call__(self, task: \"Task\") -> RenderableType:\n \"\"\"Called by the Progress object to return a renderable for the given task.\n\n Args:\n task (Task): An object containing information regarding the task.\n\n Returns:\n RenderableType: Anything renderable (including str).\n \"\"\"\n current_time = task.get_time()\n if self.max_refresh is not None and not task.completed:\n try:\n timestamp, renderable = self._renderable_cache[task.id]\n except KeyError:\n pass\n else:\n if timestamp + self.max_refresh > current_time:\n return renderable\n\n renderable = self.render(task)\n self._renderable_cache[task.id] = (current_time, renderable)\n return renderable\n\n @abstractmethod\n def render(self, task: \"Task\") -> RenderableType:\n \"\"\"Should return a renderable object.\"\"\"\n\n\nclass RenderableColumn(ProgressColumn):\n \"\"\"A column to insert an arbitrary column.\n\n Args:\n renderable (RenderableType, optional): Any renderable. Defaults to empty string.\n \"\"\"\n\n def __init__(\n self, renderable: RenderableType = \"\", *, table_column: Optional[Column] = None\n ):\n self.renderable = renderable\n super().__init__(table_column=table_column)\n\n def render(self, task: \"Task\") -> RenderableType:\n return self.renderable\n\n\nclass SpinnerColumn(ProgressColumn):\n \"\"\"A column with a 'spinner' animation.\n\n Args:\n spinner_name (str, optional): Name of spinner animation. Defaults to \"dots\".\n style (StyleType, optional): Style of spinner. Defaults to \"progress.spinner\".\n speed (float, optional): Speed factor of spinner. Defaults to 1.0.\n finished_text (TextType, optional): Text used when task is finished. 
Defaults to \" \".\n \"\"\"\n\n def __init__(\n self,\n spinner_name: str = \"dots\",\n style: Optional[StyleType] = \"progress.spinner\",\n speed: float = 1.0,\n finished_text: TextType = \" \",\n table_column: Optional[Column] = None,\n ):\n self.spinner = Spinner(spinner_name, style=style, speed=speed)\n self.finished_text = (\n Text.from_markup(finished_text)\n if isinstance(finished_text, str)\n else finished_text\n )\n super().__init__(table_column=table_column)\n\n def set_spinner(\n self,\n spinner_name: str,\n spinner_style: Optional[StyleType] = \"progress.spinner\",\n speed: float = 1.0,\n ) -> None:\n \"\"\"Set a new spinner.\n\n Args:\n spinner_name (str): Spinner name, see python -m rich.spinner.\n spinner_style (Optional[StyleType], optional): Spinner style. Defaults to \"progress.spinner\".\n speed (float, optional): Speed factor of spinner. Defaults to 1.0.\n \"\"\"\n self.spinner = Spinner(spinner_name, style=spinner_style, speed=speed)\n\n def render(self, task: \"Task\") -> RenderableType:\n text = (\n self.finished_text\n if task.finished\n else self.spinner.render(task.get_time())\n )\n return text\n\n\nclass TextColumn(ProgressColumn):\n \"\"\"A column containing text.\"\"\"\n\n def __init__(\n self,\n text_format: str,\n style: StyleType = \"none\",\n justify: JustifyMethod = \"left\",\n markup: bool = True,\n highlighter: Optional[Highlighter] = None,\n table_column: Optional[Column] = None,\n ) -> None:\n self.text_format = text_format\n self.justify: JustifyMethod = justify\n self.style = style\n self.markup = markup\n self.highlighter = highlighter\n super().__init__(table_column=table_column or Column(no_wrap=True))\n\n def render(self, task: \"Task\") -> Text:\n _text = self.text_format.format(task=task)\n if self.markup:\n text = Text.from_markup(_text, style=self.style, justify=self.justify)\n else:\n text = Text(_text, style=self.style, justify=self.justify)\n if self.highlighter:\n self.highlighter.highlight(text)\n return text\n\n\nclass BarColumn(ProgressColumn):\n \"\"\"Renders a visual progress bar.\n\n Args:\n bar_width (Optional[int], optional): Width of bar or None for full width. Defaults to 40.\n style (StyleType, optional): Style for the bar background. Defaults to \"bar.back\".\n complete_style (StyleType, optional): Style for the completed bar. Defaults to \"bar.complete\".\n finished_style (StyleType, optional): Style for a finished bar. Defaults to \"bar.done\".\n pulse_style (StyleType, optional): Style for pulsing bars. 
Defaults to \"bar.pulse\".\n \"\"\"\n\n def __init__(\n self,\n bar_width: Optional[int] = 40,\n style: StyleType = \"bar.back\",\n complete_style: StyleType = \"bar.complete\",\n finished_style: StyleType = \"bar.finished\",\n pulse_style: StyleType = \"bar.pulse\",\n table_column: Optional[Column] = None,\n ) -> None:\n self.bar_width = bar_width\n self.style = style\n self.complete_style = complete_style\n self.finished_style = finished_style\n self.pulse_style = pulse_style\n super().__init__(table_column=table_column)\n\n def render(self, task: \"Task\") -> ProgressBar:\n \"\"\"Gets a progress bar widget for a task.\"\"\"\n return ProgressBar(\n total=max(0, task.total),\n completed=max(0, task.completed),\n width=None if self.bar_width is None else max(1, self.bar_width),\n pulse=not task.started,\n animation_time=task.get_time(),\n style=self.style,\n complete_style=self.complete_style,\n finished_style=self.finished_style,\n pulse_style=self.pulse_style,\n )\n\n\nclass TimeElapsedColumn(ProgressColumn):\n \"\"\"Renders time elapsed.\"\"\"\n\n def render(self, task: \"Task\") -> Text:\n \"\"\"Show time remaining.\"\"\"\n elapsed = task.finished_time if task.finished else task.elapsed\n if elapsed is None:\n return Text(\"-:--:--\", style=\"progress.elapsed\")\n delta = timedelta(seconds=int(elapsed))\n return Text(str(delta), style=\"progress.elapsed\")\n\n\nclass TimeRemainingColumn(ProgressColumn):\n \"\"\"Renders estimated time remaining.\"\"\"\n\n # Only refresh twice a second to prevent jitter\n max_refresh = 0.5\n\n def render(self, task: \"Task\") -> Text:\n \"\"\"Show time remaining.\"\"\"\n remaining = task.time_remaining\n if remaining is None:\n return Text(\"-:--:--\", style=\"progress.remaining\")\n remaining_delta = timedelta(seconds=int(remaining))\n return Text(str(remaining_delta), style=\"progress.remaining\")\n\n\nclass FileSizeColumn(ProgressColumn):\n \"\"\"Renders completed filesize.\"\"\"\n\n def render(self, task: \"Task\") -> Text:\n \"\"\"Show data completed.\"\"\"\n data_size = filesize.decimal(int(task.completed))\n return Text(data_size, style=\"progress.filesize\")\n\n\nclass TotalFileSizeColumn(ProgressColumn):\n \"\"\"Renders total filesize.\"\"\"\n\n def render(self, task: \"Task\") -> Text:\n \"\"\"Show data completed.\"\"\"\n data_size = filesize.decimal(int(task.total))\n return Text(data_size, style=\"progress.filesize.total\")\n\n\nclass DownloadColumn(ProgressColumn):\n \"\"\"Renders file size downloaded and total, e.g. '0.5/2.3 GB'.\n\n Args:\n binary_units (bool, optional): Use binary units, KiB, MiB etc. 
Defaults to False.\n \"\"\"\n\n def __init__(\n self, binary_units: bool = False, table_column: Optional[Column] = None\n ) -> None:\n self.binary_units = binary_units\n super().__init__(table_column=table_column)\n\n def render(self, task: \"Task\") -> Text:\n \"\"\"Calculate common unit for completed and total.\"\"\"\n completed = int(task.completed)\n total = int(task.total)\n if self.binary_units:\n unit, suffix = filesize.pick_unit_and_suffix(\n total,\n [\"bytes\", \"KiB\", \"MiB\", \"GiB\", \"TiB\", \"PiB\", \"EiB\", \"ZiB\", \"YiB\"],\n 1024,\n )\n else:\n unit, suffix = filesize.pick_unit_and_suffix(\n total, [\"bytes\", \"KB\", \"MB\", \"GB\", \"TB\", \"PB\", \"EB\", \"ZB\", \"YB\"], 1000\n )\n completed_ratio = completed / unit\n total_ratio = total / unit\n precision = 0 if unit == 1 else 1\n completed_str = f\"{completed_ratio:,.{precision}f}\"\n total_str = f\"{total_ratio:,.{precision}f}\"\n download_status = f\"{completed_str}/{total_str} {suffix}\"\n download_text = Text(download_status, style=\"progress.download\")\n return download_text\n\n\nclass TransferSpeedColumn(ProgressColumn):\n \"\"\"Renders human readable transfer speed.\"\"\"\n\n def render(self, task: \"Task\") -> Text:\n \"\"\"Show data transfer speed.\"\"\"\n speed = task.finished_speed or task.speed\n if speed is None:\n return Text(\"?\", style=\"progress.data.speed\")\n data_speed = filesize.decimal(int(speed))\n return Text(f\"{data_speed}/s\", style=\"progress.data.speed\")\n\n\nclass ProgressSample(NamedTuple):\n \"\"\"Sample of progress for a given time.\"\"\"\n\n timestamp: float\n \"\"\"Timestamp of sample.\"\"\"\n completed: float\n \"\"\"Number of steps completed.\"\"\"\n\n\n@dataclass\nclass Task:\n \"\"\"Information regarding a progress task.\n\n This object should be considered read-only outside of the :class:`~Progress` class.\n\n \"\"\"\n\n id: TaskID\n \"\"\"Task ID associated with this task (used in Progress methods).\"\"\"\n\n description: str\n \"\"\"str: Description of the task.\"\"\"\n\n total: float\n \"\"\"str: Total number of steps in this task.\"\"\"\n\n completed: float\n \"\"\"float: Number of steps completed\"\"\"\n\n _get_time: GetTimeCallable\n \"\"\"Callable to get the current time.\"\"\"\n\n finished_time: Optional[float] = None\n \"\"\"float: Time task was finished.\"\"\"\n\n visible: bool = True\n \"\"\"bool: Indicates if this task is visible in the progress display.\"\"\"\n\n fields: Dict[str, Any] = field(default_factory=dict)\n \"\"\"dict: Arbitrary fields passed in via Progress.update.\"\"\"\n\n start_time: Optional[float] = field(default=None, init=False, repr=False)\n \"\"\"Optional[float]: Time this task was started, or None if not started.\"\"\"\n\n stop_time: Optional[float] = field(default=None, init=False, repr=False)\n \"\"\"Optional[float]: Time this task was stopped, or None if not stopped.\"\"\"\n\n finished_speed: Optional[float] = None\n \"\"\"Optional[float]: The last speed for a finished task.\"\"\"\n\n _progress: Deque[ProgressSample] = field(\n default_factory=deque, init=False, repr=False\n )\n\n _lock: RLock = field(repr=False, default_factory=RLock)\n \"\"\"Thread lock.\"\"\"\n\n def get_time(self) -> float:\n \"\"\"float: Get the current time, in seconds.\"\"\"\n return self._get_time()\n\n @property\n def started(self) -> bool:\n \"\"\"bool: Check if the task as started.\"\"\"\n return self.start_time is not None\n\n @property\n def remaining(self) -> float:\n \"\"\"float: Get the number of steps remaining.\"\"\"\n return self.total - 
self.completed\n\n @property\n def elapsed(self) -> Optional[float]:\n \"\"\"Optional[float]: Time elapsed since task was started, or ``None`` if the task hasn't started.\"\"\"\n if self.start_time is None:\n return None\n if self.stop_time is not None:\n return self.stop_time - self.start_time\n return self.get_time() - self.start_time\n\n @property\n def finished(self) -> bool:\n \"\"\"Check if the task has finished.\"\"\"\n return self.finished_time is not None\n\n @property\n def percentage(self) -> float:\n \"\"\"float: Get progress of task as a percentage.\"\"\"\n if not self.total:\n return 0.0\n completed = (self.completed / self.total) * 100.0\n completed = min(100.0, max(0.0, completed))\n return completed\n\n @property\n def speed(self) -> Optional[float]:\n \"\"\"Optional[float]: Get the estimated speed in steps per second.\"\"\"\n if self.start_time is None:\n return None\n with self._lock:\n progress = self._progress\n if not progress:\n return None\n total_time = progress[-1].timestamp - progress[0].timestamp\n if total_time == 0:\n return None\n iter_progress = iter(progress)\n next(iter_progress)\n total_completed = sum(sample.completed for sample in iter_progress)\n speed = total_completed / total_time\n return speed\n\n @property\n def time_remaining(self) -> Optional[float]:\n \"\"\"Optional[float]: Get estimated time to completion, or ``None`` if no data.\"\"\"\n if self.finished:\n return 0.0\n speed = self.speed\n if not speed:\n return None\n estimate = ceil(self.remaining / speed)\n return estimate\n\n def _reset(self) -> None:\n \"\"\"Reset progress.\"\"\"\n self._progress.clear()\n self.finished_time = None\n self.finished_speed = None\n\n\nclass Progress(JupyterMixin):\n \"\"\"Renders an auto-updating progress bar(s).\n\n Args:\n console (Console, optional): Optional Console instance. Default will an internal Console instance writing to stdout.\n auto_refresh (bool, optional): Enable auto refresh. If disabled, you will need to call `refresh()`.\n refresh_per_second (Optional[float], optional): Number of times per second to refresh the progress information or None to use default (10). Defaults to None.\n speed_estimate_period: (float, optional): Period (in seconds) used to calculate the speed estimate. Defaults to 30.\n transient: (bool, optional): Clear the progress on exit. Defaults to False.\n redirect_stdout: (bool, optional): Enable redirection of stdout, so ``print`` may be used. Defaults to True.\n redirect_stderr: (bool, optional): Enable redirection of stderr. Defaults to True.\n get_time: (Callable, optional): A callable that gets the current time, or None to use Console.get_time. Defaults to None.\n disable (bool, optional): Disable progress display. Defaults to False\n expand (bool, optional): Expand tasks table to fit width. 
Defaults to False.\n \"\"\"\n\n def __init__(\n self,\n *columns: Union[str, ProgressColumn],\n console: Optional[Console] = None,\n auto_refresh: bool = True,\n refresh_per_second: float = 10,\n speed_estimate_period: float = 30.0,\n transient: bool = False,\n redirect_stdout: bool = True,\n redirect_stderr: bool = True,\n get_time: Optional[GetTimeCallable] = None,\n disable: bool = False,\n expand: bool = False,\n ) -> None:\n assert (\n refresh_per_second is None or refresh_per_second > 0\n ), \"refresh_per_second must be > 0\"\n self._lock = RLock()\n self.columns = columns or (\n TextColumn(\"[progress.description]{task.description}\"),\n BarColumn(),\n TextColumn(\"[progress.percentage]{task.percentage:>3.0f}%\"),\n TimeRemainingColumn(),\n )\n self.speed_estimate_period = speed_estimate_period\n\n self.disable = disable\n self.expand = expand\n self._tasks: Dict[TaskID, Task] = {}\n self._task_index: TaskID = TaskID(0)\n self.live = Live(\n console=console or get_console(),\n auto_refresh=auto_refresh,\n refresh_per_second=refresh_per_second,\n transient=transient,\n redirect_stdout=redirect_stdout,\n redirect_stderr=redirect_stderr,\n get_renderable=self.get_renderable,\n )\n self.get_time = get_time or self.console.get_time\n self.print = self.console.print\n self.log = self.console.log\n\n @property\n def console(self) -> Console:\n return self.live.console\n\n @property\n def tasks(self) -> List[Task]:\n \"\"\"Get a list of Task instances.\"\"\"\n with self._lock:\n return list(self._tasks.values())\n\n @property\n def task_ids(self) -> List[TaskID]:\n \"\"\"A list of task IDs.\"\"\"\n with self._lock:\n return list(self._tasks.keys())\n\n @property\n def finished(self) -> bool:\n \"\"\"Check if all tasks have been completed.\"\"\"\n with self._lock:\n if not self._tasks:\n return True\n return all(task.finished for task in self._tasks.values())\n\n def start(self) -> None:\n \"\"\"Start the progress display.\"\"\"\n if not self.disable:\n self.live.start(refresh=True)\n\n def stop(self) -> None:\n \"\"\"Stop the progress display.\"\"\"\n self.live.stop()\n if not self.console.is_interactive:\n self.console.print()\n\n def __enter__(self) -> \"Progress\":\n self.start()\n return self\n\n def __exit__(\n self,\n exc_type: Optional[Type[BaseException]],\n exc_val: Optional[BaseException],\n exc_tb: Optional[TracebackType],\n ) -> None:\n self.stop()\n\n def track(\n self,\n sequence: Union[Iterable[ProgressType], Sequence[ProgressType]],\n total: Optional[float] = None,\n task_id: Optional[TaskID] = None,\n description: str = \"Working...\",\n update_period: float = 0.1,\n ) -> Iterable[ProgressType]:\n \"\"\"Track progress by iterating over a sequence.\n\n Args:\n sequence (Sequence[ProgressType]): A sequence of values you want to iterate over and track progress.\n total: (float, optional): Total number of steps. Default is len(sequence).\n task_id: (TaskID): Task to track. Default is new task.\n description: (str, optional): Description of task, if new task is created.\n update_period (float, optional): Minimum time (in seconds) between calls to update(). 
Defaults to 0.1.\n\n Returns:\n Iterable[ProgressType]: An iterable of values taken from the provided sequence.\n \"\"\"\n\n if total is None:\n if isinstance(sequence, Sized):\n task_total = float(len(sequence))\n else:\n raise ValueError(\n f\"unable to get size of {sequence!r}, please specify 'total'\"\n )\n else:\n task_total = total\n\n if task_id is None:\n task_id = self.add_task(description, total=task_total)\n else:\n self.update(task_id, total=task_total)\n\n if self.live.auto_refresh:\n with _TrackThread(self, task_id, update_period) as track_thread:\n for value in sequence:\n yield value\n track_thread.completed += 1\n else:\n advance = self.advance\n refresh = self.refresh\n for value in sequence:\n yield value\n advance(task_id, 1)\n refresh()\n\n def start_task(self, task_id: TaskID) -> None:\n \"\"\"Start a task.\n\n Starts a task (used when calculating elapsed time). You may need to call this manually,\n if you called ``add_task`` with ``start=False``.\n\n Args:\n task_id (TaskID): ID of task.\n \"\"\"\n with self._lock:\n task = self._tasks[task_id]\n if task.start_time is None:\n task.start_time = self.get_time()\n\n def stop_task(self, task_id: TaskID) -> None:\n \"\"\"Stop a task.\n\n This will freeze the elapsed time on the task.\n\n Args:\n task_id (TaskID): ID of task.\n \"\"\"\n with self._lock:\n task = self._tasks[task_id]\n current_time = self.get_time()\n if task.start_time is None:\n task.start_time = current_time\n task.stop_time = current_time\n\n def update(\n self,\n task_id: TaskID,\n *,\n total: Optional[float] = None,\n completed: Optional[float] = None,\n advance: Optional[float] = None,\n description: Optional[str] = None,\n visible: Optional[bool] = None,\n refresh: bool = False,\n **fields: Any,\n ) -> None:\n \"\"\"Update information associated with a task.\n\n Args:\n task_id (TaskID): Task id (returned by add_task).\n total (float, optional): Updates task.total if not None.\n completed (float, optional): Updates task.completed if not None.\n advance (float, optional): Add a value to task.completed if not None.\n description (str, optional): Change task description if not None.\n visible (bool, optional): Set visible flag if not None.\n refresh (bool): Force a refresh of progress information. 
Default is False.\n **fields (Any): Additional data fields required for rendering.\n \"\"\"\n with self._lock:\n task = self._tasks[task_id]\n completed_start = task.completed\n\n if total is not None and total != task.total:\n task.total = total\n task._reset()\n if advance is not None:\n task.completed += advance\n if completed is not None:\n task.completed = completed\n if description is not None:\n task.description = description\n if visible is not None:\n task.visible = visible\n task.fields.update(fields)\n update_completed = task.completed - completed_start\n\n current_time = self.get_time()\n old_sample_time = current_time - self.speed_estimate_period\n _progress = task._progress\n\n popleft = _progress.popleft\n while _progress and _progress[0].timestamp < old_sample_time:\n popleft()\n while len(_progress) > 1000:\n popleft()\n if update_completed > 0:\n _progress.append(ProgressSample(current_time, update_completed))\n if task.completed >= task.total and task.finished_time is None:\n task.finished_time = task.elapsed\n\n if refresh:\n self.refresh()\n\n def reset(\n self,\n task_id: TaskID,\n *,\n start: bool = True,\n total: Optional[float] = None,\n completed: int = 0,\n visible: Optional[bool] = None,\n description: Optional[str] = None,\n **fields: Any,\n ) -> None:\n \"\"\"Reset a task so completed is 0 and the clock is reset.\n\n Args:\n task_id (TaskID): ID of task.\n start (bool, optional): Start the task after reset. Defaults to True.\n total (float, optional): New total steps in task, or None to use current total. Defaults to None.\n completed (int, optional): Number of steps completed. Defaults to 0.\n **fields (str): Additional data fields required for rendering.\n \"\"\"\n current_time = self.get_time()\n with self._lock:\n task = self._tasks[task_id]\n task._reset()\n task.start_time = current_time if start else None\n if total is not None:\n task.total = total\n task.completed = completed\n if visible is not None:\n task.visible = visible\n if fields:\n task.fields = fields\n if description is not None:\n task.description = description\n task.finished_time = None\n self.refresh()\n\n def advance(self, task_id: TaskID, advance: float = 1) -> None:\n \"\"\"Advance task by a number of steps.\n\n Args:\n task_id (TaskID): ID of task.\n advance (float): Number of steps to advance. 
Default is 1.\n \"\"\"\n current_time = self.get_time()\n with self._lock:\n task = self._tasks[task_id]\n completed_start = task.completed\n task.completed += advance\n update_completed = task.completed - completed_start\n old_sample_time = current_time - self.speed_estimate_period\n _progress = task._progress\n\n popleft = _progress.popleft\n while _progress and _progress[0].timestamp < old_sample_time:\n popleft()\n while len(_progress) > 1000:\n popleft()\n _progress.append(ProgressSample(current_time, update_completed))\n if task.completed >= task.total and task.finished_time is None:\n task.finished_time = task.elapsed\n task.finished_speed = task.speed\n\n def refresh(self) -> None:\n \"\"\"Refresh (render) the progress information.\"\"\"\n if not self.disable and self.live.is_started:\n self.live.refresh()\n\n def get_renderable(self) -> RenderableType:\n \"\"\"Get a renderable for the progress display.\"\"\"\n renderable = Group(*self.get_renderables())\n return renderable\n\n def get_renderables(self) -> Iterable[RenderableType]:\n \"\"\"Get a number of renderables for the progress display.\"\"\"\n table = self.make_tasks_table(self.tasks)\n yield table\n\n def make_tasks_table(self, tasks: Iterable[Task]) -> Table:\n \"\"\"Get a table to render the Progress display.\n\n Args:\n tasks (Iterable[Task]): An iterable of Task instances, one per row of the table.\n\n Returns:\n Table: A table instance.\n \"\"\"\n table_columns = (\n (\n Column(no_wrap=True)\n if isinstance(_column, str)\n else _column.get_table_column().copy()\n )\n for _column in self.columns\n )\n table = Table.grid(*table_columns, padding=(0, 1), expand=self.expand)\n\n for task in tasks:\n if task.visible:\n table.add_row(\n *(\n (\n column.format(task=task)\n if isinstance(column, str)\n else column(task)\n )\n for column in self.columns\n )\n )\n return table\n\n def __rich__(self) -> RenderableType:\n \"\"\"Makes the Progress class itself renderable.\"\"\"\n with self._lock:\n return self.get_renderable()\n\n def add_task(\n self,\n description: str,\n start: bool = True,\n total: float = 100.0,\n completed: int = 0,\n visible: bool = True,\n **fields: Any,\n ) -> TaskID:\n \"\"\"Add a new 'task' to the Progress display.\n\n Args:\n description (str): A description of the task.\n start (bool, optional): Start the task immediately (to calculate elapsed time). If set to False,\n you will need to call `start` manually. Defaults to True.\n total (float, optional): Number of total steps in the progress if know. Defaults to 100.\n completed (int, optional): Number of steps completed so far.. Defaults to 0.\n visible (bool, optional): Enable display of the task. 
Defaults to True.\n **fields (str): Additional data fields required for rendering.\n\n Returns:\n TaskID: An ID you can use when calling `update`.\n \"\"\"\n with self._lock:\n task = Task(\n self._task_index,\n description,\n total,\n completed,\n visible=visible,\n fields=fields,\n _get_time=self.get_time,\n _lock=self._lock,\n )\n self._tasks[self._task_index] = task\n if start:\n self.start_task(self._task_index)\n new_task_index = self._task_index\n self._task_index = TaskID(int(self._task_index) + 1)\n self.refresh()\n return new_task_index\n\n def remove_task(self, task_id: TaskID) -> None:\n \"\"\"Delete a task if it exists.\n\n Args:\n task_id (TaskID): A task ID.\n\n \"\"\"\n with self._lock:\n del self._tasks[task_id]\n\n\nif __name__ == \"__main__\": # pragma: no coverage\n\n import random\n import time\n\n from .panel import Panel\n from .rule import Rule\n from .syntax import Syntax\n from .table import Table\n\n syntax = Syntax(\n '''def loop_last(values: Iterable[T]) -> Iterable[Tuple[bool, T]]:\n \"\"\"Iterate and generate a tuple with a flag for last value.\"\"\"\n iter_values = iter(values)\n try:\n previous_value = next(iter_values)\n except StopIteration:\n return\n for value in iter_values:\n yield False, previous_value\n previous_value = value\n yield True, previous_value''',\n \"python\",\n line_numbers=True,\n )\n\n table = Table(\"foo\", \"bar\", \"baz\")\n table.add_row(\"1\", \"2\", \"3\")\n\n progress_renderables = [\n \"Text may be printed while the progress bars are rendering.\",\n Panel(\"In fact, [i]any[/i] renderable will work\"),\n \"Such as [magenta]tables[/]...\",\n table,\n \"Pretty printed structures...\",\n {\"type\": \"example\", \"text\": \"Pretty printed\"},\n \"Syntax...\",\n syntax,\n Rule(\"Give it a try!\"),\n ]\n\n from itertools import cycle\n\n examples = cycle(progress_renderables)\n\n console = Console(record=True)\n\n with Progress(\n SpinnerColumn(),\n TextColumn(\"[progress.description]{task.description}\"),\n BarColumn(),\n TextColumn(\"[progress.percentage]{task.percentage:>3.0f}%\"),\n TimeRemainingColumn(),\n TimeElapsedColumn(),\n console=console,\n transient=True,\n ) as progress:\n\n task1 = progress.add_task(\"[red]Downloading\", total=1000)\n task2 = progress.add_task(\"[green]Processing\", total=1000)\n task3 = progress.add_task(\"[yellow]Thinking\", total=1000, start=False)\n\n while not progress.finished:\n progress.update(task1, advance=0.5)\n progress.update(task2, advance=0.3)\n time.sleep(0.01)\n if random.randint(0, 100) < 1:\n progress.log(next(examples))\n"}
|
{"rich/progress.py": [{"type": "function", "name": "Progress.get_default_columns", "lines": [612, 639], "signature": "def get_default_columns(cls) -> Tuple[ProgressColumn, ...]:", "doc": "Get the default columns used for a new Progress instance:\n - a text column for the description (TextColumn)\n - the bar itself (BarColumn)\n - a text column showing completion percentage (TextColumn)\n - an estimated-time-remaining column (TimeRemainingColumn)\nIf the Progress instance is created without passing a columns argument,\nthe default columns defined here will be used.\n\nYou can also create a Progress instance using custom columns before\nand/or after the defaults, as in this example:\n\n progress = Progress(\n SpinnerColumn(),\n *Progress.default_columns(),\n \"Elapsed:\",\n TimeElapsedColumn(),\n )\n\nThis code shows the creation of a Progress display, containing\na spinner to the left, the default columns, and a labeled elapsed\ntime column."}]}
| null |
["tests/test_progress.py::test_using_default_columns"]
|
["tests/test_progress.py::test_bar_columns", "tests/test_progress.py::test_text_column", "tests/test_progress.py::test_time_elapsed_column", "tests/test_progress.py::test_time_remaining_column", "tests/test_progress.py::test_renderable_column", "tests/test_progress.py::test_spinner_column", "tests/test_progress.py::test_download_progress_uses_decimal_units", "tests/test_progress.py::test_download_progress_uses_binary_units", "tests/test_progress.py::test_task_ids", "tests/test_progress.py::test_finished", "tests/test_progress.py::test_expand_bar", "tests/test_progress.py::test_render", "tests/test_progress.py::test_track", "tests/test_progress.py::test_progress_track", "tests/test_progress.py::test_columns", "tests/test_progress.py::test_task_create", "tests/test_progress.py::test_task_start", "tests/test_progress.py::test_task_zero_total", "tests/test_progress.py::test_progress_create", "tests/test_progress.py::test_track_thread", "tests/test_progress.py::test_reset", "tests/test_progress.py::test_progress_max_refresh", "tests/test_progress.py::test_live_is_started_if_progress_is_enabled", "tests/test_progress.py::test_live_is_not_started_if_progress_is_disabled", "tests/test_progress.py::test_no_output_if_progress_is_disabled"]
|
b0661de34bab35af9b4b1d3ba8e28b186b225e84
|
{"first_commit_time": 1643639341.0, "pr_title": "Add default_columns classmethod to Progress class", "pr_body": "## Type of changes\r\n\r\n- [ ] Bug fix\r\n- [x] New feature\r\n- [ ] Documentation / docstrings\r\n- [ ] Tests\r\n- [ ] Other\r\n\r\n## Checklist\r\n\r\n- [x] I've run the latest [black](https://github.com/psf/black) with default args on new code.\r\n- [ ] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.\r\n- [x] I've added tests for new code.\r\n- [x] I accept that @willmcgugan may be pedantic in the code review.\r\n\r\n## Description\r\n\r\nAdded new default_columns() classmethod to the Progress class, so that client code does not need to replicate the defaults literally, but can just add `*Progress.default_columns()`", "pr_timeline": [{"time": 1643656454.0, "comment": "# [Codecov](https://codecov.io/gh/Textualize/rich/pull/1894?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None) Report\n> Merging [#1894](https://codecov.io/gh/Textualize/rich/pull/1894?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None) (76e234b) into [master](https://codecov.io/gh/Textualize/rich/commit/e839bfb3593b0de1dca4221c32a8098e72639893?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None) (e839bfb) will **decrease** coverage by `0.02%`.\n> The diff coverage is `99.00%`.\n\n[](https://codecov.io/gh/Textualize/rich/pull/1894?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None)\n\n```diff\n@@ Coverage Diff @@\n## master #1894 +/- ##\n==========================================\n- Coverage 99.82% 99.80% -0.03% \n==========================================\n Files 71 71 \n Lines 6943 7035 +92 \n==========================================\n+ Hits 6931 7021 +90 \n- Misses 12 14 +2 \n```\n\n| Flag | Coverage \u0394 | |\n|---|---|---|\n| unittests | `99.80% <99.00%> (-0.03%)` | :arrow_down: |\n\nFlags with carried forward coverage won't be shown. 
[Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None#carryforward-flags-in-the-pull-request-comment) to find out more.\n\n| [Impacted Files](https://codecov.io/gh/Textualize/rich/pull/1894?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None) | Coverage \u0394 | |\n|---|---|---|\n| [rich/default\\_styles.py](https://codecov.io/gh/Textualize/rich/pull/1894/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None#diff-cmljaC9kZWZhdWx0X3N0eWxlcy5weQ==) | `100.00% <\u00f8> (\u00f8)` | |\n| [rich/syntax.py](https://codecov.io/gh/Textualize/rich/pull/1894/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None#diff-cmljaC9zeW50YXgucHk=) | `99.27% <97.72%> (-0.34%)` | :arrow_down: |\n| [rich/pretty.py](https://codecov.io/gh/Textualize/rich/pull/1894/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None#diff-cmljaC9wcmV0dHkucHk=) | `99.71% <98.88%> (-0.29%)` | :arrow_down: |\n| [rich/\\_\\_main\\_\\_.py](https://codecov.io/gh/Textualize/rich/pull/1894/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None#diff-cmljaC9fX21haW5fXy5weQ==) | `100.00% <100.00%> (\u00f8)` | |\n| [rich/\\_inspect.py](https://codecov.io/gh/Textualize/rich/pull/1894/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None#diff-cmljaC9faW5zcGVjdC5weQ==) | `100.00% <100.00%> (\u00f8)` | |\n| [rich/console.py](https://codecov.io/gh/Textualize/rich/pull/1894/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None#diff-cmljaC9jb25zb2xlLnB5) | `100.00% <100.00%> (\u00f8)` | |\n| [rich/progress.py](https://codecov.io/gh/Textualize/rich/pull/1894/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None#diff-cmljaC9wcm9ncmVzcy5weQ==) | `98.13% <100.00%> (+0.01%)` | :arrow_up: |\n| [rich/segment.py](https://codecov.io/gh/Textualize/rich/pull/1894/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None#diff-cmljaC9zZWdtZW50LnB5) | `99.34% <100.00%> (+0.05%)` | :arrow_up: |\n| [rich/table.py](https://codecov.io/gh/Textualize/rich/pull/1894/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None#diff-cmljaC90YWJsZS5weQ==) | `100.00% <100.00%> (\u00f8)` | |\n| [rich/traceback.py](https://codecov.io/gh/Textualize/rich/pull/1894/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None#diff-cmljaC90cmFjZWJhY2sucHk=) | `99.55% <100.00%> (\u00f8)` | |\n| ... 
and [1 more](https://codecov.io/gh/Textualize/rich/pull/1894/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/Textualize/rich/pull/1894?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None)\n> `\u0394 = absolute <relative> (impact)`, `\u00f8 = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/Textualize/rich/pull/1894?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None). Last update [633faab...76e234b](https://codecov.io/gh/Textualize/rich/pull/1894?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=None).\n"}, {"time": 1643838094.0, "comment": "This should be ready whenever you are. Unless you have further comments, I'll assume this is in your queue for the next release."}, {"time": 1643981602.0, "comment": "Thanks!"}], "issues": {}}
|
|
Textualize/rich
| 376
|
https://github.com/Textualize/rich/pull/376
|
Textualize__rich-376
|
[]
|
a83ee864e67d97be926894c7b5d3cf470194d6c1
|
diff --git a/CHANGELOG.md b/CHANGELOG.md
index f813ffeb0a..132fafbbc2 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -18,6 +18,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Addded box.SQUARE_DOUBLE_HEAD
- Added highlighting of EUI-48 and EUI-64 (MAC addresses)
- Added Console.pager
+- Added Console.out
### Changed
diff --git a/docs/source/console.rst b/docs/source/console.rst
index a3c690c794..f5565ee6b3 100644
--- a/docs/source/console.rst
+++ b/docs/source/console.rst
@@ -69,6 +69,15 @@ The :meth:`~rich.console.Console.log` methods offers the same capabilities as pr
To help with debugging, the log() method has a ``log_locals`` parameter. If you set this to ``True``, Rich will display a table of local variables where the method was called.
+Low level output
+----------------
+
+In addition to :meth:`~rich.console.Console.print` and :meth:`~rich.console.Console.log`, Rich has a :meth:`~rich.console.Console.out` method which provides a lower-level way of writing to the terminal. The out() method converts all the positional arguments to strings and won't pretty print, word wrap, or apply markup to the output, but can apply a basic style and will optionally do highlighting.
+
+Here's an example::
+
+ >>> console.out("Locals", locals())
+
Justify / Alignment
-------------------
diff --git a/rich/console.py b/rich/console.py
index 64d0109200..2e3afe8573 100644
--- a/rich/console.py
+++ b/rich/console.py
@@ -1012,6 +1012,37 @@ def control(self, control_codes: Union["Control", str]) -> None:
self._buffer.append(Segment.control(str(control_codes)))
self._check_buffer()
+ def out(
+ self,
+ *objects: Any,
+ sep=" ",
+ end="\n",
+ style: Union[str, Style] = None,
+ highlight: bool = True,
+ ) -> None:
+ """Output to the terminal. This is a low-level way of writing to the terminal which unlike
+ :meth:`~rich.console.Console.print` doesn't pretty print, wrap text, nor markup, but will highlighting
+ and apply basic style.
+
+ Args:
+ sep (str, optional): String to write between print data. Defaults to " ".
+ end (str, optional): String to write at end of print data. Defaults to "\\n".
+ style (Union[str, Style], optional): A style to apply to output. Defaults to None.
+ highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use console default. Defaults to ``None``.
+ """
+ raw_output: str = sep.join(str(_object) for _object in objects)
+ self.print(
+ raw_output,
+ style=style,
+ highlight=highlight,
+ emoji=False,
+ markup=False,
+ no_wrap=True,
+ overflow="ignore",
+ crop=False,
+ end=end,
+ )
+
def print(
self,
*objects: Any,
|
diff --git a/tests/test_console.py b/tests/test_console.py
index eca300095a..8dbe06933a 100644
--- a/tests/test_console.py
+++ b/tests/test_console.py
@@ -370,3 +370,10 @@ def mock_pager(content: str) -> None:
console.print("[bold link https:/example.org]Hello World")
assert pager_content == "Hello World\n"
+
+
+def test_out() -> None:
+ console = Console(width=10)
+ console.begin_capture()
+ console.out(*(["foo bar"] * 5), sep=".", end="X")
+ assert console.end_capture() == "foo bar.foo bar.foo bar.foo bar.foo barX"
diff --git a/tests/test_padding.py b/tests/test_padding.py
index fa86b5b176..419f0503dc 100644
--- a/tests/test_padding.py
+++ b/tests/test_padding.py
@@ -32,7 +32,11 @@ def test_rich_console():
renderable = "test renderable"
style = Style(color="red")
options = ConsoleOptions(
- min_width=10, max_width=20, is_terminal=False, encoding="utf-8"
+ legacy_windows=False,
+ min_width=10,
+ max_width=20,
+ is_terminal=False,
+ encoding="utf-8",
)
expected_outputs = [
| 2020-10-11T16:35:48
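The gold patch above introduces a `Console.out` method that joins its positional arguments with `sep` and writes them without markup, wrapping, or pretty printing, while still applying highlighting and an optional style; the test patch exercises it via `test_out`. A short usage sketch (illustrative only, assuming a rich release that includes this method):

```python
# Sketch only: low-level output with Console.out, per the patch above.
from rich.console import Console

console = Console()

# Arguments are converted to strings and joined with `sep`; markup and
# pretty printing are skipped, but highlighting and `style` still apply.
console.out("Locals", {"x": 1, "y": 2})
console.out("left", "right", sep=" | ", style="bold", end="\n")
```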
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
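The README above notes that `feabench.run_prediction` writes its results as `.jsonl` files in the chosen `output_dir`. A hypothetical helper for peeking at such a file (the path and any record keys are assumptions for illustration, not part of FEA-Bench itself):

```python
# Sketch only: inspect a predictions .jsonl file (one JSON object per line).
import json
from pathlib import Path


def load_predictions(path: str) -> list[dict]:
    """Read a .jsonl predictions file into a list of dicts."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    return [json.loads(line) for line in lines if line.strip()]


# Hypothetical output path following the README's VLLM example.
preds = load_predictions(
    "scripts/experiments/results_full/natural-detailed/predictions.jsonl"
)
print(f"{len(preds)} predictions; first record keys: {sorted(preds[0]) if preds else []}")
```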
|
{"CHANGELOG.md": "# Changelog\n\nAll notable changes to this project will be documented in this file.\n\nThe format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),\nand this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).\n\n## [9.0.0] - unreleased\n\n### Fixed\n\n- Progress download column now displays decimal units\n\n### Added\n\n- Added legacy_windows to ConsoleOptions\n- Added ascii_only to ConsoleOptions\n- Addded box.SQUARE_DOUBLE_HEAD\n- Added highlighting of EUI-48 and EUI-64 (MAC addresses)\n- Added Console.pager\n\n### Changed\n\n- Dropped box.get_safe_box function in favor of Box.substitute\n\n### Fixed\n\n- Fixed typo in `Style.transparent_background` method name.\n\n## [8.0.0] - 2020-10-03\n\n### Added\n\n- Added Console.bell method\n- Added Set to types that Console.print will automatically pretty print\n- Added show_locals to Traceback\n- Added theme stack mechanism, see Console.push_theme and Console.pop_theme\n\n### Changed\n\n- Changed Style.empty to Style.null to better reflect what it does\n- Optimized combining styles involving a null style\n- Change error messages in Style.parse to read better\n\n### Fixed\n\n- Fixed Table.\\_\\_rich_measure\\_\\_\n- Fixed incorrect calculation of fixed width columns\n\n## [7.1.0] - 2020-09-26\n\n### Added\n\n- Added Console.begin_capture, Console.end_capture and Console.capture\n- Added Table.title_justify and Table.caption_justify https://github.com/willmcgugan/rich/issues/301\n\n### Changed\n\n- Improved formatting of exceptions\n- Enabled Rich exceptions in logging https://github.com/taliraj\n- UTF-8 encoding is now mentioned in HTML head section\n\n### Removed\n\n- Removed line_numbers argument from traceback.install, which was undocumented and did nothing\n\n## [7.0.0] - 2020-09-18\n\n### Added\n\n- New ansi_dark and ansi_light themes\n- Added Text.append_tokens for fast appending of string + Style pairs\n- Added Text.remove_suffix\n- Added Text.append_tokens\n\n### Changed\n\n- Text.tabs_to_spaces was renamed to Text.expand_tabs, which works in place rather than returning a new instance\n- Renamed Column.index to Column.\\_index\n- Optimized Style.combine and Style.chain\n- Optimized text rendering by fixing internal cache mechanism\n- Optimized hash generation for Styles\n\n## [6.2.0] - 2020-09-13\n\n### Added\n\n- Added inline code highlighting to Markdown\n\n## [6.1.2] - 2020-09-11\n\n### Added\n\n- Added ipv4 and ipv6 to ReprHighlighter\n\n### Changed\n\n- The `#` sign is included in url highlighting\n\n### Fixed\n\n- Fixed force-color switch in rich.syntax and rich.markdown commands\n\n## [6.1.1] - 2020-09-07\n\n### Changed\n\n- Restored \"def\" in inspect signature\n\n## [6.1.0] - 2020-09-07\n\n### Added\n\n- New inspect module\n- Added os.\\_Environ to pretty print\n\n### Fixed\n\n- Prevented recursive renderables from getting stuck\n\n## Changed\n\n- force_terminal and force_jupyter can now be used to force the disabled state, or left as None to auto-detect.\n- Panel now expands to fit title if supplied\n\n## [6.0.0] - 2020-08-25\n\n### Fixed\n\n- Fixed use of `__rich__` cast\n\n### Changed\n\n- New algorithm to pretty print which fits more on a line if possible\n- Deprecated `character` parameter in Rule and Console.rule, in favor of `characters`\n- Optimized Syntax.from_path to avoid searching all lexers, which also speeds up tracebacks\n\n### Added\n\n- Added soft_wrap flag to Console.print\n\n## [5.2.1] - 2020-08-19\n\n### Fixed\n\n- Fixed underscore with display 
hook https://github.com/willmcgugan/rich/issues/235\n\n## [5.2.0] - 2020-08-14\n\n### Changed\n\n- Added crop argument to Console.print\n- Added \"ignore\" overflow method\n- Added multiple characters per rule @hedythedev https://github.com/willmcgugan/rich/pull/207\n\n## [5.1.2] - 2020-08-10\n\n### Fixed\n\n- Further optimized pretty printing ~5X.\n\n## [5.1.1] - 2020-08-09\n\n### Fixed\n\n- Optimized pretty printing ~3X faster\n\n## [5.1.0] - 2020-08-08\n\n### Added\n\n- Added Text.cell_len\n- Added helpful message regarding unicode decoding errors https://github.com/willmcgugan/rich/issues/212\n- Added display hook with pretty.install()\n\n### Fixed\n\n- Fixed deprecation warnings re backslash https://github.com/willmcgugan/rich/issues/210\n- Fixed repr highlighting of scientific notation, e.g. 1e100\n\n### Changed\n\n- Implemented pretty printing, and removed pprintpp from dependencies\n- Optimized Text.join\n\n## [5.0.0] - 2020-08-02\n\n### Changed\n\n- Change to console markup syntax to not parse Python structures as markup, i.e. `[1,2,3]` is treated as a literal, not a tag.\n- Standard color numbers syntax has changed to `\"color(<number>)\"` so that `[5]` (for example) is considered a literal.\n- Markup escape method has changed from double brackets to preceding with a backslash, so `foo[[]]` would be `foo\\[bar]`\n\n## [4.2.2] - 2020-07-30\n\n### Changed\n\n- Added thread to automatically call update() in progress.track(). Replacing previous adaptive algorithm.\n- Second attempt at working around https://bugs.python.org/issue37871\n\n## [4.2.1] - 2020-07-29\n\n### Added\n\n- Added show_time and show_level parameters to RichHandler https://github.com/willmcgugan/rich/pull/182\n\n### Fixed\n\n- Fixed progress.track iterator exiting early https://github.com/willmcgugan/rich/issues/189\n- Added workaround for Python bug https://bugs.python.org/issue37871, fixing https://github.com/willmcgugan/rich/issues/186\n\n### Changed\n\n- Set overflow=fold for log messages https://github.com/willmcgugan/rich/issues/190\n\n## [4.2.0] - 2020-07-27\n\n### Fixed\n\n- Fixed missing new lines https://github.com/willmcgugan/rich/issues/178\n- Fixed Progress.track https://github.com/willmcgugan/rich/issues/184\n- Remove control codes from exported text https://github.com/willmcgugan/rich/issues/181\n- Implemented auto-detection and color rendition of 16-color mode\n\n## [4.1.0] - 2020-07-26\n\n### Changed\n\n- Optimized progress.track for very quick iterations\n- Force default size of 80x25 if get_terminal_size reports size of 0,0\n\n## [4.0.0] - 2020-07-23\n\nMajor version bump for a breaking change to `Text.stylize signature`, which corrects a minor but irritating API wart. The style now comes first and the `start` and `end` offsets default to the entire text. This allows for `text.stylize_all(style)` to be replaced with `text.stylize(style)`. 
The `start` and `end` offsets now support negative indexing, so `text.stylize(\"bold\", -1)` makes the last character bold.\n\n### Added\n\n- Added markup switch to RichHandler https://github.com/willmcgugan/rich/issues/171\n\n### Changed\n\n- Change signature of Text.stylize to accept style first\n- Remove Text.stylize_all which is no longer necessary\n\n### Fixed\n\n- Fixed rendering of Confirm prompt https://github.com/willmcgugan/rich/issues/170\n\n## [3.4.1] - 2020-07-22\n\n### Fixed\n\n- Fixed incorrect default of expand in Table.grid\n\n## [3.4.0] - 2020-07-22\n\n### Added\n\n- Added stream parameter to Console.input\n- Added password parameter to Console.input\n- Added description parameter to Progress.update\n- Added rich.prompt\n- Added detecting 'dumb' terminals\n- Added Text.styled alternative constructor\n\n### Fixes\n\n- Fixed progress bars so that they are readable when color is disabled\n\n## [3.3.2] - 2020-07-14\n\n### Changed\n\n- Optimized Text.pad\n\n### Added\n\n- Added rich.scope\n- Change log_locals to use scope.render_scope\n- Added title parameter to Columns\n\n## [3.3.1] - 2020-07-13\n\n### Added\n\n- box.ASCII_DOUBLE_HEAD\n\n### Changed\n\n- Removed replace of -- --- ... from Markdown, as it made it impossible to include CLI info\n\n## [3.3.0] - 2020-07-12\n\n### Added\n\n- Added title and title_align options to Panel\n- Added pad and width parameters to Align\n- Added end parameter to Rule\n- Added Text.pad and Text.align methods\n- Added leading parameter to Table\n\n## [3.2.0] - 2020-07-10\n\n### Added\n\n- Added Align.left Align.center Align.right shortcuts\n- Added Panel.fit shortcut\n- Added align parameter to Columns\n\n### Fixed\n\n- Align class now pads to the right, like Text\n- ipywidgets added as an optional dependency\n- Issue with Panel and background color\n- Fixed missing `__bool__` on Segment\n\n### Changed\n\n- Added `border_style` argument to Panel (note, `style` now applies to interior of the panel)\n\n## [3.1.0] - 2020-07-09\n\n### Changed\n\n- Progress bars now work in Jupyter\n\n## Added\n\n- Added refresh_per_second to progress.track\n- Added styles to BarColumn and progress.track\n\n## [3.0.5] - 2020-07-07\n\n### Fixed\n\n- Fixed Windows version number require for truecolor\n\n## [3.0.4] - 2020-07-07\n\n### Changed\n\n- More precise detection of Windows console https://github.com/willmcgugan/rich/issues/140\n\n## [3.0.3] - 2020-07-03\n\n### Fixed\n\n- Fixed edge case with wrapped and overflowed text\n\n### Changed\n\n- New algorithm for compressing table that priorities smaller columns\n\n### Added\n\n- Added safe_box parameter to Console constructor\n\n## [3.0.2] - 2020-07-02\n\n### Added\n\n- Added rich.styled.Styled class to apply styles to renderable\n- Table.add_row now has an optional style parameter\n- Added table_movie.py to examples\n\n### Changed\n\n- Modified box options to use half line characters at edges\n- Non no_wrap columns will now shrink below minimum width if table is compressed\n\n## [3.0.1] - 2020-06-30\n\n### Added\n\n- Added box.ASCII2\n- Added markup argument to logging extra\n\n### Changed\n\n- Setting a non-None width now implies expand=True\n\n## [3.0.0] - 2020-06-28\n\n### Changed\n\n- Enabled supported box chars for legacy Windows, and introduce `safe_box` flag\n- Disable hyperlinks on legacy Windows\n- Constructors for Rule and Panel now have keyword only arguments (reason for major version bump)\n- Table.add_colum added keyword only arguments\n\n### Fixed\n\n- Fixed Table measure\n\n## [2.3.1] - 
2020-06-26\n\n### Fixed\n\n- Disabled legacy_windows if jupyter is detected https://github.com/willmcgugan/rich/issues/125\n\n## [2.3.0] - 2020-06-26\n\n### Fixed\n\n- Fixed highlighting of paths / filenames\n- Corrected docs for RichHandler which erroneously said default console writes to stderr\n\n### Changed\n\n- Allowed `style` parameter for `highlight_regex` to be a callable that returns a style\n\n### Added\n\n- Added optional highlighter parameter to RichHandler\n\n## [2.2.6] - 2020-06-24\n\n### Changed\n\n- Store a \"link id\" on Style instance, so links containing different styles are highlighted together. (https://github.com/willmcgugan/rich/pull/123)\n\n## [2.2.5] - 2020-06-23\n\n### Fixed\n\n- Fixed justify of tables (https://github.com/willmcgugan/rich/issues/117)\n\n## [2.2.4] - 2020-06-21\n\n### Added\n\n- Added enable_link_path to RichHandler\n- Added legacy_windows switch to Console constructor\n\n## [2.2.3] - 2020-06-15\n\n### Fixed\n\n- Fixed console.log hyperlink not containing full path\n\n### Changed\n\n- Used random number for hyperlink id\n\n## [2.2.2] - 2020-06-14\n\n### Changed\n\n- Exposed RichHandler highlighter as a class var\n\n## [2.2.1] - 2020-06-14\n\n### Changed\n\n- Linked path in log render to file\n\n## [2.2.0] - 2020-06-14\n\n### Added\n\n- Added redirect_stdout and redirect_stderr to Progress\n\n### Changed\n\n- printing to console with an active Progress doesn't break visuals\n\n## [2.1.0] - 2020-06-11\n\n### Added\n\n- Added 'transient' option to Progress\n\n### Changed\n\n- Truncated overly long text in Rule with ellipsis overflow\n\n## [2.0.1] - 2020-06-10\n\n### Added\n\n- Added expand option to Padding\n\n### Changed\n\n- Some minor optimizations in Text\n\n### Fixed\n\n- Fixed broken rule with CJK text\n\n## [2.0.0] - 2020-06-06\n\n### Added\n\n- Added overflow methods\n- Added no_wrap option to print()\n- Added width option to print\n- Improved handling of compressed tables\n\n### Fixed\n\n- Fixed erroneous space at end of log\n- Fixed erroneous space at end of progress bar\n\n### Changed\n\n- Renamed \\_ratio.ratio_divide to \\_ratio.ratio_distribute\n- Renamed JustifyValues to JustifyMethod (backwards incompatible)\n- Optimized \\_trim_spans\n- Enforced keyword args in Console / Text interfaces (backwards incompatible)\n- Return self from text.append\n\n## [1.3.1] - 2020-06-01\n\n### Changed\n\n- Changed defaults of Table.grid\n- Polished listdir.py example\n\n### Added\n\n- Added width argument to Columns\n\n### Fixed\n\n- Fixed for `columns_first` argument in Columns\n- Fixed incorrect padding in columns with fixed width\n\n## [1.3.0] - 2020-05-31\n\n### Added\n\n- Added rich.get_console() function to get global console instance.\n- Added Columns class\n\n### Changed\n\n- Updated `markdown.Heading.create()` to work with subclassing.\n- Console now transparently works with Jupyter\n\n### Fixed\n\n- Fixed issue with broken table with show_edge=False and a non-None box arg\n\n## [1.2.3] - 2020-05-24\n\n### Added\n\n- Added `padding` parameter to Panel\n- Added 'indeterminate' state when progress bars aren't started\n\n### Fixed\n\n- Fixed Progress deadlock https://github.com/willmcgugan/rich/issues/90\n\n### Changed\n\n- Auto-detect \"truecolor\" color system when in Windows Terminal\n\n## [1.2.2] - 2020-05-22\n\n### Fixed\n\n- Issue with right aligned wrapped text adding extra spaces\n\n## [1.2.1] - 2020-05-22\n\n### Fixed\n\n- Issue with sum and Style\n\n## [1.2.0] - 2020-05-22\n\n### Added\n\n- Support for double underline, framed, 
encircled, and overlined attributes\n\n### Changed\n\n- Optimized Style\n- Changed methods `__console__` to `__rich_console__`, and `__measure__` to `__rich_measure__`\n\n## [1.1.9] - 2020-05-20\n\n### Fixed\n\n- Exception when BarColumn.bar_width == None\n\n## [1.1.8] - 2020-05-20\n\n### Changed\n\n- Optimizations for Segment, Console and Table\n\n### Added\n\n- Added Console.clear method\n- Added exporting of links to HTML\n\n## [1.1.7] - 2020-05-19\n\n### Added\n\n- Added collapse_padding option to Table.\n\n### Changed\n\n- Some style attributes may be abbreviated (b for bold, i for italic etc). Previously abbreviations worked in console markup but only one at a time, i.e. \"[b]Hello[/]\" but not \"[b i]Hello[/]\" -- now they work everywhere.\n- Renamed 'text' property on Text to 'plain'. i.e. text.plain returns a string version of the Text instance.\n\n### Fixed\n\n- Fixed zero division if total is 0 in progress bar\n\n## [1.1.6] - 2020-05-17\n\n### Added\n\n- Added rich.align.Align class\n- Added justify argument to Console.print and console.log\n\n## [1.1.5] - 2020-05-15\n\n### Changed\n\n- Changed progress bars to write to stdout on terminal and hide on non-terminal\n\n## [1.1.4] - 2020-05-15\n\n### Fixed\n\n- Fixed incorrect file and link in progress.log\n- Fixes for legacy windows: Bar, Panel, and Rule now use ASCII characters\n- show_cursor is now a no-op on legacy windows\n\n### Added\n\n- Added Console.input\n\n### Changed\n\n- Disable progress bars when not writing to a terminal\n\n## [1.1.3] - 2020-05-14\n\n### Fixed\n\n- Issue with progress of one line`\n\n## [1.1.2] - 2020-05-14\n\n### Added\n\n- Added -p switch to python -m rich.markdown to page output\n- Added Console.control to output control codes\n\n### Changed\n\n- Changed Console log_time_format to no longer require a space at the end\n- Added print and log to Progress to render terminal output when progress is active\n\n## [1.1.1] - 2020-05-12\n\n### Changed\n\n- Stripped cursor moving control codes from text\n\n## [1.1.0] - 2020-05-10\n\n### Added\n\n- Added hyperlinks to Style and markup\n- Added justify and code theme switches to markdown command\n\n## [1.0.3] - 2020-05-08\n\n### Added\n\n- Added `python -m rich.syntax` command\n\n## [1.0.2] - 2020-05-08\n\n### Fixed\n\n- Issue with Windows legacy support https://github.com/willmcgugan/rich/issues/59\n\n## [1.0.1] - 2020-05-08\n\n### Changed\n\n- Applied console markup after highlighting\n- Documented highlighting\n- Changed Markup parser to handle overlapping styles\n- Relaxed dependency on colorama\n- Allowed Theme to accept values as style definitions (str) as well as Style instances\n- Added a panel to emphasize code in Markdown\n\n### Added\n\n- Added markup.escape\n- Added `python -m rich.theme` command\n- Added `python -m rich.markdown` command\n- Added rendering of images in Readme (links only)\n\n### Fixed\n\n- Fixed Text.assemble not working with strings https://github.com/willmcgugan/rich/issues/57\n- Fixed table when column widths must be compressed to fit\n\n## [1.0.0] - 2020-05-03\n\n### Changed\n\n- Improvements to repr highlighter to highlight URLs\n\n## [0.8.13] - 2020-04-28\n\n### Fixed\n\n- Fixed incorrect markdown rendering for quotes and changed style\n\n## [0.8.12] - 2020-04-21\n\n### Fixed\n\n- Removed debug print from rich.progress\n\n## [0.8.11] - 2020-04-14\n\n### Added\n\n- Added Table.show_lines to render lines between rows\n\n### Changed\n\n- Added markup escape with double square brackets\n\n## [0.8.10] - 2020-04-12\n\n### 
Fixed\n\n- Fix row_styles applying to header\n\n## [0.8.9] - 2020-04-12\n\n### Changed\n\n- Added force_terminal option to `Console.__init__`\n\n### Added\n\n- Added Table.row_styles to enable zebra striping.\n\n## [0.8.8] - 2020-03-31\n\n### Fixed\n\n- Fixed background in Syntax\n\n## [0.8.7] - 2020-03-31\n\n### Fixed\n\n- Broken wrapping of long lines\n- Fixed wrapping in Syntax\n\n### Changed\n\n- Added word_wrap option to Syntax, which defaults to False.\n- Added word_wrap option to Traceback.\n\n## [0.8.6] - 2020-03-29\n\n### Added\n\n- Experimental Jupyter notebook support: from rich.jupyter import print\n\n## [0.8.5] - 2020-03-29\n\n### Changed\n\n- Smarter number parsing regex for repr highlighter\n\n### Added\n\n- uuid highlighter for repr\n\n## [0.8.4] - 2020-03-28\n\n### Added\n\n- Added 'test card', run python -m rich\n\n### Changed\n\n- Detected windows terminal, defaulting to colorama support\n\n### Fixed\n\n- Fixed table scaling issue\n\n## [0.8.3] - 2020-03-27\n\n### Fixed\n\n- CJK right align\n\n## [0.8.2] - 2020-03-27\n\n### Changed\n\n- Fixed issue with 0 speed resulting in zero division error\n- Changed signature of Progress.update\n- Made calling start() a second time a no-op\n\n## [0.8.1] - 2020-03-22\n\n### Added\n\n- Added progress.DownloadColumn\n\n## [0.8.0] - 2020-03-17\n\n### Added\n\n- CJK support\n- Console level highlight flag\n- Added encoding argument to Syntax.from_path\n\n### Changed\n\n- Dropped support for Windows command prompt (try https://www.microsoft.com/en-gb/p/windows-terminal-preview/)\n- Added task_id to Progress.track\n\n## [0.7.2] - 2020-03-15\n\n### Fixed\n\n- KeyError for missing pygments style\n\n## [0.7.1] - 2020-03-13\n\n### Fixed\n\n- Issue with control codes being used in length calculation\n\n### Changed\n\n- Remove current_style concept, which wasn't really used and was problematic for concurrency\n\n## [0.7.0] - 2020-03-12\n\n### Changed\n\n- Added width option to Panel\n- Change special method `__render_width__` to `__measure__`\n- Dropped the \"markdown style\" syntax in console markup\n- Optimized style rendering\n\n### Added\n\n- Added Console.show_cursor method\n- Added Progress bars\n\n### Fixed\n\n- Fixed wrapping when a single word was too large to fit in a line\n\n## [0.6.0] - 2020-03-03\n\n### Added\n\n- Added tab_size to Console and Text\n- Added protocol.is_renderable for runtime check\n- Added emoji switch to Console\n- Added inherit boolean to Theme\n- Made Console thread safe, with a thread local buffer\n\n### Changed\n\n- Console.markup attribute now effects Table\n- SeparatedConsoleRenderable and RichCast types\n\n### Fixed\n\n- Fixed tabs breaking rendering by converting to spaces\n\n## [0.5.0] - 2020-02-23\n\n### Changed\n\n- Replaced `__console_str__` with `__rich__`\n\n## [0.4.1] - 2020-02-22\n\n### Fixed\n\n- Readme links in Pypi\n\n## [0.4.0] - 2020-02-22\n\n### Added\n\n- Added Traceback rendering and handler\n- Added rich.constrain\n- Added rich.rule\n\n### Fixed\n\n- Fixed unnecessary padding\n\n## [0.3.3] - 2020-02-04\n\n### Fixed\n\n- Fixed Windows color support\n- Fixed line width on windows issue (https://github.com/willmcgugan/rich/issues/7)\n- Fixed Pretty print on Windows\n\n## [0.3.2] - 2020-01-26\n\n### Added\n\n- Added rich.logging\n\n## [0.3.1] - 2020-01-22\n\n### Added\n\n- Added colorama for Windows support\n\n## [0.3.0] - 2020-01-19\n\n### Added\n\n- First official release, API still to be stabilized\n", "docs/source/console.rst": "Console API\n===========\n\nFor complete control over terminal 
formatting, Rich offers a :class:`~rich.console.Console` class. Most applications will require a single Console instance, so you may want to create one at the module level or as an attribute of your top-level object. For example, you could add a file called \"console.py\" to your project::\n\n from rich.console import Console\n console = Console()\n\nThen you can import the console from anywhere in your project like this::\n\n from my_project.console import console\n\nThe console object handles the mechanics of generating ANSI escape sequences for color and style. It will auto-detect the capabilities of the terminal and convert colors if necessary.\n\n\nAttributes\n----------\n\nThe console will auto-detect a number of properties required when rendering.\n\n* :obj:`~rich.console.Console.size` is the current dimensions of the terminal (which may change if you resize the window).\n* :obj:`~rich.console.Console.encoding` is the default encoding (typically \"utf-8\").\n* :obj:`~rich.console.Console.is_terminal` is a boolean that indicates if the Console instance is writing to a terminal or not.\n* :obj:`~rich.console.Console.color_system` is a string containing the Console color system (see below).\n\n\nColor systems\n-------------\n\nThere are several \"standards\" for writing color to the terminal which are not all universally supported. Rich will auto-detect the appropriate color system, or you can set it manually by supplying a value for ``color_system`` to the :class:`~rich.console.Console` constructor.\n\nYou can set ``color_system`` to one of the following values:\n\n* ``None`` Disables color entirely.\n* ``\"auto\"`` Will auto-detect the color system.\n* ``\"standard\"`` Can display 8 colors, with normal and bright variations, for 16 colors in total.\n* ``\"256\"`` Can display the 16 colors from \"standard\" plus a fixed palette of 240 colors.\n* ``\"truecolor\"`` Can display 16.7 million colors, which is likely all the colors your monitor can display.\n* ``\"windows\"`` Can display 8 colors in legacy Windows terminal. New Windows terminal can display \"truecolor\".\n\n.. warning::\n Be careful when setting a color system, if you set a higher color system than your terminal supports, your text may be unreadable.\n\n\nPrinting\n--------\n\nTo write rich content to the terminal use the :meth:`~rich.console.Console.print` method. Rich will convert any object to a string via its (``__str__``) method and perform some simple syntax highlighting. It will also do pretty printing of any containers, such as dicts and lists. If you print a string it will render :ref:`console_markup`. Here are some examples::\n\n console.print([1, 2, 3])\n console.print(\"[blue underline]Looks like a link\")\n console.print(locals())\n console.print(\"FOO\", style=\"white on blue\")\n\nYou can also use :meth:`~rich.console.Console.print` to render objects that support the :ref:`protocol`, which includes Rich's built in objects such as :class:`~rich.text.Text`, :class:`~rich.table.Table`, and :class:`~rich.syntax.Syntax` -- or other custom objects.\n\n\nLogging\n-------\n\nThe :meth:`~rich.console.Console.log` methods offers the same capabilities as print, but adds some features useful for debugging a running application. Logging writes the current time in a column to the left, and the file and line where the method was called to a column on the right. Here's an example::\n\n >>> console.log(\"Hello, World!\")\n\n.. 
raw:: html\n\n <pre style=\"font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #7fbfbf\">[16:32:08] </span>Hello, World! <span style=\"color: #7f7f7f\"><stdin>:1</span>\n </pre>\n\nTo help with debugging, the log() method has a ``log_locals`` parameter. If you set this to ``True``, Rich will display a table of local variables where the method was called.\n\n\nJustify / Alignment\n-------------------\n\nBoth print and log support a ``justify`` argument which if set must be one of \"default\", \"left\", \"right\", \"center\", or \"full\". If \"left\", any text printed (or logged) will be left aligned, if \"right\" text will be aligned to the right of the terminal, if \"center\" the text will be centered, and if \"full\" the text will be lined up with both the left and right edges of the terminal (like printed text in a book). \n\nThe default for ``justify`` is ``\"default\"`` which will generally look the same as ``\"left\"`` but with a subtle difference. Left justify will pad the right of the text with spaces, while a default justify will not. You will only notice the difference if you set a background color with the ``style`` argument. The following example demonstrates the difference::\n\n from rich.console import Console\n\n console = Console(width=20)\n\n style = \"bold white on blue\"\n console.print(\"Rich\", style=style)\n console.print(\"Rich\", style=style, justify=\"left\")\n console.print(\"Rich\", style=style, justify=\"center\")\n console.print(\"Rich\", style=style, justify=\"right\")\n\n\nThis produces the following output:\n\n.. raw:: html\n\n <pre style=\"font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #c0c0c0; background-color: #000080; font-weight: bold\">Rich\n Rich \n Rich \n Rich\n </span></pre>\n\nOverflow\n--------\n\nOverflow is what happens when text you print is larger than the available space. Overflow may occur if you print long 'words' such as URLs for instance, or if you have text inside a panel or table cell with restricted space.\n\nYou can specify how Rich should handle overflow with the ``overflow`` argument to :meth:`~rich.console.Console.print` which should be one of the following strings: \"fold\", \"crop\", \"ellipsis\", or \"ignore\". The default is \"fold\" which will put any excess characters on the following line, creating as many new lines as required to fit the text.\n\nThe \"crop\" method truncates the text at the end of the line, discarding any characters that would overflow.\n\nThe \"ellipsis\" method is similar to \"crop\", but will insert an ellipsis character (\"…\") at the end of any text that has been truncated.\n\nThe following code demonstrates the basic overflow methods::\n\n from typing import List\n from rich.console import Console, OverflowMethod\n\n console = Console(width=14)\n supercali = \"supercalifragilisticexpialidocious\"\n\n overflow_methods: List[OverflowMethod] = [\"fold\", \"crop\", \"ellipsis\"]\n for overflow in overflow_methods:\n console.rule(overflow)\n console.print(supercali, overflow=overflow, style=\"bold blue\")\n console.print()\n\nThis produces the following output:\n\n.. 
raw:: html\n\n <pre style=\"font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #00ff00\">──── </span>fold<span style=\"color: #00ff00\"> ────</span>\n <span style=\"color: #000080; font-weight: bold\">supercalifragi\n listicexpialid\n ocious\n </span>\n <span style=\"color: #00ff00\">──── </span>crop<span style=\"color: #00ff00\"> ────</span>\n <span style=\"color: #000080; font-weight: bold\">supercalifragi\n </span>\n <span style=\"color: #00ff00\">── </span>ellipsis<span style=\"color: #00ff00\"> ──</span>\n <span style=\"color: #000080; font-weight: bold\">supercalifrag…\n </span>\n </pre>\n\nYou can also set overflow to \"ignore\" which allows text to run on to the next line. In practice this will look the same as \"crop\" unless you also set ``crop=False`` when calling :meth:`~rich.console.Console.print`.\n\n\nSoft Wrapping\n-------------\n\nRich word wraps text you print by inserting line breaks. You can disable this behavior by setting ``soft_wrap=True`` when calling :meth:`~rich.console.Console.print`. With *soft wrapping* enabled any text that doesn't fit will run on to the following line(s), just like the builtin ``print``.\n\n\nCropping\n--------\n\nThe :meth:`~rich.console.Console.print` method has a boolean ``crop`` argument. The default value for crop is True which tells Rich to crop any content that would otherwise run on to the next line. You generally don't need to think about cropping, as Rich will resize content to fit within the available width.\n\n.. note::\n Cropping is automatically disabled if you print with ``soft_wrap==True``.\n\n\nInput\n-----\n\nThe console class has an :meth:`~rich.console.Console.input` which works in the same way as Python's builtin ``input()`` method, but can use anything that Rich can print as a prompt. For example, here's a colorful prompt with an emoji::\n\n from rich.console import Console\n console = Console()\n console.input(\"What is [i]your[/i] [bold red]name[/]? :smiley: \")\n\nExporting\n---------\n\nThe Console class can export anything written to it as either text or html. To enable exporting, first set ``record=True`` on the constructor. This tells Rich to save a copy of any data you ``print()`` or ``log()``. Here's an example::\n\n from rich.console import Console\n console = Console(record=True)\n\nAfter you have written content, you can call :meth:`~rich.console.Console.export_text` or :meth:`~rich.console.Console.export_html` to get the console output as a string. You can also call :meth:`~rich.console.Console.save_text` or :meth:`~rich.console.Console.save_html` to write the contents directly to disk.\n\nFor examples of the html output generated by Rich Console, see :ref:`appendix-colors`.\n\n\nFile output\n-----------\n\nThe Console object will write to standard output (i.e. the terminal). You can also tell the Console object to write to another file by setting the ``file`` argument on the constructor -- which should be a file-like object opened for writing text. One use of this capability is to create a Console for writing to standard error by setting file to ``sys.stderr``. Here's an example::\n\n import sys\n from rich.console import Console\n error_console = Console(file=sys.stderr)\n error_console.print(\"[bold red]This is an error!\")\n\n\nCapturing output\n----------------\n\nThere may be situations where you want to *capture* the output from a Console rather than writing it directly to the terminal. 
You can do this with the :meth:`~rich.console.Console.capture` method which returns a context manager. On exit from this context manager, call :meth:`~rich.console.Capture.get` to return the string that would have been written to the terminal. Here's an example::\n\n from rich.console import Console\n console = Console()\n with console.capture() as capture:\n console.print(\"[bold red]Hello[/] World\")\n str_output = capture.get()\n\nAn alternative way of capturing output is to set the Console file to a :py:class:`io.StringIO`. This is the recommended method if you are testing console output in unit tests. Here's an example::\n\n from io import StringIO\n from rich.console import Console\n console = Console(file=StringIO())\n console.print(\"[bold red]Hello[/] World\")\n str_output = console.file.getvalue()\n\nPaging\n------\n\nIf you have some long output to present to the user you can use a *pager* to display it. A pager is typically an application on by your operating system which will at least support pressing a key to scroll, but will often support scrolling up and down through the text and other features.\n\nYou can page output from a Console by calling :meth:`~rich.console.Console.pager` which returns a context manger. When the pager exits, anything that was printed will be sent to the pager. Here's an example::\n\n from rich.__main__ import make_test_card\n from rich.console import Console\n\n console = Console()\n with console.pager():\n console.print(make_test_card())\n\nSince the default pager on most platforms don't support color, Rich will strip color from the output. If you know that your pager supports color, you can set ``style=True`` when calling the :meth:`~rich.console.Console.pager` method.\n\n.. note::\n Rich will use the ``PAGER`` environment variable to get the pager command. On Linux and macOS you can set this to ``less -r`` to enable paging with ANSI styles.\n\n\nTerminal detection\n------------------\n\nIf Rich detects that it is not writing to a terminal it will strip control codes from the output. If you want to write control codes to a regular file then set ``force_terminal=True`` on the constructor.\n\nLetting Rich auto-detect terminals is useful as it will write plain text when you pipe output to a file or other application.\n\n\nEnvironment variables\n---------------------\n\nRich respects some standard environment variables.\n\nSetting the environment variable ``TERM`` to ``\"dumb\"`` or ``\"unknown\"`` will disable color/style and some features that require moving the cursor, such as progress bars.\n\nIf the environment variable ``NO_COLOR`` is set, Rich will disable all color in the output.\n", "rich/console.py": "import inspect\nimport os\nimport platform\nimport shutil\nimport sys\nimport threading\nfrom abc import ABC, abstractmethod\nfrom collections import abc\nfrom dataclasses import dataclass, field, replace\nfrom functools import wraps\nfrom getpass import getpass\nfrom typing import (\n IO,\n TYPE_CHECKING,\n Any,\n Callable,\n Dict,\n Iterable,\n List,\n NamedTuple,\n Optional,\n TextIO,\n Union,\n cast,\n)\n\nfrom typing_extensions import Literal, Protocol, runtime_checkable\n\nfrom . 
import errors, themes\nfrom ._emoji_replace import _emoji_replace\nfrom ._log_render import LogRender\nfrom .align import Align, AlignValues\nfrom .color import ColorSystem\nfrom .control import Control\nfrom .highlighter import NullHighlighter, ReprHighlighter\nfrom .markup import render as render_markup\nfrom .measure import Measurement, measure_renderables\nfrom .pager import Pager, SystemPager\nfrom .pretty import Pretty\nfrom .scope import render_scope\nfrom .segment import Segment\nfrom .style import Style\nfrom .terminal_theme import DEFAULT_TERMINAL_THEME, TerminalTheme\nfrom .text import Text, TextType\nfrom .theme import Theme, ThemeStack\n\nif TYPE_CHECKING:\n from ._windows import WindowsConsoleFeatures\n\nWINDOWS = platform.system() == \"Windows\"\n\nHighlighterType = Callable[[Union[str, \"Text\"]], \"Text\"]\nJustifyMethod = Literal[\"default\", \"left\", \"center\", \"right\", \"full\"]\nOverflowMethod = Literal[\"fold\", \"crop\", \"ellipsis\", \"ignore\"]\n\n\nCONSOLE_HTML_FORMAT = \"\"\"\\\n<!DOCTYPE html>\n<head>\n<meta charset=\"UTF-8\">\n<style>\n{stylesheet}\nbody {{\n color: {foreground};\n background-color: {background};\n}}\n</style>\n</head>\n<html>\n<body>\n <code>\n <pre style=\"font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">{code}</pre>\n </code>\n</body>\n</html>\n\"\"\"\n\n_TERM_COLORS = {\"256color\": ColorSystem.EIGHT_BIT, \"16color\": ColorSystem.STANDARD}\n\n\n@dataclass\nclass ConsoleOptions:\n \"\"\"Options for __rich_console__ method.\"\"\"\n\n legacy_windows: bool\n \"\"\"legacy_windows: flag for legacy windows.\"\"\"\n min_width: int\n \"\"\"Minimum width of renderable.\"\"\"\n max_width: int\n \"\"\"Maximum width of renderable.\"\"\"\n is_terminal: bool\n \"\"\"True if the target is a terminal, otherwise False.\"\"\"\n encoding: str\n \"\"\"Encoding of terminal.\"\"\"\n justify: Optional[JustifyMethod] = None\n \"\"\"Justify value override for renderable.\"\"\"\n overflow: Optional[OverflowMethod] = None\n \"\"\"Overflow value override for renderable.\"\"\"\n no_wrap: Optional[bool] = False\n \"\"\"\"Disable wrapping for text.\"\"\"\n\n @property\n def ascii_only(self) -> bool:\n \"\"\"Check if renderables should use ascii only.\"\"\"\n return not self.encoding.startswith(\"utf\")\n\n def update(\n self,\n width: int = None,\n min_width: int = None,\n max_width: int = None,\n justify: JustifyMethod = None,\n overflow: OverflowMethod = None,\n no_wrap: bool = None,\n ) -> \"ConsoleOptions\":\n \"\"\"Update values, return a copy.\"\"\"\n options = replace(self)\n if width is not None:\n options.min_width = options.max_width = width\n if min_width is not None:\n options.min_width = min_width\n if max_width is not None:\n options.max_width = max_width\n if justify is not None:\n options.justify = justify\n if overflow is not None:\n options.overflow = overflow\n if no_wrap is not None:\n options.no_wrap = no_wrap\n return options\n\n\n@runtime_checkable\nclass RichCast(Protocol):\n \"\"\"An object that may be 'cast' to a console renderable.\"\"\"\n\n def __rich__(self) -> Union[\"ConsoleRenderable\", str]: # pragma: no cover\n ...\n\n\n@runtime_checkable\nclass ConsoleRenderable(Protocol):\n \"\"\"An object that supports the console protocol.\"\"\"\n\n def __rich_console__(\n self, console: \"Console\", options: \"ConsoleOptions\"\n ) -> \"RenderResult\": # pragma: no cover\n ...\n\n\nRenderableType = Union[ConsoleRenderable, RichCast, str]\n\"\"\"A type that may be rendered by Console.\"\"\"\n\nRenderResult = 
Iterable[Union[RenderableType, Segment]]\n\"\"\"The result of calling a __rich_console__ method.\"\"\"\n\n\n_null_highlighter = NullHighlighter()\n\n\nclass CaptureError(Exception):\n \"\"\"An error in the Capture context manager.\"\"\"\n\n\nclass Capture:\n \"\"\"Context manager to capture the result of printing to the console.\n See :meth:`~rich.console.Console.capture` for how to use.\n\n Args:\n console (Console): A console instance to capture output.\n \"\"\"\n\n def __init__(self, console: \"Console\") -> None:\n self._console = console\n self._result: Optional[str] = None\n\n def __enter__(self) -> \"Capture\":\n self._console.begin_capture()\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb) -> None:\n self._result = self._console.end_capture()\n\n def get(self) -> str:\n \"\"\"Get the result of the capture.\"\"\"\n if self._result is None:\n raise CaptureError(\n \"Capture result is not available until context manager exits.\"\n )\n return self._result\n\n\nclass ThemeContext:\n \"\"\"A context manager to use a temporary theme. See :meth:`~rich.console.Console.theme` for usage.\"\"\"\n\n def __init__(self, console: \"Console\", theme: Theme, inherit: bool = True) -> None:\n self.console = console\n self.theme = theme\n self.inherit = inherit\n\n def __enter__(self) -> \"ThemeContext\":\n self.console.push_theme(self.theme)\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb) -> None:\n self.console.pop_theme()\n\n\nclass PagerContext:\n \"\"\"A context manager that 'pages' content. See :meth:`~rich.console.Console.pager` for usage.\"\"\"\n\n def __init__(\n self,\n console: \"Console\",\n pager: Pager = None,\n styles: bool = False,\n links: bool = False,\n ) -> None:\n self._console = console\n self.pager = SystemPager() if pager is None else pager\n self.styles = styles\n self.links = links\n\n def __enter__(self) -> \"PagerContext\":\n self._console._enter_buffer()\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb) -> None:\n if exc_type is None:\n with self._console._lock:\n buffer: List[Segment] = self._console._buffer[:]\n del self._console._buffer[:]\n segments: Iterable[Segment] = buffer\n if not self.styles:\n segments = Segment.strip_styles(segments)\n elif not self.links:\n segments = Segment.strip_links(segments)\n content = self._console._render_buffer(segments)\n self.pager.show(content)\n self._console._exit_buffer()\n\n\nclass RenderGroup:\n \"\"\"Takes a group of renderables and returns a renderable object that renders the group.\n\n Args:\n renderables (Iterable[RenderableType]): An iterable of renderable objects.\n\n \"\"\"\n\n def __init__(self, *renderables: \"RenderableType\", fit: bool = True) -> None:\n self._renderables = renderables\n self.fit = fit\n self._render: Optional[List[RenderableType]] = None\n\n @property\n def renderables(self) -> List[\"RenderableType\"]:\n if self._render is None:\n self._render = list(self._renderables)\n return self._render\n\n def __rich_measure__(self, console: \"Console\", max_width: int) -> \"Measurement\":\n if self.fit:\n return measure_renderables(console, self.renderables, max_width)\n else:\n return Measurement(max_width, max_width)\n\n def __rich_console__(\n self, console: \"Console\", options: \"ConsoleOptions\"\n ) -> RenderResult:\n yield from self.renderables\n\n\ndef render_group(fit: bool = False) -> Callable:\n \"\"\"A decorator that turns an iterable of renderables in to a group.\"\"\"\n\n def decorator(method):\n \"\"\"Convert a method that returns an iterable of 
renderables in to a RenderGroup.\"\"\"\n\n @wraps(method)\n def _replace(*args, **kwargs):\n renderables = method(*args, **kwargs)\n return RenderGroup(*renderables, fit=fit)\n\n return _replace\n\n return decorator\n\n\nclass ConsoleDimensions(NamedTuple):\n \"\"\"Size of the terminal.\"\"\"\n\n width: int\n \"\"\"The width of the console in 'cells'.\"\"\"\n height: int\n \"\"\"The height of the console in lines.\"\"\"\n\n\ndef _is_jupyter() -> bool: # pragma: no cover\n \"\"\"Check if we're running in a Jupyter notebook.\"\"\"\n try:\n get_ipython # type: ignore\n except NameError:\n return False\n shell = get_ipython().__class__.__name__ # type: ignore\n if shell == \"ZMQInteractiveShell\":\n return True # Jupyter notebook or qtconsole\n elif shell == \"TerminalInteractiveShell\":\n return False # Terminal running IPython\n else:\n return False # Other type (?)\n\n\nCOLOR_SYSTEMS = {\n \"standard\": ColorSystem.STANDARD,\n \"256\": ColorSystem.EIGHT_BIT,\n \"truecolor\": ColorSystem.TRUECOLOR,\n \"windows\": ColorSystem.WINDOWS,\n}\n\n\n_COLOR_SYSTEMS_NAMES = {system: name for name, system in COLOR_SYSTEMS.items()}\n\n\n@dataclass\nclass ConsoleThreadLocals(threading.local):\n \"\"\"Thread local values for Console context.\"\"\"\n\n theme_stack: ThemeStack\n buffer: List[Segment] = field(default_factory=list)\n buffer_index: int = 0\n\n\nclass RenderHook(ABC):\n \"\"\"Provides hooks in to the render process.\"\"\"\n\n @abstractmethod\n def process_renderables(\n self, renderables: List[ConsoleRenderable]\n ) -> List[ConsoleRenderable]:\n \"\"\"Called with a list of objects to render.\n\n This method can return a new list of renderables, or modify and return the same list.\n\n Args:\n renderables (List[ConsoleRenderable]): A number of renderable objects.\n\n Returns:\n List[ConsoleRenderable]: A replacement list of renderables.\n \"\"\"\n\n\n_windows_console_features: Optional[\"WindowsConsoleFeatures\"] = None\n\n\ndef get_windows_console_features() -> \"WindowsConsoleFeatures\": # pragma: no cover\n global _windows_console_features\n if _windows_console_features is not None:\n return _windows_console_features\n from ._windows import get_windows_console_features\n\n _windows_console_features = get_windows_console_features()\n return _windows_console_features\n\n\ndef detect_legacy_windows() -> bool:\n \"\"\"Detect legacy Windows.\"\"\"\n return WINDOWS and not get_windows_console_features().vt\n\n\nif detect_legacy_windows(): # pragma: no cover\n from colorama import init\n\n init()\n\n\nclass Console:\n \"\"\"A high level console interface.\n\n Args:\n color_system (str, optional): The color system supported by your terminal,\n either ``\"standard\"``, ``\"256\"`` or ``\"truecolor\"``. Leave as ``\"auto\"`` to autodetect.\n force_terminal (Optional[bool], optional): Enable/disable terminal control codes, or None to auto-detect terminal. Defaults to None.\n force_jupyter (Optional[bool], optional): Enable/disable Jupyter rendering, or None to auto-detect Jupyter. Defaults to None.\n theme (Theme, optional): An optional style theme object, or ``None`` for default theme.\n file (IO, optional): A file object where the console should write to. Defaults to stdout.\n width (int, optional): The width of the terminal. Leave as default to auto-detect width.\n height (int, optional): The height of the terminal. Leave as default to auto-detect height.\n record (bool, optional): Boolean to enable recording of terminal output,\n required to call :meth:`export_html` and :meth:`export_text`. 
Defaults to False.\n markup (bool, optional): Boolean to enable :ref:`console_markup`. Defaults to True.\n emoji (bool, optional): Enable emoji code. Defaults to True.\n highlight (bool, optional): Enable automatic highlighting. Defaults to True.\n log_time (bool, optional): Boolean to enable logging of time by :meth:`log` methods. Defaults to True.\n log_path (bool, optional): Boolean to enable the logging of the caller by :meth:`log`. Defaults to True.\n log_time_format (str, optional): Log time format if ``log_time`` is enabled. Defaults to \"[%X] \".\n highlighter (HighlighterType, optional): Default highlighter.\n legacy_windows (bool, optional): Enable legacy Windows mode, or ``None`` to auto detect. Defaults to ``None``.\n safe_box (bool, optional): Restrict box options that don't render on legacy Windows.\n \"\"\"\n\n def __init__(\n self,\n *,\n color_system: Optional[\n Literal[\"auto\", \"standard\", \"256\", \"truecolor\", \"windows\"]\n ] = \"auto\",\n force_terminal: bool = None,\n force_jupyter: bool = None,\n theme: Theme = None,\n file: IO[str] = None,\n width: int = None,\n height: int = None,\n tab_size: int = 8,\n record: bool = False,\n markup: bool = True,\n emoji: bool = True,\n highlight: bool = True,\n log_time: bool = True,\n log_path: bool = True,\n log_time_format: str = \"[%X]\",\n highlighter: Optional[\"HighlighterType\"] = ReprHighlighter(),\n legacy_windows: bool = None,\n safe_box: bool = True,\n _environ: Dict[str, str] = None,\n ):\n # Copy of os.environ allows us to replace it for testing\n self._environ = os.environ if _environ is None else _environ\n\n self.is_jupyter = _is_jupyter() if force_jupyter is None else force_jupyter\n if self.is_jupyter:\n width = width or 93\n height = height or 100\n self._width = width\n self._height = height\n self.tab_size = tab_size\n self.record = record\n self._markup = markup\n self._emoji = emoji\n self._highlight = highlight\n self.legacy_windows: bool = (\n (detect_legacy_windows() and not self.is_jupyter)\n if legacy_windows is None\n else legacy_windows\n )\n\n self._color_system: Optional[ColorSystem]\n self._force_terminal = force_terminal\n self.file = file or sys.stdout\n\n if color_system is None:\n self._color_system = None\n elif color_system == \"auto\":\n self._color_system = self._detect_color_system()\n else:\n self._color_system = COLOR_SYSTEMS[color_system]\n\n self._lock = threading.RLock()\n self._log_render = LogRender(\n show_time=log_time,\n show_path=log_path,\n time_format=log_time_format,\n )\n self.highlighter: HighlighterType = highlighter or _null_highlighter\n self.safe_box = safe_box\n\n self._record_buffer_lock = threading.RLock()\n self._thread_locals = ConsoleThreadLocals(\n theme_stack=ThemeStack(themes.DEFAULT if theme is None else theme)\n )\n self._record_buffer: List[Segment] = []\n self._render_hooks: List[RenderHook] = []\n\n def __repr__(self) -> str:\n return f\"<console width={self.width} {str(self._color_system)}>\"\n\n @property\n def _buffer(self) -> List[Segment]:\n \"\"\"Get a thread local buffer.\"\"\"\n return self._thread_locals.buffer\n\n @property\n def _buffer_index(self) -> int:\n \"\"\"Get a thread local buffer.\"\"\"\n return self._thread_locals.buffer_index\n\n @_buffer_index.setter\n def _buffer_index(self, value: int) -> None:\n self._thread_locals.buffer_index = value\n\n @property\n def _theme_stack(self) -> ThemeStack:\n \"\"\"Get the thread local theme stack.\"\"\"\n return self._thread_locals.theme_stack\n\n def _detect_color_system(self) -> 
Optional[ColorSystem]:\n \"\"\"Detect color system from env vars.\"\"\"\n if self.is_jupyter:\n return ColorSystem.TRUECOLOR\n if not self.is_terminal or \"NO_COLOR\" in self._environ or self.is_dumb_terminal:\n return None\n if WINDOWS: # pragma: no cover\n if self.legacy_windows: # pragma: no cover\n return ColorSystem.WINDOWS\n windows_console_features = get_windows_console_features()\n return (\n ColorSystem.TRUECOLOR\n if windows_console_features.truecolor\n else ColorSystem.EIGHT_BIT\n )\n else:\n color_term = self._environ.get(\"COLORTERM\", \"\").strip().lower()\n if color_term in (\"truecolor\", \"24bit\"):\n return ColorSystem.TRUECOLOR\n term = self._environ.get(\"TERM\", \"\").strip().lower()\n _term_name, _hyphen, colors = term.partition(\"-\")\n color_system = _TERM_COLORS.get(colors, ColorSystem.STANDARD)\n return color_system\n\n def _enter_buffer(self) -> None:\n \"\"\"Enter in to a buffer context, and buffer all output.\"\"\"\n self._buffer_index += 1\n\n def _exit_buffer(self) -> None:\n \"\"\"Leave buffer context, and render content if required.\"\"\"\n self._buffer_index -= 1\n self._check_buffer()\n\n def push_render_hook(self, hook: RenderHook) -> None:\n \"\"\"Add a new render hook to the stack.\n\n Args:\n hook (RenderHook): Render hook instance.\n \"\"\"\n\n self._render_hooks.append(hook)\n\n def pop_render_hook(self) -> None:\n \"\"\"Pop the last renderhook from the stack.\"\"\"\n self._render_hooks.pop()\n\n def __enter__(self) -> \"Console\":\n \"\"\"Own context manager to enter buffer context.\"\"\"\n self._enter_buffer()\n return self\n\n def __exit__(self, exc_type, exc_value, traceback) -> None:\n \"\"\"Exit buffer context.\"\"\"\n self._exit_buffer()\n\n def begin_capture(self) -> None:\n \"\"\"Begin capturing console output. Call :meth:`end_capture` to exit capture mode and return output.\"\"\"\n self._enter_buffer()\n\n def end_capture(self) -> str:\n \"\"\"End capture mode and return captured string.\n\n Returns:\n str: Console output.\n \"\"\"\n render_result = self._render_buffer(self._buffer)\n del self._buffer[:]\n self._exit_buffer()\n return render_result\n\n def push_theme(self, theme: Theme, *, inherit: bool = True) -> None:\n \"\"\"Push a new theme on to the top of the stack, replacing the styles from the previous theme.\n Generally speaking, you should call :meth:`~rich.console.Console.use_theme` to get a context manager, rather\n than calling this method directly.\n\n Args:\n theme (Theme): A theme instance.\n inherit (bool, optional): Inherit existing styles. Defaults to True.\n \"\"\"\n self._theme_stack.push_theme(theme, inherit=inherit)\n\n def pop_theme(self) -> None:\n \"\"\"Remove theme from top of stack, restoring previous theme.\"\"\"\n self._theme_stack.pop_theme()\n\n def use_theme(self, theme: Theme, *, inherit: bool = True) -> ThemeContext:\n \"\"\"Use a different theme for the duration of the context manager.\n\n Args:\n theme (Theme): Theme instance to user.\n inherit (bool, optional): Inherit existing console styles. Defaults to True.\n\n Returns:\n ThemeContext: [description]\n \"\"\"\n return ThemeContext(self, theme, inherit)\n\n @property\n def color_system(self) -> Optional[str]:\n \"\"\"Get color system string.\n\n Returns:\n Optional[str]: \"standard\", \"256\" or \"truecolor\".\n \"\"\"\n\n if self._color_system is not None:\n return _COLOR_SYSTEMS_NAMES[self._color_system]\n else:\n return None\n\n @property\n def encoding(self) -> str:\n \"\"\"Get the encoding of the console file, e.g. 
``\"utf-8\"``.\n\n Returns:\n str: A standard encoding string.\n \"\"\"\n return (getattr(self.file, \"encoding\", \"utf-8\") or \"utf-8\").lower()\n\n @property\n def is_terminal(self) -> bool:\n \"\"\"Check if the console is writing to a terminal.\n\n Returns:\n bool: True if the console writing to a device capable of\n understanding terminal codes, otherwise False.\n \"\"\"\n if self._force_terminal is not None:\n return self._force_terminal\n isatty = getattr(self.file, \"isatty\", None)\n return False if isatty is None else isatty()\n\n @property\n def is_dumb_terminal(self) -> bool:\n \"\"\"Detect dumb terminal.\n\n Returns:\n bool: True if writing to a dumb terminal, otherwise False.\n\n \"\"\"\n _term = self._environ.get(\"TERM\", \"\")\n is_dumb = _term.lower() in (\"dumb\", \"unknown\")\n return self.is_terminal and is_dumb\n\n @property\n def options(self) -> ConsoleOptions:\n \"\"\"Get default console options.\"\"\"\n return ConsoleOptions(\n legacy_windows=self.legacy_windows,\n min_width=1,\n max_width=self.width,\n encoding=self.encoding,\n is_terminal=self.is_terminal,\n )\n\n @property\n def size(self) -> ConsoleDimensions:\n \"\"\"Get the size of the console.\n\n Returns:\n ConsoleDimensions: A named tuple containing the dimensions.\n \"\"\"\n\n if self._width is not None and self._height is not None:\n return ConsoleDimensions(self._width, self._height)\n\n if self.is_dumb_terminal:\n return ConsoleDimensions(80, 25)\n\n width, height = shutil.get_terminal_size()\n # get_terminal_size can report 0, 0 if run from pseudo-terminal\n width = width or 80\n height = height or 25\n return ConsoleDimensions(\n (width - self.legacy_windows) if self._width is None else self._width,\n height if self._height is None else self._height,\n )\n\n @property\n def width(self) -> int:\n \"\"\"Get the width of the console.\n\n Returns:\n int: The width (in characters) of the console.\n \"\"\"\n width, _ = self.size\n return width\n\n def bell(self) -> None:\n \"\"\"Play a 'bell' sound (if supported by the terminal).\"\"\"\n self.control(\"\\x07\")\n\n def capture(self) -> Capture:\n \"\"\"A context manager to *capture* the result of print() or log() in a string,\n rather than writing it to the console.\n\n Example:\n >>> from rich.console import Console\n >>> console = Console()\n >>> with console.capture() as capture:\n ... console.print(\"[bold magenta]Hello World[/]\")\n >>> print(capture.get())\n\n Returns:\n Capture: Context manager with disables writing to the terminal.\n \"\"\"\n capture = Capture(self)\n return capture\n\n def pager(\n self, pager: Pager = None, styles: bool = False, links: bool = False\n ) -> PagerContext:\n \"\"\"A context manager to display anything printed within a \"pager\". The pager used\n is defined by the system and will typically support at less pressing a key to scroll.\n\n Args:\n pager (Pager, optional): A pager object, or None to use :class:~rich.pager.SystemPager`. Defaults to None.\n styles (bool, optional): Show styles in pager. Defaults to False.\n links (bool, optional): Show links in pager. Defaults to False.\n\n Example:\n >>> from rich.console import Console\n >>> from rich.__main__ import make_test_card\n >>> console = Console()\n >>> with console.pager():\n console.print(make_test_card())\n\n Returns:\n PagerContext: A context manager.\n \"\"\"\n return PagerContext(self, pager=pager, styles=styles, links=links)\n\n def line(self, count: int = 1) -> None:\n \"\"\"Write new line(s).\n\n Args:\n count (int, optional): Number of new lines. 
Defaults to 1.\n \"\"\"\n\n assert count >= 0, \"count must be >= 0\"\n if count:\n self._buffer.append(Segment(\"\\n\" * count))\n self._check_buffer()\n\n def clear(self, home: bool = True) -> None:\n \"\"\"Clear the screen.\n\n Args:\n home (bool, optional): Also move the cursor to 'home' position. Defaults to True.\n \"\"\"\n self.control(\"\\033[2J\\033[H\" if home else \"\\033[2J\")\n\n def show_cursor(self, show: bool = True) -> None:\n \"\"\"Show or hide the cursor.\n\n Args:\n show (bool, optional): Set visibility of the cursor.\n \"\"\"\n if self.is_terminal and not self.legacy_windows:\n self.control(\"\\033[?25h\" if show else \"\\033[?25l\")\n\n def render(\n self, renderable: RenderableType, options: ConsoleOptions = None\n ) -> Iterable[Segment]:\n \"\"\"Render an object in to an iterable of `Segment` instances.\n\n This method contains the logic for rendering objects with the console protocol.\n You are unlikely to need to use it directly, unless you are extending the library.\n\n Args:\n renderable (RenderableType): An object supporting the console protocol, or\n an object that may be converted to a string.\n options (ConsoleOptions, optional): An options object, or None to use self.options. Defaults to None.\n\n Returns:\n Iterable[Segment]: An iterable of segments that may be rendered.\n \"\"\"\n\n _options = options or self.options\n if _options.max_width < 1:\n # No space to render anything. This prevents potential recursion errors.\n return\n render_iterable: RenderResult\n if isinstance(renderable, RichCast):\n renderable = renderable.__rich__()\n if isinstance(renderable, ConsoleRenderable):\n render_iterable = renderable.__rich_console__(self, _options)\n elif isinstance(renderable, str):\n yield from self.render(self.render_str(renderable), _options)\n return\n else:\n raise errors.NotRenderableError(\n f\"Unable to render {renderable!r}; \"\n \"A str, Segment or object with __rich_console__ method is required\"\n )\n\n try:\n iter_render = iter(render_iterable)\n except TypeError:\n raise errors.NotRenderableError(\n f\"object {render_iterable!r} is not renderable\"\n )\n for render_output in iter_render:\n if isinstance(render_output, Segment):\n yield render_output\n else:\n yield from self.render(render_output, _options)\n\n def render_lines(\n self,\n renderable: RenderableType,\n options: Optional[ConsoleOptions] = None,\n *,\n style: Optional[Style] = None,\n pad: bool = True,\n ) -> List[List[Segment]]:\n \"\"\"Render objects in to a list of lines.\n\n The output of render_lines is useful when further formatting of rendered console text\n is required, such as the Panel class which draws a border around any renderable object.\n\n Args:\n renderable (RenderableType): Any object renderable in the console.\n options (Optional[ConsoleOptions], optional): Console options, or None to use self.options. Default to ``None``.\n style (Style, optional): Optional style to apply to renderables. Defaults to ``None``.\n pad (bool, optional): Pad lines shorter than render width. 
Defaults to ``True``.\n\n Returns:\n List[List[Segment]]: A list of lines, where a line is a list of Segment objects.\n \"\"\"\n render_options = options or self.options\n _rendered = self.render(renderable, render_options)\n if style is not None:\n _rendered = Segment.apply_style(_rendered, style)\n lines = list(\n Segment.split_and_crop_lines(\n _rendered, render_options.max_width, include_new_lines=False, pad=pad\n )\n )\n return lines\n\n def render_str(\n self,\n text: str,\n *,\n style: Union[str, Style] = \"\",\n justify: JustifyMethod = None,\n overflow: OverflowMethod = None,\n emoji: bool = None,\n markup: bool = None,\n highlighter: HighlighterType = None,\n ) -> \"Text\":\n \"\"\"Convert a string to a Text instance. This is is called automatically if\n you print or log a string.\n\n Args:\n text (str): Text to render.\n style (Union[str, Style], optional): Style to apply to rendered text.\n justify (str, optional): Justify method: \"default\", \"left\", \"center\", \"full\", or \"right\". Defaults to ``None``.\n overflow (str, optional): Overflow method: \"crop\", \"fold\", or \"ellipsis\". Defaults to ``None``.\n emoji (Optional[bool], optional): Enable emoji, or ``None`` to use Console default.\n markup (Optional[bool], optional): Enable markup, or ``None`` to use Console default.\n highlighter (HighlighterType, optional): Optional highlighter to apply.\n Returns:\n ConsoleRenderable: Renderable object.\n\n \"\"\"\n emoji_enabled = emoji or (emoji is None and self._emoji)\n markup_enabled = markup or (markup is None and self._markup)\n\n if markup_enabled:\n rich_text = render_markup(text, style=style, emoji=emoji_enabled)\n rich_text.justify = justify\n rich_text.overflow = overflow\n else:\n rich_text = Text(\n _emoji_replace(text) if emoji_enabled else text,\n justify=justify,\n overflow=overflow,\n style=style,\n )\n\n if highlighter is not None:\n highlight_text = highlighter(str(rich_text))\n highlight_text.copy_styles(rich_text)\n return highlight_text\n\n return rich_text\n\n def get_style(\n self, name: Union[str, Style], *, default: Union[Style, str] = None\n ) -> Style:\n \"\"\"Get a Style instance by it's theme name or parse a definition.\n\n Args:\n name (str): The name of a style or a style definition.\n\n Returns:\n Style: A Style object.\n\n Raises:\n MissingStyle: If no style could be parsed from name.\n\n \"\"\"\n if isinstance(name, Style):\n return name\n\n try:\n style = self._theme_stack.get(name)\n if style is None:\n style = Style.parse(name)\n return style.copy() if style.link else style\n except errors.StyleSyntaxError as error:\n if default is not None:\n return self.get_style(default)\n raise errors.MissingStyle(f\"Failed to get style {name!r}; {error}\")\n\n def _collect_renderables(\n self,\n objects: Iterable[Any],\n sep: str,\n end: str,\n *,\n justify: JustifyMethod = None,\n emoji: bool = None,\n markup: bool = None,\n highlight: bool = None,\n ) -> List[ConsoleRenderable]:\n \"\"\"Combined a number of renderables and text in to one renderable.\n\n Args:\n objects (Iterable[Any]): Anything that Rich can render.\n sep (str, optional): String to write between print data. Defaults to \" \".\n end (str, optional): String to write at end of print data. Defaults to \"\\\\n\".\n justify (str, optional): One of \"left\", \"right\", \"center\", or \"full\". 
Defaults to ``None``.\n emoji (Optional[bool], optional): Enable emoji code, or ``None`` to use console default.\n markup (Optional[bool], optional): Enable markup, or ``None`` to use console default.\n highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use console default.\n\n Returns:\n List[ConsoleRenderable]: A list of things to render.\n \"\"\"\n renderables: List[ConsoleRenderable] = []\n _append = renderables.append\n text: List[Text] = []\n append_text = text.append\n\n append = _append\n if justify in (\"left\", \"center\", \"right\"):\n\n def align_append(renderable: RenderableType) -> None:\n _append(Align(renderable, cast(AlignValues, justify)))\n\n append = align_append\n\n _highlighter: HighlighterType = _null_highlighter\n if highlight or (highlight is None and self._highlight):\n _highlighter = self.highlighter\n\n def check_text() -> None:\n if text:\n sep_text = Text(sep, end=end)\n append(sep_text.join(text))\n del text[:]\n\n for renderable in objects:\n rich_cast = getattr(renderable, \"__rich__\", None)\n if rich_cast:\n renderable = rich_cast()\n if isinstance(renderable, str):\n append_text(\n self.render_str(\n renderable,\n emoji=emoji,\n markup=markup,\n highlighter=_highlighter,\n )\n )\n elif isinstance(renderable, ConsoleRenderable):\n check_text()\n append(renderable)\n elif isinstance(renderable, (abc.Mapping, abc.Sequence, abc.Set)):\n check_text()\n append(Pretty(renderable, highlighter=_highlighter))\n else:\n append_text(_highlighter(str(renderable)))\n\n check_text()\n\n return renderables\n\n def rule(\n self,\n title: str = \"\",\n *,\n characters: str = \"─\",\n style: Union[str, Style] = \"rule.line\",\n ) -> None:\n \"\"\"Draw a line with optional centered title.\n\n Args:\n title (str, optional): Text to render over the rule. Defaults to \"\".\n characters (str, optional): Character(s) to form the line. Defaults to \"─\".\n \"\"\"\n from .rule import Rule\n\n rule = Rule(title=title, characters=characters, style=style)\n self.print(rule)\n\n def control(self, control_codes: Union[\"Control\", str]) -> None:\n \"\"\"Insert non-printing control codes.\n\n Args:\n control_codes (str): Control codes, such as those that may move the cursor.\n \"\"\"\n if not self.is_dumb_terminal:\n self._buffer.append(Segment.control(str(control_codes)))\n self._check_buffer()\n\n def print(\n self,\n *objects: Any,\n sep=\" \",\n end=\"\\n\",\n style: Union[str, Style] = None,\n justify: JustifyMethod = None,\n overflow: OverflowMethod = None,\n no_wrap: bool = None,\n emoji: bool = None,\n markup: bool = None,\n highlight: bool = None,\n width: int = None,\n crop: bool = True,\n soft_wrap: bool = False,\n ) -> None:\n \"\"\"Print to the console.\n\n Args:\n objects (positional args): Objects to log to the terminal.\n sep (str, optional): String to write between print data. Defaults to \" \".\n end (str, optional): String to write at end of print data. Defaults to \"\\\\n\".\n style (Union[str, Style], optional): A style to apply to output. Defaults to None.\n justify (str, optional): Justify method: \"default\", \"left\", \"right\", \"center\", or \"full\". Defaults to ``None``.\n overflow (str, optional): Overflow method: \"ignore\", \"crop\", \"fold\", or \"ellipsis\". Defaults to None.\n no_wrap (Optional[bool], optional): Disable word wrapping. Defaults to None.\n emoji (Optional[bool], optional): Enable emoji code, or ``None`` to use console default. 
Defaults to ``None``.\n markup (Optional[bool], optional): Enable markup, or ``None`` to use console default. Defaults to ``None``.\n highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use console default. Defaults to ``None``.\n width (Optional[int], optional): Width of output, or ``None`` to auto-detect. Defaults to ``None``.\n crop (Optional[bool], optional): Crop output to width of terminal. Defaults to True.\n soft_wrap (bool, optional): Enable soft wrap mode which disables word wrapping and cropping. Defaults to False.\n \"\"\"\n if not objects:\n self.line()\n return\n\n if soft_wrap:\n if no_wrap is None:\n no_wrap = True\n if overflow is None:\n overflow = \"ignore\"\n crop = False\n\n with self:\n renderables = self._collect_renderables(\n objects,\n sep,\n end,\n justify=justify,\n emoji=emoji,\n markup=markup,\n highlight=highlight,\n )\n for hook in self._render_hooks:\n renderables = hook.process_renderables(renderables)\n render_options = self.options.update(\n justify=justify, overflow=overflow, width=width, no_wrap=no_wrap\n )\n new_segments: List[Segment] = []\n extend = new_segments.extend\n render = self.render\n if style is None:\n for renderable in renderables:\n extend(render(renderable, render_options))\n else:\n for renderable in renderables:\n extend(\n Segment.apply_style(\n render(renderable, render_options), self.get_style(style)\n )\n )\n if crop:\n buffer_extend = self._buffer.extend\n for line in Segment.split_and_crop_lines(\n new_segments, self.width, pad=False\n ):\n buffer_extend(line)\n else:\n self._buffer.extend(new_segments)\n\n def print_exception(\n self,\n *,\n width: Optional[int] = 100,\n extra_lines: int = 3,\n theme: Optional[str] = None,\n word_wrap: bool = False,\n show_locals: bool = False,\n ) -> None:\n \"\"\"Prints a rich render of the last exception and traceback.\n\n Args:\n width (Optional[int], optional): Number of characters used to render code. Defaults to 88.\n extra_lines (int, optional): Additional lines of code to render. Defaults to 3.\n theme (str, optional): Override pygments theme used in traceback\n word_wrap (bool, optional): Enable word wrapping of long lines. Defaults to False.\n show_locals (bool, optional): Enable display of local variables. Defaults to False.\n \"\"\"\n from .traceback import Traceback\n\n traceback = Traceback(\n width=width,\n extra_lines=extra_lines,\n theme=theme,\n word_wrap=word_wrap,\n show_locals=show_locals,\n )\n self.print(traceback)\n\n def log(\n self,\n *objects: Any,\n sep=\" \",\n end=\"\\n\",\n justify: JustifyMethod = None,\n emoji: bool = None,\n markup: bool = None,\n highlight: bool = None,\n log_locals: bool = False,\n _stack_offset=1,\n ) -> None:\n \"\"\"Log rich content to the terminal.\n\n Args:\n objects (positional args): Objects to log to the terminal.\n sep (str, optional): String to write between print data. Defaults to \" \".\n end (str, optional): String to write at end of print data. Defaults to \"\\\\n\".\n justify (str, optional): One of \"left\", \"right\", \"center\", or \"full\". Defaults to ``None``.\n overflow (str, optional): Overflow method: \"crop\", \"fold\", or \"ellipsis\". Defaults to None.\n emoji (Optional[bool], optional): Enable emoji code, or ``None`` to use console default. Defaults to None.\n markup (Optional[bool], optional): Enable markup, or ``None`` to use console default. Defaults to None.\n highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use console default. 
Defaults to None.\n log_locals (bool, optional): Boolean to enable logging of locals where ``log()``\n was called. Defaults to False.\n _stack_offset (int, optional): Offset of caller from end of call stack. Defaults to 1.\n \"\"\"\n if not objects:\n self.line()\n return\n with self:\n renderables = self._collect_renderables(\n objects,\n sep,\n end,\n justify=justify,\n emoji=emoji,\n markup=markup,\n highlight=highlight,\n )\n\n caller = inspect.stack()[_stack_offset]\n link_path = (\n None\n if caller.filename.startswith(\"<\")\n else os.path.abspath(caller.filename)\n )\n path = caller.filename.rpartition(os.sep)[-1]\n line_no = caller.lineno\n if log_locals:\n locals_map = {\n key: value\n for key, value in caller.frame.f_locals.items()\n if not key.startswith(\"__\")\n }\n renderables.append(render_scope(locals_map, title=\"[i]locals\"))\n\n renderables = [\n self._log_render(\n self,\n renderables,\n path=path,\n line_no=line_no,\n link_path=link_path,\n )\n ]\n for hook in self._render_hooks:\n renderables = hook.process_renderables(renderables)\n new_segments: List[Segment] = []\n extend = new_segments.extend\n render = self.render\n render_options = self.options\n for renderable in renderables:\n extend(render(renderable, render_options))\n buffer_extend = self._buffer.extend\n for line in Segment.split_and_crop_lines(\n new_segments, self.width, pad=False\n ):\n buffer_extend(line)\n\n def _check_buffer(self) -> None:\n \"\"\"Check if the buffer may be rendered.\"\"\"\n with self._lock:\n if self._buffer_index == 0:\n if self.is_jupyter: # pragma: no cover\n from .jupyter import display\n\n display(self._buffer)\n del self._buffer[:]\n else:\n text = self._render_buffer(self._buffer[:])\n del self._buffer[:]\n if text:\n try:\n if WINDOWS: # pragma: no cover\n # https://bugs.python.org/issue37871\n write = self.file.write\n for line in text.splitlines(True):\n write(line)\n else:\n self.file.write(text)\n self.file.flush()\n except UnicodeEncodeError as error:\n error.reason = f\"{error.reason}\\n*** You may need to add PYTHONIOENCODING=utf-8 to your environment ***\"\n raise\n\n def _render_buffer(self, buffer: Iterable[Segment]) -> str:\n \"\"\"Render buffered output, and clear buffer.\"\"\"\n output: List[str] = []\n append = output.append\n color_system = self._color_system\n legacy_windows = self.legacy_windows\n if self.record:\n with self._record_buffer_lock:\n self._record_buffer.extend(buffer)\n not_terminal = not self.is_terminal\n for text, style, is_control in buffer:\n if style and not is_control:\n append(\n style.render(\n text,\n color_system=color_system,\n legacy_windows=legacy_windows,\n )\n )\n elif not (not_terminal and is_control):\n append(text)\n\n rendered = \"\".join(output)\n return rendered\n\n def input(\n self,\n prompt: TextType = \"\",\n *,\n markup: bool = True,\n emoji: bool = True,\n password: bool = False,\n stream: TextIO = None,\n ) -> str:\n \"\"\"Displays a prompt and waits for input from the user. The prompt may contain color / style.\n\n Args:\n prompt (Union[str, Text]): Text to render in the prompt.\n markup (bool, optional): Enable console markup (requires a str prompt). Defaults to True.\n emoji (bool, optional): Enable emoji (requires a str prompt). Defaults to True.\n password: (bool, optional): Hide typed text. Defaults to False.\n stream: (TextIO, optional): Optional file to read input from (rather than stdin). 
Defaults to None.\n\n Returns:\n str: Text read from stdin.\n \"\"\"\n prompt_str = \"\"\n if prompt:\n with self.capture() as capture:\n self.print(prompt, markup=markup, emoji=emoji, end=\"\")\n prompt_str = capture.get()\n if password:\n result = getpass(prompt_str, stream=stream)\n else:\n if stream:\n self.file.write(prompt_str)\n result = stream.readline()\n else:\n result = input(prompt_str)\n return result\n\n def export_text(self, *, clear: bool = True, styles: bool = False) -> str:\n \"\"\"Generate text from console contents (requires record=True argument in constructor).\n\n Args:\n clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.\n styles (bool, optional): If ``True``, ansi escape codes will be included. ``False`` for plain text.\n Defaults to ``False``.\n\n Returns:\n str: String containing console contents.\n\n \"\"\"\n assert (\n self.record\n ), \"To export console contents set record=True in the constructor or instance\"\n\n with self._record_buffer_lock:\n if styles:\n text = \"\".join(\n (style.render(text) if style else text)\n for text, style, _ in self._record_buffer\n )\n else:\n text = \"\".join(\n segment.text\n for segment in self._record_buffer\n if not segment.is_control\n )\n if clear:\n del self._record_buffer[:]\n return text\n\n def save_text(self, path: str, *, clear: bool = True, styles: bool = False) -> None:\n \"\"\"Generate text from console and save to a given location (requires record=True argument in constructor).\n\n Args:\n path (str): Path to write text files.\n clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.\n styles (bool, optional): If ``True``, ansi style codes will be included. ``False`` for plain text.\n Defaults to ``False``.\n\n \"\"\"\n text = self.export_text(clear=clear, styles=styles)\n with open(path, \"wt\", encoding=\"utf-8\") as write_file:\n write_file.write(text)\n\n def export_html(\n self,\n *,\n theme: TerminalTheme = None,\n clear: bool = True,\n code_format: str = None,\n inline_styles: bool = False,\n ) -> str:\n \"\"\"Generate HTML from console contents (requires record=True argument in constructor).\n\n Args:\n theme (TerminalTheme, optional): TerminalTheme object containing console colors.\n clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.\n code_format (str, optional): Format string to render HTML, should contain {foreground}\n {background} and {code}.\n inline_styles (bool, optional): If ``True`` styles will be inlined in to spans, which makes files\n larger but easier to cut and paste markup. 
If ``False``, styles will be embedded in a style tag.\n Defaults to False.\n\n Returns:\n str: String containing console contents as HTML.\n \"\"\"\n assert (\n self.record\n ), \"To export console contents set record=True in the constructor or instance\"\n fragments: List[str] = []\n append = fragments.append\n _theme = theme or DEFAULT_TERMINAL_THEME\n stylesheet = \"\"\n\n def escape(text: str) -> str:\n \"\"\"Escape html.\"\"\"\n return text.replace(\"&\", \"&\").replace(\"<\", \"<\").replace(\">\", \">\")\n\n render_code_format = CONSOLE_HTML_FORMAT if code_format is None else code_format\n\n with self._record_buffer_lock:\n if inline_styles:\n for text, style, _ in Segment.filter_control(\n Segment.simplify(self._record_buffer)\n ):\n text = escape(text)\n if style:\n rule = style.get_html_style(_theme)\n text = f'<span style=\"{rule}\">{text}</span>' if rule else text\n if style.link:\n text = f'<a href=\"{style.link}\">{text}</a>'\n append(text)\n else:\n styles: Dict[str, int] = {}\n for text, style, _ in Segment.filter_control(\n Segment.simplify(self._record_buffer)\n ):\n text = escape(text)\n if style:\n rule = style.get_html_style(_theme)\n if rule:\n style_number = styles.setdefault(rule, len(styles) + 1)\n text = f'<span class=\"r{style_number}\">{text}</span>'\n if style.link:\n text = f'<a href=\"{style.link}\">{text}</a>'\n append(text)\n stylesheet_rules: List[str] = []\n stylesheet_append = stylesheet_rules.append\n for style_rule, style_number in styles.items():\n if style_rule:\n stylesheet_append(f\".r{style_number} {{{style_rule}}}\")\n stylesheet = \"\\n\".join(stylesheet_rules)\n\n rendered_code = render_code_format.format(\n code=\"\".join(fragments),\n stylesheet=stylesheet,\n foreground=_theme.foreground_color.hex,\n background=_theme.background_color.hex,\n )\n if clear:\n del self._record_buffer[:]\n return rendered_code\n\n def save_html(\n self,\n path: str,\n *,\n theme: TerminalTheme = None,\n clear: bool = True,\n code_format=CONSOLE_HTML_FORMAT,\n inline_styles: bool = False,\n ) -> None:\n \"\"\"Generate HTML from console contents and write to a file (requires record=True argument in constructor).\n\n Args:\n path (str): Path to write html file.\n theme (TerminalTheme, optional): TerminalTheme object containing console colors.\n clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.\n code_format (str, optional): Format string to render HTML, should contain {foreground}\n {background} and {code}.\n inline_styles (bool, optional): If ``True`` styles will be inlined in to spans, which makes files\n larger but easier to cut and paste markup. 
If ``False``, styles will be embedded in a style tag.\n Defaults to False.\n\n \"\"\"\n html = self.export_html(\n theme=theme,\n clear=clear,\n code_format=code_format,\n inline_styles=inline_styles,\n )\n with open(path, \"wt\", encoding=\"utf-8\") as write_file:\n write_file.write(html)\n\n\nif __name__ == \"__main__\": # pragma: no cover\n console = Console()\n\n console.log(\n \"JSONRPC [i]request[/i]\",\n 5,\n 1.3,\n True,\n False,\n None,\n {\n \"jsonrpc\": \"2.0\",\n \"method\": \"subtract\",\n \"params\": {\"minuend\": 42, \"subtrahend\": 23},\n \"id\": 3,\n },\n )\n\n console.log(\"Hello, World!\", \"{'a': 1}\", repr(console))\n\n console.print(\n {\n \"name\": None,\n \"empty\": [],\n \"quiz\": {\n \"sport\": {\n \"answered\": True,\n \"q1\": {\n \"question\": \"Which one is correct team name in NBA?\",\n \"options\": [\n \"New York Bulls\",\n \"Los Angeles Kings\",\n \"Golden State Warriors\",\n \"Huston Rocket\",\n ],\n \"answer\": \"Huston Rocket\",\n },\n },\n \"maths\": {\n \"answered\": False,\n \"q1\": {\n \"question\": \"5 + 7 = ?\",\n \"options\": [10, 11, 12, 13],\n \"answer\": 12,\n },\n \"q2\": {\n \"question\": \"12 - 8 = ?\",\n \"options\": [1, 2, 3, 4],\n \"answer\": 4,\n },\n },\n },\n }\n )\n console.log(\"foo\")\n"}
|
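A minimal usage sketch of the recording and capture APIs defined in the `rich/console.py` source captured above. This is illustrative only and assumes the `rich` package at this revision is importable; the printed strings and the `inline_styles=True` choice are not part of the dataset row.

```python
from rich.console import Console

# record=True is required by export_text()/export_html()
# (see the assert in the export methods above)
console = Console(record=True)
console.print("[bold magenta]Hello[/] World")

# capture() buffers output into a string instead of writing to the terminal
with console.capture() as capture:
    console.print("captured line")
print(repr(capture.get()))

# Export an HTML snapshot of everything recorded so far
html = console.export_html(inline_styles=True)
print(html[:80])
```

Note that `capture()` works without `record=True`; only the `export_*` and `save_*` methods need recording enabled.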
diff --git a/CHANGELOG.md b/CHANGELOG.md
index f813ffeb0a..132fafbbc2 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -18,6 +18,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Addded box.SQUARE_DOUBLE_HEAD
- Added highlighting of EUI-48 and EUI-64 (MAC addresses)
- Added Console.pager
+- Added Console.out
### Changed
diff --git a/docs/source/console.rst b/docs/source/console.rst
index a3c690c794..f5565ee6b3 100644
--- a/docs/source/console.rst
+++ b/docs/source/console.rst
@@ -69,6 +69,15 @@ The :meth:`~rich.console.Console.log` methods offers the same capabilities as pr
To help with debugging, the log() method has a ``log_locals`` parameter. If you set this to ``True``, Rich will display a table of local variables where the method was called.
+Low level output
+----------------
+
+In addition to :meth:`~rich.console.Console.print` and :meth:`~rich.console.Console.log`, Rich has a :meth:`~rich.console.Console.out` method which provides a lower-level way of writing to the terminal. The out() method converts all the positional arguments to strings and won't pretty print, word wrap, or apply markup to the output, but can apply a basic style and will optionally do highlighting.
+
+Here's an example::
+
+ >>> console.out("Locals", locals())
+
Justify / Alignment
-------------------
|
{"rich/console.py": [{"type": "function", "name": "Console.out", "lines": [1015, 1043], "signature": "def out( self, *objects: Any, sep=\" \", end=\"\\n\", style: Union[str, Style] = None, highlight: bool = True, ) -> None:", "doc": "Output to the terminal. This is a low-level way of writing to the terminal which unlike\n:meth:`~rich.console.Console.print` doesn't pretty print, wrap text, nor markup, but will highlighting\nand apply basic style.\n\nArgs:\n sep (str, optional): String to write between print data. Defaults to \" \".\n end (str, optional): String to write at end of print data. Defaults to \"\\n\".\n style (Union[str, Style], optional): A style to apply to output. Defaults to None.\n highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use console default. Defaults to ``None``."}]}
| null |
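The `new_components` entry above records the signature and docstring of the `Console.out` method added by this pull request. Below is a minimal sketch of how it differs from `print()`, based only on that signature and the documentation diff; the sample strings and the `StringIO` target are assumptions made for illustration.

```python
import io
from rich.console import Console

console = Console(file=io.StringIO(), force_terminal=True)

# out() converts positional arguments with str() and applies an optional style
# and highlighting, but does no pretty printing, wrapping, or markup parsing.
console.out("Locals:", {"x": 1}, style="bold")
console.out("[red]not parsed as markup[/red]")  # written literally, unlike print()

print(console.file.getvalue())
```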
["tests/test_console.py::test_out"]
|
["tests/test_console.py::test_dumb_terminal", "tests/test_console.py::test_16color_terminal", "tests/test_console.py::test_truecolor_terminal", "tests/test_console.py::test_console_options_update", "tests/test_console.py::test_init", "tests/test_console.py::test_size", "tests/test_console.py::test_repr", "tests/test_console.py::test_print", "tests/test_console.py::test_print_empty", "tests/test_console.py::test_markup_highlight", "tests/test_console.py::test_print_style", "tests/test_console.py::test_show_cursor", "tests/test_console.py::test_clear", "tests/test_console.py::test_clear_no_terminal", "tests/test_console.py::test_get_style", "tests/test_console.py::test_get_style_default", "tests/test_console.py::test_get_style_error", "tests/test_console.py::test_render_error", "tests/test_console.py::test_control", "tests/test_console.py::test_capture", "tests/test_console.py::test_input", "tests/test_console.py::test_input_password", "tests/test_console.py::test_justify_none", "tests/test_console.py::test_justify_left", "tests/test_console.py::test_justify_center", "tests/test_console.py::test_justify_right", "tests/test_console.py::test_justify_renderable_none", "tests/test_console.py::test_justify_renderable_left", "tests/test_console.py::test_justify_renderable_center", "tests/test_console.py::test_justify_renderable_right", "tests/test_console.py::test_render_broken_renderable", "tests/test_console.py::test_export_text", "tests/test_console.py::test_export_html", "tests/test_console.py::test_export_html_inline", "tests/test_console.py::test_save_text", "tests/test_console.py::test_save_html", "tests/test_console.py::test_no_wrap", "tests/test_console.py::test_soft_wrap", "tests/test_console.py::test_unicode_error", "tests/test_console.py::test_bell", "tests/test_console.py::test_pager", "tests/test_padding.py::test_repr", "tests/test_padding.py::test_indent", "tests/test_padding.py::test_unpack", "tests/test_padding.py::test_rich_console"]
|
b0661de34bab35af9b4b1d3ba8e28b186b225e84
|
{"first_commit_time": 1602433996.0, "pr_title": "console out", "pr_body": "Adds a Console.out method which is like a low-level print does simple styling only.\r\n\r\n## Type of changes\r\n\r\n- [ ] Bug fix\r\n- [x] New feature\r\n- [x] Documentation / docstrings\r\n- [ ] Tests\r\n- [ ] Other\r\n\r\n## Checklist\r\n\r\n- [x] I've run the latest [black](https://github.com/psf/black) with default args on new code.\r\n- [x] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.\r\n- [x] I've added tests for new code.\r\n- [x] I accept that @willmcgugan may be pedantic in the code review.\r\n\r\n## Description\r\n\r\nPlease describe your changes here. If this fixes a bug, please link to the issue, if possible.\r\n", "pr_timeline": [{"time": 1602434572.0, "comment": "# [Codecov](https://codecov.io/gh/willmcgugan/rich/pull/376?src=pr&el=h1) Report\n> Merging [#376](https://codecov.io/gh/willmcgugan/rich/pull/376?src=pr&el=desc) into [master](https://codecov.io/gh/willmcgugan/rich/commit/e11800a2889fa35228bd22b0121f42cd6e856976?el=desc) will **increase** coverage by `0.06%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/willmcgugan/rich/pull/376?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #376 +/- ##\n==========================================\n+ Coverage 99.35% 99.42% +0.06% \n==========================================\n Files 51 51 \n Lines 4374 4373 -1 \n==========================================\n+ Hits 4346 4348 +2 \n+ Misses 28 25 -3 \n```\n\n| Flag | Coverage \u0394 | |\n|---|---|---|\n| #unittests | `99.42% <100.00%> (+0.06%)` | :arrow_up: |\n\nFlags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags#carryforward-flags-in-the-pull-request-comment) to find out more.\n\n| [Impacted Files](https://codecov.io/gh/willmcgugan/rich/pull/376?src=pr&el=tree) | Coverage \u0394 | |\n|---|---|---|\n| [rich/console.py](https://codecov.io/gh/willmcgugan/rich/pull/376/diff?src=pr&el=tree#diff-cmljaC9jb25zb2xlLnB5) | `100.00% <100.00%> (\u00f8)` | |\n| [rich/containers.py](https://codecov.io/gh/willmcgugan/rich/pull/376/diff?src=pr&el=tree#diff-cmljaC9jb250YWluZXJzLnB5) | `100.00% <100.00%> (+1.17%)` | :arrow_up: |\n| [rich/measure.py](https://codecov.io/gh/willmcgugan/rich/pull/376/diff?src=pr&el=tree#diff-cmljaC9tZWFzdXJlLnB5) | `100.00% <0.00%> (+4.65%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/willmcgugan/rich/pull/376?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `\u0394 = absolute <relative> (impact)`, `\u00f8 = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/willmcgugan/rich/pull/376?src=pr&el=footer). Last update [a83ee86...ba78d0e](https://codecov.io/gh/willmcgugan/rich/pull/376?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"}], "issues": {}}
|
Textualize/rich
| 901
|
https://github.com/Textualize/rich/pull/901
|
Textualize__rich-901
|
[]
|
a9c0f917aed8d0bba232b7584742962f03a9a293
|
diff --git a/CHANGELOG.md b/CHANGELOG.md
index d8378b729a..acb53b1308 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -10,6 +10,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Added
- Added rich.tree
+- Added no_color argument to Console
## [9.6.2] - 2021-01-07
diff --git a/rich/console.py b/rich/console.py
index 2ef5b3d75b..d955ecd1f4 100644
--- a/rich/console.py
+++ b/rich/console.py
@@ -402,6 +402,7 @@ class Console:
width (int, optional): The width of the terminal. Leave as default to auto-detect width.
height (int, optional): The height of the terminal. Leave as default to auto-detect height.
style (StyleType, optional): Style to apply to all output, or None for no style. Defaults to None.
+ no_color (Optional[bool], optional): Enable no color mode, or None to auto detect. Defaults to None.
record (bool, optional): Boolean to enable recording of terminal output,
required to call :meth:`export_html` and :meth:`export_text`. Defaults to False.
markup (bool, optional): Boolean to enable :ref:`console_markup`. Defaults to True.
@@ -433,6 +434,7 @@ def __init__(
width: int = None,
height: int = None,
style: StyleType = None,
+ no_color: bool = None,
tab_size: int = 8,
record: bool = False,
markup: bool = True,
@@ -492,6 +494,9 @@ def __init__(
self.get_datetime = get_datetime or datetime.now
self.get_time = get_time or monotonic
self.style = style
+ self.no_color = (
+ no_color if no_color is not None else "NO_COLOR" in self._environ
+ )
self._record_buffer_lock = threading.RLock()
self._thread_locals = ConsoleThreadLocals(
@@ -538,7 +543,7 @@ def _detect_color_system(self) -> Optional[ColorSystem]:
"""Detect color system from env vars."""
if self.is_jupyter:
return ColorSystem.TRUECOLOR
- if not self.is_terminal or "NO_COLOR" in self._environ or self.is_dumb_terminal:
+ if not self.is_terminal or self.is_dumb_terminal:
return None
if WINDOWS: # pragma: no cover
if self.legacy_windows: # pragma: no cover
@@ -1374,6 +1379,8 @@ def _render_buffer(self, buffer: Iterable[Segment]) -> str:
with self._record_buffer_lock:
self._record_buffer.extend(buffer)
not_terminal = not self.is_terminal
+ if self.no_color and color_system:
+ buffer = Segment.remove_color(buffer)
for text, style, is_control in buffer:
if style:
append(
diff --git a/rich/segment.py b/rich/segment.py
index db17b0fbb6..34ae7ab0d0 100644
--- a/rich/segment.py
+++ b/rich/segment.py
@@ -1,4 +1,4 @@
-from typing import NamedTuple, Optional
+from typing import Dict, NamedTuple, Optional
from .cells import cell_len, set_cell_size
from .style import Style
@@ -331,12 +331,37 @@ def strip_links(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
def strip_styles(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
"""Remove all styles from an iterable of segments.
+ Args:
+ segments (Iterable[Segment]): An iterable segments.
+
Yields:
Segment: Segments with styles replace with None
"""
for text, _style, is_control in segments:
yield cls(text, None, is_control)
+ @classmethod
+ def remove_color(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
+ """Remove all color from an iterable of segments.
+
+ Args:
+ segments (Iterable[Segment]): An iterable segments.
+
+ Yields:
+ Segment: Segments with colorless style.
+ """
+
+ cache: Dict[Style, Style] = {}
+ for text, style, is_control in segments:
+ if style:
+ colorless_style = cache.get(style)
+ if colorless_style is None:
+ colorless_style = style.without_color
+ cache[style] = colorless_style
+ yield cls(text, colorless_style, is_control)
+ else:
+ yield cls(text, None, is_control)
+
if __name__ == "__main__": # pragma: no cover
lines = [[Segment("Hello")]]
diff --git a/rich/style.py b/rich/style.py
index 5a802d0762..cda8380638 100644
--- a/rich/style.py
+++ b/rich/style.py
@@ -383,6 +383,24 @@ def background_style(self) -> "Style":
"""A Style with background only."""
return Style(bgcolor=self.bgcolor)
+ @property
+ def without_color(self) -> "Style":
+ """Get a copy of the style with color removed."""
+ if self._null:
+ return NULL_STYLE
+ style = self.__new__(Style)
+ style._ansi = None
+ style._style_definition = None
+ style._color = None
+ style._bgcolor = None
+ style._attributes = self._attributes
+ style._set_attributes = self._set_attributes
+ style._link = self._link
+ style._link_id = f"{time()}-{randint(0, 999999)}" if self._link else ""
+ style._hash = self._hash
+ style._null = False
+ return style
+
@classmethod
@lru_cache(maxsize=4096)
def parse(cls, style_definition: str) -> "Style":
|
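The patch above introduces a `no_color` option on `Console`, a `Style.without_color` property, and a `Segment.remove_color` helper. The sketch below shows the intended effect, assuming the patch is applied; the expected escape sequence mirrors `test_no_color` in the test patch that follows.

```python
import io
from rich.console import Console
from rich.segment import Segment
from rich.style import Style

# With no_color=True, attribute codes (bold, underline, ...) survive,
# but foreground/background colors are stripped from every segment.
console = Console(
    file=io.StringIO(), force_terminal=True, color_system="truecolor", no_color=True
)
console.print("[bold magenta on red]FOO")
print(repr(console.file.getvalue()))  # '\x1b[1mFOO\x1b[0m\n'

# The same stripping is exposed at lower levels:
print(Style(bold=True, color="red").without_color)  # bold only
print(list(Segment.remove_color([Segment("foo", Style(bold=True, color="red"))])))
```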
diff --git a/tests/test_console.py b/tests/test_console.py
index b82c765911..82bc5b48d4 100644
--- a/tests/test_console.py
+++ b/tests/test_console.py
@@ -467,3 +467,14 @@ def test_console_style() -> None:
expected = "\x1b[31mfoo\x1b[0m\n"
result = console.file.getvalue()
assert result == expected
+
+
+def test_no_color():
+ console = Console(
+ file=io.StringIO(), color_system="truecolor", force_terminal=True, no_color=True
+ )
+ console.print("[bold magenta on red]FOO")
+ expected = "\x1b[1mFOO\x1b[0m\n"
+ result = console.file.getvalue()
+ print(repr(result))
+ assert result == expected
diff --git a/tests/test_segment.py b/tests/test_segment.py
index 2c3344dd1b..42abdcbacf 100644
--- a/tests/test_segment.py
+++ b/tests/test_segment.py
@@ -96,3 +96,14 @@ def test_strip_styles():
def test_strip_links():
segments = [Segment("foo", Style(bold=True, link="https://www.example.org"))]
assert list(Segment.strip_links(segments)) == [Segment("foo", Style(bold=True))]
+
+
+def test_remove_color():
+ segments = [
+ Segment("foo", Style(bold=True, color="red")),
+ Segment("bar", None),
+ ]
+ assert list(Segment.remove_color(segments)) == [
+ Segment("foo", Style(bold=True)),
+ Segment("bar", None),
+ ]
diff --git a/tests/test_style.py b/tests/test_style.py
index a5075e80fb..8282331f30 100644
--- a/tests/test_style.py
+++ b/tests/test_style.py
@@ -196,3 +196,13 @@ def test_background_style():
assert Style(bold=True, color="yellow", bgcolor="red").background_style == Style(
bgcolor="red"
)
+
+
+def test_without_color():
+ style = Style(bold=True, color="red", bgcolor="blue")
+ colorless_style = style.without_color
+ assert colorless_style.color == None
+ assert colorless_style.bgcolor == None
+ assert colorless_style.bold == True
+ null_style = Style.null()
+ assert null_style.without_color == null_style
| 2021-01-09T16:12:28
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"CHANGELOG.md": "# Changelog\n\nAll notable changes to this project will be documented in this file.\n\nThe format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),\nand this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).\n\n## [9.7.0] - Unreleased\n\n### Added\n\n- Added rich.tree\n\n## [9.6.2] - 2021-01-07\n\n### Fixed\n\n- Fixed markup escaping edge case https://github.com/willmcgugan/rich/issues/878\n- Double tag escape, i.e. `\"\\\\[foo]\"` results in a backslash plus `[foo]` tag\n\n## [9.6.1] - 2020-12-31\n\n### Fixed\n\n- Fixed encoding error on Windows when loading code for Tracebacks\n\n## [9.6.0] - 2020-12-30\n\n### Changed\n\n- MarkupError exception raise from None to omit internal exception\n- Factored out RichHandler.render and RichHandler.render_message for easier extending\n- Display pretty printed value in rich.inspect\n\n### Added\n\n- Added Progress.TimeElapsedColumn\n- Added IPython support to pretty.install\n\n### Fixed\n\n- Fixed display of locals in Traceback for stdin\n\n## [9.5.1] - 2020-12-19\n\n### Fixed\n\n- Fixed terminal size detection on Windows https://github.com/willmcgugan/rich/issues/836\n- Fixed hex number highlighting\n\n## [9.5.0] - 2020-12-18\n\n### Changed\n\n- If file is not specified on Console then the Console.file will return the current sys.stdout. Prior to 9.5.0 sys.stdout was cached on the Console, which could break code that wrapped sys.stdout after the Console was constructed.\n- Changed `Color.__str__` to not include ansi codes\n- Changed Console.size to get the terminal dimensions via sys.stdin. This means that if you set file to be an io.StringIO file then the width will be set to the current terminal dimensions and not a default of 80.\n\n### Added\n\n- Added stderr parameter to Console\n- Added rich.reconfigure\n- Added `Color.__rich__`\n- Added Console.soft_wrap\n- Added Console.style parameter\n- Added Table.highlight parameter to enable highlighting of cells\n- Added Panel.highlight parameter to enable highlighting of panel title\n- Added highlight to ConsoleOptions\n\n### Fixed\n\n- Fixed double output in rich.live https://github.com/willmcgugan/rich/issues/485\n- Fixed Console.out highlighting not reflecting defaults https://github.com/willmcgugan/rich/issues/827\n- FileProxy now raises TypeError for empty non-str arguments https://github.com/willmcgugan/rich/issues/828\n\n## [9.4.0] - 2020-12-12\n\n### Added\n\n- Added rich.live https://github.com/willmcgugan/rich/pull/382\n- Added algin parameter to Rule and Console.rule\n- Added rich.Status class and Console.status\n- Added getitem to Text\n- Added style parameter to Console.log\n- Added rich.diagnose command\n\n### Changed\n\n- Table.add_row style argument now applies to entire line and not just cells\n- Added end_section parameter to Table.add_row to force a line underneath row\n\n## Fixed\n\n- Fixed suppressed traceback context https://github.com/willmcgugan/rich/issues/468\n\n## [9.3.0] - 2020-12-1\n\n### Added\n\n- Added get_datetime parameter to Console, to allow for repeatable tests\n- Added get_time parameter to Console\n- Added rich.abc.RichRenderable\n- Added expand_all to rich.pretty.install()\n- Added locals_max_length, and locals_max_string to Traceback and logging.RichHandler\n- Set defaults of max_length and max_string for Traceback to 10 and 80\n- Added disable argument to Progress\n\n### Changed\n\n- Reformatted test card (python -m rich)\n\n### Fixed\n\n- Fixed redirecting of stderr in Progress\n- Fixed broken 
expanded tuple of one https://github.com/willmcgugan/rich/issues/445\n- Fixed traceback message with `from` exceptions\n- Fixed justify argument not working in console.log https://github.com/willmcgugan/rich/issues/460\n\n## [9.2.0] - 2020-11-08\n\n### Added\n\n- Added tracebacks_show_locals parameter to RichHandler\n- Added max_string to Pretty\n- Added rich.ansi.AnsiDecoder\n- Added decoding of ansi codes to captured stdout in Progress\n- Added expand_all to rich.pretty.pprint\n\n### Changed\n\n- Applied dim=True to indent guide styles\n- Factored out RichHandler.get_style_and_level to allow for overriding in subclasses\n- Hid progress bars from html export\n- rich.pretty.pprint now soft wraps\n\n## [9.1.0] - 2020-10-23\n\n### Added\n\n- Added Text.with_indentation_guide\n- Added Text.detect_indentation\n- Added Pretty.indent_guides\n- Added Syntax.indent_guides\n- Added indent_guides parameter on pretty.install\n- Added rich.pretty.pprint\n- Added max_length to Pretty\n\n### Changed\n\n- Enabled indent guides on Tracebacks\n\n### Fixed\n\n- Fixed negative time remaining in Progress bars https://github.com/willmcgugan/rich/issues/378\n\n## [9.0.1] - 2020-10-19\n\n### Fixed\n\n- Fixed broken ANSI codes in input on windows legacy https://github.com/willmcgugan/rich/issues/393\n\n## [9.0.0] - 2020-10-18\n\n### Fixed\n\n- Progress download column now displays decimal units\n\n### Added\n\n- Support for Python 3.9\n- Added legacy_windows to ConsoleOptions\n- Added ascii_only to ConsoleOptions\n- Added box.SQUARE_DOUBLE_HEAD\n- Added highlighting of EUI-48 and EUI-64 (MAC addresses)\n- Added Console.pager\n- Added Console.out\n- Added binary_units in progress download column\n- Added Progress.reset\n- Added Style.background_style property\n- Added Bar renderable https://github.com/willmcgugan/rich/pull/361\n- Added Table.min_width\n- Added table.Column.min_width and table.Column.max_width, and same to Table.add_column\n\n### Changed\n\n- Dropped box.get_safe_box function in favor of Box.substitute\n- Changed default padding in Panel from 0 to (0, 1) https://github.com/willmcgugan/rich/issues/385\n- Table with row_styles will extend background color between cells if the box has no vertical dividerhttps://github.com/willmcgugan/rich/issues/383\n- Changed default of fit kwarg in render_group() from False to True\n- Renamed rich.bar to rich.progress_bar, and Bar class to ProgressBar, rich.bar is now the new solid bar class\n\n### Fixed\n\n- Fixed typo in `Style.transparent_background` method name.\n\n## [8.0.0] - 2020-10-03\n\n### Added\n\n- Added Console.bell method\n- Added Set to types that Console.print will automatically pretty print\n- Added show_locals to Traceback\n- Added theme stack mechanism, see Console.push_theme and Console.pop_theme\n\n### Changed\n\n- Changed Style.empty to Style.null to better reflect what it does\n- Optimized combining styles involving a null style\n- Change error messages in Style.parse to read better\n\n### Fixed\n\n- Fixed Table.\\_\\_rich_measure\\_\\_\n- Fixed incorrect calculation of fixed width columns\n\n## [7.1.0] - 2020-09-26\n\n### Added\n\n- Added Console.begin_capture, Console.end_capture and Console.capture\n- Added Table.title_justify and Table.caption_justify https://github.com/willmcgugan/rich/issues/301\n\n### Changed\n\n- Improved formatting of exceptions\n- Enabled Rich exceptions in logging https://github.com/taliraj\n- UTF-8 encoding is now mentioned in HTML head section\n\n### Removed\n\n- Removed line_numbers argument from 
traceback.install, which was undocumented and did nothing\n\n## [7.0.0] - 2020-09-18\n\n### Added\n\n- New ansi_dark and ansi_light themes\n- Added Text.append_tokens for fast appending of string + Style pairs\n- Added Text.remove_suffix\n- Added Text.append_tokens\n\n### Changed\n\n- Text.tabs_to_spaces was renamed to Text.expand_tabs, which works in place rather than returning a new instance\n- Renamed Column.index to Column.\\_index\n- Optimized Style.combine and Style.chain\n- Optimized text rendering by fixing internal cache mechanism\n- Optimized hash generation for Styles\n\n## [6.2.0] - 2020-09-13\n\n### Added\n\n- Added inline code highlighting to Markdown\n\n## [6.1.2] - 2020-09-11\n\n### Added\n\n- Added ipv4 and ipv6 to ReprHighlighter\n\n### Changed\n\n- The `#` sign is included in url highlighting\n\n### Fixed\n\n- Fixed force-color switch in rich.syntax and rich.markdown commands\n\n## [6.1.1] - 2020-09-07\n\n### Changed\n\n- Restored \"def\" in inspect signature\n\n## [6.1.0] - 2020-09-07\n\n### Added\n\n- New inspect module\n- Added os.\\_Environ to pretty print\n\n### Fixed\n\n- Prevented recursive renderables from getting stuck\n\n## Changed\n\n- force_terminal and force_jupyter can now be used to force the disabled state, or left as None to auto-detect.\n- Panel now expands to fit title if supplied\n\n## [6.0.0] - 2020-08-25\n\n### Fixed\n\n- Fixed use of `__rich__` cast\n\n### Changed\n\n- New algorithm to pretty print which fits more on a line if possible\n- Deprecated `character` parameter in Rule and Console.rule, in favor of `characters`\n- Optimized Syntax.from_path to avoid searching all lexers, which also speeds up tracebacks\n\n### Added\n\n- Added soft_wrap flag to Console.print\n\n## [5.2.1] - 2020-08-19\n\n### Fixed\n\n- Fixed underscore with display hook https://github.com/willmcgugan/rich/issues/235\n\n## [5.2.0] - 2020-08-14\n\n### Changed\n\n- Added crop argument to Console.print\n- Added \"ignore\" overflow method\n- Added multiple characters per rule @hedythedev https://github.com/willmcgugan/rich/pull/207\n\n## [5.1.2] - 2020-08-10\n\n### Fixed\n\n- Further optimized pretty printing ~5X.\n\n## [5.1.1] - 2020-08-09\n\n### Fixed\n\n- Optimized pretty printing ~3X faster\n\n## [5.1.0] - 2020-08-08\n\n### Added\n\n- Added Text.cell_len\n- Added helpful message regarding unicode decoding errors https://github.com/willmcgugan/rich/issues/212\n- Added display hook with pretty.install()\n\n### Fixed\n\n- Fixed deprecation warnings re backslash https://github.com/willmcgugan/rich/issues/210\n- Fixed repr highlighting of scientific notation, e.g. 1e100\n\n### Changed\n\n- Implemented pretty printing, and removed pprintpp from dependencies\n- Optimized Text.join\n\n## [5.0.0] - 2020-08-02\n\n### Changed\n\n- Change to console markup syntax to not parse Python structures as markup, i.e. `[1,2,3]` is treated as a literal, not a tag.\n- Standard color numbers syntax has changed to `\"color(<number>)\"` so that `[5]` (for example) is considered a literal.\n- Markup escape method has changed from double brackets to preceding with a backslash, so `foo[[]]` would be `foo\\[bar]`\n\n## [4.2.2] - 2020-07-30\n\n### Changed\n\n- Added thread to automatically call update() in progress.track(). 
Replacing previous adaptive algorithm.\n- Second attempt at working around https://bugs.python.org/issue37871\n\n## [4.2.1] - 2020-07-29\n\n### Added\n\n- Added show_time and show_level parameters to RichHandler https://github.com/willmcgugan/rich/pull/182\n\n### Fixed\n\n- Fixed progress.track iterator exiting early https://github.com/willmcgugan/rich/issues/189\n- Added workaround for Python bug https://bugs.python.org/issue37871, fixing https://github.com/willmcgugan/rich/issues/186\n\n### Changed\n\n- Set overflow=fold for log messages https://github.com/willmcgugan/rich/issues/190\n\n## [4.2.0] - 2020-07-27\n\n### Fixed\n\n- Fixed missing new lines https://github.com/willmcgugan/rich/issues/178\n- Fixed Progress.track https://github.com/willmcgugan/rich/issues/184\n- Remove control codes from exported text https://github.com/willmcgugan/rich/issues/181\n- Implemented auto-detection and color rendition of 16-color mode\n\n## [4.1.0] - 2020-07-26\n\n### Changed\n\n- Optimized progress.track for very quick iterations\n- Force default size of 80x25 if get_terminal_size reports size of 0,0\n\n## [4.0.0] - 2020-07-23\n\nMajor version bump for a breaking change to `Text.stylize signature`, which corrects a minor but irritating API wart. The style now comes first and the `start` and `end` offsets default to the entire text. This allows for `text.stylize_all(style)` to be replaced with `text.stylize(style)`. The `start` and `end` offsets now support negative indexing, so `text.stylize(\"bold\", -1)` makes the last character bold.\n\n### Added\n\n- Added markup switch to RichHandler https://github.com/willmcgugan/rich/issues/171\n\n### Changed\n\n- Change signature of Text.stylize to accept style first\n- Remove Text.stylize_all which is no longer necessary\n\n### Fixed\n\n- Fixed rendering of Confirm prompt https://github.com/willmcgugan/rich/issues/170\n\n## [3.4.1] - 2020-07-22\n\n### Fixed\n\n- Fixed incorrect default of expand in Table.grid\n\n## [3.4.0] - 2020-07-22\n\n### Added\n\n- Added stream parameter to Console.input\n- Added password parameter to Console.input\n- Added description parameter to Progress.update\n- Added rich.prompt\n- Added detecting 'dumb' terminals\n- Added Text.styled alternative constructor\n\n### Fixes\n\n- Fixed progress bars so that they are readable when color is disabled\n\n## [3.3.2] - 2020-07-14\n\n### Changed\n\n- Optimized Text.pad\n\n### Added\n\n- Added rich.scope\n- Change log_locals to use scope.render_scope\n- Added title parameter to Columns\n\n## [3.3.1] - 2020-07-13\n\n### Added\n\n- box.ASCII_DOUBLE_HEAD\n\n### Changed\n\n- Removed replace of -- --- ... 
from Markdown, as it made it impossible to include CLI info\n\n## [3.3.0] - 2020-07-12\n\n### Added\n\n- Added title and title_align options to Panel\n- Added pad and width parameters to Align\n- Added end parameter to Rule\n- Added Text.pad and Text.align methods\n- Added leading parameter to Table\n\n## [3.2.0] - 2020-07-10\n\n### Added\n\n- Added Align.left Align.center Align.right shortcuts\n- Added Panel.fit shortcut\n- Added align parameter to Columns\n\n### Fixed\n\n- Align class now pads to the right, like Text\n- ipywidgets added as an optional dependency\n- Issue with Panel and background color\n- Fixed missing `__bool__` on Segment\n\n### Changed\n\n- Added `border_style` argument to Panel (note, `style` now applies to interior of the panel)\n\n## [3.1.0] - 2020-07-09\n\n### Changed\n\n- Progress bars now work in Jupyter\n\n## Added\n\n- Added refresh_per_second to progress.track\n- Added styles to BarColumn and progress.track\n\n## [3.0.5] - 2020-07-07\n\n### Fixed\n\n- Fixed Windows version number require for truecolor\n\n## [3.0.4] - 2020-07-07\n\n### Changed\n\n- More precise detection of Windows console https://github.com/willmcgugan/rich/issues/140\n\n## [3.0.3] - 2020-07-03\n\n### Fixed\n\n- Fixed edge case with wrapped and overflowed text\n\n### Changed\n\n- New algorithm for compressing table that priorities smaller columns\n\n### Added\n\n- Added safe_box parameter to Console constructor\n\n## [3.0.2] - 2020-07-02\n\n### Added\n\n- Added rich.styled.Styled class to apply styles to renderable\n- Table.add_row now has an optional style parameter\n- Added table_movie.py to examples\n\n### Changed\n\n- Modified box options to use half line characters at edges\n- Non no_wrap columns will now shrink below minimum width if table is compressed\n\n## [3.0.1] - 2020-06-30\n\n### Added\n\n- Added box.ASCII2\n- Added markup argument to logging extra\n\n### Changed\n\n- Setting a non-None width now implies expand=True\n\n## [3.0.0] - 2020-06-28\n\n### Changed\n\n- Enabled supported box chars for legacy Windows, and introduce `safe_box` flag\n- Disable hyperlinks on legacy Windows\n- Constructors for Rule and Panel now have keyword only arguments (reason for major version bump)\n- Table.add_colum added keyword only arguments\n\n### Fixed\n\n- Fixed Table measure\n\n## [2.3.1] - 2020-06-26\n\n### Fixed\n\n- Disabled legacy_windows if jupyter is detected https://github.com/willmcgugan/rich/issues/125\n\n## [2.3.0] - 2020-06-26\n\n### Fixed\n\n- Fixed highlighting of paths / filenames\n- Corrected docs for RichHandler which erroneously said default console writes to stderr\n\n### Changed\n\n- Allowed `style` parameter for `highlight_regex` to be a callable that returns a style\n\n### Added\n\n- Added optional highlighter parameter to RichHandler\n\n## [2.2.6] - 2020-06-24\n\n### Changed\n\n- Store a \"link id\" on Style instance, so links containing different styles are highlighted together. 
(https://github.com/willmcgugan/rich/pull/123)\n\n## [2.2.5] - 2020-06-23\n\n### Fixed\n\n- Fixed justify of tables (https://github.com/willmcgugan/rich/issues/117)\n\n## [2.2.4] - 2020-06-21\n\n### Added\n\n- Added enable_link_path to RichHandler\n- Added legacy_windows switch to Console constructor\n\n## [2.2.3] - 2020-06-15\n\n### Fixed\n\n- Fixed console.log hyperlink not containing full path\n\n### Changed\n\n- Used random number for hyperlink id\n\n## [2.2.2] - 2020-06-14\n\n### Changed\n\n- Exposed RichHandler highlighter as a class var\n\n## [2.2.1] - 2020-06-14\n\n### Changed\n\n- Linked path in log render to file\n\n## [2.2.0] - 2020-06-14\n\n### Added\n\n- Added redirect_stdout and redirect_stderr to Progress\n\n### Changed\n\n- printing to console with an active Progress doesn't break visuals\n\n## [2.1.0] - 2020-06-11\n\n### Added\n\n- Added 'transient' option to Progress\n\n### Changed\n\n- Truncated overly long text in Rule with ellipsis overflow\n\n## [2.0.1] - 2020-06-10\n\n### Added\n\n- Added expand option to Padding\n\n### Changed\n\n- Some minor optimizations in Text\n\n### Fixed\n\n- Fixed broken rule with CJK text\n\n## [2.0.0] - 2020-06-06\n\n### Added\n\n- Added overflow methods\n- Added no_wrap option to print()\n- Added width option to print\n- Improved handling of compressed tables\n\n### Fixed\n\n- Fixed erroneous space at end of log\n- Fixed erroneous space at end of progress bar\n\n### Changed\n\n- Renamed \\_ratio.ratio_divide to \\_ratio.ratio_distribute\n- Renamed JustifyValues to JustifyMethod (backwards incompatible)\n- Optimized \\_trim_spans\n- Enforced keyword args in Console / Text interfaces (backwards incompatible)\n- Return self from text.append\n\n## [1.3.1] - 2020-06-01\n\n### Changed\n\n- Changed defaults of Table.grid\n- Polished listdir.py example\n\n### Added\n\n- Added width argument to Columns\n\n### Fixed\n\n- Fixed for `columns_first` argument in Columns\n- Fixed incorrect padding in columns with fixed width\n\n## [1.3.0] - 2020-05-31\n\n### Added\n\n- Added rich.get_console() function to get global console instance.\n- Added Columns class\n\n### Changed\n\n- Updated `markdown.Heading.create()` to work with subclassing.\n- Console now transparently works with Jupyter\n\n### Fixed\n\n- Fixed issue with broken table with show_edge=False and a non-None box arg\n\n## [1.2.3] - 2020-05-24\n\n### Added\n\n- Added `padding` parameter to Panel\n- Added 'indeterminate' state when progress bars aren't started\n\n### Fixed\n\n- Fixed Progress deadlock https://github.com/willmcgugan/rich/issues/90\n\n### Changed\n\n- Auto-detect \"truecolor\" color system when in Windows Terminal\n\n## [1.2.2] - 2020-05-22\n\n### Fixed\n\n- Issue with right aligned wrapped text adding extra spaces\n\n## [1.2.1] - 2020-05-22\n\n### Fixed\n\n- Issue with sum and Style\n\n## [1.2.0] - 2020-05-22\n\n### Added\n\n- Support for double underline, framed, encircled, and overlined attributes\n\n### Changed\n\n- Optimized Style\n- Changed methods `__console__` to `__rich_console__`, and `__measure__` to `__rich_measure__`\n\n## [1.1.9] - 2020-05-20\n\n### Fixed\n\n- Exception when BarColumn.bar_width == None\n\n## [1.1.8] - 2020-05-20\n\n### Changed\n\n- Optimizations for Segment, Console and Table\n\n### Added\n\n- Added Console.clear method\n- Added exporting of links to HTML\n\n## [1.1.7] - 2020-05-19\n\n### Added\n\n- Added collapse_padding option to Table.\n\n### Changed\n\n- Some style attributes may be abbreviated (b for bold, i for italic etc). 
Previously abbreviations worked in console markup but only one at a time, i.e. \"[b]Hello[/]\" but not \"[b i]Hello[/]\" -- now they work everywhere.\n- Renamed 'text' property on Text to 'plain'. i.e. text.plain returns a string version of the Text instance.\n\n### Fixed\n\n- Fixed zero division if total is 0 in progress bar\n\n## [1.1.6] - 2020-05-17\n\n### Added\n\n- Added rich.align.Align class\n- Added justify argument to Console.print and console.log\n\n## [1.1.5] - 2020-05-15\n\n### Changed\n\n- Changed progress bars to write to stdout on terminal and hide on non-terminal\n\n## [1.1.4] - 2020-05-15\n\n### Fixed\n\n- Fixed incorrect file and link in progress.log\n- Fixes for legacy windows: Bar, Panel, and Rule now use ASCII characters\n- show_cursor is now a no-op on legacy windows\n\n### Added\n\n- Added Console.input\n\n### Changed\n\n- Disable progress bars when not writing to a terminal\n\n## [1.1.3] - 2020-05-14\n\n### Fixed\n\n- Issue with progress of one line`\n\n## [1.1.2] - 2020-05-14\n\n### Added\n\n- Added -p switch to python -m rich.markdown to page output\n- Added Console.control to output control codes\n\n### Changed\n\n- Changed Console log_time_format to no longer require a space at the end\n- Added print and log to Progress to render terminal output when progress is active\n\n## [1.1.1] - 2020-05-12\n\n### Changed\n\n- Stripped cursor moving control codes from text\n\n## [1.1.0] - 2020-05-10\n\n### Added\n\n- Added hyperlinks to Style and markup\n- Added justify and code theme switches to markdown command\n\n## [1.0.3] - 2020-05-08\n\n### Added\n\n- Added `python -m rich.syntax` command\n\n## [1.0.2] - 2020-05-08\n\n### Fixed\n\n- Issue with Windows legacy support https://github.com/willmcgugan/rich/issues/59\n\n## [1.0.1] - 2020-05-08\n\n### Changed\n\n- Applied console markup after highlighting\n- Documented highlighting\n- Changed Markup parser to handle overlapping styles\n- Relaxed dependency on colorama\n- Allowed Theme to accept values as style definitions (str) as well as Style instances\n- Added a panel to emphasize code in Markdown\n\n### Added\n\n- Added markup.escape\n- Added `python -m rich.theme` command\n- Added `python -m rich.markdown` command\n- Added rendering of images in Readme (links only)\n\n### Fixed\n\n- Fixed Text.assemble not working with strings https://github.com/willmcgugan/rich/issues/57\n- Fixed table when column widths must be compressed to fit\n\n## [1.0.0] - 2020-05-03\n\n### Changed\n\n- Improvements to repr highlighter to highlight URLs\n\n## [0.8.13] - 2020-04-28\n\n### Fixed\n\n- Fixed incorrect markdown rendering for quotes and changed style\n\n## [0.8.12] - 2020-04-21\n\n### Fixed\n\n- Removed debug print from rich.progress\n\n## [0.8.11] - 2020-04-14\n\n### Added\n\n- Added Table.show_lines to render lines between rows\n\n### Changed\n\n- Added markup escape with double square brackets\n\n## [0.8.10] - 2020-04-12\n\n### Fixed\n\n- Fix row_styles applying to header\n\n## [0.8.9] - 2020-04-12\n\n### Changed\n\n- Added force_terminal option to `Console.__init__`\n\n### Added\n\n- Added Table.row_styles to enable zebra striping.\n\n## [0.8.8] - 2020-03-31\n\n### Fixed\n\n- Fixed background in Syntax\n\n## [0.8.7] - 2020-03-31\n\n### Fixed\n\n- Broken wrapping of long lines\n- Fixed wrapping in Syntax\n\n### Changed\n\n- Added word_wrap option to Syntax, which defaults to False.\n- Added word_wrap option to Traceback.\n\n## [0.8.6] - 2020-03-29\n\n### Added\n\n- Experimental Jupyter notebook support: from rich.jupyter import 
print\n\n## [0.8.5] - 2020-03-29\n\n### Changed\n\n- Smarter number parsing regex for repr highlighter\n\n### Added\n\n- uuid highlighter for repr\n\n## [0.8.4] - 2020-03-28\n\n### Added\n\n- Added 'test card', run python -m rich\n\n### Changed\n\n- Detected windows terminal, defaulting to colorama support\n\n### Fixed\n\n- Fixed table scaling issue\n\n## [0.8.3] - 2020-03-27\n\n### Fixed\n\n- CJK right align\n\n## [0.8.2] - 2020-03-27\n\n### Changed\n\n- Fixed issue with 0 speed resulting in zero division error\n- Changed signature of Progress.update\n- Made calling start() a second time a no-op\n\n## [0.8.1] - 2020-03-22\n\n### Added\n\n- Added progress.DownloadColumn\n\n## [0.8.0] - 2020-03-17\n\n### Added\n\n- CJK support\n- Console level highlight flag\n- Added encoding argument to Syntax.from_path\n\n### Changed\n\n- Dropped support for Windows command prompt (try https://www.microsoft.com/en-gb/p/windows-terminal-preview/)\n- Added task_id to Progress.track\n\n## [0.7.2] - 2020-03-15\n\n### Fixed\n\n- KeyError for missing pygments style\n\n## [0.7.1] - 2020-03-13\n\n### Fixed\n\n- Issue with control codes being used in length calculation\n\n### Changed\n\n- Remove current_style concept, which wasn't really used and was problematic for concurrency\n\n## [0.7.0] - 2020-03-12\n\n### Changed\n\n- Added width option to Panel\n- Change special method `__render_width__` to `__measure__`\n- Dropped the \"markdown style\" syntax in console markup\n- Optimized style rendering\n\n### Added\n\n- Added Console.show_cursor method\n- Added Progress bars\n\n### Fixed\n\n- Fixed wrapping when a single word was too large to fit in a line\n\n## [0.6.0] - 2020-03-03\n\n### Added\n\n- Added tab_size to Console and Text\n- Added protocol.is_renderable for runtime check\n- Added emoji switch to Console\n- Added inherit boolean to Theme\n- Made Console thread safe, with a thread local buffer\n\n### Changed\n\n- Console.markup attribute now effects Table\n- SeparatedConsoleRenderable and RichCast types\n\n### Fixed\n\n- Fixed tabs breaking rendering by converting to spaces\n\n## [0.5.0] - 2020-02-23\n\n### Changed\n\n- Replaced `__console_str__` with `__rich__`\n\n## [0.4.1] - 2020-02-22\n\n### Fixed\n\n- Readme links in Pypi\n\n## [0.4.0] - 2020-02-22\n\n### Added\n\n- Added Traceback rendering and handler\n- Added rich.constrain\n- Added rich.rule\n\n### Fixed\n\n- Fixed unnecessary padding\n\n## [0.3.3] - 2020-02-04\n\n### Fixed\n\n- Fixed Windows color support\n- Fixed line width on windows issue (https://github.com/willmcgugan/rich/issues/7)\n- Fixed Pretty print on Windows\n\n## [0.3.2] - 2020-01-26\n\n### Added\n\n- Added rich.logging\n\n## [0.3.1] - 2020-01-22\n\n### Added\n\n- Added colorama for Windows support\n\n## [0.3.0] - 2020-01-19\n\n### Added\n\n- First official release, API still to be stabilized\n", "rich/console.py": "import inspect\nimport os\nimport platform\nimport shutil\nimport sys\nimport threading\nfrom abc import ABC, abstractmethod\nfrom collections import abc\nfrom dataclasses import dataclass, field, replace\nfrom datetime import datetime\nfrom functools import wraps\nfrom getpass import getpass\nfrom time import monotonic\nfrom typing import (\n IO,\n TYPE_CHECKING,\n Any,\n Callable,\n Dict,\n Iterable,\n List,\n NamedTuple,\n Optional,\n TextIO,\n Union,\n cast,\n)\n\nfrom typing_extensions import Literal, Protocol, runtime_checkable\n\nfrom . 
import errors, themes\nfrom ._emoji_replace import _emoji_replace\nfrom ._log_render import LogRender\nfrom .align import Align, AlignMethod\nfrom .color import ColorSystem\nfrom .control import Control\nfrom .highlighter import NullHighlighter, ReprHighlighter\nfrom .markup import render as render_markup\nfrom .measure import Measurement, measure_renderables\nfrom .pager import Pager, SystemPager\nfrom .pretty import Pretty\nfrom .scope import render_scope\nfrom .segment import Segment\nfrom .style import Style, StyleType\nfrom .styled import Styled\nfrom .terminal_theme import DEFAULT_TERMINAL_THEME, TerminalTheme\nfrom .text import Text, TextType\nfrom .theme import Theme, ThemeStack\n\nif TYPE_CHECKING:\n from ._windows import WindowsConsoleFeatures\n from .status import Status\n\nWINDOWS = platform.system() == \"Windows\"\n\nHighlighterType = Callable[[Union[str, \"Text\"]], \"Text\"]\nJustifyMethod = Literal[\"default\", \"left\", \"center\", \"right\", \"full\"]\nOverflowMethod = Literal[\"fold\", \"crop\", \"ellipsis\", \"ignore\"]\n\n\nCONSOLE_HTML_FORMAT = \"\"\"\\\n<!DOCTYPE html>\n<head>\n<meta charset=\"UTF-8\">\n<style>\n{stylesheet}\nbody {{\n color: {foreground};\n background-color: {background};\n}}\n</style>\n</head>\n<html>\n<body>\n <code>\n <pre style=\"font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">{code}</pre>\n </code>\n</body>\n</html>\n\"\"\"\n\n_TERM_COLORS = {\"256color\": ColorSystem.EIGHT_BIT, \"16color\": ColorSystem.STANDARD}\n\n\n@dataclass\nclass ConsoleOptions:\n \"\"\"Options for __rich_console__ method.\"\"\"\n\n legacy_windows: bool\n \"\"\"legacy_windows: flag for legacy windows.\"\"\"\n min_width: int\n \"\"\"Minimum width of renderable.\"\"\"\n max_width: int\n \"\"\"Maximum width of renderable.\"\"\"\n is_terminal: bool\n \"\"\"True if the target is a terminal, otherwise False.\"\"\"\n encoding: str\n \"\"\"Encoding of terminal.\"\"\"\n justify: Optional[JustifyMethod] = None\n \"\"\"Justify value override for renderable.\"\"\"\n overflow: Optional[OverflowMethod] = None\n \"\"\"Overflow value override for renderable.\"\"\"\n no_wrap: Optional[bool] = False\n \"\"\"Disable wrapping for text.\"\"\"\n highlight: Optional[bool] = None\n \"\"\"Highlight override for render_str.\"\"\"\n\n @property\n def ascii_only(self) -> bool:\n \"\"\"Check if renderables should use ascii only.\"\"\"\n return not self.encoding.startswith(\"utf\")\n\n def update(\n self,\n width: int = None,\n min_width: int = None,\n max_width: int = None,\n justify: JustifyMethod = None,\n overflow: OverflowMethod = None,\n no_wrap: bool = None,\n highlight: bool = None,\n ) -> \"ConsoleOptions\":\n \"\"\"Update values, return a copy.\"\"\"\n options = replace(self)\n if width is not None:\n options.min_width = options.max_width = width\n if min_width is not None:\n options.min_width = min_width\n if max_width is not None:\n options.max_width = max_width\n if justify is not None:\n options.justify = justify\n if overflow is not None:\n options.overflow = overflow\n if no_wrap is not None:\n options.no_wrap = no_wrap\n if highlight is not None:\n options.highlight = highlight\n return options\n\n\n@runtime_checkable\nclass RichCast(Protocol):\n \"\"\"An object that may be 'cast' to a console renderable.\"\"\"\n\n def __rich__(self) -> Union[\"ConsoleRenderable\", str]: # pragma: no cover\n ...\n\n\n@runtime_checkable\nclass ConsoleRenderable(Protocol):\n \"\"\"An object that supports the console protocol.\"\"\"\n\n def __rich_console__(\n self, console: 
\"Console\", options: \"ConsoleOptions\"\n ) -> \"RenderResult\": # pragma: no cover\n ...\n\n\nRenderableType = Union[ConsoleRenderable, RichCast, str]\n\"\"\"A type that may be rendered by Console.\"\"\"\n\nRenderResult = Iterable[Union[RenderableType, Segment]]\n\"\"\"The result of calling a __rich_console__ method.\"\"\"\n\n\n_null_highlighter = NullHighlighter()\n\n\nclass CaptureError(Exception):\n \"\"\"An error in the Capture context manager.\"\"\"\n\n\nclass Capture:\n \"\"\"Context manager to capture the result of printing to the console.\n See :meth:`~rich.console.Console.capture` for how to use.\n\n Args:\n console (Console): A console instance to capture output.\n \"\"\"\n\n def __init__(self, console: \"Console\") -> None:\n self._console = console\n self._result: Optional[str] = None\n\n def __enter__(self) -> \"Capture\":\n self._console.begin_capture()\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb) -> None:\n self._result = self._console.end_capture()\n\n def get(self) -> str:\n \"\"\"Get the result of the capture.\"\"\"\n if self._result is None:\n raise CaptureError(\n \"Capture result is not available until context manager exits.\"\n )\n return self._result\n\n\nclass ThemeContext:\n \"\"\"A context manager to use a temporary theme. See :meth:`~rich.console.Console.theme` for usage.\"\"\"\n\n def __init__(self, console: \"Console\", theme: Theme, inherit: bool = True) -> None:\n self.console = console\n self.theme = theme\n self.inherit = inherit\n\n def __enter__(self) -> \"ThemeContext\":\n self.console.push_theme(self.theme)\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb) -> None:\n self.console.pop_theme()\n\n\nclass PagerContext:\n \"\"\"A context manager that 'pages' content. See :meth:`~rich.console.Console.pager` for usage.\"\"\"\n\n def __init__(\n self,\n console: \"Console\",\n pager: Pager = None,\n styles: bool = False,\n links: bool = False,\n ) -> None:\n self._console = console\n self.pager = SystemPager() if pager is None else pager\n self.styles = styles\n self.links = links\n\n def __enter__(self) -> \"PagerContext\":\n self._console._enter_buffer()\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb) -> None:\n if exc_type is None:\n with self._console._lock:\n buffer: List[Segment] = self._console._buffer[:]\n del self._console._buffer[:]\n segments: Iterable[Segment] = buffer\n if not self.styles:\n segments = Segment.strip_styles(segments)\n elif not self.links:\n segments = Segment.strip_links(segments)\n content = self._console._render_buffer(segments)\n self.pager.show(content)\n self._console._exit_buffer()\n\n\nclass RenderGroup:\n \"\"\"Takes a group of renderables and returns a renderable object that renders the group.\n\n Args:\n renderables (Iterable[RenderableType]): An iterable of renderable objects.\n fit (bool, optional): Fit dimension of group to contents, or fill available space. 
Defaults to True.\n \"\"\"\n\n def __init__(self, *renderables: \"RenderableType\", fit: bool = True) -> None:\n self._renderables = renderables\n self.fit = fit\n self._render: Optional[List[RenderableType]] = None\n\n @property\n def renderables(self) -> List[\"RenderableType\"]:\n if self._render is None:\n self._render = list(self._renderables)\n return self._render\n\n def __rich_measure__(self, console: \"Console\", max_width: int) -> \"Measurement\":\n if self.fit:\n return measure_renderables(console, self.renderables, max_width)\n else:\n return Measurement(max_width, max_width)\n\n def __rich_console__(\n self, console: \"Console\", options: \"ConsoleOptions\"\n ) -> RenderResult:\n yield from self.renderables\n\n\ndef render_group(fit: bool = True) -> Callable:\n \"\"\"A decorator that turns an iterable of renderables in to a group.\n\n Args:\n fit (bool, optional): Fit dimension of group to contents, or fill available space. Defaults to True.\n \"\"\"\n\n def decorator(method):\n \"\"\"Convert a method that returns an iterable of renderables in to a RenderGroup.\"\"\"\n\n @wraps(method)\n def _replace(*args, **kwargs):\n renderables = method(*args, **kwargs)\n return RenderGroup(*renderables, fit=fit)\n\n return _replace\n\n return decorator\n\n\nclass ConsoleDimensions(NamedTuple):\n \"\"\"Size of the terminal.\"\"\"\n\n width: int\n \"\"\"The width of the console in 'cells'.\"\"\"\n height: int\n \"\"\"The height of the console in lines.\"\"\"\n\n\ndef _is_jupyter() -> bool: # pragma: no cover\n \"\"\"Check if we're running in a Jupyter notebook.\"\"\"\n try:\n get_ipython # type: ignore\n except NameError:\n return False\n shell = get_ipython().__class__.__name__ # type: ignore\n if shell == \"ZMQInteractiveShell\":\n return True # Jupyter notebook or qtconsole\n elif shell == \"TerminalInteractiveShell\":\n return False # Terminal running IPython\n else:\n return False # Other type (?)\n\n\nCOLOR_SYSTEMS = {\n \"standard\": ColorSystem.STANDARD,\n \"256\": ColorSystem.EIGHT_BIT,\n \"truecolor\": ColorSystem.TRUECOLOR,\n \"windows\": ColorSystem.WINDOWS,\n}\n\n\n_COLOR_SYSTEMS_NAMES = {system: name for name, system in COLOR_SYSTEMS.items()}\n\n\n@dataclass\nclass ConsoleThreadLocals(threading.local):\n \"\"\"Thread local values for Console context.\"\"\"\n\n theme_stack: ThemeStack\n buffer: List[Segment] = field(default_factory=list)\n buffer_index: int = 0\n\n\nclass RenderHook(ABC):\n \"\"\"Provides hooks in to the render process.\"\"\"\n\n @abstractmethod\n def process_renderables(\n self, renderables: List[ConsoleRenderable]\n ) -> List[ConsoleRenderable]:\n \"\"\"Called with a list of objects to render.\n\n This method can return a new list of renderables, or modify and return the same list.\n\n Args:\n renderables (List[ConsoleRenderable]): A number of renderable objects.\n\n Returns:\n List[ConsoleRenderable]: A replacement list of renderables.\n \"\"\"\n\n\n_windows_console_features: Optional[\"WindowsConsoleFeatures\"] = None\n\n\ndef get_windows_console_features() -> \"WindowsConsoleFeatures\": # pragma: no cover\n global _windows_console_features\n if _windows_console_features is not None:\n return _windows_console_features\n from ._windows import get_windows_console_features\n\n _windows_console_features = get_windows_console_features()\n return _windows_console_features\n\n\ndef detect_legacy_windows() -> bool:\n \"\"\"Detect legacy Windows.\"\"\"\n return WINDOWS and not get_windows_console_features().vt\n\n\nif detect_legacy_windows(): # pragma: no cover\n 
from colorama import init\n\n init()\n\n\nclass Console:\n \"\"\"A high level console interface.\n\n Args:\n color_system (str, optional): The color system supported by your terminal,\n either ``\"standard\"``, ``\"256\"`` or ``\"truecolor\"``. Leave as ``\"auto\"`` to autodetect.\n force_terminal (Optional[bool], optional): Enable/disable terminal control codes, or None to auto-detect terminal. Defaults to None.\n force_jupyter (Optional[bool], optional): Enable/disable Jupyter rendering, or None to auto-detect Jupyter. Defaults to None.\n soft_wrap (Optional[bool], optional): Set soft wrap default on print method. Defaults to False.\n theme (Theme, optional): An optional style theme object, or ``None`` for default theme.\n stderr (bool, optional): Use stderr rather than stdout if ``file `` is not specified. Defaults to False.\n file (IO, optional): A file object where the console should write to. Defaults to stdout.\n width (int, optional): The width of the terminal. Leave as default to auto-detect width.\n height (int, optional): The height of the terminal. Leave as default to auto-detect height.\n style (StyleType, optional): Style to apply to all output, or None for no style. Defaults to None.\n record (bool, optional): Boolean to enable recording of terminal output,\n required to call :meth:`export_html` and :meth:`export_text`. Defaults to False.\n markup (bool, optional): Boolean to enable :ref:`console_markup`. Defaults to True.\n emoji (bool, optional): Enable emoji code. Defaults to True.\n highlight (bool, optional): Enable automatic highlighting. Defaults to True.\n log_time (bool, optional): Boolean to enable logging of time by :meth:`log` methods. Defaults to True.\n log_path (bool, optional): Boolean to enable the logging of the caller by :meth:`log`. Defaults to True.\n log_time_format (str, optional): Log time format if ``log_time`` is enabled. Defaults to \"[%X] \".\n highlighter (HighlighterType, optional): Default highlighter.\n legacy_windows (bool, optional): Enable legacy Windows mode, or ``None`` to auto detect. 
Defaults to ``None``.\n safe_box (bool, optional): Restrict box options that don't render on legacy Windows.\n get_datetime (Callable[[], datetime], optional): Callable that gets the current time as a datetime.datetime object (used by Console.log),\n or None for datetime.now.\n get_time (Callable[[], time], optional): Callable that gets the current time in seconds, default uses time.monotonic.\n \"\"\"\n\n def __init__(\n self,\n *,\n color_system: Optional[\n Literal[\"auto\", \"standard\", \"256\", \"truecolor\", \"windows\"]\n ] = \"auto\",\n force_terminal: bool = None,\n force_jupyter: bool = None,\n soft_wrap: bool = False,\n theme: Theme = None,\n stderr: bool = False,\n file: IO[str] = None,\n width: int = None,\n height: int = None,\n style: StyleType = None,\n tab_size: int = 8,\n record: bool = False,\n markup: bool = True,\n emoji: bool = True,\n highlight: bool = True,\n log_time: bool = True,\n log_path: bool = True,\n log_time_format: str = \"[%X]\",\n highlighter: Optional[\"HighlighterType\"] = ReprHighlighter(),\n legacy_windows: bool = None,\n safe_box: bool = True,\n get_datetime: Callable[[], datetime] = None,\n get_time: Callable[[], float] = None,\n _environ: Dict[str, str] = None,\n ):\n # Copy of os.environ allows us to replace it for testing\n self._environ = os.environ if _environ is None else _environ\n\n self.is_jupyter = _is_jupyter() if force_jupyter is None else force_jupyter\n if self.is_jupyter:\n width = width or 93\n height = height or 100\n self.soft_wrap = soft_wrap\n self._width = width\n self._height = height\n self.tab_size = tab_size\n self.record = record\n self._markup = markup\n self._emoji = emoji\n self._highlight = highlight\n self.legacy_windows: bool = (\n (detect_legacy_windows() and not self.is_jupyter)\n if legacy_windows is None\n else legacy_windows\n )\n\n self._color_system: Optional[ColorSystem]\n self._force_terminal = force_terminal\n self._file = file\n self.stderr = stderr\n\n if color_system is None:\n self._color_system = None\n elif color_system == \"auto\":\n self._color_system = self._detect_color_system()\n else:\n self._color_system = COLOR_SYSTEMS[color_system]\n\n self._lock = threading.RLock()\n self._log_render = LogRender(\n show_time=log_time,\n show_path=log_path,\n time_format=log_time_format,\n )\n self.highlighter: HighlighterType = highlighter or _null_highlighter\n self.safe_box = safe_box\n self.get_datetime = get_datetime or datetime.now\n self.get_time = get_time or monotonic\n self.style = style\n\n self._record_buffer_lock = threading.RLock()\n self._thread_locals = ConsoleThreadLocals(\n theme_stack=ThemeStack(themes.DEFAULT if theme is None else theme)\n )\n self._record_buffer: List[Segment] = []\n self._render_hooks: List[RenderHook] = []\n\n def __repr__(self) -> str:\n return f\"<console width={self.width} {str(self._color_system)}>\"\n\n @property\n def file(self) -> IO[str]:\n \"\"\"Get the file object to write to.\"\"\"\n file = self._file or (sys.stderr if self.stderr else sys.stdout)\n file = getattr(file, \"rich_proxied_file\", file)\n return file\n\n @file.setter\n def file(self, new_file: IO[str]) -> None:\n \"\"\"Set a new file object.\"\"\"\n self._file = new_file\n\n @property\n def _buffer(self) -> List[Segment]:\n \"\"\"Get a thread local buffer.\"\"\"\n return self._thread_locals.buffer\n\n @property\n def _buffer_index(self) -> int:\n \"\"\"Get a thread local buffer.\"\"\"\n return self._thread_locals.buffer_index\n\n @_buffer_index.setter\n def _buffer_index(self, value: int) -> 
None:\n self._thread_locals.buffer_index = value\n\n @property\n def _theme_stack(self) -> ThemeStack:\n \"\"\"Get the thread local theme stack.\"\"\"\n return self._thread_locals.theme_stack\n\n def _detect_color_system(self) -> Optional[ColorSystem]:\n \"\"\"Detect color system from env vars.\"\"\"\n if self.is_jupyter:\n return ColorSystem.TRUECOLOR\n if not self.is_terminal or \"NO_COLOR\" in self._environ or self.is_dumb_terminal:\n return None\n if WINDOWS: # pragma: no cover\n if self.legacy_windows: # pragma: no cover\n return ColorSystem.WINDOWS\n windows_console_features = get_windows_console_features()\n return (\n ColorSystem.TRUECOLOR\n if windows_console_features.truecolor\n else ColorSystem.EIGHT_BIT\n )\n else:\n color_term = self._environ.get(\"COLORTERM\", \"\").strip().lower()\n if color_term in (\"truecolor\", \"24bit\"):\n return ColorSystem.TRUECOLOR\n term = self._environ.get(\"TERM\", \"\").strip().lower()\n _term_name, _hyphen, colors = term.partition(\"-\")\n color_system = _TERM_COLORS.get(colors, ColorSystem.STANDARD)\n return color_system\n\n def _enter_buffer(self) -> None:\n \"\"\"Enter in to a buffer context, and buffer all output.\"\"\"\n self._buffer_index += 1\n\n def _exit_buffer(self) -> None:\n \"\"\"Leave buffer context, and render content if required.\"\"\"\n self._buffer_index -= 1\n self._check_buffer()\n\n def push_render_hook(self, hook: RenderHook) -> None:\n \"\"\"Add a new render hook to the stack.\n\n Args:\n hook (RenderHook): Render hook instance.\n \"\"\"\n\n self._render_hooks.append(hook)\n\n def pop_render_hook(self) -> None:\n \"\"\"Pop the last renderhook from the stack.\"\"\"\n self._render_hooks.pop()\n\n def __enter__(self) -> \"Console\":\n \"\"\"Own context manager to enter buffer context.\"\"\"\n self._enter_buffer()\n return self\n\n def __exit__(self, exc_type, exc_value, traceback) -> None:\n \"\"\"Exit buffer context.\"\"\"\n self._exit_buffer()\n\n def begin_capture(self) -> None:\n \"\"\"Begin capturing console output. Call :meth:`end_capture` to exit capture mode and return output.\"\"\"\n self._enter_buffer()\n\n def end_capture(self) -> str:\n \"\"\"End capture mode and return captured string.\n\n Returns:\n str: Console output.\n \"\"\"\n render_result = self._render_buffer(self._buffer)\n del self._buffer[:]\n self._exit_buffer()\n return render_result\n\n def push_theme(self, theme: Theme, *, inherit: bool = True) -> None:\n \"\"\"Push a new theme on to the top of the stack, replacing the styles from the previous theme.\n Generally speaking, you should call :meth:`~rich.console.Console.use_theme` to get a context manager, rather\n than calling this method directly.\n\n Args:\n theme (Theme): A theme instance.\n inherit (bool, optional): Inherit existing styles. Defaults to True.\n \"\"\"\n self._theme_stack.push_theme(theme, inherit=inherit)\n\n def pop_theme(self) -> None:\n \"\"\"Remove theme from top of stack, restoring previous theme.\"\"\"\n self._theme_stack.pop_theme()\n\n def use_theme(self, theme: Theme, *, inherit: bool = True) -> ThemeContext:\n \"\"\"Use a different theme for the duration of the context manager.\n\n Args:\n theme (Theme): Theme instance to user.\n inherit (bool, optional): Inherit existing console styles. 
Defaults to True.\n\n Returns:\n ThemeContext: [description]\n \"\"\"\n return ThemeContext(self, theme, inherit)\n\n @property\n def color_system(self) -> Optional[str]:\n \"\"\"Get color system string.\n\n Returns:\n Optional[str]: \"standard\", \"256\" or \"truecolor\".\n \"\"\"\n\n if self._color_system is not None:\n return _COLOR_SYSTEMS_NAMES[self._color_system]\n else:\n return None\n\n @property\n def encoding(self) -> str:\n \"\"\"Get the encoding of the console file, e.g. ``\"utf-8\"``.\n\n Returns:\n str: A standard encoding string.\n \"\"\"\n return (getattr(self.file, \"encoding\", \"utf-8\") or \"utf-8\").lower()\n\n @property\n def is_terminal(self) -> bool:\n \"\"\"Check if the console is writing to a terminal.\n\n Returns:\n bool: True if the console writing to a device capable of\n understanding terminal codes, otherwise False.\n \"\"\"\n if self._force_terminal is not None:\n return self._force_terminal\n isatty = getattr(self.file, \"isatty\", None)\n return False if isatty is None else isatty()\n\n @property\n def is_dumb_terminal(self) -> bool:\n \"\"\"Detect dumb terminal.\n\n Returns:\n bool: True if writing to a dumb terminal, otherwise False.\n\n \"\"\"\n _term = self._environ.get(\"TERM\", \"\")\n is_dumb = _term.lower() in (\"dumb\", \"unknown\")\n return self.is_terminal and is_dumb\n\n @property\n def options(self) -> ConsoleOptions:\n \"\"\"Get default console options.\"\"\"\n return ConsoleOptions(\n legacy_windows=self.legacy_windows,\n min_width=1,\n max_width=self.width,\n encoding=self.encoding,\n is_terminal=self.is_terminal,\n )\n\n @property\n def size(self) -> ConsoleDimensions:\n \"\"\"Get the size of the console.\n\n Returns:\n ConsoleDimensions: A named tuple containing the dimensions.\n \"\"\"\n\n if self._width is not None and self._height is not None:\n return ConsoleDimensions(self._width, self._height)\n\n if self.is_dumb_terminal:\n return ConsoleDimensions(80, 25)\n\n width: Optional[int] = None\n height: Optional[int] = None\n if WINDOWS: # pragma: no cover\n width, height = shutil.get_terminal_size()\n else:\n try:\n width, height = os.get_terminal_size(sys.stdin.fileno())\n except (AttributeError, ValueError, OSError):\n pass\n\n # get_terminal_size can report 0, 0 if run from pseudo-terminal\n width = width or 80\n height = height or 25\n return ConsoleDimensions(\n (width - self.legacy_windows) if self._width is None else self._width,\n height if self._height is None else self._height,\n )\n\n @property\n def width(self) -> int:\n \"\"\"Get the width of the console.\n\n Returns:\n int: The width (in characters) of the console.\n \"\"\"\n width, _ = self.size\n return width\n\n def bell(self) -> None:\n \"\"\"Play a 'bell' sound (if supported by the terminal).\"\"\"\n self.control(\"\\x07\")\n\n def capture(self) -> Capture:\n \"\"\"A context manager to *capture* the result of print() or log() in a string,\n rather than writing it to the console.\n\n Example:\n >>> from rich.console import Console\n >>> console = Console()\n >>> with console.capture() as capture:\n ... console.print(\"[bold magenta]Hello World[/]\")\n >>> print(capture.get())\n\n Returns:\n Capture: Context manager with disables writing to the terminal.\n \"\"\"\n capture = Capture(self)\n return capture\n\n def pager(\n self, pager: Pager = None, styles: bool = False, links: bool = False\n ) -> PagerContext:\n \"\"\"A context manager to display anything printed within a \"pager\". 
The pager application\n is defined by the system and will typically support at least pressing a key to scroll.\n\n Args:\n pager (Pager, optional): A pager object, or None to use :class:~rich.pager.SystemPager`. Defaults to None.\n styles (bool, optional): Show styles in pager. Defaults to False.\n links (bool, optional): Show links in pager. Defaults to False.\n\n Example:\n >>> from rich.console import Console\n >>> from rich.__main__ import make_test_card\n >>> console = Console()\n >>> with console.pager():\n console.print(make_test_card())\n\n Returns:\n PagerContext: A context manager.\n \"\"\"\n return PagerContext(self, pager=pager, styles=styles, links=links)\n\n def line(self, count: int = 1) -> None:\n \"\"\"Write new line(s).\n\n Args:\n count (int, optional): Number of new lines. Defaults to 1.\n \"\"\"\n\n assert count >= 0, \"count must be >= 0\"\n if count:\n self._buffer.append(Segment(\"\\n\" * count))\n self._check_buffer()\n\n def clear(self, home: bool = True) -> None:\n \"\"\"Clear the screen.\n\n Args:\n home (bool, optional): Also move the cursor to 'home' position. Defaults to True.\n \"\"\"\n self.control(\"\\033[2J\\033[H\" if home else \"\\033[2J\")\n\n def status(\n self,\n status: RenderableType,\n spinner: str = \"dots\",\n spinner_style: str = \"status.spinner\",\n speed: float = 1.0,\n refresh_per_second: float = 12.5,\n ) -> \"Status\":\n \"\"\"Display a status and spinner.\n\n Args:\n status (RenderableType): A status renderable (str or Text typically).\n console (Console, optional): Console instance to use, or None for global console. Defaults to None.\n spinner (str, optional): Name of spinner animation (see python -m rich.spinner). Defaults to \"dots\".\n spinner_style (StyleType, optional): Style of spinner. Defaults to \"status.spinner\".\n speed (float, optional): Speed factor for spinner animation. Defaults to 1.0.\n refresh_per_second (float, optional): Number of refreshes per second. Defaults to 12.5.\n\n Returns:\n Status: A Status object that may be used as a context manager.\n \"\"\"\n from .status import Status\n\n status_renderable = Status(\n status,\n console=self,\n spinner=spinner,\n spinner_style=spinner_style,\n speed=speed,\n refresh_per_second=refresh_per_second,\n )\n return status_renderable\n\n def show_cursor(self, show: bool = True) -> None:\n \"\"\"Show or hide the cursor.\n\n Args:\n show (bool, optional): Set visibility of the cursor.\n \"\"\"\n if self.is_terminal and not self.legacy_windows:\n self.control(\"\\033[?25h\" if show else \"\\033[?25l\")\n\n def render(\n self, renderable: RenderableType, options: ConsoleOptions = None\n ) -> Iterable[Segment]:\n \"\"\"Render an object in to an iterable of `Segment` instances.\n\n This method contains the logic for rendering objects with the console protocol.\n You are unlikely to need to use it directly, unless you are extending the library.\n\n Args:\n renderable (RenderableType): An object supporting the console protocol, or\n an object that may be converted to a string.\n options (ConsoleOptions, optional): An options object, or None to use self.options. Defaults to None.\n\n Returns:\n Iterable[Segment]: An iterable of segments that may be rendered.\n \"\"\"\n\n _options = options or self.options\n if _options.max_width < 1:\n # No space to render anything. 
This prevents potential recursion errors.\n return\n render_iterable: RenderResult\n if isinstance(renderable, RichCast):\n renderable = renderable.__rich__()\n if isinstance(renderable, ConsoleRenderable):\n render_iterable = renderable.__rich_console__(self, _options)\n elif isinstance(renderable, str):\n yield from self.render(\n self.render_str(renderable, highlight=_options.highlight), _options\n )\n return\n else:\n raise errors.NotRenderableError(\n f\"Unable to render {renderable!r}; \"\n \"A str, Segment or object with __rich_console__ method is required\"\n )\n\n try:\n iter_render = iter(render_iterable)\n except TypeError:\n raise errors.NotRenderableError(\n f\"object {render_iterable!r} is not renderable\"\n )\n for render_output in iter_render:\n if isinstance(render_output, Segment):\n yield render_output\n else:\n yield from self.render(render_output, _options)\n\n def render_lines(\n self,\n renderable: RenderableType,\n options: Optional[ConsoleOptions] = None,\n *,\n style: Optional[Style] = None,\n pad: bool = True,\n ) -> List[List[Segment]]:\n \"\"\"Render objects in to a list of lines.\n\n The output of render_lines is useful when further formatting of rendered console text\n is required, such as the Panel class which draws a border around any renderable object.\n\n Args:\n renderable (RenderableType): Any object renderable in the console.\n options (Optional[ConsoleOptions], optional): Console options, or None to use self.options. Default to ``None``.\n style (Style, optional): Optional style to apply to renderables. Defaults to ``None``.\n pad (bool, optional): Pad lines shorter than render width. Defaults to ``True``.\n\n Returns:\n List[List[Segment]]: A list of lines, where a line is a list of Segment objects.\n \"\"\"\n render_options = options or self.options\n _rendered = self.render(renderable, render_options)\n if style is not None:\n _rendered = Segment.apply_style(_rendered, style)\n lines = list(\n Segment.split_and_crop_lines(\n _rendered, render_options.max_width, include_new_lines=False, pad=pad\n )\n )\n return lines\n\n def render_str(\n self,\n text: str,\n *,\n style: Union[str, Style] = \"\",\n justify: JustifyMethod = None,\n overflow: OverflowMethod = None,\n emoji: bool = None,\n markup: bool = None,\n highlight: bool = None,\n highlighter: HighlighterType = None,\n ) -> \"Text\":\n \"\"\"Convert a string to a Text instance. This is is called automatically if\n you print or log a string.\n\n Args:\n text (str): Text to render.\n style (Union[str, Style], optional): Style to apply to rendered text.\n justify (str, optional): Justify method: \"default\", \"left\", \"center\", \"full\", or \"right\". Defaults to ``None``.\n overflow (str, optional): Overflow method: \"crop\", \"fold\", or \"ellipsis\". 
Defaults to ``None``.\n emoji (Optional[bool], optional): Enable emoji, or ``None`` to use Console default.\n markup (Optional[bool], optional): Enable markup, or ``None`` to use Console default.\n highlight (Optional[bool], optional): Enable highlighting, or ``None`` to use Console default.\n highlighter (HighlighterType, optional): Optional highlighter to apply.\n Returns:\n ConsoleRenderable: Renderable object.\n\n \"\"\"\n emoji_enabled = emoji or (emoji is None and self._emoji)\n markup_enabled = markup or (markup is None and self._markup)\n highlight_enabled = highlight or (highlight is None and self._highlight)\n\n if markup_enabled:\n rich_text = render_markup(text, style=style, emoji=emoji_enabled)\n rich_text.justify = justify\n rich_text.overflow = overflow\n else:\n rich_text = Text(\n _emoji_replace(text) if emoji_enabled else text,\n justify=justify,\n overflow=overflow,\n style=style,\n )\n\n _highlighter = (highlighter or self.highlighter) if highlight_enabled else None\n if _highlighter is not None:\n highlight_text = _highlighter(str(rich_text))\n highlight_text.copy_styles(rich_text)\n return highlight_text\n\n return rich_text\n\n def get_style(\n self, name: Union[str, Style], *, default: Union[Style, str] = None\n ) -> Style:\n \"\"\"Get a Style instance by it's theme name or parse a definition.\n\n Args:\n name (str): The name of a style or a style definition.\n\n Returns:\n Style: A Style object.\n\n Raises:\n MissingStyle: If no style could be parsed from name.\n\n \"\"\"\n if isinstance(name, Style):\n return name\n\n try:\n style = self._theme_stack.get(name)\n if style is None:\n style = Style.parse(name)\n return style.copy() if style.link else style\n except errors.StyleSyntaxError as error:\n if default is not None:\n return self.get_style(default)\n raise errors.MissingStyle(f\"Failed to get style {name!r}; {error}\")\n\n def _collect_renderables(\n self,\n objects: Iterable[Any],\n sep: str,\n end: str,\n *,\n justify: JustifyMethod = None,\n emoji: bool = None,\n markup: bool = None,\n highlight: bool = None,\n ) -> List[ConsoleRenderable]:\n \"\"\"Combined a number of renderables and text in to one renderable.\n\n Args:\n objects (Iterable[Any]): Anything that Rich can render.\n sep (str, optional): String to write between print data. Defaults to \" \".\n end (str, optional): String to write at end of print data. Defaults to \"\\\\n\".\n justify (str, optional): One of \"left\", \"right\", \"center\", or \"full\". 
Defaults to ``None``.\n emoji (Optional[bool], optional): Enable emoji code, or ``None`` to use console default.\n markup (Optional[bool], optional): Enable markup, or ``None`` to use console default.\n highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use console default.\n\n Returns:\n List[ConsoleRenderable]: A list oxf things to render.\n \"\"\"\n renderables: List[ConsoleRenderable] = []\n _append = renderables.append\n text: List[Text] = []\n append_text = text.append\n\n append = _append\n if justify in (\"left\", \"center\", \"right\"):\n\n def align_append(renderable: RenderableType) -> None:\n _append(Align(renderable, cast(AlignMethod, justify)))\n\n append = align_append\n\n _highlighter: HighlighterType = _null_highlighter\n if highlight or (highlight is None and self._highlight):\n _highlighter = self.highlighter\n\n def check_text() -> None:\n if text:\n sep_text = Text(sep, justify=justify, end=end)\n append(sep_text.join(text))\n del text[:]\n\n for renderable in objects:\n rich_cast = getattr(renderable, \"__rich__\", None)\n if rich_cast:\n renderable = rich_cast()\n if isinstance(renderable, str):\n append_text(\n self.render_str(\n renderable, emoji=emoji, markup=markup, highlighter=_highlighter\n )\n )\n elif isinstance(renderable, ConsoleRenderable):\n check_text()\n append(renderable)\n elif isinstance(renderable, (abc.Mapping, abc.Sequence, abc.Set)):\n check_text()\n append(Pretty(renderable, highlighter=_highlighter))\n else:\n append_text(_highlighter(str(renderable)))\n\n check_text()\n\n if self.style is not None:\n style = self.get_style(self.style)\n renderables = [Styled(renderable, style) for renderable in renderables]\n\n return renderables\n\n def rule(\n self,\n title: TextType = \"\",\n *,\n characters: str = \"─\",\n style: Union[str, Style] = \"rule.line\",\n align: AlignMethod = \"center\",\n ) -> None:\n \"\"\"Draw a line with optional centered title.\n\n Args:\n title (str, optional): Text to render over the rule. Defaults to \"\".\n characters (str, optional): Character(s) to form the line. Defaults to \"─\".\n style (str, optional): Style of line. Defaults to \"rule.line\".\n align (str, optional): How to align the title, one of \"left\", \"center\", or \"right\". Defaults to \"center\".\n \"\"\"\n from .rule import Rule\n\n rule = Rule(title=title, characters=characters, style=style, align=align)\n self.print(rule)\n\n def control(self, control_codes: Union[\"Control\", str]) -> None:\n \"\"\"Insert non-printing control codes.\n\n Args:\n control_codes (str): Control codes, such as those that may move the cursor.\n \"\"\"\n if not self.is_dumb_terminal:\n self._buffer.append(Segment.control(str(control_codes)))\n self._check_buffer()\n\n def out(\n self,\n *objects: Any,\n sep=\" \",\n end=\"\\n\",\n style: Union[str, Style] = None,\n highlight: bool = None,\n ) -> None:\n \"\"\"Output to the terminal. This is a low-level way of writing to the terminal which unlike\n :meth:`~rich.console.Console.print` won't pretty print, wrap text, or apply markup, but will\n optionally apply highlighting and a basic style.\n\n Args:\n sep (str, optional): String to write between print data. Defaults to \" \".\n end (str, optional): String to write at end of print data. Defaults to \"\\\\n\".\n style (Union[str, Style], optional): A style to apply to output. Defaults to None.\n highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use\n console default. 
Defaults to ``None``.\n \"\"\"\n raw_output: str = sep.join(str(_object) for _object in objects)\n self.print(\n raw_output,\n style=style,\n highlight=highlight,\n emoji=False,\n markup=False,\n no_wrap=True,\n overflow=\"ignore\",\n crop=False,\n end=end,\n )\n\n def print(\n self,\n *objects: Any,\n sep=\" \",\n end=\"\\n\",\n style: Union[str, Style] = None,\n justify: JustifyMethod = None,\n overflow: OverflowMethod = None,\n no_wrap: bool = None,\n emoji: bool = None,\n markup: bool = None,\n highlight: bool = None,\n width: int = None,\n crop: bool = True,\n soft_wrap: bool = None,\n ) -> None:\n \"\"\"Print to the console.\n\n Args:\n objects (positional args): Objects to log to the terminal.\n sep (str, optional): String to write between print data. Defaults to \" \".\n end (str, optional): String to write at end of print data. Defaults to \"\\\\n\".\n style (Union[str, Style], optional): A style to apply to output. Defaults to None.\n justify (str, optional): Justify method: \"default\", \"left\", \"right\", \"center\", or \"full\". Defaults to ``None``.\n overflow (str, optional): Overflow method: \"ignore\", \"crop\", \"fold\", or \"ellipsis\". Defaults to None.\n no_wrap (Optional[bool], optional): Disable word wrapping. Defaults to None.\n emoji (Optional[bool], optional): Enable emoji code, or ``None`` to use console default. Defaults to ``None``.\n markup (Optional[bool], optional): Enable markup, or ``None`` to use console default. Defaults to ``None``.\n highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use console default. Defaults to ``None``.\n width (Optional[int], optional): Width of output, or ``None`` to auto-detect. Defaults to ``None``.\n crop (Optional[bool], optional): Crop output to width of terminal. Defaults to True.\n soft_wrap (bool, optional): Enable soft wrap mode which disables word wrapping and cropping of text or None for\n Console default. Defaults to ``None``.\n \"\"\"\n if not objects:\n self.line()\n return\n\n if soft_wrap is None:\n soft_wrap = self.soft_wrap\n if soft_wrap:\n if no_wrap is None:\n no_wrap = True\n if overflow is None:\n overflow = \"ignore\"\n crop = False\n\n with self:\n renderables = self._collect_renderables(\n objects,\n sep,\n end,\n justify=justify,\n emoji=emoji,\n markup=markup,\n highlight=highlight,\n )\n for hook in self._render_hooks:\n renderables = hook.process_renderables(renderables)\n render_options = self.options.update(\n justify=justify,\n overflow=overflow,\n width=min(width, self.width) if width else None,\n no_wrap=no_wrap,\n )\n new_segments: List[Segment] = []\n extend = new_segments.extend\n render = self.render\n if style is None:\n for renderable in renderables:\n extend(render(renderable, render_options))\n else:\n for renderable in renderables:\n extend(\n Segment.apply_style(\n render(renderable, render_options), self.get_style(style)\n )\n )\n if crop:\n buffer_extend = self._buffer.extend\n for line in Segment.split_and_crop_lines(\n new_segments, self.width, pad=False\n ):\n buffer_extend(line)\n else:\n self._buffer.extend(new_segments)\n\n def print_exception(\n self,\n *,\n width: Optional[int] = 100,\n extra_lines: int = 3,\n theme: Optional[str] = None,\n word_wrap: bool = False,\n show_locals: bool = False,\n ) -> None:\n \"\"\"Prints a rich render of the last exception and traceback.\n\n Args:\n width (Optional[int], optional): Number of characters used to render code. Defaults to 88.\n extra_lines (int, optional): Additional lines of code to render. 
Defaults to 3.\n theme (str, optional): Override pygments theme used in traceback\n word_wrap (bool, optional): Enable word wrapping of long lines. Defaults to False.\n show_locals (bool, optional): Enable display of local variables. Defaults to False.\n \"\"\"\n from .traceback import Traceback\n\n traceback = Traceback(\n width=width,\n extra_lines=extra_lines,\n theme=theme,\n word_wrap=word_wrap,\n show_locals=show_locals,\n )\n self.print(traceback)\n\n def log(\n self,\n *objects: Any,\n sep=\" \",\n end=\"\\n\",\n style: Union[str, Style] = None,\n justify: JustifyMethod = None,\n emoji: bool = None,\n markup: bool = None,\n highlight: bool = None,\n log_locals: bool = False,\n _stack_offset=1,\n ) -> None:\n \"\"\"Log rich content to the terminal.\n\n Args:\n objects (positional args): Objects to log to the terminal.\n sep (str, optional): String to write between print data. Defaults to \" \".\n end (str, optional): String to write at end of print data. Defaults to \"\\\\n\".\n style (Union[str, Style], optional): A style to apply to output. Defaults to None.\n justify (str, optional): One of \"left\", \"right\", \"center\", or \"full\". Defaults to ``None``.\n overflow (str, optional): Overflow method: \"crop\", \"fold\", or \"ellipsis\". Defaults to None.\n emoji (Optional[bool], optional): Enable emoji code, or ``None`` to use console default. Defaults to None.\n markup (Optional[bool], optional): Enable markup, or ``None`` to use console default. Defaults to None.\n highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use console default. Defaults to None.\n log_locals (bool, optional): Boolean to enable logging of locals where ``log()``\n was called. Defaults to False.\n _stack_offset (int, optional): Offset of caller from end of call stack. 
Defaults to 1.\n \"\"\"\n if not objects:\n self.line()\n return\n with self:\n renderables = self._collect_renderables(\n objects,\n sep,\n end,\n justify=justify,\n emoji=emoji,\n markup=markup,\n highlight=highlight,\n )\n if style is not None:\n renderables = [Styled(renderable, style) for renderable in renderables]\n\n caller = inspect.stack()[_stack_offset]\n link_path = (\n None\n if caller.filename.startswith(\"<\")\n else os.path.abspath(caller.filename)\n )\n path = caller.filename.rpartition(os.sep)[-1]\n line_no = caller.lineno\n if log_locals:\n locals_map = {\n key: value\n for key, value in caller.frame.f_locals.items()\n if not key.startswith(\"__\")\n }\n renderables.append(render_scope(locals_map, title=\"[i]locals\"))\n\n renderables = [\n self._log_render(\n self,\n renderables,\n log_time=self.get_datetime(),\n path=path,\n line_no=line_no,\n link_path=link_path,\n )\n ]\n for hook in self._render_hooks:\n renderables = hook.process_renderables(renderables)\n new_segments: List[Segment] = []\n extend = new_segments.extend\n render = self.render\n render_options = self.options\n for renderable in renderables:\n extend(render(renderable, render_options))\n buffer_extend = self._buffer.extend\n for line in Segment.split_and_crop_lines(\n new_segments, self.width, pad=False\n ):\n buffer_extend(line)\n\n def _check_buffer(self) -> None:\n \"\"\"Check if the buffer may be rendered.\"\"\"\n with self._lock:\n if self._buffer_index == 0:\n if self.is_jupyter: # pragma: no cover\n from .jupyter import display\n\n display(self._buffer)\n del self._buffer[:]\n else:\n text = self._render_buffer(self._buffer[:])\n del self._buffer[:]\n if text:\n try:\n if WINDOWS: # pragma: no cover\n # https://bugs.python.org/issue37871\n write = self.file.write\n for line in text.splitlines(True):\n write(line)\n else:\n self.file.write(text)\n self.file.flush()\n except UnicodeEncodeError as error:\n error.reason = f\"{error.reason}\\n*** You may need to add PYTHONIOENCODING=utf-8 to your environment ***\"\n raise\n\n def _render_buffer(self, buffer: Iterable[Segment]) -> str:\n \"\"\"Render buffered output, and clear buffer.\"\"\"\n output: List[str] = []\n append = output.append\n color_system = self._color_system\n legacy_windows = self.legacy_windows\n if self.record:\n with self._record_buffer_lock:\n self._record_buffer.extend(buffer)\n not_terminal = not self.is_terminal\n for text, style, is_control in buffer:\n if style:\n append(\n style.render(\n text,\n color_system=color_system,\n legacy_windows=legacy_windows,\n )\n )\n elif not (not_terminal and is_control):\n append(text)\n\n rendered = \"\".join(output)\n return rendered\n\n def input(\n self,\n prompt: TextType = \"\",\n *,\n markup: bool = True,\n emoji: bool = True,\n password: bool = False,\n stream: TextIO = None,\n ) -> str:\n \"\"\"Displays a prompt and waits for input from the user. The prompt may contain color / style.\n\n Args:\n prompt (Union[str, Text]): Text to render in the prompt.\n markup (bool, optional): Enable console markup (requires a str prompt). Defaults to True.\n emoji (bool, optional): Enable emoji (requires a str prompt). Defaults to True.\n password: (bool, optional): Hide typed text. Defaults to False.\n stream: (TextIO, optional): Optional file to read input from (rather than stdin). 
Defaults to None.\n\n Returns:\n str: Text read from stdin.\n \"\"\"\n prompt_str = \"\"\n if prompt:\n with self.capture() as capture:\n self.print(prompt, markup=markup, emoji=emoji, end=\"\")\n prompt_str = capture.get()\n if self.legacy_windows:\n # Legacy windows doesn't like ANSI codes in getpass or input (colorama bug)?\n self.file.write(prompt_str)\n prompt_str = \"\"\n if password:\n result = getpass(prompt_str, stream=stream)\n else:\n if stream:\n self.file.write(prompt_str)\n result = stream.readline()\n else:\n result = input(prompt_str)\n return result\n\n def export_text(self, *, clear: bool = True, styles: bool = False) -> str:\n \"\"\"Generate text from console contents (requires record=True argument in constructor).\n\n Args:\n clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.\n styles (bool, optional): If ``True``, ansi escape codes will be included. ``False`` for plain text.\n Defaults to ``False``.\n\n Returns:\n str: String containing console contents.\n\n \"\"\"\n assert (\n self.record\n ), \"To export console contents set record=True in the constructor or instance\"\n\n with self._record_buffer_lock:\n if styles:\n text = \"\".join(\n (style.render(text) if style else text)\n for text, style, _ in self._record_buffer\n )\n else:\n text = \"\".join(\n segment.text\n for segment in self._record_buffer\n if not segment.is_control\n )\n if clear:\n del self._record_buffer[:]\n return text\n\n def save_text(self, path: str, *, clear: bool = True, styles: bool = False) -> None:\n \"\"\"Generate text from console and save to a given location (requires record=True argument in constructor).\n\n Args:\n path (str): Path to write text files.\n clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.\n styles (bool, optional): If ``True``, ansi style codes will be included. ``False`` for plain text.\n Defaults to ``False``.\n\n \"\"\"\n text = self.export_text(clear=clear, styles=styles)\n with open(path, \"wt\", encoding=\"utf-8\") as write_file:\n write_file.write(text)\n\n def export_html(\n self,\n *,\n theme: TerminalTheme = None,\n clear: bool = True,\n code_format: str = None,\n inline_styles: bool = False,\n ) -> str:\n \"\"\"Generate HTML from console contents (requires record=True argument in constructor).\n\n Args:\n theme (TerminalTheme, optional): TerminalTheme object containing console colors.\n clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.\n code_format (str, optional): Format string to render HTML, should contain {foreground}\n {background} and {code}.\n inline_styles (bool, optional): If ``True`` styles will be inlined in to spans, which makes files\n larger but easier to cut and paste markup. 
If ``False``, styles will be embedded in a style tag.\n Defaults to False.\n\n Returns:\n str: String containing console contents as HTML.\n \"\"\"\n assert (\n self.record\n ), \"To export console contents set record=True in the constructor or instance\"\n fragments: List[str] = []\n append = fragments.append\n _theme = theme or DEFAULT_TERMINAL_THEME\n stylesheet = \"\"\n\n def escape(text: str) -> str:\n \"\"\"Escape html.\"\"\"\n return text.replace(\"&\", \"&\").replace(\"<\", \"<\").replace(\">\", \">\")\n\n render_code_format = CONSOLE_HTML_FORMAT if code_format is None else code_format\n\n with self._record_buffer_lock:\n if inline_styles:\n for text, style, _ in Segment.filter_control(\n Segment.simplify(self._record_buffer)\n ):\n text = escape(text)\n if style:\n rule = style.get_html_style(_theme)\n text = f'<span style=\"{rule}\">{text}</span>' if rule else text\n if style.link:\n text = f'<a href=\"{style.link}\">{text}</a>'\n append(text)\n else:\n styles: Dict[str, int] = {}\n for text, style, _ in Segment.filter_control(\n Segment.simplify(self._record_buffer)\n ):\n text = escape(text)\n if style:\n rule = style.get_html_style(_theme)\n if rule:\n style_number = styles.setdefault(rule, len(styles) + 1)\n text = f'<span class=\"r{style_number}\">{text}</span>'\n if style.link:\n text = f'<a href=\"{style.link}\">{text}</a>'\n append(text)\n stylesheet_rules: List[str] = []\n stylesheet_append = stylesheet_rules.append\n for style_rule, style_number in styles.items():\n if style_rule:\n stylesheet_append(f\".r{style_number} {{{style_rule}}}\")\n stylesheet = \"\\n\".join(stylesheet_rules)\n\n rendered_code = render_code_format.format(\n code=\"\".join(fragments),\n stylesheet=stylesheet,\n foreground=_theme.foreground_color.hex,\n background=_theme.background_color.hex,\n )\n if clear:\n del self._record_buffer[:]\n return rendered_code\n\n def save_html(\n self,\n path: str,\n *,\n theme: TerminalTheme = None,\n clear: bool = True,\n code_format=CONSOLE_HTML_FORMAT,\n inline_styles: bool = False,\n ) -> None:\n \"\"\"Generate HTML from console contents and write to a file (requires record=True argument in constructor).\n\n Args:\n path (str): Path to write html file.\n theme (TerminalTheme, optional): TerminalTheme object containing console colors.\n clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.\n code_format (str, optional): Format string to render HTML, should contain {foreground}\n {background} and {code}.\n inline_styles (bool, optional): If ``True`` styles will be inlined in to spans, which makes files\n larger but easier to cut and paste markup. 
If ``False``, styles will be embedded in a style tag.\n Defaults to False.\n\n \"\"\"\n html = self.export_html(\n theme=theme,\n clear=clear,\n code_format=code_format,\n inline_styles=inline_styles,\n )\n with open(path, \"wt\", encoding=\"utf-8\") as write_file:\n write_file.write(html)\n\n\nif __name__ == \"__main__\": # pragma: no cover\n console = Console()\n\n console.log(\n \"JSONRPC [i]request[/i]\",\n 5,\n 1.3,\n True,\n False,\n None,\n {\n \"jsonrpc\": \"2.0\",\n \"method\": \"subtract\",\n \"params\": {\"minuend\": 42, \"subtrahend\": 23},\n \"id\": 3,\n },\n )\n\n console.log(\"Hello, World!\", \"{'a': 1}\", repr(console))\n\n console.print(\n {\n \"name\": None,\n \"empty\": [],\n \"quiz\": {\n \"sport\": {\n \"answered\": True,\n \"q1\": {\n \"question\": \"Which one is correct team name in NBA?\",\n \"options\": [\n \"New York Bulls\",\n \"Los Angeles Kings\",\n \"Golden State Warriors\",\n \"Huston Rocket\",\n ],\n \"answer\": \"Huston Rocket\",\n },\n },\n \"maths\": {\n \"answered\": False,\n \"q1\": {\n \"question\": \"5 + 7 = ?\",\n \"options\": [10, 11, 12, 13],\n \"answer\": 12,\n },\n \"q2\": {\n \"question\": \"12 - 8 = ?\",\n \"options\": [1, 2, 3, 4],\n \"answer\": 4,\n },\n },\n },\n }\n )\n console.log(\"foo\")\n", "rich/segment.py": "from typing import NamedTuple, Optional\n\nfrom .cells import cell_len, set_cell_size\nfrom .style import Style\n\nfrom itertools import filterfalse, zip_longest\nfrom operator import attrgetter\nfrom typing import Iterable, List, Tuple\n\n\nclass Segment(NamedTuple):\n \"\"\"A piece of text with associated style.\n\n Args:\n text (str): A piece of text.\n style (:class:`~rich.style.Style`, optional): An optional style to apply to the text.\n is_control (bool, optional): Boolean that marks segment as containing non-printable control codes.\n \"\"\"\n\n text: str = \"\"\n \"\"\"Raw text.\"\"\"\n style: Optional[Style] = None\n \"\"\"An optional style.\"\"\"\n is_control: bool = False\n \"\"\"True if the segment contains control codes, otherwise False.\"\"\"\n\n def __repr__(self) -> str:\n \"\"\"Simplified repr.\"\"\"\n if self.is_control:\n return f\"Segment.control({self.text!r}, {self.style!r})\"\n else:\n return f\"Segment({self.text!r}, {self.style!r})\"\n\n def __bool__(self) -> bool:\n \"\"\"Check if the segment contains text.\"\"\"\n return bool(self.text)\n\n @property\n def cell_length(self) -> int:\n \"\"\"Get cell length of segment.\"\"\"\n return 0 if self.is_control else cell_len(self.text)\n\n @classmethod\n def control(cls, text: str, style: Optional[Style] = None) -> \"Segment\":\n \"\"\"Create a Segment with control codes.\n\n Args:\n text (str): Text containing non-printable control codes.\n style (Optional[style]): Optional style.\n\n Returns:\n Segment: A Segment instance with ``is_control=True``.\n \"\"\"\n return cls(text, style, is_control=True)\n\n @classmethod\n def make_control(cls, segments: Iterable[\"Segment\"]) -> Iterable[\"Segment\"]:\n \"\"\"Convert all segments in to control segments.\n\n Returns:\n Iterable[Segments]: Segments with is_control=True\n \"\"\"\n return [cls(text, style, True) for text, style, _ in segments]\n\n @classmethod\n def line(cls, is_control: bool = False) -> \"Segment\":\n \"\"\"Make a new line segment.\"\"\"\n return cls(\"\\n\", is_control=is_control)\n\n @classmethod\n def apply_style(\n cls, segments: Iterable[\"Segment\"], style: Style = None\n ) -> Iterable[\"Segment\"]:\n \"\"\"Apply a style to an iterable of segments.\n\n Args:\n segments (Iterable[Segment]): 
Segments to process.\n style (Style, optional): A style to apply. Defaults to None.\n\n Returns:\n Iterable[Segments]: A new iterable of segments (possibly the same iterable).\n \"\"\"\n if style is None:\n return segments\n apply = style.__add__\n return (\n cls(text, None if is_control else apply(style), is_control)\n for text, style, is_control in segments\n )\n\n @classmethod\n def filter_control(\n cls, segments: Iterable[\"Segment\"], is_control=False\n ) -> Iterable[\"Segment\"]:\n \"\"\"Filter segments by ``is_control`` attribute.\n\n Args:\n segments (Iterable[Segment]): An iterable of Segment instances.\n is_control (bool, optional): is_control flag to match in search.\n\n Returns:\n Iterable[Segment]: And iterable of Segment instances.\n\n \"\"\"\n if is_control:\n return filter(attrgetter(\"is_control\"), segments)\n else:\n return filterfalse(attrgetter(\"is_control\"), segments)\n\n @classmethod\n def split_lines(cls, segments: Iterable[\"Segment\"]) -> Iterable[List[\"Segment\"]]:\n \"\"\"Split a sequence of segments in to a list of lines.\n\n Args:\n segments (Iterable[Segment]): Segments potentially containing line feeds.\n\n Yields:\n Iterable[List[Segment]]: Iterable of segment lists, one per line.\n \"\"\"\n line: List[Segment] = []\n append = line.append\n\n for segment in segments:\n if \"\\n\" in segment.text and not segment.is_control:\n text, style, _ = segment\n while text:\n _text, new_line, text = text.partition(\"\\n\")\n if _text:\n append(cls(_text, style))\n if new_line:\n yield line\n line = []\n append = line.append\n else:\n append(segment)\n if line:\n yield line\n\n @classmethod\n def split_and_crop_lines(\n cls,\n segments: Iterable[\"Segment\"],\n length: int,\n style: Style = None,\n pad: bool = True,\n include_new_lines: bool = True,\n ) -> Iterable[List[\"Segment\"]]:\n \"\"\"Split segments in to lines, and crop lines greater than a given length.\n\n Args:\n segments (Iterable[Segment]): An iterable of segments, probably\n generated from console.render.\n length (int): Desired line length.\n style (Style, optional): Style to use for any padding.\n pad (bool): Enable padding of lines that are less than `length`.\n\n Returns:\n Iterable[List[Segment]]: An iterable of lines of segments.\n \"\"\"\n line: List[Segment] = []\n append = line.append\n\n adjust_line_length = cls.adjust_line_length\n new_line_segment = cls(\"\\n\")\n\n for segment in segments:\n if \"\\n\" in segment.text and not segment.is_control:\n text, style, _ = segment\n while text:\n _text, new_line, text = text.partition(\"\\n\")\n if _text:\n append(cls(_text, style))\n if new_line:\n cropped_line = adjust_line_length(\n line, length, style=style, pad=pad\n )\n if include_new_lines:\n cropped_line.append(new_line_segment)\n yield cropped_line\n del line[:]\n else:\n append(segment)\n if line:\n yield adjust_line_length(line, length, style=style, pad=pad)\n\n @classmethod\n def adjust_line_length(\n cls, line: List[\"Segment\"], length: int, style: Style = None, pad: bool = True\n ) -> List[\"Segment\"]:\n \"\"\"Adjust a line to a given width (cropping or padding as required).\n\n Args:\n segments (Iterable[Segment]): A list of segments in a single line.\n length (int): The desired width of the line.\n style (Style, optional): The style of padding if used (space on the end). Defaults to None.\n pad (bool, optional): Pad lines with spaces if they are shorter than `length`. 
Defaults to True.\n\n Returns:\n List[Segment]: A line of segments with the desired length.\n \"\"\"\n line_length = sum(segment.cell_length for segment in line)\n new_line: List[Segment]\n\n if line_length < length:\n if pad:\n new_line = line + [cls(\" \" * (length - line_length), style)]\n else:\n new_line = line[:]\n elif line_length > length:\n new_line = []\n append = new_line.append\n line_length = 0\n for segment in line:\n segment_length = segment.cell_length\n if line_length + segment_length < length or segment.is_control:\n append(segment)\n line_length += segment_length\n else:\n text, segment_style, _ = segment\n text = set_cell_size(text, length - line_length)\n append(cls(text, segment_style))\n break\n else:\n new_line = line[:]\n return new_line\n\n @classmethod\n def get_line_length(cls, line: List[\"Segment\"]) -> int:\n \"\"\"Get the length of list of segments.\n\n Args:\n line (List[Segment]): A line encoded as a list of Segments (assumes no '\\\\n' characters),\n\n Returns:\n int: The length of the line.\n \"\"\"\n return sum(segment.cell_length for segment in line)\n\n @classmethod\n def get_shape(cls, lines: List[List[\"Segment\"]]) -> Tuple[int, int]:\n \"\"\"Get the shape (enclosing rectangle) of a list of lines.\n\n Args:\n lines (List[List[Segment]]): A list of lines (no '\\\\n' characters).\n\n Returns:\n Tuple[int, int]: Width and height in characters.\n \"\"\"\n get_line_length = cls.get_line_length\n max_width = max(get_line_length(line) for line in lines) if lines else 0\n return (max_width, len(lines))\n\n @classmethod\n def set_shape(\n cls,\n lines: List[List[\"Segment\"]],\n width: int,\n height: int = None,\n style: Style = None,\n ) -> List[List[\"Segment\"]]:\n \"\"\"Set the shape of a list of lines (enclosing rectangle).\n\n Args:\n lines (List[List[Segment]]): A list of lines.\n width (int): Desired width.\n height (int, optional): Desired height or None for no change.\n style (Style, optional): Style of any padding added. 
Defaults to None.\n\n Returns:\n List[List[Segment]]: New list of lines that fits width x height.\n \"\"\"\n if height is None:\n height = len(lines)\n new_lines: List[List[Segment]] = []\n pad_line = [Segment(\" \" * width, style)]\n append = new_lines.append\n adjust_line_length = cls.adjust_line_length\n line: Optional[List[Segment]]\n for line, _ in zip_longest(lines, range(height)):\n if line is None:\n append(pad_line)\n else:\n append(adjust_line_length(line, width, style=style))\n return new_lines\n\n @classmethod\n def simplify(cls, segments: Iterable[\"Segment\"]) -> Iterable[\"Segment\"]:\n \"\"\"Simplify an iterable of segments by combining contiguous segments with the same style.\n\n Args:\n segments (Iterable[Segment]): An iterable of segments.\n\n Returns:\n Iterable[Segment]: A possibly smaller iterable of segments that will render the same way.\n \"\"\"\n iter_segments = iter(segments)\n try:\n last_segment = next(iter_segments)\n except StopIteration:\n return\n\n _Segment = Segment\n for segment in iter_segments:\n if last_segment.style == segment.style and not segment.is_control:\n last_segment = _Segment(\n last_segment.text + segment.text, last_segment.style\n )\n else:\n yield last_segment\n last_segment = segment\n yield last_segment\n\n @classmethod\n def strip_links(cls, segments: Iterable[\"Segment\"]) -> Iterable[\"Segment\"]:\n \"\"\"Remove all links from an iterable of styles.\n\n Args:\n segments (Iterable[Segment]): An iterable segments.\n\n Yields:\n Segment: Segments with link removed.\n \"\"\"\n for segment in segments:\n if segment.is_control or segment.style is None:\n yield segment\n else:\n text, style, _is_control = segment\n yield cls(text, style.update_link(None) if style else None)\n\n @classmethod\n def strip_styles(cls, segments: Iterable[\"Segment\"]) -> Iterable[\"Segment\"]:\n \"\"\"Remove all styles from an iterable of segments.\n\n Yields:\n Segment: Segments with styles replace with None\n \"\"\"\n for text, _style, is_control in segments:\n yield cls(text, None, is_control)\n\n\nif __name__ == \"__main__\": # pragma: no cover\n lines = [[Segment(\"Hello\")]]\n lines = Segment.set_shape(lines, 50, 4, style=Style.parse(\"on blue\"))\n for line in lines:\n print(line)\n\n print(Style.parse(\"on blue\") + Style.parse(\"on red\"))\n", "rich/style.py": "import sys\nfrom functools import lru_cache\nfrom random import randint\nfrom time import time\nfrom typing import Any, Dict, Iterable, List, Optional, Type, Union\n\nfrom . import errors\nfrom .color import Color, ColorParseError, ColorSystem, blend_rgb\nfrom .terminal_theme import DEFAULT_TERMINAL_THEME, TerminalTheme\n\n# Style instances and style definitions are often interchangeable\nStyleType = Union[str, \"Style\"]\n\n\nclass _Bit:\n \"\"\"A descriptor to get/set a style attribute bit.\"\"\"\n\n __slots__ = [\"bit\"]\n\n def __init__(self, bit_no: int) -> None:\n self.bit = 1 << bit_no\n\n def __get__(self, obj: \"Style\", objtype: Type[\"Style\"]) -> Optional[bool]:\n if obj._set_attributes & self.bit:\n return obj._attributes & self.bit != 0\n return None\n\n\nclass Style:\n \"\"\"A terminal style.\n\n A terminal style consists of a color (`color`), a background color (`bgcolor`), and a number of attributes, such\n as bold, italic etc. The attributes have 3 states: they can either be on\n (``True``), off (``False``), or not set (``None``).\n\n Args:\n color (Union[Color, str], optional): Color of terminal text. 
Defaults to None.\n bgcolor (Union[Color, str], optional): Color of terminal background. Defaults to None.\n bold (bool, optional): Enable bold text. Defaults to None.\n dim (bool, optional): Enable dim text. Defaults to None.\n italic (bool, optional): Enable italic text. Defaults to None.\n underline (bool, optional): Enable underlined text. Defaults to None.\n blink (bool, optional): Enabled blinking text. Defaults to None.\n blink2 (bool, optional): Enable fast blinking text. Defaults to None.\n reverse (bool, optional): Enabled reverse text. Defaults to None.\n conceal (bool, optional): Enable concealed text. Defaults to None.\n strike (bool, optional): Enable strikethrough text. Defaults to None.\n underline2 (bool, optional): Enable doubly underlined text. Defaults to None.\n frame (bool, optional): Enable framed text. Defaults to None.\n encircle (bool, optional): Enable encircled text. Defaults to None.\n overline (bool, optional): Enable overlined text. Defaults to None.\n link (str, link): Link URL. Defaults to None.\n\n \"\"\"\n\n _color: Optional[Color]\n _bgcolor: Optional[Color]\n _attributes: int\n _set_attributes: int\n _hash: int\n _null: bool\n\n __slots__ = [\n \"_color\",\n \"_bgcolor\",\n \"_attributes\",\n \"_set_attributes\",\n \"_link\",\n \"_link_id\",\n \"_ansi\",\n \"_style_definition\",\n \"_hash\",\n \"_null\",\n ]\n\n # maps bits on to SGR parameter\n _style_map = {\n 0: \"1\",\n 1: \"2\",\n 2: \"3\",\n 3: \"4\",\n 4: \"5\",\n 5: \"6\",\n 6: \"7\",\n 7: \"8\",\n 8: \"9\",\n 9: \"21\",\n 10: \"51\",\n 11: \"52\",\n 12: \"53\",\n }\n\n def __init__(\n self,\n *,\n color: Union[Color, str] = None,\n bgcolor: Union[Color, str] = None,\n bold: bool = None,\n dim: bool = None,\n italic: bool = None,\n underline: bool = None,\n blink: bool = None,\n blink2: bool = None,\n reverse: bool = None,\n conceal: bool = None,\n strike: bool = None,\n underline2: bool = None,\n frame: bool = None,\n encircle: bool = None,\n overline: bool = None,\n link: str = None,\n ):\n self._ansi: Optional[str] = None\n self._style_definition: Optional[str] = None\n\n def _make_color(color: Union[Color, str]) -> Color:\n return color if isinstance(color, Color) else Color.parse(color)\n\n self._color = None if color is None else _make_color(color)\n self._bgcolor = None if bgcolor is None else _make_color(bgcolor)\n self._set_attributes = sum(\n (\n bold is not None,\n dim is not None and 2,\n italic is not None and 4,\n underline is not None and 8,\n blink is not None and 16,\n blink2 is not None and 32,\n reverse is not None and 64,\n conceal is not None and 128,\n strike is not None and 256,\n underline2 is not None and 512,\n frame is not None and 1024,\n encircle is not None and 2048,\n overline is not None and 4096,\n )\n )\n self._attributes = (\n sum(\n (\n bold and 1 or 0,\n dim and 2 or 0,\n italic and 4 or 0,\n underline and 8 or 0,\n blink and 16 or 0,\n blink2 and 32 or 0,\n reverse and 64 or 0,\n conceal and 128 or 0,\n strike and 256 or 0,\n underline2 and 512 or 0,\n frame and 1024 or 0,\n encircle and 2048 or 0,\n overline and 4096 or 0,\n )\n )\n if self._set_attributes\n else 0\n )\n\n self._link = link\n self._link_id = f\"{time()}-{randint(0, 999999)}\" if link else \"\"\n self._hash = hash(\n (\n self._color,\n self._bgcolor,\n self._attributes,\n self._set_attributes,\n link,\n )\n )\n self._null = not (self._set_attributes or color or bgcolor or link)\n\n @classmethod\n def null(cls) -> \"Style\":\n \"\"\"Create an 'null' style, equivalent to Style(), but more 
performant.\"\"\"\n return NULL_STYLE\n\n @classmethod\n def from_color(cls, color: Color = None, bgcolor: Color = None) -> \"Style\":\n \"\"\"Create a new style with colors and no attributes.\n\n Returns:\n color (Optional[Color]): A (foreground) color, or None for no color. Defaults to None.\n bgcolor (Optional[Color]): A (background) color, or None for no color. Defaults to None.\n \"\"\"\n style = cls.__new__(Style)\n style._ansi = None\n style._style_definition = None\n style._color = color\n style._bgcolor = bgcolor\n style._set_attributes = 0\n style._attributes = 0\n style._link = None\n style._link_id = \"\"\n style._hash = hash(\n (\n color,\n bgcolor,\n None,\n None,\n None,\n )\n )\n style._null = not (color or bgcolor)\n return style\n\n bold = _Bit(0)\n dim = _Bit(1)\n italic = _Bit(2)\n underline = _Bit(3)\n blink = _Bit(4)\n blink2 = _Bit(5)\n reverse = _Bit(6)\n conceal = _Bit(7)\n strike = _Bit(8)\n underline2 = _Bit(9)\n frame = _Bit(10)\n encircle = _Bit(11)\n overline = _Bit(12)\n\n @property\n def link_id(self) -> str:\n \"\"\"Get a link id, used in ansi code for links.\"\"\"\n return self._link_id\n\n def __str__(self) -> str:\n \"\"\"Re-generate style definition from attributes.\"\"\"\n if self._style_definition is None:\n attributes: List[str] = []\n append = attributes.append\n bits = self._set_attributes\n if bits & 0b0000000001111:\n if bits & 1:\n append(\"bold\" if self.bold else \"not bold\")\n if bits & (1 << 1):\n append(\"dim\" if self.dim else \"not dim\")\n if bits & (1 << 2):\n append(\"italic\" if self.italic else \"not italic\")\n if bits & (1 << 3):\n append(\"underline\" if self.underline else \"not underline\")\n if bits & 0b0000111110000:\n if bits & (1 << 4):\n append(\"blink\" if self.blink else \"not blink\")\n if bits & (1 << 5):\n append(\"blink2\" if self.blink2 else \"not blink2\")\n if bits & (1 << 6):\n append(\"reverse\" if self.reverse else \"not reverse\")\n if bits & (1 << 7):\n append(\"conceal\" if self.conceal else \"not conceal\")\n if bits & (1 << 8):\n append(\"strike\" if self.strike else \"not strike\")\n if bits & 0b1111000000000:\n if bits & (1 << 9):\n append(\"underline2\" if self.underline2 else \"not underline2\")\n if bits & (1 << 10):\n append(\"frame\" if self.frame else \"not frame\")\n if bits & (1 << 11):\n append(\"encircle\" if self.encircle else \"not encircle\")\n if bits & (1 << 12):\n append(\"overline\" if self.overline else \"not overline\")\n if self._color is not None:\n append(self._color.name)\n if self._bgcolor is not None:\n append(\"on\")\n append(self._bgcolor.name)\n if self._link:\n append(\"link\")\n append(self._link)\n self._style_definition = \" \".join(attributes) or \"none\"\n return self._style_definition\n\n def __bool__(self) -> bool:\n \"\"\"A Style is false if it has no attributes, colors, or links.\"\"\"\n return not self._null\n\n def _make_ansi_codes(self, color_system: ColorSystem) -> str:\n \"\"\"Generate ANSI codes for this style.\n\n Args:\n color_system (ColorSystem): Color system.\n\n Returns:\n str: String containing codes.\n \"\"\"\n if self._ansi is None:\n sgr: List[str] = []\n append = sgr.append\n _style_map = self._style_map\n attributes = self._attributes & self._set_attributes\n if attributes:\n if attributes & 1:\n append(_style_map[0])\n if attributes & 2:\n append(_style_map[1])\n if attributes & 4:\n append(_style_map[2])\n if attributes & 8:\n append(_style_map[3])\n if attributes & 0b0000111110000:\n for bit in range(4, 9):\n if attributes & (1 << bit):\n 
append(_style_map[bit])\n if attributes & 0b1111000000000:\n for bit in range(9, 13):\n if attributes & (1 << bit):\n append(_style_map[bit])\n if self._color is not None:\n sgr.extend(self._color.downgrade(color_system).get_ansi_codes())\n if self._bgcolor is not None:\n sgr.extend(\n self._bgcolor.downgrade(color_system).get_ansi_codes(\n foreground=False\n )\n )\n self._ansi = \";\".join(sgr)\n return self._ansi\n\n @classmethod\n @lru_cache(maxsize=1024)\n def normalize(cls, style: str) -> str:\n \"\"\"Normalize a style definition so that styles with the same effect have the same string\n representation.\n\n Args:\n style (str): A style definition.\n\n Returns:\n str: Normal form of style definition.\n \"\"\"\n try:\n return str(cls.parse(style))\n except errors.StyleSyntaxError:\n return style.strip().lower()\n\n @classmethod\n def pick_first(cls, *values: Optional[StyleType]) -> StyleType:\n \"\"\"Pick first non-None style.\"\"\"\n for value in values:\n if value is not None:\n return value\n raise ValueError(\"expected at least one non-None style\")\n\n def __repr__(self) -> str:\n \"\"\"Render a named style differently from an anonymous style.\"\"\"\n return f'Style.parse(\"{self}\")'\n\n def __eq__(self, other: Any) -> bool:\n if not isinstance(other, Style):\n return NotImplemented\n return (\n self._color == other._color\n and self._bgcolor == other._bgcolor\n and self._set_attributes == other._set_attributes\n and self._attributes == other._attributes\n and self._link == other._link\n )\n\n def __hash__(self) -> int:\n return self._hash\n\n @property\n def color(self) -> Optional[Color]:\n \"\"\"The foreground color or None if it is not set.\"\"\"\n return self._color\n\n @property\n def bgcolor(self) -> Optional[Color]:\n \"\"\"The background color or None if it is not set.\"\"\"\n return self._bgcolor\n\n @property\n def link(self) -> Optional[str]:\n \"\"\"Link text, if set.\"\"\"\n return self._link\n\n @property\n def transparent_background(self) -> bool:\n \"\"\"Check if the style specified a transparent background.\"\"\"\n return self.bgcolor is None or self.bgcolor.is_default\n\n @property\n def background_style(self) -> \"Style\":\n \"\"\"A Style with background only.\"\"\"\n return Style(bgcolor=self.bgcolor)\n\n @classmethod\n @lru_cache(maxsize=4096)\n def parse(cls, style_definition: str) -> \"Style\":\n \"\"\"Parse a style definition.\n\n Args:\n style_definition (str): A string containing a style.\n\n Raises:\n errors.StyleSyntaxError: If the style definition syntax is invalid.\n\n Returns:\n `Style`: A Style instance.\n \"\"\"\n if style_definition.strip() == \"none\":\n return cls.null()\n\n style_attributes = {\n \"dim\": \"dim\",\n \"d\": \"dim\",\n \"bold\": \"bold\",\n \"b\": \"bold\",\n \"italic\": \"italic\",\n \"i\": \"italic\",\n \"underline\": \"underline\",\n \"u\": \"underline\",\n \"blink\": \"blink\",\n \"blink2\": \"blink2\",\n \"reverse\": \"reverse\",\n \"r\": \"reverse\",\n \"conceal\": \"conceal\",\n \"c\": \"conceal\",\n \"strike\": \"strike\",\n \"s\": \"strike\",\n \"underline2\": \"underline2\",\n \"uu\": \"underline2\",\n \"frame\": \"frame\",\n \"encircle\": \"encircle\",\n \"overline\": \"overline\",\n \"o\": \"overline\",\n }\n color: Optional[str] = None\n bgcolor: Optional[str] = None\n attributes: Dict[str, Optional[bool]] = {}\n link: Optional[str] = None\n\n words = iter(style_definition.split())\n for original_word in words:\n word = original_word.lower()\n if word == \"on\":\n word = next(words, \"\")\n if not word:\n raise 
errors.StyleSyntaxError(\"color expected after 'on'\")\n try:\n Color.parse(word) is None\n except ColorParseError as error:\n raise errors.StyleSyntaxError(\n f\"unable to parse {word!r} as background color; {error}\"\n ) from None\n bgcolor = word\n\n elif word == \"not\":\n word = next(words, \"\")\n attribute = style_attributes.get(word)\n if attribute is None:\n raise errors.StyleSyntaxError(\n f\"expected style attribute after 'not', found {word!r}\"\n )\n attributes[attribute] = False\n\n elif word == \"link\":\n word = next(words, \"\")\n if not word:\n raise errors.StyleSyntaxError(\"URL expected after 'link'\")\n link = word\n\n elif word in style_attributes:\n attributes[style_attributes[word]] = True\n\n else:\n try:\n Color.parse(word)\n except ColorParseError as error:\n raise errors.StyleSyntaxError(\n f\"unable to parse {word!r} as color; {error}\"\n ) from None\n color = word\n style = Style(color=color, bgcolor=bgcolor, link=link, **attributes)\n return style\n\n @lru_cache(maxsize=1024)\n def get_html_style(self, theme: TerminalTheme = None) -> str:\n \"\"\"Get a CSS style rule.\"\"\"\n theme = theme or DEFAULT_TERMINAL_THEME\n css: List[str] = []\n append = css.append\n\n color = self.color\n bgcolor = self.bgcolor\n if self.reverse:\n color, bgcolor = bgcolor, color\n if self.dim:\n foreground_color = (\n theme.foreground_color if color is None else color.get_truecolor(theme)\n )\n color = Color.from_triplet(\n blend_rgb(foreground_color, theme.background_color, 0.5)\n )\n if color is not None:\n theme_color = color.get_truecolor(theme)\n append(f\"color: {theme_color.hex}\")\n if bgcolor is not None:\n theme_color = bgcolor.get_truecolor(theme, foreground=False)\n append(f\"background-color: {theme_color.hex}\")\n if self.bold:\n append(\"font-weight: bold\")\n if self.italic:\n append(\"font-style: italic\")\n if self.underline:\n append(\"text-decoration: underline\")\n if self.strike:\n append(\"text-decoration: line-through\")\n if self.overline:\n append(\"text-decoration: overline\")\n return \"; \".join(css)\n\n @classmethod\n def combine(cls, styles: Iterable[\"Style\"]) -> \"Style\":\n \"\"\"Combine styles and get result.\n\n Args:\n styles (Iterable[Style]): Styles to combine.\n\n Returns:\n Style: A new style instance.\n \"\"\"\n iter_styles = iter(styles)\n return sum(iter_styles, next(iter_styles))\n\n @classmethod\n def chain(cls, *styles: \"Style\") -> \"Style\":\n \"\"\"Combine styles from positional argument in to a single style.\n\n Args:\n *styles (Iterable[Style]): Styles to combine.\n\n Returns:\n Style: A new style instance.\n \"\"\"\n iter_styles = iter(styles)\n return sum(iter_styles, next(iter_styles))\n\n def copy(self) -> \"Style\":\n \"\"\"Get a copy of this style.\n\n Returns:\n Style: A new Style instance with identical attributes.\n \"\"\"\n if self._null:\n return NULL_STYLE\n style = self.__new__(Style)\n style._ansi = self._ansi\n style._style_definition = self._style_definition\n style._color = self._color\n style._bgcolor = self._bgcolor\n style._attributes = self._attributes\n style._set_attributes = self._set_attributes\n style._link = self._link\n style._link_id = f\"{time()}-{randint(0, 999999)}\" if self._link else \"\"\n style._hash = self._hash\n style._null = False\n return style\n\n def update_link(self, link: str = None) -> \"Style\":\n \"\"\"Get a copy with a different value for link.\n\n Args:\n link (str, optional): New value for link. 
Defaults to None.\n\n Returns:\n Style: A new Style instance.\n \"\"\"\n style = self.__new__(Style)\n style._ansi = self._ansi\n style._style_definition = self._style_definition\n style._color = self._color\n style._bgcolor = self._bgcolor\n style._attributes = self._attributes\n style._set_attributes = self._set_attributes\n style._link = link\n style._link_id = f\"{time()}-{randint(0, 999999)}\" if link else \"\"\n style._hash = self._hash\n style._null = False\n return style\n\n def render(\n self,\n text: str = \"\",\n *,\n color_system: Optional[ColorSystem] = ColorSystem.TRUECOLOR,\n legacy_windows: bool = False,\n ) -> str:\n \"\"\"Render the ANSI codes for the style.\n\n Args:\n text (str, optional): A string to style. Defaults to \"\".\n color_system (Optional[ColorSystem], optional): Color system to render to. Defaults to ColorSystem.TRUECOLOR.\n\n Returns:\n str: A string containing ANSI style codes.\n \"\"\"\n if not text or color_system is None:\n return text\n attrs = self._make_ansi_codes(color_system)\n rendered = f\"\\x1b[{attrs}m{text}\\x1b[0m\" if attrs else text\n if self._link and not legacy_windows:\n rendered = (\n f\"\\x1b]8;id={self._link_id};{self._link}\\x1b\\\\{rendered}\\x1b]8;;\\x1b\\\\\"\n )\n return rendered\n\n def test(self, text: Optional[str] = None) -> None:\n \"\"\"Write text with style directly to terminal.\n\n This method is for testing purposes only.\n\n Args:\n text (Optional[str], optional): Text to style or None for style name.\n\n \"\"\"\n text = text or str(self)\n sys.stdout.write(f\"{self.render(text)}\\n\")\n\n def __add__(self, style: Optional[\"Style\"]) -> \"Style\":\n if not (isinstance(style, Style) or style is None):\n return NotImplemented # type: ignore\n if style is None or style._null:\n return self\n if self._null:\n return style\n new_style = self.__new__(Style)\n new_style._ansi = None\n new_style._style_definition = None\n new_style._color = style._color or self._color\n new_style._bgcolor = style._bgcolor or self._bgcolor\n new_style._attributes = (self._attributes & ~style._set_attributes) | (\n style._attributes & style._set_attributes\n )\n new_style._set_attributes = self._set_attributes | style._set_attributes\n new_style._link = style._link or self._link\n new_style._link_id = style._link_id or self._link_id\n new_style._hash = style._hash\n new_style._null = self._null or style._null\n return new_style\n\n\nNULL_STYLE = Style()\n\n\nclass StyleStack:\n \"\"\"A stack of styles.\"\"\"\n\n __slots__ = [\"_stack\"]\n\n def __init__(self, default_style: \"Style\") -> None:\n self._stack: List[Style] = [default_style]\n\n def __repr__(self) -> str:\n return f\"<stylestack {self._stack!r}>\"\n\n @property\n def current(self) -> Style:\n \"\"\"Get the Style at the top of the stack.\"\"\"\n return self._stack[-1]\n\n def push(self, style: Style) -> None:\n \"\"\"Push a new style on to the stack.\n\n Args:\n style (Style): New style to combine with current style.\n \"\"\"\n self._stack.append(self._stack[-1] + style)\n\n def pop(self) -> Style:\n \"\"\"Pop last style and discard.\n\n Returns:\n Style: New current style (also available as stack.current)\n \"\"\"\n self._stack.pop()\n return self._stack[-1]\n"}
|
diff --git a/CHANGELOG.md b/CHANGELOG.md
index d8378b729a..acb53b1308 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -10,6 +10,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Added
- Added rich.tree
+- Added no_color argument to Console
## [9.6.2] - 2021-01-07
|
{"rich/segment.py": [{"type": "function", "name": "Segment.remove_color", "lines": [344, 363], "signature": "def remove_color(cls, segments: Iterable[\"Segment\"]) -> Iterable[\"Segment\"]:", "doc": "Remove all color from an iterable of segments.\n\nArgs:\n segments (Iterable[Segment]): An iterable segments.\n\nYields:\n Segment: Segments with colorless style."}], "rich/style.py": [{"type": "function", "name": "Style.without_color", "lines": [387, 402], "signature": "def without_color(self) -> \"Style\":", "doc": "Get a copy of the style with color removed."}]}
| null |
["tests/test_console.py::test_no_color", "tests/test_segment.py::test_remove_color", "tests/test_style.py::test_without_color"]
|
["tests/test_console.py::test_dumb_terminal", "tests/test_console.py::test_soft_wrap", "tests/test_console.py::test_16color_terminal", "tests/test_console.py::test_truecolor_terminal", "tests/test_console.py::test_console_options_update", "tests/test_console.py::test_init", "tests/test_console.py::test_size", "tests/test_console.py::test_repr", "tests/test_console.py::test_print", "tests/test_console.py::test_log", "tests/test_console.py::test_print_empty", "tests/test_console.py::test_markup_highlight", "tests/test_console.py::test_print_style", "tests/test_console.py::test_show_cursor", "tests/test_console.py::test_clear", "tests/test_console.py::test_clear_no_terminal", "tests/test_console.py::test_get_style", "tests/test_console.py::test_get_style_default", "tests/test_console.py::test_get_style_error", "tests/test_console.py::test_render_error", "tests/test_console.py::test_control", "tests/test_console.py::test_capture", "tests/test_console.py::test_input", "tests/test_console.py::test_input_legacy_windows", "tests/test_console.py::test_input_password", "tests/test_console.py::test_status", "tests/test_console.py::test_justify_none", "tests/test_console.py::test_justify_left", "tests/test_console.py::test_justify_center", "tests/test_console.py::test_justify_right", "tests/test_console.py::test_justify_renderable_none", "tests/test_console.py::test_justify_renderable_left", "tests/test_console.py::test_justify_renderable_center", "tests/test_console.py::test_justify_renderable_right", "tests/test_console.py::test_render_broken_renderable", "tests/test_console.py::test_export_text", "tests/test_console.py::test_export_html", "tests/test_console.py::test_export_html_inline", "tests/test_console.py::test_save_text", "tests/test_console.py::test_save_html", "tests/test_console.py::test_no_wrap", "tests/test_console.py::test_unicode_error", "tests/test_console.py::test_bell", "tests/test_console.py::test_pager", "tests/test_console.py::test_out", "tests/test_console.py::test_render_group", "tests/test_console.py::test_render_group_fit", "tests/test_console.py::test_get_time", "tests/test_console.py::test_console_style", "tests/test_segment.py::test_repr", "tests/test_segment.py::test_line", "tests/test_segment.py::test_apply_style", "tests/test_segment.py::test_split_lines", "tests/test_segment.py::test_split_and_crop_lines", "tests/test_segment.py::test_adjust_line_length", "tests/test_segment.py::test_get_line_length", "tests/test_segment.py::test_get_shape", "tests/test_segment.py::test_set_shape", "tests/test_segment.py::test_simplify", "tests/test_segment.py::test_filter_control", "tests/test_segment.py::test_strip_styles", "tests/test_segment.py::test_strip_links", "tests/test_style.py::test_str", "tests/test_style.py::test_ansi_codes", "tests/test_style.py::test_repr", "tests/test_style.py::test_eq", "tests/test_style.py::test_hash", "tests/test_style.py::test_empty", "tests/test_style.py::test_bool", "tests/test_style.py::test_color_property", "tests/test_style.py::test_bgcolor_property", "tests/test_style.py::test_parse", "tests/test_style.py::test_link_id", "tests/test_style.py::test_get_html_style", "tests/test_style.py::test_chain", "tests/test_style.py::test_copy", "tests/test_style.py::test_render", "tests/test_style.py::test_test", "tests/test_style.py::test_add", "tests/test_style.py::test_iadd", "tests/test_style.py::test_style_stack", "tests/test_style.py::test_pick_first", "tests/test_style.py::test_background_style"]
|
b0661de34bab35af9b4b1d3ba8e28b186b225e84
|
{"first_commit_time": 1610208516.0, "pr_title": "No color", "pr_body": "Fixes https://github.com/willmcgugan/rich/issues/882\r\n\r\nAdds a no_color mode which strips color before render.\r\n\r\n## Type of changes\r\n\r\n- [ ] Bug fix\r\n- [ ] New feature\r\n- [ ] Documentation / docstrings\r\n- [ ] Tests\r\n- [ ] Other\r\n\r\n## Checklist\r\n\r\n- [ ] I've run the latest [black](https://github.com/psf/black) with default args on new code.\r\n- [ ] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.\r\n- [ ] I've added tests for new code.\r\n- [ ] I accept that @willmcgugan may be pedantic in the code review.\r\n\r\n## Description\r\n\r\nPlease describe your changes here. If this fixes a bug, please link to the issue, if possible.\r\n", "pr_timeline": [{"time": 1610209402.0, "comment": "# [Codecov](https://codecov.io/gh/willmcgugan/rich/pull/901?src=pr&el=h1) Report\n> Merging [#901](https://codecov.io/gh/willmcgugan/rich/pull/901?src=pr&el=desc) (6bac7ff) into [master](https://codecov.io/gh/willmcgugan/rich/commit/6853210b5ea1886b7380589239b9b055eaaa0687?el=desc) (6853210) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/willmcgugan/rich/pull/901?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #901 +/- ##\n=======================================\n Coverage 99.53% 99.54% \n=======================================\n Files 58 58 \n Lines 4992 5022 +30 \n=======================================\n+ Hits 4969 4999 +30 \n Misses 23 23 \n```\n\n| Flag | Coverage \u0394 | |\n|---|---|---|\n| unittests | `99.54% <100.00%> (+<0.01%)` | :arrow_up: |\n\nFlags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags#carryforward-flags-in-the-pull-request-comment) to find out more.\n\n| [Impacted Files](https://codecov.io/gh/willmcgugan/rich/pull/901?src=pr&el=tree) | Coverage \u0394 | |\n|---|---|---|\n| [rich/console.py](https://codecov.io/gh/willmcgugan/rich/pull/901/diff?src=pr&el=tree#diff-cmljaC9jb25zb2xlLnB5) | `100.00% <100.00%> (\u00f8)` | |\n| [rich/segment.py](https://codecov.io/gh/willmcgugan/rich/pull/901/diff?src=pr&el=tree#diff-cmljaC9zZWdtZW50LnB5) | `100.00% <100.00%> (\u00f8)` | |\n| [rich/style.py](https://codecov.io/gh/willmcgugan/rich/pull/901/diff?src=pr&el=tree#diff-cmljaC9zdHlsZS5weQ==) | `100.00% <100.00%> (\u00f8)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/willmcgugan/rich/pull/901?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `\u0394 = absolute <relative> (impact)`, `\u00f8 = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/willmcgugan/rich/pull/901?src=pr&el=footer). Last update [a9c0f91...6bac7ff](https://codecov.io/gh/willmcgugan/rich/pull/901?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"}], "issues": {}}
|
Textualize/textual
| 1,517
|
https://github.com/Textualize/textual/pull/1517
|
Textualize__textual-1517
|
[]
|
779b10a0e8b8581fab512eb684a06eb90381705d
|
diff --git a/src/textual/app.py b/src/textual/app.py
index b207bbd025..89da6aaf60 100644
--- a/src/textual/app.py
+++ b/src/textual/app.py
@@ -1,6 +1,8 @@
from __future__ import annotations
import asyncio
+from concurrent.futures import Future
+from functools import partial
import inspect
import io
import os
@@ -18,6 +20,7 @@
from typing import (
TYPE_CHECKING,
Any,
+ Awaitable,
Callable,
Generic,
Iterable,
@@ -206,6 +209,8 @@ def stop(self) -> None:
CSSPathType = Union[str, PurePath, List[Union[str, PurePath]], None]
+CallThreadReturnType = TypeVar("CallThreadReturnType")
+
@rich.repr.auto
class App(Generic[ReturnType], DOMNode):
@@ -353,6 +358,8 @@ def __init__(
else:
self.devtools = DevtoolsClient()
+ self._loop: asyncio.AbstractEventLoop | None = None
+ self._thread_id: int = 0
self._return_value: ReturnType | None = None
self._exit = False
@@ -604,6 +611,51 @@ def _log(
except Exception as error:
self._handle_exception(error)
+ def call_from_thread(
+ self,
+ callback: Callable[..., CallThreadReturnType | Awaitable[CallThreadReturnType]],
+ *args,
+ **kwargs,
+ ) -> CallThreadReturnType:
+ """Run a callback from another thread.
+
+ Like asyncio apps in general, Textual apps are not thread-safe. If you call methods
+ or set attributes on Textual objects from a thread, you may get unpredictable results.
+
+        This method will ensure that your code is run within the correct context.
+
+ Args:
+ callback (Callable): A callable to run.
+ *args: Arguments to the callback.
+ **kwargs: Keyword arguments for the callback.
+
+ Raises:
+ RuntimeError: If the app isn't running or if this method is called from the same
+ thread where the app is running.
+ """
+
+ if self._loop is None:
+ raise RuntimeError("App is not running")
+
+ if self._thread_id == threading.get_ident():
+ raise RuntimeError(
+ "The `call_from_thread` method must run in a different thread from the app"
+ )
+
+ callback_with_args = partial(callback, *args, **kwargs)
+
+ async def run_callback() -> CallThreadReturnType:
+ """Run the callback, set the result or error on the future."""
+ self._set_active()
+ return await invoke(callback_with_args)
+
+ # Post the message to the main loop
+ future: Future[CallThreadReturnType] = asyncio.run_coroutine_threadsafe(
+ run_callback(), loop=self._loop
+ )
+ result = future.result()
+ return result
+
def action_toggle_dark(self) -> None:
"""Action to toggle dark mode."""
self.dark = not self.dark
@@ -874,11 +926,17 @@ def run(
async def run_app() -> None:
"""Run the app."""
- await self.run_async(
- headless=headless,
- size=size,
- auto_pilot=auto_pilot,
- )
+ self._loop = asyncio.get_running_loop()
+ self._thread_id = threading.get_ident()
+ try:
+ await self.run_async(
+ headless=headless,
+ size=size,
+ auto_pilot=auto_pilot,
+ )
+ finally:
+ self._loop = None
+ self._thread_id = 0
if _ASYNCIO_GET_EVENT_LOOP_IS_DEPRECATED:
# N.B. This doesn't work with Python<3.10, as we end up with 2 event loops:
|
diff --git a/tests/test_concurrency.py b/tests/test_concurrency.py
new file mode 100644
index 0000000000..c73418f2f2
--- /dev/null
+++ b/tests/test_concurrency.py
@@ -0,0 +1,53 @@
+import pytest
+
+from threading import Thread
+from textual.app import App, ComposeResult
+from textual.widgets import TextLog
+
+
+def test_call_from_thread_app_not_running():
+ app = App()
+
+ # Should fail if app is not running
+ with pytest.raises(RuntimeError):
+ app.call_from_thread(print)
+
+
+def test_call_from_thread():
+ """Test the call_from_thread method."""
+
+ class BackgroundThread(Thread):
+ """A background thread which will modify app in some way."""
+
+ def __init__(self, app: App) -> None:
+ self.app = app
+ super().__init__()
+
+ def run(self) -> None:
+ def write_stuff(text: str) -> None:
+ """Write stuff to a widget."""
+ self.app.query_one(TextLog).write(text)
+
+ self.app.call_from_thread(write_stuff, "Hello")
+ # Exit the app with a code we can assert
+ self.app.call_from_thread(self.app.exit, 123)
+
+ class ThreadTestApp(App):
+ """Trivial app with a single widget."""
+
+ def compose(self) -> ComposeResult:
+ yield TextLog()
+
+ def on_ready(self) -> None:
+ """Launch a thread which will modify the app."""
+ try:
+ self.call_from_thread(print)
+ except RuntimeError as error:
+ # Calling this from the same thread as the app is an error
+ self._runtime_error = error
+ BackgroundThread(self).start()
+
+ app = ThreadTestApp()
+ result = app.run(headless=True, size=(80, 24))
+ assert isinstance(app._runtime_error, RuntimeError)
+ assert result == 123
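A minimal usage sketch of the `call_from_thread` API introduced by the patch above (this is not part of the PR's own tests). It mirrors the test: a plain worker thread marshals UI updates back onto the app's event loop instead of touching widgets directly; all names (`App`, `ComposeResult`, `TextLog`, `call_from_thread`, `exit`) come from the diff and test above.

```python
from threading import Thread

from textual.app import App, ComposeResult
from textual.widgets import TextLog


class WorkerApp(App):
    """App that lets a background thread write to a TextLog safely."""

    def compose(self) -> ComposeResult:
        yield TextLog()

    def on_ready(self) -> None:
        # Start the worker once the app is running.
        Thread(target=self.background_work, daemon=True).start()

    def background_work(self) -> None:
        def write_line(text: str) -> None:
            self.query_one(TextLog).write(text)

        # Runs in a plain thread: call_from_thread schedules the callable on
        # the app's asyncio loop and blocks until it has returned.
        self.call_from_thread(write_line, "hello from a thread")
        self.call_from_thread(self.exit, 0)


if __name__ == "__main__":
    WorkerApp().run()
```

Calling `call_from_thread` from the app's own thread, or before the app is running, raises `RuntimeError`, as exercised by the test above.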
| 2023-01-07T14:10:40
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"src/textual/app.py": "from __future__ import annotations\n\nimport asyncio\nimport inspect\nimport io\nimport os\nimport platform\nimport sys\nimport threading\nimport unicodedata\nimport warnings\nfrom asyncio import Task\nfrom contextlib import asynccontextmanager, redirect_stderr, redirect_stdout\nfrom datetime import datetime\nfrom pathlib import Path, PurePath\nfrom queue import Queue\nfrom time import perf_counter\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Generic,\n Iterable,\n List,\n Type,\n TypeVar,\n Union,\n cast,\n overload,\n)\nfrom weakref import WeakSet, WeakValueDictionary\n\nimport nanoid\nimport rich\nimport rich.repr\nfrom rich.console import Console, RenderableType\nfrom rich.protocol import is_renderable\nfrom rich.segment import Segment, Segments\nfrom rich.traceback import Traceback\n\nfrom . import actions, Logger, LogGroup, LogVerbosity, events, log, messages\nfrom ._animator import DEFAULT_EASING, Animatable, Animator, EasingFunction\nfrom ._ansi_sequences import SYNC_END, SYNC_START\nfrom ._callback import invoke\nfrom ._context import active_app\nfrom ._event_broker import NoHandler, extract_handler_actions\nfrom ._filter import LineFilter, Monochrome\nfrom ._path import _make_path_object_relative\nfrom ._typing import Final, TypeAlias\nfrom .actions import SkipAction\nfrom .await_remove import AwaitRemove\nfrom .binding import Binding, Bindings\nfrom .css.query import NoMatches\nfrom .css.stylesheet import Stylesheet\nfrom .design import ColorSystem\nfrom .dom import DOMNode\nfrom .driver import Driver\nfrom .drivers.headless_driver import HeadlessDriver\nfrom .features import FeatureFlag, parse_features\nfrom .file_monitor import FileMonitor\nfrom .geometry import Offset, Region, Size\nfrom .keys import REPLACED_KEYS, _get_key_display\nfrom .messages import CallbackType\nfrom .reactive import Reactive\nfrom .renderables.blank import Blank\nfrom .screen import Screen\nfrom .widget import AwaitMount, Widget\n\n\nif TYPE_CHECKING:\n from .devtools.client import DevtoolsClient\n from .pilot import Pilot\n\nPLATFORM = platform.system()\nWINDOWS = PLATFORM == \"Windows\"\n\n# asyncio will warn against resources not being cleared\nwarnings.simplefilter(\"always\", ResourceWarning)\n\n# `asyncio.get_event_loop()` is deprecated since Python 3.10:\n_ASYNCIO_GET_EVENT_LOOP_IS_DEPRECATED = sys.version_info >= (3, 10, 0)\n\nLayoutDefinition = \"dict[str, Any]\"\n\nDEFAULT_COLORS = {\n \"dark\": ColorSystem(\n primary=\"#004578\",\n secondary=\"#ffa62b\",\n warning=\"#ffa62b\",\n error=\"#ba3c5b\",\n success=\"#4EBF71\",\n accent=\"#0178D4\",\n dark=True,\n ),\n \"light\": ColorSystem(\n primary=\"#004578\",\n secondary=\"#ffa62b\",\n warning=\"#ffa62b\",\n error=\"#ba3c5b\",\n success=\"#4EBF71\",\n accent=\"#0178D4\",\n dark=False,\n ),\n}\n\nComposeResult = Iterable[Widget]\nRenderResult = RenderableType\n\nAutopilotCallbackType: TypeAlias = \"Callable[[Pilot], Coroutine[Any, Any, None]]\"\n\n\nclass AppError(Exception):\n pass\n\n\nclass ActionError(Exception):\n pass\n\n\nclass ScreenError(Exception):\n pass\n\n\nclass ScreenStackError(ScreenError):\n \"\"\"Raised when attempting to pop the last screen from the stack.\"\"\"\n\n\nclass CssPathError(Exception):\n \"\"\"Raised when supplied CSS path(s) are invalid.\"\"\"\n\n\nReturnType = TypeVar(\"ReturnType\")\n\n\nclass _NullFile:\n \"\"\"A file-like where writes go nowhere.\"\"\"\n\n def write(self, text: str) -> None:\n pass\n\n def flush(self) -> None:\n pass\n\n\nMAX_QUEUED_WRITES: Final[int] = 
30\n\n\nclass _WriterThread(threading.Thread):\n \"\"\"A thread / file-like to do writes to stdout in the background.\"\"\"\n\n def __init__(self) -> None:\n super().__init__(daemon=True)\n self._queue: Queue[str | None] = Queue(MAX_QUEUED_WRITES)\n self._file = sys.__stdout__\n\n def write(self, text: str) -> None:\n \"\"\"Write text. Text will be enqueued for writing.\n\n Args:\n text (str): Text to write to the file.\n \"\"\"\n self._queue.put(text)\n\n def isatty(self) -> bool:\n \"\"\"Pretend to be a terminal.\n\n Returns:\n bool: True if this is a tty.\n \"\"\"\n return True\n\n def fileno(self) -> int:\n \"\"\"Get file handle number.\n\n Returns:\n int: File number of proxied file.\n \"\"\"\n return self._file.fileno()\n\n def flush(self) -> None:\n \"\"\"Flush the file (a no-op, because flush is done in the thread).\"\"\"\n return\n\n def run(self) -> None:\n \"\"\"Run the thread.\"\"\"\n write = self._file.write\n flush = self._file.flush\n get = self._queue.get\n qsize = self._queue.qsize\n # Read from the queue, write to the file.\n # Flush when there is a break.\n while True:\n text: str | None = get()\n empty = qsize() == 0\n if text is None:\n break\n write(text)\n if empty:\n flush()\n\n def stop(self) -> None:\n \"\"\"Stop the thread, and block until it finished.\"\"\"\n self._queue.put(None)\n self.join()\n\n\nCSSPathType = Union[str, PurePath, List[Union[str, PurePath]], None]\n\n\[email protected]\nclass App(Generic[ReturnType], DOMNode):\n \"\"\"The base class for Textual Applications.\n Args:\n driver_class (Type[Driver] | None, optional): Driver class or ``None`` to auto-detect. Defaults to None.\n css_path (str | PurePath | list[str | PurePath] | None, optional): Path to CSS or ``None`` for no CSS file.\n Defaults to None. To load multiple CSS files, pass a list of strings or paths which will be loaded in order.\n watch_css (bool, optional): Watch CSS for changes. Defaults to False.\n\n Raises:\n CssPathError: When the supplied CSS path(s) are an unexpected type.\n \"\"\"\n\n CSS = \"\"\n \"\"\"Inline CSS, useful for quick scripts. This is loaded after CSS_PATH,\n and therefore takes priority in the event of a specificity clash.\"\"\"\n\n # Default (the lowest priority) CSS\n DEFAULT_CSS = \"\"\"\n App {\n background: $background;\n color: $text;\n }\n \"\"\"\n\n SCREENS: dict[str, Screen | Callable[[], Screen]] = {}\n _BASE_PATH: str | None = None\n CSS_PATH: CSSPathType = None\n TITLE: str | None = None\n SUB_TITLE: str | None = None\n\n BINDINGS = [\n Binding(\"ctrl+c\", \"quit\", \"Quit\", show=False, priority=True),\n Binding(\"tab\", \"focus_next\", \"Focus Next\", show=False),\n Binding(\"shift+tab\", \"focus_previous\", \"Focus Previous\", show=False),\n ]\n\n title: Reactive[str] = Reactive(\"\")\n sub_title: Reactive[str] = Reactive(\"\")\n dark: Reactive[bool] = Reactive(True)\n\n def __init__(\n self,\n driver_class: Type[Driver] | None = None,\n css_path: CSSPathType = None,\n watch_css: bool = False,\n ):\n # N.B. 
This must be done *before* we call the parent constructor, because MessagePump's\n # constructor instantiates a `asyncio.PriorityQueue` and in Python versions older than 3.10\n # this will create some first references to an asyncio loop.\n _init_uvloop()\n\n super().__init__()\n self.features: frozenset[FeatureFlag] = parse_features(os.getenv(\"TEXTUAL\", \"\"))\n\n self._filter: LineFilter | None = None\n environ = dict(os.environ)\n no_color = environ.pop(\"NO_COLOR\", None)\n if no_color is not None:\n self._filter = Monochrome()\n\n self._writer_thread: _WriterThread | None = None\n if sys.__stdout__ is None:\n file = _NullFile()\n else:\n self._writer_thread = _WriterThread()\n self._writer_thread.start()\n file = self._writer_thread\n\n self.console = Console(\n file=file,\n markup=False,\n highlight=False,\n emoji=False,\n legacy_windows=False,\n _environ=environ,\n )\n self.error_console = Console(markup=False, stderr=True)\n self.driver_class = driver_class or self.get_driver_class()\n self._screen_stack: list[Screen] = []\n self._sync_available = False\n\n self.mouse_over: Widget | None = None\n self.mouse_captured: Widget | None = None\n self._driver: Driver | None = None\n self._exit_renderables: list[RenderableType] = []\n\n self._action_targets = {\"app\", \"screen\"}\n self._animator = Animator(self)\n self._animate = self._animator.bind(self)\n self.mouse_position = Offset(0, 0)\n self.title = (\n self.TITLE if self.TITLE is not None else f\"{self.__class__.__name__}\"\n )\n self.sub_title = self.SUB_TITLE if self.SUB_TITLE is not None else \"\"\n\n self._logger = Logger(self._log)\n\n self._refresh_required = False\n\n self.design = DEFAULT_COLORS\n\n self.stylesheet = Stylesheet(variables=self.get_css_variables())\n self._require_stylesheet_update: set[DOMNode] = set()\n\n css_path = css_path or self.CSS_PATH\n if css_path is not None:\n # When value(s) are supplied for CSS_PATH, we normalise them to a list of Paths.\n if isinstance(css_path, str):\n css_paths = [Path(css_path)]\n elif isinstance(css_path, PurePath):\n css_paths = [css_path]\n elif isinstance(css_path, list):\n css_paths = []\n for path in css_path:\n css_paths.append(Path(path) if isinstance(path, str) else path)\n else:\n raise CssPathError(\n \"Expected a str, Path or list[str | Path] for the CSS_PATH.\"\n )\n\n # We want the CSS path to be resolved from the location of the App subclass\n css_paths = [\n _make_path_object_relative(css_path, self) for css_path in css_paths\n ]\n else:\n css_paths = []\n\n self.css_path = css_paths\n self._registry: WeakSet[DOMNode] = WeakSet()\n\n self._installed_screens: WeakValueDictionary[\n str, Screen | Callable[[], Screen]\n ] = WeakValueDictionary()\n self._installed_screens.update(**self.SCREENS)\n\n self.devtools: DevtoolsClient | None = None\n if \"devtools\" in self.features:\n try:\n from .devtools.client import DevtoolsClient\n except ImportError:\n # Dev dependencies not installed\n pass\n else:\n self.devtools = DevtoolsClient()\n\n self._return_value: ReturnType | None = None\n self._exit = False\n\n self.css_monitor = (\n FileMonitor(self.css_path, self._on_css_change)\n if ((watch_css or self.debug) and self.css_path)\n else None\n )\n self._screenshot: str | None = None\n self._dom_lock = asyncio.Lock()\n self._dom_ready = False\n self.set_class(self.dark, \"-dark-mode\")\n\n @property\n def return_value(self) -> ReturnType | None:\n \"\"\"ReturnType | None: The return type of the app.\"\"\"\n return self._return_value\n\n def animate(\n self,\n 
attribute: str,\n value: float | Animatable,\n *,\n final_value: object = ...,\n duration: float | None = None,\n speed: float | None = None,\n delay: float = 0.0,\n easing: EasingFunction | str = DEFAULT_EASING,\n on_complete: CallbackType | None = None,\n ) -> None:\n \"\"\"Animate an attribute.\n\n Args:\n attribute (str): Name of the attribute to animate.\n value (float | Animatable): The value to animate to.\n final_value (object, optional): The final value of the animation. Defaults to `value` if not set.\n duration (float | None, optional): The duration of the animate. Defaults to None.\n speed (float | None, optional): The speed of the animation. Defaults to None.\n delay (float, optional): A delay (in seconds) before the animation starts. Defaults to 0.0.\n easing (EasingFunction | str, optional): An easing method. Defaults to \"in_out_cubic\".\n on_complete (CallbackType | None, optional): A callable to invoke when the animation is finished. Defaults to None.\n\n \"\"\"\n self._animate(\n attribute,\n value,\n final_value=final_value,\n duration=duration,\n speed=speed,\n delay=delay,\n easing=easing,\n on_complete=on_complete,\n )\n\n @property\n def debug(self) -> bool:\n \"\"\"bool: Is debug mode is enabled?\"\"\"\n return \"debug\" in self.features\n\n @property\n def is_headless(self) -> bool:\n \"\"\"bool: Is the app running in 'headless' mode?\"\"\"\n return False if self._driver is None else self._driver.is_headless\n\n @property\n def screen_stack(self) -> list[Screen]:\n \"\"\"list[Screen]: A *copy* of the screen stack.\"\"\"\n return self._screen_stack.copy()\n\n def exit(\n self, result: ReturnType | None = None, message: RenderableType | None = None\n ) -> None:\n \"\"\"Exit the app, and return the supplied result.\n\n Args:\n result (ReturnType | None, optional): Return value. Defaults to None.\n message (RenderableType | None): Optional message to display on exit.\n \"\"\"\n self._exit = True\n self._return_value = result\n self.post_message_no_wait(messages.ExitApp(sender=self))\n if message:\n self._exit_renderables.append(message)\n\n @property\n def focused(self) -> Widget | None:\n \"\"\"Widget | None: the widget that is focused on the currently active screen.\"\"\"\n return self.screen.focused\n\n @property\n def namespace_bindings(self) -> dict[str, tuple[DOMNode, Binding]]:\n \"\"\"Get current bindings. If no widget is focused, then the app-level bindings\n are returned. 
If a widget is focused, then any bindings present in the active\n screen and app are merged and returned.\"\"\"\n\n namespace_binding_map: dict[str, tuple[DOMNode, Binding]] = {}\n for namespace, bindings in reversed(self._binding_chain):\n for key, binding in bindings.keys.items():\n namespace_binding_map[key] = (namespace, binding)\n\n return namespace_binding_map\n\n def _set_active(self) -> None:\n \"\"\"Set this app to be the currently active app.\"\"\"\n active_app.set(self)\n\n def compose(self) -> ComposeResult:\n \"\"\"Yield child widgets for a container.\"\"\"\n yield from ()\n\n def get_css_variables(self) -> dict[str, str]:\n \"\"\"Get a mapping of variables used to pre-populate CSS.\n\n Returns:\n dict[str, str]: A mapping of variable name to value.\n \"\"\"\n variables = self.design[\"dark\" if self.dark else \"light\"].generate()\n return variables\n\n def watch_dark(self, dark: bool) -> None:\n \"\"\"Watches the dark bool.\"\"\"\n self.set_class(dark, \"-dark-mode\")\n self.set_class(not dark, \"-light-mode\")\n try:\n self.refresh_css()\n except ScreenStackError:\n # It's possible that `dark` can be set before we have a default\n # screen, in an app's `on_load`, for example. So let's eat the\n # ScreenStackError -- the above styles will be handled once the\n # screen is spun up anyway.\n pass\n\n def get_driver_class(self) -> Type[Driver]:\n \"\"\"Get a driver class for this platform.\n\n Called by the constructor.\n\n Returns:\n Driver: A Driver class which manages input and display.\n \"\"\"\n driver_class: Type[Driver]\n if WINDOWS:\n from .drivers.windows_driver import WindowsDriver\n\n driver_class = WindowsDriver\n else:\n from .drivers.linux_driver import LinuxDriver\n\n driver_class = LinuxDriver\n return driver_class\n\n def __rich_repr__(self) -> rich.repr.Result:\n yield \"title\", self.title\n yield \"id\", self.id, None\n if self.name:\n yield \"name\", self.name\n if self.classes:\n yield \"classes\", set(self.classes)\n pseudo_classes = self.pseudo_classes\n if pseudo_classes:\n yield \"pseudo_classes\", set(pseudo_classes)\n\n @property\n def is_transparent(self) -> bool:\n return True\n\n @property\n def animator(self) -> Animator:\n return self._animator\n\n @property\n def screen(self) -> Screen:\n \"\"\"Screen: The current screen.\n\n Raises:\n ScreenStackError: If there are no screens on the stack.\n \"\"\"\n try:\n return self._screen_stack[-1]\n except IndexError:\n raise ScreenStackError(\"No screens on stack\") from None\n\n @property\n def size(self) -> Size:\n \"\"\"Size: The size of the terminal.\"\"\"\n if self._driver is not None and self._driver._size is not None:\n width, height = self._driver._size\n else:\n width, height = self.console.size\n return Size(width, height)\n\n @property\n def log(self) -> Logger:\n \"\"\"Logger: The logger object.\"\"\"\n return self._logger\n\n def _log(\n self,\n group: LogGroup,\n verbosity: LogVerbosity,\n _textual_calling_frame: inspect.FrameInfo,\n *objects: Any,\n **kwargs,\n ) -> None:\n \"\"\"Write to logs or devtools.\n\n Positional args will logged. Keyword args will be prefixed with the key.\n\n Example:\n ```python\n data = [1,2,3]\n self.log(\"Hello, World\", state=data)\n self.log(self.tree)\n self.log(locals())\n ```\n\n Args:\n verbosity (int, optional): Verbosity level 0-3. 
Defaults to 1.\n \"\"\"\n\n devtools = self.devtools\n if devtools is None or not devtools.is_connected:\n return\n\n if verbosity.value > LogVerbosity.NORMAL.value and not devtools.verbose:\n return\n\n try:\n from .devtools.client import DevtoolsLog\n\n if len(objects) == 1 and not kwargs:\n devtools.log(\n DevtoolsLog(objects, caller=_textual_calling_frame),\n group,\n verbosity,\n )\n else:\n output = \" \".join(str(arg) for arg in objects)\n if kwargs:\n key_values = \" \".join(\n f\"{key}={value!r}\" for key, value in kwargs.items()\n )\n output = f\"{output} {key_values}\" if output else key_values\n devtools.log(\n DevtoolsLog(output, caller=_textual_calling_frame),\n group,\n verbosity,\n )\n except Exception as error:\n self._handle_exception(error)\n\n def action_toggle_dark(self) -> None:\n \"\"\"Action to toggle dark mode.\"\"\"\n self.dark = not self.dark\n\n def action_screenshot(self, filename: str | None = None, path: str = \"./\") -> None:\n \"\"\"Save an SVG \"screenshot\". This action will save an SVG file containing the current contents of the screen.\n\n Args:\n filename (str | None, optional): Filename of screenshot, or None to auto-generate. Defaults to None.\n path (str, optional): Path to directory. Defaults to current working directory.\n \"\"\"\n self.save_screenshot(filename, path)\n\n def export_screenshot(self, *, title: str | None = None) -> str:\n \"\"\"Export an SVG screenshot of the current screen.\n\n Args:\n title (str | None, optional): The title of the exported screenshot or None\n to use app title. Defaults to None.\n\n \"\"\"\n assert self._driver is not None, \"App must be running\"\n width, height = self.size\n console = Console(\n width=width,\n height=height,\n file=io.StringIO(),\n force_terminal=True,\n color_system=\"truecolor\",\n record=True,\n legacy_windows=False,\n )\n screen_render = self.screen._compositor.render(full=True)\n console.print(screen_render)\n return console.export_svg(title=title or self.title)\n\n def save_screenshot(\n self,\n filename: str | None = None,\n path: str = \"./\",\n time_format: str = \"%Y%m%d %H%M%S %f\",\n ) -> str:\n \"\"\"Save an SVG screenshot of the current screen.\n\n Args:\n filename (str | None, optional): Filename of SVG screenshot, or None to auto-generate\n a filename with the date and time. Defaults to None.\n path (str, optional): Path to directory for output. Defaults to current working directory.\n time_format (str, optional): Time format to use if filename is None. Defaults to \"%Y-%m-%d %X %f\".\n\n Returns:\n str: Filename of screenshot.\n \"\"\"\n if filename is None:\n svg_filename = (\n f\"{self.title.lower()} {datetime.now().strftime(time_format)}.svg\"\n )\n for reserved in '<>:\"/\\\\|?*':\n svg_filename = svg_filename.replace(reserved, \"_\")\n else:\n svg_filename = filename\n svg_path = os.path.expanduser(os.path.join(path, svg_filename))\n screenshot_svg = self.export_screenshot()\n with open(svg_path, \"w\", encoding=\"utf-8\") as svg_file:\n svg_file.write(screenshot_svg)\n return svg_path\n\n def bind(\n self,\n keys: str,\n action: str,\n *,\n description: str = \"\",\n show: bool = True,\n key_display: str | None = None,\n ) -> None:\n \"\"\"Bind a key to an action.\n\n Args:\n keys (str): A comma separated list of keys, i.e.\n action (str): Action to bind to.\n description (str, optional): Short description of action. Defaults to \"\".\n show (bool, optional): Show key in UI. Defaults to True.\n key_display (str, optional): Replacement text for key, or None to use default. 
Defaults to None.\n \"\"\"\n self._bindings.bind(\n keys, action, description, show=show, key_display=key_display\n )\n\n def get_key_display(self, key: str) -> str:\n \"\"\"For a given key, return how it should be displayed in an app\n (e.g. in the Footer widget).\n By key, we refer to the string used in the \"key\" argument for\n a Binding instance. By overriding this method, you can ensure that\n keys are displayed consistently throughout your app, without\n needing to add a key_display to every binding.\n\n Args:\n key (str): The binding key string.\n\n Returns:\n str: The display string for the input key.\n \"\"\"\n return _get_key_display(key)\n\n async def _press_keys(self, keys: Iterable[str]) -> None:\n \"\"\"A task to send key events.\"\"\"\n app = self\n driver = app._driver\n assert driver is not None\n await asyncio.sleep(0.02)\n for key in keys:\n if key == \"_\":\n print(\"(pause 50ms)\")\n await asyncio.sleep(0.05)\n elif key.startswith(\"wait:\"):\n _, wait_ms = key.split(\":\")\n print(f\"(pause {wait_ms}ms)\")\n await asyncio.sleep(float(wait_ms) / 1000)\n else:\n if len(key) == 1 and not key.isalnum():\n key = (\n unicodedata.name(key)\n .lower()\n .replace(\"-\", \"_\")\n .replace(\" \", \"_\")\n )\n original_key = REPLACED_KEYS.get(key, key)\n char: str | None\n try:\n char = unicodedata.lookup(original_key.upper().replace(\"_\", \" \"))\n except KeyError:\n char = key if len(key) == 1 else None\n print(f\"press {key!r} (char={char!r})\")\n key_event = events.Key(app, key, char)\n driver.send_event(key_event)\n # TODO: A bit of a fudge - extra sleep after tabbing to help guard against race\n # condition between widget-level key handling and app/screen level handling.\n # More information here: https://github.com/Textualize/textual/issues/1009\n # This conditional sleep can be removed after that issue is closed.\n if key == \"tab\":\n await asyncio.sleep(0.05)\n await asyncio.sleep(0.025)\n await app._animator.wait_for_idle()\n\n @asynccontextmanager\n async def run_test(\n self,\n *,\n headless: bool = True,\n size: tuple[int, int] | None = (80, 24),\n ):\n \"\"\"An asynchronous context manager for testing app.\n\n Args:\n headless (bool, optional): Run in headless mode (no output or input). Defaults to True.\n size (tuple[int, int] | None, optional): Force terminal size to `(WIDTH, HEIGHT)`,\n or None to auto-detect. Defaults to None.\n\n \"\"\"\n from .pilot import Pilot\n\n app = self\n app_ready_event = asyncio.Event()\n\n def on_app_ready() -> None:\n \"\"\"Called when app is ready to process events.\"\"\"\n app_ready_event.set()\n\n async def run_app(app) -> None:\n await app._process_messages(\n ready_callback=on_app_ready,\n headless=headless,\n terminal_size=size,\n )\n\n # Launch the app in the \"background\"\n app_task = asyncio.create_task(run_app(app))\n\n # Wait until the app has performed all startup routines.\n await app_ready_event.wait()\n\n # Get the app in an active state.\n app._set_active()\n\n # Context manager returns pilot object to manipulate the app\n try:\n yield Pilot(app)\n finally:\n # Shutdown the app cleanly\n await app._shutdown()\n await app_task\n\n async def run_async(\n self,\n *,\n headless: bool = False,\n size: tuple[int, int] | None = None,\n auto_pilot: AutopilotCallbackType | None = None,\n ) -> ReturnType | None:\n \"\"\"Run the app asynchronously.\n\n Args:\n headless (bool, optional): Run in headless mode (no output). 
Defaults to False.\n size (tuple[int, int] | None, optional): Force terminal size to `(WIDTH, HEIGHT)`,\n or None to auto-detect. Defaults to None.\n auto_pilot (AutopilotCallbackType): An auto pilot coroutine.\n\n Returns:\n ReturnType | None: App return value.\n \"\"\"\n from .pilot import Pilot\n\n app = self\n\n auto_pilot_task: Task | None = None\n\n async def app_ready() -> None:\n \"\"\"Called by the message loop when the app is ready.\"\"\"\n nonlocal auto_pilot_task\n if auto_pilot is not None:\n\n async def run_auto_pilot(\n auto_pilot: AutopilotCallbackType, pilot: Pilot\n ) -> None:\n try:\n await auto_pilot(pilot)\n except Exception:\n app.exit()\n raise\n\n pilot = Pilot(app)\n auto_pilot_task = asyncio.create_task(run_auto_pilot(auto_pilot, pilot))\n\n try:\n await app._process_messages(\n ready_callback=None if auto_pilot is None else app_ready,\n headless=headless,\n terminal_size=size,\n )\n finally:\n try:\n if auto_pilot_task is not None:\n await auto_pilot_task\n finally:\n await app._shutdown()\n\n return app.return_value\n\n def run(\n self,\n *,\n headless: bool = False,\n size: tuple[int, int] | None = None,\n auto_pilot: AutopilotCallbackType | None = None,\n ) -> ReturnType | None:\n \"\"\"Run the app.\n\n Args:\n headless (bool, optional): Run in headless mode (no output). Defaults to False.\n size (tuple[int, int] | None, optional): Force terminal size to `(WIDTH, HEIGHT)`,\n or None to auto-detect. Defaults to None.\n auto_pilot (AutopilotCallbackType): An auto pilot coroutine.\n\n Returns:\n ReturnType | None: App return value.\n \"\"\"\n\n async def run_app() -> None:\n \"\"\"Run the app.\"\"\"\n await self.run_async(\n headless=headless,\n size=size,\n auto_pilot=auto_pilot,\n )\n\n if _ASYNCIO_GET_EVENT_LOOP_IS_DEPRECATED:\n # N.B. 
This doesn't work with Python<3.10, as we end up with 2 event loops:\n asyncio.run(run_app())\n else:\n # However, this works with Python<3.10:\n event_loop = asyncio.get_event_loop()\n event_loop.run_until_complete(run_app())\n return self.return_value\n\n async def _on_css_change(self) -> None:\n \"\"\"Called when the CSS changes (if watch_css is True).\"\"\"\n css_paths = self.css_path\n if css_paths:\n try:\n time = perf_counter()\n stylesheet = self.stylesheet.copy()\n stylesheet.read_all(css_paths)\n stylesheet.parse()\n elapsed = (perf_counter() - time) * 1000\n self.log.system(\n f\"<stylesheet> loaded {len(css_paths)} CSS files in {elapsed:.0f} ms\"\n )\n except Exception as error:\n # TODO: Catch specific exceptions\n self.log.error(error)\n self.bell()\n else:\n self.stylesheet = stylesheet\n self.reset_styles()\n self.stylesheet.update(self)\n self.screen.refresh(layout=True)\n\n def render(self) -> RenderableType:\n return Blank(self.styles.background)\n\n ExpectType = TypeVar(\"ExpectType\", bound=Widget)\n\n @overload\n def get_child_by_id(self, id: str) -> Widget:\n ...\n\n @overload\n def get_child_by_id(self, id: str, expect_type: type[ExpectType]) -> ExpectType:\n ...\n\n def get_child_by_id(\n self, id: str, expect_type: type[ExpectType] | None = None\n ) -> ExpectType | Widget:\n \"\"\"Shorthand for self.screen.get_child(id: str)\n Returns the first child (immediate descendent) of this DOMNode\n with the given ID.\n\n Args:\n id (str): The ID of the node to search for.\n expect_type (type | None, optional): Require the object be of the supplied type, or None for any type.\n Defaults to None.\n\n Returns:\n ExpectType | Widget: The first child of this node with the specified ID.\n\n Raises:\n NoMatches: if no children could be found for this ID\n WrongType: if the wrong type was found.\n \"\"\"\n return (\n self.screen.get_child_by_id(id)\n if expect_type is None\n else self.screen.get_child_by_id(id, expect_type)\n )\n\n @overload\n def get_widget_by_id(self, id: str) -> Widget:\n ...\n\n @overload\n def get_widget_by_id(self, id: str, expect_type: type[ExpectType]) -> ExpectType:\n ...\n\n def get_widget_by_id(\n self, id: str, expect_type: type[ExpectType] | None = None\n ) -> ExpectType | Widget:\n \"\"\"Shorthand for self.screen.get_widget_by_id(id)\n Return the first descendant widget with the given ID.\n\n Performs a breadth-first search rooted at the current screen.\n It will not return the Screen if that matches the ID.\n To get the screen, use `self.screen`.\n\n Args:\n id (str): The ID to search for in the subtree\n expect_type (type | None, optional): Require the object be of the supplied type, or None for any type.\n Defaults to None.\n\n Returns:\n ExpectType | Widget: The first descendant encountered with this ID.\n\n Raises:\n NoMatches: if no children could be found for this ID\n WrongType: if the wrong type was found.\n \"\"\"\n return (\n self.screen.get_widget_by_id(id)\n if expect_type is None\n else self.screen.get_widget_by_id(id, expect_type)\n )\n\n def update_styles(self, node: DOMNode | None = None) -> None:\n \"\"\"Request update of styles.\n\n Should be called whenever CSS classes / pseudo classes change.\n\n \"\"\"\n self._require_stylesheet_update.add(self.screen if node is None else node)\n self.check_idle()\n\n def mount(\n self,\n *widgets: Widget,\n before: int | str | Widget | None = None,\n after: int | str | Widget | None = None,\n ) -> AwaitMount:\n \"\"\"Mount the given widgets relative to the app's screen.\n\n Args:\n *widgets 
(Widget): The widget(s) to mount.\n before (int | str | Widget, optional): Optional location to mount before.\n after (int | str | Widget, optional): Optional location to mount after.\n\n Returns:\n AwaitMount: An awaitable object that waits for widgets to be mounted.\n\n Raises:\n MountError: If there is a problem with the mount request.\n\n Note:\n Only one of ``before`` or ``after`` can be provided. If both are\n provided a ``MountError`` will be raised.\n \"\"\"\n return self.screen.mount(*widgets, before=before, after=after)\n\n def mount_all(\n self,\n widgets: Iterable[Widget],\n before: int | str | Widget | None = None,\n after: int | str | Widget | None = None,\n ) -> AwaitMount:\n \"\"\"Mount widgets from an iterable.\n\n Args:\n widgets (Iterable[Widget]): An iterable of widgets.\n before (int | str | Widget, optional): Optional location to mount before.\n after (int | str | Widget, optional): Optional location to mount after.\n\n Returns:\n AwaitMount: An awaitable object that waits for widgets to be mounted.\n\n Raises:\n MountError: If there is a problem with the mount request.\n\n Note:\n Only one of ``before`` or ``after`` can be provided. If both are\n provided a ``MountError`` will be raised.\n \"\"\"\n return self.mount(*widgets, before=before, after=after)\n\n def is_screen_installed(self, screen: Screen | str) -> bool:\n \"\"\"Check if a given screen has been installed.\n\n Args:\n screen (Screen | str): Either a Screen object or screen name (the `name` argument when installed).\n\n Returns:\n bool: True if the screen is currently installed,\n \"\"\"\n if isinstance(screen, str):\n return screen in self._installed_screens\n else:\n return screen in self._installed_screens.values()\n\n def get_screen(self, screen: Screen | str) -> Screen:\n \"\"\"Get an installed screen.\n\n Args:\n screen (Screen | str): Either a Screen object or screen name (the `name` argument when installed).\n\n Raises:\n KeyError: If the named screen doesn't exist.\n\n Returns:\n Screen: A screen instance.\n \"\"\"\n if isinstance(screen, str):\n try:\n next_screen = self._installed_screens[screen]\n except KeyError:\n raise KeyError(f\"No screen called {screen!r} installed\") from None\n if callable(next_screen):\n next_screen = next_screen()\n self._installed_screens[screen] = next_screen\n else:\n next_screen = screen\n return next_screen\n\n def _get_screen(self, screen: Screen | str) -> tuple[Screen, AwaitMount]:\n \"\"\"Get an installed screen and an AwaitMount object.\n\n If the screen isn't running, it will be registered before it is run.\n\n Args:\n screen (Screen | str): Either a Screen object or screen name (the `name` argument when installed).\n\n Raises:\n KeyError: If the named screen doesn't exist.\n\n Returns:\n tuple[Screen, AwaitMount]: A screen instance and an awaitable that awaits the children mounting.\n\n \"\"\"\n _screen = self.get_screen(screen)\n if not _screen.is_running:\n widgets = self._register(self, _screen)\n return (_screen, AwaitMount(_screen, widgets))\n else:\n return (_screen, AwaitMount(_screen, []))\n\n def _replace_screen(self, screen: Screen) -> Screen:\n \"\"\"Handle the replaced screen.\n\n Args:\n screen (Screen): A screen object.\n\n Returns:\n Screen: The screen that was replaced.\n\n \"\"\"\n screen.post_message_no_wait(events.ScreenSuspend(self))\n self.log.system(f\"{screen} SUSPENDED\")\n if not self.is_screen_installed(screen) and screen not in self._screen_stack:\n screen.remove()\n self.log.system(f\"{screen} REMOVED\")\n return screen\n\n def 
push_screen(self, screen: Screen | str) -> AwaitMount:\n \"\"\"Push a new screen on the screen stack.\n\n Args:\n screen (Screen | str): A Screen instance or the name of an installed screen.\n\n \"\"\"\n next_screen, await_mount = self._get_screen(screen)\n self._screen_stack.append(next_screen)\n self.screen.post_message_no_wait(events.ScreenResume(self))\n self.log.system(f\"{self.screen} is current (PUSHED)\")\n return await_mount\n\n def switch_screen(self, screen: Screen | str) -> AwaitMount:\n \"\"\"Switch to another screen by replacing the top of the screen stack with a new screen.\n\n Args:\n screen (Screen | str): Either a Screen object or screen name (the `name` argument when installed).\n\n \"\"\"\n if self.screen is not screen:\n self._replace_screen(self._screen_stack.pop())\n next_screen, await_mount = self._get_screen(screen)\n self._screen_stack.append(next_screen)\n self.screen.post_message_no_wait(events.ScreenResume(self))\n self.log.system(f\"{self.screen} is current (SWITCHED)\")\n return await_mount\n return AwaitMount(self.screen, [])\n\n def install_screen(self, screen: Screen, name: str | None = None) -> AwaitMount:\n \"\"\"Install a screen.\n\n Args:\n screen (Screen): Screen to install.\n name (str | None, optional): Unique name of screen or None to auto-generate.\n Defaults to None.\n\n Raises:\n ScreenError: If the screen can't be installed.\n\n Returns:\n AwaitMount: An awaitable that awaits the mounting of the screen and its children.\n \"\"\"\n if name is None:\n name = nanoid.generate()\n if name in self._installed_screens:\n raise ScreenError(f\"Can't install screen; {name!r} is already installed\")\n if screen in self._installed_screens.values():\n raise ScreenError(\n \"Can't install screen; {screen!r} has already been installed\"\n )\n self._installed_screens[name] = screen\n _screen, await_mount = self._get_screen(name) # Ensures screen is running\n self.log.system(f\"{screen} INSTALLED name={name!r}\")\n return await_mount\n\n def uninstall_screen(self, screen: Screen | str) -> str | None:\n \"\"\"Uninstall a screen. 
If the screen was not previously installed then this\n method is a null-op.\n\n Args:\n screen (Screen | str): The screen to uninstall or the name of a installed screen.\n\n Returns:\n str | None: The name of the screen that was uninstalled, or None if no screen was uninstalled.\n \"\"\"\n if isinstance(screen, str):\n if screen not in self._installed_screens:\n return None\n uninstall_screen = self._installed_screens[screen]\n if uninstall_screen in self._screen_stack:\n raise ScreenStackError(\"Can't uninstall screen in screen stack\")\n del self._installed_screens[screen]\n self.log.system(f\"{uninstall_screen} UNINSTALLED name={screen!r}\")\n return screen\n else:\n if screen in self._screen_stack:\n raise ScreenStackError(\"Can't uninstall screen in screen stack\")\n for name, installed_screen in self._installed_screens.items():\n if installed_screen is screen:\n self._installed_screens.pop(name)\n self.log.system(f\"{screen} UNINSTALLED name={name!r}\")\n return name\n return None\n\n def pop_screen(self) -> Screen:\n \"\"\"Pop the current screen from the stack, and switch to the previous screen.\n\n Returns:\n Screen: The screen that was replaced.\n \"\"\"\n screen_stack = self._screen_stack\n if len(screen_stack) <= 1:\n raise ScreenStackError(\n \"Can't pop screen; there must be at least one screen on the stack\"\n )\n previous_screen = self._replace_screen(screen_stack.pop())\n self.screen._screen_resized(self.size)\n self.screen.post_message_no_wait(events.ScreenResume(self))\n self.log.system(f\"{self.screen} is active\")\n return previous_screen\n\n def set_focus(self, widget: Widget | None, scroll_visible: bool = True) -> None:\n \"\"\"Focus (or unfocus) a widget. A focused widget will receive key events first.\n\n Args:\n widget (Widget): Widget to focus.\n scroll_visible (bool, optional): Scroll widget in to view.\n \"\"\"\n self.screen.set_focus(widget, scroll_visible)\n\n async def _set_mouse_over(self, widget: Widget | None) -> None:\n \"\"\"Called when the mouse is over another widget.\n\n Args:\n widget (Widget | None): Widget under mouse, or None for no widgets.\n \"\"\"\n if widget is None:\n if self.mouse_over is not None:\n try:\n await self.mouse_over.post_message(events.Leave(self))\n finally:\n self.mouse_over = None\n else:\n if self.mouse_over is not widget:\n try:\n if self.mouse_over is not None:\n await self.mouse_over._forward_event(events.Leave(self))\n if widget is not None:\n await widget._forward_event(events.Enter(self))\n finally:\n self.mouse_over = widget\n\n def capture_mouse(self, widget: Widget | None) -> None:\n \"\"\"Send all mouse events to the given widget, disable mouse capture.\n\n Args:\n widget (Widget | None): If a widget, capture mouse event, or None to end mouse capture.\n \"\"\"\n if widget == self.mouse_captured:\n return\n if self.mouse_captured is not None:\n self.mouse_captured.post_message_no_wait(\n events.MouseRelease(self, self.mouse_position)\n )\n self.mouse_captured = widget\n if widget is not None:\n widget.post_message_no_wait(events.MouseCapture(self, self.mouse_position))\n\n def panic(self, *renderables: RenderableType) -> None:\n \"\"\"Exits the app then displays a message.\n\n Args:\n *renderables (RenderableType, optional): Rich renderables to display on exit.\n \"\"\"\n\n assert all(\n is_renderable(renderable) for renderable in renderables\n ), \"Can only call panic with strings or Rich renderables\"\n\n def render(renderable: RenderableType) -> list[Segment]:\n \"\"\"Render a panic renderables.\"\"\"\n segments 
= list(self.console.render(renderable, self.console.options))\n return segments\n\n pre_rendered = [Segments(render(renderable)) for renderable in renderables]\n self._exit_renderables.extend(pre_rendered)\n self._close_messages_no_wait()\n\n def _handle_exception(self, error: Exception) -> None:\n \"\"\"Called with an unhandled exception.\n\n Args:\n error (Exception): An exception instance.\n \"\"\"\n\n if hasattr(error, \"__rich__\"):\n # Exception has a rich method, so we can defer to that for the rendering\n self.panic(error)\n else:\n # Use default exception rendering\n self.fatal_error()\n\n def fatal_error(self) -> None:\n \"\"\"Exits the app after an unhandled exception.\"\"\"\n self.bell()\n traceback = Traceback(\n show_locals=True, width=None, locals_max_length=5, suppress=[rich]\n )\n self._exit_renderables.append(\n Segments(self.console.render(traceback, self.console.options))\n )\n self._close_messages_no_wait()\n\n def _print_error_renderables(self) -> None:\n for renderable in self._exit_renderables:\n self.error_console.print(renderable)\n self._exit_renderables.clear()\n\n async def _process_messages(\n self,\n ready_callback: CallbackType | None = None,\n headless: bool = False,\n terminal_size: tuple[int, int] | None = None,\n ) -> None:\n self._set_active()\n\n if self.devtools is not None:\n from .devtools.client import DevtoolsConnectionError\n\n try:\n await self.devtools.connect()\n self.log.system(f\"Connected to devtools ( {self.devtools.url} )\")\n except DevtoolsConnectionError:\n self.log.system(f\"Couldn't connect to devtools ( {self.devtools.url} )\")\n\n self.log.system(\"---\")\n\n self.log.system(driver=self.driver_class)\n self.log.system(loop=asyncio.get_running_loop())\n self.log.system(features=self.features)\n\n try:\n if self.css_path:\n self.stylesheet.read_all(self.css_path)\n for path, css, tie_breaker in self.get_default_css():\n self.stylesheet.add_source(\n css, path=path, is_default_css=True, tie_breaker=tie_breaker\n )\n if self.CSS:\n try:\n app_css_path = (\n f\"{inspect.getfile(self.__class__)}:{self.__class__.__name__}\"\n )\n except TypeError:\n app_css_path = f\"{self.__class__.__name__}\"\n self.stylesheet.add_source(\n self.CSS, path=app_css_path, is_default_css=False\n )\n except Exception as error:\n self._handle_exception(error)\n self._print_error_renderables()\n return\n\n if self.css_monitor:\n self.set_interval(0.25, self.css_monitor, name=\"css monitor\")\n self.log.system(\"[b green]STARTED[/]\", self.css_monitor)\n\n async def run_process_messages():\n \"\"\"The main message loop, invoke below.\"\"\"\n\n async def invoke_ready_callback() -> None:\n if ready_callback is not None:\n ready_result = ready_callback()\n if inspect.isawaitable(ready_result):\n await ready_result\n\n try:\n try:\n await self._dispatch_message(events.Compose(sender=self))\n await self._dispatch_message(events.Mount(sender=self))\n finally:\n self._mounted_event.set()\n\n Reactive._initialize_object(self)\n\n self.stylesheet.update(self)\n self.refresh()\n\n await self.animator.start()\n\n except Exception:\n await self.animator.stop()\n raise\n\n finally:\n self._running = True\n await self._ready()\n await invoke_ready_callback()\n\n try:\n await self._process_messages_loop()\n except asyncio.CancelledError:\n pass\n finally:\n self._running = False\n try:\n await self.animator.stop()\n finally:\n for timer in list(self._timers):\n await timer.stop()\n\n self._running = True\n try:\n load_event = events.Load(sender=self)\n await 
self._dispatch_message(load_event)\n\n driver: Driver\n driver_class = cast(\n \"type[Driver]\",\n HeadlessDriver if headless else self.driver_class,\n )\n driver = self._driver = driver_class(self.console, self, size=terminal_size)\n\n if not self._exit:\n driver.start_application_mode()\n try:\n if headless:\n await run_process_messages()\n else:\n if self.devtools is not None:\n devtools = self.devtools\n assert devtools is not None\n from .devtools.redirect_output import StdoutRedirector\n\n redirector = StdoutRedirector(devtools)\n with redirect_stderr(redirector):\n with redirect_stdout(redirector): # type: ignore\n await run_process_messages()\n else:\n null_file = _NullFile()\n with redirect_stderr(null_file):\n with redirect_stdout(null_file):\n await run_process_messages()\n\n finally:\n driver.stop_application_mode()\n except Exception as error:\n self._handle_exception(error)\n\n async def _pre_process(self) -> None:\n pass\n\n async def _ready(self) -> None:\n \"\"\"Called immediately prior to processing messages.\n\n May be used as a hook for any operations that should run first.\n\n \"\"\"\n try:\n screenshot_timer = float(os.environ.get(\"TEXTUAL_SCREENSHOT\", \"0\"))\n except ValueError:\n return\n\n screenshot_title = os.environ.get(\"TEXTUAL_SCREENSHOT_TITLE\")\n\n if not screenshot_timer:\n return\n\n async def on_screenshot():\n \"\"\"Used by docs plugin.\"\"\"\n svg = self.export_screenshot(title=screenshot_title)\n self._screenshot = svg # type: ignore\n self.exit()\n\n self.set_timer(screenshot_timer, on_screenshot, name=\"screenshot timer\")\n\n async def _on_compose(self) -> None:\n try:\n widgets = list(self.compose())\n except TypeError as error:\n raise TypeError(\n f\"{self!r} compose() returned an invalid response; {error}\"\n ) from None\n await self.mount_all(widgets)\n\n def _on_idle(self) -> None:\n \"\"\"Perform actions when there are no messages in the queue.\"\"\"\n if self._require_stylesheet_update:\n nodes: set[DOMNode] = {\n child\n for node in self._require_stylesheet_update\n for child in node.walk_children(with_self=True)\n }\n self._require_stylesheet_update.clear()\n self.stylesheet.update_nodes(nodes, animate=True)\n\n def _register_child(\n self, parent: DOMNode, child: Widget, before: int | None, after: int | None\n ) -> None:\n \"\"\"Register a widget as a child of another.\n\n Args:\n parent (DOMNode): Parent node.\n child (Widget): The child widget to register.\n widgets: The widget to register.\n before (int, optional): A location to mount before.\n after (int, option): A location to mount after.\n \"\"\"\n\n # Let's be 100% sure that we've not been asked to do a before and an\n # after at the same time. It's possible that we can remove this\n # check later on, but for the purposes of development right now,\n # it's likely a good idea to keep it here to check assumptions in\n # the rest of the code.\n if before is not None and after is not None:\n raise AppError(\"Only one of 'before' and 'after' may be specified.\")\n\n # If we don't already know about this widget...\n if child not in self._registry:\n # Now to figure out where to place it. If we've got a `before`...\n if before is not None:\n # ...it's safe to NodeList._insert before that location.\n parent.children._insert(before, child)\n elif after is not None and after != -1:\n # In this case we've got an after. -1 holds the special\n # position (for now) of meaning \"okay really what I mean is\n # do an append, like if I'd asked to add with no before or\n # after\". So... 
we insert before the next item in the node\n # list, iff after isn't -1.\n parent.children._insert(after + 1, child)\n else:\n # At this point we appear to not be adding before or after,\n # or we've got a before/after value that really means\n # \"please append\". So...\n parent.children._append(child)\n\n # Now that the widget is in the NodeList of its parent, sort out\n # the rest of the admin.\n self._registry.add(child)\n child._attach(parent)\n child._post_register(self)\n child._start_messages()\n\n def _register(\n self,\n parent: DOMNode,\n *widgets: Widget,\n before: int | None = None,\n after: int | None = None,\n ) -> list[Widget]:\n \"\"\"Register widget(s) so they may receive events.\n\n Args:\n parent (DOMNode): Parent node.\n *widgets: The widget(s) to register.\n before (int, optional): A location to mount before.\n after (int, option): A location to mount after.\n Returns:\n list[Widget]: List of modified widgets.\n\n \"\"\"\n\n if not widgets:\n return []\n\n new_widgets = list(widgets)\n if before is not None or after is not None:\n # There's a before or after, which means there's going to be an\n # insertion, so make it easier to get the new things in the\n # correct order.\n new_widgets = reversed(new_widgets)\n\n apply_stylesheet = self.stylesheet.apply\n for widget in new_widgets:\n if not isinstance(widget, Widget):\n raise AppError(f\"Can't register {widget!r}; expected a Widget instance\")\n if widget not in self._registry:\n self._register_child(parent, widget, before, after)\n if widget.children:\n self._register(widget, *widget.children)\n apply_stylesheet(widget)\n return list(widgets)\n\n def _unregister(self, widget: Widget) -> None:\n \"\"\"Unregister a widget.\n\n Args:\n widget (Widget): A Widget to unregister\n \"\"\"\n widget.reset_focus()\n if isinstance(widget._parent, Widget):\n widget._parent.children._remove(widget)\n widget._detach()\n self._registry.discard(widget)\n\n async def _disconnect_devtools(self):\n if self.devtools is not None:\n await self.devtools.disconnect()\n\n def _start_widget(self, parent: Widget, widget: Widget) -> None:\n \"\"\"Start a widget (run it's task) so that it can receive messages.\n\n Args:\n parent (Widget): The parent of the Widget.\n widget (Widget): The Widget to start.\n \"\"\"\n\n widget._attach(parent)\n widget._start_messages()\n self.app._registry.add(widget)\n\n def is_mounted(self, widget: Widget) -> bool:\n \"\"\"Check if a widget is mounted.\n\n Args:\n widget (Widget): A widget.\n\n Returns:\n bool: True of the widget is mounted.\n \"\"\"\n return widget in self._registry\n\n async def _close_all(self) -> None:\n \"\"\"Close all message pumps.\"\"\"\n\n # Close all screens on the stack\n for screen in self._screen_stack:\n if screen._running:\n await self._prune_node(screen)\n\n self._screen_stack.clear()\n\n # Close pre-defined screens\n for screen in self.SCREENS.values():\n if isinstance(screen, Screen) and screen._running:\n await self._prune_node(screen)\n\n # Close any remaining nodes\n # Should be empty by now\n remaining_nodes = list(self._registry)\n for child in remaining_nodes:\n await child._close_messages()\n\n async def _shutdown(self) -> None:\n driver = self._driver\n self._running = False\n if driver is not None:\n driver.disable_input()\n await self._close_all()\n await self._close_messages()\n\n await self._dispatch_message(events.Unmount(sender=self))\n\n self._print_error_renderables()\n if self.devtools is not None and self.devtools.is_connected:\n await 
self._disconnect_devtools()\n\n if self._writer_thread is not None:\n self._writer_thread.stop()\n\n async def _on_exit_app(self) -> None:\n await self._message_queue.put(None)\n\n def refresh(self, *, repaint: bool = True, layout: bool = False) -> None:\n if self._screen_stack:\n self.screen.refresh(repaint=repaint, layout=layout)\n self.check_idle()\n\n def refresh_css(self, animate: bool = True) -> None:\n \"\"\"Refresh CSS.\n\n Args:\n animate (bool, optional): Also execute CSS animations. Defaults to True.\n \"\"\"\n stylesheet = self.app.stylesheet\n stylesheet.set_variables(self.get_css_variables())\n stylesheet.reparse()\n stylesheet.update(self.app, animate=animate)\n self.screen._refresh_layout(self.size, full=True)\n\n def _display(self, screen: Screen, renderable: RenderableType | None) -> None:\n \"\"\"Display a renderable within a sync.\n\n Args:\n screen (Screen): Screen instance\n renderable (RenderableType): A Rich renderable.\n \"\"\"\n\n try:\n if screen is not self.screen or renderable is None:\n return\n\n if self._running and not self._closed and not self.is_headless:\n console = self.console\n self._begin_update()\n try:\n try:\n console.print(renderable)\n except Exception as error:\n self._handle_exception(error)\n finally:\n self._end_update()\n console.file.flush()\n finally:\n self.post_display_hook()\n\n def post_display_hook(self) -> None:\n \"\"\"Called immediately after a display is done. Used in tests.\"\"\"\n\n def get_widget_at(self, x: int, y: int) -> tuple[Widget, Region]:\n \"\"\"Get the widget under the given coordinates.\n\n Args:\n x (int): X Coord.\n y (int): Y Coord.\n\n Returns:\n tuple[Widget, Region]: The widget and the widget's screen region.\n \"\"\"\n return self.screen.get_widget_at(x, y)\n\n def bell(self) -> None:\n \"\"\"Play the console 'bell'.\"\"\"\n if not self.is_headless:\n self.console.bell()\n\n @property\n def _binding_chain(self) -> list[tuple[DOMNode, Bindings]]:\n \"\"\"Get a chain of nodes and bindings to consider. If no widget is focused, returns the bindings from both the screen and the app level bindings. 
Otherwise, combines all the bindings from the currently focused node up the DOM to the root App.\n\n Returns:\n list[tuple[DOMNode, Bindings]]: List of DOM nodes and their bindings.\n \"\"\"\n focused = self.focused\n namespace_bindings: list[tuple[DOMNode, Bindings]]\n if focused is None:\n namespace_bindings = [\n (self.screen, self.screen._bindings),\n (self, self._bindings),\n ]\n else:\n namespace_bindings = [\n (node, node._bindings) for node in focused.ancestors_with_self\n ]\n return namespace_bindings\n\n async def check_bindings(self, key: str, priority: bool = False) -> bool:\n \"\"\"Handle a key press.\n\n Args:\n key (str): A key.\n priority (bool): If `True` check from `App` down, otherwise from focused up.\n\n Returns:\n bool: True if the key was handled by a binding, otherwise False\n \"\"\"\n for namespace, bindings in (\n reversed(self._binding_chain) if priority else self._binding_chain\n ):\n binding = bindings.keys.get(key)\n if binding is not None and binding.priority == priority:\n if await self.action(binding.action, namespace):\n return True\n return False\n\n async def on_event(self, event: events.Event) -> None:\n # Handle input events that haven't been forwarded\n # If the event has been forwarded it may have bubbled up back to the App\n if isinstance(event, events.Compose):\n screen = Screen(id=\"_default\")\n self._register(self, screen)\n self._screen_stack.append(screen)\n await super().on_event(event)\n\n elif isinstance(event, events.InputEvent) and not event.is_forwarded:\n if isinstance(event, events.MouseEvent):\n # Record current mouse position on App\n self.mouse_position = Offset(event.x, event.y)\n await self.screen._forward_event(event)\n elif isinstance(event, events.Key):\n if not await self.check_bindings(event.key, priority=True):\n forward_target = self.focused or self.screen\n await forward_target._forward_event(event)\n else:\n await self.screen._forward_event(event)\n\n elif isinstance(event, events.Paste):\n if self.focused is not None:\n await self.focused._forward_event(event)\n else:\n await super().on_event(event)\n\n async def action(\n self,\n action: str | tuple[str, tuple[str, ...]],\n default_namespace: object | None = None,\n ) -> bool:\n \"\"\"Perform an action.\n\n Args:\n action (str): Action encoded in a string.\n default_namespace (object | None): Namespace to use if not provided in the action,\n or None to use app. 
Defaults to None.\n\n Returns:\n bool: True if the event has handled.\n \"\"\"\n print(\"ACTION\", action, default_namespace)\n if isinstance(action, str):\n target, params = actions.parse(action)\n else:\n target, params = action\n implicit_destination = True\n if \".\" in target:\n destination, action_name = target.split(\".\", 1)\n if destination not in self._action_targets:\n raise ActionError(f\"Action namespace {destination} is not known\")\n action_target = getattr(self, destination)\n implicit_destination = True\n else:\n action_target = default_namespace or self\n action_name = target\n\n handled = await self._dispatch_action(action_target, action_name, params)\n if not handled and implicit_destination and not isinstance(action_target, App):\n handled = await self.app._dispatch_action(self.app, action_name, params)\n return handled\n\n async def _dispatch_action(\n self, namespace: object, action_name: str, params: Any\n ) -> bool:\n \"\"\"Dispatch an action to an action method.\n\n Args:\n namespace (object): Namespace (object) of action.\n action_name (str): Name of the action.\n params (Any): Action parameters.\n\n Returns:\n bool: True if handled, otherwise False.\n \"\"\"\n _rich_traceback_guard = True\n\n log(\n \"<action>\",\n namespace=namespace,\n action_name=action_name,\n params=params,\n )\n\n try:\n private_method = getattr(namespace, f\"_action_{action_name}\", None)\n if callable(private_method):\n await invoke(private_method, *params)\n return True\n public_method = getattr(namespace, f\"action_{action_name}\", None)\n if callable(public_method):\n await invoke(public_method, *params)\n return True\n log(\n f\"<action> {action_name!r} has no target.\"\n f\" Could not find methods '_action_{action_name}' or 'action_{action_name}'\"\n )\n except SkipAction:\n # The action method raised this to explicitly not handle the action\n log(\"<action> {action_name!r} skipped.\")\n return False\n\n async def _broker_event(\n self, event_name: str, event: events.Event, default_namespace: object | None\n ) -> bool:\n \"\"\"Allow the app an opportunity to dispatch events to action system.\n\n Args:\n event_name (str): _description_\n event (events.Event): An event object.\n default_namespace (object | None): The default namespace, where one isn't supplied.\n\n Returns:\n bool: True if an action was processed.\n \"\"\"\n try:\n style = getattr(event, \"style\")\n except AttributeError:\n return False\n try:\n _modifiers, action = extract_handler_actions(event_name, style.meta)\n except NoHandler:\n return False\n else:\n event.stop()\n if isinstance(action, (str, tuple)):\n await self.action(action, default_namespace=default_namespace)\n elif callable(action):\n await action()\n else:\n return False\n return True\n\n async def _on_update(self, message: messages.Update) -> None:\n message.stop()\n\n async def _on_layout(self, message: messages.Layout) -> None:\n message.stop()\n\n async def _on_key(self, event: events.Key) -> None:\n if not (await self.check_bindings(event.key)):\n await self.dispatch_key(event)\n\n async def _on_shutdown_request(self, event: events.ShutdownRequest) -> None:\n log(\"shutdown request\")\n await self._close_messages()\n\n async def _on_resize(self, event: events.Resize) -> None:\n event.stop()\n await self.screen.post_message(event)\n\n def _detach_from_dom(self, widgets: list[Widget]) -> list[Widget]:\n \"\"\"Detach a list of widgets from the DOM.\n\n Args:\n widgets (list[Widget]): The list of widgets to detach from the DOM.\n\n Returns:\n 
list[Widget]: The list of widgets that should be pruned.\n\n Note:\n A side-effect of calling this function is that each parent of\n each affected widget will be made to forget about the affected\n child.\n \"\"\"\n\n # We've been given a list of widgets to remove, but removing those\n # will also result in other (descendent) widgets being removed. So\n # to start with let's get a list of everything that's not going to\n # be in the DOM by the time we've finished. Note that, at this\n # point, it's entirely possible that there will be duplicates.\n everything_to_remove: list[Widget] = []\n for widget in widgets:\n everything_to_remove.extend(\n widget.walk_children(\n Widget, with_self=True, method=\"depth\", reverse=True\n )\n )\n\n # Next up, let's quickly create a deduped collection of things to\n # remove and ensure that, if one of them is the focused widget,\n # focus gets moved to somewhere else.\n dedupe_to_remove = set(everything_to_remove)\n if self.screen.focused in dedupe_to_remove:\n self.screen._reset_focus(\n self.screen.focused,\n [to_remove for to_remove in dedupe_to_remove if to_remove.can_focus],\n )\n\n # Next, we go through the set of widgets we've been asked to remove\n # and try and find the minimal collection of widgets that will\n # result in everything else that should be removed, being removed.\n # In other words: find the smallest set of ancestors in the DOM that\n # will remove the widgets requested for removal, and also ensure\n # that all knock-on effects happen too.\n request_remove = set(widgets)\n pruned_remove = [\n widget for widget in widgets if request_remove.isdisjoint(widget.ancestors)\n ]\n\n # Now that we know that minimal set of widgets, we go through them\n # and get their parents to forget about them. This has the effect of\n # snipping each affected branch from the DOM.\n for widget in pruned_remove:\n if widget.parent is not None:\n widget.parent.children._remove(widget)\n\n # Return the list of widgets that should end up being sent off in a\n # prune event.\n return pruned_remove\n\n def _walk_children(self, root: Widget) -> Iterable[list[Widget]]:\n \"\"\"Walk children depth first, generating widgets and a list of their siblings.\n\n Returns:\n Iterable[list[Widget]]: The child widgets of root.\n\n \"\"\"\n stack: list[Widget] = [root]\n pop = stack.pop\n push = stack.append\n\n while stack:\n widget = pop()\n if widget.children:\n yield [*widget.children, *widget._get_virtual_dom()]\n for child in widget.children:\n push(child)\n\n def _remove_nodes(self, widgets: list[Widget]) -> AwaitRemove:\n \"\"\"Remove nodes from DOM, and return an awaitable that awaits cleanup.\n\n Args:\n widgets (list[Widget]): List of nodes to remvoe.\n\n Returns:\n AwaitRemove: Awaitable that returns when the nodes have been fully removed.\n \"\"\"\n\n async def prune_widgets_task(\n widgets: list[Widget], finished_event: asyncio.Event\n ) -> None:\n \"\"\"Prune widgets as a background task.\n\n Args:\n widgets (list[Widget]): Widgets to prune.\n finished_event (asyncio.Event): Event to set when complete.\n \"\"\"\n try:\n await self._prune_nodes(widgets)\n finally:\n finished_event.set()\n self.refresh(layout=True)\n\n removed_widgets = self._detach_from_dom(widgets)\n\n finished_event = asyncio.Event()\n asyncio.create_task(prune_widgets_task(removed_widgets, finished_event))\n\n return AwaitRemove(finished_event)\n\n async def _prune_nodes(self, widgets: list[Widget]) -> None:\n \"\"\"Remove nodes and children.\n\n Args:\n widgets (Widget): _description_\n 
\"\"\"\n async with self._dom_lock:\n for widget in widgets:\n await self._prune_node(widget)\n\n async def _prune_node(self, root: Widget) -> None:\n \"\"\"Remove a node and its children. Children are removed before parents.\n\n Args:\n root (Widget): Node to remove.\n \"\"\"\n # Pruning a node that has been removed is a no-op\n if root not in self._registry:\n return\n\n node_children = list(self._walk_children(root))\n\n for children in reversed(node_children):\n # Closing children can be done asynchronously.\n close_messages = [\n child._close_messages(wait=True) for child in children if child._running\n ]\n # TODO: What if a message pump refuses to exit?\n if close_messages:\n await asyncio.gather(*close_messages)\n for child in children:\n self._unregister(child)\n\n await root._close_messages(wait=False)\n self._unregister(root)\n\n async def action_check_bindings(self, key: str) -> None:\n if not await self.check_bindings(key, priority=True):\n await self.check_bindings(key, priority=False)\n\n async def action_quit(self) -> None:\n \"\"\"Quit the app as soon as possible.\"\"\"\n self.exit()\n\n async def action_bang(self) -> None:\n 1 / 0\n\n async def action_bell(self) -> None:\n \"\"\"Play the terminal 'bell'.\"\"\"\n self.bell()\n\n async def action_focus(self, widget_id: str) -> None:\n \"\"\"Focus the given widget.\n\n Args:\n widget_id (str): ID of widget to focus.\n \"\"\"\n try:\n node = self.query(f\"#{widget_id}\").first()\n except NoMatches:\n pass\n else:\n if isinstance(node, Widget):\n self.set_focus(node)\n\n async def action_switch_screen(self, screen: str) -> None:\n \"\"\"Switches to another screen.\n\n Args:\n screen (str): Name of the screen.\n \"\"\"\n self.switch_screen(screen)\n\n async def action_push_screen(self, screen: str) -> None:\n \"\"\"Pushes a screen on to the screen stack and makes it active.\n\n Args:\n screen (str): Name of the screen.\n \"\"\"\n self.push_screen(screen)\n\n async def action_pop_screen(self) -> None:\n \"\"\"Removes the topmost screen and makes the new topmost screen active.\"\"\"\n self.pop_screen()\n\n async def action_back(self) -> None:\n try:\n self.pop_screen()\n except ScreenStackError:\n pass\n\n async def action_add_class_(self, selector: str, class_name: str) -> None:\n self.screen.query(selector).add_class(class_name)\n\n async def action_remove_class_(self, selector: str, class_name: str) -> None:\n self.screen.query(selector).remove_class(class_name)\n\n async def action_toggle_class(self, selector: str, class_name: str) -> None:\n self.screen.query(selector).toggle_class(class_name)\n\n def action_focus_next(self) -> None:\n \"\"\"Focus the next widget.\"\"\"\n self.screen.focus_next()\n\n def action_focus_previous(self) -> None:\n \"\"\"Focus the previous widget.\"\"\"\n self.screen.focus_previous()\n\n def _on_terminal_supports_synchronized_output(\n self, message: messages.TerminalSupportsSynchronizedOutput\n ) -> None:\n log.system(\"[b green]SynchronizedOutput mode is supported\")\n self._sync_available = True\n\n def _begin_update(self) -> None:\n if self._sync_available:\n self.console.file.write(SYNC_START)\n\n def _end_update(self) -> None:\n if self._sync_available:\n self.console.file.write(SYNC_END)\n\n\n_uvloop_init_done: bool = False\n\n\ndef _init_uvloop() -> None:\n \"\"\"\n Import and install the `uvloop` asyncio policy, if available.\n This is done only once, even if the function is called multiple times.\n \"\"\"\n global _uvloop_init_done\n\n if _uvloop_init_done:\n return\n\n try:\n import 
uvloop\n except ImportError:\n pass\n else:\n uvloop.install()\n\n _uvloop_init_done = True\n"}
|
{"src/textual/app.py": [{"type": "function", "name": "App.call_from_thread", "lines": [614, 657], "signature": "def call_from_thread( self, callback: Callable[..., CallThreadReturnType | Awaitable[CallThreadReturnType]], *args, **kwargs, ) -> CallThreadReturnType:", "doc": "Run a callback from another thread.\n\nLike asyncio apps in general, Textual apps are not thread-safe. If you call methods\nor set attributes on Textual objects from a thread, you may get unpredictable results.\n\nThis method will ensure that your code is ran within the correct context.\n\nArgs:\n callback (Callable): A callable to run.\n *args: Arguments to the callback.\n **kwargs: Keyword arguments for the callback.\n\nRaises:\n RuntimeError: If the app isn't running or if this method is called from the same\n thread where the app is running."}]}
| null |
["tests/test_concurrency.py::test_call_from_thread_app_not_running", "tests/test_concurrency.py::test_call_from_thread"]
|
[]
|
86e93536b991014e0ea4bf993068202b446bb698
|
{"first_commit_time": 1673100292.0, "pr_title": "Call from thread method", "pr_body": "This is a step towards having an answer to devs who want to integrate a third-party API with Textual.\r\n\r\nThe `call_from_thread` takes a callback that will be called in the app's loop from a thread. It will block until the callback returns. If integrating with a threaded API (many are under the hood), this will generally give the most predictable behaviour.\r\n\r\nThere are downsides: you call it from the same thread as the app. Otherwise it would deadlock.\r\n\r\nI'll leave this method undocumented for now. We will at least have an answer for the next version, and we can work on greater syntactical sugar in the meantime.", "pr_timeline": [], "issues": {}}
|
|
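A minimal usage sketch of the `App.call_from_thread` method documented in the record above. The app structure, widget id, and helper names are illustrative assumptions, not part of the PR; the grounded behaviour is that the callback runs inside the app's event loop, the call blocks until the callback returns, and calling it from the app's own thread raises `RuntimeError`.

```python
# Sketch (assumed app layout) of App.call_from_thread: a worker thread hands
# a callback to the running Textual app, which executes it on the app's
# event loop and blocks until it completes.
import threading

from textual.app import App
from textual.widgets import Static


class ThreadedApp(App):
    def compose(self):
        yield Static("working...", id="status")  # hypothetical widget

    def on_mount(self) -> None:
        # Kick off background work in a plain thread.
        threading.Thread(target=self._background_work, daemon=True).start()

    def _background_work(self) -> None:
        # Safe cross-thread call: blocks until update_status has run inside
        # the app's loop. Calling this from the app thread itself would
        # raise RuntimeError rather than deadlock.
        self.call_from_thread(self.update_status, "done")

    def update_status(self, text: str) -> None:
        self.query_one("#status", Static).update(text)


if __name__ == "__main__":
    ThreadedApp().run()
```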
astronomer/astronomer-cosmos
| 758
|
https://github.com/astronomer/astronomer-cosmos/pull/758
|
astronomer__astronomer-cosmos-758
|
[]
|
9fcee8e785ea3bcb0d19e8d36f3f5b94bedafc98
|
diff --git a/cosmos/profiles/athena/access_key.py b/cosmos/profiles/athena/access_key.py
index a8f71c2b7..02de2be24 100644
--- a/cosmos/profiles/athena/access_key.py
+++ b/cosmos/profiles/athena/access_key.py
@@ -3,20 +3,33 @@
from typing import Any
+from cosmos.exceptions import CosmosValueError
+
from ..base import BaseProfileMapping
class AthenaAccessKeyProfileMapping(BaseProfileMapping):
"""
- Maps Airflow AWS connections to a dbt Athena profile using an access key id and secret access key.
+ Uses the Airflow AWS Connection provided to get_credentials() to generate the profile for dbt.
- https://docs.getdbt.com/docs/core/connect-data-platform/athena-setup
https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/connections/aws.html
+
+
+ This behaves similarly to other provider operators such as the AWS Athena Operator.
+ Where you pass the aws_conn_id and the operator will generate the credentials for you.
+
+ https://registry.astronomer.io/providers/amazon/versions/latest/modules/athenaoperator
+
+ Information about the dbt Athena profile that is generated can be found here:
+
+ https://github.com/dbt-athena/dbt-athena?tab=readme-ov-file#configuring-your-profile
+ https://docs.getdbt.com/docs/core/connect-data-platform/athena-setup
"""
airflow_connection_type: str = "aws"
dbt_profile_type: str = "athena"
is_community: bool = True
+ temporary_credentials = None
required_fields = [
"aws_access_key_id",
@@ -26,11 +39,7 @@ class AthenaAccessKeyProfileMapping(BaseProfileMapping):
"s3_staging_dir",
"schema",
]
- secret_fields = ["aws_secret_access_key", "aws_session_token"]
airflow_param_mapping = {
- "aws_access_key_id": "login",
- "aws_secret_access_key": "password",
- "aws_session_token": "extra.aws_session_token",
"aws_profile_name": "extra.aws_profile_name",
"database": "extra.database",
"debug_query_state": "extra.debug_query_state",
@@ -49,11 +58,43 @@ class AthenaAccessKeyProfileMapping(BaseProfileMapping):
@property
def profile(self) -> dict[str, Any | None]:
"Gets profile. The password is stored in an environment variable."
+
+ self.temporary_credentials = self._get_temporary_credentials() # type: ignore
+
profile = {
**self.mapped_params,
**self.profile_args,
- # aws_secret_access_key and aws_session_token should always get set as env var
+ "aws_access_key_id": self.temporary_credentials.access_key,
"aws_secret_access_key": self.get_env_var_format("aws_secret_access_key"),
"aws_session_token": self.get_env_var_format("aws_session_token"),
}
+
return self.filter_null(profile)
+
+ @property
+ def env_vars(self) -> dict[str, str]:
+ "Overwrites the env_vars for athena, Returns a dictionary of environment variables that should be set based on the self.temporary_credentials."
+
+ if self.temporary_credentials is None:
+ raise CosmosValueError(f"Could not find the athena credentials.")
+
+ env_vars = {}
+
+ env_secret_key_name = self.get_env_var_name("aws_secret_access_key")
+ env_session_token_name = self.get_env_var_name("aws_session_token")
+
+ env_vars[env_secret_key_name] = str(self.temporary_credentials.secret_key)
+ env_vars[env_session_token_name] = str(self.temporary_credentials.token)
+
+ return env_vars
+
+ def _get_temporary_credentials(self): # type: ignore
+ """
+ Helper function to retrieve temporary short lived credentials
+ Returns an object including access_key, secret_key and token
+ """
+ from airflow.providers.amazon.aws.hooks.base_aws import AwsGenericHook
+
+ hook = AwsGenericHook(self.conn_id) # type: ignore
+ credentials = hook.get_credentials()
+ return credentials
diff --git a/pyproject.toml b/pyproject.toml
index c08de4ade..9d367c075 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -57,6 +57,7 @@ dbt-all = [
]
dbt-athena = [
"dbt-athena-community",
+ "apache-airflow-providers-amazon>=8.0.0",
]
dbt-bigquery = [
"dbt-bigquery",
@@ -110,7 +111,6 @@ tests = [
"mypy",
"sqlalchemy-stubs", # Change when sqlalchemy is upgraded https://docs.sqlalchemy.org/en/14/orm/extensions/mypy.html
]
-
docker = [
"apache-airflow-providers-docker>=3.5.0",
]
@@ -121,7 +121,6 @@ pydantic = [
"pydantic>=1.10.0,<2.0.0",
]
-
[project.entry-points.cosmos]
provider_info = "cosmos:get_provider_info"
|
diff --git a/tests/profiles/athena/test_athena_access_key.py b/tests/profiles/athena/test_athena_access_key.py
index 22c8efa2c..c224a9d4b 100644
--- a/tests/profiles/athena/test_athena_access_key.py
+++ b/tests/profiles/athena/test_athena_access_key.py
@@ -1,20 +1,49 @@
"Tests for the Athena profile."
import json
-from unittest.mock import patch
-
+from collections import namedtuple
+import sys
+from unittest.mock import MagicMock, patch
import pytest
from airflow.models.connection import Connection
from cosmos.profiles import get_automatic_profile_mapping
from cosmos.profiles.athena.access_key import AthenaAccessKeyProfileMapping
+Credentials = namedtuple("Credentials", ["access_key", "secret_key", "token"])
+
+mock_assumed_credentials = Credentials(
+ secret_key="my_aws_assumed_secret_key",
+ access_key="my_aws_assumed_access_key",
+ token="my_aws_assumed_token",
+)
+
+mock_missing_credentials = Credentials(access_key=None, secret_key=None, token=None)
+
+
[email protected](autouse=True)
+def mock_aws_module():
+ mock_aws_hook = MagicMock()
+
+ class MockAwsGenericHook:
+ def __init__(self, conn_id: str) -> None:
+ pass
+
+ def get_credentials(self) -> Credentials:
+ return mock_assumed_credentials
+
+ mock_aws_hook.AwsGenericHook = MockAwsGenericHook
+
+ with patch.dict(sys.modules, {"airflow.providers.amazon.aws.hooks.base_aws": mock_aws_hook}):
+ yield mock_aws_hook
+
@pytest.fixture()
def mock_athena_conn(): # type: ignore
"""
Sets the connection as an environment variable.
"""
+
conn = Connection(
conn_id="my_athena_connection",
conn_type="aws",
@@ -24,7 +53,7 @@ def mock_athena_conn(): # type: ignore
{
"aws_session_token": "token123",
"database": "my_database",
- "region_name": "my_region",
+ "region_name": "us-east-1",
"s3_staging_dir": "s3://my_bucket/dbt/",
"schema": "my_schema",
}
@@ -48,6 +77,7 @@ def test_athena_connection_claiming() -> None:
# - region_name
# - s3_staging_dir
# - schema
+
potential_values = {
"conn_type": "aws",
"login": "my_aws_access_key_id",
@@ -55,7 +85,7 @@ def test_athena_connection_claiming() -> None:
"extra": json.dumps(
{
"database": "my_database",
- "region_name": "my_region",
+ "region_name": "us-east-1",
"s3_staging_dir": "s3://my_bucket/dbt/",
"schema": "my_schema",
}
@@ -68,12 +98,14 @@ def test_athena_connection_claiming() -> None:
del values[key]
conn = Connection(**values) # type: ignore
- print("testing with", values)
-
- with patch("airflow.hooks.base.BaseHook.get_connection", return_value=conn):
- # should raise an InvalidMappingException
- profile_mapping = AthenaAccessKeyProfileMapping(conn, {})
- assert not profile_mapping.can_claim_connection()
+ with patch(
+ "cosmos.profiles.athena.access_key.AthenaAccessKeyProfileMapping._get_temporary_credentials",
+ return_value=mock_missing_credentials,
+ ):
+ with patch("airflow.hooks.base.BaseHook.get_connection", return_value=conn):
+ # should raise an InvalidMappingException
+ profile_mapping = AthenaAccessKeyProfileMapping(conn, {})
+ assert not profile_mapping.can_claim_connection()
# if we have them all, it should claim
conn = Connection(**potential_values) # type: ignore
@@ -88,6 +120,7 @@ def test_athena_profile_mapping_selected(
"""
Tests that the correct profile mapping is selected for Athena.
"""
+
profile_mapping = get_automatic_profile_mapping(
mock_athena_conn.conn_id,
)
@@ -100,13 +133,14 @@ def test_athena_profile_args(
"""
Tests that the profile values get set correctly for Athena.
"""
+
profile_mapping = get_automatic_profile_mapping(
mock_athena_conn.conn_id,
)
assert profile_mapping.profile == {
"type": "athena",
- "aws_access_key_id": mock_athena_conn.login,
+ "aws_access_key_id": mock_assumed_credentials.access_key,
"aws_secret_access_key": "{{ env_var('COSMOS_CONN_AWS_AWS_SECRET_ACCESS_KEY') }}",
"aws_session_token": "{{ env_var('COSMOS_CONN_AWS_AWS_SESSION_TOKEN') }}",
"database": mock_athena_conn.extra_dejson.get("database"),
@@ -122,9 +156,14 @@ def test_athena_profile_args_overrides(
"""
Tests that you can override the profile values for Athena.
"""
+
profile_mapping = get_automatic_profile_mapping(
mock_athena_conn.conn_id,
- profile_args={"schema": "my_custom_schema", "database": "my_custom_db", "aws_session_token": "override_token"},
+ profile_args={
+ "schema": "my_custom_schema",
+ "database": "my_custom_db",
+ "aws_session_token": "override_token",
+ },
)
assert profile_mapping.profile_args == {
@@ -135,7 +174,7 @@ def test_athena_profile_args_overrides(
assert profile_mapping.profile == {
"type": "athena",
- "aws_access_key_id": mock_athena_conn.login,
+ "aws_access_key_id": mock_assumed_credentials.access_key,
"aws_secret_access_key": "{{ env_var('COSMOS_CONN_AWS_AWS_SECRET_ACCESS_KEY') }}",
"aws_session_token": "{{ env_var('COSMOS_CONN_AWS_AWS_SESSION_TOKEN') }}",
"database": "my_custom_db",
@@ -151,10 +190,12 @@ def test_athena_profile_env_vars(
"""
Tests that the environment variables get set correctly for Athena.
"""
+
profile_mapping = get_automatic_profile_mapping(
mock_athena_conn.conn_id,
)
+
assert profile_mapping.env_vars == {
- "COSMOS_CONN_AWS_AWS_SECRET_ACCESS_KEY": mock_athena_conn.password,
- "COSMOS_CONN_AWS_AWS_SESSION_TOKEN": mock_athena_conn.extra_dejson.get("aws_session_token"),
+ "COSMOS_CONN_AWS_AWS_SECRET_ACCESS_KEY": mock_assumed_credentials.secret_key,
+ "COSMOS_CONN_AWS_AWS_SESSION_TOKEN": mock_assumed_credentials.token,
}
| 2023-12-11T01:54:27
|
{}
|
{"cosmos/profiles/athena/access_key.py": "\"Maps Airflow AWS connections to a dbt Athena profile using an access key id and secret access key.\"\nfrom __future__ import annotations\n\nfrom typing import Any\n\nfrom ..base import BaseProfileMapping\n\n\nclass AthenaAccessKeyProfileMapping(BaseProfileMapping):\n \"\"\"\n Maps Airflow AWS connections to a dbt Athena profile using an access key id and secret access key.\n\n https://docs.getdbt.com/docs/core/connect-data-platform/athena-setup\n https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/connections/aws.html\n \"\"\"\n\n airflow_connection_type: str = \"aws\"\n dbt_profile_type: str = \"athena\"\n is_community: bool = True\n\n required_fields = [\n \"aws_access_key_id\",\n \"aws_secret_access_key\",\n \"database\",\n \"region_name\",\n \"s3_staging_dir\",\n \"schema\",\n ]\n secret_fields = [\"aws_secret_access_key\", \"aws_session_token\"]\n airflow_param_mapping = {\n \"aws_access_key_id\": \"login\",\n \"aws_secret_access_key\": \"password\",\n \"aws_session_token\": \"extra.aws_session_token\",\n \"aws_profile_name\": \"extra.aws_profile_name\",\n \"database\": \"extra.database\",\n \"debug_query_state\": \"extra.debug_query_state\",\n \"lf_tags_database\": \"extra.lf_tags_database\",\n \"num_retries\": \"extra.num_retries\",\n \"poll_interval\": \"extra.poll_interval\",\n \"region_name\": \"extra.region_name\",\n \"s3_data_dir\": \"extra.s3_data_dir\",\n \"s3_data_naming\": \"extra.s3_data_naming\",\n \"s3_staging_dir\": \"extra.s3_staging_dir\",\n \"schema\": \"extra.schema\",\n \"seed_s3_upload_args\": \"extra.seed_s3_upload_args\",\n \"work_group\": \"extra.work_group\",\n }\n\n @property\n def profile(self) -> dict[str, Any | None]:\n \"Gets profile. The password is stored in an environment variable.\"\n profile = {\n **self.mapped_params,\n **self.profile_args,\n # aws_secret_access_key and aws_session_token should always get set as env var\n \"aws_secret_access_key\": self.get_env_var_format(\"aws_secret_access_key\"),\n \"aws_session_token\": self.get_env_var_format(\"aws_session_token\"),\n }\n return self.filter_null(profile)\n", "pyproject.toml": "[build-system]\nrequires = [\"hatchling\"]\nbuild-backend = \"hatchling.build\"\n\n[project]\nname = \"astronomer-cosmos\"\ndynamic = [\"version\"]\ndescription = \"Render 3rd party workflows in Airflow\"\nreadme = \"README.rst\"\nlicense = \"Apache-2.0\"\nrequires-python = \">=3.8\"\nauthors = [\n { name = \"Astronomer\", email = \"[email protected]\" },\n]\nkeywords = [\n \"airflow\",\n \"apache-airflow\",\n \"astronomer\",\n \"dags\",\n \"dbt\",\n]\nclassifiers = [\n \"Development Status :: 3 - Alpha\",\n \"Environment :: Web Environment\",\n \"Framework :: Apache Airflow\",\n \"Framework :: Apache Airflow :: Provider\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n]\ndependencies = [\n \"aenum\",\n \"attrs\",\n \"apache-airflow>=2.3.0\",\n \"importlib-metadata; python_version < '3.8'\",\n \"Jinja2>=3.0.0\",\n \"typing-extensions; python_version < '3.8'\",\n \"virtualenv\",\n]\n\n[project.optional-dependencies]\ndbt-all = [\n \"dbt-athena\",\n \"dbt-bigquery\",\n \"dbt-databricks\",\n \"dbt-exasol\",\n \"dbt-postgres\",\n \"dbt-redshift\",\n 
\"dbt-snowflake\",\n \"dbt-spark\",\n \"dbt-vertica\",\n]\ndbt-athena = [\n \"dbt-athena-community\",\n]\ndbt-bigquery = [\n \"dbt-bigquery\",\n]\ndbt-databricks = [\n \"dbt-databricks\",\n]\ndbt-exasol = [\n \"dbt-exasol\",\n]\ndbt-postgres = [\n \"dbt-postgres\",\n]\ndbt-redshift = [\n \"dbt-redshift\",\n]\ndbt-snowflake = [\n \"dbt-snowflake\",\n]\ndbt-spark = [\n \"dbt-spark\",\n]\ndbt-vertica = [\n \"dbt-vertica<=1.5.4\",\n]\nopenlineage = [\n \"openlineage-integration-common\",\n \"openlineage-airflow\",\n]\nall = [\n \"astronomer-cosmos[dbt-all]\",\n \"astronomer-cosmos[openlineage]\"\n]\ndocs =[\n \"sphinx\",\n \"pydata-sphinx-theme\",\n \"sphinx-autobuild\",\n \"sphinx-autoapi\",\n \"apache-airflow-providers-cncf-kubernetes>=5.1.1\"\n]\ntests = [\n \"packaging\",\n \"pytest>=6.0\",\n \"pytest-split\",\n \"pytest-dotenv\",\n \"requests-mock\",\n \"pytest-cov\",\n \"pytest-describe\",\n \"sqlalchemy-stubs\", # Change when sqlalchemy is upgraded https://docs.sqlalchemy.org/en/14/orm/extensions/mypy.html\n \"types-requests\",\n \"mypy\",\n \"sqlalchemy-stubs\", # Change when sqlalchemy is upgraded https://docs.sqlalchemy.org/en/14/orm/extensions/mypy.html\n]\n\ndocker = [\n \"apache-airflow-providers-docker>=3.5.0\",\n]\nkubernetes = [\n \"apache-airflow-providers-cncf-kubernetes>=5.1.1\",\n]\npydantic = [\n \"pydantic>=1.10.0,<2.0.0\",\n]\n\n\n[project.entry-points.cosmos]\nprovider_info = \"cosmos:get_provider_info\"\n\n[project.urls]\nHomepage = \"https://github.com/astronomer/astronomer-cosmos\"\nDocumentation = \"https://astronomer.github.io/astronomer-cosmos\"\n\"Source code\" = \"https://github.com/astronomer/astronomer-cosmos\"\n\n[tool.hatch.version]\npath = \"cosmos/__init__.py\"\n\n[tool.hatch.build.targets.sdist]\ninclude = [\n \"/cosmos\",\n]\n\n[tool.hatch.build.targets.wheel]\npackages = [\"cosmos\"]\n\n######################################\n# TESTING\n######################################\n\n[tool.hatch.envs.tests]\ndependencies = [\n \"astronomer-cosmos[tests]\",\n \"apache-airflow-providers-docker>=3.5.0\",\n \"apache-airflow-providers-cncf-kubernetes>=5.1.1\",\n \"types-PyYAML\",\n \"types-attrs\",\n \"types-requests\",\n \"types-python-dateutil\",\n \"apache-airflow\",\n \"Werkzeug<3.0.0\",\n]\n\n[[tool.hatch.envs.tests.matrix]]\npython = [\"3.8\", \"3.9\", \"3.10\"]\nairflow = [\"2.3\", \"2.4\", \"2.5\", \"2.6\", \"2.7\"]\n\n[tool.hatch.envs.tests.overrides]\nmatrix.airflow.dependencies = [\n { value = \"apache-airflow==2.3\", if = [\"2.3\"] },\n { value = \"apache-airflow==2.4\", if = [\"2.4\"] },\n { value = \"apache-airflow==2.5\", if = [\"2.5\"] },\n { value = \"apache-airflow==2.6\", if = [\"2.6\"] },\n { value = \"pydantic>=1.10.0,<2.0.0\", if = [\"2.6\"]},\n { value = \"apache-airflow==2.7\", if = [\"2.7\"] },\n]\n\n[tool.hatch.envs.tests.scripts]\nfreeze = \"pip freeze\"\ntype-check = \"mypy cosmos\"\ntest = 'pytest -vv --durations=0 . 
-m \"not integration\" --ignore=tests/test_example_dags.py --ignore=tests/test_example_dags_no_connections.py'\ntest-cov = \"\"\"pytest -vv --cov=cosmos --cov-report=term-missing --cov-report=xml --durations=0 -m \"not integration\" --ignore=tests/test_example_dags.py --ignore=tests/test_example_dags_no_connections.py\"\"\"\n# we install using the following workaround to overcome installation conflicts, such as:\n# apache-airflow 2.3.0 and dbt-core [0.13.0 - 1.5.2] and jinja2>=3.0.0 because these package versions have conflicting dependencies\ntest-integration-setup = \"\"\"pip uninstall dbt-postgres dbt-databricks dbt-vertica; \\\nrm -rf airflow.*; \\\nairflow db init; \\\npip install 'dbt-core' 'dbt-databricks' 'dbt-postgres' 'dbt-vertica' 'openlineage-airflow'\"\"\"\ntest-integration = \"\"\"rm -rf dbt/jaffle_shop/dbt_packages;\npytest -vv \\\n--cov=cosmos \\\n--cov-report=term-missing \\\n--cov-report=xml \\\n--durations=0 \\\n-m integration \\\n-k 'not (sqlite or example_cosmos_sources or example_cosmos_python_models or example_virtualenv)'\"\"\"\ntest-integration-expensive = \"\"\"pytest -vv \\\n--cov=cosmos \\\n--cov-report=term-missing \\\n--cov-report=xml \\\n--durations=0 \\\n-m integration \\\n-k 'example_cosmos_python_models or example_virtualenv'\"\"\"\ntest-integration-sqlite-setup = \"\"\"pip uninstall -y dbt-core dbt-sqlite openlineage-airflow openlineage-integration-common; \\\nrm -rf airflow.*; \\\nairflow db init; \\\npip install 'dbt-core==1.4' 'dbt-sqlite<=1.4' 'dbt-databricks<=1.4' 'dbt-postgres<=1.4' \"\"\"\ntest-integration-sqlite = \"\"\"\npytest -vv \\\n--cov=cosmos \\\n--cov-report=term-missing \\\n--cov-report=xml \\\n--durations=0 \\\n-m integration \\\n-k 'example_cosmos_sources or sqlite'\"\"\"\n\n[tool.pytest.ini_options]\nfilterwarnings = [\n \"ignore::DeprecationWarning\",\n]\nminversion = \"6.0\"\nmarkers = [\n \"integration\",\n \"sqlite\"\n]\n\n######################################\n# DOCS\n######################################\n\n[tool.hatch.envs.docs]\ndependencies = [\n \"aenum\",\n \"sphinx\",\n \"pydata-sphinx-theme\",\n \"sphinx-autobuild\",\n \"sphinx-autoapi\",\n \"openlineage-airflow\",\n \"apache-airflow-providers-cncf-kubernetes>=5.1.1\"\n]\n\n[tool.hatch.envs.docs.scripts]\nbuild = \"sphinx-build -b html docs docs/_build\"\nserve = \"sphinx-autobuild docs docs/_build\"\n\n######################################\n# THIRD PARTY TOOLS\n######################################\n[tool.black]\nline-length = 120\ntarget-version = ['py37', 'py38', 'py39', 'py310']\n\n[tool.isort]\nprofile = \"black\"\nknown_third_party = [\"airflow\", \"jinja2\"]\n\n[tool.mypy]\nstrict = true\nignore_missing_imports = true\nno_warn_unused_ignores = true\n\n[tool.ruff]\nline-length = 120\n[tool.ruff.lint]\nselect = [\"C901\"]\n[tool.ruff.lint.mccabe]\nmax-complexity = 8\n\n[tool.distutils.bdist_wheel]\nuniversal = true\n"}
|
diff --git a/pyproject.toml b/pyproject.toml
index c08de4ade..9d367c075 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -57,6 +57,7 @@ dbt-all = [
]
dbt-athena = [
"dbt-athena-community",
+ "apache-airflow-providers-amazon>=8.0.0",
]
dbt-bigquery = [
"dbt-bigquery",
@@ -110,7 +111,6 @@ tests = [
"mypy",
"sqlalchemy-stubs", # Change when sqlalchemy is upgraded https://docs.sqlalchemy.org/en/14/orm/extensions/mypy.html
]
-
docker = [
"apache-airflow-providers-docker>=3.5.0",
]
@@ -121,7 +121,6 @@ pydantic = [
"pydantic>=1.10.0,<2.0.0",
]
-
[project.entry-points.cosmos]
provider_info = "cosmos:get_provider_info"
|
{"cosmos/profiles/athena/access_key.py": [{"type": "function", "name": "AthenaAccessKeyProfileMapping.env_vars", "lines": [75, 89], "signature": "def env_vars(self) -> dict[str, str]:", "doc": "Overwrites the env_vars for athena, Returns a dictionary of environment variables that should be set based on the self.temporary_credentials."}, {"type": "function", "name": "AthenaAccessKeyProfileMapping._get_temporary_credentials", "lines": [91, 100], "signature": "def _get_temporary_credentials(self):", "doc": "Helper function to retrieve temporary short lived credentials\nReturns an object including access_key, secret_key and token"}]}
| null |
["tests/profiles/athena/test_athena_access_key.py::test_athena_connection_claiming", "tests/profiles/athena/test_athena_access_key.py::test_athena_profile_args", "tests/profiles/athena/test_athena_access_key.py::test_athena_profile_args_overrides", "tests/profiles/athena/test_athena_access_key.py::test_athena_profile_env_vars"]
|
["tests/profiles/athena/test_athena_access_key.py::test_athena_profile_mapping_selected"]
|
c5edba07d2265d5185eaba149a639e8a0740e498
|
{"first_commit_time": 1702073118.0, "pr_title": "Athena - Get temporary credentials from the conn_id", "pr_body": "## Description\r\n\r\n<!-- Add a brief but complete description of the change. -->\r\n\r\nPasses the `conn_id` to the `AwsGenericHook` and uses `get_credentials()`, which handles the creation of a session, credentials, freezing of credentials & also masking. \r\n\r\n[See get_credentials() docs here](https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/_api/airflow/providers/amazon/aws/hooks/base_aws/index.html#airflow.providers.amazon.aws.hooks.base_aws.AwsGenericHook.get_credentials)\r\n\r\n## Related Issue(s)\r\n\r\n#691 \r\n<!-- If this PR closes an issue, you can use a keyword to auto-close. -->\r\n<!-- i.e. \"closes #0000\" -->\r\n\r\n## Breaking Change?\r\n\r\n<!-- If this introduces a breaking change, specify that here. -->\r\n\r\n## Checklist\r\n\r\n- [ ] I have made corresponding changes to the documentation (if required)\r\n- [X] I have added tests that prove my fix is effective or that my feature works\r\n", "pr_timeline": [{"time": 1702516996.0, "comment": "### <span aria-hidden=\"true\">\ud83d\udc77</span> Deploy Preview for *amazing-pothos-a3bca0* processing.\n\n\n| Name | Link |\n|:-:|------------------------|\n|<span aria-hidden=\"true\">\ud83d\udd28</span> Latest commit | 67615488341c4fbba77dd631e7d8f43ff1fbda36 |\n|<span aria-hidden=\"true\">\ud83d\udd0d</span> Latest deploy log | https://app.netlify.com/sites/amazing-pothos-a3bca0/deploys/657a59029dc4d1000891bfde |"}, {"time": 1702280483.0, "comment": "Do you mean dbt-Athena or dbt-athena-community. Dbt-Athena is the original module but the community version is much improved and optimised. https://github.com/dbt-athena/dbt-athena"}, {"time": 1702284012.0, "comment": "@pixie79 the `dbt-athena` references the project optional dependency on line 58 \r\n\r\n```\r\ndbt-athena = [\r\n \"dbt-athena-community\",\r\n]\r\n```"}, {"time": 1702338636.0, "comment": "@jbandoro Thanks for the feedback. If the user has an Airflow connection with no `extra.role_arn` provided, their credentials (secret + key id) used in the DBT profile will be unchanged, except there will now always be a `aws_session_token`."}, {"time": 1702343774.0, "comment": "Tests failing due to #761"}, {"time": 1702457551.0, "comment": "LGTM, I tried this out today and it seems to work for my use-case -- unfortunately I don't have time to look into the memory issue and verify that this works at scale, but that's hopefully going to be resolved with the use of the AWS hook. Ran a couple of models concurrently and didn't run into any issues."}, {"time": 1702518365.0, "comment": "## [Codecov](https://app.codecov.io/gh/astronomer/astronomer-cosmos/pull/758?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=astronomer) Report\nAttention: `1 lines` in your changes are missing coverage. 
Please review.\n> Comparison is base [(`9fcee8e`)](https://app.codecov.io/gh/astronomer/astronomer-cosmos/commit/9fcee8e785ea3bcb0d19e8d36f3f5b94bedafc98?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=astronomer) 93.22% compared to head [(`6761548`)](https://app.codecov.io/gh/astronomer/astronomer-cosmos/pull/758?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=astronomer) 93.22%.\n\n| [Files](https://app.codecov.io/gh/astronomer/astronomer-cosmos/pull/758?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=astronomer) | Patch % | Lines |\n|---|---|---|\n| [cosmos/profiles/athena/access\\_key.py](https://app.codecov.io/gh/astronomer/astronomer-cosmos/pull/758?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=astronomer#diff-Y29zbW9zL3Byb2ZpbGVzL2F0aGVuYS9hY2Nlc3Nfa2V5LnB5) | 94.44% | [1 Missing :warning: ](https://app.codecov.io/gh/astronomer/astronomer-cosmos/pull/758?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=astronomer) |\n\n<details><summary>Additional details and impacted files</summary>\n\n\n```diff\n@@ Coverage Diff @@\n## main #758 +/- ##\n=======================================\n Coverage 93.22% 93.22% \n=======================================\n Files 55 55 \n Lines 2464 2481 +17 \n=======================================\n+ Hits 2297 2313 +16 \n- Misses 167 168 +1 \n```\n\n\n\n</details>\n\n[:umbrella: View full report in Codecov by Sentry](https://app.codecov.io/gh/astronomer/astronomer-cosmos/pull/758?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=astronomer). \n:loudspeaker: Have feedback on the report? [Share it here](https://about.codecov.io/codecov-pr-comment-feedback/?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=astronomer).\n"}, {"time": 1702509972.0, "comment": "@tatiana I am unable to write a test for the `_get_temporary_credentials`method, as I am unable to import `airflow.providers.amazon` via the test dependencies as we get some dependency conflicts when using Python 3.8/3.9 + Airflow 2.3. \r\n\r\nThe amazon provider at 8.0.0 is compatible with 2.3 and python down to 3.7.\r\n\r\nhttps://pypi.org/project/apache-airflow-providers-amazon/8.0.0/\r\n\r\nI tried many ways to fix this on the weekend, but could not. I ended up resolving this by removing the dependency and mocking the class function. \r\n\r\nThis has the unfortunately side-affect of reducing our test coverage"}, {"time": 1702512862.0, "comment": "Thanks @octiva addressing all the feedback! For your question below on test coverage:\r\n\r\n> I tried many ways to fix this on the weekend, but could not. 
I ended up resolving this by removing the dependency and mocking the class function.\r\n> \r\n> This has the unfortunately side-affect of reducing our test coverage\r\n\r\nYou can get test coverage for the `_get_temporary_credential` method by instead patching the provider module that can't be imported, like in the examples [here](https://github.com/astronomer/astronomer-cosmos/blob/9fcee8e785ea3bcb0d19e8d36f3f5b94bedafc98/tests/operators/test_local.py#L439) and [here](https://github.com/astronomer/astronomer-cosmos/blob/9fcee8e785ea3bcb0d19e8d36f3f5b94bedafc98/tests/test_export.py#L76-L83).\r\n\r\nIt might be easier to do this patching in a fixture and reuse it in all of tests that require it."}, {"time": 1702515418.0, "comment": "reposting from correct account @jbandoro Thanks for the hint. I had attempted this, but was doing it at the wrong level, and now ive got it working quite nicely. Let me know what you think \ud83d\ude80 "}], "issues": {}}
|
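A short sketch of the credential flow introduced by the Athena patch in the record above. The helper function is illustrative; the grounded part is that `AwsGenericHook(conn_id).get_credentials()` returns frozen credentials exposing `access_key`, `secret_key` and `token`, which the profile mapping splits between the rendered dbt profile and Cosmos-managed environment variables.

```python
# Sketch (assumed helper, not from the PR) of how the Athena profile mapping
# obtains short-lived credentials from an Airflow AWS connection.
from airflow.providers.amazon.aws.hooks.base_aws import AwsGenericHook


def athena_credentials(conn_id: str):
    """Fetch short-lived AWS credentials the way the profile mapping does."""
    hook = AwsGenericHook(conn_id)
    # get_credentials() returns frozen credentials with access_key,
    # secret_key and token attributes.
    return hook.get_credentials()


creds = athena_credentials("my_athena_connection")
# Per the patch above, only the access key id goes directly into the dbt
# profile; the secret key and session token are exported as environment
# variables (COSMOS_CONN_AWS_AWS_SECRET_ACCESS_KEY and
# COSMOS_CONN_AWS_AWS_SESSION_TOKEN) and referenced from the profile via
# dbt's env_var() templating.
print(creds.access_key, bool(creds.secret_key), bool(creds.token))
```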
astropy/astropy
| 13094
|
https://github.com/astropy/astropy/pull/13094
|
astropy__astropy-13094
|
[]
|
583464d40b32313da6b864d2f260c06d1a0e67e6
|
diff --git a/astropy/wcs/wcs.py b/astropy/wcs/wcs.py
index 6d508279375d..3903ad49cc1c 100644
--- a/astropy/wcs/wcs.py
+++ b/astropy/wcs/wcs.py
@@ -3226,6 +3226,21 @@ def has_spectral(self):
except InconsistentAxisTypesError:
return False
+ @property
+ def temporal(self):
+ """
+ A copy of the current WCS with only the time axes included
+ """
+ return self.sub([WCSSUB_TIME]) # Defined by C-ext # noqa: F821
+
+ @property
+ def is_temporal(self):
+ return self.has_temporal and self.naxis == 1
+
+ @property
+ def has_temporal(self):
+ return any(t // 1000 == 4 for t in self.wcs.axis_types)
+
@property
def has_distortion(self):
"""
diff --git a/docs/changes/wcs/13094.feature.rst b/docs/changes/wcs/13094.feature.rst
new file mode 100644
index 000000000000..e6b718e0a4e0
--- /dev/null
+++ b/docs/changes/wcs/13094.feature.rst
@@ -0,0 +1,2 @@
+Add ``temporal`` properties for convenient access of/selection of/testing for
+the ``TIME`` axis introduced in WCSLIB version 7.8.
|
diff --git a/astropy/wcs/tests/test_wcs.py b/astropy/wcs/tests/test_wcs.py
index f51b50906e80..9aca3b34b1de 100644
--- a/astropy/wcs/tests/test_wcs.py
+++ b/astropy/wcs/tests/test_wcs.py
@@ -1534,10 +1534,24 @@ def test_pixlist_wcs_colsel():
_WCSLIB_VER < Version('7.8'),
reason="TIME axis extraction only works with wcslib 7.8 or later"
)
-def test_time_axis_selection(tab_wcs_2di_f):
+def test_time_axis_selection():
w = wcs.WCS(naxis=3)
w.wcs.ctype = ['RA---TAN', 'DEC--TAN', 'TIME']
w.wcs.set()
assert list(w.sub([wcs.WCSSUB_TIME]).wcs.ctype) == ['TIME']
assert (w.wcs_pix2world([[1, 2, 3]], 0)[0, 2] ==
w.sub([wcs.WCSSUB_TIME]).wcs_pix2world([[3]], 0)[0, 0])
+
+
[email protected](
+ _WCSLIB_VER < Version('7.8'),
+ reason="TIME axis extraction only works with wcslib 7.8 or later"
+)
+def test_temporal():
+ w = wcs.WCS(naxis=3)
+ w.wcs.ctype = ['RA---TAN', 'DEC--TAN', 'TIME']
+ w.wcs.set()
+ assert w.has_temporal
+ assert w.sub([wcs.WCSSUB_TIME]).is_temporal
+ assert (w.wcs_pix2world([[1, 2, 3]], 0)[0, 2] ==
+ w.temporal.wcs_pix2world([[3]], 0)[0, 0])
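The new test above doubles as a usage example; the following standalone sketch repeats the same calls outside pytest (wcslib >= 7.8 assumed).

```python
# Standalone sketch of the new WCS time-axis helpers; mirrors the test added
# above rather than introducing any new behaviour.
from astropy import wcs

w = wcs.WCS(naxis=3)
w.wcs.ctype = ['RA---TAN', 'DEC--TAN', 'TIME']
w.wcs.set()

assert w.has_temporal            # a TIME axis is present somewhere
time_wcs = w.temporal            # sub-WCS containing only the TIME axis
assert time_wcs.is_temporal      # 1-D and purely temporal

# The TIME axis transforms identically through the full and reduced WCS.
assert w.wcs_pix2world([[1, 2, 3]], 0)[0, 2] == time_wcs.wcs_pix2world([[3]], 0)[0, 0]
```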
| 2022-04-10T18:52:18
|
{}
|
{"astropy/wcs/wcs.py": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\n# Under the hood, there are 3 separate classes that perform different\n# parts of the transformation:\n#\n# - `~astropy.wcs.Wcsprm`: Is a direct wrapper of the core WCS\n# functionality in `wcslib`_. (This includes TPV and TPD\n# polynomial distortion, but not SIP distortion).\n#\n# - `~astropy.wcs.Sip`: Handles polynomial distortion as defined in the\n# `SIP`_ convention.\n#\n# - `~astropy.wcs.DistortionLookupTable`: Handles `distortion paper`_\n# lookup tables.\n#\n# Additionally, the class `WCS` aggregates all of these transformations\n# together in a pipeline:\n#\n# - Detector to image plane correction (by a pair of\n# `~astropy.wcs.DistortionLookupTable` objects).\n#\n# - `SIP`_ distortion correction (by an underlying `~astropy.wcs.Sip`\n# object)\n#\n# - `distortion paper`_ table-lookup correction (by a pair of\n# `~astropy.wcs.DistortionLookupTable` objects).\n#\n# - `wcslib`_ WCS transformation (by a `~astropy.wcs.Wcsprm` object)\n\n# STDLIB\nimport copy\nimport uuid\nimport io\nimport itertools\nimport os\nimport re\nimport textwrap\nimport warnings\nimport builtins\n\n# THIRD-PARTY\nimport numpy as np\n\n# LOCAL\nfrom astropy import log\nfrom astropy.io import fits\nfrom . import docstrings\nfrom . import _wcs\n\nfrom astropy import units as u\nfrom astropy.utils.compat import possible_filename\nfrom astropy.utils.exceptions import AstropyWarning, AstropyUserWarning, AstropyDeprecationWarning\nfrom astropy.utils.decorators import deprecated_renamed_argument\n\n# Mix-in class that provides the APE 14 API\nfrom .wcsapi.fitswcs import FITSWCSAPIMixin, SlicedFITSWCS\n\n__all__ = ['FITSFixedWarning', 'WCS', 'find_all_wcs',\n 'DistortionLookupTable', 'Sip', 'Tabprm', 'Wcsprm', 'Auxprm',\n 'Celprm', 'Prjprm', 'Wtbarr', 'WCSBase', 'validate', 'WcsError',\n 'SingularMatrixError', 'InconsistentAxisTypesError',\n 'InvalidTransformError', 'InvalidCoordinateError',\n 'InvalidPrjParametersError', 'NoSolutionError',\n 'InvalidSubimageSpecificationError', 'NoConvergence',\n 'NonseparableSubimageCoordinateSystemError',\n 'NoWcsKeywordsFoundError', 'InvalidTabularParametersError']\n\n\n__doctest_skip__ = ['WCS.all_world2pix']\n\n\nif _wcs is not None:\n _parsed_version = _wcs.__version__.split('.')\n if int(_parsed_version[0]) == 5 and int(_parsed_version[1]) < 8:\n raise ImportError(\n \"astropy.wcs is built with wcslib {0}, but only versions 5.8 and \"\n \"later on the 5.x series are known to work. 
The version of wcslib \"\n \"that ships with astropy may be used.\")\n\n if not _wcs._sanity_check():\n raise RuntimeError(\n \"astropy.wcs did not pass its sanity check for your build \"\n \"on your platform.\")\n\n WCSBase = _wcs._Wcs\n DistortionLookupTable = _wcs.DistortionLookupTable\n Sip = _wcs.Sip\n Wcsprm = _wcs.Wcsprm\n Auxprm = _wcs.Auxprm\n Celprm = _wcs.Celprm\n Prjprm = _wcs.Prjprm\n Tabprm = _wcs.Tabprm\n Wtbarr = _wcs.Wtbarr\n WcsError = _wcs.WcsError\n SingularMatrixError = _wcs.SingularMatrixError\n InconsistentAxisTypesError = _wcs.InconsistentAxisTypesError\n InvalidTransformError = _wcs.InvalidTransformError\n InvalidCoordinateError = _wcs.InvalidCoordinateError\n NoSolutionError = _wcs.NoSolutionError\n InvalidSubimageSpecificationError = _wcs.InvalidSubimageSpecificationError\n NonseparableSubimageCoordinateSystemError = _wcs.NonseparableSubimageCoordinateSystemError\n NoWcsKeywordsFoundError = _wcs.NoWcsKeywordsFoundError\n InvalidTabularParametersError = _wcs.InvalidTabularParametersError\n InvalidPrjParametersError = _wcs.InvalidPrjParametersError\n\n # Copy all the constants from the C extension into this module's namespace\n for key, val in _wcs.__dict__.items():\n if key.startswith(('WCSSUB_', 'WCSHDR_', 'WCSHDO_', 'WCSCOMPARE_', 'PRJ_')):\n locals()[key] = val\n __all__.append(key)\n\n # Set coordinate extraction callback for WCS -TAB:\n def _load_tab_bintable(hdulist, extnam, extver, extlev, kind, ttype, row, ndim):\n arr = hdulist[(extnam, extver)].data[ttype][row - 1]\n\n if arr.ndim != ndim:\n if kind == 'c' and ndim == 2:\n arr = arr.reshape((arr.size, 1))\n else:\n raise ValueError(\"Bad TDIM\")\n\n return np.ascontiguousarray(arr, dtype=np.double)\n\n _wcs.set_wtbarr_fitsio_callback(_load_tab_bintable)\n\nelse:\n WCSBase = object\n Wcsprm = object\n DistortionLookupTable = object\n Sip = object\n Tabprm = object\n Wtbarr = object\n WcsError = None\n SingularMatrixError = None\n InconsistentAxisTypesError = None\n InvalidTransformError = None\n InvalidCoordinateError = None\n NoSolutionError = None\n InvalidSubimageSpecificationError = None\n NonseparableSubimageCoordinateSystemError = None\n NoWcsKeywordsFoundError = None\n InvalidTabularParametersError = None\n\n\n# Additional relax bit flags\nWCSHDO_SIP = 0x80000\n\n# Regular expression defining SIP keyword It matches keyword that starts with A\n# or B, optionally followed by P, followed by an underscore then a number in\n# range of 0-19, followed by an underscore and another number in range of 0-19.\n# Keyword optionally ends with a capital letter.\nSIP_KW = re.compile('''^[AB]P?_1?[0-9]_1?[0-9][A-Z]?$''')\n\n\ndef _parse_keysel(keysel):\n keysel_flags = 0\n if keysel is not None:\n for element in keysel:\n if element.lower() == 'image':\n keysel_flags |= _wcs.WCSHDR_IMGHEAD\n elif element.lower() == 'binary':\n keysel_flags |= _wcs.WCSHDR_BIMGARR\n elif element.lower() == 'pixel':\n keysel_flags |= _wcs.WCSHDR_PIXLIST\n else:\n raise ValueError(\n \"keysel must be a list of 'image', 'binary' \" +\n \"and/or 'pixel'\")\n else:\n keysel_flags = -1\n\n return keysel_flags\n\n\nclass NoConvergence(Exception):\n \"\"\"\n An error class used to report non-convergence and/or divergence\n of numerical methods. 
It is used to report errors in the\n iterative solution used by\n the :py:meth:`~astropy.wcs.WCS.all_world2pix`.\n\n Attributes\n ----------\n\n best_solution : `numpy.ndarray`\n Best solution achieved by the numerical method.\n\n accuracy : `numpy.ndarray`\n Accuracy of the ``best_solution``.\n\n niter : `int`\n Number of iterations performed by the numerical method\n to compute ``best_solution``.\n\n divergent : None, `numpy.ndarray`\n Indices of the points in ``best_solution`` array\n for which the solution appears to be divergent. If the\n solution does not diverge, ``divergent`` will be set to `None`.\n\n slow_conv : None, `numpy.ndarray`\n Indices of the solutions in ``best_solution`` array\n for which the solution failed to converge within the\n specified maximum number of iterations. If there are no\n non-converging solutions (i.e., if the required accuracy\n has been achieved for all input data points)\n then ``slow_conv`` will be set to `None`.\n\n \"\"\"\n\n def __init__(self, *args, best_solution=None, accuracy=None, niter=None,\n divergent=None, slow_conv=None, **kwargs):\n super().__init__(*args)\n\n self.best_solution = best_solution\n self.accuracy = accuracy\n self.niter = niter\n self.divergent = divergent\n self.slow_conv = slow_conv\n\n if kwargs:\n warnings.warn(\"Function received unexpected arguments ({}) these \"\n \"are ignored but will raise an Exception in the \"\n \"future.\".format(list(kwargs)),\n AstropyDeprecationWarning)\n\n\nclass FITSFixedWarning(AstropyWarning):\n \"\"\"\n The warning raised when the contents of the FITS header have been\n modified to be standards compliant.\n \"\"\"\n pass\n\n\nclass WCS(FITSWCSAPIMixin, WCSBase):\n \"\"\"WCS objects perform standard WCS transformations, and correct for\n `SIP`_ and `distortion paper`_ table-lookup transformations, based\n on the WCS keywords and supplementary data read from a FITS file.\n\n See also: https://docs.astropy.org/en/stable/wcs/\n\n Parameters\n ----------\n header : `~astropy.io.fits.Header`, `~astropy.io.fits.hdu.image.PrimaryHDU`, `~astropy.io.fits.hdu.image.ImageHDU`, str, dict-like, or None, optional\n If *header* is not provided or None, the object will be\n initialized to default values.\n\n fobj : `~astropy.io.fits.HDUList`, optional\n It is needed when header keywords point to a `distortion\n paper`_ lookup table stored in a different extension.\n\n key : str, optional\n The name of a particular WCS transform to use. This may be\n either ``' '`` or ``'A'``-``'Z'`` and corresponds to the\n ``\\\"a\\\"`` part of the ``CTYPEia`` cards. *key* may only be\n provided if *header* is also provided.\n\n minerr : float, optional\n The minimum value a distortion correction must have in order\n to be applied. If the value of ``CQERRja`` is smaller than\n *minerr*, the corresponding distortion is not applied.\n\n relax : bool or int, optional\n Degree of permissiveness:\n\n - `True` (default): Admit all recognized informal extensions\n of the WCS standard.\n\n - `False`: Recognize only FITS keywords defined by the\n published WCS standard.\n\n - `int`: a bit field selecting specific extensions to accept.\n See :ref:`astropy:relaxread` for details.\n\n naxis : int or sequence, optional\n Extracts specific coordinate axes using\n :meth:`~astropy.wcs.Wcsprm.sub`. If a header is provided, and\n *naxis* is not ``None``, *naxis* will be passed to\n :meth:`~astropy.wcs.Wcsprm.sub` in order to select specific\n axes from the header. 
See :meth:`~astropy.wcs.Wcsprm.sub` for\n more details about this parameter.\n\n keysel : sequence of str, optional\n A sequence of flags used to select the keyword types\n considered by wcslib. When ``None``, only the standard image\n header keywords are considered (and the underlying wcspih() C\n function is called). To use binary table image array or pixel\n list keywords, *keysel* must be set.\n\n Each element in the list should be one of the following\n strings:\n\n - 'image': Image header keywords\n\n - 'binary': Binary table image array keywords\n\n - 'pixel': Pixel list keywords\n\n Keywords such as ``EQUIna`` or ``RFRQna`` that are common to\n binary table image arrays and pixel lists (including\n ``WCSNna`` and ``TWCSna``) are selected by both 'binary' and\n 'pixel'.\n\n colsel : sequence of int, optional\n A sequence of table column numbers used to restrict the WCS\n transformations considered to only those pertaining to the\n specified columns. If `None`, there is no restriction.\n\n fix : bool, optional\n When `True` (default), call `~astropy.wcs.Wcsprm.fix` on\n the resulting object to fix any non-standard uses in the\n header. `FITSFixedWarning` Warnings will be emitted if any\n changes were made.\n\n translate_units : str, optional\n Specify which potentially unsafe translations of non-standard\n unit strings to perform. By default, performs none. See\n `WCS.fix` for more information about this parameter. Only\n effective when ``fix`` is `True`.\n\n Raises\n ------\n MemoryError\n Memory allocation failed.\n\n ValueError\n Invalid key.\n\n KeyError\n Key not found in FITS header.\n\n ValueError\n Lookup table distortion present in the header but *fobj* was\n not provided.\n\n Notes\n -----\n\n 1. astropy.wcs supports arbitrary *n* dimensions for the core WCS\n (the transformations handled by WCSLIB). However, the\n `distortion paper`_ lookup table and `SIP`_ distortions must be\n two dimensional. Therefore, if you try to create a WCS object\n where the core WCS has a different number of dimensions than 2\n and that object also contains a `distortion paper`_ lookup\n table or `SIP`_ distortion, a `ValueError`\n exception will be raised. To avoid this, consider using the\n *naxis* kwarg to select two dimensions from the core WCS.\n\n 2. The number of coordinate axes in the transformation is not\n determined directly from the ``NAXIS`` keyword but instead from\n the highest of:\n\n - ``NAXIS`` keyword\n\n - ``WCSAXESa`` keyword\n\n - The highest axis number in any parameterized WCS keyword.\n The keyvalue, as well as the keyword, must be\n syntactically valid otherwise it will not be considered.\n\n If none of these keyword types is present, i.e. if the header\n only contains auxiliary WCS keywords for a particular\n coordinate representation, then no coordinate description is\n constructed for it.\n\n The number of axes, which is set as the ``naxis`` member, may\n differ for different coordinate representations of the same\n image.\n\n 3. When the header includes duplicate keywords, in most cases the\n last encountered is used.\n\n 4. 
`~astropy.wcs.Wcsprm.set` is called immediately after\n construction, so any invalid keywords or transformations will\n be raised by the constructor, not when subsequently calling a\n transformation method.\n\n \"\"\" # noqa: E501\n\n def __init__(self, header=None, fobj=None, key=' ', minerr=0.0,\n relax=True, naxis=None, keysel=None, colsel=None,\n fix=True, translate_units='', _do_set=True):\n close_fds = []\n\n # these parameters are stored to be used when unpickling a WCS object:\n self._init_kwargs = {\n 'keysel': copy.copy(keysel),\n 'colsel': copy.copy(colsel),\n }\n\n if header is None:\n if naxis is None:\n naxis = 2\n wcsprm = _wcs.Wcsprm(header=None, key=key,\n relax=relax, naxis=naxis)\n self.naxis = wcsprm.naxis\n # Set some reasonable defaults.\n det2im = (None, None)\n cpdis = (None, None)\n sip = None\n else:\n keysel_flags = _parse_keysel(keysel)\n\n if isinstance(header, (str, bytes)):\n try:\n is_path = (possible_filename(header) and\n os.path.exists(header))\n except (OSError, ValueError):\n is_path = False\n\n if is_path:\n if fobj is not None:\n raise ValueError(\n \"Can not provide both a FITS filename to \"\n \"argument 1 and a FITS file object to argument 2\")\n fobj = fits.open(header)\n close_fds.append(fobj)\n header = fobj[0].header\n elif isinstance(header, fits.hdu.image._ImageBaseHDU):\n header = header.header\n elif not isinstance(header, fits.Header):\n try:\n # Accept any dict-like object\n orig_header = header\n header = fits.Header()\n for dict_key in orig_header.keys():\n header[dict_key] = orig_header[dict_key]\n except TypeError:\n raise TypeError(\n \"header must be a string, an astropy.io.fits.Header \"\n \"object, or a dict-like object\")\n\n if isinstance(header, fits.Header):\n header_string = header.tostring().rstrip()\n else:\n header_string = header\n\n # Importantly, header is a *copy* of the passed-in header\n # because we will be modifying it\n if isinstance(header_string, str):\n header_bytes = header_string.encode('ascii')\n header_string = header_string\n else:\n header_bytes = header_string\n header_string = header_string.decode('ascii')\n\n if not (fobj is None or isinstance(fobj, fits.HDUList)):\n raise AssertionError(\"'fobj' must be either None or an \"\n \"astropy.io.fits.HDUList object.\")\n\n est_naxis = 2\n try:\n tmp_header = fits.Header.fromstring(header_string)\n self._remove_sip_kw(tmp_header)\n tmp_header_bytes = tmp_header.tostring().rstrip()\n if isinstance(tmp_header_bytes, str):\n tmp_header_bytes = tmp_header_bytes.encode('ascii')\n tmp_wcsprm = _wcs.Wcsprm(header=tmp_header_bytes, key=key,\n relax=relax, keysel=keysel_flags,\n colsel=colsel, warnings=False,\n hdulist=fobj)\n if naxis is not None:\n try:\n tmp_wcsprm = tmp_wcsprm.sub(naxis)\n except ValueError:\n pass\n est_naxis = tmp_wcsprm.naxis if tmp_wcsprm.naxis else 2\n\n except _wcs.NoWcsKeywordsFoundError:\n pass\n\n self.naxis = est_naxis\n\n header = fits.Header.fromstring(header_string)\n\n det2im = self._read_det2im_kw(header, fobj, err=minerr)\n cpdis = self._read_distortion_kw(\n header, fobj, dist='CPDIS', err=minerr)\n sip = self._read_sip_kw(header, wcskey=key)\n self._remove_sip_kw(header)\n\n header_string = header.tostring()\n header_string = header_string.replace('END' + ' ' * 77, '')\n\n if isinstance(header_string, str):\n header_bytes = header_string.encode('ascii')\n header_string = header_string\n else:\n header_bytes = header_string\n header_string = header_string.decode('ascii')\n\n try:\n wcsprm = _wcs.Wcsprm(header=header_bytes, 
key=key,\n relax=relax, keysel=keysel_flags,\n colsel=colsel, hdulist=fobj)\n except _wcs.NoWcsKeywordsFoundError:\n # The header may have SIP or distortions, but no core\n # WCS. That isn't an error -- we want a \"default\"\n # (identity) core Wcs transformation in that case.\n if colsel is None:\n wcsprm = _wcs.Wcsprm(header=None, key=key,\n relax=relax, keysel=keysel_flags,\n colsel=colsel, hdulist=fobj)\n else:\n raise\n\n if naxis is not None:\n wcsprm = wcsprm.sub(naxis)\n self.naxis = wcsprm.naxis\n\n if (wcsprm.naxis != 2 and\n (det2im[0] or det2im[1] or cpdis[0] or cpdis[1] or sip)):\n raise ValueError(\n \"\"\"\nFITS WCS distortion paper lookup tables and SIP distortions only work\nin 2 dimensions. However, WCSLIB has detected {} dimensions in the\ncore WCS keywords. To use core WCS in conjunction with FITS WCS\ndistortion paper lookup tables or SIP distortion, you must select or\nreduce these to 2 dimensions using the naxis kwarg.\n\"\"\".format(wcsprm.naxis))\n\n header_naxis = header.get('NAXIS', None)\n if header_naxis is not None and header_naxis < wcsprm.naxis:\n warnings.warn(\n \"The WCS transformation has more axes ({:d}) than the \"\n \"image it is associated with ({:d})\".format(\n wcsprm.naxis, header_naxis), FITSFixedWarning)\n\n self._get_naxis(header)\n WCSBase.__init__(self, sip, cpdis, wcsprm, det2im)\n\n if fix:\n if header is None:\n with warnings.catch_warnings():\n warnings.simplefilter('ignore', FITSFixedWarning)\n self.fix(translate_units=translate_units)\n else:\n self.fix(translate_units=translate_units)\n\n if _do_set:\n self.wcs.set()\n\n for fd in close_fds:\n fd.close()\n\n self._pixel_bounds = None\n\n def __copy__(self):\n new_copy = self.__class__()\n WCSBase.__init__(new_copy, self.sip,\n (self.cpdis1, self.cpdis2),\n self.wcs,\n (self.det2im1, self.det2im2))\n new_copy.__dict__.update(self.__dict__)\n return new_copy\n\n def __deepcopy__(self, memo):\n from copy import deepcopy\n\n new_copy = self.__class__()\n new_copy.naxis = deepcopy(self.naxis, memo)\n WCSBase.__init__(new_copy, deepcopy(self.sip, memo),\n (deepcopy(self.cpdis1, memo),\n deepcopy(self.cpdis2, memo)),\n deepcopy(self.wcs, memo),\n (deepcopy(self.det2im1, memo),\n deepcopy(self.det2im2, memo)))\n for key, val in self.__dict__.items():\n new_copy.__dict__[key] = deepcopy(val, memo)\n return new_copy\n\n def copy(self):\n \"\"\"\n Return a shallow copy of the object.\n\n Convenience method so user doesn't have to import the\n :mod:`copy` stdlib module.\n\n .. warning::\n Use `deepcopy` instead of `copy` unless you know why you need a\n shallow copy.\n \"\"\"\n return copy.copy(self)\n\n def deepcopy(self):\n \"\"\"\n Return a deep copy of the object.\n\n Convenience method so user doesn't have to import the\n :mod:`copy` stdlib module.\n \"\"\"\n return copy.deepcopy(self)\n\n def sub(self, axes=None):\n\n copy = self.deepcopy()\n\n # We need to know which axes have been dropped, but there is no easy\n # way to do this with the .sub function, so instead we assign UUIDs to\n # the CNAME parameters in copy.wcs. 
We can later access the original\n # CNAME properties from self.wcs.\n cname_uuid = [str(uuid.uuid4()) for i in range(copy.wcs.naxis)]\n copy.wcs.cname = cname_uuid\n\n # Subset the WCS\n copy.wcs = copy.wcs.sub(axes)\n copy.naxis = copy.wcs.naxis\n\n # Construct a list of dimensions from the original WCS in the order\n # in which they appear in the final WCS.\n keep = [cname_uuid.index(cname) if cname in cname_uuid else None\n for cname in copy.wcs.cname]\n\n # Restore the original CNAMEs\n copy.wcs.cname = ['' if i is None else self.wcs.cname[i] for i in keep]\n\n # Subset pixel_shape and pixel_bounds\n if self.pixel_shape:\n copy.pixel_shape = tuple([None if i is None else self.pixel_shape[i] for i in keep])\n if self.pixel_bounds:\n copy.pixel_bounds = [None if i is None else self.pixel_bounds[i] for i in keep]\n\n return copy\n\n if _wcs is not None:\n sub.__doc__ = _wcs.Wcsprm.sub.__doc__\n\n def _fix_scamp(self):\n \"\"\"\n Remove SCAMP's PVi_m distortion parameters if SIP distortion parameters\n are also present. Some projects (e.g., Palomar Transient Factory)\n convert SCAMP's distortion parameters (which abuse the PVi_m cards) to\n SIP. However, wcslib gets confused by the presence of both SCAMP and\n SIP distortion parameters.\n\n See https://github.com/astropy/astropy/issues/299.\n \"\"\"\n # Nothing to be done if no WCS attached\n if self.wcs is None:\n return\n\n # Nothing to be done if no PV parameters attached\n pv = self.wcs.get_pv()\n if not pv:\n return\n\n # Nothing to be done if axes don't use SIP distortion parameters\n if self.sip is None:\n return\n\n # Nothing to be done if any radial terms are present...\n # Loop over list to find any radial terms.\n # Certain values of the `j' index are used for storing\n # radial terms; refer to Equation (1) in\n # <http://web.ipac.caltech.edu/staff/shupe/reprints/SIP_to_PV_SPIE2012.pdf>.\n pv = np.asarray(pv)\n # Loop over distinct values of `i' index\n for i in set(pv[:, 0]):\n # Get all values of `j' index for this value of `i' index\n js = set(pv[:, 1][pv[:, 0] == i])\n # Find max value of `j' index\n max_j = max(js)\n for j in (3, 11, 23, 39):\n if j < max_j and j in js:\n return\n\n self.wcs.set_pv([])\n warnings.warn(\"Removed redundant SCAMP distortion parameters \" +\n \"because SIP parameters are also present\", FITSFixedWarning)\n\n def fix(self, translate_units='', naxis=None):\n \"\"\"\n Perform the fix operations from wcslib, and warn about any\n changes it has made.\n\n Parameters\n ----------\n translate_units : str, optional\n Specify which potentially unsafe translations of\n non-standard unit strings to perform. By default,\n performs none.\n\n Although ``\"S\"`` is commonly used to represent seconds,\n its translation to ``\"s\"`` is potentially unsafe since the\n standard recognizes ``\"S\"`` formally as Siemens, however\n rarely that may be used. The same applies to ``\"H\"`` for\n hours (Henry), and ``\"D\"`` for days (Debye).\n\n This string controls what to do in such cases, and is\n case-insensitive.\n\n - If the string contains ``\"s\"``, translate ``\"S\"`` to\n ``\"s\"``.\n\n - If the string contains ``\"h\"``, translate ``\"H\"`` to\n ``\"h\"``.\n\n - If the string contains ``\"d\"``, translate ``\"D\"`` to\n ``\"d\"``.\n\n Thus ``''`` doesn't do any unsafe translations, whereas\n ``'shd'`` does all of them.\n\n naxis : int array, optional\n Image axis lengths. 
If this array is set to zero or\n ``None``, then `~astropy.wcs.Wcsprm.cylfix` will not be\n invoked.\n \"\"\"\n if self.wcs is not None:\n self._fix_scamp()\n fixes = self.wcs.fix(translate_units, naxis)\n for key, val in fixes.items():\n if val != \"No change\":\n if (key == 'datfix' and '1858-11-17' in val and\n not np.count_nonzero(self.wcs.mjdref)):\n continue\n warnings.warn(\n (\"'{0}' made the change '{1}'.\").\n format(key, val),\n FITSFixedWarning)\n\n def calc_footprint(self, header=None, undistort=True, axes=None, center=True):\n \"\"\"\n Calculates the footprint of the image on the sky.\n\n A footprint is defined as the positions of the corners of the\n image on the sky after all available distortions have been\n applied.\n\n Parameters\n ----------\n header : `~astropy.io.fits.Header` object, optional\n Used to get ``NAXIS1`` and ``NAXIS2``\n header and axes are mutually exclusive, alternative ways\n to provide the same information.\n\n undistort : bool, optional\n If `True`, take SIP and distortion lookup table into\n account\n\n axes : (int, int), optional\n If provided, use the given sequence as the shape of the\n image. Otherwise, use the ``NAXIS1`` and ``NAXIS2``\n keywords from the header that was used to create this\n `WCS` object.\n\n center : bool, optional\n If `True` use the center of the pixel, otherwise use the corner.\n\n Returns\n -------\n coord : (4, 2) array of (*x*, *y*) coordinates.\n The order is clockwise starting with the bottom left corner.\n \"\"\"\n if axes is not None:\n naxis1, naxis2 = axes\n else:\n if header is None:\n try:\n # classes that inherit from WCS and define naxis1/2\n # do not require a header parameter\n naxis1, naxis2 = self.pixel_shape\n except (AttributeError, TypeError):\n warnings.warn(\n \"Need a valid header in order to calculate footprint\\n\", AstropyUserWarning)\n return None\n else:\n naxis1 = header.get('NAXIS1', None)\n naxis2 = header.get('NAXIS2', None)\n\n if naxis1 is None or naxis2 is None:\n raise ValueError(\n \"Image size could not be determined.\")\n\n if center:\n corners = np.array([[1, 1],\n [1, naxis2],\n [naxis1, naxis2],\n [naxis1, 1]], dtype=np.float64)\n else:\n corners = np.array([[0.5, 0.5],\n [0.5, naxis2 + 0.5],\n [naxis1 + 0.5, naxis2 + 0.5],\n [naxis1 + 0.5, 0.5]], dtype=np.float64)\n\n if undistort:\n return self.all_pix2world(corners, 1)\n else:\n return self.wcs_pix2world(corners, 1)\n\n def _read_det2im_kw(self, header, fobj, err=0.0):\n \"\"\"\n Create a `distortion paper`_ type lookup table for detector to\n image plane correction.\n \"\"\"\n if fobj is None:\n return (None, None)\n\n if not isinstance(fobj, fits.HDUList):\n return (None, None)\n\n try:\n axiscorr = header['AXISCORR']\n d2imdis = self._read_d2im_old_format(header, fobj, axiscorr)\n return d2imdis\n except KeyError:\n pass\n\n dist = 'D2IMDIS'\n d_kw = 'D2IM'\n err_kw = 'D2IMERR'\n tables = {}\n for i in range(1, self.naxis + 1):\n d_error = header.get(err_kw + str(i), 0.0)\n if d_error < err:\n tables[i] = None\n continue\n distortion = dist + str(i)\n if distortion in header:\n dis = header[distortion].lower()\n if dis == 'lookup':\n del header[distortion]\n assert isinstance(fobj, fits.HDUList), (\n 'An astropy.io.fits.HDUList'\n 'is required for Lookup table distortion.')\n dp = (d_kw + str(i)).strip()\n dp_extver_key = dp + '.EXTVER'\n if dp_extver_key in header:\n d_extver = header[dp_extver_key]\n del header[dp_extver_key]\n else:\n d_extver = 1\n dp_axis_key = dp + f'.AXIS.{i:d}'\n if i == header[dp_axis_key]:\n d_data 
= fobj['D2IMARR', d_extver].data\n else:\n d_data = (fobj['D2IMARR', d_extver].data).transpose()\n del header[dp_axis_key]\n d_header = fobj['D2IMARR', d_extver].header\n d_crpix = (d_header.get('CRPIX1', 0.0), d_header.get('CRPIX2', 0.0))\n d_crval = (d_header.get('CRVAL1', 0.0), d_header.get('CRVAL2', 0.0))\n d_cdelt = (d_header.get('CDELT1', 1.0), d_header.get('CDELT2', 1.0))\n d_lookup = DistortionLookupTable(d_data, d_crpix,\n d_crval, d_cdelt)\n tables[i] = d_lookup\n else:\n warnings.warn('Polynomial distortion is not implemented.\\n', AstropyUserWarning)\n for key in set(header):\n if key.startswith(dp + '.'):\n del header[key]\n else:\n tables[i] = None\n if not tables:\n return (None, None)\n else:\n return (tables.get(1), tables.get(2))\n\n def _read_d2im_old_format(self, header, fobj, axiscorr):\n warnings.warn(\n \"The use of ``AXISCORR`` for D2IM correction has been deprecated.\"\n \"`~astropy.wcs` will read in files with ``AXISCORR`` but ``to_fits()`` will write \"\n \"out files without it.\",\n AstropyDeprecationWarning)\n cpdis = [None, None]\n crpix = [0., 0.]\n crval = [0., 0.]\n cdelt = [1., 1.]\n try:\n d2im_data = fobj[('D2IMARR', 1)].data\n except KeyError:\n return (None, None)\n except AttributeError:\n return (None, None)\n\n d2im_data = np.array([d2im_data])\n d2im_hdr = fobj[('D2IMARR', 1)].header\n naxis = d2im_hdr['NAXIS']\n\n for i in range(1, naxis + 1):\n crpix[i - 1] = d2im_hdr.get('CRPIX' + str(i), 0.0)\n crval[i - 1] = d2im_hdr.get('CRVAL' + str(i), 0.0)\n cdelt[i - 1] = d2im_hdr.get('CDELT' + str(i), 1.0)\n\n cpdis = DistortionLookupTable(d2im_data, crpix, crval, cdelt)\n\n if axiscorr == 1:\n return (cpdis, None)\n elif axiscorr == 2:\n return (None, cpdis)\n else:\n warnings.warn(\"Expected AXISCORR to be 1 or 2\", AstropyUserWarning)\n return (None, None)\n\n def _write_det2im(self, hdulist):\n \"\"\"\n Writes a `distortion paper`_ type lookup table to the given\n `~astropy.io.fits.HDUList`.\n \"\"\"\n\n if self.det2im1 is None and self.det2im2 is None:\n return\n dist = 'D2IMDIS'\n d_kw = 'D2IM'\n\n def write_d2i(num, det2im):\n if det2im is None:\n return\n\n hdulist[0].header[f'{dist}{num:d}'] = (\n 'LOOKUP', 'Detector to image correction type')\n hdulist[0].header[f'{d_kw}{num:d}.EXTVER'] = (\n num, 'Version number of WCSDVARR extension')\n hdulist[0].header[f'{d_kw}{num:d}.NAXES'] = (\n len(det2im.data.shape), 'Number of independent variables in D2IM function')\n\n for i in range(det2im.data.ndim):\n jth = {1: '1st', 2: '2nd', 3: '3rd'}.get(i + 1, f'{i + 1}th')\n hdulist[0].header[f'{d_kw}{num:d}.AXIS.{i + 1:d}'] = (\n i + 1, f'Axis number of the {jth} variable in a D2IM function')\n\n image = fits.ImageHDU(det2im.data, name='D2IMARR')\n header = image.header\n\n header['CRPIX1'] = (det2im.crpix[0],\n 'Coordinate system reference pixel')\n header['CRPIX2'] = (det2im.crpix[1],\n 'Coordinate system reference pixel')\n header['CRVAL1'] = (det2im.crval[0],\n 'Coordinate system value at reference pixel')\n header['CRVAL2'] = (det2im.crval[1],\n 'Coordinate system value at reference pixel')\n header['CDELT1'] = (det2im.cdelt[0],\n 'Coordinate increment along axis')\n header['CDELT2'] = (det2im.cdelt[1],\n 'Coordinate increment along axis')\n image.ver = int(hdulist[0].header[f'{d_kw}{num:d}.EXTVER'])\n hdulist.append(image)\n write_d2i(1, self.det2im1)\n write_d2i(2, self.det2im2)\n\n def _read_distortion_kw(self, header, fobj, dist='CPDIS', err=0.0):\n \"\"\"\n Reads `distortion paper`_ table-lookup keywords and data, and\n returns a 2-tuple of 
`~astropy.wcs.DistortionLookupTable`\n objects.\n\n If no `distortion paper`_ keywords are found, ``(None, None)``\n is returned.\n \"\"\"\n if isinstance(header, (str, bytes)):\n return (None, None)\n\n if dist == 'CPDIS':\n d_kw = 'DP'\n err_kw = 'CPERR'\n else:\n d_kw = 'DQ'\n err_kw = 'CQERR'\n\n tables = {}\n for i in range(1, self.naxis + 1):\n d_error_key = err_kw + str(i)\n if d_error_key in header:\n d_error = header[d_error_key]\n del header[d_error_key]\n else:\n d_error = 0.0\n if d_error < err:\n tables[i] = None\n continue\n distortion = dist + str(i)\n if distortion in header:\n dis = header[distortion].lower()\n del header[distortion]\n if dis == 'lookup':\n if not isinstance(fobj, fits.HDUList):\n raise ValueError('an astropy.io.fits.HDUList is '\n 'required for Lookup table distortion.')\n dp = (d_kw + str(i)).strip()\n dp_extver_key = dp + '.EXTVER'\n if dp_extver_key in header:\n d_extver = header[dp_extver_key]\n del header[dp_extver_key]\n else:\n d_extver = 1\n dp_axis_key = dp + f'.AXIS.{i:d}'\n if i == header[dp_axis_key]:\n d_data = fobj['WCSDVARR', d_extver].data\n else:\n d_data = (fobj['WCSDVARR', d_extver].data).transpose()\n del header[dp_axis_key]\n d_header = fobj['WCSDVARR', d_extver].header\n d_crpix = (d_header.get('CRPIX1', 0.0),\n d_header.get('CRPIX2', 0.0))\n d_crval = (d_header.get('CRVAL1', 0.0),\n d_header.get('CRVAL2', 0.0))\n d_cdelt = (d_header.get('CDELT1', 1.0),\n d_header.get('CDELT2', 1.0))\n d_lookup = DistortionLookupTable(d_data, d_crpix, d_crval, d_cdelt)\n tables[i] = d_lookup\n\n for key in set(header):\n if key.startswith(dp + '.'):\n del header[key]\n else:\n warnings.warn('Polynomial distortion is not implemented.\\n', AstropyUserWarning)\n else:\n tables[i] = None\n\n if not tables:\n return (None, None)\n else:\n return (tables.get(1), tables.get(2))\n\n def _write_distortion_kw(self, hdulist, dist='CPDIS'):\n \"\"\"\n Write out `distortion paper`_ keywords to the given\n `~astropy.io.fits.HDUList`.\n \"\"\"\n if self.cpdis1 is None and self.cpdis2 is None:\n return\n\n if dist == 'CPDIS':\n d_kw = 'DP'\n else:\n d_kw = 'DQ'\n\n def write_dist(num, cpdis):\n if cpdis is None:\n return\n\n hdulist[0].header[f'{dist}{num:d}'] = (\n 'LOOKUP', 'Prior distortion function type')\n hdulist[0].header[f'{d_kw}{num:d}.EXTVER'] = (\n num, 'Version number of WCSDVARR extension')\n hdulist[0].header[f'{d_kw}{num:d}.NAXES'] = (\n len(cpdis.data.shape), f'Number of independent variables in {dist} function')\n\n for i in range(cpdis.data.ndim):\n jth = {1: '1st', 2: '2nd', 3: '3rd'}.get(i + 1, f'{i + 1}th')\n hdulist[0].header[f'{d_kw}{num:d}.AXIS.{i + 1:d}'] = (\n i + 1,\n f'Axis number of the {jth} variable in a {dist} function')\n\n image = fits.ImageHDU(cpdis.data, name='WCSDVARR')\n header = image.header\n\n header['CRPIX1'] = (cpdis.crpix[0], 'Coordinate system reference pixel')\n header['CRPIX2'] = (cpdis.crpix[1], 'Coordinate system reference pixel')\n header['CRVAL1'] = (cpdis.crval[0], 'Coordinate system value at reference pixel')\n header['CRVAL2'] = (cpdis.crval[1], 'Coordinate system value at reference pixel')\n header['CDELT1'] = (cpdis.cdelt[0], 'Coordinate increment along axis')\n header['CDELT2'] = (cpdis.cdelt[1], 'Coordinate increment along axis')\n image.ver = int(hdulist[0].header[f'{d_kw}{num:d}.EXTVER'])\n hdulist.append(image)\n\n write_dist(1, self.cpdis1)\n write_dist(2, self.cpdis2)\n\n def _remove_sip_kw(self, header):\n \"\"\"\n Remove SIP information from a header.\n \"\"\"\n # Never pass SIP coefficients to 
wcslib\n # CTYPE must be passed with -SIP to wcslib\n for key in set(m.group() for m in map(SIP_KW.match, list(header))\n if m is not None):\n del header[key]\n\n def _read_sip_kw(self, header, wcskey=\"\"):\n \"\"\"\n Reads `SIP`_ header keywords and returns a `~astropy.wcs.Sip`\n object.\n\n If no `SIP`_ header keywords are found, ``None`` is returned.\n \"\"\"\n if isinstance(header, (str, bytes)):\n # TODO: Parse SIP from a string without pyfits around\n return None\n\n if \"A_ORDER\" in header and header['A_ORDER'] > 1:\n if \"B_ORDER\" not in header:\n raise ValueError(\n \"A_ORDER provided without corresponding B_ORDER \"\n \"keyword for SIP distortion\")\n\n m = int(header[\"A_ORDER\"])\n a = np.zeros((m + 1, m + 1), np.double)\n for i in range(m + 1):\n for j in range(m - i + 1):\n key = f\"A_{i}_{j}\"\n if key in header:\n a[i, j] = header[key]\n del header[key]\n\n m = int(header[\"B_ORDER\"])\n if m > 1:\n b = np.zeros((m + 1, m + 1), np.double)\n for i in range(m + 1):\n for j in range(m - i + 1):\n key = f\"B_{i}_{j}\"\n if key in header:\n b[i, j] = header[key]\n del header[key]\n else:\n a = None\n b = None\n\n del header['A_ORDER']\n del header['B_ORDER']\n\n ctype = [header[f'CTYPE{nax}{wcskey}'] for nax in range(1, self.naxis + 1)]\n if any(not ctyp.endswith('-SIP') for ctyp in ctype):\n message = \"\"\"\n Inconsistent SIP distortion information is present in the FITS header and the WCS object:\n SIP coefficients were detected, but CTYPE is missing a \"-SIP\" suffix.\n astropy.wcs is using the SIP distortion coefficients,\n therefore the coordinates calculated here might be incorrect.\n\n If you do not want to apply the SIP distortion coefficients,\n please remove the SIP coefficients from the FITS header or the\n WCS object. As an example, if the image is already distortion-corrected\n (e.g., drizzled) then distortion components should not apply and the SIP\n coefficients should be removed.\n\n While the SIP distortion coefficients are being applied here, if that was indeed the intent,\n for consistency please append \"-SIP\" to the CTYPE in the FITS header or the WCS object.\n\n \"\"\" # noqa: E501\n log.info(message)\n elif \"B_ORDER\" in header and header['B_ORDER'] > 1:\n raise ValueError(\n \"B_ORDER provided without corresponding A_ORDER \" +\n \"keyword for SIP distortion\")\n else:\n a = None\n b = None\n\n if \"AP_ORDER\" in header and header['AP_ORDER'] > 1:\n if \"BP_ORDER\" not in header:\n raise ValueError(\n \"AP_ORDER provided without corresponding BP_ORDER \"\n \"keyword for SIP distortion\")\n\n m = int(header[\"AP_ORDER\"])\n ap = np.zeros((m + 1, m + 1), np.double)\n for i in range(m + 1):\n for j in range(m - i + 1):\n key = f\"AP_{i}_{j}\"\n if key in header:\n ap[i, j] = header[key]\n del header[key]\n\n m = int(header[\"BP_ORDER\"])\n if m > 1:\n bp = np.zeros((m + 1, m + 1), np.double)\n for i in range(m + 1):\n for j in range(m - i + 1):\n key = f\"BP_{i}_{j}\"\n if key in header:\n bp[i, j] = header[key]\n del header[key]\n else:\n ap = None\n bp = None\n\n del header['AP_ORDER']\n del header['BP_ORDER']\n elif \"BP_ORDER\" in header and header['BP_ORDER'] > 1:\n raise ValueError(\n \"BP_ORDER provided without corresponding AP_ORDER \"\n \"keyword for SIP distortion\")\n else:\n ap = None\n bp = None\n\n if a is None and b is None and ap is None and bp is None:\n return None\n\n if f\"CRPIX1{wcskey}\" not in header or f\"CRPIX2{wcskey}\" not in header:\n raise ValueError(\n \"Header has SIP keywords without CRPIX keywords\")\n\n crpix1 = 
header.get(f\"CRPIX1{wcskey}\")\n crpix2 = header.get(f\"CRPIX2{wcskey}\")\n\n return Sip(a, b, ap, bp, (crpix1, crpix2))\n\n def _write_sip_kw(self):\n \"\"\"\n Write out SIP keywords. Returns a dictionary of key-value\n pairs.\n \"\"\"\n if self.sip is None:\n return {}\n\n keywords = {}\n\n def write_array(name, a):\n if a is None:\n return\n size = a.shape[0]\n trdir = 'sky to detector' if name[-1] == 'P' else 'detector to sky'\n comment = ('SIP polynomial order, axis {:d}, {:s}'\n .format(ord(name[0]) - ord('A'), trdir))\n keywords[f'{name}_ORDER'] = size - 1, comment\n\n comment = 'SIP distortion coefficient'\n for i in range(size):\n for j in range(size - i):\n if a[i, j] != 0.0:\n keywords[\n f'{name}_{i:d}_{j:d}'] = a[i, j], comment\n\n write_array('A', self.sip.a)\n write_array('B', self.sip.b)\n write_array('AP', self.sip.ap)\n write_array('BP', self.sip.bp)\n\n return keywords\n\n def _denormalize_sky(self, sky):\n if self.wcs.lngtyp != 'RA':\n raise ValueError(\n \"WCS does not have longitude type of 'RA', therefore \" +\n \"(ra, dec) data can not be used as input\")\n if self.wcs.lattyp != 'DEC':\n raise ValueError(\n \"WCS does not have longitude type of 'DEC', therefore \" +\n \"(ra, dec) data can not be used as input\")\n if self.wcs.naxis == 2:\n if self.wcs.lng == 0 and self.wcs.lat == 1:\n return sky\n elif self.wcs.lng == 1 and self.wcs.lat == 0:\n # Reverse the order of the columns\n return sky[:, ::-1]\n else:\n raise ValueError(\n \"WCS does not have longitude and latitude celestial \" +\n \"axes, therefore (ra, dec) data can not be used as input\")\n else:\n if self.wcs.lng < 0 or self.wcs.lat < 0:\n raise ValueError(\n \"WCS does not have both longitude and latitude \"\n \"celestial axes, therefore (ra, dec) data can not be \" +\n \"used as input\")\n out = np.zeros((sky.shape[0], self.wcs.naxis))\n out[:, self.wcs.lng] = sky[:, 0]\n out[:, self.wcs.lat] = sky[:, 1]\n return out\n\n def _normalize_sky(self, sky):\n if self.wcs.lngtyp != 'RA':\n raise ValueError(\n \"WCS does not have longitude type of 'RA', therefore \" +\n \"(ra, dec) data can not be returned\")\n if self.wcs.lattyp != 'DEC':\n raise ValueError(\n \"WCS does not have longitude type of 'DEC', therefore \" +\n \"(ra, dec) data can not be returned\")\n if self.wcs.naxis == 2:\n if self.wcs.lng == 0 and self.wcs.lat == 1:\n return sky\n elif self.wcs.lng == 1 and self.wcs.lat == 0:\n # Reverse the order of the columns\n return sky[:, ::-1]\n else:\n raise ValueError(\n \"WCS does not have longitude and latitude celestial \"\n \"axes, therefore (ra, dec) data can not be returned\")\n else:\n if self.wcs.lng < 0 or self.wcs.lat < 0:\n raise ValueError(\n \"WCS does not have both longitude and latitude celestial \"\n \"axes, therefore (ra, dec) data can not be returned\")\n out = np.empty((sky.shape[0], 2))\n out[:, 0] = sky[:, self.wcs.lng]\n out[:, 1] = sky[:, self.wcs.lat]\n return out\n\n def _array_converter(self, func, sky, *args, ra_dec_order=False):\n \"\"\"\n A helper function to support reading either a pair of arrays\n or a single Nx2 array.\n \"\"\"\n\n def _return_list_of_arrays(axes, origin):\n if any([x.size == 0 for x in axes]):\n return axes\n\n try:\n axes = np.broadcast_arrays(*axes)\n except ValueError:\n raise ValueError(\n \"Coordinate arrays are not broadcastable to each other\")\n\n xy = np.hstack([x.reshape((x.size, 1)) for x in axes])\n\n if ra_dec_order and sky == 'input':\n xy = self._denormalize_sky(xy)\n output = func(xy, origin)\n if ra_dec_order and sky == 'output':\n 
output = self._normalize_sky(output)\n return (output[:, 0].reshape(axes[0].shape),\n output[:, 1].reshape(axes[0].shape))\n return [output[:, i].reshape(axes[0].shape)\n for i in range(output.shape[1])]\n\n def _return_single_array(xy, origin):\n if xy.shape[-1] != self.naxis:\n raise ValueError(\n \"When providing two arguments, the array must be \"\n \"of shape (N, {})\".format(self.naxis))\n if 0 in xy.shape:\n return xy\n if ra_dec_order and sky == 'input':\n xy = self._denormalize_sky(xy)\n result = func(xy, origin)\n if ra_dec_order and sky == 'output':\n result = self._normalize_sky(result)\n return result\n\n if len(args) == 2:\n try:\n xy, origin = args\n xy = np.asarray(xy)\n origin = int(origin)\n except Exception:\n raise TypeError(\n \"When providing two arguments, they must be \"\n \"(coords[N][{}], origin)\".format(self.naxis))\n if xy.shape == () or len(xy.shape) == 1:\n return _return_list_of_arrays([xy], origin)\n return _return_single_array(xy, origin)\n\n elif len(args) == self.naxis + 1:\n axes = args[:-1]\n origin = args[-1]\n try:\n axes = [np.asarray(x) for x in axes]\n origin = int(origin)\n except Exception:\n raise TypeError(\n \"When providing more than two arguments, they must be \" +\n \"a 1-D array for each axis, followed by an origin.\")\n\n return _return_list_of_arrays(axes, origin)\n\n raise TypeError(\n \"WCS projection has {0} dimensions, so expected 2 (an Nx{0} array \"\n \"and the origin argument) or {1} arguments (the position in each \"\n \"dimension, and the origin argument). Instead, {2} arguments were \"\n \"given.\".format(\n self.naxis, self.naxis + 1, len(args)))\n\n def all_pix2world(self, *args, **kwargs):\n return self._array_converter(\n self._all_pix2world, 'output', *args, **kwargs)\n all_pix2world.__doc__ = \"\"\"\n Transforms pixel coordinates to world coordinates.\n\n Performs all of the following in series:\n\n - Detector to image plane correction (if present in the\n FITS file)\n\n - `SIP`_ distortion correction (if present in the FITS\n file)\n\n - `distortion paper`_ table-lookup correction (if present\n in the FITS file)\n\n - `wcslib`_ \"core\" WCS transformation\n\n Parameters\n ----------\n {}\n\n For a transformation that is not two-dimensional, the\n two-argument form must be used.\n\n {}\n\n Returns\n -------\n\n {}\n\n Notes\n -----\n The order of the axes for the result is determined by the\n ``CTYPEia`` keywords in the FITS header, therefore it may not\n always be of the form (*ra*, *dec*). 
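A short sketch of the two calling conventions accepted by the converters built on _array_converter, using a small distortion-free WCS with placeholder values.

import numpy as np
from astropy.wcs import WCS

w = WCS(naxis=2)
w.wcs.crpix = [1.0, 1.0]
w.wcs.cdelt = [-1.0e-4, 1.0e-4]
w.wcs.crval = [150.0, 2.0]
w.wcs.ctype = ["RA---TAN", "DEC--TAN"]

pix = np.array([[1.0, 1.0], [2.0, 1.0], [3.0, 1.0]])

# Form 1: a single (N, naxis) array followed by the origin (0- or 1-based).
world = w.all_pix2world(pix, 1)

# Form 2: one array per axis, followed by the origin.
ra, dec = w.all_pix2world(pix[:, 0], pix[:, 1], 1)

assert np.allclose(world[:, 0], ra)
assert np.allclose(world[:, 1], dec)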
The\n `~astropy.wcs.Wcsprm.lat`, `~astropy.wcs.Wcsprm.lng`,\n `~astropy.wcs.Wcsprm.lattyp` and `~astropy.wcs.Wcsprm.lngtyp`\n members can be used to determine the order of the axes.\n\n Raises\n ------\n MemoryError\n Memory allocation failed.\n\n SingularMatrixError\n Linear transformation matrix is singular.\n\n InconsistentAxisTypesError\n Inconsistent or unrecognized coordinate axis types.\n\n ValueError\n Invalid parameter value.\n\n ValueError\n Invalid coordinate transformation parameters.\n\n ValueError\n x- and y-coordinate arrays are not the same size.\n\n InvalidTransformError\n Invalid coordinate transformation parameters.\n\n InvalidTransformError\n Ill-conditioned coordinate transformation parameters.\n \"\"\".format(docstrings.TWO_OR_MORE_ARGS('naxis', 8),\n docstrings.RA_DEC_ORDER(8),\n docstrings.RETURNS('sky coordinates, in degrees', 8))\n\n def wcs_pix2world(self, *args, **kwargs):\n if self.wcs is None:\n raise ValueError(\"No basic WCS settings were created.\")\n return self._array_converter(\n lambda xy, o: self.wcs.p2s(xy, o)['world'],\n 'output', *args, **kwargs)\n wcs_pix2world.__doc__ = \"\"\"\n Transforms pixel coordinates to world coordinates by doing\n only the basic `wcslib`_ transformation.\n\n No `SIP`_ or `distortion paper`_ table lookup correction is\n applied. To perform distortion correction, see\n `~astropy.wcs.WCS.all_pix2world`,\n `~astropy.wcs.WCS.sip_pix2foc`, `~astropy.wcs.WCS.p4_pix2foc`,\n or `~astropy.wcs.WCS.pix2foc`.\n\n Parameters\n ----------\n {}\n\n For a transformation that is not two-dimensional, the\n two-argument form must be used.\n\n {}\n\n Returns\n -------\n\n {}\n\n Raises\n ------\n MemoryError\n Memory allocation failed.\n\n SingularMatrixError\n Linear transformation matrix is singular.\n\n InconsistentAxisTypesError\n Inconsistent or unrecognized coordinate axis types.\n\n ValueError\n Invalid parameter value.\n\n ValueError\n Invalid coordinate transformation parameters.\n\n ValueError\n x- and y-coordinate arrays are not the same size.\n\n InvalidTransformError\n Invalid coordinate transformation parameters.\n\n InvalidTransformError\n Ill-conditioned coordinate transformation parameters.\n\n Notes\n -----\n The order of the axes for the result is determined by the\n ``CTYPEia`` keywords in the FITS header, therefore it may not\n always be of the form (*ra*, *dec*). 
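A brief sketch of the distinction this docstring draws: with no SIP or distortion-paper corrections attached, the core-only transform agrees with the full one. The WCS values are placeholders.

import numpy as np
from astropy.wcs import WCS

w = WCS(naxis=2)
w.wcs.crpix = [1.0, 1.0]
w.wcs.cdelt = [-1.0e-4, 1.0e-4]
w.wcs.crval = [150.0, 2.0]
w.wcs.ctype = ["RA---TAN", "DEC--TAN"]

pix = np.array([[10.0, 20.0]])
# Core-only wcs_pix2world matches all_pix2world when no distortions are present.
print(np.allclose(w.wcs_pix2world(pix, 1), w.all_pix2world(pix, 1)))  # True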
The\n `~astropy.wcs.Wcsprm.lat`, `~astropy.wcs.Wcsprm.lng`,\n `~astropy.wcs.Wcsprm.lattyp` and `~astropy.wcs.Wcsprm.lngtyp`\n members can be used to determine the order of the axes.\n\n \"\"\".format(docstrings.TWO_OR_MORE_ARGS('naxis', 8),\n docstrings.RA_DEC_ORDER(8),\n docstrings.RETURNS('world coordinates, in degrees', 8))\n\n def _all_world2pix(self, world, origin, tolerance, maxiter, adaptive,\n detect_divergence, quiet):\n # ############################################################\n # # DESCRIPTION OF THE NUMERICAL METHOD ##\n # ############################################################\n # In this section I will outline the method of solving\n # the inverse problem of converting world coordinates to\n # pixel coordinates (*inverse* of the direct transformation\n # `all_pix2world`) and I will summarize some of the aspects\n # of the method proposed here and some of the issues of the\n # original `all_world2pix` (in relation to this method)\n # discussed in https://github.com/astropy/astropy/issues/1977\n # A more detailed discussion can be found here:\n # https://github.com/astropy/astropy/pull/2373\n #\n #\n # ### Background ###\n #\n #\n # I will refer here to the [SIP Paper]\n # (http://fits.gsfc.nasa.gov/registry/sip/SIP_distortion_v1_0.pdf).\n # According to this paper, the effect of distortions as\n # described in *their* equation (1) is:\n #\n # (1) x = CD*(u+f(u)),\n #\n # where `x` is a *vector* of \"intermediate spherical\n # coordinates\" (equivalent to (x,y) in the paper) and `u`\n # is a *vector* of \"pixel coordinates\", and `f` is a vector\n # function describing geometrical distortions\n # (see equations 2 and 3 in SIP Paper.\n # However, I prefer to use `w` for \"intermediate world\n # coordinates\", `x` for pixel coordinates, and assume that\n # transformation `W` performs the **linear**\n # (CD matrix + projection onto celestial sphere) part of the\n # conversion from pixel coordinates to world coordinates.\n # Then we can re-write (1) as:\n #\n # (2) w = W*(x+f(x)) = T(x)\n #\n # In `astropy.wcs.WCS` transformation `W` is represented by\n # the `wcs_pix2world` member, while the combined (\"total\")\n # transformation (linear part + distortions) is performed by\n # `all_pix2world`. Below I summarize the notations and their\n # equivalents in `astropy.wcs.WCS`:\n #\n # | Equation term | astropy.WCS/meaning |\n # | ------------- | ---------------------------- |\n # | `x` | pixel coordinates |\n # | `w` | world coordinates |\n # | `W` | `wcs_pix2world()` |\n # | `W^{-1}` | `wcs_world2pix()` |\n # | `T` | `all_pix2world()` |\n # | `x+f(x)` | `pix2foc()` |\n #\n #\n # ### Direct Solving of Equation (2) ###\n #\n #\n # In order to find the pixel coordinates that correspond to\n # given world coordinates `w`, it is necessary to invert\n # equation (2): `x=T^{-1}(w)`, or solve equation `w==T(x)`\n # for `x`. However, this approach has the following\n # disadvantages:\n # 1. It requires unnecessary transformations (see next\n # section).\n # 2. 
It is prone to \"RA wrapping\" issues as described in\n # https://github.com/astropy/astropy/issues/1977\n # (essentially because `all_pix2world` may return points with\n # a different phase than user's input `w`).\n #\n #\n # ### Description of the Method Used here ###\n #\n #\n # By applying inverse linear WCS transformation (`W^{-1}`)\n # to both sides of equation (2) and introducing notation `x'`\n # (prime) for the pixels coordinates obtained from the world\n # coordinates by applying inverse *linear* WCS transformation\n # (\"focal plane coordinates\"):\n #\n # (3) x' = W^{-1}(w)\n #\n # we obtain the following equation:\n #\n # (4) x' = x+f(x),\n #\n # or,\n #\n # (5) x = x'-f(x)\n #\n # This equation is well suited for solving using the method\n # of fixed-point iterations\n # (http://en.wikipedia.org/wiki/Fixed-point_iteration):\n #\n # (6) x_{i+1} = x'-f(x_i)\n #\n # As an initial value of the pixel coordinate `x_0` we take\n # \"focal plane coordinate\" `x'=W^{-1}(w)=wcs_world2pix(w)`.\n # We stop iterations when `|x_{i+1}-x_i|<tolerance`. We also\n # consider the process to be diverging if\n # `|x_{i+1}-x_i|>|x_i-x_{i-1}|`\n # **when** `|x_{i+1}-x_i|>=tolerance` (when current\n # approximation is close to the true solution,\n # `|x_{i+1}-x_i|>|x_i-x_{i-1}|` may be due to rounding errors\n # and we ignore such \"divergences\" when\n # `|x_{i+1}-x_i|<tolerance`). It may appear that checking for\n # `|x_{i+1}-x_i|<tolerance` in order to ignore divergence is\n # unnecessary since the iterative process should stop anyway,\n # however, the proposed implementation of this iterative\n # process is completely vectorized and, therefore, we may\n # continue iterating over *some* points even though they have\n # converged to within a specified tolerance (while iterating\n # over other points that have not yet converged to\n # a solution).\n #\n # In order to efficiently implement iterative process (6)\n # using available methods in `astropy.wcs.WCS`, we add and\n # subtract `x_i` from the right side of equation (6):\n #\n # (7) x_{i+1} = x'-(x_i+f(x_i))+x_i = x'-pix2foc(x_i)+x_i,\n #\n # where `x'=wcs_world2pix(w)` and it is computed only *once*\n # before the beginning of the iterative process (and we also\n # set `x_0=x'`). 
By using `pix2foc` at each iteration instead\n # of `all_pix2world` we get about 25% increase in performance\n # (by not performing the linear `W` transformation at each\n # step) and we also avoid the \"RA wrapping\" issue described\n # above (by working in focal plane coordinates and avoiding\n # pix->world transformations).\n #\n # As an added benefit, the process converges to the correct\n # solution in just one iteration when distortions are not\n # present (compare to\n # https://github.com/astropy/astropy/issues/1977 and\n # https://github.com/astropy/astropy/pull/2294): in this case\n # `pix2foc` is the identical transformation\n # `x_i=pix2foc(x_i)` and from equation (7) we get:\n #\n # x' = x_0 = wcs_world2pix(w)\n # x_1 = x' - pix2foc(x_0) + x_0 = x' - pix2foc(x') + x' = x'\n # = wcs_world2pix(w) = x_0\n # =>\n # |x_1-x_0| = 0 < tolerance (with tolerance > 0)\n #\n # However, for performance reasons, it is still better to\n # avoid iterations altogether and return the exact linear\n # solution (`wcs_world2pix`) right-away when non-linear\n # distortions are not present by checking that attributes\n # `sip`, `cpdis1`, `cpdis2`, `det2im1`, and `det2im2` are\n # *all* `None`.\n #\n #\n # ### Outline of the Algorithm ###\n #\n #\n # While the proposed code is relatively long (considering\n # the simplicity of the algorithm), this is due to: 1)\n # checking if iterative solution is necessary at all; 2)\n # checking for divergence; 3) re-implementation of the\n # completely vectorized algorithm as an \"adaptive\" vectorized\n # algorithm (for cases when some points diverge for which we\n # want to stop iterations). In my tests, the adaptive version\n # of the algorithm is about 50% slower than non-adaptive\n # version for all HST images.\n #\n # The essential part of the vectorized non-adaptive algorithm\n # (without divergence and other checks) can be described\n # as follows:\n #\n # pix0 = self.wcs_world2pix(world, origin)\n # pix = pix0.copy() # 0-order solution\n #\n # for k in range(maxiter):\n # # find correction to the previous solution:\n # dpix = self.pix2foc(pix, origin) - pix0\n #\n # # compute norm (L2) of the correction:\n # dn = np.linalg.norm(dpix, axis=1)\n #\n # # apply correction:\n # pix -= dpix\n #\n # # check convergence:\n # if np.max(dn) < tolerance:\n # break\n #\n # return pix\n #\n # Here, the input parameter `world` can be a `MxN` array\n # where `M` is the number of coordinate axes in WCS and `N`\n # is the number of points to be converted simultaneously to\n # image coordinates.\n #\n #\n # ### IMPORTANT NOTE: ###\n #\n # If, in the future releases of the `~astropy.wcs`,\n # `pix2foc` will not apply all the required distortion\n # corrections then in the code below, calls to `pix2foc` will\n # have to be replaced with\n # wcs_world2pix(all_pix2world(pix_list, origin), origin)\n #\n\n # ############################################################\n # # INITIALIZE ITERATIVE PROCESS: ##\n # ############################################################\n\n # initial approximation (linear WCS based only)\n pix0 = self.wcs_world2pix(world, origin)\n\n # Check that an iterative solution is required at all\n # (when any of the non-CD-matrix-based corrections are\n # present). 
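A self-contained sketch of the non-adaptive fixed-point iteration outlined above, with a toy distortion f(x) standing in for the SIP/lookup-table corrections applied by pix2foc; it is an illustration of the scheme, not the astropy implementation.

import numpy as np

def f(x):
    # Hypothetical small nonlinear distortion; x + f(x) plays the role of pix2foc(x).
    return 1e-3 * x**2

def world2pix_fixed_point(pix0, tolerance=1e-8, maxiter=50):
    # Solve x + f(x) = pix0 by iterating x_{i+1} = pix0 - f(x_i), cf. equation (6).
    pix = pix0.copy()
    for _ in range(maxiter):
        dpix = (pix + f(pix)) - pix0          # correction, cf. pix2foc(pix) - pix0
        pix -= dpix
        if np.max(np.sum(dpix * dpix, axis=1)) < tolerance**2:
            break
    return pix

pix0 = np.array([[100.0, 200.0]])             # linear-only solution, cf. wcs_world2pix
pix = world2pix_fixed_point(pix0)
print(pix, np.allclose(pix + f(pix), pix0))   # residual check: pix + f(pix) == pix0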
If not required return the initial\n # approximation (pix0).\n if not self.has_distortion:\n # No non-WCS corrections detected so\n # simply return initial approximation:\n return pix0\n\n pix = pix0.copy() # 0-order solution\n\n # initial correction:\n dpix = self.pix2foc(pix, origin) - pix0\n\n # Update initial solution:\n pix -= dpix\n\n # Norm (L2) squared of the correction:\n dn = np.sum(dpix*dpix, axis=1)\n dnprev = dn.copy() # if adaptive else dn\n tol2 = tolerance**2\n\n # Prepare for iterative process\n k = 1\n ind = None\n inddiv = None\n\n # Turn off numpy runtime warnings for 'invalid' and 'over':\n old_invalid = np.geterr()['invalid']\n old_over = np.geterr()['over']\n np.seterr(invalid='ignore', over='ignore')\n\n # ############################################################\n # # NON-ADAPTIVE ITERATIONS: ##\n # ############################################################\n if not adaptive:\n # Fixed-point iterations:\n while (np.nanmax(dn) >= tol2 and k < maxiter):\n # Find correction to the previous solution:\n dpix = self.pix2foc(pix, origin) - pix0\n\n # Compute norm (L2) squared of the correction:\n dn = np.sum(dpix*dpix, axis=1)\n\n # Check for divergence (we do this in two stages\n # to optimize performance for the most common\n # scenario when successive approximations converge):\n if detect_divergence:\n divergent = (dn >= dnprev)\n if np.any(divergent):\n # Find solutions that have not yet converged:\n slowconv = (dn >= tol2)\n inddiv, = np.where(divergent & slowconv)\n\n if inddiv.shape[0] > 0:\n # Update indices of elements that\n # still need correction:\n conv = (dn < dnprev)\n iconv = np.where(conv)\n\n # Apply correction:\n dpixgood = dpix[iconv]\n pix[iconv] -= dpixgood\n dpix[iconv] = dpixgood\n\n # For the next iteration choose\n # non-divergent points that have not yet\n # converged to the requested accuracy:\n ind, = np.where(slowconv & conv)\n pix0 = pix0[ind]\n dnprev[ind] = dn[ind]\n k += 1\n\n # Switch to adaptive iterations:\n adaptive = True\n break\n # Save current correction magnitudes for later:\n dnprev = dn\n\n # Apply correction:\n pix -= dpix\n k += 1\n\n # ############################################################\n # # ADAPTIVE ITERATIONS: ##\n # ############################################################\n if adaptive:\n if ind is None:\n ind, = np.where(np.isfinite(pix).all(axis=1))\n pix0 = pix0[ind]\n\n # \"Adaptive\" fixed-point iterations:\n while (ind.shape[0] > 0 and k < maxiter):\n # Find correction to the previous solution:\n dpixnew = self.pix2foc(pix[ind], origin) - pix0\n\n # Compute norm (L2) of the correction:\n dnnew = np.sum(np.square(dpixnew), axis=1)\n\n # Bookkeeping of corrections:\n dnprev[ind] = dn[ind].copy()\n dn[ind] = dnnew\n\n if detect_divergence:\n # Find indices of pixels that are converging:\n conv = (dnnew < dnprev[ind])\n iconv = np.where(conv)\n iiconv = ind[iconv]\n\n # Apply correction:\n dpixgood = dpixnew[iconv]\n pix[iiconv] -= dpixgood\n dpix[iiconv] = dpixgood\n\n # Find indices of solutions that have not yet\n # converged to the requested accuracy\n # AND that do not diverge:\n subind, = np.where((dnnew >= tol2) & conv)\n\n else:\n # Apply correction:\n pix[ind] -= dpixnew\n dpix[ind] = dpixnew\n\n # Find indices of solutions that have not yet\n # converged to the requested accuracy:\n subind, = np.where(dnnew >= tol2)\n\n # Choose solutions that need more iterations:\n ind = ind[subind]\n pix0 = pix0[subind]\n\n k += 1\n\n # ############################################################\n # # 
FINAL DETECTION OF INVALID, DIVERGING, ##\n # # AND FAILED-TO-CONVERGE POINTS ##\n # ############################################################\n # Identify diverging and/or invalid points:\n invalid = ((~np.all(np.isfinite(pix), axis=1)) &\n (np.all(np.isfinite(world), axis=1)))\n\n # When detect_divergence==False, dnprev is outdated\n # (it is the norm of the very first correction).\n # Still better than nothing...\n inddiv, = np.where(((dn >= tol2) & (dn >= dnprev)) | invalid)\n if inddiv.shape[0] == 0:\n inddiv = None\n\n # Identify points that did not converge within 'maxiter'\n # iterations:\n if k >= maxiter:\n ind, = np.where((dn >= tol2) & (dn < dnprev) & (~invalid))\n if ind.shape[0] == 0:\n ind = None\n else:\n ind = None\n\n # Restore previous numpy error settings:\n np.seterr(invalid=old_invalid, over=old_over)\n\n # ############################################################\n # # RAISE EXCEPTION IF DIVERGING OR TOO SLOWLY CONVERGING ##\n # # DATA POINTS HAVE BEEN DETECTED: ##\n # ############################################################\n if (ind is not None or inddiv is not None) and not quiet:\n if inddiv is None:\n raise NoConvergence(\n \"'WCS.all_world2pix' failed to \"\n \"converge to the requested accuracy after {:d} \"\n \"iterations.\".format(k), best_solution=pix,\n accuracy=np.abs(dpix), niter=k,\n slow_conv=ind, divergent=None)\n else:\n raise NoConvergence(\n \"'WCS.all_world2pix' failed to \"\n \"converge to the requested accuracy.\\n\"\n \"After {:d} iterations, the solution is diverging \"\n \"at least for one input point.\"\n .format(k), best_solution=pix,\n accuracy=np.abs(dpix), niter=k,\n slow_conv=ind, divergent=inddiv)\n\n return pix\n\n @deprecated_renamed_argument('accuracy', 'tolerance', '4.3')\n def all_world2pix(self, *args, tolerance=1e-4, maxiter=20, adaptive=False,\n detect_divergence=True, quiet=False, **kwargs):\n if self.wcs is None:\n raise ValueError(\"No basic WCS settings were created.\")\n\n return self._array_converter(\n lambda *args, **kwargs:\n self._all_world2pix(\n *args, tolerance=tolerance, maxiter=maxiter,\n adaptive=adaptive, detect_divergence=detect_divergence,\n quiet=quiet),\n 'input', *args, **kwargs\n )\n\n all_world2pix.__doc__ = \"\"\"\n all_world2pix(*arg, tolerance=1.0e-4, maxiter=20,\n adaptive=False, detect_divergence=True, quiet=False)\n\n Transforms world coordinates to pixel coordinates, using\n numerical iteration to invert the full forward transformation\n `~astropy.wcs.WCS.all_pix2world` with complete\n distortion model.\n\n\n Parameters\n ----------\n {0}\n\n For a transformation that is not two-dimensional, the\n two-argument form must be used.\n\n {1}\n\n tolerance : float, optional (default = 1.0e-4)\n Tolerance of solution. Iteration terminates when the\n iterative solver estimates that the \"true solution\" is\n within this many pixels current estimate, more\n specifically, when the correction to the solution found\n during the previous iteration is smaller\n (in the sense of the L2 norm) than ``tolerance``.\n\n maxiter : int, optional (default = 20)\n Maximum number of iterations allowed to reach a solution.\n\n quiet : bool, optional (default = False)\n Do not throw :py:class:`NoConvergence` exceptions when\n the method does not converge to a solution with the\n required accuracy within a specified number of maximum\n iterations set by ``maxiter`` parameter. 
Instead,\n simply return the found solution.\n\n Other Parameters\n ----------------\n adaptive : bool, optional (default = False)\n Specifies whether to adaptively select only points that\n did not converge to a solution within the required\n accuracy for the next iteration. Default is recommended\n for HST as well as most other instruments.\n\n .. note::\n The :py:meth:`all_world2pix` uses a vectorized\n implementation of the method of consecutive\n approximations (see ``Notes`` section below) in which it\n iterates over *all* input points *regardless* until\n the required accuracy has been reached for *all* input\n points. In some cases it may be possible that\n *almost all* points have reached the required accuracy\n but there are only a few of input data points for\n which additional iterations may be needed (this\n depends mostly on the characteristics of the geometric\n distortions for a given instrument). In this situation\n it may be advantageous to set ``adaptive`` = `True` in\n which case :py:meth:`all_world2pix` will continue\n iterating *only* over the points that have not yet\n converged to the required accuracy. However, for the\n HST's ACS/WFC detector, which has the strongest\n distortions of all HST instruments, testing has\n shown that enabling this option would lead to a about\n 50-100% penalty in computational time (depending on\n specifics of the image, geometric distortions, and\n number of input points to be converted). Therefore,\n for HST and possibly instruments, it is recommended\n to set ``adaptive`` = `False`. The only danger in\n getting this setting wrong will be a performance\n penalty.\n\n .. note::\n When ``detect_divergence`` is `True`,\n :py:meth:`all_world2pix` will automatically switch\n to the adaptive algorithm once divergence has been\n detected.\n\n detect_divergence : bool, optional (default = True)\n Specifies whether to perform a more detailed analysis\n of the convergence to a solution. Normally\n :py:meth:`all_world2pix` may not achieve the required\n accuracy if either the ``tolerance`` or ``maxiter`` arguments\n are too low. However, it may happen that for some\n geometric distortions the conditions of convergence for\n the the method of consecutive approximations used by\n :py:meth:`all_world2pix` may not be satisfied, in which\n case consecutive approximations to the solution will\n diverge regardless of the ``tolerance`` or ``maxiter``\n settings.\n\n When ``detect_divergence`` is `False`, these divergent\n points will be detected as not having achieved the\n required accuracy (without further details). In addition,\n if ``adaptive`` is `False` then the algorithm will not\n know that the solution (for specific points) is diverging\n and will continue iterating and trying to \"improve\"\n diverging solutions. This may result in ``NaN`` or\n ``Inf`` values in the return results (in addition to a\n performance penalties). Even when ``detect_divergence``\n is `False`, :py:meth:`all_world2pix`, at the end of the\n iterative process, will identify invalid results\n (``NaN`` or ``Inf``) as \"diverging\" solutions and will\n raise :py:class:`NoConvergence` unless the ``quiet``\n parameter is set to `True`.\n\n When ``detect_divergence`` is `True`,\n :py:meth:`all_world2pix` will detect points for which\n current correction to the coordinates is larger than\n the correction applied during the previous iteration\n **if** the requested accuracy **has not yet been\n achieved**. 
In this case, if ``adaptive`` is `True`,\n these points will be excluded from further iterations and\n if ``adaptive`` is `False`, :py:meth:`all_world2pix` will\n automatically switch to the adaptive algorithm. Thus, the\n reported divergent solution will be the latest converging\n solution computed immediately *before* divergence\n has been detected.\n\n .. note::\n When accuracy has been achieved, small increases in\n current corrections may be possible due to rounding\n errors (when ``adaptive`` is `False`) and such\n increases will be ignored.\n\n .. note::\n Based on our testing using HST ACS/WFC images, setting\n ``detect_divergence`` to `True` will incur about 5-20%\n performance penalty with the larger penalty\n corresponding to ``adaptive`` set to `True`.\n Because the benefits of enabling this\n feature outweigh the small performance penalty,\n especially when ``adaptive`` = `False`, it is\n recommended to set ``detect_divergence`` to `True`,\n unless extensive testing of the distortion models for\n images from specific instruments show a good stability\n of the numerical method for a wide range of\n coordinates (even outside the image itself).\n\n .. note::\n Indices of the diverging inverse solutions will be\n reported in the ``divergent`` attribute of the\n raised :py:class:`NoConvergence` exception object.\n\n Returns\n -------\n\n {2}\n\n Notes\n -----\n The order of the axes for the input world array is determined by\n the ``CTYPEia`` keywords in the FITS header, therefore it may\n not always be of the form (*ra*, *dec*). The\n `~astropy.wcs.Wcsprm.lat`, `~astropy.wcs.Wcsprm.lng`,\n `~astropy.wcs.Wcsprm.lattyp`, and\n `~astropy.wcs.Wcsprm.lngtyp`\n members can be used to determine the order of the axes.\n\n Using the method of fixed-point iterations approximations we\n iterate starting with the initial approximation, which is\n computed using the non-distortion-aware\n :py:meth:`wcs_world2pix` (or equivalent).\n\n The :py:meth:`all_world2pix` function uses a vectorized\n implementation of the method of consecutive approximations and\n therefore it is highly efficient (>30x) when *all* data points\n that need to be converted from sky coordinates to image\n coordinates are passed at *once*. Therefore, it is advisable,\n whenever possible, to pass as input a long array of all points\n that need to be converted to :py:meth:`all_world2pix` instead\n of calling :py:meth:`all_world2pix` for each data point. Also\n see the note to the ``adaptive`` parameter.\n\n Raises\n ------\n NoConvergence\n The method did not converge to a\n solution to the required accuracy within a specified\n number of maximum iterations set by the ``maxiter``\n parameter. To turn off this exception, set ``quiet`` to\n `True`. 
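A runnable sketch of the quiet/best-effort pattern described here, built on a distortion-free WCS with placeholder values (so it converges immediately); with a real distortion model the except branch would supply e.best_solution instead.

import numpy as np
from astropy import wcs

w = wcs.WCS(naxis=2)
w.wcs.crpix = [1.0, 1.0]
w.wcs.cdelt = [-1.0e-4, 1.0e-4]
w.wcs.crval = [150.0, 2.0]
w.wcs.ctype = ["RA---TAN", "DEC--TAN"]

sky = w.all_pix2world([[10.0, 20.0], [30.0, 40.0]], 1)
try:
    pix = w.all_world2pix(sky, 1, maxiter=20, tolerance=1.0e-6, quiet=False)
except wcs.wcs.NoConvergence as e:
    pix = e.best_solution           # fall back to the best estimate found
print(np.allclose(pix, [[10.0, 20.0], [30.0, 40.0]]))  # True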
Indices of the points for which the requested\n accuracy was not achieved (if any) will be listed in the\n ``slow_conv`` attribute of the\n raised :py:class:`NoConvergence` exception object.\n\n See :py:class:`NoConvergence` documentation for\n more details.\n\n MemoryError\n Memory allocation failed.\n\n SingularMatrixError\n Linear transformation matrix is singular.\n\n InconsistentAxisTypesError\n Inconsistent or unrecognized coordinate axis types.\n\n ValueError\n Invalid parameter value.\n\n ValueError\n Invalid coordinate transformation parameters.\n\n ValueError\n x- and y-coordinate arrays are not the same size.\n\n InvalidTransformError\n Invalid coordinate transformation parameters.\n\n InvalidTransformError\n Ill-conditioned coordinate transformation parameters.\n\n Examples\n --------\n >>> import astropy.io.fits as fits\n >>> import astropy.wcs as wcs\n >>> import numpy as np\n >>> import os\n\n >>> filename = os.path.join(wcs.__path__[0], 'tests/data/j94f05bgq_flt.fits')\n >>> hdulist = fits.open(filename)\n >>> w = wcs.WCS(hdulist[('sci',1)].header, hdulist)\n >>> hdulist.close()\n\n >>> ra, dec = w.all_pix2world([1,2,3], [1,1,1], 1)\n >>> print(ra) # doctest: +FLOAT_CMP\n [ 5.52645627 5.52649663 5.52653698]\n >>> print(dec) # doctest: +FLOAT_CMP\n [-72.05171757 -72.05171276 -72.05170795]\n >>> radec = w.all_pix2world([[1,1], [2,1], [3,1]], 1)\n >>> print(radec) # doctest: +FLOAT_CMP\n [[ 5.52645627 -72.05171757]\n [ 5.52649663 -72.05171276]\n [ 5.52653698 -72.05170795]]\n >>> x, y = w.all_world2pix(ra, dec, 1)\n >>> print(x) # doctest: +FLOAT_CMP\n [ 1.00000238 2.00000237 3.00000236]\n >>> print(y) # doctest: +FLOAT_CMP\n [ 0.99999996 0.99999997 0.99999997]\n >>> xy = w.all_world2pix(radec, 1)\n >>> print(xy) # doctest: +FLOAT_CMP\n [[ 1.00000238 0.99999996]\n [ 2.00000237 0.99999997]\n [ 3.00000236 0.99999997]]\n >>> xy = w.all_world2pix(radec, 1, maxiter=3,\n ... tolerance=1.0e-10, quiet=False)\n Traceback (most recent call last):\n ...\n NoConvergence: 'WCS.all_world2pix' failed to converge to the\n requested accuracy. After 3 iterations, the solution is\n diverging at least for one input point.\n\n >>> # Now try to use some diverging data:\n >>> divradec = w.all_pix2world([[1.0, 1.0],\n ... [10000.0, 50000.0],\n ... [3.0, 1.0]], 1)\n >>> print(divradec) # doctest: +FLOAT_CMP\n [[ 5.52645627 -72.05171757]\n [ 7.15976932 -70.8140779 ]\n [ 5.52653698 -72.05170795]]\n\n >>> # First, turn detect_divergence on:\n >>> try: # doctest: +FLOAT_CMP\n ... xy = w.all_world2pix(divradec, 1, maxiter=20,\n ... tolerance=1.0e-4, adaptive=False,\n ... detect_divergence=True,\n ... quiet=False)\n ... except wcs.wcs.NoConvergence as e:\n ... print(\"Indices of diverging points: {{0}}\"\n ... .format(e.divergent))\n ... print(\"Indices of poorly converging points: {{0}}\"\n ... .format(e.slow_conv))\n ... print(\"Best solution:\\\\n{{0}}\".format(e.best_solution))\n ... print(\"Achieved accuracy:\\\\n{{0}}\".format(e.accuracy))\n Indices of diverging points: [1]\n Indices of poorly converging points: None\n Best solution:\n [[ 1.00000238e+00 9.99999965e-01]\n [ -1.99441636e+06 1.44309097e+06]\n [ 3.00000236e+00 9.99999966e-01]]\n Achieved accuracy:\n [[ 6.13968380e-05 8.59638593e-07]\n [ 8.59526812e+11 6.61713548e+11]\n [ 6.09398446e-05 8.38759724e-07]]\n >>> raise e\n Traceback (most recent call last):\n ...\n NoConvergence: 'WCS.all_world2pix' failed to converge to the\n requested accuracy. 
After 5 iterations, the solution is\n diverging at least for one input point.\n\n >>> # This time turn detect_divergence off:\n >>> try: # doctest: +FLOAT_CMP\n ... xy = w.all_world2pix(divradec, 1, maxiter=20,\n ... tolerance=1.0e-4, adaptive=False,\n ... detect_divergence=False,\n ... quiet=False)\n ... except wcs.wcs.NoConvergence as e:\n ... print(\"Indices of diverging points: {{0}}\"\n ... .format(e.divergent))\n ... print(\"Indices of poorly converging points: {{0}}\"\n ... .format(e.slow_conv))\n ... print(\"Best solution:\\\\n{{0}}\".format(e.best_solution))\n ... print(\"Achieved accuracy:\\\\n{{0}}\".format(e.accuracy))\n Indices of diverging points: [1]\n Indices of poorly converging points: None\n Best solution:\n [[ 1.00000009 1. ]\n [ nan nan]\n [ 3.00000009 1. ]]\n Achieved accuracy:\n [[ 2.29417358e-06 3.21222995e-08]\n [ nan nan]\n [ 2.27407877e-06 3.13005639e-08]]\n >>> raise e\n Traceback (most recent call last):\n ...\n NoConvergence: 'WCS.all_world2pix' failed to converge to the\n requested accuracy. After 6 iterations, the solution is\n diverging at least for one input point.\n\n \"\"\".format(docstrings.TWO_OR_MORE_ARGS('naxis', 8),\n docstrings.RA_DEC_ORDER(8),\n docstrings.RETURNS('pixel coordinates', 8))\n\n def wcs_world2pix(self, *args, **kwargs):\n if self.wcs is None:\n raise ValueError(\"No basic WCS settings were created.\")\n return self._array_converter(\n lambda xy, o: self.wcs.s2p(xy, o)['pixcrd'],\n 'input', *args, **kwargs)\n wcs_world2pix.__doc__ = \"\"\"\n Transforms world coordinates to pixel coordinates, using only\n the basic `wcslib`_ WCS transformation. No `SIP`_ or\n `distortion paper`_ table lookup transformation is applied.\n\n Parameters\n ----------\n {}\n\n For a transformation that is not two-dimensional, the\n two-argument form must be used.\n\n {}\n\n Returns\n -------\n\n {}\n\n Notes\n -----\n The order of the axes for the input world array is determined by\n the ``CTYPEia`` keywords in the FITS header, therefore it may\n not always be of the form (*ra*, *dec*). 
The\n `~astropy.wcs.Wcsprm.lat`, `~astropy.wcs.Wcsprm.lng`,\n `~astropy.wcs.Wcsprm.lattyp` and `~astropy.wcs.Wcsprm.lngtyp`\n members can be used to determine the order of the axes.\n\n Raises\n ------\n MemoryError\n Memory allocation failed.\n\n SingularMatrixError\n Linear transformation matrix is singular.\n\n InconsistentAxisTypesError\n Inconsistent or unrecognized coordinate axis types.\n\n ValueError\n Invalid parameter value.\n\n ValueError\n Invalid coordinate transformation parameters.\n\n ValueError\n x- and y-coordinate arrays are not the same size.\n\n InvalidTransformError\n Invalid coordinate transformation parameters.\n\n InvalidTransformError\n Ill-conditioned coordinate transformation parameters.\n \"\"\".format(docstrings.TWO_OR_MORE_ARGS('naxis', 8),\n docstrings.RA_DEC_ORDER(8),\n docstrings.RETURNS('pixel coordinates', 8))\n\n def pix2foc(self, *args):\n return self._array_converter(self._pix2foc, None, *args)\n pix2foc.__doc__ = \"\"\"\n Convert pixel coordinates to focal plane coordinates using the\n `SIP`_ polynomial distortion convention and `distortion\n paper`_ table-lookup correction.\n\n The output is in absolute pixel coordinates, not relative to\n ``CRPIX``.\n\n Parameters\n ----------\n\n {}\n\n Returns\n -------\n\n {}\n\n Raises\n ------\n MemoryError\n Memory allocation failed.\n\n ValueError\n Invalid coordinate transformation parameters.\n \"\"\".format(docstrings.TWO_OR_MORE_ARGS('2', 8),\n docstrings.RETURNS('focal coordinates', 8))\n\n def p4_pix2foc(self, *args):\n return self._array_converter(self._p4_pix2foc, None, *args)\n p4_pix2foc.__doc__ = \"\"\"\n Convert pixel coordinates to focal plane coordinates using\n `distortion paper`_ table-lookup correction.\n\n The output is in absolute pixel coordinates, not relative to\n ``CRPIX``.\n\n Parameters\n ----------\n\n {}\n\n Returns\n -------\n\n {}\n\n Raises\n ------\n MemoryError\n Memory allocation failed.\n\n ValueError\n Invalid coordinate transformation parameters.\n \"\"\".format(docstrings.TWO_OR_MORE_ARGS('2', 8),\n docstrings.RETURNS('focal coordinates', 8))\n\n def det2im(self, *args):\n return self._array_converter(self._det2im, None, *args)\n det2im.__doc__ = \"\"\"\n Convert detector coordinates to image plane coordinates using\n `distortion paper`_ table-lookup correction.\n\n The output is in absolute pixel coordinates, not relative to\n ``CRPIX``.\n\n Parameters\n ----------\n\n {}\n\n Returns\n -------\n\n {}\n\n Raises\n ------\n MemoryError\n Memory allocation failed.\n\n ValueError\n Invalid coordinate transformation parameters.\n \"\"\".format(docstrings.TWO_OR_MORE_ARGS('2', 8),\n docstrings.RETURNS('pixel coordinates', 8))\n\n def sip_pix2foc(self, *args):\n if self.sip is None:\n if len(args) == 2:\n return args[0]\n elif len(args) == 3:\n return args[:2]\n else:\n raise TypeError(\"Wrong number of arguments\")\n return self._array_converter(self.sip.pix2foc, None, *args)\n sip_pix2foc.__doc__ = \"\"\"\n Convert pixel coordinates to focal plane coordinates using the\n `SIP`_ polynomial distortion convention.\n\n The output is in pixel coordinates, relative to ``CRPIX``.\n\n FITS WCS `distortion paper`_ table lookup correction is not\n applied, even if that information existed in the FITS file\n that initialized this :class:`~astropy.wcs.WCS` object. 
To\n correct for that, use `~astropy.wcs.WCS.pix2foc` or\n `~astropy.wcs.WCS.p4_pix2foc`.\n\n Parameters\n ----------\n\n {}\n\n Returns\n -------\n\n {}\n\n Raises\n ------\n MemoryError\n Memory allocation failed.\n\n ValueError\n Invalid coordinate transformation parameters.\n \"\"\".format(docstrings.TWO_OR_MORE_ARGS('2', 8),\n docstrings.RETURNS('focal coordinates', 8))\n\n def sip_foc2pix(self, *args):\n if self.sip is None:\n if len(args) == 2:\n return args[0]\n elif len(args) == 3:\n return args[:2]\n else:\n raise TypeError(\"Wrong number of arguments\")\n return self._array_converter(self.sip.foc2pix, None, *args)\n sip_foc2pix.__doc__ = \"\"\"\n Convert focal plane coordinates to pixel coordinates using the\n `SIP`_ polynomial distortion convention.\n\n FITS WCS `distortion paper`_ table lookup distortion\n correction is not applied, even if that information existed in\n the FITS file that initialized this `~astropy.wcs.WCS` object.\n\n Parameters\n ----------\n\n {}\n\n Returns\n -------\n\n {}\n\n Raises\n ------\n MemoryError\n Memory allocation failed.\n\n ValueError\n Invalid coordinate transformation parameters.\n \"\"\".format(docstrings.TWO_OR_MORE_ARGS('2', 8),\n docstrings.RETURNS('pixel coordinates', 8))\n\n def proj_plane_pixel_scales(self):\n \"\"\"\n Calculate pixel scales along each axis of the image pixel at\n the ``CRPIX`` location once it is projected onto the\n \"plane of intermediate world coordinates\" as defined in\n `Greisen & Calabretta 2002, A&A, 395, 1061 <https://ui.adsabs.harvard.edu/abs/2002A%26A...395.1061G>`_.\n\n .. note::\n This method is concerned **only** about the transformation\n \"image plane\"->\"projection plane\" and **not** about the\n transformation \"celestial sphere\"->\"projection plane\"->\"image plane\".\n Therefore, this function ignores distortions arising due to\n non-linear nature of most projections.\n\n .. note::\n This method only returns sensible answers if the WCS contains\n celestial axes, i.e., the `~astropy.wcs.WCS.celestial` WCS object.\n\n Returns\n -------\n scale : list of `~astropy.units.Quantity`\n A vector of projection plane increments corresponding to each\n pixel side (axis).\n\n See Also\n --------\n astropy.wcs.utils.proj_plane_pixel_scales\n\n \"\"\" # noqa: E501\n from astropy.wcs.utils import proj_plane_pixel_scales # Avoid circular import\n values = proj_plane_pixel_scales(self)\n units = [u.Unit(x) for x in self.wcs.cunit]\n return [value * unit for (value, unit) in zip(values, units)] # Can have different units\n\n def proj_plane_pixel_area(self):\n \"\"\"\n For a **celestial** WCS (see `astropy.wcs.WCS.celestial`), returns pixel\n area of the image pixel at the ``CRPIX`` location once it is projected\n onto the \"plane of intermediate world coordinates\" as defined in\n `Greisen & Calabretta 2002, A&A, 395, 1061 <https://ui.adsabs.harvard.edu/abs/2002A%26A...395.1061G>`_.\n\n .. note::\n This function is concerned **only** about the transformation\n \"image plane\"->\"projection plane\" and **not** about the\n transformation \"celestial sphere\"->\"projection plane\"->\"image plane\".\n Therefore, this function ignores distortions arising due to\n non-linear nature of most projections.\n\n .. 
note::\n This method only returns sensible answers if the WCS contains\n celestial axes, i.e., the `~astropy.wcs.WCS.celestial` WCS object.\n\n Returns\n -------\n area : `~astropy.units.Quantity`\n Area (in the projection plane) of the pixel at ``CRPIX`` location.\n\n Raises\n ------\n ValueError\n Pixel area is defined only for 2D pixels. Most likely the\n `~astropy.wcs.Wcsprm.cd` matrix of the `~astropy.wcs.WCS.celestial`\n WCS is not a square matrix of second order.\n\n Notes\n -----\n\n Depending on the application, square root of the pixel area can be used to\n represent a single pixel scale of an equivalent square pixel\n whose area is equal to the area of a generally non-square pixel.\n\n See Also\n --------\n astropy.wcs.utils.proj_plane_pixel_area\n\n \"\"\" # noqa: E501\n from astropy.wcs.utils import proj_plane_pixel_area # Avoid circular import\n value = proj_plane_pixel_area(self)\n unit = u.Unit(self.wcs.cunit[0]) * u.Unit(self.wcs.cunit[1]) # 2D only\n return value * unit\n\n def to_fits(self, relax=False, key=None):\n \"\"\"\n Generate an `~astropy.io.fits.HDUList` object with all of the\n information stored in this object. This should be logically identical\n to the input FITS file, but it will be normalized in a number of ways.\n\n See `to_header` for some warnings about the output produced.\n\n Parameters\n ----------\n\n relax : bool or int, optional\n Degree of permissiveness:\n\n - `False` (default): Write all extensions that are\n considered to be safe and recommended.\n\n - `True`: Write all recognized informal extensions of the\n WCS standard.\n\n - `int`: a bit field selecting specific extensions to\n write. See :ref:`astropy:relaxwrite` for details.\n\n key : str\n The name of a particular WCS transform to use. This may be\n either ``' '`` or ``'A'``-``'Z'`` and corresponds to the ``\"a\"``\n part of the ``CTYPEia`` cards.\n\n Returns\n -------\n hdulist : `~astropy.io.fits.HDUList`\n \"\"\"\n\n header = self.to_header(relax=relax, key=key)\n\n hdu = fits.PrimaryHDU(header=header)\n hdulist = fits.HDUList(hdu)\n\n self._write_det2im(hdulist)\n self._write_distortion_kw(hdulist)\n\n return hdulist\n\n def to_header(self, relax=None, key=None):\n \"\"\"Generate an `astropy.io.fits.Header` object with the basic WCS\n and SIP information stored in this object. This should be\n logically identical to the input FITS file, but it will be\n normalized in a number of ways.\n\n .. warning::\n\n This function does not write out FITS WCS `distortion\n paper`_ information, since that requires multiple FITS\n header data units. To get a full representation of\n everything in this object, use `to_fits`.\n\n Parameters\n ----------\n relax : bool or int, optional\n Degree of permissiveness:\n\n - `False` (default): Write all extensions that are\n considered to be safe and recommended.\n\n - `True`: Write all recognized informal extensions of the\n WCS standard.\n\n - `int`: a bit field selecting specific extensions to\n write. See :ref:`astropy:relaxwrite` for details.\n\n If the ``relax`` keyword argument is not given and any\n keywords were omitted from the output, an\n `~astropy.utils.exceptions.AstropyWarning` is displayed.\n To override this, explicitly pass a value to ``relax``.\n\n key : str\n The name of a particular WCS transform to use. 
This may be\n either ``' '`` or ``'A'``-``'Z'`` and corresponds to the ``\"a\"``\n part of the ``CTYPEia`` cards.\n\n Returns\n -------\n header : `astropy.io.fits.Header`\n\n Notes\n -----\n The output header will almost certainly differ from the input in a\n number of respects:\n\n 1. The output header only contains WCS-related keywords. In\n particular, it does not contain syntactically-required\n keywords such as ``SIMPLE``, ``NAXIS``, ``BITPIX``, or\n ``END``.\n\n 2. Deprecated (e.g. ``CROTAn``) or non-standard usage will\n be translated to standard (this is partially dependent on\n whether ``fix`` was applied).\n\n 3. Quantities will be converted to the units used internally,\n basically SI with the addition of degrees.\n\n 4. Floating-point quantities may be given to a different decimal\n precision.\n\n 5. Elements of the ``PCi_j`` matrix will be written if and\n only if they differ from the unit matrix. Thus, if the\n matrix is unity then no elements will be written.\n\n 6. Additional keywords such as ``WCSAXES``, ``CUNITia``,\n ``LONPOLEa`` and ``LATPOLEa`` may appear.\n\n 7. The original keycomments will be lost, although\n `to_header` tries hard to write meaningful comments.\n\n 8. Keyword order may be changed.\n\n \"\"\"\n # default precision for numerical WCS keywords\n precision = WCSHDO_P14 # Defined by C-ext # noqa: F821\n display_warning = False\n if relax is None:\n display_warning = True\n relax = False\n\n if relax not in (True, False):\n do_sip = relax & WCSHDO_SIP\n relax &= ~WCSHDO_SIP\n else:\n do_sip = relax\n relax = WCSHDO_all if relax is True else WCSHDO_safe # Defined by C-ext # noqa: F821\n\n relax = precision | relax\n\n if self.wcs is not None:\n if key is not None:\n orig_key = self.wcs.alt\n self.wcs.alt = key\n header_string = self.wcs.to_header(relax)\n header = fits.Header.fromstring(header_string)\n keys_to_remove = [\"\", \" \", \"COMMENT\"]\n for kw in keys_to_remove:\n if kw in header:\n del header[kw]\n # Check if we can handle TPD distortion correctly\n if int(_parsed_version[0]) * 10 + int(_parsed_version[1]) < 71:\n for kw, val in header.items():\n if kw[:5] in ('CPDIS', 'CQDIS') and val == 'TPD':\n warnings.warn(\n f\"WCS contains a TPD distortion model in {kw}. 
WCSLIB \"\n f\"{_wcs.__version__} is writing this in a format incompatible with \"\n f\"current versions - please update to 7.4 or use the bundled WCSLIB.\",\n AstropyWarning)\n elif int(_parsed_version[0]) * 10 + int(_parsed_version[1]) < 74:\n for kw, val in header.items():\n if kw[:5] in ('CPDIS', 'CQDIS') and val == 'TPD':\n warnings.warn(\n f\"WCS contains a TPD distortion model in {kw}, which requires WCSLIB \"\n f\"7.4 or later to store in a FITS header (having {_wcs.__version__}).\",\n AstropyWarning)\n else:\n header = fits.Header()\n\n if do_sip and self.sip is not None:\n if self.wcs is not None and any(not ctyp.endswith('-SIP') for ctyp in self.wcs.ctype):\n self._fix_ctype(header, add_sip=True)\n\n for kw, val in self._write_sip_kw().items():\n header[kw] = val\n\n if not do_sip and self.wcs is not None and any(self.wcs.ctype) and self.sip is not None:\n # This is called when relax is not False or WCSHDO_SIP\n # The default case of ``relax=None`` is handled further in the code.\n header = self._fix_ctype(header, add_sip=False)\n\n if display_warning:\n full_header = self.to_header(relax=True, key=key)\n missing_keys = []\n for kw, val in full_header.items():\n if kw not in header:\n missing_keys.append(kw)\n\n if len(missing_keys):\n warnings.warn(\n \"Some non-standard WCS keywords were excluded: {} \"\n \"Use the ``relax`` kwarg to control this.\".format(\n ', '.join(missing_keys)),\n AstropyWarning)\n # called when ``relax=None``\n # This is different from the case of ``relax=False``.\n if any(self.wcs.ctype) and self.sip is not None:\n header = self._fix_ctype(header, add_sip=False, log_message=False)\n # Finally reset the key. This must be called after ``_fix_ctype``.\n if key is not None:\n self.wcs.alt = orig_key\n return header\n\n def _fix_ctype(self, header, add_sip=True, log_message=True):\n \"\"\"\n Parameters\n ----------\n header : `~astropy.io.fits.Header`\n FITS header.\n add_sip : bool\n Flag indicating whether \"-SIP\" should be added or removed from CTYPE keywords.\n\n Remove \"-SIP\" from CTYPE when writing out a header with relax=False.\n This needs to be done outside ``to_header`` because ``to_header`` runs\n twice when ``relax=False`` and the second time ``relax`` is set to ``True``\n to display the missing keywords.\n\n If the user requested SIP distortion to be written out add \"-SIP\" to\n CTYPE if it is missing.\n \"\"\"\n\n _add_sip_to_ctype = \"\"\"\n Inconsistent SIP distortion information is present in the current WCS:\n SIP coefficients were detected, but CTYPE is missing \"-SIP\" suffix,\n therefore the current WCS is internally inconsistent.\n\n Because relax has been set to True, the resulting output WCS will have\n \"-SIP\" appended to CTYPE in order to make the header internally consistent.\n\n However, this may produce incorrect astrometry in the output WCS, if\n in fact the current WCS is already distortion-corrected.\n\n Therefore, if current WCS is already distortion-corrected (eg, drizzled)\n then SIP distortion components should not apply. 
In that case, for a WCS\n that is already distortion-corrected, please remove the SIP coefficients\n from the header.\n\n \"\"\"\n if log_message:\n if add_sip:\n log.info(_add_sip_to_ctype)\n for i in range(1, self.naxis+1):\n # strip() must be called here to cover the case of alt key= \" \"\n kw = f'CTYPE{i}{self.wcs.alt}'.strip()\n if kw in header:\n if add_sip:\n val = header[kw].strip(\"-SIP\") + \"-SIP\"\n else:\n val = header[kw].strip(\"-SIP\")\n header[kw] = val\n else:\n continue\n return header\n\n def to_header_string(self, relax=None):\n \"\"\"\n Identical to `to_header`, but returns a string containing the\n header cards.\n \"\"\"\n return str(self.to_header(relax))\n\n def footprint_to_file(self, filename='footprint.reg', color='green',\n width=2, coordsys=None):\n \"\"\"\n Writes out a `ds9`_ style regions file. It can be loaded\n directly by `ds9`_.\n\n Parameters\n ----------\n filename : str, optional\n Output file name - default is ``'footprint.reg'``\n\n color : str, optional\n Color to use when plotting the line.\n\n width : int, optional\n Width of the region line.\n\n coordsys : str, optional\n Coordinate system. If not specified (default), the ``radesys``\n value is used. For all possible values, see\n http://ds9.si.edu/doc/ref/region.html#RegionFileFormat\n\n \"\"\"\n comments = ('# Region file format: DS9 version 4.0 \\n'\n '# global color=green font=\"helvetica 12 bold '\n 'select=1 highlite=1 edit=1 move=1 delete=1 '\n 'include=1 fixed=0 source\\n')\n\n coordsys = coordsys or self.wcs.radesys\n\n if coordsys not in ('PHYSICAL', 'IMAGE', 'FK4', 'B1950', 'FK5',\n 'J2000', 'GALACTIC', 'ECLIPTIC', 'ICRS', 'LINEAR',\n 'AMPLIFIER', 'DETECTOR'):\n raise ValueError(\"Coordinate system '{}' is not supported. A valid\"\n \" one can be given with the 'coordsys' argument.\"\n .format(coordsys))\n\n with open(filename, mode='w') as f:\n f.write(comments)\n f.write(f'{coordsys}\\n')\n f.write('polygon(')\n ftpr = self.calc_footprint()\n if ftpr is not None:\n ftpr.tofile(f, sep=',')\n f.write(f') # color={color}, width={width:d} \\n')\n\n def _get_naxis(self, header=None):\n _naxis = []\n if (header is not None and\n not isinstance(header, (str, bytes))):\n for naxis in itertools.count(1):\n try:\n _naxis.append(header[f'NAXIS{naxis}'])\n except KeyError:\n break\n if len(_naxis) == 0:\n _naxis = [0, 0]\n elif len(_naxis) == 1:\n _naxis.append(0)\n self._naxis = _naxis\n\n def printwcs(self):\n print(repr(self))\n\n def __repr__(self):\n '''\n Return a short description. 
Simply porting the behavior from\n the `printwcs()` method.\n '''\n description = [\"WCS Keywords\\n\",\n f\"Number of WCS axes: {self.naxis!r}\"]\n sfmt = ' : ' + \"\".join([\"{\"+f\"{i}\"+\"!r} \" for i in range(self.naxis)])\n\n keywords = ['CTYPE', 'CRVAL', 'CRPIX']\n values = [self.wcs.ctype, self.wcs.crval, self.wcs.crpix]\n for keyword, value in zip(keywords, values):\n description.append(keyword+sfmt.format(*value))\n\n if hasattr(self.wcs, 'pc'):\n for i in range(self.naxis):\n s = ''\n for j in range(self.naxis):\n s += ''.join(['PC', str(i+1), '_', str(j+1), ' '])\n s += sfmt\n description.append(s.format(*self.wcs.pc[i]))\n s = 'CDELT' + sfmt\n description.append(s.format(*self.wcs.cdelt))\n elif hasattr(self.wcs, 'cd'):\n for i in range(self.naxis):\n s = ''\n for j in range(self.naxis):\n s += \"\".join(['CD', str(i+1), '_', str(j+1), ' '])\n s += sfmt\n description.append(s.format(*self.wcs.cd[i]))\n\n description.append(f\"NAXIS : {' '.join(map(str, self._naxis))}\")\n return '\\n'.join(description)\n\n def get_axis_types(self):\n \"\"\"\n Similar to `self.wcsprm.axis_types <astropy.wcs.Wcsprm.axis_types>`\n but provides the information in a more Python-friendly format.\n\n Returns\n -------\n result : list of dict\n\n Returns a list of dictionaries, one for each axis, each\n containing attributes about the type of that axis.\n\n Each dictionary has the following keys:\n\n - 'coordinate_type':\n\n - None: Non-specific coordinate type.\n\n - 'stokes': Stokes coordinate.\n\n - 'celestial': Celestial coordinate (including ``CUBEFACE``).\n\n - 'spectral': Spectral coordinate.\n\n - 'scale':\n\n - 'linear': Linear axis.\n\n - 'quantized': Quantized axis (``STOKES``, ``CUBEFACE``).\n\n - 'non-linear celestial': Non-linear celestial axis.\n\n - 'non-linear spectral': Non-linear spectral axis.\n\n - 'logarithmic': Logarithmic axis.\n\n - 'tabular': Tabular axis.\n\n - 'group'\n\n - Group number, e.g. lookup table number\n\n - 'number'\n\n - For celestial axes:\n\n - 0: Longitude coordinate.\n\n - 1: Latitude coordinate.\n\n - 2: ``CUBEFACE`` number.\n\n - For lookup tables:\n\n - the axis number in a multidimensional table.\n\n ``CTYPEia`` in ``\"4-3\"`` form with unrecognized algorithm code will\n generate an error.\n \"\"\"\n if self.wcs is None:\n raise AttributeError(\n \"This WCS object does not have a wcsprm object.\")\n\n coordinate_type_map = {\n 0: None,\n 1: 'stokes',\n 2: 'celestial',\n 3: 'spectral'}\n\n scale_map = {\n 0: 'linear',\n 1: 'quantized',\n 2: 'non-linear celestial',\n 3: 'non-linear spectral',\n 4: 'logarithmic',\n 5: 'tabular'}\n\n result = []\n for axis_type in self.wcs.axis_types:\n subresult = {}\n\n coordinate_type = (axis_type // 1000) % 10\n subresult['coordinate_type'] = coordinate_type_map[coordinate_type]\n\n scale = (axis_type // 100) % 10\n subresult['scale'] = scale_map[scale]\n\n group = (axis_type // 10) % 10\n subresult['group'] = group\n\n number = axis_type % 10\n subresult['number'] = number\n\n result.append(subresult)\n\n return result\n\n def __reduce__(self):\n \"\"\"\n Support pickling of WCS objects. 
This is done by serializing\n to an in-memory FITS file and dumping that as a string.\n \"\"\"\n\n hdulist = self.to_fits(relax=True)\n\n buffer = io.BytesIO()\n hdulist.writeto(buffer)\n\n dct = self.__dict__.copy()\n dct['_alt_wcskey'] = self.wcs.alt\n\n return (__WCS_unpickle__,\n (self.__class__, dct, buffer.getvalue(),))\n\n def dropaxis(self, dropax):\n \"\"\"\n Remove an axis from the WCS.\n\n Parameters\n ----------\n wcs : `~astropy.wcs.WCS`\n The WCS with naxis to be chopped to naxis-1\n dropax : int\n The index of the WCS to drop, counting from 0 (i.e., python convention,\n not FITS convention)\n\n Returns\n -------\n `~astropy.wcs.WCS`\n A new `~astropy.wcs.WCS` instance with one axis fewer\n \"\"\"\n inds = list(range(self.wcs.naxis))\n inds.pop(dropax)\n\n # axis 0 has special meaning to sub\n # if wcs.wcs.ctype == ['RA','DEC','VLSR'], you want\n # wcs.sub([1,2]) to get 'RA','DEC' back\n return self.sub([i+1 for i in inds])\n\n def swapaxes(self, ax0, ax1):\n \"\"\"\n Swap axes in a WCS.\n\n Parameters\n ----------\n wcs : `~astropy.wcs.WCS`\n The WCS to have its axes swapped\n ax0 : int\n ax1 : int\n The indices of the WCS to be swapped, counting from 0 (i.e., python\n convention, not FITS convention)\n\n Returns\n -------\n `~astropy.wcs.WCS`\n A new `~astropy.wcs.WCS` instance with the same number of axes,\n but two swapped\n \"\"\"\n inds = list(range(self.wcs.naxis))\n inds[ax0], inds[ax1] = inds[ax1], inds[ax0]\n\n return self.sub([i+1 for i in inds])\n\n def reorient_celestial_first(self):\n \"\"\"\n Reorient the WCS such that the celestial axes are first, followed by\n the spectral axis, followed by any others.\n Assumes at least celestial axes are present.\n \"\"\"\n return self.sub([WCSSUB_CELESTIAL, WCSSUB_SPECTRAL, WCSSUB_STOKES, WCSSUB_TIME]) # Defined by C-ext # noqa: F821 E501\n\n def slice(self, view, numpy_order=True):\n \"\"\"\n Slice a WCS instance using a Numpy slice. The order of the slice should\n be reversed (as for the data) compared to the natural WCS order.\n\n Parameters\n ----------\n view : tuple\n A tuple containing the same number of slices as the WCS system.\n The ``step`` method, the third argument to a slice, is not\n presently supported.\n numpy_order : bool\n Use numpy order, i.e. slice the WCS so that an identical slice\n applied to a numpy array will slice the array and WCS in the same\n way. 
If set to `False`, the WCS will be sliced in FITS order,\n meaning the first slice will be applied to the *last* numpy index\n but the *first* WCS axis.\n\n Returns\n -------\n wcs_new : `~astropy.wcs.WCS`\n A new resampled WCS axis\n \"\"\"\n if hasattr(view, '__len__') and len(view) > self.wcs.naxis:\n raise ValueError(\"Must have # of slices <= # of WCS axes\")\n elif not hasattr(view, '__len__'): # view MUST be an iterable\n view = [view]\n\n if not all(isinstance(x, slice) for x in view):\n # We need to drop some dimensions, but this may not always be\n # possible with .sub due to correlated axes, so instead we use the\n # generalized slicing infrastructure from astropy.wcs.wcsapi.\n return SlicedFITSWCS(self, view)\n\n # NOTE: we could in principle use SlicedFITSWCS as above for all slicing,\n # but in the simple case where there are no axes dropped, we can just\n # create a full WCS object with updated WCS parameters which is faster\n # for this specific case and also backward-compatible.\n\n wcs_new = self.deepcopy()\n if wcs_new.sip is not None:\n sip_crpix = wcs_new.sip.crpix.tolist()\n\n for i, iview in enumerate(view):\n if iview.step is not None and iview.step < 0:\n raise NotImplementedError(\"Reversing an axis is not \"\n \"implemented.\")\n\n if numpy_order:\n wcs_index = self.wcs.naxis - 1 - i\n else:\n wcs_index = i\n\n if iview.step is not None and iview.start is None:\n # Slice from \"None\" is equivalent to slice from 0 (but one\n # might want to downsample, so allow slices with\n # None,None,step or None,stop,step)\n iview = slice(0, iview.stop, iview.step)\n\n if iview.start is not None:\n if iview.step not in (None, 1):\n crpix = self.wcs.crpix[wcs_index]\n cdelt = self.wcs.cdelt[wcs_index]\n # equivalently (keep this comment so you can compare eqns):\n # wcs_new.wcs.crpix[wcs_index] =\n # (crpix - iview.start)*iview.step + 0.5 - iview.step/2.\n crp = ((crpix - iview.start - 1.)/iview.step\n + 0.5 + 1./iview.step/2.)\n wcs_new.wcs.crpix[wcs_index] = crp\n if wcs_new.sip is not None:\n sip_crpix[wcs_index] = crp\n wcs_new.wcs.cdelt[wcs_index] = cdelt * iview.step\n else:\n wcs_new.wcs.crpix[wcs_index] -= iview.start\n if wcs_new.sip is not None:\n sip_crpix[wcs_index] -= iview.start\n\n try:\n # range requires integers but the other attributes can also\n # handle arbitrary values, so this needs to be in a try/except.\n nitems = len(builtins.range(self._naxis[wcs_index])[iview])\n except TypeError as exc:\n if 'indices must be integers' not in str(exc):\n raise\n warnings.warn(\"NAXIS{} attribute is not updated because at \"\n \"least one index ('{}') is no integer.\"\n \"\".format(wcs_index, iview), AstropyUserWarning)\n else:\n wcs_new._naxis[wcs_index] = nitems\n\n if wcs_new.sip is not None:\n wcs_new.sip = Sip(self.sip.a, self.sip.b, self.sip.ap, self.sip.bp,\n sip_crpix)\n\n return wcs_new\n\n def __getitem__(self, item):\n # \"getitem\" is a shortcut for self.slice; it is very limited\n # there is no obvious and unambiguous interpretation of wcs[1,2,3]\n # We COULD allow wcs[1] to link to wcs.sub([2])\n # (wcs[i] -> wcs.sub([i+1])\n return self.slice(item)\n\n def __iter__(self):\n # Having __getitem__ makes Python think WCS is iterable. 
However,\n # Python first checks whether __iter__ is present, so we can raise an\n # exception here.\n raise TypeError(f\"'{self.__class__.__name__}' object is not iterable\")\n\n @property\n def axis_type_names(self):\n \"\"\"\n World names for each coordinate axis\n\n Returns\n -------\n list of str\n A list of names along each axis.\n \"\"\"\n names = list(self.wcs.cname)\n types = self.wcs.ctype\n for i in range(len(names)):\n if len(names[i]) > 0:\n continue\n names[i] = types[i].split('-')[0]\n return names\n\n @property\n def celestial(self):\n \"\"\"\n A copy of the current WCS with only the celestial axes included\n \"\"\"\n return self.sub([WCSSUB_CELESTIAL]) # Defined by C-ext # noqa: F821\n\n @property\n def is_celestial(self):\n return self.has_celestial and self.naxis == 2\n\n @property\n def has_celestial(self):\n try:\n return self.wcs.lng >= 0 and self.wcs.lat >= 0\n except InconsistentAxisTypesError:\n return False\n\n @property\n def spectral(self):\n \"\"\"\n A copy of the current WCS with only the spectral axes included\n \"\"\"\n return self.sub([WCSSUB_SPECTRAL]) # Defined by C-ext # noqa: F821\n\n @property\n def is_spectral(self):\n return self.has_spectral and self.naxis == 1\n\n @property\n def has_spectral(self):\n try:\n return self.wcs.spec >= 0\n except InconsistentAxisTypesError:\n return False\n\n @property\n def has_distortion(self):\n \"\"\"\n Returns `True` if any distortion terms are present.\n \"\"\"\n return (self.sip is not None or\n self.cpdis1 is not None or self.cpdis2 is not None or\n self.det2im1 is not None and self.det2im2 is not None)\n\n @property\n def pixel_scale_matrix(self):\n\n try:\n cdelt = np.diag(self.wcs.get_cdelt())\n pc = self.wcs.get_pc()\n except InconsistentAxisTypesError:\n try:\n # for non-celestial axes, get_cdelt doesn't work\n with warnings.catch_warnings():\n warnings.filterwarnings(\n 'ignore', 'cdelt will be ignored since cd is present', RuntimeWarning)\n cdelt = np.dot(self.wcs.cd, np.diag(self.wcs.cdelt))\n except AttributeError:\n cdelt = np.diag(self.wcs.cdelt)\n\n try:\n pc = self.wcs.pc\n except AttributeError:\n pc = 1\n\n pccd = np.dot(cdelt, pc)\n\n return pccd\n\n def footprint_contains(self, coord, **kwargs):\n \"\"\"\n Determines if a given SkyCoord is contained in the wcs footprint.\n\n Parameters\n ----------\n coord : `~astropy.coordinates.SkyCoord`\n The coordinate to check if it is within the wcs coordinate.\n **kwargs :\n Additional arguments to pass to `~astropy.coordinates.SkyCoord.to_pixel`\n\n Returns\n -------\n response : bool\n True means the WCS footprint contains the coordinate, False means it does not.\n \"\"\"\n\n return coord.contained_by(self, **kwargs)\n\n\ndef __WCS_unpickle__(cls, dct, fits_data):\n \"\"\"\n Unpickles a WCS object from a serialized FITS string.\n \"\"\"\n\n self = cls.__new__(cls)\n\n buffer = io.BytesIO(fits_data)\n hdulist = fits.open(buffer)\n\n naxis = dct.pop('naxis', None)\n if naxis:\n hdulist[0].header['naxis'] = naxis\n naxes = dct.pop('_naxis', [])\n for k, na in enumerate(naxes):\n hdulist[0].header[f'naxis{k + 1:d}'] = na\n\n kwargs = dct.pop('_init_kwargs', {})\n self.__dict__.update(dct)\n\n wcskey = dct.pop('_alt_wcskey', ' ')\n WCS.__init__(self, hdulist[0].header, hdulist, key=wcskey, **kwargs)\n self.pixel_bounds = dct.get('_pixel_bounds', None)\n\n return self\n\n\ndef find_all_wcs(header, relax=True, keysel=None, fix=True,\n translate_units='',\n _do_set=True):\n \"\"\"\n Find all the WCS transformations in the given header.\n\n Parameters\n 
----------\n header : str or `~astropy.io.fits.Header` object.\n\n relax : bool or int, optional\n Degree of permissiveness:\n\n - `True` (default): Admit all recognized informal extensions of the\n WCS standard.\n\n - `False`: Recognize only FITS keywords defined by the\n published WCS standard.\n\n - `int`: a bit field selecting specific extensions to accept.\n See :ref:`astropy:relaxread` for details.\n\n keysel : sequence of str, optional\n A list of flags used to select the keyword types considered by\n wcslib. When ``None``, only the standard image header\n keywords are considered (and the underlying wcspih() C\n function is called). To use binary table image array or pixel\n list keywords, *keysel* must be set.\n\n Each element in the list should be one of the following strings:\n\n - 'image': Image header keywords\n\n - 'binary': Binary table image array keywords\n\n - 'pixel': Pixel list keywords\n\n Keywords such as ``EQUIna`` or ``RFRQna`` that are common to\n binary table image arrays and pixel lists (including\n ``WCSNna`` and ``TWCSna``) are selected by both 'binary' and\n 'pixel'.\n\n fix : bool, optional\n When `True` (default), call `~astropy.wcs.Wcsprm.fix` on\n the resulting objects to fix any non-standard uses in the\n header. `FITSFixedWarning` warnings will be emitted if any\n changes were made.\n\n translate_units : str, optional\n Specify which potentially unsafe translations of non-standard\n unit strings to perform. By default, performs none. See\n `WCS.fix` for more information about this parameter. Only\n effective when ``fix`` is `True`.\n\n Returns\n -------\n wcses : list of `WCS`\n \"\"\"\n\n if isinstance(header, (str, bytes)):\n header_string = header\n elif isinstance(header, fits.Header):\n header_string = header.tostring()\n else:\n raise TypeError(\n \"header must be a string or astropy.io.fits.Header object\")\n\n keysel_flags = _parse_keysel(keysel)\n\n if isinstance(header_string, str):\n header_bytes = header_string.encode('ascii')\n else:\n header_bytes = header_string\n\n wcsprms = _wcs.find_all_wcs(header_bytes, relax, keysel_flags)\n\n result = []\n for wcsprm in wcsprms:\n subresult = WCS(fix=False, _do_set=False)\n subresult.wcs = wcsprm\n result.append(subresult)\n\n if fix:\n subresult.fix(translate_units)\n\n if _do_set:\n subresult.wcs.set()\n\n return result\n\n\ndef validate(source):\n \"\"\"\n Prints a WCS validation report for the given FITS file.\n\n Parameters\n ----------\n source : str or file-like or `~astropy.io.fits.HDUList`\n The FITS file to validate.\n\n Returns\n -------\n results : list subclass instance\n The result is returned as nested lists. The first level\n corresponds to the HDUs in the given file. The next level has\n an entry for each WCS found in that header. 
The special\n subclass of list will pretty-print the results as a table when\n printed.\n\n \"\"\"\n class _WcsValidateWcsResult(list):\n def __init__(self, key):\n self._key = key\n\n def __repr__(self):\n result = [f\" WCS key '{self._key or ' '}':\"]\n if len(self):\n for entry in self:\n for i, line in enumerate(entry.splitlines()):\n if i == 0:\n initial_indent = ' - '\n else:\n initial_indent = ' '\n result.extend(\n textwrap.wrap(\n line,\n initial_indent=initial_indent,\n subsequent_indent=' '))\n else:\n result.append(\" No issues.\")\n return '\\n'.join(result)\n\n class _WcsValidateHduResult(list):\n def __init__(self, hdu_index, hdu_name):\n self._hdu_index = hdu_index\n self._hdu_name = hdu_name\n list.__init__(self)\n\n def __repr__(self):\n if len(self):\n if self._hdu_name:\n hdu_name = f' ({self._hdu_name})'\n else:\n hdu_name = ''\n result = [f'HDU {self._hdu_index}{hdu_name}:']\n for wcs in self:\n result.append(repr(wcs))\n return '\\n'.join(result)\n return ''\n\n class _WcsValidateResults(list):\n def __repr__(self):\n result = []\n for hdu in self:\n content = repr(hdu)\n if len(content):\n result.append(content)\n return '\\n\\n'.join(result)\n\n global __warningregistry__\n\n if isinstance(source, fits.HDUList):\n hdulist = source\n else:\n hdulist = fits.open(source)\n\n results = _WcsValidateResults()\n\n for i, hdu in enumerate(hdulist):\n hdu_results = _WcsValidateHduResult(i, hdu.name)\n results.append(hdu_results)\n\n with warnings.catch_warnings(record=True) as warning_lines:\n wcses = find_all_wcs(\n hdu.header, relax=_wcs.WCSHDR_reject,\n fix=False, _do_set=False)\n\n for wcs in wcses:\n wcs_results = _WcsValidateWcsResult(wcs.wcs.alt)\n hdu_results.append(wcs_results)\n\n try:\n del __warningregistry__\n except NameError:\n pass\n\n with warnings.catch_warnings(record=True) as warning_lines:\n warnings.resetwarnings()\n warnings.simplefilter(\n \"always\", FITSFixedWarning, append=True)\n\n try:\n WCS(hdu.header,\n key=wcs.wcs.alt or ' ',\n relax=_wcs.WCSHDR_reject,\n fix=True, _do_set=False)\n except WcsError as e:\n wcs_results.append(str(e))\n\n wcs_results.extend([str(x.message) for x in warning_lines])\n\n return results\n", "docs/changes/wcs/13094.feature.rst": null}
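The slicing behaviour described in the embedded ``wcs.py`` source above (numpy-order slices that shift ``CRPIX`` and update the stored ``NAXISn`` values) can be exercised with a minimal sketch like the one below; the header values and image size are made up purely for illustration:

    from astropy.wcs import WCS

    w = WCS(naxis=2)
    w.wcs.ctype = ['RA---TAN', 'DEC--TAN']
    w.wcs.crpix = [50., 50.]
    w.wcs.cdelt = [0.1, 0.1]
    w.pixel_shape = (100, 100)      # hypothetical image size so NAXISn bookkeeping is updated

    # Numpy order (the default): the first slice applies to the last WCS axis.
    w_cut = w[10:60, 20:80]
    print(w_cut.wcs.crpix)          # [30. 40.] -- CRPIX shifted by the slice starts
    print(w_cut.pixel_shape)        # (60, 50)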
|
diff --git a/docs/changes/wcs/13094.feature.rst b/docs/changes/wcs/13094.feature.rst
new file mode 100644
index 000000000000..e6b718e0a4e0
--- /dev/null
+++ b/docs/changes/wcs/13094.feature.rst
@@ -0,0 +1,2 @@
+Add ``temporal`` properties for convenient access of/selection of/testing for
+the ``TIME`` axis introduced in WCSLIB version 7.8.
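For orientation, a hedged sketch of how the ``temporal`` accessors added by this change (listed in the component summary that follows) mirror the existing ``celestial``/``spectral`` properties shown in the source blob above; the ``CTYPE`` value is illustrative, and recognition of the TIME axis assumes WCSLIB 7.8+ together with this patch:

    from astropy.wcs import WCS

    w = WCS(naxis=3)
    w.wcs.ctype = ['RA---TAN', 'DEC--TAN', 'TIME']   # assumed: third axis is a time axis
    w.wcs.set()

    if w.has_temporal:         # analogous to has_celestial / has_spectral
        tw = w.temporal        # sub-WCS containing only the time axis
        assert tw.is_temporal  # single temporal axis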
|
{"astropy/wcs/wcs.py": [{"type": "function", "name": "WCS.temporal", "lines": [3230, 3234], "signature": "def temporal(self):", "doc": "A copy of the current WCS with only the time axes included"}, {"type": "function", "name": "WCS.is_temporal", "lines": [3237, 3238], "signature": "def is_temporal(self):", "doc": ""}, {"type": "function", "name": "WCS.has_temporal", "lines": [3241, 3242], "signature": "def has_temporal(self):", "doc": ""}]}
|
5.0
|
["astropy/wcs/tests/test_wcs.py::test_temporal"]
|
["astropy/wcs/tests/test_wcs.py::test_fixes", "astropy/wcs/tests/test_wcs.py::test_outside_sky", "astropy/wcs/tests/test_wcs.py::test_pix2world", "astropy/wcs/tests/test_wcs.py::test_load_fits_path", "astropy/wcs/tests/test_wcs.py::test_dict_init", "astropy/wcs/tests/test_wcs.py::test_extra_kwarg", "astropy/wcs/tests/test_wcs.py::test_3d_shapes", "astropy/wcs/tests/test_wcs.py::test_preserve_shape", "astropy/wcs/tests/test_wcs.py::test_broadcasting", "astropy/wcs/tests/test_wcs.py::test_shape_mismatch", "astropy/wcs/tests/test_wcs.py::test_invalid_shape", "astropy/wcs/tests/test_wcs.py::test_warning_about_defunct_keywords", "astropy/wcs/tests/test_wcs.py::test_warning_about_defunct_keywords_exception", "astropy/wcs/tests/test_wcs.py::test_to_header_string", "astropy/wcs/tests/test_wcs.py::test_to_fits", "astropy/wcs/tests/test_wcs.py::test_to_header_warning", "astropy/wcs/tests/test_wcs.py::test_no_comments_in_header", "astropy/wcs/tests/test_wcs.py::test_find_all_wcs_crash", "astropy/wcs/tests/test_wcs.py::test_validate", "astropy/wcs/tests/test_wcs.py::test_validate_with_2_wcses", "astropy/wcs/tests/test_wcs.py::test_crpix_maps_to_crval", "astropy/wcs/tests/test_wcs.py::test_all_world2pix", "astropy/wcs/tests/test_wcs.py::test_scamp_sip_distortion_parameters", "astropy/wcs/tests/test_wcs.py::test_fixes2", "astropy/wcs/tests/test_wcs.py::test_unit_normalization", "astropy/wcs/tests/test_wcs.py::test_footprint_to_file", "astropy/wcs/tests/test_wcs.py::test_validate_faulty_wcs", "astropy/wcs/tests/test_wcs.py::test_error_message", "astropy/wcs/tests/test_wcs.py::test_out_of_bounds", "astropy/wcs/tests/test_wcs.py::test_calc_footprint_1", "astropy/wcs/tests/test_wcs.py::test_calc_footprint_2", "astropy/wcs/tests/test_wcs.py::test_calc_footprint_3", "astropy/wcs/tests/test_wcs.py::test_sip", "astropy/wcs/tests/test_wcs.py::test_sub_3d_with_sip", "astropy/wcs/tests/test_wcs.py::test_printwcs", "astropy/wcs/tests/test_wcs.py::test_invalid_spherical", "astropy/wcs/tests/test_wcs.py::test_no_iteration", "astropy/wcs/tests/test_wcs.py::test_sip_tpv_agreement", "astropy/wcs/tests/test_wcs.py::test_tpv_copy", "astropy/wcs/tests/test_wcs.py::test_hst_wcs", "astropy/wcs/tests/test_wcs.py::test_cpdis_comments", "astropy/wcs/tests/test_wcs.py::test_d2im_comments", "astropy/wcs/tests/test_wcs.py::test_sip_broken", "astropy/wcs/tests/test_wcs.py::test_no_truncate_crval", "astropy/wcs/tests/test_wcs.py::test_no_truncate_crval_try2", "astropy/wcs/tests/test_wcs.py::test_no_truncate_crval_p17", "astropy/wcs/tests/test_wcs.py::test_no_truncate_using_compare", "astropy/wcs/tests/test_wcs.py::test_passing_ImageHDU", "astropy/wcs/tests/test_wcs.py::test_inconsistent_sip", "astropy/wcs/tests/test_wcs.py::test_bounds_check", "astropy/wcs/tests/test_wcs.py::test_naxis", "astropy/wcs/tests/test_wcs.py::test_sip_with_altkey", "astropy/wcs/tests/test_wcs.py::test_to_fits_1", "astropy/wcs/tests/test_wcs.py::test_keyedsip", "astropy/wcs/tests/test_wcs.py::test_zero_size_input", "astropy/wcs/tests/test_wcs.py::test_scalar_inputs", "astropy/wcs/tests/test_wcs.py::test_footprint_contains", "astropy/wcs/tests/test_wcs.py::test_cunit", "astropy/wcs/tests/test_wcs.py::test_invalid_coordinate_masking", "astropy/wcs/tests/test_wcs.py::test_no_pixel_area", "astropy/wcs/tests/test_wcs.py::test_distortion_header", "astropy/wcs/tests/test_wcs.py::test_pixlist_wcs_colsel", "astropy/wcs/tests/test_wcs.py::test_time_axis_selection"]
|
7cbba866a8c5749b90a5cb4f9877ddfad2d36037
|
{"first_commit_time": 1649160155.0, "pr_title": "Add wcs temporal properties", "pr_body": "### Description\r\nThis PR add \"temporal\" properties to access 'TIME' axis in the WCS. This is the second part of the original PR - https://github.com/astropy/astropy/pull/13062\r\n\r\n### Checklist for package maintainer(s)\r\n\r\nThis checklist is meant to remind the package maintainer(s) who will review this pull request of some common things to look for. This list is not exhaustive.\r\n\r\n- [ ] Do the proposed changes actually accomplish desired goals?\r\n- [ ] Do the proposed changes follow the [Astropy coding guidelines](https://docs.astropy.org/en/latest/development/codeguide.html)?\r\n- [ ] Are tests added/updated as required? If so, do they follow the [Astropy testing guidelines](https://docs.astropy.org/en/latest/development/testguide.html)?\r\n- [ ] Are docs added/updated as required? If so, do they follow the [Astropy documentation guidelines](https://docs.astropy.org/en/latest/development/docguide.html#astropy-documentation-rules-and-guidelines)?\r\n- [ ] Is rebase and/or squash necessary? If so, please provide the author with appropriate instructions. Also see [\"When to rebase and squash commits\"](https://docs.astropy.org/en/latest/development/when_to_rebase.html).\r\n- [ ] Did the CI pass? If no, are the failures related? If you need to run daily and weekly cron jobs as part of the PR, please apply the `Extra CI` label.\r\n- [ ] Is a change log needed? If yes, did the change log check pass? If no, add the `no-changelog-entry-needed` label. If this is a manual backport, use the `skip-changelog-checks` label unless special changelog handling is necessary.\r\n- [ ] Is this a big PR that makes a \"What's new?\" entry worthwhile and if so, is (1) a \"what's new\" entry included in this PR and (2) the \"whatsnew-needed\" label applied?\r\n- [ ] Is a milestone set? Milestone must be set but `astropy-bot` check might be missing; do not let the green checkmark fool you.\r\n- [ ] At the time of adding the milestone, if the milestone set requires a backport to release branch(es), apply the appropriate `backport-X.Y.x` label(s) *before* merge.\r\n", "pr_timeline": [], "issues": {}}
|
astropy/astropy
| 13508
|
https://github.com/astropy/astropy/pull/13508
|
astropy__astropy-13508
|
["13507"]
|
093e96735f6bc4dd87f154c54b6c42667489b602
|
diff --git a/astropy/time/core.py b/astropy/time/core.py
index 3716665aadde..50be00fcfb2a 100644
--- a/astropy/time/core.py
+++ b/astropy/time/core.py
@@ -1356,6 +1356,82 @@ def sort(self, axis=-1):
return self[self._advanced_index(self.argsort(axis), axis,
keepdims=True)]
+ def mean(self, axis=None, dtype=None, out=None, keepdims=False, *, where=True):
+ """Mean along a given axis.
+
+ This is similar to :meth:`~numpy.ndarray.mean`, but adapted to ensure
+ that the full precision given by the two doubles ``jd1`` and ``jd2`` is
+ used, and that corresponding attributes are copied.
+
+ Note that the ``out`` argument is present only for compatibility with
+ ``np.mean``; since `Time` instances are immutable, it is not possible
+ to have an actual ``out`` to store the result in.
+
+ Similarly, the ``dtype`` argument is also present for compatibility
+ only; it has no meaning for `Time`.
+
+ Parameters
+ ----------
+ axis : None or int or tuple of ints, optional
+ Axis or axes along which the means are computed. The default is to
+ compute the mean of the flattened array.
+ dtype : None
+ Only present for compatibility with :meth:`~numpy.ndarray.mean`,
+ must be `None`.
+ out : None
+ Only present for compatibility with :meth:`~numpy.ndarray.mean`,
+ must be `None`.
+ keepdims : bool, optional
+ If this is set to True, the axes which are reduced are left
+ in the result as dimensions with size one. With this option,
+ the result will broadcast correctly against the input array.
+ where : array_like of bool, optional
+ Elements to include in the mean. See `~numpy.ufunc.reduce` for
+ details.
+
+ Returns
+ -------
+ m : Time
+ A new Time instance containing the mean values
+ """
+ if dtype is not None:
+ raise ValueError('Cannot set ``dtype`` on `Time` instances')
+ if out is not None:
+ raise ValueError("Since `Time` instances are immutable, ``out`` "
+ "cannot be set to anything but ``None``.")
+
+ where = where & ~self.mask
+ where_broadcasted = np.broadcast_to(where, self.shape)
+
+ kwargs = dict(
+ axis=axis,
+ keepdims=keepdims,
+ where=where,
+ )
+
+ divisor = np.sum(where_broadcasted, axis=axis, keepdims=keepdims)
+ if np.any(divisor == 0):
+ raise ValueError(
+ 'Mean over zero elements is not supported as it would give an undefined time;'
+ 'see issue https://github.com/astropy/astropy/issues/6509'
+ )
+
+ jd1, jd2 = day_frac(
+ val1=np.sum(np.ma.getdata(self.jd1), **kwargs),
+ val2=np.sum(np.ma.getdata(self.jd2), **kwargs),
+ divisor=divisor,
+ )
+
+ result = type(self)(
+ val=jd1,
+ val2=jd2,
+ format='jd',
+ scale=self.scale,
+ copy=False,
+ )
+ result.format = self.format
+ return result
+
@property
def cache(self):
"""
@@ -2275,6 +2351,44 @@ def __add__(self, other):
def __radd__(self, other):
return self.__add__(other)
+ def mean(self, axis=None, dtype=None, out=None, keepdims=False, *, where=True):
+
+ scale = self.scale
+ if scale == 'utc':
+ self = self.tai
+ result = super().mean(axis=axis, dtype=dtype, out=out, keepdims=keepdims, where=where)
+ if scale == 'utc':
+ result = result.utc
+
+ result.out_subfmt = self.out_subfmt
+
+ location = self.location
+ if self.location is not None:
+ if self.location.shape:
+
+ if axis is None:
+ axis_normalized = tuple(range(self.ndim))
+ elif isinstance(axis, int):
+ axis_normalized = axis,
+ else:
+ axis_normalized = axis
+
+ sl = [slice(None)] * self.location.ndim
+ for a in axis_normalized:
+ sl[a] = slice(0, 1)
+
+ if np.any(self.location != self.location[tuple(sl)]):
+ raise ValueError("`location` must be constant over the reduction axes.")
+
+ if not keepdims:
+ for a in axis_normalized:
+ sl[a] = 0
+
+ location = self.location[tuple(sl)]
+
+ result.location = location
+ return result
+
def __array_function__(self, function, types, args, kwargs):
"""
Wrap numpy functions.
diff --git a/docs/changes/time/13508.feature.rst b/docs/changes/time/13508.feature.rst
new file mode 100644
index 000000000000..d6dffaa3b3db
--- /dev/null
+++ b/docs/changes/time/13508.feature.rst
@@ -0,0 +1,1 @@
+Added the ``astropy.time.Time.mean()`` method which also enables the ``numpy.mean()`` function to be used on instances of ``astropy.time.Time``.
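A brief usage sketch of the behaviour this patch introduces (it assumes an astropy build that includes the patch); the leap-second case mirrors ``test_mean_leap_second`` in the test diff below:

    import numpy as np
    from astropy.time import Time

    # Averaging across the 2012 leap second: UTC values are converted to TAI
    # internally, so the midpoint is computed in a uniform scale.
    t = Time(['2012-06-30 23:59:60.000', '2012-07-01 00:00:01.000'], scale='utc')
    print(t.mean().iso)    # 2012-07-01 00:00:00.000
    print(np.mean(t).iso)  # same result: np.mean dispatches to Time.mean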
|
diff --git a/astropy/table/tests/test_info.py b/astropy/table/tests/test_info.py
index 9a2977074bb9..6ea33917c75e 100644
--- a/astropy/table/tests/test_info.py
+++ b/astropy/table/tests/test_info.py
@@ -67,7 +67,7 @@ def test_table_info_stats(table_types):
a = np.array([1, 2, 1, 2], dtype='int32')
b = np.array([1, 2, 1, 2], dtype='float32')
c = np.array(['a', 'c', 'e', 'f'], dtype='|S1')
- d = time.Time([1, 2, 1, 2], format='mjd')
+ d = time.Time([1, 2, 1, 2], format='mjd', scale='tai')
t = table_types.Table([a, b, c, d], names=['a', 'b', 'c', 'd'])
# option = 'stats'
@@ -81,14 +81,14 @@ def test_table_info_stats(table_types):
' a 1.5 0.5 1 2',
' b 1.5 0.5 1 2',
' c -- -- -- --',
- ' d -- -- 1.0 2.0']
+ ' d 1.5 -- 1.0 2.0']
assert out.getvalue().splitlines() == exp
# option = ['attributes', 'stats']
tinfo = t.info(['attributes', 'stats'], out=None)
assert tinfo.colnames == ['name', 'dtype', 'shape', 'unit', 'format', 'description',
'class', 'mean', 'std', 'min', 'max', 'n_bad', 'length']
- assert np.all(tinfo['mean'] == ['1.5', '1.5', '--', '--'])
+ assert np.all(tinfo['mean'] == ['1.5', '1.5', '--', '1.5'])
assert np.all(tinfo['std'] == ['0.5', '0.5', '--', '--'])
assert np.all(tinfo['min'] == ['1', '1', '--', '1.0'])
assert np.all(tinfo['max'] == ['2', '2', '--', '2.0'])
@@ -101,7 +101,7 @@ def test_table_info_stats(table_types):
' a 1.5 0.5 1 2',
' b 1.5 0.5 1 2',
' c -- -- -- --',
- ' d -- -- 1.0 2.0']
+ ' d 1.5 -- 1.0 2.0']
assert out.getvalue().splitlines() == exp
# option = ['attributes', custom]
diff --git a/astropy/time/tests/test_delta.py b/astropy/time/tests/test_delta.py
index 1bc2915d694e..7ce39db9ce1c 100644
--- a/astropy/time/tests/test_delta.py
+++ b/astropy/time/tests/test_delta.py
@@ -209,6 +209,16 @@ def test_mul_div(self):
with pytest.raises(TypeError):
self.dt * object()
+ def test_mean(self):
+
+ def is_consistent(time_delta: TimeDelta):
+ mean_expected = (np.sum(time_delta.jd1) + np.sum(time_delta.jd2)) / time_delta.size
+ mean_test = time_delta.mean().jd1 + time_delta.mean().jd2
+ return mean_test == mean_expected
+
+ assert is_consistent(self.dt)
+ assert is_consistent(self.dt_array)
+
def test_keep_properties(self):
# closes #1924 (partially)
dt = TimeDelta(1000., format='sec')
diff --git a/astropy/time/tests/test_methods.py b/astropy/time/tests/test_methods.py
index 8fac951171fc..65bb05722884 100644
--- a/astropy/time/tests/test_methods.py
+++ b/astropy/time/tests/test_methods.py
@@ -7,7 +7,9 @@
import numpy as np
import pytest
+import astropy.units as u
from astropy.time import Time
+from astropy.time.utils import day_frac
from astropy.units.quantity_helper.function_helpers import ARRAY_FUNCTION_ENABLED
from astropy.utils import iers
@@ -479,9 +481,17 @@ def setup_class(cls):
'masked': mjd + frac_masked
}
+ cls.t2 = {
+ 'not_masked': Time(mjd + frac, format='mjd', scale='utc',
+ location=(np.arange(len(frac)), np.arange(len(frac)))),
+ 'masked': Time(mjd + frac_masked, format='mjd', scale='utc',
+ location=(np.arange(len(frac_masked)), np.arange(len(frac_masked)))),
+ }
+
def create_data(self, use_mask):
self.t0 = self.__class__.t0[use_mask]
self.t1 = self.__class__.t1[use_mask]
+ self.t2 = self.__class__.t2[use_mask]
self.jd = self.__class__.jd[use_mask]
@pytest.mark.parametrize('kw, func', itertools.product(kwargs, functions))
@@ -619,6 +629,92 @@ def test_sort(self, use_mask):
         assert np.all(self.t0.sort(-1)[:, :, 0] == self.t0.min(-1))
         assert np.all(self.t0.sort(-1)[:, :, -1] == self.t0.max(-1))
 
+    @pytest.mark.parametrize('axis', [None, 0, 1, 2, (0, 1)])
+    @pytest.mark.parametrize('where', [True, np.array([True, False, True, True, False])[..., np.newaxis]])
+    @pytest.mark.parametrize('keepdims', [False, True])
+    def test_mean(self, use_mask, axis, where, keepdims):
+        self.create_data(use_mask)
+
+        kwargs = dict(axis=axis, where=where, keepdims=keepdims)
+
+        def is_consistent(time):
+
+            where_expected = where & ~time.mask
+            where_expected = np.broadcast_to(where_expected, time.shape)
+
+            kw = kwargs.copy()
+            kw['where'] = where_expected
+
+            divisor = where_expected.sum(axis=axis, keepdims=keepdims)
+
+            if np.any(divisor == 0):
+                with pytest.raises(ValueError):
+                    time.mean(**kwargs)
+
+            else:
+                time_mean = time.mean(**kwargs)
+                time_expected = Time(
+                    *day_frac(
+                        val1=np.ma.getdata(time.tai.jd1).sum(**kw),
+                        val2=np.ma.getdata(time.tai.jd2).sum(**kw),
+                        divisor=divisor
+                    ),
+                    format='jd',
+                    scale='tai',
+                )
+                time_expected._set_scale(time.scale)
+                assert np.all(time_mean == time_expected)
+
+        is_consistent(self.t0)
+        is_consistent(self.t1)
+
+        axes_location_not_constant = [None, 2]
+        if axis in axes_location_not_constant:
+            with pytest.raises(ValueError):
+                self.t2.mean(**kwargs)
+        else:
+            is_consistent(self.t2)
+
+    def test_mean_precision(self, use_mask):
+
+        scale = 'tai'
+        epsilon = 1 * u.ns
+
+        t0 = Time('2021-07-27T00:00:00', scale=scale)
+        t1 = Time('2022-07-27T00:00:00', scale=scale)
+        t2 = Time('2023-07-27T00:00:00', scale=scale)
+
+        t = Time([t0, t2 + epsilon])
+
+        if use_mask == 'masked':
+            t[0] = np.ma.masked
+            assert t.mean() == (t2 + epsilon)
+
+        else:
+            assert t.mean() == (t1 + epsilon / 2)
+
+    def test_mean_dtype(self, use_mask):
+        self.create_data(use_mask)
+        with pytest.raises(ValueError):
+            self.t0.mean(dtype=int)
+
+    def test_mean_out(self, use_mask):
+        self.create_data(use_mask)
+        with pytest.raises(ValueError):
+            self.t0.mean(out=Time(np.zeros_like(self.t0.jd1), format='jd'))
+
+    def test_mean_leap_second(self, use_mask):
+        # Check that leap second is dealt with correctly: for UTC, across a leap
+        # second boundary, one cannot just average jd, but has to go through TAI.
+        if use_mask == 'not_masked':
+            t = Time(['2012-06-30 23:59:60.000', '2012-07-01 00:00:01.000'])
+            mean_expected = t[0] + (t[1] - t[0]) / 2
+            mean_expected_explicit = Time('2012-07-01 00:00:00')
+            mean_test = t.mean()
+            assert mean_expected == mean_expected_explicit
+            assert mean_expected == mean_test
+            assert mean_test != Time(*day_frac(t.jd1.sum(), t.jd2.sum(), divisor=2), format='jd')
+
 
 def test_regression():
     # For #5225, where a time with a single-element delta_ut1_utc could not
| 2022-07-27T18:03:06
|
{}
|
{"astropy/time/core.py": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\"\"\"\nThe astropy.time package provides functionality for manipulating times and\ndates. Specific emphasis is placed on supporting time scales (e.g. UTC, TAI,\nUT1) and time representations (e.g. JD, MJD, ISO 8601) that are used in\nastronomy.\n\"\"\"\n\nimport copy\nimport enum\nimport operator\nimport os\nimport threading\nfrom datetime import date, datetime, timedelta\nfrom time import strftime\nfrom warnings import warn\n\nimport erfa\nimport numpy as np\n\nfrom astropy import constants as const\nfrom astropy import units as u\nfrom astropy.extern import _strptime\nfrom astropy.units import UnitConversionError\nfrom astropy.utils import ShapedLikeNDArray\nfrom astropy.utils.data_info import MixinInfo, data_info_factory\nfrom astropy.utils.exceptions import AstropyDeprecationWarning, AstropyWarning\n\n# Import TimeFromEpoch to avoid breaking code that followed the old example of\n# making a custom timescale in the documentation.\nfrom .formats import TimeFromEpoch # noqa: F401\nfrom .formats import (\n TIME_DELTA_FORMATS, TIME_FORMATS, TimeAstropyTime, TimeDatetime, TimeJD, TimeUnique)\nfrom .time_helper.function_helpers import CUSTOM_FUNCTIONS, UNSUPPORTED_FUNCTIONS\nfrom .utils import day_frac\n\n__all__ = ['TimeBase', 'Time', 'TimeDelta', 'TimeInfo', 'TimeInfoBase', 'update_leap_seconds',\n 'TIME_SCALES', 'STANDARD_TIME_SCALES', 'TIME_DELTA_SCALES',\n 'ScaleValueError', 'OperandTypeError', 'TimeDeltaMissingUnitWarning']\n\n\nSTANDARD_TIME_SCALES = ('tai', 'tcb', 'tcg', 'tdb', 'tt', 'ut1', 'utc')\nLOCAL_SCALES = ('local',)\nTIME_TYPES = {scale: scales for scales in (STANDARD_TIME_SCALES, LOCAL_SCALES) for scale in scales}\nTIME_SCALES = STANDARD_TIME_SCALES + LOCAL_SCALES\nMULTI_HOPS = {('tai', 'tcb'): ('tt', 'tdb'),\n ('tai', 'tcg'): ('tt',),\n ('tai', 'ut1'): ('utc',),\n ('tai', 'tdb'): ('tt',),\n ('tcb', 'tcg'): ('tdb', 'tt'),\n ('tcb', 'tt'): ('tdb',),\n ('tcb', 'ut1'): ('tdb', 'tt', 'tai', 'utc'),\n ('tcb', 'utc'): ('tdb', 'tt', 'tai'),\n ('tcg', 'tdb'): ('tt',),\n ('tcg', 'ut1'): ('tt', 'tai', 'utc'),\n ('tcg', 'utc'): ('tt', 'tai'),\n ('tdb', 'ut1'): ('tt', 'tai', 'utc'),\n ('tdb', 'utc'): ('tt', 'tai'),\n ('tt', 'ut1'): ('tai', 'utc'),\n ('tt', 'utc'): ('tai',),\n }\nGEOCENTRIC_SCALES = ('tai', 'tt', 'tcg')\nBARYCENTRIC_SCALES = ('tcb', 'tdb')\nROTATIONAL_SCALES = ('ut1',)\nTIME_DELTA_TYPES = {scale: scales\n for scales in (GEOCENTRIC_SCALES, BARYCENTRIC_SCALES,\n ROTATIONAL_SCALES, LOCAL_SCALES) for scale in scales}\nTIME_DELTA_SCALES = GEOCENTRIC_SCALES + BARYCENTRIC_SCALES + ROTATIONAL_SCALES + LOCAL_SCALES\n# For time scale changes, we need L_G and L_B, which are stored in erfam.h as\n# /* L_G = 1 - d(TT)/d(TCG) */\n# define ERFA_ELG (6.969290134e-10)\n# /* L_B = 1 - d(TDB)/d(TCB), and TDB (s) at TAI 1977/1/1.0 */\n# define ERFA_ELB (1.550519768e-8)\n# These are exposed in erfa as erfa.ELG and erfa.ELB.\n# Implied: d(TT)/d(TCG) = 1-L_G\n# and d(TCG)/d(TT) = 1/(1-L_G) = 1 + (1-(1-L_G))/(1-L_G) = 1 + L_G/(1-L_G)\n# scale offsets as second = first + first * scale_offset[(first,second)]\nSCALE_OFFSETS = {('tt', 'tai'): None,\n ('tai', 'tt'): None,\n ('tcg', 'tt'): -erfa.ELG,\n ('tt', 'tcg'): erfa.ELG / (1. - erfa.ELG),\n ('tcg', 'tai'): -erfa.ELG,\n ('tai', 'tcg'): erfa.ELG / (1. - erfa.ELG),\n ('tcb', 'tdb'): -erfa.ELB,\n ('tdb', 'tcb'): erfa.ELB / (1. 
- erfa.ELB)}\n\n# triple-level dictionary, yay!\nSIDEREAL_TIME_MODELS = {\n 'mean': {\n 'IAU2006': {'function': erfa.gmst06, 'scales': ('ut1', 'tt')},\n 'IAU2000': {'function': erfa.gmst00, 'scales': ('ut1', 'tt')},\n 'IAU1982': {'function': erfa.gmst82, 'scales': ('ut1',), 'include_tio': False}\n },\n 'apparent': {\n 'IAU2006A': {'function': erfa.gst06a, 'scales': ('ut1', 'tt')},\n 'IAU2000A': {'function': erfa.gst00a, 'scales': ('ut1', 'tt')},\n 'IAU2000B': {'function': erfa.gst00b, 'scales': ('ut1',)},\n 'IAU1994': {'function': erfa.gst94, 'scales': ('ut1',), 'include_tio': False}\n }}\n\n\nclass _LeapSecondsCheck(enum.Enum):\n NOT_STARTED = 0 # No thread has reached the check\n RUNNING = 1 # A thread is running update_leap_seconds (_LEAP_SECONDS_LOCK is held)\n DONE = 2 # update_leap_seconds has completed\n\n\n_LEAP_SECONDS_CHECK = _LeapSecondsCheck.NOT_STARTED\n_LEAP_SECONDS_LOCK = threading.RLock()\n\n\nclass TimeInfoBase(MixinInfo):\n \"\"\"\n Container for meta information like name, description, format. This is\n required when the object is used as a mixin column within a table, but can\n be used as a general way to store meta information.\n\n This base class is common between TimeInfo and TimeDeltaInfo.\n \"\"\"\n attr_names = MixinInfo.attr_names | {'serialize_method'}\n _supports_indexing = True\n\n # The usual tuple of attributes needed for serialization is replaced\n # by a property, since Time can be serialized different ways.\n _represent_as_dict_extra_attrs = ('format', 'scale', 'precision',\n 'in_subfmt', 'out_subfmt', 'location',\n '_delta_ut1_utc', '_delta_tdb_tt')\n\n # When serializing, write out the `value` attribute using the column name.\n _represent_as_dict_primary_data = 'value'\n\n mask_val = np.ma.masked\n\n @property\n def _represent_as_dict_attrs(self):\n method = self.serialize_method[self._serialize_context]\n\n if method == 'formatted_value':\n out = ('value',)\n elif method == 'jd1_jd2':\n out = ('jd1', 'jd2')\n else:\n raise ValueError(\"serialize method must be 'formatted_value' or 'jd1_jd2'\")\n\n return out + self._represent_as_dict_extra_attrs\n\n def __init__(self, bound=False):\n super().__init__(bound)\n\n # If bound to a data object instance then create the dict of attributes\n # which stores the info attribute values.\n if bound:\n # Specify how to serialize this object depending on context.\n # If ``True`` for a context, then use formatted ``value`` attribute\n # (e.g. the ISO time string). 
If ``False`` then use float jd1 and jd2.\n self.serialize_method = {'fits': 'jd1_jd2',\n 'ecsv': 'formatted_value',\n 'hdf5': 'jd1_jd2',\n 'yaml': 'jd1_jd2',\n 'parquet': 'jd1_jd2',\n None: 'jd1_jd2'}\n\n def get_sortable_arrays(self):\n \"\"\"\n Return a list of arrays which can be lexically sorted to represent\n the order of the parent column.\n\n Returns\n -------\n arrays : list of ndarray\n \"\"\"\n parent = self._parent\n jd_approx = parent.jd\n jd_remainder = (parent - parent.__class__(jd_approx, format='jd')).jd\n return [jd_approx, jd_remainder]\n\n @property\n def unit(self):\n return None\n\n info_summary_stats = staticmethod(\n data_info_factory(names=MixinInfo._stats,\n funcs=[getattr(np, stat) for stat in MixinInfo._stats]))\n # When Time has mean, std, min, max methods:\n # funcs = [lambda x: getattr(x, stat)() for stat_name in MixinInfo._stats])\n\n def _construct_from_dict(self, map):\n if 'jd1' in map and 'jd2' in map:\n # Initialize as JD but revert to desired format and out_subfmt (if needed)\n format = map.pop('format')\n out_subfmt = map.pop('out_subfmt', None)\n map['format'] = 'jd'\n map['val'] = map.pop('jd1')\n map['val2'] = map.pop('jd2')\n out = self._parent_cls(**map)\n out.format = format\n if out_subfmt is not None:\n out.out_subfmt = out_subfmt\n\n else:\n map['val'] = map.pop('value')\n out = self._parent_cls(**map)\n\n return out\n\n def new_like(self, cols, length, metadata_conflicts='warn', name=None):\n \"\"\"\n Return a new Time instance which is consistent with the input Time objects\n ``cols`` and has ``length`` rows.\n\n This is intended for creating an empty Time instance whose elements can\n be set in-place for table operations like join or vstack. It checks\n that the input locations and attributes are consistent. This is used\n when a Time object is used as a mixin column in an astropy Table.\n\n Parameters\n ----------\n cols : list\n List of input columns (Time objects)\n length : int\n Length of the output column object\n metadata_conflicts : str ('warn'|'error'|'silent')\n How to handle metadata conflicts\n name : str\n Output column name\n\n Returns\n -------\n col : Time (or subclass)\n Empty instance of this class consistent with ``cols``\n\n \"\"\"\n # Get merged info attributes like shape, dtype, format, description, etc.\n attrs = self.merge_cols_attributes(cols, metadata_conflicts, name,\n ('meta', 'description'))\n attrs.pop('dtype') # Not relevant for Time\n col0 = cols[0]\n\n # Check that location is consistent for all Time objects\n for col in cols[1:]:\n # This is the method used by __setitem__ to ensure that the right side\n # has a consistent location (and coerce data if necessary, but that does\n # not happen in this case since `col` is already a Time object). 
If this\n # passes then any subsequent table operations via setitem will work.\n try:\n col0._make_value_equivalent(slice(None), col)\n except ValueError:\n raise ValueError('input columns have inconsistent locations')\n\n # Make a new Time object with the desired shape and attributes\n shape = (length,) + attrs.pop('shape')\n jd2000 = 2451544.5 # Arbitrary JD value J2000.0 that will work with ERFA\n jd1 = np.full(shape, jd2000, dtype='f8')\n jd2 = np.zeros(shape, dtype='f8')\n tm_attrs = {attr: getattr(col0, attr)\n for attr in ('scale', 'location', 'precision')}\n out = self._parent_cls(jd1, jd2, format='jd', **tm_attrs)\n out.format = col0.format\n out.out_subfmt = col0.out_subfmt\n out.in_subfmt = col0.in_subfmt\n\n # Set remaining info attributes\n for attr, value in attrs.items():\n setattr(out.info, attr, value)\n\n return out\n\n\nclass TimeInfo(TimeInfoBase):\n \"\"\"\n Container for meta information like name, description, format. This is\n required when the object is used as a mixin column within a table, but can\n be used as a general way to store meta information.\n \"\"\"\n def _represent_as_dict(self, attrs=None):\n \"\"\"Get the values for the parent ``attrs`` and return as a dict.\n\n By default, uses '_represent_as_dict_attrs'.\n \"\"\"\n map = super()._represent_as_dict(attrs=attrs)\n\n # TODO: refactor these special cases into the TimeFormat classes?\n\n # The datetime64 format requires special handling for ECSV (see #12840).\n # The `value` has numpy dtype datetime64 but this is not an allowed\n # datatype for ECSV. Instead convert to a string representation.\n if (self._serialize_context == 'ecsv'\n and map['format'] == 'datetime64'\n and 'value' in map):\n map['value'] = map['value'].astype('U')\n\n # The datetime format is serialized as ISO with no loss of precision.\n if map['format'] == 'datetime' and 'value' in map:\n map['value'] = np.vectorize(lambda x: x.isoformat())(map['value'])\n\n return map\n\n def _construct_from_dict(self, map):\n # See comment above. May need to convert string back to datetime64.\n # Note that _serialize_context is not set here so we just look for the\n # string value directly.\n if (map['format'] == 'datetime64'\n and 'value' in map\n and map['value'].dtype.kind == 'U'):\n map['value'] = map['value'].astype('datetime64')\n\n # Convert back to datetime objects for datetime format.\n if map['format'] == 'datetime' and 'value' in map:\n from datetime import datetime\n map['value'] = np.vectorize(datetime.fromisoformat)(map['value'])\n\n delta_ut1_utc = map.pop('_delta_ut1_utc', None)\n delta_tdb_tt = map.pop('_delta_tdb_tt', None)\n\n out = super()._construct_from_dict(map)\n\n if delta_ut1_utc is not None:\n out._delta_ut1_utc = delta_ut1_utc\n if delta_tdb_tt is not None:\n out._delta_tdb_tt = delta_tdb_tt\n\n return out\n\n\nclass TimeDeltaInfo(TimeInfoBase):\n \"\"\"\n Container for meta information like name, description, format. This is\n required when the object is used as a mixin column within a table, but can\n be used as a general way to store meta information.\n \"\"\"\n _represent_as_dict_extra_attrs = ('format', 'scale')\n\n def new_like(self, cols, length, metadata_conflicts='warn', name=None):\n \"\"\"\n Return a new TimeDelta instance which is consistent with the input Time objects\n ``cols`` and has ``length`` rows.\n\n This is intended for creating an empty Time instance whose elements can\n be set in-place for table operations like join or vstack. It checks\n that the input locations and attributes are consistent. 
This is used\n when a Time object is used as a mixin column in an astropy Table.\n\n Parameters\n ----------\n cols : list\n List of input columns (Time objects)\n length : int\n Length of the output column object\n metadata_conflicts : str ('warn'|'error'|'silent')\n How to handle metadata conflicts\n name : str\n Output column name\n\n Returns\n -------\n col : Time (or subclass)\n Empty instance of this class consistent with ``cols``\n\n \"\"\"\n # Get merged info attributes like shape, dtype, format, description, etc.\n attrs = self.merge_cols_attributes(cols, metadata_conflicts, name,\n ('meta', 'description'))\n attrs.pop('dtype') # Not relevant for Time\n col0 = cols[0]\n\n # Make a new Time object with the desired shape and attributes\n shape = (length,) + attrs.pop('shape')\n jd1 = np.zeros(shape, dtype='f8')\n jd2 = np.zeros(shape, dtype='f8')\n out = self._parent_cls(jd1, jd2, format='jd', scale=col0.scale)\n out.format = col0.format\n\n # Set remaining info attributes\n for attr, value in attrs.items():\n setattr(out.info, attr, value)\n\n return out\n\n\nclass TimeBase(ShapedLikeNDArray):\n \"\"\"Base time class from which Time and TimeDelta inherit.\"\"\"\n\n # Make sure that reverse arithmetic (e.g., TimeDelta.__rmul__)\n # gets called over the __mul__ of Numpy arrays.\n __array_priority__ = 20000\n\n # Declare that Time can be used as a Table column by defining the\n # attribute where column attributes will be stored.\n _astropy_column_attrs = None\n\n def __getnewargs__(self):\n return (self._time,)\n\n def _init_from_vals(self, val, val2, format, scale, copy,\n precision=None, in_subfmt=None, out_subfmt=None):\n \"\"\"\n Set the internal _format, scale, and _time attrs from user\n inputs. This handles coercion into the correct shapes and\n some basic input validation.\n \"\"\"\n if precision is None:\n precision = 3\n if in_subfmt is None:\n in_subfmt = '*'\n if out_subfmt is None:\n out_subfmt = '*'\n\n # Coerce val into an array\n val = _make_array(val, copy)\n\n # If val2 is not None, ensure consistency\n if val2 is not None:\n val2 = _make_array(val2, copy)\n try:\n np.broadcast(val, val2)\n except ValueError:\n raise ValueError('Input val and val2 have inconsistent shape; '\n 'they cannot be broadcast together.')\n\n if scale is not None:\n if not (isinstance(scale, str)\n and scale.lower() in self.SCALES):\n raise ScaleValueError(\"Scale {!r} is not in the allowed scales \"\n \"{}\".format(scale,\n sorted(self.SCALES)))\n\n # If either of the input val, val2 are masked arrays then\n # find the masked elements and fill them.\n mask, val, val2 = _check_for_masked_and_fill(val, val2)\n\n # Parse / convert input values into internal jd1, jd2 based on format\n self._time = self._get_time_fmt(val, val2, format, scale,\n precision, in_subfmt, out_subfmt)\n self._format = self._time.name\n\n # Hack from #9969 to allow passing the location value that has been\n # collected by the TimeAstropyTime format class up to the Time level.\n # TODO: find a nicer way.\n if hasattr(self._time, '_location'):\n self.location = self._time._location\n del self._time._location\n\n # If any inputs were masked then masked jd2 accordingly. 
From above\n # routine ``mask`` must be either Python bool False or an bool ndarray\n # with shape broadcastable to jd2.\n if mask is not False:\n mask = np.broadcast_to(mask, self._time.jd2.shape)\n self._time.jd1[mask] = 2451544.5 # Set to JD for 2000-01-01\n self._time.jd2[mask] = np.nan\n\n def _get_time_fmt(self, val, val2, format, scale,\n precision, in_subfmt, out_subfmt):\n \"\"\"\n Given the supplied val, val2, format and scale try to instantiate\n the corresponding TimeFormat class to convert the input values into\n the internal jd1 and jd2.\n\n If format is `None` and the input is a string-type or object array then\n guess available formats and stop when one matches.\n \"\"\"\n\n if (format is None\n and (val.dtype.kind in ('S', 'U', 'O', 'M') or val.dtype.names)):\n # Input is a string, object, datetime, or a table-like ndarray\n # (structured array, recarray). These input types can be\n # uniquely identified by the format classes.\n formats = [(name, cls) for name, cls in self.FORMATS.items()\n if issubclass(cls, TimeUnique)]\n\n # AstropyTime is a pseudo-format that isn't in the TIME_FORMATS registry,\n # but try to guess it at the end.\n formats.append(('astropy_time', TimeAstropyTime))\n\n elif not (isinstance(format, str)\n and format.lower() in self.FORMATS):\n if format is None:\n raise ValueError(\"No time format was given, and the input is \"\n \"not unique\")\n else:\n raise ValueError(\"Format {!r} is not one of the allowed \"\n \"formats {}\".format(format,\n sorted(self.FORMATS)))\n else:\n formats = [(format, self.FORMATS[format])]\n\n assert formats\n problems = {}\n for name, cls in formats:\n try:\n return cls(val, val2, scale, precision, in_subfmt, out_subfmt)\n except UnitConversionError:\n raise\n except (ValueError, TypeError) as err:\n # If ``format`` specified then there is only one possibility, so raise\n # immediately and include the upstream exception message to make it\n # easier for user to see what is wrong.\n if len(formats) == 1:\n raise ValueError(\n f'Input values did not match the format class {format}:'\n + os.linesep\n + f'{err.__class__.__name__}: {err}'\n ) from err\n else:\n problems[name] = err\n else:\n raise ValueError(f'Input values did not match any of the formats '\n f'where the format keyword is optional: '\n f'{problems}') from problems[formats[0][0]]\n\n @property\n def writeable(self):\n return self._time.jd1.flags.writeable & self._time.jd2.flags.writeable\n\n @writeable.setter\n def writeable(self, value):\n self._time.jd1.flags.writeable = value\n self._time.jd2.flags.writeable = value\n\n @property\n def format(self):\n \"\"\"\n Get or set time format.\n\n The format defines the way times are represented when accessed via the\n ``.value`` attribute. By default it is the same as the format used for\n initializing the `Time` instance, but it can be set to any other value\n that could be used for initialization. These can be listed with::\n\n >>> list(Time.FORMATS)\n ['jd', 'mjd', 'decimalyear', 'unix', 'unix_tai', 'cxcsec', 'gps', 'plot_date',\n 'stardate', 'datetime', 'ymdhms', 'iso', 'isot', 'yday', 'datetime64',\n 'fits', 'byear', 'jyear', 'byear_str', 'jyear_str']\n \"\"\"\n return self._format\n\n @format.setter\n def format(self, format):\n \"\"\"Set time format\"\"\"\n if format not in self.FORMATS:\n raise ValueError(f'format must be one of {list(self.FORMATS)}')\n format_cls = self.FORMATS[format]\n\n # Get the new TimeFormat object to contain time in new format. 
Possibly\n # coerce in/out_subfmt to '*' (default) if existing subfmt values are\n # not valid in the new format.\n self._time = format_cls(\n self._time.jd1, self._time.jd2,\n self._time._scale, self.precision,\n in_subfmt=format_cls._get_allowed_subfmt(self.in_subfmt),\n out_subfmt=format_cls._get_allowed_subfmt(self.out_subfmt),\n from_jd=True)\n\n self._format = format\n\n def __repr__(self):\n return (\"<{} object: scale='{}' format='{}' value={}>\"\n .format(self.__class__.__name__, self.scale, self.format,\n getattr(self, self.format)))\n\n def __str__(self):\n return str(getattr(self, self.format))\n\n def __hash__(self):\n\n try:\n loc = getattr(self, 'location', None)\n if loc is not None:\n loc = loc.x.to_value(u.m), loc.y.to_value(u.m), loc.z.to_value(u.m)\n\n return hash((self.jd1, self.jd2, self.scale, loc))\n\n except TypeError:\n if self.ndim != 0:\n reason = '(must be scalar)'\n elif self.masked:\n reason = '(value is masked)'\n else:\n raise\n\n raise TypeError(f\"unhashable type: '{self.__class__.__name__}' {reason}\")\n\n @property\n def scale(self):\n \"\"\"Time scale\"\"\"\n return self._time.scale\n\n def _set_scale(self, scale):\n \"\"\"\n This is the key routine that actually does time scale conversions.\n This is not public and not connected to the read-only scale property.\n \"\"\"\n\n if scale == self.scale:\n return\n if scale not in self.SCALES:\n raise ValueError(\"Scale {!r} is not in the allowed scales {}\"\n .format(scale, sorted(self.SCALES)))\n\n if scale == 'utc' or self.scale == 'utc':\n # If doing a transform involving UTC then check that the leap\n # seconds table is up to date.\n _check_leapsec()\n\n # Determine the chain of scale transformations to get from the current\n # scale to the new scale. MULTI_HOPS contains a dict of all\n # transformations (xforms) that require intermediate xforms.\n # The MULTI_HOPS dict is keyed by (sys1, sys2) in alphabetical order.\n xform = (self.scale, scale)\n xform_sort = tuple(sorted(xform))\n multi = MULTI_HOPS.get(xform_sort, ())\n xforms = xform_sort[:1] + multi + xform_sort[-1:]\n # If we made the reverse xform then reverse it now.\n if xform_sort != xform:\n xforms = tuple(reversed(xforms))\n\n # Transform the jd1,2 pairs through the chain of scale xforms.\n jd1, jd2 = self._time.jd1, self._time.jd2_filled\n for sys1, sys2 in zip(xforms[:-1], xforms[1:]):\n # Some xforms require an additional delta_ argument that is\n # provided through Time methods. These values may be supplied by\n # the user or computed based on available approximations. 
The\n # get_delta_ methods are available for only one combination of\n # sys1, sys2 though the property applies for both xform directions.\n args = [jd1, jd2]\n for sys12 in ((sys1, sys2), (sys2, sys1)):\n dt_method = '_get_delta_{}_{}'.format(*sys12)\n try:\n get_dt = getattr(self, dt_method)\n except AttributeError:\n pass\n else:\n args.append(get_dt(jd1, jd2))\n break\n\n conv_func = getattr(erfa, sys1 + sys2)\n jd1, jd2 = conv_func(*args)\n\n jd1, jd2 = day_frac(jd1, jd2)\n if self.masked:\n jd2[self.mask] = np.nan\n\n self._time = self.FORMATS[self.format](jd1, jd2, scale, self.precision,\n self.in_subfmt, self.out_subfmt,\n from_jd=True)\n\n @property\n def precision(self):\n \"\"\"\n Decimal precision when outputting seconds as floating point (int\n value between 0 and 9 inclusive).\n \"\"\"\n return self._time.precision\n\n @precision.setter\n def precision(self, val):\n del self.cache\n self._time.precision = val\n\n @property\n def in_subfmt(self):\n \"\"\"\n Unix wildcard pattern to select subformats for parsing string input\n times.\n \"\"\"\n return self._time.in_subfmt\n\n @in_subfmt.setter\n def in_subfmt(self, val):\n self._time.in_subfmt = val\n del self.cache\n\n @property\n def out_subfmt(self):\n \"\"\"\n Unix wildcard pattern to select subformats for outputting times.\n \"\"\"\n return self._time.out_subfmt\n\n @out_subfmt.setter\n def out_subfmt(self, val):\n # Setting the out_subfmt property here does validation of ``val``\n self._time.out_subfmt = val\n del self.cache\n\n @property\n def shape(self):\n \"\"\"The shape of the time instances.\n\n Like `~numpy.ndarray.shape`, can be set to a new shape by assigning a\n tuple. Note that if different instances share some but not all\n underlying data, setting the shape of one instance can make the other\n instance unusable. Hence, it is strongly recommended to get new,\n reshaped instances with the ``reshape`` method.\n\n Raises\n ------\n ValueError\n If the new shape has the wrong total number of elements.\n AttributeError\n If the shape of the ``jd1``, ``jd2``, ``location``,\n ``delta_ut1_utc``, or ``delta_tdb_tt`` attributes cannot be changed\n without the arrays being copied. For these cases, use the\n `Time.reshape` method (which copies any arrays that cannot be\n reshaped in-place).\n \"\"\"\n return self._time.jd1.shape\n\n @shape.setter\n def shape(self, shape):\n del self.cache\n\n # We have to keep track of arrays that were already reshaped,\n # since we may have to return those to their original shape if a later\n # shape-setting fails.\n reshaped = []\n oldshape = self.shape\n\n # In-place reshape of data/attributes. 
Need to access _time.jd1/2 not\n # self.jd1/2 because the latter are not guaranteed to be the actual\n # data, and in fact should not be directly changeable from the public\n # API.\n for obj, attr in ((self._time, 'jd1'),\n (self._time, 'jd2'),\n (self, '_delta_ut1_utc'),\n (self, '_delta_tdb_tt'),\n (self, 'location')):\n val = getattr(obj, attr, None)\n if val is not None and val.size > 1:\n try:\n val.shape = shape\n except Exception:\n for val2 in reshaped:\n val2.shape = oldshape\n raise\n else:\n reshaped.append(val)\n\n def _shaped_like_input(self, value):\n if self._time.jd1.shape:\n if isinstance(value, np.ndarray):\n return value\n else:\n raise TypeError(\n f\"JD is an array ({self._time.jd1!r}) but value \"\n f\"is not ({value!r})\")\n else:\n # zero-dimensional array, is it safe to unbox?\n if (isinstance(value, np.ndarray)\n and not value.shape\n and not np.ma.is_masked(value)):\n if value.dtype.kind == 'M':\n # existing test doesn't want datetime64 converted\n return value[()]\n elif value.dtype.fields:\n # Unpack but keep field names; .item() doesn't\n # Still don't get python types in the fields\n return value[()]\n else:\n return value.item()\n else:\n return value\n\n @property\n def jd1(self):\n \"\"\"\n First of the two doubles that internally store time value(s) in JD.\n \"\"\"\n jd1 = self._time.mask_if_needed(self._time.jd1)\n return self._shaped_like_input(jd1)\n\n @property\n def jd2(self):\n \"\"\"\n Second of the two doubles that internally store time value(s) in JD.\n \"\"\"\n jd2 = self._time.mask_if_needed(self._time.jd2)\n return self._shaped_like_input(jd2)\n\n def to_value(self, format, subfmt='*'):\n \"\"\"Get time values expressed in specified output format.\n\n This method allows representing the ``Time`` object in the desired\n output ``format`` and optional sub-format ``subfmt``. Available\n built-in formats include ``jd``, ``mjd``, ``iso``, and so forth. Each\n format can have its own sub-formats\n\n For built-in numerical formats like ``jd`` or ``unix``, ``subfmt`` can\n be one of 'float', 'long', 'decimal', 'str', or 'bytes'. Here, 'long'\n uses ``numpy.longdouble`` for somewhat enhanced precision (with\n the enhancement depending on platform), and 'decimal'\n :class:`decimal.Decimal` for full precision. For 'str' and 'bytes', the\n number of digits is also chosen such that time values are represented\n accurately.\n\n For built-in date-like string formats, one of 'date_hms', 'date_hm', or\n 'date' (or 'longdate_hms', etc., for 5-digit years in\n `~astropy.time.TimeFITS`). For sub-formats including seconds, the\n number of digits used for the fractional seconds is as set by\n `~astropy.time.Time.precision`.\n\n Parameters\n ----------\n format : str\n The format in which one wants the time values. Default: the current\n format.\n subfmt : str or None, optional\n Value or wildcard pattern to select the sub-format in which the\n values should be given. 
The default of '*' picks the first\n available for a given format, i.e., 'float' or 'date_hms'.\n If `None`, use the instance's ``out_subfmt``.\n\n \"\"\"\n # TODO: add a precision argument (but ensure it is keyword argument\n # only, to make life easier for TimeDelta.to_value()).\n if format not in self.FORMATS:\n raise ValueError(f'format must be one of {list(self.FORMATS)}')\n\n cache = self.cache['format']\n # Try to keep cache behaviour like it was in astropy < 4.0.\n key = format if subfmt is None else (format, subfmt)\n if key not in cache:\n if format == self.format:\n tm = self\n else:\n tm = self.replicate(format=format)\n\n # Some TimeFormat subclasses may not be able to handle being passes\n # on a out_subfmt. This includes some core classes like\n # TimeBesselianEpochString that do not have any allowed subfmts. But\n # those do deal with `self.out_subfmt` internally, so if subfmt is\n # the same, we do not pass it on.\n kwargs = {}\n if subfmt is not None and subfmt != tm.out_subfmt:\n kwargs['out_subfmt'] = subfmt\n try:\n value = tm._time.to_value(parent=tm, **kwargs)\n except TypeError as exc:\n # Try validating subfmt, e.g. for formats like 'jyear_str' that\n # do not implement out_subfmt in to_value() (because there are\n # no allowed subformats). If subfmt is not valid this gives the\n # same exception as would have occurred if the call to\n # `to_value()` had succeeded.\n tm._time._select_subfmts(subfmt)\n\n # Subfmt was valid, so fall back to the original exception to see\n # if it was lack of support for out_subfmt as a call arg.\n if \"unexpected keyword argument 'out_subfmt'\" in str(exc):\n raise ValueError(\n f\"to_value() method for format {format!r} does not \"\n f\"support passing a 'subfmt' argument\") from None\n else:\n # Some unforeseen exception so raise.\n raise\n\n value = tm._shaped_like_input(value)\n cache[key] = value\n return cache[key]\n\n @property\n def value(self):\n \"\"\"Time value(s) in current format\"\"\"\n return self.to_value(self.format, None)\n\n @property\n def masked(self):\n return self._time.masked\n\n @property\n def mask(self):\n return self._time.mask\n\n def insert(self, obj, values, axis=0):\n \"\"\"\n Insert values before the given indices in the column and return\n a new `~astropy.time.Time` or `~astropy.time.TimeDelta` object.\n\n The values to be inserted must conform to the rules for in-place setting\n of ``Time`` objects (see ``Get and set values`` in the ``Time``\n documentation).\n\n The API signature matches the ``np.insert`` API, but is more limited.\n The specification of insert index ``obj`` must be a single integer,\n and the ``axis`` must be ``0`` for simple row insertion before the\n index.\n\n Parameters\n ----------\n obj : int\n Integer index before which ``values`` is inserted.\n values : array-like\n Value(s) to insert. If the type of ``values`` is different\n from that of quantity, ``values`` is converted to the matching type.\n axis : int, optional\n Axis along which to insert ``values``. 
Default is 0, which is the\n only allowed value and will insert a row.\n\n Returns\n -------\n out : `~astropy.time.Time` subclass\n New time object with inserted value(s)\n\n \"\"\"\n # Validate inputs: obj arg is integer, axis=0, self is not a scalar, and\n # input index is in bounds.\n try:\n idx0 = operator.index(obj)\n except TypeError:\n raise TypeError('obj arg must be an integer')\n\n if axis != 0:\n raise ValueError('axis must be 0')\n\n if not self.shape:\n raise TypeError('cannot insert into scalar {} object'\n .format(self.__class__.__name__))\n\n if abs(idx0) > len(self):\n raise IndexError('index {} is out of bounds for axis 0 with size {}'\n .format(idx0, len(self)))\n\n # Turn negative index into positive\n if idx0 < 0:\n idx0 = len(self) + idx0\n\n # For non-Time object, use numpy to help figure out the length. (Note annoying\n # case of a string input that has a length which is not the length we want).\n if not isinstance(values, self.__class__):\n values = np.asarray(values)\n n_values = len(values) if values.shape else 1\n\n # Finally make the new object with the correct length and set values for the\n # three sections, before insert, the insert, and after the insert.\n out = self.__class__.info.new_like([self], len(self) + n_values, name=self.info.name)\n\n out._time.jd1[:idx0] = self._time.jd1[:idx0]\n out._time.jd2[:idx0] = self._time.jd2[:idx0]\n\n # This uses the Time setting machinery to coerce and validate as necessary.\n out[idx0:idx0 + n_values] = values\n\n out._time.jd1[idx0 + n_values:] = self._time.jd1[idx0:]\n out._time.jd2[idx0 + n_values:] = self._time.jd2[idx0:]\n\n return out\n\n def __setitem__(self, item, value):\n if not self.writeable:\n if self.shape:\n raise ValueError('{} object is read-only. Make a '\n 'copy() or set \"writeable\" attribute to True.'\n .format(self.__class__.__name__))\n else:\n raise ValueError('scalar {} object is read-only.'\n .format(self.__class__.__name__))\n\n # Any use of setitem results in immediate cache invalidation\n del self.cache\n\n # Setting invalidates transform deltas\n for attr in ('_delta_tdb_tt', '_delta_ut1_utc'):\n if hasattr(self, attr):\n delattr(self, attr)\n\n if value is np.ma.masked or value is np.nan:\n self._time.jd2[item] = np.nan\n return\n\n value = self._make_value_equivalent(item, value)\n\n # Finally directly set the jd1/2 values. Locations are known to match.\n if self.scale is not None:\n value = getattr(value, self.scale)\n self._time.jd1[item] = value._time.jd1\n self._time.jd2[item] = value._time.jd2\n\n def isclose(self, other, atol=None):\n \"\"\"Returns a boolean or boolean array where two Time objects are\n element-wise equal within a time tolerance.\n\n This evaluates the expression below::\n\n abs(self - other) <= atol\n\n Parameters\n ----------\n other : `~astropy.time.Time`\n Time object for comparison.\n atol : `~astropy.units.Quantity` or `~astropy.time.TimeDelta`\n Absolute tolerance for equality with units of time (e.g. ``u.s`` or\n ``u.day``). 
Default is two bits in the 128-bit JD time representation,\n equivalent to about 40 picosecs.\n \"\"\"\n if atol is None:\n # Note: use 2 bits instead of 1 bit based on experience in precision\n # tests, since taking the difference with a UTC time means one has\n # to do a scale change.\n atol = 2 * np.finfo(float).eps * u.day\n\n if not isinstance(atol, (u.Quantity, TimeDelta)):\n raise TypeError(\"'atol' argument must be a Quantity or TimeDelta instance, got \"\n f'{atol.__class__.__name__} instead')\n\n try:\n # Separate these out so user sees where the problem is\n dt = self - other\n dt = abs(dt)\n out = dt <= atol\n except Exception as err:\n raise TypeError(\"'other' argument must support subtraction with Time \"\n f\"and return a value that supports comparison with \"\n f\"{atol.__class__.__name__}: {err}\")\n\n return out\n\n def copy(self, format=None):\n \"\"\"\n Return a fully independent copy the Time object, optionally changing\n the format.\n\n If ``format`` is supplied then the time format of the returned Time\n object will be set accordingly, otherwise it will be unchanged from the\n original.\n\n In this method a full copy of the internal time arrays will be made.\n The internal time arrays are normally not changeable by the user so in\n most cases the ``replicate()`` method should be used.\n\n Parameters\n ----------\n format : str, optional\n Time format of the copy.\n\n Returns\n -------\n tm : Time object\n Copy of this object\n \"\"\"\n return self._apply('copy', format=format)\n\n def replicate(self, format=None, copy=False, cls=None):\n \"\"\"\n Return a replica of the Time object, optionally changing the format.\n\n If ``format`` is supplied then the time format of the returned Time\n object will be set accordingly, otherwise it will be unchanged from the\n original.\n\n If ``copy`` is set to `True` then a full copy of the internal time arrays\n will be made. By default the replica will use a reference to the\n original arrays when possible to save memory. The internal time arrays\n are normally not changeable by the user so in most cases it should not\n be necessary to set ``copy`` to `True`.\n\n The convenience method copy() is available in which ``copy`` is `True`\n by default.\n\n Parameters\n ----------\n format : str, optional\n Time format of the replica.\n copy : bool, optional\n Return a true copy instead of using references where possible.\n\n Returns\n -------\n tm : Time object\n Replica of this object\n \"\"\"\n return self._apply('copy' if copy else 'replicate', format=format, cls=cls)\n\n def _apply(self, method, *args, format=None, cls=None, **kwargs):\n \"\"\"Create a new time object, possibly applying a method to the arrays.\n\n Parameters\n ----------\n method : str or callable\n If string, can be 'replicate' or the name of a relevant\n `~numpy.ndarray` method. In the former case, a new time instance\n with unchanged internal data is created, while in the latter the\n method is applied to the internal ``jd1`` and ``jd2`` arrays, as\n well as to possible ``location``, ``_delta_ut1_utc``, and\n ``_delta_tdb_tt`` arrays.\n If a callable, it is directly applied to the above arrays.\n Examples: 'copy', '__getitem__', 'reshape', `~numpy.broadcast_to`.\n args : tuple\n Any positional arguments for ``method``.\n kwargs : dict\n Any keyword arguments for ``method``. 
If the ``format`` keyword\n argument is present, this will be used as the Time format of the\n replica.\n\n Examples\n --------\n Some ways this is used internally::\n\n copy : ``_apply('copy')``\n replicate : ``_apply('replicate')``\n reshape : ``_apply('reshape', new_shape)``\n index or slice : ``_apply('__getitem__', item)``\n broadcast : ``_apply(np.broadcast, shape=new_shape)``\n \"\"\"\n new_format = self.format if format is None else format\n\n if callable(method):\n apply_method = lambda array: method(array, *args, **kwargs)\n\n else:\n if method == 'replicate':\n apply_method = None\n else:\n apply_method = operator.methodcaller(method, *args, **kwargs)\n\n jd1, jd2 = self._time.jd1, self._time.jd2\n if apply_method:\n jd1 = apply_method(jd1)\n jd2 = apply_method(jd2)\n\n # Get a new instance of our class and set its attributes directly.\n tm = super().__new__(cls or self.__class__)\n tm._time = TimeJD(jd1, jd2, self.scale, precision=0,\n in_subfmt='*', out_subfmt='*', from_jd=True)\n\n # Optional ndarray attributes.\n for attr in ('_delta_ut1_utc', '_delta_tdb_tt', 'location'):\n try:\n val = getattr(self, attr)\n except AttributeError:\n continue\n\n if apply_method:\n # Apply the method to any value arrays (though skip if there is\n # only an array scalar and the method would return a view,\n # since in that case nothing would change).\n if getattr(val, 'shape', ()):\n val = apply_method(val)\n elif method == 'copy' or method == 'flatten':\n # flatten should copy also for a single element array, but\n # we cannot use it directly for array scalars, since it\n # always returns a one-dimensional array. So, just copy.\n val = copy.copy(val)\n\n setattr(tm, attr, val)\n\n # Copy other 'info' attr only if it has actually been defined and the\n # time object is not a scalar (issue #10688).\n # See PR #3898 for further explanation and justification, along\n # with Quantity.__array_finalize__\n if 'info' in self.__dict__:\n tm.info = self.info\n\n # Make the new internal _time object corresponding to the format\n # in the copy. If the format is unchanged this process is lightweight\n # and does not create any new arrays.\n if new_format not in tm.FORMATS:\n raise ValueError(f'format must be one of {list(tm.FORMATS)}')\n\n NewFormat = tm.FORMATS[new_format]\n\n tm._time = NewFormat(\n tm._time.jd1, tm._time.jd2,\n tm._time._scale,\n precision=self.precision,\n in_subfmt=NewFormat._get_allowed_subfmt(self.in_subfmt),\n out_subfmt=NewFormat._get_allowed_subfmt(self.out_subfmt),\n from_jd=True)\n tm._format = new_format\n tm.SCALES = self.SCALES\n\n return tm\n\n def __copy__(self):\n \"\"\"\n Overrides the default behavior of the `copy.copy` function in\n the python stdlib to behave like `Time.copy`. Does *not* make a\n copy of the JD arrays - only copies by reference.\n \"\"\"\n return self.replicate()\n\n def __deepcopy__(self, memo):\n \"\"\"\n Overrides the default behavior of the `copy.deepcopy` function\n in the python stdlib to behave like `Time.copy`. Does make a\n copy of the JD arrays.\n \"\"\"\n return self.copy()\n\n def _advanced_index(self, indices, axis=None, keepdims=False):\n \"\"\"Turn argmin, argmax output into an advanced index.\n\n Argmin, argmax output contains indices along a given axis in an array\n shaped like the other dimensions. To use this to get values at the\n correct location, a list is constructed in which the other axes are\n indexed sequentially. 
For ``keepdims`` is ``True``, the net result is\n the same as constructing an index grid with ``np.ogrid`` and then\n replacing the ``axis`` item with ``indices`` with its shaped expanded\n at ``axis``. For ``keepdims`` is ``False``, the result is the same but\n with the ``axis`` dimension removed from all list entries.\n\n For ``axis`` is ``None``, this calls :func:`~numpy.unravel_index`.\n\n Parameters\n ----------\n indices : array\n Output of argmin or argmax.\n axis : int or None\n axis along which argmin or argmax was used.\n keepdims : bool\n Whether to construct indices that keep or remove the axis along\n which argmin or argmax was used. Default: ``False``.\n\n Returns\n -------\n advanced_index : list of arrays\n Suitable for use as an advanced index.\n \"\"\"\n if axis is None:\n return np.unravel_index(indices, self.shape)\n\n ndim = self.ndim\n if axis < 0:\n axis = axis + ndim\n\n if keepdims and indices.ndim < self.ndim:\n indices = np.expand_dims(indices, axis)\n\n index = [indices\n if i == axis\n else np.arange(s).reshape(\n (1,) * (i if keepdims or i < axis else i - 1)\n + (s,)\n + (1,) * (ndim - i - (1 if keepdims or i > axis else 2))\n )\n for i, s in enumerate(self.shape)]\n\n return tuple(index)\n\n def argmin(self, axis=None, out=None):\n \"\"\"Return indices of the minimum values along the given axis.\n\n This is similar to :meth:`~numpy.ndarray.argmin`, but adapted to ensure\n that the full precision given by the two doubles ``jd1`` and ``jd2``\n is used. See :func:`~numpy.argmin` for detailed documentation.\n \"\"\"\n # First get the minimum at normal precision.\n jd1, jd2 = self.jd1, self.jd2\n approx = np.min(jd1 + jd2, axis, keepdims=True)\n\n # Approx is very close to the true minimum, and by subtracting it at\n # full precision, all numbers near 0 can be represented correctly,\n # so we can be sure we get the true minimum.\n # The below is effectively what would be done for\n # dt = (self - self.__class__(approx, format='jd')).jd\n # which translates to:\n # approx_jd1, approx_jd2 = day_frac(approx, 0.)\n # dt = (self.jd1 - approx_jd1) + (self.jd2 - approx_jd2)\n dt = (jd1 - approx) + jd2\n\n return dt.argmin(axis, out)\n\n def argmax(self, axis=None, out=None):\n \"\"\"Return indices of the maximum values along the given axis.\n\n This is similar to :meth:`~numpy.ndarray.argmax`, but adapted to ensure\n that the full precision given by the two doubles ``jd1`` and ``jd2``\n is used. See :func:`~numpy.argmax` for detailed documentation.\n \"\"\"\n # For procedure, see comment on argmin.\n jd1, jd2 = self.jd1, self.jd2\n approx = np.max(jd1 + jd2, axis, keepdims=True)\n\n dt = (jd1 - approx) + jd2\n\n return dt.argmax(axis, out)\n\n def argsort(self, axis=-1):\n \"\"\"Returns the indices that would sort the time array.\n\n This is similar to :meth:`~numpy.ndarray.argsort`, but adapted to ensure\n that the full precision given by the two doubles ``jd1`` and ``jd2``\n is used, and that corresponding attributes are copied. 
Internally,\n it uses :func:`~numpy.lexsort`, and hence no sort method can be chosen.\n \"\"\"\n # For procedure, see comment on argmin.\n jd1, jd2 = self.jd1, self.jd2\n approx = jd1 + jd2\n remainder = (jd1 - approx) + jd2\n\n if axis is None:\n return np.lexsort((remainder.ravel(), approx.ravel()))\n else:\n return np.lexsort(keys=(remainder, approx), axis=axis)\n\n def min(self, axis=None, out=None, keepdims=False):\n \"\"\"Minimum along a given axis.\n\n This is similar to :meth:`~numpy.ndarray.min`, but adapted to ensure\n that the full precision given by the two doubles ``jd1`` and ``jd2``\n is used, and that corresponding attributes are copied.\n\n Note that the ``out`` argument is present only for compatibility with\n ``np.min``; since `Time` instances are immutable, it is not possible\n to have an actual ``out`` to store the result in.\n \"\"\"\n if out is not None:\n raise ValueError(\"Since `Time` instances are immutable, ``out`` \"\n \"cannot be set to anything but ``None``.\")\n return self[self._advanced_index(self.argmin(axis), axis, keepdims)]\n\n def max(self, axis=None, out=None, keepdims=False):\n \"\"\"Maximum along a given axis.\n\n This is similar to :meth:`~numpy.ndarray.max`, but adapted to ensure\n that the full precision given by the two doubles ``jd1`` and ``jd2``\n is used, and that corresponding attributes are copied.\n\n Note that the ``out`` argument is present only for compatibility with\n ``np.max``; since `Time` instances are immutable, it is not possible\n to have an actual ``out`` to store the result in.\n \"\"\"\n if out is not None:\n raise ValueError(\"Since `Time` instances are immutable, ``out`` \"\n \"cannot be set to anything but ``None``.\")\n return self[self._advanced_index(self.argmax(axis), axis, keepdims)]\n\n def ptp(self, axis=None, out=None, keepdims=False):\n \"\"\"Peak to peak (maximum - minimum) along a given axis.\n\n This is similar to :meth:`~numpy.ndarray.ptp`, but adapted to ensure\n that the full precision given by the two doubles ``jd1`` and ``jd2``\n is used.\n\n Note that the ``out`` argument is present only for compatibility with\n `~numpy.ptp`; since `Time` instances are immutable, it is not possible\n to have an actual ``out`` to store the result in.\n \"\"\"\n if out is not None:\n raise ValueError(\"Since `Time` instances are immutable, ``out`` \"\n \"cannot be set to anything but ``None``.\")\n return (self.max(axis, keepdims=keepdims)\n - self.min(axis, keepdims=keepdims))\n\n def sort(self, axis=-1):\n \"\"\"Return a copy sorted along the specified axis.\n\n This is similar to :meth:`~numpy.ndarray.sort`, but internally uses\n indexing with :func:`~numpy.lexsort` to ensure that the full precision\n given by the two doubles ``jd1`` and ``jd2`` is kept, and that\n corresponding attributes are properly sorted and copied as well.\n\n Parameters\n ----------\n axis : int or None\n Axis to be sorted. 
If ``None``, the flattened array is sorted.\n By default, sort over the last axis.\n \"\"\"\n return self[self._advanced_index(self.argsort(axis), axis,\n keepdims=True)]\n\n @property\n def cache(self):\n \"\"\"\n Return the cache associated with this instance.\n \"\"\"\n return self._time.cache\n\n @cache.deleter\n def cache(self):\n del self._time.cache\n\n def __getattr__(self, attr):\n \"\"\"\n Get dynamic attributes to output format or do timescale conversion.\n \"\"\"\n if attr in self.SCALES and self.scale is not None:\n cache = self.cache['scale']\n if attr not in cache:\n if attr == self.scale:\n tm = self\n else:\n tm = self.replicate()\n tm._set_scale(attr)\n if tm.shape:\n # Prevent future modification of cached array-like object\n tm.writeable = False\n cache[attr] = tm\n return cache[attr]\n\n elif attr in self.FORMATS:\n return self.to_value(attr, subfmt=None)\n\n elif attr in TIME_SCALES: # allowed ones done above (self.SCALES)\n if self.scale is None:\n raise ScaleValueError(\"Cannot convert TimeDelta with \"\n \"undefined scale to any defined scale.\")\n else:\n raise ScaleValueError(\"Cannot convert {} with scale \"\n \"'{}' to scale '{}'\"\n .format(self.__class__.__name__,\n self.scale, attr))\n\n else:\n # Should raise AttributeError\n return self.__getattribute__(attr)\n\n def __dir__(self):\n return sorted(set(super().__dir__()) | set(self.SCALES) | set(self.FORMATS))\n\n def _match_shape(self, val):\n \"\"\"\n Ensure that `val` is matched to length of self. If val has length 1\n then broadcast, otherwise cast to double and make sure shape matches.\n \"\"\"\n val = _make_array(val, copy=True) # be conservative and copy\n if val.size > 1 and val.shape != self.shape:\n try:\n # check the value can be broadcast to the shape of self.\n val = np.broadcast_to(val, self.shape, subok=True)\n except Exception:\n raise ValueError('Attribute shape must match or be '\n 'broadcastable to that of Time object. 
'\n 'Typically, give either a single value or '\n 'one for each time.')\n\n return val\n\n def _time_comparison(self, other, op):\n \"\"\"If other is of same class as self, compare difference in self.scale.\n Otherwise, return NotImplemented\n \"\"\"\n if other.__class__ is not self.__class__:\n try:\n other = self.__class__(other, scale=self.scale)\n except Exception:\n # Let other have a go.\n return NotImplemented\n\n if(self.scale is not None and self.scale not in other.SCALES\n or other.scale is not None and other.scale not in self.SCALES):\n # Other will also not be able to do it, so raise a TypeError\n # immediately, allowing us to explain why it doesn't work.\n raise TypeError(\"Cannot compare {} instances with scales \"\n \"'{}' and '{}'\".format(self.__class__.__name__,\n self.scale, other.scale))\n\n if self.scale is not None and other.scale is not None:\n other = getattr(other, self.scale)\n\n return op((self.jd1 - other.jd1) + (self.jd2 - other.jd2), 0.)\n\n def __lt__(self, other):\n return self._time_comparison(other, operator.lt)\n\n def __le__(self, other):\n return self._time_comparison(other, operator.le)\n\n def __eq__(self, other):\n \"\"\"\n If other is an incompatible object for comparison, return `False`.\n Otherwise, return `True` if the time difference between self and\n other is zero.\n \"\"\"\n return self._time_comparison(other, operator.eq)\n\n def __ne__(self, other):\n \"\"\"\n If other is an incompatible object for comparison, return `True`.\n Otherwise, return `False` if the time difference between self and\n other is zero.\n \"\"\"\n return self._time_comparison(other, operator.ne)\n\n def __gt__(self, other):\n return self._time_comparison(other, operator.gt)\n\n def __ge__(self, other):\n return self._time_comparison(other, operator.ge)\n\n\nclass Time(TimeBase):\n \"\"\"\n Represent and manipulate times and dates for astronomy.\n\n A `Time` object is initialized with one or more times in the ``val``\n argument. The input times in ``val`` must conform to the specified\n ``format`` and must correspond to the specified time ``scale``. The\n optional ``val2`` time input should be supplied only for numeric input\n formats (e.g. JD) where very high precision (better than 64-bit precision)\n is required.\n\n The allowed values for ``format`` can be listed with::\n\n >>> list(Time.FORMATS)\n ['jd', 'mjd', 'decimalyear', 'unix', 'unix_tai', 'cxcsec', 'gps', 'plot_date',\n 'stardate', 'datetime', 'ymdhms', 'iso', 'isot', 'yday', 'datetime64',\n 'fits', 'byear', 'jyear', 'byear_str', 'jyear_str']\n\n See also: http://docs.astropy.org/en/stable/time/\n\n Parameters\n ----------\n val : sequence, ndarray, number, str, bytes, or `~astropy.time.Time` object\n Value(s) to initialize the time or times. Bytes are decoded as ascii.\n val2 : sequence, ndarray, or number; optional\n Value(s) to initialize the time or times. 
Only used for numerical\n input, to help preserve precision.\n format : str, optional\n Format of input value(s)\n scale : str, optional\n Time scale of input value(s), must be one of the following:\n ('tai', 'tcb', 'tcg', 'tdb', 'tt', 'ut1', 'utc')\n precision : int, optional\n Digits of precision in string representation of time\n in_subfmt : str, optional\n Unix glob to select subformats for parsing input times\n out_subfmt : str, optional\n Unix glob to select subformat for outputting times\n location : `~astropy.coordinates.EarthLocation` or tuple, optional\n If given as an tuple, it should be able to initialize an\n an EarthLocation instance, i.e., either contain 3 items with units of\n length for geocentric coordinates, or contain a longitude, latitude,\n and an optional height for geodetic coordinates.\n Can be a single location, or one for each input time.\n If not given, assumed to be the center of the Earth for time scale\n transformations to and from the solar-system barycenter.\n copy : bool, optional\n Make a copy of the input values\n \"\"\"\n SCALES = TIME_SCALES\n \"\"\"List of time scales\"\"\"\n\n FORMATS = TIME_FORMATS\n \"\"\"Dict of time formats\"\"\"\n\n def __new__(cls, val, val2=None, format=None, scale=None,\n precision=None, in_subfmt=None, out_subfmt=None,\n location=None, copy=False):\n\n if isinstance(val, Time):\n self = val.replicate(format=format, copy=copy, cls=cls)\n else:\n self = super().__new__(cls)\n\n return self\n\n def __init__(self, val, val2=None, format=None, scale=None,\n precision=None, in_subfmt=None, out_subfmt=None,\n location=None, copy=False):\n\n if location is not None:\n from astropy.coordinates import EarthLocation\n if isinstance(location, EarthLocation):\n self.location = location\n else:\n self.location = EarthLocation(*location)\n if self.location.size == 1:\n self.location = self.location.squeeze()\n else:\n if not hasattr(self, 'location'):\n self.location = None\n\n if isinstance(val, Time):\n # Update _time formatting parameters if explicitly specified\n if precision is not None:\n self._time.precision = precision\n if in_subfmt is not None:\n self._time.in_subfmt = in_subfmt\n if out_subfmt is not None:\n self._time.out_subfmt = out_subfmt\n self.SCALES = TIME_TYPES[self.scale]\n if scale is not None:\n self._set_scale(scale)\n else:\n self._init_from_vals(val, val2, format, scale, copy,\n precision, in_subfmt, out_subfmt)\n self.SCALES = TIME_TYPES[self.scale]\n\n if self.location is not None and (self.location.size > 1\n and self.location.shape != self.shape):\n try:\n # check the location can be broadcast to self's shape.\n self.location = np.broadcast_to(self.location, self.shape,\n subok=True)\n except Exception as err:\n raise ValueError('The location with shape {} cannot be '\n 'broadcast against time with shape {}. '\n 'Typically, either give a single location or '\n 'one for each time.'\n .format(self.location.shape, self.shape)) from err\n\n def _make_value_equivalent(self, item, value):\n \"\"\"Coerce setitem value into an equivalent Time object\"\"\"\n\n # If there is a vector location then broadcast to the Time shape\n # and then select with ``item``\n if self.location is not None and self.location.shape:\n self_location = np.broadcast_to(self.location, self.shape, subok=True)[item]\n else:\n self_location = self.location\n\n if isinstance(value, Time):\n # Make sure locations are compatible. 
Location can be either None or\n # a Location object.\n if self_location is None and value.location is None:\n match = True\n elif ((self_location is None and value.location is not None)\n or (self_location is not None and value.location is None)):\n match = False\n else:\n match = np.all(self_location == value.location)\n if not match:\n raise ValueError('cannot set to Time with different location: '\n 'expected location={} and '\n 'got location={}'\n .format(self_location, value.location))\n else:\n try:\n value = self.__class__(value, scale=self.scale, location=self_location)\n except Exception:\n try:\n value = self.__class__(value, scale=self.scale, format=self.format,\n location=self_location)\n except Exception as err:\n raise ValueError('cannot convert value to a compatible Time object: {}'\n .format(err))\n return value\n\n @classmethod\n def now(cls):\n \"\"\"\n Creates a new object corresponding to the instant in time this\n method is called.\n\n .. note::\n \"Now\" is determined using the `~datetime.datetime.utcnow`\n function, so its accuracy and precision is determined by that\n function. Generally that means it is set by the accuracy of\n your system clock.\n\n Returns\n -------\n nowtime : :class:`~astropy.time.Time`\n A new `Time` object (or a subclass of `Time` if this is called from\n such a subclass) at the current time.\n \"\"\"\n # call `utcnow` immediately to be sure it's ASAP\n dtnow = datetime.utcnow()\n return cls(val=dtnow, format='datetime', scale='utc')\n\n info = TimeInfo()\n\n @classmethod\n def strptime(cls, time_string, format_string, **kwargs):\n \"\"\"\n Parse a string to a Time according to a format specification.\n See `time.strptime` documentation for format specification.\n\n >>> Time.strptime('2012-Jun-30 23:59:60', '%Y-%b-%d %H:%M:%S')\n <Time object: scale='utc' format='isot' value=2012-06-30T23:59:60.000>\n\n Parameters\n ----------\n time_string : str, sequence, or ndarray\n Objects containing time data of type string\n format_string : str\n String specifying format of time_string.\n kwargs : dict\n Any keyword arguments for ``Time``. If the ``format`` keyword\n argument is present, this will be used as the Time format.\n\n Returns\n -------\n time_obj : `~astropy.time.Time`\n A new `~astropy.time.Time` object corresponding to the input\n ``time_string``.\n\n \"\"\"\n time_array = np.asarray(time_string)\n\n if time_array.dtype.kind not in ('U', 'S'):\n err = \"Expected type is string, a bytes-like object or a sequence\"\\\n \" of these. Got dtype '{}'\".format(time_array.dtype.kind)\n raise TypeError(err)\n\n to_string = (str if time_array.dtype.kind == 'U' else\n lambda x: str(x.item(), encoding='ascii'))\n iterator = np.nditer([time_array, None],\n op_dtypes=[time_array.dtype, 'U30'])\n\n for time, formatted in iterator:\n tt, fraction = _strptime._strptime(to_string(time), format_string)\n time_tuple = tt[:6] + (fraction,)\n formatted[...] 
= '{:04}-{:02}-{:02}T{:02}:{:02}:{:02}.{:06}'\\\n .format(*time_tuple)\n\n format = kwargs.pop('format', None)\n out = cls(*iterator.operands[1:], format='isot', **kwargs)\n if format is not None:\n out.format = format\n\n return out\n\n def strftime(self, format_spec):\n \"\"\"\n Convert Time to a string or a numpy.array of strings according to a\n format specification.\n See `time.strftime` documentation for format specification.\n\n Parameters\n ----------\n format_spec : str\n Format definition of return string.\n\n Returns\n -------\n formatted : str or numpy.array\n String or numpy.array of strings formatted according to the given\n format string.\n\n \"\"\"\n formatted_strings = []\n for sk in self.replicate('iso')._time.str_kwargs():\n date_tuple = date(sk['year'], sk['mon'], sk['day']).timetuple()\n datetime_tuple = (sk['year'], sk['mon'], sk['day'],\n sk['hour'], sk['min'], sk['sec'],\n date_tuple[6], date_tuple[7], -1)\n fmtd_str = format_spec\n if '%f' in fmtd_str:\n fmtd_str = fmtd_str.replace('%f', '{frac:0{precision}}'.format(\n frac=sk['fracsec'], precision=self.precision))\n fmtd_str = strftime(fmtd_str, datetime_tuple)\n formatted_strings.append(fmtd_str)\n\n if self.isscalar:\n return formatted_strings[0]\n else:\n return np.array(formatted_strings).reshape(self.shape)\n\n def light_travel_time(self, skycoord, kind='barycentric', location=None, ephemeris=None):\n \"\"\"Light travel time correction to the barycentre or heliocentre.\n\n The frame transformations used to calculate the location of the solar\n system barycentre and the heliocentre rely on the erfa routine epv00,\n which is consistent with the JPL DE405 ephemeris to an accuracy of\n 11.2 km, corresponding to a light travel time of 4 microseconds.\n\n The routine assumes the source(s) are at large distance, i.e., neglects\n finite-distance effects.\n\n Parameters\n ----------\n skycoord : `~astropy.coordinates.SkyCoord`\n The sky location to calculate the correction for.\n kind : str, optional\n ``'barycentric'`` (default) or ``'heliocentric'``\n location : `~astropy.coordinates.EarthLocation`, optional\n The location of the observatory to calculate the correction for.\n If no location is given, the ``location`` attribute of the Time\n object is used\n ephemeris : str, optional\n Solar system ephemeris to use (e.g., 'builtin', 'jpl'). By default,\n use the one set with ``astropy.coordinates.solar_system_ephemeris.set``.\n For more information, see `~astropy.coordinates.solar_system_ephemeris`.\n\n Returns\n -------\n time_offset : `~astropy.time.TimeDelta`\n The time offset between the barycentre or Heliocentre and Earth,\n in TDB seconds. 
Should be added to the original time to get the\n time in the Solar system barycentre or the Heliocentre.\n Also, the time conversion to BJD will then include the relativistic correction as well.\n \"\"\"\n\n if kind.lower() not in ('barycentric', 'heliocentric'):\n raise ValueError(\"'kind' parameter must be one of 'heliocentric' \"\n \"or 'barycentric'\")\n\n if location is None:\n if self.location is None:\n raise ValueError('An EarthLocation needs to be set or passed '\n 'in to calculate bary- or heliocentric '\n 'corrections')\n location = self.location\n\n from astropy.coordinates import (\n GCRS, HCRS, ICRS, CartesianRepresentation, UnitSphericalRepresentation,\n solar_system_ephemeris)\n\n # ensure sky location is ICRS compatible\n if not skycoord.is_transformable_to(ICRS()):\n raise ValueError(\"Given skycoord is not transformable to the ICRS\")\n\n # get location of observatory in ITRS coordinates at this Time\n try:\n itrs = location.get_itrs(obstime=self)\n except Exception:\n raise ValueError(\"Supplied location does not have a valid `get_itrs` method\")\n\n with solar_system_ephemeris.set(ephemeris):\n if kind.lower() == 'heliocentric':\n # convert to heliocentric coordinates, aligned with ICRS\n cpos = itrs.transform_to(HCRS(obstime=self)).cartesian.xyz\n else:\n # first we need to convert to GCRS coordinates with the correct\n # obstime, since ICRS coordinates have no frame time\n gcrs_coo = itrs.transform_to(GCRS(obstime=self))\n # convert to barycentric (BCRS) coordinates, aligned with ICRS\n cpos = gcrs_coo.transform_to(ICRS()).cartesian.xyz\n\n # get unit ICRS vector to star\n spos = (skycoord.icrs.represent_as(UnitSphericalRepresentation).\n represent_as(CartesianRepresentation).xyz)\n\n # Move X,Y,Z to last dimension, to enable possible broadcasting below.\n cpos = np.rollaxis(cpos, 0, cpos.ndim)\n spos = np.rollaxis(spos, 0, spos.ndim)\n\n # calculate light travel time correction\n tcor_val = (spos * cpos).sum(axis=-1) / const.c\n return TimeDelta(tcor_val, scale='tdb')\n\n def earth_rotation_angle(self, longitude=None):\n \"\"\"Calculate local Earth rotation angle.\n\n Parameters\n ----------\n longitude : `~astropy.units.Quantity`, `~astropy.coordinates.EarthLocation`, str, or None; optional\n The longitude on the Earth at which to compute the Earth rotation\n angle (taken from a location as needed). If `None` (default), taken\n from the ``location`` attribute of the Time instance. If the special\n string 'tio', the result will be relative to the Terrestrial\n Intermediate Origin (TIO) (i.e., the output of `~erfa.era00`).\n\n Returns\n -------\n `~astropy.coordinates.Longitude`\n Local Earth rotation angle with units of hourangle.\n\n See Also\n --------\n astropy.time.Time.sidereal_time\n\n References\n ----------\n IAU 2006 NFA Glossary\n (currently located at: https://syrte.obspm.fr/iauWGnfa/NFA_Glossary.html)\n\n Notes\n -----\n The difference between apparent sidereal time and Earth rotation angle\n is the equation of the origins, which is the angle between the Celestial\n Intermediate Origin (CIO) and the equinox. 
Applying apparent sidereal\n time to the hour angle yields the true apparent Right Ascension with\n respect to the equinox, while applying the Earth rotation angle yields\n the intermediate (CIRS) Right Ascension with respect to the CIO.\n\n The result includes the TIO locator (s'), which positions the Terrestrial\n Intermediate Origin on the equator of the Celestial Intermediate Pole (CIP)\n and is rigorously corrected for polar motion.\n (except when ``longitude='tio'``).\n\n \"\"\" # noqa\n if isinstance(longitude, str) and longitude == 'tio':\n longitude = 0\n include_tio = False\n else:\n include_tio = True\n\n return self._sid_time_or_earth_rot_ang(longitude=longitude,\n function=erfa.era00, scales=('ut1',),\n include_tio=include_tio)\n\n def sidereal_time(self, kind, longitude=None, model=None):\n \"\"\"Calculate sidereal time.\n\n Parameters\n ----------\n kind : str\n ``'mean'`` or ``'apparent'``, i.e., accounting for precession\n only, or also for nutation.\n longitude : `~astropy.units.Quantity`, `~astropy.coordinates.EarthLocation`, str, or None; optional\n The longitude on the Earth at which to compute the Earth rotation\n angle (taken from a location as needed). If `None` (default), taken\n from the ``location`` attribute of the Time instance. If the special\n string 'greenwich' or 'tio', the result will be relative to longitude\n 0 for models before 2000, and relative to the Terrestrial Intermediate\n Origin (TIO) for later ones (i.e., the output of the relevant ERFA\n function that calculates greenwich sidereal time).\n model : str or None; optional\n Precession (and nutation) model to use. The available ones are:\n - {0}: {1}\n - {2}: {3}\n If `None` (default), the last (most recent) one from the appropriate\n list above is used.\n\n Returns\n -------\n `~astropy.coordinates.Longitude`\n Local sidereal time, with units of hourangle.\n\n See Also\n --------\n astropy.time.Time.earth_rotation_angle\n\n References\n ----------\n IAU 2006 NFA Glossary\n (currently located at: https://syrte.obspm.fr/iauWGnfa/NFA_Glossary.html)\n\n Notes\n -----\n The difference between apparent sidereal time and Earth rotation angle\n is the equation of the origins, which is the angle between the Celestial\n Intermediate Origin (CIO) and the equinox. 
Applying apparent sidereal\n time to the hour angle yields the true apparent Right Ascension with\n respect to the equinox, while applying the Earth rotation angle yields\n the intermediate (CIRS) Right Ascension with respect to the CIO.\n\n For the IAU precession models from 2000 onwards, the result includes the\n TIO locator (s'), which positions the Terrestrial Intermediate Origin on\n the equator of the Celestial Intermediate Pole (CIP) and is rigorously\n corrected for polar motion (except when ``longitude='tio'`` or ``'greenwich'``).\n\n \"\"\" # noqa (docstring is formatted below)\n\n if kind.lower() not in SIDEREAL_TIME_MODELS.keys():\n raise ValueError('The kind of sidereal time has to be {}'.format(\n ' or '.join(sorted(SIDEREAL_TIME_MODELS.keys()))))\n\n available_models = SIDEREAL_TIME_MODELS[kind.lower()]\n\n if model is None:\n model = sorted(available_models.keys())[-1]\n elif model.upper() not in available_models:\n raise ValueError(\n 'Model {} not implemented for {} sidereal time; '\n 'available models are {}'\n .format(model, kind, sorted(available_models.keys())))\n\n model_kwargs = available_models[model.upper()]\n\n if isinstance(longitude, str) and longitude in ('tio', 'greenwich'):\n longitude = 0\n model_kwargs = model_kwargs.copy()\n model_kwargs['include_tio'] = False\n\n return self._sid_time_or_earth_rot_ang(longitude=longitude, **model_kwargs)\n\n if isinstance(sidereal_time.__doc__, str):\n sidereal_time.__doc__ = sidereal_time.__doc__.format(\n 'apparent', sorted(SIDEREAL_TIME_MODELS['apparent'].keys()),\n 'mean', sorted(SIDEREAL_TIME_MODELS['mean'].keys()))\n\n def _sid_time_or_earth_rot_ang(self, longitude, function, scales, include_tio=True):\n \"\"\"Calculate a local sidereal time or Earth rotation angle.\n\n Parameters\n ----------\n longitude : `~astropy.units.Quantity`, `~astropy.coordinates.EarthLocation`, str, or None; optional\n The longitude on the Earth at which to compute the Earth rotation\n angle (taken from a location as needed). If `None` (default), taken\n from the ``location`` attribute of the Time instance.\n function : callable\n The ERFA function to use.\n scales : tuple of str\n The time scales that the function requires on input.\n include_tio : bool, optional\n Whether to includes the TIO locator corrected for polar motion.\n Should be `False` for pre-2000 IAU models. 
Default: `True`.\n\n Returns\n -------\n `~astropy.coordinates.Longitude`\n Local sidereal time or Earth rotation angle, with units of hourangle.\n\n \"\"\" # noqa\n from astropy.coordinates import EarthLocation, Longitude\n from astropy.coordinates.builtin_frames.utils import get_polar_motion\n from astropy.coordinates.matrix_utilities import rotation_matrix\n\n if longitude is None:\n if self.location is None:\n raise ValueError('No longitude is given but the location for '\n 'the Time object is not set.')\n longitude = self.location.lon\n elif isinstance(longitude, EarthLocation):\n longitude = longitude.lon\n else:\n # Sanity check on input; default unit is degree.\n longitude = Longitude(longitude, u.degree, copy=False)\n\n theta = self._call_erfa(function, scales)\n\n if include_tio:\n # TODO: this duplicates part of coordinates.erfa_astrom.ErfaAstrom.apio;\n # maybe posisble to factor out to one or the other.\n sp = self._call_erfa(erfa.sp00, ('tt',))\n xp, yp = get_polar_motion(self)\n # Form the rotation matrix, CIRS to apparent [HA,Dec].\n r = (rotation_matrix(longitude, 'z')\n @ rotation_matrix(-yp, 'x', unit=u.radian)\n @ rotation_matrix(-xp, 'y', unit=u.radian)\n @ rotation_matrix(theta + sp, 'z', unit=u.radian))\n # Solve for angle.\n angle = np.arctan2(r[..., 0, 1], r[..., 0, 0]) << u.radian\n\n else:\n angle = longitude + (theta << u.radian)\n\n return Longitude(angle, u.hourangle)\n\n def _call_erfa(self, function, scales):\n # TODO: allow erfa functions to be used on Time with __array_ufunc__.\n erfa_parameters = [getattr(getattr(self, scale)._time, jd_part)\n for scale in scales\n for jd_part in ('jd1', 'jd2_filled')]\n\n result = function(*erfa_parameters)\n\n if self.masked:\n result[self.mask] = np.nan\n\n return result\n\n def get_delta_ut1_utc(self, iers_table=None, return_status=False):\n \"\"\"Find UT1 - UTC differences by interpolating in IERS Table.\n\n Parameters\n ----------\n iers_table : `~astropy.utils.iers.IERS`, optional\n Table containing UT1-UTC differences from IERS Bulletins A\n and/or B. Default: `~astropy.utils.iers.earth_orientation_table`\n (which in turn defaults to the combined version provided by\n `~astropy.utils.iers.IERS_Auto`).\n return_status : bool\n Whether to return status values. 
If `False` (default), iers\n raises `IndexError` if any time is out of the range\n covered by the IERS table.\n\n Returns\n -------\n ut1_utc : float or float array\n UT1-UTC, interpolated in IERS Table\n status : int or int array\n Status values (if ``return_status=`True```)::\n ``astropy.utils.iers.FROM_IERS_B``\n ``astropy.utils.iers.FROM_IERS_A``\n ``astropy.utils.iers.FROM_IERS_A_PREDICTION``\n ``astropy.utils.iers.TIME_BEFORE_IERS_RANGE``\n ``astropy.utils.iers.TIME_BEYOND_IERS_RANGE``\n\n Notes\n -----\n In normal usage, UT1-UTC differences are calculated automatically\n on the first instance ut1 is needed.\n\n Examples\n --------\n To check in code whether any times are before the IERS table range::\n\n >>> from astropy.utils.iers import TIME_BEFORE_IERS_RANGE\n >>> t = Time(['1961-01-01', '2000-01-01'], scale='utc')\n >>> delta, status = t.get_delta_ut1_utc(return_status=True) # doctest: +REMOTE_DATA\n >>> status == TIME_BEFORE_IERS_RANGE # doctest: +REMOTE_DATA\n array([ True, False]...)\n \"\"\"\n if iers_table is None:\n from astropy.utils.iers import earth_orientation_table\n iers_table = earth_orientation_table.get()\n\n return iers_table.ut1_utc(self.utc, return_status=return_status)\n\n # Property for ERFA DUT arg = UT1 - UTC\n def _get_delta_ut1_utc(self, jd1=None, jd2=None):\n \"\"\"\n Get ERFA DUT arg = UT1 - UTC. This getter takes optional jd1 and\n jd2 args because it gets called that way when converting time scales.\n If delta_ut1_utc is not yet set, this will interpolate them from the\n the IERS table.\n \"\"\"\n # Sec. 4.3.1: the arg DUT is the quantity delta_UT1 = UT1 - UTC in\n # seconds. It is obtained from tables published by the IERS.\n if not hasattr(self, '_delta_ut1_utc'):\n from astropy.utils.iers import earth_orientation_table\n iers_table = earth_orientation_table.get()\n # jd1, jd2 are normally set (see above), except if delta_ut1_utc\n # is access directly; ensure we behave as expected for that case\n if jd1 is None:\n self_utc = self.utc\n jd1, jd2 = self_utc._time.jd1, self_utc._time.jd2_filled\n scale = 'utc'\n else:\n scale = self.scale\n # interpolate UT1-UTC in IERS table\n delta = iers_table.ut1_utc(jd1, jd2)\n # if we interpolated using UT1 jds, we may be off by one\n # second near leap seconds (and very slightly off elsewhere)\n if scale == 'ut1':\n # calculate UTC using the offset we got; the ERFA routine\n # is tolerant of leap seconds, so will do this right\n jd1_utc, jd2_utc = erfa.ut1utc(jd1, jd2, delta.to_value(u.s))\n # calculate a better estimate using the nearly correct UTC\n delta = iers_table.ut1_utc(jd1_utc, jd2_utc)\n\n self._set_delta_ut1_utc(delta)\n\n return self._delta_ut1_utc\n\n def _set_delta_ut1_utc(self, val):\n del self.cache\n if hasattr(val, 'to'): # Matches Quantity but also TimeDelta.\n val = val.to(u.second).value\n val = self._match_shape(val)\n self._delta_ut1_utc = val\n\n # Note can't use @property because _get_delta_tdb_tt is explicitly\n # called with the optional jd1 and jd2 args.\n delta_ut1_utc = property(_get_delta_ut1_utc, _set_delta_ut1_utc)\n \"\"\"UT1 - UTC time scale offset\"\"\"\n\n # Property for ERFA DTR arg = TDB - TT\n def _get_delta_tdb_tt(self, jd1=None, jd2=None):\n if not hasattr(self, '_delta_tdb_tt'):\n # If jd1 and jd2 are not provided (which is the case for property\n # attribute access) then require that the time scale is TT or TDB.\n # Otherwise the computations here are not correct.\n if jd1 is None or jd2 is None:\n if self.scale not in ('tt', 'tdb'):\n raise ValueError('Accessing the 
delta_tdb_tt attribute '\n 'is only possible for TT or TDB time '\n 'scales')\n else:\n jd1 = self._time.jd1\n jd2 = self._time.jd2_filled\n\n # First go from the current input time (which is either\n # TDB or TT) to an approximate UT1. Since TT and TDB are\n # pretty close (few msec?), assume TT. Similarly, since the\n # UT1 terms are very small, use UTC instead of UT1.\n njd1, njd2 = erfa.tttai(jd1, jd2)\n njd1, njd2 = erfa.taiutc(njd1, njd2)\n # subtract 0.5, so UT is fraction of the day from midnight\n ut = day_frac(njd1 - 0.5, njd2)[1]\n\n if self.location is None:\n # Assume geocentric.\n self._delta_tdb_tt = erfa.dtdb(jd1, jd2, ut, 0., 0., 0.)\n else:\n location = self.location\n # Geodetic params needed for d_tdb_tt()\n lon = location.lon\n rxy = np.hypot(location.x, location.y)\n z = location.z\n self._delta_tdb_tt = erfa.dtdb(\n jd1, jd2, ut, lon.to_value(u.radian),\n rxy.to_value(u.km), z.to_value(u.km))\n\n return self._delta_tdb_tt\n\n def _set_delta_tdb_tt(self, val):\n del self.cache\n if hasattr(val, 'to'): # Matches Quantity but also TimeDelta.\n val = val.to(u.second).value\n val = self._match_shape(val)\n self._delta_tdb_tt = val\n\n # Note can't use @property because _get_delta_tdb_tt is explicitly\n # called with the optional jd1 and jd2 args.\n delta_tdb_tt = property(_get_delta_tdb_tt, _set_delta_tdb_tt)\n \"\"\"TDB - TT time scale offset\"\"\"\n\n def __sub__(self, other):\n # T - Tdelta = T\n # T - T = Tdelta\n other_is_delta = not isinstance(other, Time)\n if other_is_delta: # T - Tdelta\n # Check other is really a TimeDelta or something that can initialize.\n if not isinstance(other, TimeDelta):\n try:\n other = TimeDelta(other)\n except Exception:\n return NotImplemented\n\n # we need a constant scale to calculate, which is guaranteed for\n # TimeDelta, but not for Time (which can be UTC)\n out = self.replicate()\n if self.scale in other.SCALES:\n if other.scale not in (out.scale, None):\n other = getattr(other, out.scale)\n else:\n if other.scale is None:\n out._set_scale('tai')\n else:\n if self.scale not in TIME_TYPES[other.scale]:\n raise TypeError(\"Cannot subtract Time and TimeDelta instances \"\n \"with scales '{}' and '{}'\"\n .format(self.scale, other.scale))\n out._set_scale(other.scale)\n # remove attributes that are invalidated by changing time\n for attr in ('_delta_ut1_utc', '_delta_tdb_tt'):\n if hasattr(out, attr):\n delattr(out, attr)\n\n else: # T - T\n # the scales should be compatible (e.g., cannot convert TDB to LOCAL)\n if other.scale not in self.SCALES:\n raise TypeError(\"Cannot subtract Time instances \"\n \"with scales '{}' and '{}'\"\n .format(self.scale, other.scale))\n self_time = (self._time if self.scale in TIME_DELTA_SCALES\n else self.tai._time)\n # set up TimeDelta, subtraction to be done shortly\n out = TimeDelta(self_time.jd1, self_time.jd2, format='jd',\n scale=self_time.scale)\n\n if other.scale != out.scale:\n other = getattr(other, out.scale)\n\n jd1 = out._time.jd1 - other._time.jd1\n jd2 = out._time.jd2 - other._time.jd2\n\n out._time.jd1, out._time.jd2 = day_frac(jd1, jd2)\n\n if other_is_delta:\n # Go back to left-side scale if needed\n out._set_scale(self.scale)\n\n return out\n\n def __add__(self, other):\n # T + Tdelta = T\n # T + T = error\n if isinstance(other, Time):\n raise OperandTypeError(self, other, '+')\n\n # Check other is really a TimeDelta or something that can initialize.\n if not isinstance(other, TimeDelta):\n try:\n other = TimeDelta(other)\n except Exception:\n return NotImplemented\n\n # ideally, 
we calculate in the scale of the Time item, since that is\n # what we want the output in, but this may not be possible, since\n # TimeDelta cannot be converted arbitrarily\n out = self.replicate()\n if self.scale in other.SCALES:\n if other.scale not in (out.scale, None):\n other = getattr(other, out.scale)\n else:\n if other.scale is None:\n out._set_scale('tai')\n else:\n if self.scale not in TIME_TYPES[other.scale]:\n raise TypeError(\"Cannot add Time and TimeDelta instances \"\n \"with scales '{}' and '{}'\"\n .format(self.scale, other.scale))\n out._set_scale(other.scale)\n # remove attributes that are invalidated by changing time\n for attr in ('_delta_ut1_utc', '_delta_tdb_tt'):\n if hasattr(out, attr):\n delattr(out, attr)\n\n jd1 = out._time.jd1 + other._time.jd1\n jd2 = out._time.jd2 + other._time.jd2\n\n out._time.jd1, out._time.jd2 = day_frac(jd1, jd2)\n\n # Go back to left-side scale if needed\n out._set_scale(self.scale)\n\n return out\n\n # Reverse addition is possible: <something-Tdelta-ish> + T\n # but there is no case of <something> - T, so no __rsub__.\n def __radd__(self, other):\n return self.__add__(other)\n\n def __array_function__(self, function, types, args, kwargs):\n \"\"\"\n Wrap numpy functions.\n\n Parameters\n ----------\n function : callable\n Numpy function to wrap\n types : iterable of classes\n Classes that provide an ``__array_function__`` override. Can\n in principle be used to interact with other classes. Below,\n mostly passed on to `~numpy.ndarray`, which can only interact\n with subclasses.\n args : tuple\n Positional arguments provided in the function call.\n kwargs : dict\n Keyword arguments provided in the function call.\n \"\"\"\n if function in CUSTOM_FUNCTIONS:\n f = CUSTOM_FUNCTIONS[function]\n return f(*args, **kwargs)\n elif function in UNSUPPORTED_FUNCTIONS:\n return NotImplemented\n else:\n return super().__array_function__(function, types, args, kwargs)\n\n def to_datetime(self, timezone=None):\n # TODO: this could likely go through to_value, as long as that\n # had an **kwargs part that was just passed on to _time.\n tm = self.replicate(format='datetime')\n return tm._shaped_like_input(tm._time.to_value(timezone))\n\n to_datetime.__doc__ = TimeDatetime.to_value.__doc__\n\n\nclass TimeDeltaMissingUnitWarning(AstropyDeprecationWarning):\n \"\"\"Warning for missing unit or format in TimeDelta\"\"\"\n pass\n\n\nclass TimeDelta(TimeBase):\n \"\"\"\n Represent the time difference between two times.\n\n A TimeDelta object is initialized with one or more times in the ``val``\n argument. The input times in ``val`` must conform to the specified\n ``format``. The optional ``val2`` time input should be supplied only for\n numeric input formats (e.g. JD) where very high precision (better than\n 64-bit precision) is required.\n\n The allowed values for ``format`` can be listed with::\n\n >>> list(TimeDelta.FORMATS)\n ['sec', 'jd', 'datetime']\n\n Note that for time differences, the scale can be among three groups:\n geocentric ('tai', 'tt', 'tcg'), barycentric ('tcb', 'tdb'), and rotational\n ('ut1'). Within each of these, the scales for time differences are the\n same. Conversion between geocentric and barycentric is possible, as there\n is only a scale factor change, but one cannot convert to or from 'ut1', as\n this requires knowledge of the actual times, not just their difference. 
For\n a similar reason, 'utc' is not a valid scale for a time difference: a UTC\n day is not always 86400 seconds.\n\n See also:\n\n - https://docs.astropy.org/en/stable/time/\n - https://docs.astropy.org/en/stable/time/index.html#time-deltas\n\n Parameters\n ----------\n val : sequence, ndarray, number, `~astropy.units.Quantity` or `~astropy.time.TimeDelta` object\n Value(s) to initialize the time difference(s). Any quantities will\n be converted appropriately (with care taken to avoid rounding\n errors for regular time units).\n val2 : sequence, ndarray, number, or `~astropy.units.Quantity`; optional\n Additional values, as needed to preserve precision.\n format : str, optional\n Format of input value(s). For numerical inputs without units,\n \"jd\" is assumed and values are interpreted as days.\n A deprecation warning is raised in this case. To avoid the warning,\n either specify the format or add units to the input values.\n scale : str, optional\n Time scale of input value(s), must be one of the following values:\n ('tdb', 'tt', 'ut1', 'tcg', 'tcb', 'tai'). If not given (or\n ``None``), the scale is arbitrary; when added or subtracted from a\n ``Time`` instance, it will be used without conversion.\n copy : bool, optional\n Make a copy of the input values\n \"\"\"\n SCALES = TIME_DELTA_SCALES\n \"\"\"List of time delta scales.\"\"\"\n\n FORMATS = TIME_DELTA_FORMATS\n \"\"\"Dict of time delta formats.\"\"\"\n\n info = TimeDeltaInfo()\n\n def __new__(cls, val, val2=None, format=None, scale=None,\n precision=None, in_subfmt=None, out_subfmt=None,\n location=None, copy=False):\n\n if isinstance(val, TimeDelta):\n self = val.replicate(format=format, copy=copy, cls=cls)\n else:\n self = super().__new__(cls)\n\n return self\n\n def __init__(self, val, val2=None, format=None, scale=None, copy=False):\n if isinstance(val, TimeDelta):\n if scale is not None:\n self._set_scale(scale)\n else:\n format = format or self._get_format(val)\n self._init_from_vals(val, val2, format, scale, copy)\n\n if scale is not None:\n self.SCALES = TIME_DELTA_TYPES[scale]\n\n @staticmethod\n def _get_format(val):\n if isinstance(val, timedelta):\n return 'datetime'\n\n if getattr(val, 'unit', None) is None:\n warn('Numerical value without unit or explicit format passed to'\n ' TimeDelta, assuming days', TimeDeltaMissingUnitWarning)\n\n return 'jd'\n\n def replicate(self, *args, **kwargs):\n out = super().replicate(*args, **kwargs)\n out.SCALES = self.SCALES\n return out\n\n def to_datetime(self):\n \"\"\"\n Convert to ``datetime.timedelta`` object.\n \"\"\"\n tm = self.replicate(format='datetime')\n return tm._shaped_like_input(tm._time.value)\n\n def _set_scale(self, scale):\n \"\"\"\n This is the key routine that actually does time scale conversions.\n This is not public and not connected to the read-only scale property.\n \"\"\"\n\n if scale == self.scale:\n return\n if scale not in self.SCALES:\n raise ValueError(\"Scale {!r} is not in the allowed scales {}\"\n .format(scale, sorted(self.SCALES)))\n\n # For TimeDelta, there can only be a change in scale factor,\n # which is written as time2 - time1 = scale_offset * time1\n scale_offset = SCALE_OFFSETS[(self.scale, scale)]\n if scale_offset is None:\n self._time.scale = scale\n else:\n jd1, jd2 = self._time.jd1, self._time.jd2\n offset1, offset2 = day_frac(jd1, jd2, factor=scale_offset)\n self._time = self.FORMATS[self.format](\n jd1 + offset1, jd2 + offset2, scale,\n self.precision, self.in_subfmt,\n self.out_subfmt, from_jd=True)\n\n def _add_sub(self, other, 
op):\n \"\"\"Perform common elements of addition / subtraction for two delta times\"\"\"\n # If not a TimeDelta then see if it can be turned into a TimeDelta.\n if not isinstance(other, TimeDelta):\n try:\n other = TimeDelta(other)\n except Exception:\n return NotImplemented\n\n # the scales should be compatible (e.g., cannot convert TDB to TAI)\n if(self.scale is not None and self.scale not in other.SCALES\n or other.scale is not None and other.scale not in self.SCALES):\n raise TypeError(\"Cannot add TimeDelta instances with scales \"\n \"'{}' and '{}'\".format(self.scale, other.scale))\n\n # adjust the scale of other if the scale of self is set (or no scales)\n if self.scale is not None or other.scale is None:\n out = self.replicate()\n if other.scale is not None:\n other = getattr(other, self.scale)\n else:\n out = other.replicate()\n\n jd1 = op(self._time.jd1, other._time.jd1)\n jd2 = op(self._time.jd2, other._time.jd2)\n\n out._time.jd1, out._time.jd2 = day_frac(jd1, jd2)\n\n return out\n\n def __add__(self, other):\n # If other is a Time then use Time.__add__ to do the calculation.\n if isinstance(other, Time):\n return other.__add__(self)\n\n return self._add_sub(other, operator.add)\n\n def __sub__(self, other):\n # TimeDelta - Time is an error\n if isinstance(other, Time):\n raise OperandTypeError(self, other, '-')\n\n return self._add_sub(other, operator.sub)\n\n def __radd__(self, other):\n return self.__add__(other)\n\n def __rsub__(self, other):\n out = self.__sub__(other)\n return -out\n\n def __neg__(self):\n \"\"\"Negation of a `TimeDelta` object.\"\"\"\n new = self.copy()\n new._time.jd1 = -self._time.jd1\n new._time.jd2 = -self._time.jd2\n return new\n\n def __abs__(self):\n \"\"\"Absolute value of a `TimeDelta` object.\"\"\"\n jd1, jd2 = self._time.jd1, self._time.jd2\n negative = jd1 + jd2 < 0\n new = self.copy()\n new._time.jd1 = np.where(negative, -jd1, jd1)\n new._time.jd2 = np.where(negative, -jd2, jd2)\n return new\n\n def __mul__(self, other):\n \"\"\"Multiplication of `TimeDelta` objects by numbers/arrays.\"\"\"\n # Check needed since otherwise the self.jd1 * other multiplication\n # would enter here again (via __rmul__)\n if isinstance(other, Time):\n raise OperandTypeError(self, other, '*')\n elif ((isinstance(other, u.UnitBase)\n and other == u.dimensionless_unscaled)\n or (isinstance(other, str) and other == '')):\n return self.copy()\n\n # If other is something consistent with a dimensionless quantity\n # (could just be a float or an array), then we can just multiple in.\n try:\n other = u.Quantity(other, u.dimensionless_unscaled, copy=False)\n except Exception:\n # If not consistent with a dimensionless quantity, try downgrading\n # self to a quantity and see if things work.\n try:\n return self.to(u.day) * other\n except Exception:\n # The various ways we could multiply all failed;\n # returning NotImplemented to give other a final chance.\n return NotImplemented\n\n jd1, jd2 = day_frac(self.jd1, self.jd2, factor=other.value)\n out = TimeDelta(jd1, jd2, format='jd', scale=self.scale)\n\n if self.format != 'jd':\n out = out.replicate(format=self.format)\n return out\n\n def __rmul__(self, other):\n \"\"\"Multiplication of numbers/arrays with `TimeDelta` objects.\"\"\"\n return self.__mul__(other)\n\n def __truediv__(self, other):\n \"\"\"Division of `TimeDelta` objects by numbers/arrays.\"\"\"\n # Cannot do __mul__(1./other) as that looses precision\n if ((isinstance(other, u.UnitBase)\n and other == u.dimensionless_unscaled)\n or (isinstance(other, str) 
and other == '')):\n return self.copy()\n\n # If other is something consistent with a dimensionless quantity\n # (could just be a float or an array), then we can just divide in.\n try:\n other = u.Quantity(other, u.dimensionless_unscaled, copy=False)\n except Exception:\n # If not consistent with a dimensionless quantity, try downgrading\n # self to a quantity and see if things work.\n try:\n return self.to(u.day) / other\n except Exception:\n # The various ways we could divide all failed;\n # returning NotImplemented to give other a final chance.\n return NotImplemented\n\n jd1, jd2 = day_frac(self.jd1, self.jd2, divisor=other.value)\n out = TimeDelta(jd1, jd2, format='jd', scale=self.scale)\n\n if self.format != 'jd':\n out = out.replicate(format=self.format)\n return out\n\n def __rtruediv__(self, other):\n \"\"\"Division by `TimeDelta` objects of numbers/arrays.\"\"\"\n # Here, we do not have to worry about returning NotImplemented,\n # since other has already had a chance to look at us.\n return other / self.to(u.day)\n\n def to(self, unit, equivalencies=[]):\n \"\"\"\n Convert to a quantity in the specified unit.\n\n Parameters\n ----------\n unit : unit-like\n The unit to convert to.\n equivalencies : list of tuple\n A list of equivalence pairs to try if the units are not directly\n convertible (see :ref:`astropy:unit_equivalencies`). If `None`, no\n equivalencies will be applied at all, not even any set globallyq\n or within a context.\n\n Returns\n -------\n quantity : `~astropy.units.Quantity`\n The quantity in the units specified.\n\n See also\n --------\n to_value : get the numerical value in a given unit.\n \"\"\"\n return u.Quantity(self._time.jd1 + self._time.jd2,\n u.day).to(unit, equivalencies=equivalencies)\n\n def to_value(self, *args, **kwargs):\n \"\"\"Get time delta values expressed in specified output format or unit.\n\n This method is flexible and handles both conversion to a specified\n ``TimeDelta`` format / sub-format AND conversion to a specified unit.\n If positional argument(s) are provided then the first one is checked\n to see if it is a valid ``TimeDelta`` format, and next it is checked\n to see if it is a valid unit or unit string.\n\n To convert to a ``TimeDelta`` format and optional sub-format the options\n are::\n\n tm = TimeDelta(1.0 * u.s)\n tm.to_value('jd') # equivalent of tm.jd\n tm.to_value('jd', 'decimal') # convert to 'jd' as a Decimal object\n tm.to_value('jd', subfmt='decimal')\n tm.to_value(format='jd', subfmt='decimal')\n\n To convert to a unit with optional equivalencies, the options are::\n\n tm.to_value('hr') # convert to u.hr (hours)\n tm.to_value('hr', []) # specify equivalencies as a positional arg\n tm.to_value('hr', equivalencies=[])\n tm.to_value(unit='hr', equivalencies=[])\n\n The built-in `~astropy.time.TimeDelta` options for ``format`` are:\n {'jd', 'sec', 'datetime'}.\n\n For the two numerical formats 'jd' and 'sec', the available ``subfmt``\n options are: {'float', 'long', 'decimal', 'str', 'bytes'}. Here, 'long'\n uses ``numpy.longdouble`` for somewhat enhanced precision (with the\n enhancement depending on platform), and 'decimal' instances of\n :class:`decimal.Decimal` for full precision. For the 'str' and 'bytes'\n sub-formats, the number of digits is also chosen such that time values\n are represented accurately. 
Default: as set by ``out_subfmt`` (which by\n default picks the first available for a given format, i.e., 'float').\n\n Parameters\n ----------\n format : str, optional\n The format in which one wants the `~astropy.time.TimeDelta` values.\n Default: the current format.\n subfmt : str, optional\n Possible sub-format in which the values should be given. Default: as\n set by ``out_subfmt`` (which by default picks the first available\n for a given format, i.e., 'float' or 'date_hms').\n unit : `~astropy.units.UnitBase` instance or str, optional\n The unit in which the value should be given.\n equivalencies : list of tuple\n A list of equivalence pairs to try if the units are not directly\n convertible (see :ref:`astropy:unit_equivalencies`). If `None`, no\n equivalencies will be applied at all, not even any set globally or\n within a context.\n\n Returns\n -------\n value : ndarray or scalar\n The value in the format or units specified.\n\n See also\n --------\n to : Convert to a `~astropy.units.Quantity` instance in a given unit.\n value : The time value in the current format.\n\n \"\"\"\n if not (args or kwargs):\n raise TypeError('to_value() missing required format or unit argument')\n\n # TODO: maybe allow 'subfmt' also for units, keeping full precision\n # (effectively, by doing the reverse of quantity_day_frac)?\n # This way, only equivalencies could lead to possible precision loss.\n if ('format' in kwargs\n or (args != () and (args[0] is None or args[0] in self.FORMATS))):\n # Super-class will error with duplicate arguments, etc.\n return super().to_value(*args, **kwargs)\n\n # With positional arguments, we try parsing the first one as a unit,\n # so that on failure we can give a more informative exception.\n if args:\n try:\n unit = u.Unit(args[0])\n except ValueError as exc:\n raise ValueError(\"first argument is not one of the known \"\n \"formats ({}) and failed to parse as a unit.\"\n .format(list(self.FORMATS))) from exc\n args = (unit,) + args[1:]\n\n return u.Quantity(self._time.jd1 + self._time.jd2,\n u.day).to_value(*args, **kwargs)\n\n def _make_value_equivalent(self, item, value):\n \"\"\"Coerce setitem value into an equivalent TimeDelta object\"\"\"\n if not isinstance(value, TimeDelta):\n try:\n value = self.__class__(value, scale=self.scale, format=self.format)\n except Exception as err:\n raise ValueError('cannot convert value to a compatible TimeDelta '\n 'object: {}'.format(err))\n return value\n\n def isclose(self, other, atol=None, rtol=0.0):\n \"\"\"Returns a boolean or boolean array where two TimeDelta objects are\n element-wise equal within a time tolerance.\n\n This effectively evaluates the expression below::\n\n abs(self - other) <= atol + rtol * abs(other)\n\n Parameters\n ----------\n other : `~astropy.units.Quantity` or `~astropy.time.TimeDelta`\n Quantity or TimeDelta object for comparison.\n atol : `~astropy.units.Quantity` or `~astropy.time.TimeDelta`\n Absolute tolerance for equality with units of time (e.g. ``u.s`` or\n ``u.day``). 
Default is one bit in the 128-bit JD time representation,\n equivalent to about 20 picosecs.\n rtol : float\n Relative tolerance for equality\n \"\"\"\n try:\n other_day = other.to_value(u.day)\n except Exception as err:\n raise TypeError(f\"'other' argument must support conversion to days: {err}\")\n\n if atol is None:\n atol = np.finfo(float).eps * u.day\n\n if not isinstance(atol, (u.Quantity, TimeDelta)):\n raise TypeError(\"'atol' argument must be a Quantity or TimeDelta instance, got \"\n f'{atol.__class__.__name__} instead')\n\n return np.isclose(self.to_value(u.day), other_day,\n rtol=rtol, atol=atol.to_value(u.day))\n\n\nclass ScaleValueError(Exception):\n pass\n\n\ndef _make_array(val, copy=False):\n \"\"\"\n Take ``val`` and convert/reshape to an array. If ``copy`` is `True`\n then copy input values.\n\n Returns\n -------\n val : ndarray\n Array version of ``val``.\n \"\"\"\n if isinstance(val, (tuple, list)) and len(val) > 0 and isinstance(val[0], Time):\n dtype = object\n else:\n dtype = None\n\n val = np.array(val, copy=copy, subok=True, dtype=dtype)\n\n # Allow only float64, string or object arrays as input\n # (object is for datetime, maybe add more specific test later?)\n # This also ensures the right byteorder for float64 (closes #2942).\n if val.dtype.kind == \"f\" and val.dtype.itemsize >= np.dtype(np.float64).itemsize:\n pass\n elif val.dtype.kind in 'OSUMaV':\n pass\n else:\n val = np.asanyarray(val, dtype=np.float64)\n\n return val\n\n\ndef _check_for_masked_and_fill(val, val2):\n \"\"\"\n If ``val`` or ``val2`` are masked arrays then fill them and cast\n to ndarray.\n\n Returns a mask corresponding to the logical-or of masked elements\n in ``val`` and ``val2``. If neither is masked then the return ``mask``\n is ``None``.\n\n If either ``val`` or ``val2`` are masked then they are replaced\n with filled versions of themselves.\n\n Parameters\n ----------\n val : ndarray or MaskedArray\n Input val\n val2 : ndarray or MaskedArray\n Input val2\n\n Returns\n -------\n mask, val, val2: ndarray or None\n Mask: (None or bool ndarray), val, val2: ndarray\n \"\"\"\n def get_as_filled_ndarray(mask, val):\n \"\"\"\n Fill the given MaskedArray ``val`` from the first non-masked\n element in the array. This ensures that upstream Time initialization\n will succeed.\n\n Note that nothing happens if there are no masked elements.\n \"\"\"\n fill_value = None\n\n if np.any(val.mask):\n # Final mask is the logical-or of inputs\n mask = mask | val.mask\n\n # First unmasked element. If all elements are masked then\n # use fill_value=None from above which will use val.fill_value.\n # As long as the user has set this appropriately then all will\n # be fine.\n val_unmasked = val.compressed() # 1-d ndarray of unmasked values\n if len(val_unmasked) > 0:\n fill_value = val_unmasked[0]\n\n # Fill the input ``val``. 
If fill_value is None then this just returns\n # an ndarray view of val (no copy).\n val = val.filled(fill_value)\n\n return mask, val\n\n mask = False\n if isinstance(val, np.ma.MaskedArray):\n mask, val = get_as_filled_ndarray(mask, val)\n if isinstance(val2, np.ma.MaskedArray):\n mask, val2 = get_as_filled_ndarray(mask, val2)\n\n return mask, val, val2\n\n\nclass OperandTypeError(TypeError):\n def __init__(self, left, right, op=None):\n op_string = '' if op is None else f' for {op}'\n super().__init__(\n \"Unsupported operand type(s){}: \"\n \"'{}' and '{}'\".format(op_string,\n left.__class__.__name__,\n right.__class__.__name__))\n\n\ndef _check_leapsec():\n global _LEAP_SECONDS_CHECK\n if _LEAP_SECONDS_CHECK != _LeapSecondsCheck.DONE:\n with _LEAP_SECONDS_LOCK:\n # There are three ways we can get here:\n # 1. First call (NOT_STARTED).\n # 2. Re-entrant call (RUNNING). We skip the initialisation\n # and don't worry about leap second errors.\n # 3. Another thread which raced with the first call\n # (RUNNING). The first thread has relinquished the\n # lock to us, so initialization is complete.\n if _LEAP_SECONDS_CHECK == _LeapSecondsCheck.NOT_STARTED:\n _LEAP_SECONDS_CHECK = _LeapSecondsCheck.RUNNING\n update_leap_seconds()\n _LEAP_SECONDS_CHECK = _LeapSecondsCheck.DONE\n\n\ndef update_leap_seconds(files=None):\n \"\"\"If the current ERFA leap second table is out of date, try to update it.\n\n Uses `astropy.utils.iers.LeapSeconds.auto_open` to try to find an\n up-to-date table. See that routine for the definition of \"out of date\".\n\n In order to make it safe to call this any time, all exceptions are turned\n into warnings,\n\n Parameters\n ----------\n files : list of path-like, optional\n List of files/URLs to attempt to open. By default, uses defined by\n `astropy.utils.iers.LeapSeconds.auto_open`, which includes the table\n used by ERFA itself, so if that is up to date, nothing will happen.\n\n Returns\n -------\n n_update : int\n Number of items updated.\n\n \"\"\"\n try:\n from astropy.utils import iers\n\n table = iers.LeapSeconds.auto_open(files)\n return erfa.leap_seconds.update(table)\n\n except Exception as exc:\n warn(\"leap-second auto-update failed due to the following \"\n f\"exception: {exc!r}\", AstropyWarning)\n return 0\n", "docs/changes/time/13508.feature.rst": null}
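Note on the quoted source: the astropy/time/core.py excerpt above centres on split-precision arithmetic, where each instant is carried as two doubles (jd1, jd2) combined via day_frac(), and on the Time/TimeDelta __add__/__sub__ rules. As a minimal, illustrative sketch of those public APIs (not part of this record's patch; it simply exercises standard astropy behaviour):

    from astropy.time import Time, TimeDelta
    import astropy.units as u

    # Time - Time yields a TimeDelta (computed via TAI for UTC inputs)
    t = Time(['2020-01-01', '2020-07-01'], scale='utc')
    span = t[1] - t[0]
    print(span.to(u.day))            # about 182 days

    # Time + TimeDelta yields a Time in the left-hand operand's scale
    later = t + TimeDelta(1.0 * u.day)
    print(later.iso)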
|
diff --git a/docs/changes/time/13508.feature.rst b/docs/changes/time/13508.feature.rst
new file mode 100644
index 000000000000..d6dffaa3b3db
--- /dev/null
+++ b/docs/changes/time/13508.feature.rst
@@ -0,0 +1,1 @@
+Added the ``astropy.time.Time.mean()`` method which also enables the ``numpy.mean()`` function to be used on instances of ``astropy.time.Time``.
|
{"astropy/time/core.py": [{"type": "function", "name": "TimeBase.mean", "lines": [1359, 1433], "signature": "def mean(self, axis=None, dtype=None, out=None, keepdims=False, *, where=True):", "doc": "Mean along a given axis.\n\nThis is similar to :meth:`~numpy.ndarray.mean`, but adapted to ensure\nthat the full precision given by the two doubles ``jd1`` and ``jd2`` is\nused, and that corresponding attributes are copied.\n\nNote that the ``out`` argument is present only for compatibility with\n``np.mean``; since `Time` instances are immutable, it is not possible\nto have an actual ``out`` to store the result in.\n\nSimilarly, the ``dtype`` argument is also present for compatibility\nonly; it has no meaning for `Time`.\n\nParameters\n----------\naxis : None or int or tuple of ints, optional\n Axis or axes along which the means are computed. The default is to\n compute the mean of the flattened array.\ndtype : None\n Only present for compatibility with :meth:`~numpy.ndarray.mean`,\n must be `None`.\nout : None\n Only present for compatibility with :meth:`~numpy.ndarray.mean`,\n must be `None`.\nkeepdims : bool, optional\n If this is set to True, the axes which are reduced are left\n in the result as dimensions with size one. With this option,\n the result will broadcast correctly against the input array.\nwhere : array_like of bool, optional\n Elements to include in the mean. See `~numpy.ufunc.reduce` for\n details.\n\nReturns\n-------\nm : Time\n A new Time instance containing the mean values"}, {"type": "function", "name": "Time.mean", "lines": [2354, 2390], "signature": "def mean(self, axis=None, dtype=None, out=None, keepdims=False, *, where=True):", "doc": ""}]}
|
5.0
|
["astropy/table/tests/test_info.py::test_table_info_stats[unmasked]", "astropy/table/tests/test_info.py::test_table_info_stats[masked]", "astropy/table/tests/test_info.py::test_table_info_stats[subclass]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-True-None-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-True-None-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-True-0-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-True-0-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-True-1-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-True-1-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-True-2-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-True-2-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-True-axis4-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-True-axis4-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-where1-None-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-where1-None-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-where1-0-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-where1-0-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-where1-1-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-where1-1-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-where1-2-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-where1-2-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-where1-axis4-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[False-where1-axis4-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-True-None-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-True-None-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-True-0-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-True-0-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-True-1-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-True-1-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-True-2-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-True-2-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-True-axis4-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-True-axis4-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-where1-None-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-where1-None-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-where1-0-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-where1-0-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-where1-1-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-where1-1-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-where1-2-masked]", 
"astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-where1-2-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-where1-axis4-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean[True-where1-axis4-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean_precision[masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean_precision[not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean_dtype[masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean_dtype[not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean_out[masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean_out[not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean_leap_second[not_masked]"]
|
["astropy/table/tests/test_info.py::test_table_info_attributes[unmasked]", "astropy/table/tests/test_info.py::test_table_info_attributes[masked]", "astropy/table/tests/test_info.py::test_table_info_attributes[subclass]", "astropy/table/tests/test_info.py::test_data_info", "astropy/table/tests/test_info.py::test_data_info_subclass", "astropy/table/tests/test_info.py::test_scalar_info", "astropy/table/tests/test_info.py::test_empty_table", "astropy/table/tests/test_info.py::test_class_attribute", "astropy/table/tests/test_info.py::test_ignore_warnings", "astropy/table/tests/test_info.py::test_no_deprecation_warning", "astropy/table/tests/test_info.py::test_lost_parent_error", "astropy/table/tests/test_info.py::test_info_serialize_method", "astropy/table/tests/test_info.py::test_info_serialize_method_exception", "astropy/time/tests/test_delta.py::test_timedelta_setitem", "astropy/time/tests/test_delta.py::test_timedelta_setitem_sec", "astropy/time/tests/test_delta.py::test_timedelta_mask", "astropy/time/tests/test_delta.py::test_python_timedelta_scalar", "astropy/time/tests/test_delta.py::test_python_timedelta_vector", "astropy/time/tests/test_delta.py::test_timedelta_to_datetime", "astropy/time/tests/test_delta.py::test_insert_timedelta", "astropy/time/tests/test_delta.py::test_no_units_warning", "astropy/time/tests/test_methods.py::TestManipulation::test_ravel[masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_ravel[not_masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_flatten[masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_flatten[not_masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_transpose[masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_transpose[not_masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_diagonal[masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_diagonal[not_masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_swapaxes[masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_swapaxes[not_masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_reshape[masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_reshape[not_masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_squeeze[masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_squeeze[not_masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_add_dimension[masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_add_dimension[not_masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_take[masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_take[not_masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_broadcast_via_apply[masked]", "astropy/time/tests/test_methods.py::TestManipulation::test_broadcast_via_apply[not_masked]", "astropy/time/tests/test_methods.py::TestSetShape::test_shape_setting[masked]", "astropy/time/tests/test_methods.py::TestSetShape::test_shape_setting[not_masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_broadcast[masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_broadcast[not_masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_atleast_1d[masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_atleast_1d[not_masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_atleast_2d[masked]", 
"astropy/time/tests/test_methods.py::TestShapeFunctions::test_atleast_2d[not_masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_atleast_3d[masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_atleast_3d[not_masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_move_axis[masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_move_axis[not_masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_roll_axis[masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_roll_axis[not_masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_fliplr[masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_fliplr[not_masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_rot90[masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_rot90[not_masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_roll[masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_roll[not_masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_delete[masked]", "astropy/time/tests/test_methods.py::TestShapeFunctions::test_delete[not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw0-min-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw0-min-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw1-max-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw1-max-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw2-sort-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw2-sort-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw3-min-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw3-min-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw4-max-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw4-max-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw5-sort-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw5-sort-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw6-min-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw6-min-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw7-max-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw7-max-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw8-sort-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw8-sort-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw9-min-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw9-min-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw10-max-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw10-max-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw11-sort-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw11-sort-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw12-min-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw12-min-not_masked]", 
"astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw13-max-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw13-max-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw14-sort-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argfuncs[kw14-sort-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw0-min-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw0-min-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw1-max-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw1-max-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw2-sort-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw2-sort-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw3-min-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw3-min-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw4-max-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw4-max-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw5-sort-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw5-sort-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw6-min-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw6-min-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw7-max-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw7-max-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw8-sort-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw8-sort-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw9-min-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw9-min-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw10-max-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw10-max-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw11-sort-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw11-sort-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw12-min-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw12-min-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw13-max-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw13-max-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw14-sort-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_funcs[kw14-sort-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argmin[masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argmin[not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argmax[masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argmax[not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argsort[masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argsort[not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argsort_warning[tai-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argsort_warning[tai-not_masked]", 
"astropy/time/tests/test_methods.py::TestArithmetic::test_argsort_warning[tcb-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argsort_warning[tcb-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argsort_warning[tcg-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argsort_warning[tcg-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argsort_warning[tdb-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argsort_warning[tdb-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argsort_warning[tt-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argsort_warning[tt-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argsort_warning[ut1-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argsort_warning[ut1-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argsort_warning[local-masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_argsort_warning[local-not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_min[masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_min[not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_max[masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_max[not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_ptp[masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_ptp[not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_sort[masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_sort[not_masked]", "astropy/time/tests/test_methods.py::TestArithmetic::test_mean_leap_second[masked]", "astropy/time/tests/test_methods.py::test_regression"]
|
7cbba866a8c5749b90a5cb4f9877ddfad2d36037
|
{"first_commit_time": 1658944508.0, "pr_title": "Enabled `np.mean()` function for instances of `astropy.time.Time`.", "pr_body": "<!-- This comments are hidden when you submit the pull request,\r\nso you do not need to remove them! -->\r\n\r\n<!-- Please be sure to check out our contributing guidelines,\r\nhttps://github.com/astropy/astropy/blob/main/CONTRIBUTING.md .\r\nPlease be sure to check out our code of conduct,\r\nhttps://github.com/astropy/astropy/blob/main/CODE_OF_CONDUCT.md . -->\r\n\r\n<!-- If you are new or need to be re-acquainted with Astropy\r\ncontributing workflow, please see\r\nhttp://docs.astropy.org/en/latest/development/workflow/development_workflow.html .\r\nThere is even a practical example at\r\nhttps://docs.astropy.org/en/latest/development/workflow/git_edit_workflow_examples.html#astropy-fix-example . -->\r\n\r\n<!-- Astropy coding style guidelines can be found here:\r\nhttps://docs.astropy.org/en/latest/development/codeguide.html#coding-style-conventions\r\nOur testing infrastructure enforces to follow a subset of the PEP8 to be\r\nfollowed. You can check locally whether your changes have followed these by\r\nrunning the following command:\r\n\r\ntox -e codestyle\r\n\r\n-->\r\n\r\n<!-- Please just have a quick search on GitHub to see if a similar\r\npull request has already been posted.\r\nWe have old closed pull requests that might provide useful code or ideas\r\nthat directly tie in with your pull request. -->\r\n\r\n<!-- We have several automatic features that run when a pull request is open.\r\nThey can appear daunting but do not worry because maintainers will help\r\nyou navigate them, if necessary. -->\r\n\r\n### Description\r\n<!-- Provide a general description of what your pull request does.\r\nComplete the following sentence and add relevant details as you see fit. -->\r\n\r\n<!-- In addition please ensure that the pull request title is descriptive\r\nand allows maintainers to infer the applicable subpackage(s). -->\r\n\r\n<!-- READ THIS FOR MANUAL BACKPORT FROM A MAINTAINER:\r\nApply \"skip-basebranch-check\" label **before** you open the PR! -->\r\n\r\nThis pull request is to address enabling the `numpy.mean()` array function to instances of `astropy.time.Time`\r\n\r\n<!-- If the pull request closes any open issues you can add this.\r\nIf you replace <Issue Number> with a number, GitHub will automatically link it.\r\nIf this pull request is unrelated to any issues, please remove\r\nthe following line. -->\r\n\r\nFixes #13507 \r\n\r\n### Checklist for package maintainer(s)\r\n<!-- This section is to be filled by package maintainer(s) who will\r\nreview this pull request. -->\r\n\r\nThis checklist is meant to remind the package maintainer(s) who will review this pull request of some common things to look for. This list is not exhaustive.\r\n\r\n- [x] Do the proposed changes actually accomplish desired goals?\r\n- [x] Do the proposed changes follow the [Astropy coding guidelines](https://docs.astropy.org/en/latest/development/codeguide.html)?\r\n- [x] Are tests added/updated as required? If so, do they follow the [Astropy testing guidelines](https://docs.astropy.org/en/latest/development/testguide.html)?\r\n- [x] Are docs added/updated as required? If so, do they follow the [Astropy documentation guidelines](https://docs.astropy.org/en/latest/development/docguide.html#astropy-documentation-rules-and-guidelines)?\r\n- [x] Is rebase and/or squash necessary? If so, please provide the author with appropriate instructions. 
Also see [\"When to rebase and squash commits\"](https://docs.astropy.org/en/latest/development/when_to_rebase.html).\r\n- [x] Did the CI pass? If no, are the failures related? If you need to run daily and weekly cron jobs as part of the PR, please apply the `Extra CI` label.\r\n- [x] Is a change log needed? If yes, did the change log check pass? If no, add the `no-changelog-entry-needed` label. If this is a manual backport, use the `skip-changelog-checks` label unless special changelog handling is necessary.\r\n- [x] Is this a big PR that makes a \"What's new?\" entry worthwhile and if so, is (1) a \"what's new\" entry included in this PR and (2) the \"whatsnew-needed\" label applied?\r\n- [x] Is a milestone set? Milestone must be set but `astropy-bot` check might be missing; do not let the green checkmark fool you.\r\n- [x] At the time of adding the milestone, if the milestone set requires a backport to release branch(es), apply the appropriate `backport-X.Y.x` label(s) *before* merge.\r\n", "pr_timeline": [{"time": 1663023628.0, "comment": "Hello @byrdie :wave:! It looks like you've made some changes in your pull request, so I've checked the code again for style.\n\n\n\n\n\n\n\n\nThere are no PEP8 style issues with this pull request - thanks! :tada:\n\n##### Comment last updated at 2022-09-12 23:00:27 UTC"}, {"time": 1658955904.0, "comment": "@byrdie - one has to take real care with rounding errors for `Time`. The suggestion of doing `jd1` and `jd2` separately is good, but the mean of the former will have rounding errors, which need to be taken into account. I would suggest to start with some tests where you know the answer (e.g., a few times years apart with also some time differences of a few ns, checking that the mean is what is expected).\r\n\r\nBoth: would it make sense to have a `.mean()` method? I think it would. Indeed, for `TimeDelta` (but not `Time`), one could also have `.sum()`. Note that if one defines such methods, they will automatically be used by `np.mean` and `np.sum`."}, {"time": 1658960986.0, "comment": "@mhvk, thanks for your suggestions, I've gone ahead and moved the logic to `astropy.time.Time.mean()` so that both the mean method and function work as expected.\r\n\r\nI will add your test case as soon as I figure out why the current tests are failing for masked arrays."}, {"time": 1658963909.0, "comment": "@mhvk, I've added a test, `astropy.time.tests.test_methods.test_mean_precision()`, that tries to address your suggestions above."}, {"time": 1658995032.0, "comment": "@byrdie @mhvk It might be worth it to implement separate methods for mean and mean keeping track of rounding errors for maximum precision, as these will have vastly different performance.\r\n\r\nFor the second option, see e.g.: https://docs.python.org/3/library/math.html#math.fsum and the resource linked there"}, {"time": 1659008146.0, "comment": "@maxnoe - Thanks for that link! For addition and subtraction, what we have is most similar to the first entry in the link (`msum`). Need to think a bit about whether this would be easy to generalize to a numpy arrays, possibly with an `axis`, etc.\r\n\r\nAlso agree that in the end we may need some way to get lower precision means, though perhaps as follow-up, since so far we have ensured that `Time` is always about as precise as possible. "}, {"time": 1659012386.0, "comment": "I wonder if we can consider an incremental approach which follows the \"perfect is the enemy of good\" mantra. 
The current code here is good and provides a very useful new functionality. \r\n\r\nI would argue that all we need now is *documentation* of the potential loss of precision given the current implementation and possible workarounds. I think that in practice there are very few science cases where you need sub-nanosec precision on the mean of a large number of times. The only case I can imagine is where all of the times are already extremely close, so all the `jd1` are the same. Here the documentation might give some hints like referring to the `math.fsum` docs and the link therein to get an accurate mean of the `jd2`."}, {"time": 1659029286.0, "comment": "@maxnoe, thanks for that documentation, I was about to go dig out my copy of Numerical Recipes to remind myself how to do this. Since numpy does not have a fsum implementation it seems like we either need an external C module, numba compilation, or some other accelerator library to make this work right? \r\n\r\nI do have some experience with numba, and I do think that it would be pretty easy to get a numba implementation of fsum working, but I don't know yet how to make it work with `numpy.ma.MaskedArray` or other `ndarray` duck types.\r\n\r\n@taldcroft, I think documentation describing this subtlety is useful regardless, I'll start working on that."}, {"time": 1659029817.0, "comment": "@mhvk and @taldcroft, the [tests are failing](https://github.com/astropy/astropy/runs/7550755821?check_suite_focus=true#step:6:1633) because `MaskedArray.mean()` doesn't support `where` (which sort of seems like a numpy bug to me, but I'm curious what you think).\r\n\r\nWhat do you think the best workaround is in this situation? I'm considering just throwing an error if `a` is a `MaskedArray` and `where` is not `None`, but that's not very graceful at all."}, {"time": 1659032678.0, "comment": "@byrdie - I have a PR in the works which will let `Time` use our own `Masked` internally, which will remove this limitation. But for now, I think your suggestion of just raising an error if someone uses `where` for masked times is the best solution. Here, it would make most sense to get an initial implementation without necessarily being feature complete.\r\n\r\nOn the implementation, I'm abroad and taking care of my father so haven't looked at what you have right now, but `Time` is nice in that `jd1` is guaranteed to be an integer. So, the sum of `jd1` is exact. The sum of `jd2` will not be exact but since that is the fraction, rounding errors are OK. So I think something like `utils.day_frac(jd1.sum(), jd2.sum(), divisor=n)` should give you a properly rounded result (in that it does take into account possible rounding errors in dividing `jd1.sum()` by `n`)."}, {"time": 1659044560.0, "comment": "@mhvk, thanks for that implementation, that looks much better. I've updated the PR with those changes."}, {"time": 1659378223.0, "comment": "Does anyone know why `astropy.tables.tests.test_info.test_table_info_stats()` [fails during wheel building](https://github.com/astropy/astropy/runs/7616387347?check_suite_focus=true#step:4:3391) but [passes the normal CI](https://github.com/astropy/astropy/runs/7616392156?check_suite_focus=true#step:6:1125)?"}, {"time": 1659378496.0, "comment": "p.s. I guess that for masked data to work, you should take the unmasked data and apply your new `where` that includes the mask in it. Given the constraint on `divisor != 0`, it just means that the output is always unmasked (for now!)."}, {"time": 1659378531.0, "comment": "p.p.s. 
Absolutely fine to deal with the masked case plus where later - you have been at this for quite a bit, and it is already a super-nice addition!!"}, {"time": 1659378672.0, "comment": "I was trying to figure out how to make an unmasked `Time` from a masked one. Is there an easy way to do that?"}, {"time": 1659380201.0, "comment": "I think all you need is to put in elements that are not masked!"}, {"time": 1659380484.0, "comment": "Ok, just pushed a commit that tries to address combining `where` and `MaskedArray`. Made everything a little simpler which is nice :)"}, {"time": 1659381805.0, "comment": "I think you now have some \"failures\" -- because masked times now work. I am also a bit confused why that is only picked up in the wheels, though..."}, {"time": 1659381909.0, "comment": "I was actually getting that problem before the latest commits, see my comment earlier today. I'm really confused about it only showing up on the wheels too!"}, {"time": 1659466466.0, "comment": "Does the new IERS table update fix the failure? Can you please rebase? Thanks!"}, {"time": 1659469358.0, "comment": "@pllim thanks for your review. I just rebased, but the problem persists the same as before."}, {"time": 1659469965.0, "comment": "The failure definite looks related, as it affects a table with Time column, but why only in wheel jobs is a mystery. What is the wheel job doing that the CI isn't?\r\n\r\nhttps://github.com/astropy/astropy/blob/c33e53f68388d4b4d68e85956bcfe4500a94c642/.github/workflows/publish.yml#L21-L23\r\n\r\nAlso, you have to fix the RTD warnings:\r\n\r\n```\r\nastropy/time/core.py:docstring of astropy.time.core.Time.mean:: WARNING: py:class reference target not found: npt.DTypeLike\r\nastropy/time/core.py:docstring of astropy.time.core.Time.mean:: WARNING: py:class reference target not found: npt.NDArray\r\nastropy/time/core.py:docstring of astropy.time.core.Time.mean:: WARNING: py:class reference target not found: Self\r\nastropy/time/core.py:docstring of astropy.time.core.TimeBase.mean:: WARNING: py:class reference target not found: npt.DTypeLike\r\nastropy/time/core.py:docstring of astropy.time.core.TimeBase.mean:: WARNING: py:class reference target not found: npt.NDArray\r\nastropy/time/core.py:docstring of astropy.time.core.TimeBase.mean:: WARNING: py:class reference target not found: Self\r\nastropy/time/core.py:docstring of astropy.time.core.TimeBase.mean:: WARNING: py:class reference target not found: npt.DTypeLike\r\nastropy/time/core.py:docstring of astropy.time.core.TimeBase.mean:: WARNING: py:class reference target not found: npt.NDArray\r\nastropy/time/core.py:docstring of astropy.time.core.TimeBase.mean:: WARNING: py:class reference target not found: Self\r\n```"}, {"time": 1659471020.0, "comment": "@pllim thanks for pointing out those RTD warnings. They are related to the typing information. I might have to leave those off for now and open a new PR (or add on to PR #12971) to fix that issue, unless somebody knows a clever way to fix this."}, {"time": 1659471581.0, "comment": "In my project, I think I use the [sphinx-autodoc-typehints](https://pypi.org/project/sphinx-autodoc-typehints/) extension, but that's probably not appropriate here since we don't use autodoc, so I'm not sure what to do yet."}, {"time": 1659472389.0, "comment": "Re: RTD warnings -- Let's write the docstring out in the \"classic\" way then?"}, {"time": 1659626635.0, "comment": "@nstarman, do you know how to solve these Sphinx warnings? 
If not I might have to get rid of the typing annotations to get this to pass.\r\n\r\n```\r\nastropy/time/core.py:docstring of astropy.time.core.Time.mean:: WARNING: py:class reference target not found: npt.DTypeLike\r\nastropy/time/core.py:docstring of astropy.time.core.Time.mean:: WARNING: py:class reference target not found: npt.NDArray\r\nastropy/time/core.py:docstring of astropy.time.core.Time.mean:: WARNING: py:class reference target not found: Self\r\n```\r\n\r\nI tried moving the `from astropy.utils.compat.numpycompat import npt` outside the `if TYPE_CHECKING` clause, but it had no effect. \r\n\r\nI also thought writing a docstring with those parameters documented would overwrite the typing annotations present in the docstring, but it also didn't change the outcome."}, {"time": 1661625154.0, "comment": "> @nstarman, do you know how to solve these Sphinx warnings? If not I might have to get rid of the typing annotations to get this to pass.\r\n\r\nSorry, I missed this notification. \r\nI think the issue is that the docs are built with python 3.8 with an old version of numpy. The ``Self`` problem is more confusing as ``typing_extensions`` should backport ``Self`` correctly. I think if we build the docs with python 3.10 this might be fixed? Maybe in ``setup.cfg::[options.extras_require]::docs`` add ``numpy>=3.10``. \r\n"}, {"time": 1663002332.0, "comment": "@nstarman, I tried several iterations of your suggestion, but I can't seem to make any headway. I've tried changing the Python version in both the `setup.cfg` file and the `.readthedocs.yml` file, but it always leads to errors that I haven't been able to understand yet. Do you think I should just remove the typing annotations to get this PR through? Seems like we might need a separate PR to solve these issues."}, {"time": 1663005111.0, "comment": "There has been some recent work on the readthedocs environment, since there are other problems as well (see #13587), but without success, so I think indeed it is best just not to do the typing stuff here. Easier to find the real problems in a more targeted PR."}, {"time": 1663080783.0, "comment": "Agreed. I didn't realize how poorly typing is handled in RTD, esp because it's interpreted as a string by static type checkers.\r\n"}, {"time": 1663089420.0, "comment": "Does anyone know why the RTD build is failing now? I removed all the typing information, but now the build is failing, and I can't understand what is causing the error.\r\n\r\nAlso the wheel building is still failing as well, does anyone have any ideas on that front?"}, {"time": 1663090896.0, "comment": "wheel -- Have you rebased against upstream/main lately?\r\n\r\nRTD -- see warnings\r\n\r\n```\r\nastropy/time/core.py:docstring of astropy.time.core.Time.mean:38: WARNING: py:func reference target not found: numpy.ufunc.reduce\r\nastropy/time/core.py:docstring of astropy.time.core.TimeBase.mean:38: WARNING: py:func reference target not found: numpy.ufunc.reduce\r\nastropy/time/core.py:docstring of astropy.time.core.TimeBase.mean:38: WARNING: py:func reference target not found: numpy.ufunc.reduce\r\n```"}, {"time": 1663090945.0, "comment": "If rebase doesn't fix that CI error, it is probably your PR as I am not seeing that one on main, so try to see if you can reproduce the failure locally with your branch."}, {"time": 1663284123.0, "comment": "@pllim, thanks for leading me to the RTD error, I was blind but now I see!\r\n\r\nIt's weird that intersphinx can't seem to find `numpy.ufunc.reduce` but can find `numpy.ndarray.mean`. 
Does anyone know the trick to getting `numpy.ufunc.reduce` to resolve properly?"}, {"time": 1663362960.0, "comment": "Hmm the wheel testing failure seems related but I wonder why other CI jobs are green..."}, {"time": 1663363114.0, "comment": "Yeah, I agree it's definitely related. I'm going to try and naively fix it to see if the other CI fails."}, {"time": 1663363318.0, "comment": "I just rebased onto astropy/main per your earlier suggestion to make sure, but I'm reasonably certain it won't fix it since it hasn't worked in the past."}, {"time": 1664489906.0, "comment": "@byrdie - looking at the tests, it seems that no means are taken for `Time` inside a table, which means the `.info()` stuff is not working right. Is this indeed the case? Have you tried executing these tests by hand to see what is actually happening? "}, {"time": 1664490231.0, "comment": "@mhvk, I think that is the case. I've been working through the info() method, and I think the problem is at line 152 of `astropy/utils/data_info.py`. Taking a mean of that particular `Time` instance results in a `erfa.ErfaWarning` with the message: ERFA function \"utctai\" yielded 4 of \"dubious year (Note 3)\".\r\n\r\nYou can see in the current changes that I'm trying to filter that warning, but I still can't quite get it to work."}, {"time": 1664490358.0, "comment": "I am quite OK with just changing the test. In this case, easiest may be to add `scale='tai'` to the `Time` initialization, so that the code doesn't try the UTC->TAI->UTC conversion (which is causing the ERFA warning)."}, {"time": 1664490594.0, "comment": "Oh good idea, I hadn't considered that! Thanks so much for your help, I was really struggling with this."}, {"time": 1664626135.0, "comment": "@taldcroft - I think this is all OK. If you want a last look, let me know."}, {"time": 1664903404.0, "comment": "I'd recommend ensuring that all the edited lines are `black`-compatible, and that includes using `\"` instead of `'`. If the number of lines in `astropy` that do not comply with `black` is smaller then the diffs `black` will create will also be smaller."}, {"time": 1664903687.0, "comment": "@eerovaher, thanks for your comment, do you recommend installing black locally to check this, or is there somewhere in the CI that I can see the lines that black is complaining about?"}, {"time": 1664904347.0, "comment": "When `black` is applied to `astropy` then it would be a good idea to also use it locally. If you use the `pre-commit` settings from `astropy` then it should not require any additional manual setup. However, running `black` before it is applied to `astropy` is not a good idea because `black` will enforce its style in the entire file. If you want to ensure that only the lines that are being edited anyways comply with `black` then you might find [`darker`](https://pypi.org/project/darker/) to be useful."}, {"time": 1664910312.0, "comment": "Can I push back on this a little? The point of black supposedly is that this kind of stuff gets done automatically; I don't think we should now start to push everyone to do more work in preparation for this. Why should everybody know what black preferences are?"}, {"time": 1664910345.0, "comment": "In fact, I suggest we just merge this -- this really is a very nice PR and it is done! Thanks again, @byrdie!"}, {"time": 1664910640.0, "comment": "Just merging might be best, I'm pretty busy this week so I might not get around to making everything black-compatible until the weekend. 
I do think making everything black-compatible in the future is a good idea though!"}, {"time": 1664912959.0, "comment": "I would like new pull requests to follow `black` style, but I agree that it should not block merging."}, {"time": 1664913321.0, "comment": "OK, let me get this in.\r\n\r\nFWIW, I do not agree that we should force any contribution to be black compliant - the whole purpose is that this gets run *automatically*, so that reviewers and contributors do not have to worry about it. If to help that we set up a pre-commit that does that, fine, but otherwise let's use our time better!"}, {"time": 1664913405.0, "comment": "Hmm, that's weird: readthedocs states it is pending but actually seems to have finished. @pllim - any suggestions?"}, {"time": 1664913459.0, "comment": "O wait, there was another commit, so it is just trying to upload... Sorry for the noise!"}, {"time": 1664934563.0, "comment": "Merged. Thanks, all!"}, {"time": 1664934607.0, "comment": "Thanks everyone for all your help!"}], "issues": {"13507": {"issue_title": "Enable `np.mean()` for `astropy.time.Time`", "issue_body": "### Description\r\n\r\nThe `__array_function__` dispatch mechanism was recently added to astropy.\r\nCan we use this mechanism to activate the `np.mean()` array function for instances of `astropy.time.Time`?\r\n\r\nIf so, what is the most efficient way to implement this, considering the many time formats?\r\n\r\n", "issue_timeline": [{"time": 1658945203.0, "comment": "I've opened Pull Request #13508 with my first attempt at implementing this functionality. I'm worried it's pretty inefficient / might lose precision since it converts times to Unix format before doing the arithmetic. \r\n\r\nI'm open to all suggestions / ideas to make this more efficient. "}]}}}
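The thread above converges on computing the mean of a Time from its exact integer jd1 part and fractional jd2 part (e.g. utils.day_frac(jd1.sum(), jd2.sum(), divisor=n)), exposed as a Time.mean() method that NumPy then dispatches to. A minimal usage sketch of the resulting behaviour, assuming the merged Time.mean()/np.mean() support described in the comments; the timestamps are illustrative, and scale="tai" sidesteps the UTC<->TAI round trip that triggered the ERFA "dubious year" warnings mentioned above:

import numpy as np
from astropy.time import Time

# Illustrative times; scale="tai" avoids the ERFA warnings discussed in the thread.
t = Time(["2022-01-01T00:00:00", "2022-01-01T00:00:02"], scale="tai")

# After the change, both forms are expected to agree on the midpoint.
print(t.mean().isot)    # expected: 2022-01-01T00:00:01.000
print(np.mean(t).isot)  # same result via __array_function__ dispatch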
|
astropy/astropy
| 13,676
|
https://github.com/astropy/astropy/pull/13676
|
astropy__astropy-13676
|
[]
|
a30301e5535be2f558cb948da6b3475df4e36a98
|
diff --git a/astropy/units/quantity.py b/astropy/units/quantity.py
index 7043156e91d7..0512e969291a 100644
--- a/astropy/units/quantity.py
+++ b/astropy/units/quantity.py
@@ -29,7 +29,7 @@
from .quantity_helper import can_have_arbitrary_unit, check_output, converters_and_unit
from .quantity_helper.function_helpers import (
DISPATCHED_FUNCTIONS, FUNCTION_HELPERS, SUBCLASS_SAFE_FUNCTIONS, UNSUPPORTED_FUNCTIONS)
-from .structured import StructuredUnit
+from .structured import StructuredUnit, _structured_unit_like_dtype
from .utils import is_effectively_unity
__all__ = ["Quantity", "SpecificTypeQuantity",
@@ -481,6 +481,7 @@ def __new__(cls, value, unit=None, dtype=np.inexact, copy=True, order=None,
# structured dtype of the value.
dtype = unit._recursively_get_dtype(value)
+ using_default_unit = False
if value_unit is None:
# If the value has a `unit` attribute and if not None
# (for Columns with uninitialized unit), treat it like a quantity.
@@ -488,6 +489,7 @@ def __new__(cls, value, unit=None, dtype=np.inexact, copy=True, order=None,
if value_unit is None:
# Default to dimensionless for no (initialized) unit attribute.
if unit is None:
+ using_default_unit = True
unit = cls._default_unit
value_unit = unit # signal below that no conversion is needed
else:
@@ -507,6 +509,11 @@ def __new__(cls, value, unit=None, dtype=np.inexact, copy=True, order=None,
value = np.array(value, dtype=dtype, copy=copy, order=order,
subok=True, ndmin=ndmin)
+ # For no-user-input unit, make sure the constructed unit matches the
+ # structure of the data.
+ if using_default_unit and value.dtype.names is not None:
+ unit = value_unit = _structured_unit_like_dtype(value_unit, value.dtype)
+
# check that array contains numbers or long int objects
if (value.dtype.kind in 'OSU' and
not (value.dtype.kind == 'O' and
diff --git a/astropy/units/structured.py b/astropy/units/structured.py
index ead091e4b9db..a28ff6fb873c 100644
--- a/astropy/units/structured.py
+++ b/astropy/units/structured.py
@@ -3,6 +3,8 @@
This module defines structured units and quantities.
"""
+from __future__ import annotations # For python < 3.10
+
# Standard library
import operator
@@ -515,3 +517,34 @@ def __ne__(self, other):
other = other.item()
return self.item() != other
+
+
+def _structured_unit_like_dtype(unit: UnitBase | StructuredUnit, dtype: np.dtype) -> StructuredUnit:
+ """Make a `StructuredUnit` of one unit, with the structure of a `numpy.dtype`.
+
+ Parameters
+ ----------
+ unit : UnitBase
+ The unit that will be filled into the structure.
+ dtype : `numpy.dtype`
+ The structure for the StructuredUnit.
+
+ Returns
+ -------
+ StructuredUnit
+ """
+ if isinstance(unit, StructuredUnit):
+ # If unit is structured, it should match the dtype. This function is
+ # only used in Quantity, which performs this check, so it's fine to
+ # return as is.
+ return unit
+
+ # Make a structured unit
+ units = []
+ for name in dtype.names:
+ subdtype = dtype.fields[name][0]
+ if subdtype.names is not None:
+ units.append(_structured_unit_like_dtype(unit, subdtype))
+ else:
+ units.append(unit)
+ return StructuredUnit(tuple(units), names=dtype.names)
diff --git a/docs/changes/units/13676.api.rst b/docs/changes/units/13676.api.rst
new file mode 100644
index 000000000000..966372f06cef
--- /dev/null
+++ b/docs/changes/units/13676.api.rst
@@ -0,0 +1,2 @@
+When ``Quantity`` is constructed from a structured array and ``unit`` is
+``None``, the default unit is now structured like the input data.
|
diff --git a/astropy/units/tests/test_structured.py b/astropy/units/tests/test_structured.py
index fe4134ac05b2..78ab03a7c022 100644
--- a/astropy/units/tests/test_structured.py
+++ b/astropy/units/tests/test_structured.py
@@ -12,6 +12,7 @@
from astropy import units as u
from astropy.tests.helper import check_pickling_recovery, pickle_protocol
from astropy.units import Quantity, StructuredUnit, Unit, UnitBase
+from astropy.units.quantity import _structured_unit_like_dtype
from astropy.utils.compat import NUMPY_LT_1_21_1
from astropy.utils.masked import Masked
@@ -433,6 +434,17 @@ def test_initialization_by_shifting_to_unit(self):
assert np.all(q_pv_t.value == self.pv_t)
assert np.may_share_memory(q_pv_t, self.pv_t)
+ def test_initialization_without_unit(self):
+ q_pv_t = u.Quantity(self.pv_t, unit=None)
+
+ assert np.all(q_pv_t.value == self.pv_t)
+
+ # Test that unit is a structured unit like the dtype
+ expected_unit = _structured_unit_like_dtype(u.Quantity._default_unit, self.pv_t.dtype)
+ assert q_pv_t.unit == expected_unit
+ # A more explicit test
+ assert q_pv_t.unit == u.StructuredUnit(((u.one, u.one), u.one))
+
def test_getitem(self):
q_pv_t = Quantity(self.pv_t, self.pv_t_unit)
q_pv_t01 = q_pv_t[:2]
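Taken together, the patch and test above make a Quantity built from a structured array with unit=None pick up a dimensionless unit mirroring the dtype structure (_structured_unit_like_dtype fills the default unit into each leaf field). A small sketch of the new behaviour, assuming this change is applied; the dtype below is an illustrative stand-in for the pv_t fixture used in the test:

import numpy as np
import astropy.units as u

# Nested structured dtype standing in for the test's pv_t data (assumption).
pv_t = np.zeros(2, dtype=[("pv", [("p", "f8"), ("v", "f8")]), ("t", "f8")])

q = u.Quantity(pv_t, unit=None)
# With the change, the default unit is structured like the dtype rather than
# being a single dimensionless unit for the whole array.
print(q.unit)
assert q.unit == u.StructuredUnit(((u.one, u.one), u.one))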
| 2022-09-15T19:20:47
|
{}
|
{"astropy/units/quantity.py": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\"\"\"\nThis module defines the `Quantity` object, which represents a number with some\nassociated units. `Quantity` objects support operations like ordinary numbers,\nbut will deal with unit conversions internally.\n\"\"\"\n\n# STDLIB\nimport numbers\nimport operator\nimport re\nimport warnings\nfrom fractions import Fraction\n\n# THIRD PARTY\nimport numpy as np\n\n# LOCAL\nfrom astropy import config as _config\nfrom astropy.utils.compat import NUMPY_LT_1_20, NUMPY_LT_1_22\nfrom astropy.utils.data_info import ParentDtypeInfo\nfrom astropy.utils.exceptions import AstropyDeprecationWarning, AstropyWarning\nfrom astropy.utils.misc import isiterable\n\nfrom .core import (\n Unit, UnitBase, UnitConversionError, UnitsError, UnitTypeError, dimensionless_unscaled,\n get_current_unit_registry)\nfrom .format.latex import Latex\nfrom .quantity_helper import can_have_arbitrary_unit, check_output, converters_and_unit\nfrom .quantity_helper.function_helpers import (\n DISPATCHED_FUNCTIONS, FUNCTION_HELPERS, SUBCLASS_SAFE_FUNCTIONS, UNSUPPORTED_FUNCTIONS)\nfrom .structured import StructuredUnit\nfrom .utils import is_effectively_unity\n\n__all__ = [\"Quantity\", \"SpecificTypeQuantity\",\n \"QuantityInfoBase\", \"QuantityInfo\", \"allclose\", \"isclose\"]\n\n\n# We don't want to run doctests in the docstrings we inherit from Numpy\n__doctest_skip__ = ['Quantity.*']\n\n_UNIT_NOT_INITIALISED = \"(Unit not initialised)\"\n_UFUNCS_FILTER_WARNINGS = {np.arcsin, np.arccos, np.arccosh, np.arctanh}\n\n\nclass Conf(_config.ConfigNamespace):\n \"\"\"\n Configuration parameters for Quantity\n \"\"\"\n latex_array_threshold = _config.ConfigItem(100,\n 'The maximum size an array Quantity can be before its LaTeX '\n 'representation for IPython gets \"summarized\" (meaning only the first '\n 'and last few elements are shown with \"...\" between). Setting this to a '\n 'negative number means that the value will instead be whatever numpy '\n 'gets from get_printoptions.')\n\n\nconf = Conf()\n\n\nclass QuantityIterator:\n \"\"\"\n Flat iterator object to iterate over Quantities\n\n A `QuantityIterator` iterator is returned by ``q.flat`` for any Quantity\n ``q``. It allows iterating over the array as if it were a 1-D array,\n either in a for-loop or by calling its `next` method.\n\n Iteration is done in C-contiguous style, with the last index varying the\n fastest. The iterator can also be indexed using basic slicing or\n advanced indexing.\n\n See Also\n --------\n Quantity.flatten : Returns a flattened copy of an array.\n\n Notes\n -----\n `QuantityIterator` is inspired by `~numpy.ma.core.MaskedIterator`. It\n is not exported by the `~astropy.units` module. 
Instead of\n instantiating a `QuantityIterator` directly, use `Quantity.flat`.\n \"\"\"\n\n def __init__(self, q):\n self._quantity = q\n self._dataiter = q.view(np.ndarray).flat\n\n def __iter__(self):\n return self\n\n def __getitem__(self, indx):\n out = self._dataiter.__getitem__(indx)\n # For single elements, ndarray.flat.__getitem__ returns scalars; these\n # need a new view as a Quantity.\n if isinstance(out, type(self._quantity)):\n return out\n else:\n return self._quantity._new_view(out)\n\n def __setitem__(self, index, value):\n self._dataiter[index] = self._quantity._to_own_unit(value)\n\n def __next__(self):\n \"\"\"\n Return the next value, or raise StopIteration.\n \"\"\"\n out = next(self._dataiter)\n # ndarray.flat._dataiter returns scalars, so need a view as a Quantity.\n return self._quantity._new_view(out)\n\n next = __next__\n\n def __len__(self):\n return len(self._dataiter)\n\n #### properties and methods to match `numpy.ndarray.flatiter` ####\n\n @property\n def base(self):\n \"\"\"A reference to the array that is iterated over.\"\"\"\n return self._quantity\n\n @property\n def coords(self):\n \"\"\"An N-dimensional tuple of current coordinates.\"\"\"\n return self._dataiter.coords\n\n @property\n def index(self):\n \"\"\"Current flat index into the array.\"\"\"\n return self._dataiter.index\n\n def copy(self):\n \"\"\"Get a copy of the iterator as a 1-D array.\"\"\"\n return self._quantity.flatten()\n\n\nclass QuantityInfoBase(ParentDtypeInfo):\n # This is on a base class rather than QuantityInfo directly, so that\n # it can be used for EarthLocationInfo yet make clear that that class\n # should not be considered a typical Quantity subclass by Table.\n attrs_from_parent = {'dtype', 'unit'} # dtype and unit taken from parent\n _supports_indexing = True\n\n @staticmethod\n def default_format(val):\n return f'{val.value}'\n\n @staticmethod\n def possible_string_format_functions(format_):\n \"\"\"Iterate through possible string-derived format functions.\n\n A string can either be a format specifier for the format built-in,\n a new-style format string, or an old-style format string.\n\n This method is overridden in order to suppress printing the unit\n in each row since it is already at the top in the column header.\n \"\"\"\n yield lambda format_, val: format(val.value, format_)\n yield lambda format_, val: format_.format(val.value)\n yield lambda format_, val: format_ % val.value\n\n\nclass QuantityInfo(QuantityInfoBase):\n \"\"\"\n Container for meta information like name, description, format. 
This is\n required when the object is used as a mixin column within a table, but can\n be used as a general way to store meta information.\n \"\"\"\n _represent_as_dict_attrs = ('value', 'unit')\n _construct_from_dict_args = ['value']\n _represent_as_dict_primary_data = 'value'\n\n def new_like(self, cols, length, metadata_conflicts='warn', name=None):\n \"\"\"\n Return a new Quantity instance which is consistent with the\n input ``cols`` and has ``length`` rows.\n\n This is intended for creating an empty column object whose elements can\n be set in-place for table operations like join or vstack.\n\n Parameters\n ----------\n cols : list\n List of input columns\n length : int\n Length of the output column object\n metadata_conflicts : str ('warn'|'error'|'silent')\n How to handle metadata conflicts\n name : str\n Output column name\n\n Returns\n -------\n col : `~astropy.units.Quantity` (or subclass)\n Empty instance of this class consistent with ``cols``\n\n \"\"\"\n\n # Get merged info attributes like shape, dtype, format, description, etc.\n attrs = self.merge_cols_attributes(cols, metadata_conflicts, name,\n ('meta', 'format', 'description'))\n\n # Make an empty quantity using the unit of the last one.\n shape = (length,) + attrs.pop('shape')\n dtype = attrs.pop('dtype')\n # Use zeros so we do not get problems for Quantity subclasses such\n # as Longitude and Latitude, which cannot take arbitrary values.\n data = np.zeros(shape=shape, dtype=dtype)\n # Get arguments needed to reconstruct class\n map = {key: (data if key == 'value' else getattr(cols[-1], key))\n for key in self._represent_as_dict_attrs}\n map['copy'] = False\n out = self._construct_from_dict(map)\n\n # Set remaining info attributes\n for attr, value in attrs.items():\n setattr(out.info, attr, value)\n\n return out\n\n def get_sortable_arrays(self):\n \"\"\"\n Return a list of arrays which can be lexically sorted to represent\n the order of the parent column.\n\n For Quantity this is just the quantity itself.\n\n\n Returns\n -------\n arrays : list of ndarray\n \"\"\"\n return [self._parent]\n\n\nclass Quantity(np.ndarray):\n \"\"\"A `~astropy.units.Quantity` represents a number with some associated unit.\n\n See also: https://docs.astropy.org/en/stable/units/quantity.html\n\n Parameters\n ----------\n value : number, `~numpy.ndarray`, `~astropy.units.Quantity` (sequence), or str\n The numerical value of this quantity in the units given by unit. If a\n `Quantity` or sequence of them (or any other valid object with a\n ``unit`` attribute), creates a new `Quantity` object, converting to\n `unit` units as needed. If a string, it is converted to a number or\n `Quantity`, depending on whether a unit is present.\n\n unit : unit-like\n An object that represents the unit associated with the input value.\n Must be an `~astropy.units.UnitBase` object or a string parseable by\n the :mod:`~astropy.units` package.\n\n dtype : ~numpy.dtype, optional\n The dtype of the resulting Numpy array or scalar that will\n hold the value. If not provided, it is determined from the input,\n except that any integer and (non-Quantity) object inputs are converted\n to float by default.\n If `None`, the normal `numpy.dtype` introspection is used, e.g.\n preventing upcasting of integers.\n\n copy : bool, optional\n If `True` (default), then the value is copied. Otherwise, a copy will\n only be made if ``__array__`` returns a copy, if value is a nested\n sequence, or if a copy is needed to satisfy an explicitly given\n ``dtype``. 
(The `False` option is intended mostly for internal use,\n to speed up initialization where a copy is known to have been made.\n Use with care.)\n\n order : {'C', 'F', 'A'}, optional\n Specify the order of the array. As in `~numpy.array`. This parameter\n is ignored if the input is a `Quantity` and ``copy=False``.\n\n subok : bool, optional\n If `False` (default), the returned array will be forced to be a\n `Quantity`. Otherwise, `Quantity` subclasses will be passed through,\n or a subclass appropriate for the unit will be used (such as\n `~astropy.units.Dex` for ``u.dex(u.AA)``).\n\n ndmin : int, optional\n Specifies the minimum number of dimensions that the resulting array\n should have. Ones will be pre-pended to the shape as needed to meet\n this requirement. This parameter is ignored if the input is a\n `Quantity` and ``copy=False``.\n\n Raises\n ------\n TypeError\n If the value provided is not a Python numeric type.\n TypeError\n If the unit provided is not either a :class:`~astropy.units.Unit`\n object or a parseable string unit.\n\n Notes\n -----\n Quantities can also be created by multiplying a number or array with a\n :class:`~astropy.units.Unit`. See https://docs.astropy.org/en/latest/units/\n\n Unless the ``dtype`` argument is explicitly specified, integer\n or (non-Quantity) object inputs are converted to `float` by default.\n \"\"\"\n # Need to set a class-level default for _equivalencies, or\n # Constants can not initialize properly\n _equivalencies = []\n\n # Default unit for initialization; can be overridden by subclasses,\n # possibly to `None` to indicate there is no default unit.\n _default_unit = dimensionless_unscaled\n\n # Ensures views have an undefined unit.\n _unit = None\n\n __array_priority__ = 10000\n\n def __class_getitem__(cls, unit_shape_dtype):\n \"\"\"Quantity Type Hints.\n\n Unit-aware type hints are ``Annotated`` objects that encode the class,\n the unit, and possibly shape and dtype information, depending on the\n python and :mod:`numpy` versions.\n\n Schematically, ``Annotated[cls[shape, dtype], unit]``\n\n As a classmethod, the type is the class, ie ``Quantity``\n produces an ``Annotated[Quantity, ...]`` while a subclass\n like :class:`~astropy.coordinates.Angle` returns\n ``Annotated[Angle, ...]``.\n\n Parameters\n ----------\n unit_shape_dtype : :class:`~astropy.units.UnitBase`, str, `~astropy.units.PhysicalType`, or tuple\n Unit specification, can be the physical type (ie str or class).\n If tuple, then the first element is the unit specification\n and all other elements are for `numpy.ndarray` type annotations.\n Whether they are included depends on the python and :mod:`numpy`\n versions.\n\n Returns\n -------\n `typing.Annotated`, `typing_extensions.Annotated`, `astropy.units.Unit`, or `astropy.units.PhysicalType`\n Return type in this preference order:\n * if python v3.9+ : `typing.Annotated`\n * if :mod:`typing_extensions` is installed : `typing_extensions.Annotated`\n * `astropy.units.Unit` or `astropy.units.PhysicalType`\n\n Raises\n ------\n TypeError\n If the unit/physical_type annotation is not Unit-like or\n PhysicalType-like.\n\n Examples\n --------\n Create a unit-aware Quantity type annotation\n\n >>> Quantity[Unit(\"s\")]\n Annotated[Quantity, Unit(\"s\")]\n\n See Also\n --------\n `~astropy.units.quantity_input`\n Use annotations for unit checks on function arguments and results.\n\n Notes\n -----\n With Python 3.9+ or :mod:`typing_extensions`, |Quantity| types are also\n static-type compatible.\n \"\"\"\n # LOCAL\n from ._typing 
import HAS_ANNOTATED, Annotated\n\n # process whether [unit] or [unit, shape, ptype]\n if isinstance(unit_shape_dtype, tuple): # unit, shape, dtype\n target = unit_shape_dtype[0]\n shape_dtype = unit_shape_dtype[1:]\n else: # just unit\n target = unit_shape_dtype\n shape_dtype = ()\n\n # Allowed unit/physical types. Errors if neither.\n try:\n unit = Unit(target)\n except (TypeError, ValueError):\n from astropy.units.physical import get_physical_type\n\n try:\n unit = get_physical_type(target)\n except (TypeError, ValueError, KeyError): # KeyError for Enum\n raise TypeError(\"unit annotation is not a Unit or PhysicalType\") from None\n\n # Allow to sort of work for python 3.8- / no typing_extensions\n # instead of bailing out, return the unit for `quantity_input`\n if not HAS_ANNOTATED:\n warnings.warn(\"Quantity annotations are valid static type annotations only\"\n \" if Python is v3.9+ or `typing_extensions` is installed.\")\n return unit\n\n # Quantity does not (yet) properly extend the NumPy generics types,\n # introduced in numpy v1.22+, instead just including the unit info as\n # metadata using Annotated.\n # TODO: ensure we do interact with NDArray.__class_getitem__.\n return Annotated.__class_getitem__((cls, unit))\n\n def __new__(cls, value, unit=None, dtype=np.inexact, copy=True, order=None,\n subok=False, ndmin=0):\n\n if unit is not None:\n # convert unit first, to avoid multiple string->unit conversions\n unit = Unit(unit)\n\n # inexact -> upcast to float dtype\n float_default = dtype is np.inexact\n if float_default:\n dtype = None\n\n # optimize speed for Quantity with no dtype given, copy=False\n if isinstance(value, Quantity):\n if unit is not None and unit is not value.unit:\n value = value.to(unit)\n # the above already makes a copy (with float dtype)\n copy = False\n\n if type(value) is not cls and not (subok and\n isinstance(value, cls)):\n value = value.view(cls)\n\n if float_default and value.dtype.kind in 'iu':\n dtype = float\n\n return np.array(value, dtype=dtype, copy=copy, order=order,\n subok=True, ndmin=ndmin)\n\n # Maybe str, or list/tuple of Quantity? If so, this may set value_unit.\n # To ensure array remains fast, we short-circuit it.\n value_unit = None\n if not isinstance(value, np.ndarray):\n if isinstance(value, str):\n # The first part of the regex string matches any integer/float;\n # the second parts adds possible trailing .+-, which will break\n # the float function below and ensure things like 1.2.3deg\n # will not work.\n pattern = (r'\\s*[+-]?'\n r'((\\d+\\.?\\d*)|(\\.\\d+)|([nN][aA][nN])|'\n r'([iI][nN][fF]([iI][nN][iI][tT][yY]){0,1}))'\n r'([eE][+-]?\\d+)?'\n r'[.+-]?')\n\n v = re.match(pattern, value)\n unit_string = None\n try:\n value = float(v.group())\n\n except Exception:\n raise TypeError('Cannot parse \"{}\" as a {}. 
It does not '\n 'start with a number.'\n .format(value, cls.__name__))\n\n unit_string = v.string[v.end():].strip()\n if unit_string:\n value_unit = Unit(unit_string)\n if unit is None:\n unit = value_unit # signal no conversion needed below.\n\n elif isiterable(value) and len(value) > 0:\n # Iterables like lists and tuples.\n if all(isinstance(v, Quantity) for v in value):\n # If a list/tuple containing only quantities, convert all\n # to the same unit.\n if unit is None:\n unit = value[0].unit\n value = [q.to_value(unit) for q in value]\n value_unit = unit # signal below that conversion has been done\n elif (dtype is None and not hasattr(value, 'dtype')\n and isinstance(unit, StructuredUnit)):\n # Special case for list/tuple of values and a structured unit:\n # ``np.array(value, dtype=None)`` would treat tuples as lower\n # levels of the array, rather than as elements of a structured\n # array, so we use the structure of the unit to help infer the\n # structured dtype of the value.\n dtype = unit._recursively_get_dtype(value)\n\n if value_unit is None:\n # If the value has a `unit` attribute and if not None\n # (for Columns with uninitialized unit), treat it like a quantity.\n value_unit = getattr(value, 'unit', None)\n if value_unit is None:\n # Default to dimensionless for no (initialized) unit attribute.\n if unit is None:\n unit = cls._default_unit\n value_unit = unit # signal below that no conversion is needed\n else:\n try:\n value_unit = Unit(value_unit)\n except Exception as exc:\n raise TypeError(\"The unit attribute {!r} of the input could \"\n \"not be parsed as an astropy Unit, raising \"\n \"the following exception:\\n{}\"\n .format(value.unit, exc))\n\n if unit is None:\n unit = value_unit\n elif unit is not value_unit:\n copy = False # copy will be made in conversion at end\n\n value = np.array(value, dtype=dtype, copy=copy, order=order,\n subok=True, ndmin=ndmin)\n\n # check that array contains numbers or long int objects\n if (value.dtype.kind in 'OSU' and\n not (value.dtype.kind == 'O' and\n isinstance(value.item(0), numbers.Number))):\n raise TypeError(\"The value must be a valid Python or \"\n \"Numpy numeric type.\")\n\n # by default, cast any integer, boolean, etc., to float\n if float_default and value.dtype.kind in 'iuO':\n value = value.astype(float)\n\n # if we allow subclasses, allow a class from the unit.\n if subok:\n qcls = getattr(unit, '_quantity_class', cls)\n if issubclass(qcls, cls):\n cls = qcls\n\n value = value.view(cls)\n value._set_unit(value_unit)\n if unit is value_unit:\n return value\n else:\n # here we had non-Quantity input that had a \"unit\" attribute\n # with a unit different from the desired one. So, convert.\n return value.to(unit)\n\n def __array_finalize__(self, obj):\n # Check whether super().__array_finalize should be called\n # (sadly, ndarray.__array_finalize__ is None; we cannot be sure\n # what is above us).\n super_array_finalize = super().__array_finalize__\n if super_array_finalize is not None:\n super_array_finalize(obj)\n\n # If we're a new object or viewing an ndarray, nothing has to be done.\n if obj is None or obj.__class__ is np.ndarray:\n return\n\n # If our unit is not set and obj has a valid one, use it.\n if self._unit is None:\n unit = getattr(obj, '_unit', None)\n if unit is not None:\n self._set_unit(unit)\n\n # Copy info if the original had `info` defined. 
Because of the way the\n # DataInfo works, `'info' in obj.__dict__` is False until the\n # `info` attribute is accessed or set.\n if 'info' in obj.__dict__:\n self.info = obj.info\n\n def __array_wrap__(self, obj, context=None):\n\n if context is None:\n # Methods like .squeeze() created a new `ndarray` and then call\n # __array_wrap__ to turn the array into self's subclass.\n return self._new_view(obj)\n\n raise NotImplementedError('__array_wrap__ should not be used '\n 'with a context any more since all use '\n 'should go through array_function. '\n 'Please raise an issue on '\n 'https://github.com/astropy/astropy')\n\n def __array_ufunc__(self, function, method, *inputs, **kwargs):\n \"\"\"Wrap numpy ufuncs, taking care of units.\n\n Parameters\n ----------\n function : callable\n ufunc to wrap.\n method : str\n Ufunc method: ``__call__``, ``at``, ``reduce``, etc.\n inputs : tuple\n Input arrays.\n kwargs : keyword arguments\n As passed on, with ``out`` containing possible quantity output.\n\n Returns\n -------\n result : `~astropy.units.Quantity`\n Results of the ufunc, with the unit set properly.\n \"\"\"\n # Determine required conversion functions -- to bring the unit of the\n # input to that expected (e.g., radian for np.sin), or to get\n # consistent units between two inputs (e.g., in np.add) --\n # and the unit of the result (or tuple of units for nout > 1).\n converters, unit = converters_and_unit(function, method, *inputs)\n\n out = kwargs.get('out', None)\n # Avoid loop back by turning any Quantity output into array views.\n if out is not None:\n # If pre-allocated output is used, check it is suitable.\n # This also returns array view, to ensure we don't loop back.\n if function.nout == 1:\n out = out[0]\n out_array = check_output(out, unit, inputs, function=function)\n # Ensure output argument remains a tuple.\n kwargs['out'] = (out_array,) if function.nout == 1 else out_array\n\n if method == 'reduce' and 'initial' in kwargs and unit is not None:\n # Special-case for initial argument for reductions like\n # np.add.reduce. This should be converted to the output unit as\n # well, which is typically the same as the input unit (but can\n # in principle be different: unitless for np.equal, radian\n # for np.arctan2, though those are not necessarily useful!)\n kwargs['initial'] = self._to_own_unit(kwargs['initial'],\n check_precision=False, unit=unit)\n\n # Same for inputs, but here also convert if necessary.\n arrays = []\n for input_, converter in zip(inputs, converters):\n input_ = getattr(input_, 'value', input_)\n arrays.append(converter(input_) if converter else input_)\n\n # Call our superclass's __array_ufunc__\n result = super().__array_ufunc__(function, method, *arrays, **kwargs)\n # If unit is None, a plain array is expected (e.g., comparisons), which\n # means we're done.\n # We're also done if the result was None (for method 'at') or\n # NotImplemented, which can happen if other inputs/outputs override\n # __array_ufunc__; hopefully, they can then deal with us.\n if unit is None or result is None or result is NotImplemented:\n return result\n\n return self._result_as_quantity(result, unit, out)\n\n def _result_as_quantity(self, result, unit, out):\n \"\"\"Turn result into a quantity with the given unit.\n\n If no output is given, it will take a view of the array as a quantity,\n and set the unit. 
If output is given, those should be quantity views\n of the result arrays, and the function will just set the unit.\n\n Parameters\n ----------\n result : ndarray or tuple thereof\n Array(s) which need to be turned into quantity.\n unit : `~astropy.units.Unit`\n Unit for the quantities to be returned (or `None` if the result\n should not be a quantity). Should be tuple if result is a tuple.\n out : `~astropy.units.Quantity` or None\n Possible output quantity. Should be `None` or a tuple if result\n is a tuple.\n\n Returns\n -------\n out : `~astropy.units.Quantity`\n With units set.\n \"\"\"\n if isinstance(result, (tuple, list)):\n if out is None:\n out = (None,) * len(result)\n return result.__class__(\n self._result_as_quantity(result_, unit_, out_)\n for (result_, unit_, out_) in\n zip(result, unit, out))\n\n if out is None:\n # View the result array as a Quantity with the proper unit.\n return result if unit is None else self._new_view(result, unit)\n\n # For given output, just set the unit. We know the unit is not None and\n # the output is of the correct Quantity subclass, as it was passed\n # through check_output.\n out._set_unit(unit)\n return out\n\n def __quantity_subclass__(self, unit):\n \"\"\"\n Overridden by subclasses to change what kind of view is\n created based on the output unit of an operation.\n\n Parameters\n ----------\n unit : UnitBase\n The unit for which the appropriate class should be returned\n\n Returns\n -------\n tuple :\n - `~astropy.units.Quantity` subclass\n - bool: True if subclasses of the given class are ok\n \"\"\"\n return Quantity, True\n\n def _new_view(self, obj=None, unit=None):\n \"\"\"\n Create a Quantity view of some array-like input, and set the unit\n\n By default, return a view of ``obj`` of the same class as ``self`` and\n with the same unit. Subclasses can override the type of class for a\n given unit using ``__quantity_subclass__``, and can ensure properties\n other than the unit are copied using ``__array_finalize__``.\n\n If the given unit defines a ``_quantity_class`` of which ``self``\n is not an instance, a view using this class is taken.\n\n Parameters\n ----------\n obj : ndarray or scalar, optional\n The array to create a view of. If obj is a numpy or python scalar,\n it will be converted to an array scalar. By default, ``self``\n is converted.\n\n unit : unit-like, optional\n The unit of the resulting object. It is used to select a\n subclass, and explicitly assigned to the view if given.\n If not given, the subclass and unit will be that of ``self``.\n\n Returns\n -------\n view : `~astropy.units.Quantity` subclass\n \"\"\"\n # Determine the unit and quantity subclass that we need for the view.\n if unit is None:\n unit = self.unit\n quantity_subclass = self.__class__\n elif unit is self.unit and self.__class__ is Quantity:\n # The second part is because we should not presume what other\n # classes want to do for the same unit. E.g., Constant will\n # always want to fall back to Quantity, and relies on going\n # through `__quantity_subclass__`.\n quantity_subclass = Quantity\n else:\n unit = Unit(unit)\n quantity_subclass = getattr(unit, '_quantity_class', Quantity)\n if isinstance(self, quantity_subclass):\n quantity_subclass, subok = self.__quantity_subclass__(unit)\n if subok:\n quantity_subclass = self.__class__\n\n # We only want to propagate information from ``self`` to our new view,\n # so obj should be a regular array. 
By using ``np.array``, we also\n # convert python and numpy scalars, which cannot be viewed as arrays\n # and thus not as Quantity either, to zero-dimensional arrays.\n # (These are turned back into scalar in `.value`)\n # Note that for an ndarray input, the np.array call takes only double\n # ``obj.__class is np.ndarray``. So, not worth special-casing.\n if obj is None:\n obj = self.view(np.ndarray)\n else:\n obj = np.array(obj, copy=False, subok=True)\n\n # Take the view, set the unit, and update possible other properties\n # such as ``info``, ``wrap_angle`` in `Longitude`, etc.\n view = obj.view(quantity_subclass)\n view._set_unit(unit)\n view.__array_finalize__(self)\n return view\n\n def _set_unit(self, unit):\n \"\"\"Set the unit.\n\n This is used anywhere the unit is set or modified, i.e., in the\n initilizer, in ``__imul__`` and ``__itruediv__`` for in-place\n multiplication and division by another unit, as well as in\n ``__array_finalize__`` for wrapping up views. For Quantity, it just\n sets the unit, but subclasses can override it to check that, e.g.,\n a unit is consistent.\n \"\"\"\n if not isinstance(unit, UnitBase):\n if (isinstance(self._unit, StructuredUnit)\n or isinstance(unit, StructuredUnit)):\n unit = StructuredUnit(unit, self.dtype)\n else:\n # Trying to go through a string ensures that, e.g., Magnitudes with\n # dimensionless physical unit become Quantity with units of mag.\n unit = Unit(str(unit), parse_strict='silent')\n if not isinstance(unit, (UnitBase, StructuredUnit)):\n raise UnitTypeError(\n \"{} instances require normal units, not {} instances.\"\n .format(type(self).__name__, type(unit)))\n\n self._unit = unit\n\n def __deepcopy__(self, memo):\n # If we don't define this, ``copy.deepcopy(quantity)`` will\n # return a bare Numpy array.\n return self.copy()\n\n def __reduce__(self):\n # patch to pickle Quantity objects (ndarray subclasses), see\n # http://www.mail-archive.com/[email protected]/msg02446.html\n\n object_state = list(super().__reduce__())\n object_state[2] = (object_state[2], self.__dict__)\n return tuple(object_state)\n\n def __setstate__(self, state):\n # patch to unpickle Quantity objects (ndarray subclasses), see\n # http://www.mail-archive.com/[email protected]/msg02446.html\n\n nd_state, own_state = state\n super().__setstate__(nd_state)\n self.__dict__.update(own_state)\n\n info = QuantityInfo()\n\n def _to_value(self, unit, equivalencies=[]):\n \"\"\"Helper method for to and to_value.\"\"\"\n if equivalencies == []:\n equivalencies = self._equivalencies\n if not self.dtype.names or isinstance(self.unit, StructuredUnit):\n # Standard path, let unit to do work.\n return self.unit.to(unit, self.view(np.ndarray),\n equivalencies=equivalencies)\n\n else:\n # The .to() method of a simple unit cannot convert a structured\n # dtype, so we work around it, by recursing.\n # TODO: deprecate this?\n # Convert simple to Structured on initialization?\n result = np.empty_like(self.view(np.ndarray))\n for name in self.dtype.names:\n result[name] = self[name]._to_value(unit, equivalencies)\n return result\n\n def to(self, unit, equivalencies=[], copy=True):\n \"\"\"\n Return a new `~astropy.units.Quantity` object with the specified unit.\n\n Parameters\n ----------\n unit : unit-like\n An object that represents the unit to convert to. Must be\n an `~astropy.units.UnitBase` object or a string parseable\n by the `~astropy.units` package.\n\n equivalencies : list of tuple\n A list of equivalence pairs to try if the units are not\n directly convertible. 
See :ref:`astropy:unit_equivalencies`.\n If not provided or ``[]``, class default equivalencies will be used\n (none for `~astropy.units.Quantity`, but may be set for subclasses)\n If `None`, no equivalencies will be applied at all, not even any\n set globally or within a context.\n\n copy : bool, optional\n If `True` (default), then the value is copied. Otherwise, a copy\n will only be made if necessary.\n\n See also\n --------\n to_value : get the numerical value in a given unit.\n \"\"\"\n # We don't use `to_value` below since we always want to make a copy\n # and don't want to slow down this method (esp. the scalar case).\n unit = Unit(unit)\n if copy:\n # Avoid using to_value to ensure that we make a copy. We also\n # don't want to slow down this method (esp. the scalar case).\n value = self._to_value(unit, equivalencies)\n else:\n # to_value only copies if necessary\n value = self.to_value(unit, equivalencies)\n return self._new_view(value, unit)\n\n def to_value(self, unit=None, equivalencies=[]):\n \"\"\"\n The numerical value, possibly in a different unit.\n\n Parameters\n ----------\n unit : unit-like, optional\n The unit in which the value should be given. If not given or `None`,\n use the current unit.\n\n equivalencies : list of tuple, optional\n A list of equivalence pairs to try if the units are not directly\n convertible (see :ref:`astropy:unit_equivalencies`). If not provided\n or ``[]``, class default equivalencies will be used (none for\n `~astropy.units.Quantity`, but may be set for subclasses).\n If `None`, no equivalencies will be applied at all, not even any\n set globally or within a context.\n\n Returns\n -------\n value : ndarray or scalar\n The value in the units specified. For arrays, this will be a view\n of the data if no unit conversion was necessary.\n\n See also\n --------\n to : Get a new instance in a different unit.\n \"\"\"\n if unit is None or unit is self.unit:\n value = self.view(np.ndarray)\n elif not self.dtype.names:\n # For non-structured, we attempt a short-cut, where we just get\n # the scale. If that is 1, we do not have to do anything.\n unit = Unit(unit)\n # We want a view if the unit does not change. 
One could check\n # with \"==\", but that calculates the scale that we need anyway.\n # TODO: would be better for `unit.to` to have an in-place flag.\n try:\n scale = self.unit._to(unit)\n except Exception:\n # Short-cut failed; try default (maybe equivalencies help).\n value = self._to_value(unit, equivalencies)\n else:\n value = self.view(np.ndarray)\n if not is_effectively_unity(scale):\n # not in-place!\n value = value * scale\n else:\n # For structured arrays, we go the default route.\n value = self._to_value(unit, equivalencies)\n\n # Index with empty tuple to decay array scalars in to numpy scalars.\n return value if value.shape else value[()]\n\n value = property(to_value,\n doc=\"\"\"The numerical value of this instance.\n\n See also\n --------\n to_value : Get the numerical value in a given unit.\n \"\"\")\n\n @property\n def unit(self):\n \"\"\"\n A `~astropy.units.UnitBase` object representing the unit of this\n quantity.\n \"\"\"\n\n return self._unit\n\n @property\n def equivalencies(self):\n \"\"\"\n A list of equivalencies that will be applied by default during\n unit conversions.\n \"\"\"\n\n return self._equivalencies\n\n def _recursively_apply(self, func):\n \"\"\"Apply function recursively to every field.\n\n Returns a copy with the result.\n \"\"\"\n result = np.empty_like(self)\n result_value = result.view(np.ndarray)\n result_unit = ()\n for name in self.dtype.names:\n part = func(self[name])\n result_value[name] = part.value\n result_unit += (part.unit,)\n\n result._set_unit(result_unit)\n return result\n\n @property\n def si(self):\n \"\"\"\n Returns a copy of the current `Quantity` instance with SI units. The\n value of the resulting object will be scaled.\n \"\"\"\n if self.dtype.names:\n return self._recursively_apply(operator.attrgetter('si'))\n si_unit = self.unit.si\n return self._new_view(self.value * si_unit.scale,\n si_unit / si_unit.scale)\n\n @property\n def cgs(self):\n \"\"\"\n Returns a copy of the current `Quantity` instance with CGS units. The\n value of the resulting object will be scaled.\n \"\"\"\n if self.dtype.names:\n return self._recursively_apply(operator.attrgetter('cgs'))\n cgs_unit = self.unit.cgs\n return self._new_view(self.value * cgs_unit.scale,\n cgs_unit / cgs_unit.scale)\n\n @property\n def isscalar(self):\n \"\"\"\n True if the `value` of this quantity is a scalar, or False if it\n is an array-like object.\n\n .. note::\n This is subtly different from `numpy.isscalar` in that\n `numpy.isscalar` returns False for a zero-dimensional array\n (e.g. ``np.array(1)``), while this is True for quantities,\n since quantities cannot represent true numpy scalars.\n \"\"\"\n return not self.shape\n\n # This flag controls whether convenience conversion members, such\n # as `q.m` equivalent to `q.to_value(u.m)` are available. This is\n # not turned on on Quantity itself, but is on some subclasses of\n # Quantity, such as `astropy.coordinates.Angle`.\n _include_easy_conversion_members = False\n\n def __dir__(self):\n \"\"\"\n Quantities are able to directly convert to other units that\n have the same physical type. 
This function is implemented in\n order to make autocompletion still work correctly in IPython.\n \"\"\"\n if not self._include_easy_conversion_members:\n return super().__dir__()\n\n dir_values = set(super().__dir__())\n equivalencies = Unit._normalize_equivalencies(self.equivalencies)\n for equivalent in self.unit._get_units_with_same_physical_type(equivalencies):\n dir_values.update(equivalent.names)\n return sorted(dir_values)\n\n def __getattr__(self, attr):\n \"\"\"\n Quantities are able to directly convert to other units that\n have the same physical type.\n \"\"\"\n if not self._include_easy_conversion_members:\n raise AttributeError(\n f\"'{self.__class__.__name__}' object has no '{attr}' member\")\n\n def get_virtual_unit_attribute():\n registry = get_current_unit_registry().registry\n to_unit = registry.get(attr, None)\n if to_unit is None:\n return None\n\n try:\n return self.unit.to(\n to_unit, self.value, equivalencies=self.equivalencies)\n except UnitsError:\n return None\n\n value = get_virtual_unit_attribute()\n\n if value is None:\n raise AttributeError(\n f\"{self.__class__.__name__} instance has no attribute '{attr}'\")\n else:\n return value\n\n # Equality needs to be handled explicitly as ndarray.__eq__ gives\n # DeprecationWarnings on any error, which is distracting, and does not\n # deal well with structured arrays (nor does the ufunc).\n def __eq__(self, other):\n try:\n other_value = self._to_own_unit(other)\n except UnitsError:\n return False\n except Exception:\n return NotImplemented\n return self.value.__eq__(other_value)\n\n def __ne__(self, other):\n try:\n other_value = self._to_own_unit(other)\n except UnitsError:\n return True\n except Exception:\n return NotImplemented\n return self.value.__ne__(other_value)\n\n # Unit conversion operator (<<).\n def __lshift__(self, other):\n try:\n other = Unit(other, parse_strict='silent')\n except UnitTypeError:\n return NotImplemented\n\n return self.__class__(self, other, copy=False, subok=True)\n\n def __ilshift__(self, other):\n try:\n other = Unit(other, parse_strict='silent')\n except UnitTypeError:\n return NotImplemented\n\n try:\n factor = self.unit._to(other)\n except Exception:\n # Maybe via equivalencies? Now we do make a temporary copy.\n try:\n value = self._to_value(other)\n except UnitConversionError:\n return NotImplemented\n\n self.view(np.ndarray)[...] = value\n\n else:\n self.view(np.ndarray)[...] *= factor\n\n self._set_unit(other)\n return self\n\n def __rlshift__(self, other):\n if not self.isscalar:\n return NotImplemented\n return Unit(self).__rlshift__(other)\n\n # Give warning for other >> self, since probably other << self was meant.\n def __rrshift__(self, other):\n warnings.warn(\">> is not implemented. 
Did you mean to convert \"\n \"something to this quantity as a unit using '<<'?\",\n AstropyWarning)\n return NotImplemented\n\n # Also define __rshift__ and __irshift__ so we override default ndarray\n # behaviour, but instead of emitting a warning here, let it be done by\n # other (which likely is a unit if this was a mistake).\n def __rshift__(self, other):\n return NotImplemented\n\n def __irshift__(self, other):\n return NotImplemented\n\n # Arithmetic operations\n def __mul__(self, other):\n \"\"\" Multiplication between `Quantity` objects and other objects.\"\"\"\n\n if isinstance(other, (UnitBase, str)):\n try:\n return self._new_view(self.copy(), other * self.unit)\n except UnitsError: # let other try to deal with it\n return NotImplemented\n\n return super().__mul__(other)\n\n def __imul__(self, other):\n \"\"\"In-place multiplication between `Quantity` objects and others.\"\"\"\n\n if isinstance(other, (UnitBase, str)):\n self._set_unit(other * self.unit)\n return self\n\n return super().__imul__(other)\n\n def __rmul__(self, other):\n \"\"\" Right Multiplication between `Quantity` objects and other\n objects.\n \"\"\"\n\n return self.__mul__(other)\n\n def __truediv__(self, other):\n \"\"\" Division between `Quantity` objects and other objects.\"\"\"\n\n if isinstance(other, (UnitBase, str)):\n try:\n return self._new_view(self.copy(), self.unit / other)\n except UnitsError: # let other try to deal with it\n return NotImplemented\n\n return super().__truediv__(other)\n\n def __itruediv__(self, other):\n \"\"\"Inplace division between `Quantity` objects and other objects.\"\"\"\n\n if isinstance(other, (UnitBase, str)):\n self._set_unit(self.unit / other)\n return self\n\n return super().__itruediv__(other)\n\n def __rtruediv__(self, other):\n \"\"\" Right Division between `Quantity` objects and other objects.\"\"\"\n\n if isinstance(other, (UnitBase, str)):\n return self._new_view(1. / self.value, other / self.unit)\n\n return super().__rtruediv__(other)\n\n def __pow__(self, other):\n if isinstance(other, Fraction):\n # Avoid getting object arrays by raising the value to a Fraction.\n return self._new_view(self.value ** float(other),\n self.unit ** other)\n\n return super().__pow__(other)\n\n # other overrides of special functions\n def __hash__(self):\n return hash(self.value) ^ hash(self.unit)\n\n def __iter__(self):\n if self.isscalar:\n raise TypeError(\n \"'{cls}' object with a scalar value is not iterable\"\n .format(cls=self.__class__.__name__))\n\n # Otherwise return a generator\n def quantity_iter():\n for val in self.value:\n yield self._new_view(val)\n\n return quantity_iter()\n\n def __getitem__(self, key):\n if isinstance(key, str) and isinstance(self.unit, StructuredUnit):\n return self._new_view(self.view(np.ndarray)[key], self.unit[key])\n\n try:\n out = super().__getitem__(key)\n except IndexError:\n # We want zero-dimensional Quantity objects to behave like scalars,\n # so they should raise a TypeError rather than an IndexError.\n if self.isscalar:\n raise TypeError(\n \"'{cls}' object with a scalar value does not support \"\n \"indexing\".format(cls=self.__class__.__name__))\n else:\n raise\n # For single elements, ndarray.__getitem__ returns scalars; these\n # need a new view as a Quantity.\n if not isinstance(out, np.ndarray):\n out = self._new_view(out)\n return out\n\n def __setitem__(self, i, value):\n if isinstance(i, str):\n # Indexing will cause a different unit, so by doing this in\n # two steps we effectively try with the right unit.\n self[i][...] 
= value\n return\n\n # update indices in info if the info property has been accessed\n # (in which case 'info' in self.__dict__ is True; this is guaranteed\n # to be the case if we're part of a table).\n if not self.isscalar and 'info' in self.__dict__:\n self.info.adjust_indices(i, value, len(self))\n self.view(np.ndarray).__setitem__(i, self._to_own_unit(value))\n\n # __contains__ is OK\n\n def __bool__(self):\n \"\"\"Quantities should always be treated as non-False; there is too much\n potential for ambiguity otherwise.\n \"\"\"\n warnings.warn('The truth value of a Quantity is ambiguous. '\n 'In the future this will raise a ValueError.',\n AstropyDeprecationWarning)\n return True\n\n def __len__(self):\n if self.isscalar:\n raise TypeError(\"'{cls}' object with a scalar value has no \"\n \"len()\".format(cls=self.__class__.__name__))\n else:\n return len(self.value)\n\n # Numerical types\n def __float__(self):\n try:\n return float(self.to_value(dimensionless_unscaled))\n except (UnitsError, TypeError):\n raise TypeError('only dimensionless scalar quantities can be '\n 'converted to Python scalars')\n\n def __int__(self):\n try:\n return int(self.to_value(dimensionless_unscaled))\n except (UnitsError, TypeError):\n raise TypeError('only dimensionless scalar quantities can be '\n 'converted to Python scalars')\n\n def __index__(self):\n # for indices, we do not want to mess around with scaling at all,\n # so unlike for float, int, we insist here on unscaled dimensionless\n try:\n assert self.unit.is_unity()\n return self.value.__index__()\n except Exception:\n raise TypeError('only integer dimensionless scalar quantities '\n 'can be converted to a Python index')\n\n # TODO: we may want to add a hook for dimensionless quantities?\n @property\n def _unitstr(self):\n if self.unit is None:\n unitstr = _UNIT_NOT_INITIALISED\n else:\n unitstr = str(self.unit)\n\n if unitstr:\n unitstr = ' ' + unitstr\n\n return unitstr\n\n def to_string(self, unit=None, precision=None, format=None, subfmt=None):\n \"\"\"\n Generate a string representation of the quantity and its unit.\n\n The behavior of this function can be altered via the\n `numpy.set_printoptions` function and its various keywords. The\n exception to this is the ``threshold`` keyword, which is controlled via\n the ``[units.quantity]`` configuration item ``latex_array_threshold``.\n This is treated separately because the numpy default of 1000 is too big\n for most browsers to handle.\n\n Parameters\n ----------\n unit : unit-like, optional\n Specifies the unit. If not provided,\n the unit used to initialize the quantity will be used.\n\n precision : number, optional\n The level of decimal precision. If `None`, or not provided,\n it will be determined from NumPy print options.\n\n format : str, optional\n The format of the result. If not provided, an unadorned\n string is returned. Supported values are:\n\n - 'latex': Return a LaTeX-formatted string\n\n - 'latex_inline': Return a LaTeX-formatted string that uses\n negative exponents instead of fractions\n\n subfmt : str, optional\n Subformat of the result. For the moment, only used for\n ``format='latex'`` and ``format='latex_inline'``. Supported\n values are:\n\n - 'inline': Use ``$ ... $`` as delimiters.\n\n - 'display': Use ``$\\\\displaystyle ... 
$`` as delimiters.\n\n Returns\n -------\n str\n A string with the contents of this Quantity\n \"\"\"\n if unit is not None and unit != self.unit:\n return self.to(unit).to_string(\n unit=None, precision=precision, format=format, subfmt=subfmt)\n\n formats = {\n None: None,\n \"latex\": {\n None: (\"$\", \"$\"),\n \"inline\": (\"$\", \"$\"),\n \"display\": (r\"$\\displaystyle \", r\"$\"),\n },\n }\n formats['latex_inline'] = formats['latex']\n\n if format not in formats:\n raise ValueError(f\"Unknown format '{format}'\")\n elif format is None:\n if precision is None:\n # Use default formatting settings\n return f'{self.value}{self._unitstr:s}'\n else:\n # np.array2string properly formats arrays as well as scalars\n return np.array2string(self.value, precision=precision,\n floatmode=\"fixed\") + self._unitstr\n\n # else, for the moment we assume format=\"latex\" or \"latex_inline\".\n\n # Set the precision if set, otherwise use numpy default\n pops = np.get_printoptions()\n format_spec = f\".{precision if precision is not None else pops['precision']}g\"\n\n def float_formatter(value):\n return Latex.format_exponential_notation(value,\n format_spec=format_spec)\n\n def complex_formatter(value):\n return '({}{}i)'.format(\n Latex.format_exponential_notation(value.real,\n format_spec=format_spec),\n Latex.format_exponential_notation(value.imag,\n format_spec='+' + format_spec))\n\n # The view is needed for the scalar case - self.value might be float.\n latex_value = np.array2string(\n self.view(np.ndarray),\n threshold=(conf.latex_array_threshold\n if conf.latex_array_threshold > -1 else pops['threshold']),\n formatter={'float_kind': float_formatter,\n 'complex_kind': complex_formatter},\n max_line_width=np.inf,\n separator=',~')\n\n latex_value = latex_value.replace('...', r'\\dots')\n\n # Format unit\n # [1:-1] strips the '$' on either side needed for math mode\n if self.unit is None:\n latex_unit = _UNIT_NOT_INITIALISED\n elif format == 'latex':\n latex_unit = self.unit._repr_latex_()[1:-1] # note this is unicode\n elif format == 'latex_inline':\n latex_unit = self.unit.to_string(format='latex_inline')[1:-1]\n\n delimiter_left, delimiter_right = formats[format][subfmt]\n\n return rf'{delimiter_left}{latex_value} \\; {latex_unit}{delimiter_right}'\n\n def __str__(self):\n return self.to_string()\n\n def __repr__(self):\n prefixstr = '<' + self.__class__.__name__ + ' '\n arrstr = np.array2string(self.view(np.ndarray), separator=', ',\n prefix=prefixstr)\n return f'{prefixstr}{arrstr}{self._unitstr:s}>'\n\n def _repr_latex_(self):\n \"\"\"\n Generate a latex representation of the quantity and its unit.\n\n Returns\n -------\n lstr\n A LaTeX string with the contents of this Quantity\n \"\"\"\n # NOTE: This should change to display format in a future release\n return self.to_string(format='latex', subfmt='inline')\n\n def __format__(self, format_spec):\n \"\"\"\n Format quantities using the new-style python formatting codes\n as specifiers for the number.\n\n If the format specifier correctly applies itself to the value,\n then it is used to format only the value. If it cannot be\n applied to the value, then it is applied to the whole string.\n\n \"\"\"\n try:\n value = format(self.value, format_spec)\n full_format_spec = \"s\"\n except ValueError:\n value = self.value\n full_format_spec = format_spec\n\n return format(f\"{value}{self._unitstr:s}\",\n full_format_spec)\n\n def decompose(self, bases=[]):\n \"\"\"\n Generates a new `Quantity` with the units\n decomposed. 
Decomposed units have only irreducible units in\n them (see `astropy.units.UnitBase.decompose`).\n\n Parameters\n ----------\n bases : sequence of `~astropy.units.UnitBase`, optional\n The bases to decompose into. When not provided,\n decomposes down to any irreducible units. When provided,\n the decomposed result will only contain the given units.\n This will raises a `~astropy.units.UnitsError` if it's not possible\n to do so.\n\n Returns\n -------\n newq : `~astropy.units.Quantity`\n A new object equal to this quantity with units decomposed.\n \"\"\"\n return self._decompose(False, bases=bases)\n\n def _decompose(self, allowscaledunits=False, bases=[]):\n \"\"\"\n Generates a new `Quantity` with the units decomposed. Decomposed\n units have only irreducible units in them (see\n `astropy.units.UnitBase.decompose`).\n\n Parameters\n ----------\n allowscaledunits : bool\n If True, the resulting `Quantity` may have a scale factor\n associated with it. If False, any scaling in the unit will\n be subsumed into the value of the resulting `Quantity`\n\n bases : sequence of UnitBase, optional\n The bases to decompose into. When not provided,\n decomposes down to any irreducible units. When provided,\n the decomposed result will only contain the given units.\n This will raises a `~astropy.units.UnitsError` if it's not possible\n to do so.\n\n Returns\n -------\n newq : `~astropy.units.Quantity`\n A new object equal to this quantity with units decomposed.\n\n \"\"\"\n\n new_unit = self.unit.decompose(bases=bases)\n\n # Be careful here because self.value usually is a view of self;\n # be sure that the original value is not being modified.\n if not allowscaledunits and hasattr(new_unit, 'scale'):\n new_value = self.value * new_unit.scale\n new_unit = new_unit / new_unit.scale\n return self._new_view(new_value, new_unit)\n else:\n return self._new_view(self.copy(), new_unit)\n\n # These functions need to be overridden to take into account the units\n # Array conversion\n # https://numpy.org/doc/stable/reference/arrays.ndarray.html#array-conversion\n\n def item(self, *args):\n \"\"\"Copy an element of an array to a scalar Quantity and return it.\n\n Like :meth:`~numpy.ndarray.item` except that it always\n returns a `Quantity`, not a Python scalar.\n\n \"\"\"\n return self._new_view(super().item(*args))\n\n def tolist(self):\n raise NotImplementedError(\"cannot make a list of Quantities. Get \"\n \"list of values with q.value.tolist()\")\n\n def _to_own_unit(self, value, check_precision=True, *, unit=None):\n \"\"\"Convert value to one's own unit (or that given).\n\n Here, non-quantities are treated as dimensionless, and care is taken\n for values of 0, infinity or nan, which are allowed to have any unit.\n\n Parameters\n ----------\n value : anything convertible to `~astropy.units.Quantity`\n The value to be converted to the requested unit.\n check_precision : bool\n Whether to forbit conversion of float to integer if that changes\n the input number. Default: `True`.\n unit : `~astropy.units.Unit` or None\n The unit to convert to. By default, the unit of ``self``.\n\n Returns\n -------\n value : number or `~numpy.ndarray`\n In the requested units.\n\n \"\"\"\n if unit is None:\n unit = self.unit\n try:\n _value = value.to_value(unit)\n except AttributeError:\n # We're not a Quantity.\n # First remove two special cases (with a fast test):\n # 1) Maybe masked printing? MaskedArray with quantities does not\n # work very well, but no reason to break even repr and str.\n # 2) np.ma.masked? 
useful if we're a MaskedQuantity.\n if (value is np.ma.masked\n or (value is np.ma.masked_print_option\n and self.dtype.kind == 'O')):\n return value\n # Now, let's try a more general conversion.\n # Plain arrays will be converted to dimensionless in the process,\n # but anything with a unit attribute will use that.\n try:\n as_quantity = Quantity(value)\n _value = as_quantity.to_value(unit)\n except UnitsError:\n # last chance: if this was not something with a unit\n # and is all 0, inf, or nan, we treat it as arbitrary unit.\n if (not hasattr(value, 'unit') and\n can_have_arbitrary_unit(as_quantity.value)):\n _value = as_quantity.value\n else:\n raise\n\n if self.dtype.kind == 'i' and check_precision:\n # If, e.g., we are casting float to int, we want to fail if\n # precision is lost, but let things pass if it works.\n _value = np.array(_value, copy=False, subok=True)\n if not np.can_cast(_value.dtype, self.dtype):\n self_dtype_array = np.array(_value, self.dtype, subok=True)\n if not np.all(np.logical_or(self_dtype_array == _value,\n np.isnan(_value))):\n raise TypeError(\"cannot convert value type to array type \"\n \"without precision loss\")\n\n # Setting names to ensure things like equality work (note that\n # above will have failed already if units did not match).\n if self.dtype.names:\n _value.dtype.names = self.dtype.names\n return _value\n\n def itemset(self, *args):\n if len(args) == 0:\n raise ValueError(\"itemset must have at least one argument\")\n\n self.view(np.ndarray).itemset(*(args[:-1] +\n (self._to_own_unit(args[-1]),)))\n\n def tostring(self, order='C'):\n raise NotImplementedError(\"cannot write Quantities to string. Write \"\n \"array with q.value.tostring(...).\")\n\n def tobytes(self, order='C'):\n raise NotImplementedError(\"cannot write Quantities to string. Write \"\n \"array with q.value.tobytes(...).\")\n\n def tofile(self, fid, sep=\"\", format=\"%s\"):\n raise NotImplementedError(\"cannot write Quantities to file. Write \"\n \"array with q.value.tofile(...)\")\n\n def dump(self, file):\n raise NotImplementedError(\"cannot dump Quantities to file. Write \"\n \"array with q.value.dump()\")\n\n def dumps(self):\n raise NotImplementedError(\"cannot dump Quantities to string. Write \"\n \"array with q.value.dumps()\")\n\n # astype, byteswap, copy, view, getfield, setflags OK as is\n\n def fill(self, value):\n self.view(np.ndarray).fill(self._to_own_unit(value))\n\n # Shape manipulation: resize cannot be done (does not own data), but\n # shape, transpose, swapaxes, flatten, ravel, squeeze all OK. 
Only\n # the flat iterator needs to be overwritten, otherwise single items are\n # returned as numbers.\n @property\n def flat(self):\n \"\"\"A 1-D iterator over the Quantity array.\n\n This returns a ``QuantityIterator`` instance, which behaves the same\n as the `~numpy.flatiter` instance returned by `~numpy.ndarray.flat`,\n and is similar to, but not a subclass of, Python's built-in iterator\n object.\n \"\"\"\n return QuantityIterator(self)\n\n @flat.setter\n def flat(self, value):\n y = self.ravel()\n y[:] = value\n\n # Item selection and manipulation\n # repeat, sort, compress, diagonal OK\n def take(self, indices, axis=None, out=None, mode='raise'):\n out = super().take(indices, axis=axis, out=out, mode=mode)\n # For single elements, ndarray.take returns scalars; these\n # need a new view as a Quantity.\n if type(out) is not type(self):\n out = self._new_view(out)\n return out\n\n def put(self, indices, values, mode='raise'):\n self.view(np.ndarray).put(indices, self._to_own_unit(values), mode)\n\n def choose(self, choices, out=None, mode='raise'):\n raise NotImplementedError(\"cannot choose based on quantity. Choose \"\n \"using array with q.value.choose(...)\")\n\n # ensure we do not return indices as quantities\n def argsort(self, axis=-1, kind='quicksort', order=None):\n return self.view(np.ndarray).argsort(axis=axis, kind=kind, order=order)\n\n def searchsorted(self, v, *args, **kwargs):\n return np.searchsorted(np.array(self),\n self._to_own_unit(v, check_precision=False),\n *args, **kwargs) # avoid numpy 1.6 problem\n\n if NUMPY_LT_1_22:\n def argmax(self, axis=None, out=None):\n return self.view(np.ndarray).argmax(axis, out=out)\n\n def argmin(self, axis=None, out=None):\n return self.view(np.ndarray).argmin(axis, out=out)\n else:\n def argmax(self, axis=None, out=None, *, keepdims=False):\n return self.view(np.ndarray).argmax(axis=axis, out=out, keepdims=keepdims)\n\n def argmin(self, axis=None, out=None, *, keepdims=False):\n return self.view(np.ndarray).argmin(axis=axis, out=out, keepdims=keepdims)\n\n def __array_function__(self, function, types, args, kwargs):\n \"\"\"Wrap numpy functions, taking care of units.\n\n Parameters\n ----------\n function : callable\n Numpy function to wrap\n types : iterable of classes\n Classes that provide an ``__array_function__`` override. Can\n in principle be used to interact with other classes. Below,\n mostly passed on to `~numpy.ndarray`, which can only interact\n with subclasses.\n args : tuple\n Positional arguments provided in the function call.\n kwargs : dict\n Keyword arguments provided in the function call.\n\n Returns\n -------\n result: `~astropy.units.Quantity`, `~numpy.ndarray`\n As appropriate for the function. If the function is not\n supported, `NotImplemented` is returned, which will lead to\n a `TypeError` unless another argument overrode the function.\n\n Raises\n ------\n ~astropy.units.UnitsError\n If operands have incompatible units.\n \"\"\"\n # A function should be in one of the following sets or dicts:\n # 1. SUBCLASS_SAFE_FUNCTIONS (set), if the numpy implementation\n # supports Quantity; we pass on to ndarray.__array_function__.\n # 2. FUNCTION_HELPERS (dict), if the numpy implementation is usable\n # after converting quantities to arrays with suitable units,\n # and possibly setting units on the result.\n # 3. DISPATCHED_FUNCTIONS (dict), if the function makes sense but\n # requires a Quantity-specific implementation.\n # 4. 
UNSUPPORTED_FUNCTIONS (set), if the function does not make sense.\n # For now, since we may not yet have complete coverage, if a\n # function is in none of the above, we simply call the numpy\n # implementation.\n if function in SUBCLASS_SAFE_FUNCTIONS:\n return super().__array_function__(function, types, args, kwargs)\n\n elif function in FUNCTION_HELPERS:\n function_helper = FUNCTION_HELPERS[function]\n try:\n args, kwargs, unit, out = function_helper(*args, **kwargs)\n except NotImplementedError:\n return self._not_implemented_or_raise(function, types)\n\n result = super().__array_function__(function, types, args, kwargs)\n # Fall through to return section\n\n elif function in DISPATCHED_FUNCTIONS:\n dispatched_function = DISPATCHED_FUNCTIONS[function]\n try:\n result, unit, out = dispatched_function(*args, **kwargs)\n except NotImplementedError:\n return self._not_implemented_or_raise(function, types)\n\n # Fall through to return section\n\n elif function in UNSUPPORTED_FUNCTIONS:\n return NotImplemented\n\n else:\n warnings.warn(\"function '{}' is not known to astropy's Quantity. \"\n \"Will run it anyway, hoping it will treat ndarray \"\n \"subclasses correctly. Please raise an issue at \"\n \"https://github.com/astropy/astropy/issues. \"\n .format(function.__name__), AstropyWarning)\n\n return super().__array_function__(function, types, args, kwargs)\n\n # If unit is None, a plain array is expected (e.g., boolean), which\n # means we're done.\n # We're also done if the result was NotImplemented, which can happen\n # if other inputs/outputs override __array_function__;\n # hopefully, they can then deal with us.\n if unit is None or result is NotImplemented:\n return result\n\n return self._result_as_quantity(result, unit, out=out)\n\n def _not_implemented_or_raise(self, function, types):\n # Our function helper or dispatcher found that the function does not\n # work with Quantity. In principle, there may be another class that\n # knows what to do with us, for which we should return NotImplemented.\n # But if there is ndarray (or a non-Quantity subclass of it) around,\n # it quite likely coerces, so we should just break.\n if any(issubclass(t, np.ndarray) and not issubclass(t, Quantity)\n for t in types):\n raise TypeError(\"the Quantity implementation cannot handle {} \"\n \"with the given arguments.\"\n .format(function)) from None\n else:\n return NotImplemented\n\n # Calculation -- override ndarray methods to take into account units.\n # We use the corresponding numpy functions to evaluate the results, since\n # the methods do not always allow calling with keyword arguments.\n # For instance, np.array([0.,2.]).clip(a_min=0., a_max=1.) gives\n # TypeError: 'a_max' is an invalid keyword argument for this function.\n def _wrap_function(self, function, *args, unit=None, out=None, **kwargs):\n \"\"\"Wrap a numpy function that processes self, returning a Quantity.\n\n Parameters\n ----------\n function : callable\n Numpy function to wrap.\n args : positional arguments\n Any positional arguments to the function beyond the first argument\n (which will be set to ``self``).\n kwargs : keyword arguments\n Keyword arguments to the function.\n\n If present, the following arguments are treated specially:\n\n unit : `~astropy.units.Unit`\n Unit of the output result. 
If not given, the unit of ``self``.\n out : `~astropy.units.Quantity`\n A Quantity instance in which to store the output.\n\n Notes\n -----\n Output should always be assigned via a keyword argument, otherwise\n no proper account of the unit is taken.\n\n Returns\n -------\n out : `~astropy.units.Quantity`\n Result of the function call, with the unit set properly.\n \"\"\"\n if unit is None:\n unit = self.unit\n # Ensure we don't loop back by turning any Quantity into array views.\n args = (self.value,) + tuple((arg.value if isinstance(arg, Quantity)\n else arg) for arg in args)\n if out is not None:\n # If pre-allocated output is used, check it is suitable.\n # This also returns array view, to ensure we don't loop back.\n arrays = tuple(arg for arg in args if isinstance(arg, np.ndarray))\n kwargs['out'] = check_output(out, unit, arrays, function=function)\n # Apply the function and turn it back into a Quantity.\n result = function(*args, **kwargs)\n return self._result_as_quantity(result, unit, out)\n\n def trace(self, offset=0, axis1=0, axis2=1, dtype=None, out=None):\n return self._wrap_function(np.trace, offset, axis1, axis2, dtype,\n out=out)\n if NUMPY_LT_1_20:\n def var(self, axis=None, dtype=None, out=None, ddof=0, keepdims=False):\n return self._wrap_function(np.var, axis, dtype,\n out=out, ddof=ddof, keepdims=keepdims,\n unit=self.unit**2)\n else:\n def var(self, axis=None, dtype=None, out=None, ddof=0, keepdims=False, *, where=True):\n return self._wrap_function(np.var, axis, dtype,\n out=out, ddof=ddof, keepdims=keepdims, where=where,\n unit=self.unit**2)\n\n if NUMPY_LT_1_20:\n def std(self, axis=None, dtype=None, out=None, ddof=0, keepdims=False):\n return self._wrap_function(np.std, axis, dtype, out=out, ddof=ddof,\n keepdims=keepdims)\n else:\n def std(self, axis=None, dtype=None, out=None, ddof=0, keepdims=False, *, where=True):\n return self._wrap_function(np.std, axis, dtype, out=out, ddof=ddof,\n keepdims=keepdims, where=where)\n\n if NUMPY_LT_1_20:\n def mean(self, axis=None, dtype=None, out=None, keepdims=False):\n return self._wrap_function(np.mean, axis, dtype, out=out,\n keepdims=keepdims)\n else:\n def mean(self, axis=None, dtype=None, out=None, keepdims=False, *, where=True):\n return self._wrap_function(np.mean, axis, dtype, out=out,\n keepdims=keepdims, where=where)\n\n def round(self, decimals=0, out=None):\n return self._wrap_function(np.round, decimals, out=out)\n\n def dot(self, b, out=None):\n result_unit = self.unit * getattr(b, 'unit', dimensionless_unscaled)\n return self._wrap_function(np.dot, b, out=out, unit=result_unit)\n\n # Calculation: override methods that do not make sense.\n\n def all(self, axis=None, out=None):\n raise TypeError(\"cannot evaluate truth value of quantities. \"\n \"Evaluate array with q.value.all(...)\")\n\n def any(self, axis=None, out=None):\n raise TypeError(\"cannot evaluate truth value of quantities. \"\n \"Evaluate array with q.value.any(...)\")\n\n # Calculation: numpy functions that can be overridden with methods.\n\n def diff(self, n=1, axis=-1):\n return self._wrap_function(np.diff, n, axis)\n\n def ediff1d(self, to_end=None, to_begin=None):\n return self._wrap_function(np.ediff1d, to_end, to_begin)\n\n if NUMPY_LT_1_22:\n def nansum(self, axis=None, out=None, keepdims=False):\n return self._wrap_function(np.nansum, axis,\n out=out, keepdims=keepdims)\n else:\n # TODO: deprecate this method? 
It is not on ndarray, and we do not\n # support nanmean, etc., so why this one?\n def nansum(self, axis=None, out=None, keepdims=False, *, initial=None, where=True):\n if initial is not None:\n initial = self._to_own_unit(initial)\n return self._wrap_function(np.nansum, axis,\n out=out, keepdims=keepdims, initial=initial, where=where)\n\n def insert(self, obj, values, axis=None):\n \"\"\"\n Insert values along the given axis before the given indices and return\n a new `~astropy.units.Quantity` object.\n\n This is a thin wrapper around the `numpy.insert` function.\n\n Parameters\n ----------\n obj : int, slice or sequence of int\n Object that defines the index or indices before which ``values`` is\n inserted.\n values : array-like\n Values to insert. If the type of ``values`` is different\n from that of quantity, ``values`` is converted to the matching type.\n ``values`` should be shaped so that it can be broadcast appropriately\n The unit of ``values`` must be consistent with this quantity.\n axis : int, optional\n Axis along which to insert ``values``. If ``axis`` is None then\n the quantity array is flattened before insertion.\n\n Returns\n -------\n out : `~astropy.units.Quantity`\n A copy of quantity with ``values`` inserted. Note that the\n insertion does not occur in-place: a new quantity array is returned.\n\n Examples\n --------\n >>> import astropy.units as u\n >>> q = [1, 2] * u.m\n >>> q.insert(0, 50 * u.cm)\n <Quantity [ 0.5, 1., 2.] m>\n\n >>> q = [[1, 2], [3, 4]] * u.m\n >>> q.insert(1, [10, 20] * u.m, axis=0)\n <Quantity [[ 1., 2.],\n [ 10., 20.],\n [ 3., 4.]] m>\n\n >>> q.insert(1, 10 * u.m, axis=1)\n <Quantity [[ 1., 10., 2.],\n [ 3., 10., 4.]] m>\n\n \"\"\"\n out_array = np.insert(self.value, obj, self._to_own_unit(values), axis)\n return self._new_view(out_array)\n\n\nclass SpecificTypeQuantity(Quantity):\n \"\"\"Superclass for Quantities of specific physical type.\n\n Subclasses of these work just like :class:`~astropy.units.Quantity`, except\n that they are for specific physical types (and may have methods that are\n only appropriate for that type). Astropy examples are\n :class:`~astropy.coordinates.Angle` and\n :class:`~astropy.coordinates.Distance`\n\n At a minimum, subclasses should set ``_equivalent_unit`` to the unit\n associated with the physical type.\n \"\"\"\n # The unit for the specific physical type. Instances can only be created\n # with units that are equivalent to this.\n _equivalent_unit = None\n\n # The default unit used for views. 
Even with `None`, views of arrays\n # without units are possible, but will have an uninitialized unit.\n _unit = None\n\n # Default unit for initialization through the constructor.\n _default_unit = None\n\n # ensure that we get precedence over our superclass.\n __array_priority__ = Quantity.__array_priority__ + 10\n\n def __quantity_subclass__(self, unit):\n if unit.is_equivalent(self._equivalent_unit):\n return type(self), True\n else:\n return super().__quantity_subclass__(unit)[0], False\n\n def _set_unit(self, unit):\n if unit is None or not unit.is_equivalent(self._equivalent_unit):\n raise UnitTypeError(\n \"{} instances require units equivalent to '{}'\"\n .format(type(self).__name__, self._equivalent_unit) +\n (\", but no unit was given.\" if unit is None else\n f\", so cannot set it to '{unit}'.\"))\n\n super()._set_unit(unit)\n\n\ndef isclose(a, b, rtol=1.e-5, atol=None, equal_nan=False, **kwargs):\n \"\"\"\n Return a boolean array where two arrays are element-wise equal\n within a tolerance.\n\n Parameters\n ----------\n a, b : array-like or `~astropy.units.Quantity`\n Input values or arrays to compare\n rtol : array-like or `~astropy.units.Quantity`\n The relative tolerance for the comparison, which defaults to\n ``1e-5``. If ``rtol`` is a :class:`~astropy.units.Quantity`,\n then it must be dimensionless.\n atol : number or `~astropy.units.Quantity`\n The absolute tolerance for the comparison. The units (or lack\n thereof) of ``a``, ``b``, and ``atol`` must be consistent with\n each other. If `None`, ``atol`` defaults to zero in the\n appropriate units.\n equal_nan : `bool`\n Whether to compare NaN’s as equal. If `True`, NaNs in ``a`` will\n be considered equal to NaN’s in ``b``.\n\n Notes\n -----\n This is a :class:`~astropy.units.Quantity`-aware version of\n :func:`numpy.isclose`. However, this differs from the `numpy` function in\n that the default for the absolute tolerance here is zero instead of\n ``atol=1e-8`` in `numpy`, as there is no natural way to set a default\n *absolute* tolerance given two inputs that may have differently scaled\n units.\n\n Raises\n ------\n `~astropy.units.UnitsError`\n If the dimensions of ``a``, ``b``, or ``atol`` are incompatible,\n or if ``rtol`` is not dimensionless.\n\n See also\n --------\n allclose\n \"\"\"\n unquantified_args = _unquantify_allclose_arguments(a, b, rtol, atol)\n return np.isclose(*unquantified_args, equal_nan=equal_nan, **kwargs)\n\n\ndef allclose(a, b, rtol=1.e-5, atol=None, equal_nan=False, **kwargs) -> bool:\n \"\"\"\n Whether two arrays are element-wise equal within a tolerance.\n\n Parameters\n ----------\n a, b : array-like or `~astropy.units.Quantity`\n Input values or arrays to compare\n rtol : array-like or `~astropy.units.Quantity`\n The relative tolerance for the comparison, which defaults to\n ``1e-5``. If ``rtol`` is a :class:`~astropy.units.Quantity`,\n then it must be dimensionless.\n atol : number or `~astropy.units.Quantity`\n The absolute tolerance for the comparison. The units (or lack\n thereof) of ``a``, ``b``, and ``atol`` must be consistent with\n each other. If `None`, ``atol`` defaults to zero in the\n appropriate units.\n equal_nan : `bool`\n Whether to compare NaN’s as equal. If `True`, NaNs in ``a`` will\n be considered equal to NaN’s in ``b``.\n\n Notes\n -----\n This is a :class:`~astropy.units.Quantity`-aware version of\n :func:`numpy.allclose`. 
However, this differs from the `numpy` function in\n that the default for the absolute tolerance here is zero instead of\n ``atol=1e-8`` in `numpy`, as there is no natural way to set a default\n *absolute* tolerance given two inputs that may have differently scaled\n units.\n\n Raises\n ------\n `~astropy.units.UnitsError`\n If the dimensions of ``a``, ``b``, or ``atol`` are incompatible,\n or if ``rtol`` is not dimensionless.\n\n See also\n --------\n isclose\n \"\"\"\n unquantified_args = _unquantify_allclose_arguments(a, b, rtol, atol)\n return np.allclose(*unquantified_args, equal_nan=equal_nan, **kwargs)\n\n\ndef _unquantify_allclose_arguments(actual, desired, rtol, atol):\n actual = Quantity(actual, subok=True, copy=False)\n\n desired = Quantity(desired, subok=True, copy=False)\n try:\n desired = desired.to(actual.unit)\n except UnitsError:\n raise UnitsError(\n f\"Units for 'desired' ({desired.unit}) and 'actual' \"\n f\"({actual.unit}) are not convertible\"\n )\n\n if atol is None:\n # By default, we assume an absolute tolerance of zero in the\n # appropriate units. The default value of None for atol is\n # needed because the units of atol must be consistent with the\n # units for a and b.\n atol = Quantity(0)\n else:\n atol = Quantity(atol, subok=True, copy=False)\n try:\n atol = atol.to(actual.unit)\n except UnitsError:\n raise UnitsError(\n f\"Units for 'atol' ({atol.unit}) and 'actual' \"\n f\"({actual.unit}) are not convertible\"\n )\n\n rtol = Quantity(rtol, subok=True, copy=False)\n try:\n rtol = rtol.to(dimensionless_unscaled)\n except Exception:\n raise UnitsError(\"'rtol' should be dimensionless\")\n\n return actual.value, desired.value, rtol.value, atol.value\n", "astropy/units/structured.py": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\"\"\"\nThis module defines structured units and quantities.\n\"\"\"\n\n# Standard library\nimport operator\n\nimport numpy as np\n\nfrom .core import UNITY, Unit, UnitBase\n\n__all__ = ['StructuredUnit']\n\n\nDTYPE_OBJECT = np.dtype('O')\n\n\ndef _names_from_dtype(dtype):\n \"\"\"Recursively extract field names from a dtype.\"\"\"\n names = []\n for name in dtype.names:\n subdtype = dtype.fields[name][0]\n if subdtype.names:\n names.append([name, _names_from_dtype(subdtype)])\n else:\n names.append(name)\n return tuple(names)\n\n\ndef _normalize_names(names):\n \"\"\"Recursively normalize, inferring upper level names for unadorned tuples.\n\n Generally, we want the field names to be organized like dtypes, as in\n ``(['pv', ('p', 'v')], 't')``. But we automatically infer upper\n field names if the list is absent from items like ``(('p', 'v'), 't')``,\n by concatenating the names inside the tuple.\n \"\"\"\n result = []\n for name in names:\n if isinstance(name, str) and len(name) > 0:\n result.append(name)\n elif (isinstance(name, list)\n and len(name) == 2\n and isinstance(name[0], str) and len(name[0]) > 0\n and isinstance(name[1], tuple) and len(name[1]) > 0):\n result.append([name[0], _normalize_names(name[1])])\n elif isinstance(name, tuple) and len(name) > 0:\n new_tuple = _normalize_names(name)\n result.append([''.join([(i[0] if isinstance(i, list) else i)\n for i in new_tuple]), new_tuple])\n else:\n raise ValueError(f'invalid entry {name!r}. 
Should be a name, '\n 'tuple of names, or 2-element list of the '\n 'form [name, tuple of names].')\n\n return tuple(result)\n\n\nclass StructuredUnit:\n \"\"\"Container for units for a structured Quantity.\n\n Parameters\n ----------\n units : unit-like, tuple of unit-like, or `~astropy.units.StructuredUnit`\n Tuples can be nested. If a `~astropy.units.StructuredUnit` is passed\n in, it will be returned unchanged unless different names are requested.\n names : tuple of str, tuple or list; `~numpy.dtype`; or `~astropy.units.StructuredUnit`, optional\n Field names for the units, possibly nested. Can be inferred from a\n structured `~numpy.dtype` or another `~astropy.units.StructuredUnit`.\n For nested tuples, by default the name of the upper entry will be the\n concatenation of the names of the lower levels. One can pass in a\n list with the upper-level name and a tuple of lower-level names to\n avoid this. For tuples, not all levels have to be given; for any level\n not passed in, default field names of 'f0', 'f1', etc., will be used.\n\n Notes\n -----\n It is recommended to initialze the class indirectly, using\n `~astropy.units.Unit`. E.g., ``u.Unit('AU,AU/day')``.\n\n When combined with a structured array to produce a structured\n `~astropy.units.Quantity`, array field names will take precedence.\n Generally, passing in ``names`` is needed only if the unit is used\n unattached to a `~astropy.units.Quantity` and one needs to access its\n fields.\n\n Examples\n --------\n Various ways to initialize a `~astropy.units.StructuredUnit`::\n\n >>> import astropy.units as u\n >>> su = u.Unit('(AU,AU/day),yr')\n >>> su\n Unit(\"((AU, AU / d), yr)\")\n >>> su.field_names\n (['f0', ('f0', 'f1')], 'f1')\n >>> su['f1']\n Unit(\"yr\")\n >>> su2 = u.StructuredUnit(((u.AU, u.AU/u.day), u.yr), names=(('p', 'v'), 't'))\n >>> su2 == su\n True\n >>> su2.field_names\n (['pv', ('p', 'v')], 't')\n >>> su3 = u.StructuredUnit((su2['pv'], u.day), names=(['p_v', ('p', 'v')], 't'))\n >>> su3.field_names\n (['p_v', ('p', 'v')], 't')\n >>> su3.keys()\n ('p_v', 't')\n >>> su3.values()\n (Unit(\"(AU, AU / d)\"), Unit(\"d\"))\n\n Structured units share most methods with regular units::\n\n >>> su.physical_type\n ((PhysicalType('length'), PhysicalType({'speed', 'velocity'})), PhysicalType('time'))\n >>> su.si\n Unit(\"((1.49598e+11 m, 1.73146e+06 m / s), 3.15576e+07 s)\")\n\n \"\"\"\n def __new__(cls, units, names=None):\n dtype = None\n if names is not None:\n if isinstance(names, StructuredUnit):\n dtype = names._units.dtype\n names = names.field_names\n elif isinstance(names, np.dtype):\n if not names.fields:\n raise ValueError('dtype should be structured, with fields.')\n dtype = np.dtype([(name, DTYPE_OBJECT) for name in names.names])\n names = _names_from_dtype(names)\n else:\n if not isinstance(names, tuple):\n names = (names,)\n names = _normalize_names(names)\n\n if not isinstance(units, tuple):\n units = Unit(units)\n if isinstance(units, StructuredUnit):\n # Avoid constructing a new StructuredUnit if no field names\n # are given, or if all field names are the same already anyway.\n if names is None or units.field_names == names:\n return units\n\n # Otherwise, turn (the upper level) into a tuple, for renaming.\n units = units.values()\n else:\n # Single regular unit: make a tuple for iteration below.\n units = (units,)\n\n if names is None:\n names = tuple(f'f{i}' for i in range(len(units)))\n\n elif len(units) != len(names):\n raise ValueError(\"lengths of units and field names must match.\")\n\n converted = 
[]\n for unit, name in zip(units, names):\n if isinstance(name, list):\n # For list, the first item is the name of our level,\n # and the second another tuple of names, i.e., we recurse.\n unit = cls(unit, name[1])\n name = name[0]\n else:\n # We are at the lowest level. Check unit.\n unit = Unit(unit)\n if dtype is not None and isinstance(unit, StructuredUnit):\n raise ValueError(\"units do not match in depth with field \"\n \"names from dtype or structured unit.\")\n\n converted.append(unit)\n\n self = super().__new__(cls)\n if dtype is None:\n dtype = np.dtype([((name[0] if isinstance(name, list) else name),\n DTYPE_OBJECT) for name in names])\n # Decay array to void so we can access by field name and number.\n self._units = np.array(tuple(converted), dtype)[()]\n return self\n\n def __getnewargs__(self):\n \"\"\"When de-serializing, e.g. pickle, start with a blank structure.\"\"\"\n return (), None\n\n @property\n def field_names(self):\n \"\"\"Possibly nested tuple of the field names of the parts.\"\"\"\n return tuple(([name, unit.field_names]\n if isinstance(unit, StructuredUnit) else name)\n for name, unit in self.items())\n\n # Allow StructuredUnit to be treated as an (ordered) mapping.\n def __len__(self):\n return len(self._units.dtype.names)\n\n def __getitem__(self, item):\n # Since we are based on np.void, indexing by field number works too.\n return self._units[item]\n\n def values(self):\n return self._units.item()\n\n def keys(self):\n return self._units.dtype.names\n\n def items(self):\n return tuple(zip(self._units.dtype.names, self._units.item()))\n\n def __iter__(self):\n yield from self._units.dtype.names\n\n # Helpers for methods below.\n def _recursively_apply(self, func, cls=None):\n \"\"\"Apply func recursively.\n\n Parameters\n ----------\n func : callable\n Function to apply to all parts of the structured unit,\n recursing as needed.\n cls : type, optional\n If given, should be a subclass of `~numpy.void`. By default,\n will return a new `~astropy.units.StructuredUnit` instance.\n \"\"\"\n results = np.array(tuple(func(part) for part in self.values()),\n self._units.dtype)[()]\n if cls is not None:\n return results.view((cls, results.dtype))\n\n # Short-cut; no need to interpret field names, etc.\n result = super().__new__(self.__class__)\n result._units = results\n return result\n\n def _recursively_get_dtype(self, value, enter_lists=True):\n \"\"\"Get structured dtype according to value, using our field names.\n\n This is useful since ``np.array(value)`` would treat tuples as lower\n levels of the array, rather than as elements of a structured array.\n The routine does presume that the type of the first tuple is\n representative of the rest. Used in ``_get_converter``.\n\n For the special value of ``UNITY``, all fields are assumed to be 1.0,\n and hence this will return an all-float dtype.\n\n \"\"\"\n if enter_lists:\n while isinstance(value, list):\n value = value[0]\n if value is UNITY:\n value = (UNITY,) * len(self)\n elif not isinstance(value, tuple) or len(self) != len(value):\n raise ValueError(f\"cannot interpret value {value} for unit {self}.\")\n descr = []\n for (name, unit), part in zip(self.items(), value):\n if isinstance(unit, StructuredUnit):\n descr.append(\n (name, unit._recursively_get_dtype(part, enter_lists=False)))\n else:\n # Got a part associated with a regular unit. 
Gets its dtype.\n # Like for Quantity, we cast integers to float.\n part = np.array(part)\n part_dtype = part.dtype\n if part_dtype.kind in 'iu':\n part_dtype = np.dtype(float)\n descr.append((name, part_dtype, part.shape))\n return np.dtype(descr)\n\n @property\n def si(self):\n \"\"\"The `StructuredUnit` instance in SI units.\"\"\"\n return self._recursively_apply(operator.attrgetter('si'))\n\n @property\n def cgs(self):\n \"\"\"The `StructuredUnit` instance in cgs units.\"\"\"\n return self._recursively_apply(operator.attrgetter('cgs'))\n\n # Needed to pass through Unit initializer, so might as well use it.\n def _get_physical_type_id(self):\n return self._recursively_apply(\n operator.methodcaller('_get_physical_type_id'), cls=Structure)\n\n @property\n def physical_type(self):\n \"\"\"Physical types of all the fields.\"\"\"\n return self._recursively_apply(\n operator.attrgetter('physical_type'), cls=Structure)\n\n def decompose(self, bases=set()):\n \"\"\"The `StructuredUnit` composed of only irreducible units.\n\n Parameters\n ----------\n bases : sequence of `~astropy.units.UnitBase`, optional\n The bases to decompose into. When not provided,\n decomposes down to any irreducible units. When provided,\n the decomposed result will only contain the given units.\n This will raises a `UnitsError` if it's not possible\n to do so.\n\n Returns\n -------\n `~astropy.units.StructuredUnit`\n With the unit for each field containing only irreducible units.\n \"\"\"\n return self._recursively_apply(\n operator.methodcaller('decompose', bases=bases))\n\n def is_equivalent(self, other, equivalencies=[]):\n \"\"\"`True` if all fields are equivalent to the other's fields.\n\n Parameters\n ----------\n other : `~astropy.units.StructuredUnit`\n The structured unit to compare with, or what can initialize one.\n equivalencies : list of tuple, optional\n A list of equivalence pairs to try if the units are not\n directly convertible. See :ref:`unit_equivalencies`.\n The list will be applied to all fields.\n\n Returns\n -------\n bool\n \"\"\"\n try:\n other = StructuredUnit(other)\n except Exception:\n return False\n\n if len(self) != len(other):\n return False\n\n for self_part, other_part in zip(self.values(), other.values()):\n if not self_part.is_equivalent(other_part,\n equivalencies=equivalencies):\n return False\n\n return True\n\n def _get_converter(self, other, equivalencies=[]):\n if not isinstance(other, type(self)):\n other = self.__class__(other, names=self)\n\n converters = [self_part._get_converter(other_part,\n equivalencies=equivalencies)\n for (self_part, other_part) in zip(self.values(),\n other.values())]\n\n def converter(value):\n if not hasattr(value, 'dtype'):\n value = np.array(value, self._recursively_get_dtype(value))\n result = np.empty_like(value)\n for name, converter_ in zip(result.dtype.names, converters):\n result[name] = converter_(value[name])\n # Index with empty tuple to decay array scalars to numpy void.\n return result if result.shape else result[()]\n\n return converter\n\n def to(self, other, value=np._NoValue, equivalencies=[]):\n \"\"\"Return values converted to the specified unit.\n\n Parameters\n ----------\n other : `~astropy.units.StructuredUnit`\n The unit to convert to. If necessary, will be converted to\n a `~astropy.units.StructuredUnit` using the dtype of ``value``.\n value : array-like, optional\n Value(s) in the current unit to be converted to the\n specified unit. 
If a sequence, the first element must have\n entries of the correct type to represent all elements (i.e.,\n not have, e.g., a ``float`` where other elements have ``complex``).\n If not given, assumed to have 1. in all fields.\n equivalencies : list of tuple, optional\n A list of equivalence pairs to try if the units are not\n directly convertible. See :ref:`unit_equivalencies`.\n This list is in addition to possible global defaults set by, e.g.,\n `set_enabled_equivalencies`.\n Use `None` to turn off all equivalencies.\n\n Returns\n -------\n values : scalar or array\n Converted value(s).\n\n Raises\n ------\n UnitsError\n If units are inconsistent\n \"\"\"\n if value is np._NoValue:\n # We do not have UNITY as a default, since then the docstring\n # would list 1.0 as default, yet one could not pass that in.\n value = UNITY\n return self._get_converter(other, equivalencies=equivalencies)(value)\n\n def to_string(self, format='generic'):\n \"\"\"Output the unit in the given format as a string.\n\n Units are separated by commas.\n\n Parameters\n ----------\n format : `astropy.units.format.Base` instance or str\n The name of a format or a formatter object. If not\n provided, defaults to the generic format.\n\n Notes\n -----\n Structured units can be written to all formats, but can be\n re-read only with 'generic'.\n\n \"\"\"\n parts = [part.to_string(format) for part in self.values()]\n out_fmt = '({})' if len(self) > 1 else '({},)'\n if format.startswith('latex'):\n # Strip $ from parts and add them on the outside.\n parts = [part[1:-1] for part in parts]\n out_fmt = '$' + out_fmt + '$'\n return out_fmt.format(', '.join(parts))\n\n def _repr_latex_(self):\n return self.to_string('latex')\n\n __array_ufunc__ = None\n\n def __mul__(self, other):\n if isinstance(other, str):\n try:\n other = Unit(other, parse_strict='silent')\n except Exception:\n return NotImplemented\n if isinstance(other, UnitBase):\n new_units = tuple(part * other for part in self.values())\n return self.__class__(new_units, names=self)\n if isinstance(other, StructuredUnit):\n return NotImplemented\n\n # Anything not like a unit, try initialising as a structured quantity.\n try:\n from .quantity import Quantity\n return Quantity(other, unit=self)\n except Exception:\n return NotImplemented\n\n def __rmul__(self, other):\n return self.__mul__(other)\n\n def __truediv__(self, other):\n if isinstance(other, str):\n try:\n other = Unit(other, parse_strict='silent')\n except Exception:\n return NotImplemented\n\n if isinstance(other, UnitBase):\n new_units = tuple(part / other for part in self.values())\n return self.__class__(new_units, names=self)\n return NotImplemented\n\n def __rlshift__(self, m):\n try:\n from .quantity import Quantity\n return Quantity(m, self, copy=False, subok=True)\n except Exception:\n return NotImplemented\n\n def __str__(self):\n return self.to_string()\n\n def __repr__(self):\n return f'Unit(\"{self.to_string()}\")'\n\n def __eq__(self, other):\n try:\n other = StructuredUnit(other)\n except Exception:\n return NotImplemented\n\n return self.values() == other.values()\n\n def __ne__(self, other):\n if not isinstance(other, type(self)):\n try:\n other = StructuredUnit(other)\n except Exception:\n return NotImplemented\n\n return self.values() != other.values()\n\n\nclass Structure(np.void):\n \"\"\"Single element structure for physical type IDs, etc.\n\n Behaves like a `~numpy.void` and thus mostly like a tuple which can also\n be indexed with field names, but overrides ``__eq__`` and ``__ne__`` 
to\n compare only the contents, not the field names. Furthermore, this way no\n `FutureWarning` about comparisons is given.\n\n \"\"\"\n # Note that it is important for physical type IDs to not be stored in a\n # tuple, since then the physical types would be treated as alternatives in\n # :meth:`~astropy.units.UnitBase.is_equivalent`. (Of course, in that\n # case, they could also not be indexed by name.)\n\n def __eq__(self, other):\n if isinstance(other, np.void):\n other = other.item()\n\n return self.item() == other\n\n def __ne__(self, other):\n if isinstance(other, np.void):\n other = other.item()\n\n return self.item() != other\n", "docs/changes/units/13676.api.rst": null}
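To make the StructuredUnit source captured in the record above easier to follow, here is a short hand-written sketch of the ``to`` and ``to_string`` methods it documents; the field names ("p", "v") and the values are illustrative assumptions, not taken from the record:

    import numpy as np
    import astropy.units as u

    # Two-field structured unit mirroring the docstrings above; names are assumed.
    pv_unit = u.StructuredUnit((u.km, u.km / u.s), names=("p", "v"))

    values = np.array([(1.0, 0.1), (2.0, 0.2)],
                      dtype=[("p", "f8"), ("v", "f8")])

    # StructuredUnit.to() converts every field at once.
    converted = pv_unit.to((u.m, u.m / u.s), values)

    print(pv_unit.to_string())   # e.g. "(km, km / s)"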
|
diff --git a/docs/changes/units/13676.api.rst b/docs/changes/units/13676.api.rst
new file mode 100644
index 000000000000..966372f06cef
--- /dev/null
+++ b/docs/changes/units/13676.api.rst
@@ -0,0 +1,2 @@
+When ``Quantity`` is constructed from a structured array and ``unit`` is
+``None``, the default unit is now structured like the input data.
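As a hedged illustration of the behaviour this changelog entry describes (constructing a ``Quantity`` from a structured array with ``unit=None``), assuming an astropy version that contains the change:

    import numpy as np
    import astropy.units as u

    data = np.array([(1.0, 2.0)], dtype=[("p", "f8"), ("v", "f8")])
    q = u.Quantity(data)   # no unit given

    # Per the changelog, q.unit is now a StructuredUnit with one
    # (dimensionless) field per field of the input dtype.
    print(q.unit)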
|
{"astropy/units/structured.py": [{"type": "function", "name": "_structured_unit_like_dtype", "lines": [522, 550], "signature": "def _structured_unit_like_dtype(unit: UnitBase | StructuredUnit, dtype: np.dtype) -> StructuredUnit:", "doc": "Make a `StructuredUnit` of one unit, with the structure of a `numpy.dtype`.\n\nParameters\n----------\nunit : UnitBase\n The unit that will be filled into the structure.\ndtype : `numpy.dtype`\n The structure for the StructuredUnit.\n\nReturns\n-------\nStructuredUnit"}]}
|
5.0
|
["astropy/units/tests/test_structured.py::TestStructuredUnitBasics::test_initialization_and_keying", "astropy/units/tests/test_structured.py::TestStructuredUnitBasics::test_recursive_initialization", "astropy/units/tests/test_structured.py::TestStructuredUnitBasics::test_extreme_recursive_initialization", "astropy/units/tests/test_structured.py::TestStructuredUnitBasics::test_initialization_names_invalid_list_errors[names0-['p',", "astropy/units/tests/test_structured.py::TestStructuredUnitBasics::test_initialization_names_invalid_list_errors[names1-['pv',", "astropy/units/tests/test_structured.py::TestStructuredUnitBasics::test_initialization_names_invalid_list_errors[names2-['pv',", "astropy/units/tests/test_structured.py::TestStructuredUnitBasics::test_initialization_names_invalid_list_errors[names3-()]", "astropy/units/tests/test_structured.py::TestStructuredUnitBasics::test_initialization_names_invalid_list_errors[names4-None]", "astropy/units/tests/test_structured.py::TestStructuredUnitBasics::test_initialization_names_invalid_list_errors[names5-'']", "astropy/units/tests/test_structured.py::TestStructuredUnitBasics::test_looks_like_unit", "astropy/units/tests/test_structured.py::TestStructuredUnitBasics::test_initialize_with_float_dtype", "astropy/units/tests/test_structured.py::TestStructuredUnitBasics::test_initialize_with_structured_unit_for_names", "astropy/units/tests/test_structured.py::TestStructuredUnitBasics::test_initialize_single_field", "astropy/units/tests/test_structured.py::TestStructuredUnitBasics::test_equality", "astropy/units/tests/test_structured.py::TestStructuredUnitBasics::test_parsing", "astropy/units/tests/test_structured.py::TestStructuredUnitBasics::test_to_string", "astropy/units/tests/test_structured.py::TestStructuredUnitBasics::test_str", "astropy/units/tests/test_structured.py::TestStructuredUnitBasics::test_repr", "astropy/units/tests/test_structured.py::TestStructuredUnitsCopyPickle::test_copy", "astropy/units/tests/test_structured.py::TestStructuredUnitsCopyPickle::test_deepcopy", "astropy/units/tests/test_structured.py::TestStructuredUnitsCopyPickle::test_pickle[0]", "astropy/units/tests/test_structured.py::TestStructuredUnitsCopyPickle::test_pickle[1]", "astropy/units/tests/test_structured.py::TestStructuredUnitsCopyPickle::test_pickle[-1]", "astropy/units/tests/test_structured.py::TestStructuredUnitAsMapping::test_len", "astropy/units/tests/test_structured.py::TestStructuredUnitAsMapping::test_keys", "astropy/units/tests/test_structured.py::TestStructuredUnitAsMapping::test_values", "astropy/units/tests/test_structured.py::TestStructuredUnitAsMapping::test_field_names", "astropy/units/tests/test_structured.py::TestStructuredUnitAsMapping::test_as_iterable[list]", "astropy/units/tests/test_structured.py::TestStructuredUnitAsMapping::test_as_iterable[set]", "astropy/units/tests/test_structured.py::TestStructuredUnitAsMapping::test_as_dict", "astropy/units/tests/test_structured.py::TestStructuredUnitAsMapping::test_contains", "astropy/units/tests/test_structured.py::TestStructuredUnitAsMapping::test_setitem_fails", "astropy/units/tests/test_structured.py::TestStructuredUnitMethods::test_physical_type_id", "astropy/units/tests/test_structured.py::TestStructuredUnitMethods::test_physical_type", "astropy/units/tests/test_structured.py::TestStructuredUnitMethods::test_si", "astropy/units/tests/test_structured.py::TestStructuredUnitMethods::test_cgs", "astropy/units/tests/test_structured.py::TestStructuredUnitMethods::test_decompose", 
"astropy/units/tests/test_structured.py::TestStructuredUnitMethods::test_is_equivalent", "astropy/units/tests/test_structured.py::TestStructuredUnitMethods::test_conversion", "astropy/units/tests/test_structured.py::TestStructuredUnitArithmatic::test_multiplication", "astropy/units/tests/test_structured.py::TestStructuredUnitArithmatic::test_division", "astropy/units/tests/test_structured.py::TestStructuredQuantity::test_initialization_and_keying", "astropy/units/tests/test_structured.py::TestStructuredQuantity::test_initialization_with_unit_tuples", "astropy/units/tests/test_structured.py::TestStructuredQuantity::test_initialization_with_string", "astropy/units/tests/test_structured.py::TestStructuredQuantity::test_initialization_by_multiplication_with_unit", "astropy/units/tests/test_structured.py::TestStructuredQuantity::test_initialization_by_shifting_to_unit", "astropy/units/tests/test_structured.py::TestStructuredQuantity::test_initialization_without_unit", "astropy/units/tests/test_structured.py::TestStructuredQuantity::test_getitem", "astropy/units/tests/test_structured.py::TestStructuredQuantity::test_value", "astropy/units/tests/test_structured.py::TestStructuredQuantity::test_conversion", "astropy/units/tests/test_structured.py::TestStructuredQuantity::test_conversion_via_lshift", "astropy/units/tests/test_structured.py::TestStructuredQuantity::test_inplace_conversion", "astropy/units/tests/test_structured.py::TestStructuredQuantity::test_si", "astropy/units/tests/test_structured.py::TestStructuredQuantity::test_cgs", "astropy/units/tests/test_structured.py::TestStructuredQuantity::test_equality", "astropy/units/tests/test_structured.py::TestStructuredQuantity::test_setitem", "astropy/units/tests/test_structured.py::TestStructuredQuantityFunctions::test_empty_like", "astropy/units/tests/test_structured.py::TestStructuredQuantityFunctions::test_zeros_ones_like[zeros_like]", "astropy/units/tests/test_structured.py::TestStructuredQuantityFunctions::test_zeros_ones_like[ones_like]", "astropy/units/tests/test_structured.py::TestStructuredQuantityFunctions::test_structured_to_unstructured", "astropy/units/tests/test_structured.py::TestStructuredQuantityFunctions::test_unstructured_to_structured", "astropy/units/tests/test_structured.py::TestStructuredSpecificTypeQuantity::test_init", "astropy/units/tests/test_structured.py::TestStructuredSpecificTypeQuantity::test_error_on_non_equivalent_unit", "astropy/units/tests/test_structured.py::TestStructuredLogUnit::test_unit_initialization", "astropy/units/tests/test_structured.py::TestStructuredLogUnit::test_quantity_initialization", "astropy/units/tests/test_structured.py::TestStructuredLogUnit::test_quantity_si", "astropy/units/tests/test_structured.py::TestStructuredMaskedQuantity::test_init", "astropy/units/tests/test_structured.py::TestStructuredMaskedQuantity::test_slicing", "astropy/units/tests/test_structured.py::TestStructuredMaskedQuantity::test_conversion", "astropy/units/tests/test_structured.py::TestStructuredMaskedQuantity::test_si"]
|
[]
|
7cbba866a8c5749b90a5cb4f9877ddfad2d36037
|
{"first_commit_time": 1663269431.0, "pr_title": "structured unit / quantity stuff", "pr_body": "Signed-off-by: Nathaniel Starkman (@nstarman) <[email protected]>\r\n\r\nNecessary for https://github.com/astropy/astropy/pull/13669\r\n\r\n- Quantity now builds a structured unit if ``unit=None`` in the constructor and the value is structured.\r\n\r\nTODO:\r\n- [x] changelog\r\n- [x] tests\r\n\r\n### Checklist for package maintainer(s)\r\n<!-- This section is to be filled by package maintainer(s) who will\r\nreview this pull request. -->\r\n\r\nThis checklist is meant to remind the package maintainer(s) who will review this pull request of some common things to look for. This list is not exhaustive.\r\n\r\n- [x] Do the proposed changes actually accomplish desired goals?\r\n- [ ] Do the proposed changes follow the [Astropy coding guidelines](https://docs.astropy.org/en/latest/development/codeguide.html)?\r\n- [x] Are tests added/updated as required? If so, do they follow the [Astropy testing guidelines](https://docs.astropy.org/en/latest/development/testguide.html)?\r\n- [x] Are docs added/updated as required? If so, do they follow the [Astropy documentation guidelines](https://docs.astropy.org/en/latest/development/docguide.html#astropy-documentation-rules-and-guidelines)?\r\n- [x] Is rebase and/or squash necessary? If so, please provide the author with appropriate instructions. Also see [\"When to rebase and squash commits\"](https://docs.astropy.org/en/latest/development/when_to_rebase.html).\r\n- [x] Did the CI pass? If no, are the failures related? If you need to run daily and weekly cron jobs as part of the PR, please apply the `Extra CI` label. Codestyle issues can be fixed by the [bot](https://docs.astropy.org/en/latest/development/workflow/development_workflow.html#pre-commit).\r\n- [x] Is a change log needed? If yes, did the change log check pass? If no, add the `no-changelog-entry-needed` label. If this is a manual backport, use the `skip-changelog-checks` label unless special changelog handling is necessary.\r\n- [x] Is this a big PR that makes a \"What's new?\" entry worthwhile and if so, is (1) a \"what's new\" entry included in this PR and (2) the \"whatsnew-needed\" label applied?\r\n- [x] Is a milestone set? Milestone must be set but `astropy-bot` check might be missing; do not let the green checkmark fool you.\r\n- [x] At the time of adding the milestone, if the milestone set requires a backport to release branch(es), apply the appropriate `backport-X.Y.x` label(s) *before* merge.\r\n", "pr_timeline": [{"time": 1663269688.0, "comment": "\ud83d\udc4b Thank you for your draft pull request! Do you know that you can use `[ci skip]` or `[skip ci]` in your commit messages to skip running continuous integration tests until you are ready?"}], "issues": {}}
|
astropy/astropy
| 14,878
|
https://github.com/astropy/astropy/pull/14878
|
astropy__astropy-14878
|
[]
|
88790514bdf248e43c2fb15ee18cfd3390846145
|
diff --git a/astropy/table/row.py b/astropy/table/row.py
index 03b7d13219b9..cab570e837c6 100644
--- a/astropy/table/row.py
+++ b/astropy/table/row.py
@@ -106,6 +106,35 @@ def __iter__(self):
for col in self._table.columns.values():
yield col[index]
+ def get(self, key, default=None, /):
+ """Return the value for key if key is in the columns, else default.
+
+ Parameters
+ ----------
+ key : `str`, positional-only
+ The name of the column to look for.
+ default : `object`, optional, positional-only
+ The value to return if the ``key`` is not among the columns.
+
+ Returns
+ -------
+ `object`
+ The value in the ``key`` column of the row if present,
+ ``default`` otherwise.
+
+ Examples
+ --------
+ >>> from astropy.table import Table
+ >>> t = Table({"a": [2, 3, 5], "b": [7, 11, 13]})
+ >>> t[0].get("a")
+ 2
+ >>> t[1].get("b", 0)
+ 11
+ >>> t[2].get("c", 0)
+ 0
+ """
+ return self[key] if key in self._table.columns else default
+
def keys(self):
return self._table.columns.keys()
diff --git a/docs/changes/table/14878.feature.rst b/docs/changes/table/14878.feature.rst
new file mode 100644
index 000000000000..768574409629
--- /dev/null
+++ b/docs/changes/table/14878.feature.rst
@@ -0,0 +1,3 @@
+The new ``Row.get()`` method, analogous to ``dict.get()``, returns the value of
+the specified column from the row if the column is present, otherwise it returns a
+fallback value, which by default is ``None``.
diff --git a/docs/table/access_table.rst b/docs/table/access_table.rst
index 24d0f561cab2..263ecafc9713 100644
--- a/docs/table/access_table.rst
+++ b/docs/table/access_table.rst
@@ -315,6 +315,36 @@ structured array by creating a copy or reference with :func:`numpy.array`::
>>> data = np.array(t, copy=False) # reference to data in t
+Possibly missing columns
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+In some cases it might not be guaranteed that a column is present in a table,
+but there does exist a good default value that can be used if it is not. The
+columns of a |Table| can be represented as a :class:`dict` subclass instance
+through the ``columns`` attribute, which means that a replacement for missing
+columns can be provided using the :meth:`dict.get` method::
+
+ >>> t.columns.get("b", np.zeros(len(t)))
+ <Column name='b' dtype='int32' length=5>
+ 1
+ 4
+ 7
+ 10
+ 13
+ >>> t.columns.get("x", np.zeros(len(t)))
+ array([0., 0., 0., 0., 0.])
+
+In case of a single |Row| it is possible to use its
+:meth:`~astropy.table.Row.get` method without having to go through
+``columns``::
+
+ >>> row = t[2]
+ >>> row.get("c", -1)
+ 8
+ >>> row.get("y", -1)
+ -1
+
+
Table Equality
--------------
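A typical use of the ``Row.get()`` method added by this patch is guarding against optional columns while iterating rows; the table below is made up for illustration and assumes an astropy build that includes the patch:

    from astropy.table import Table

    # Hypothetical catalogue in which a 'flux_err' column may be missing.
    t = Table({"flux": [1.2, 3.4, 5.6]})

    for row in t:
        err = row.get("flux_err", 0.0)   # falls back to 0.0 when the column is absent
        print(row["flux"], err)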
|
diff --git a/astropy/table/tests/test_row.py b/astropy/table/tests/test_row.py
index 8ad9f46ba80b..6af2f945a8e7 100644
--- a/astropy/table/tests/test_row.py
+++ b/astropy/table/tests/test_row.py
@@ -372,3 +372,11 @@ def test_uint_indexing():
assert repr(t[1]).splitlines() == trepr
assert repr(t[np.int_(1)]).splitlines() == trepr
assert repr(t[np.uint(1)]).splitlines() == trepr
+
+
+def test_row_get():
+ row = table.Table({"a": [2, 4], "b": [3, 9]})[0]
+ assert row.get("a") == 2
+ assert row.get("x") is None
+ assert row.get("b", -1) == 3
+ assert row.get("y", -1) == -1
| 2023-05-29T22:12:40
|
{}
|
{"astropy/table/row.py": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\nimport collections\nfrom collections import OrderedDict\nfrom operator import index as operator_index\n\nimport numpy as np\n\n\nclass Row:\n \"\"\"A class to represent one row of a Table object.\n\n A Row object is returned when a Table object is indexed with an integer\n or when iterating over a table::\n\n >>> from astropy.table import Table\n >>> table = Table([(1, 2), (3, 4)], names=('a', 'b'),\n ... dtype=('int32', 'int32'))\n >>> row = table[1]\n >>> row\n <Row index=1>\n a b\n int32 int32\n ----- -----\n 2 4\n >>> row['a']\n 2\n >>> row[1]\n 4\n \"\"\"\n\n def __init__(self, table, index):\n # Ensure that the row index is a valid index (int)\n index = operator_index(index)\n\n n = len(table)\n\n if index < -n or index >= n:\n raise IndexError(\n f\"index {index} out of range for table with length {len(table)}\"\n )\n\n # Finally, ensure the index is positive [#8422] and set Row attributes\n self._index = index % n\n self._table = table\n\n def __getitem__(self, item):\n try:\n # Try the most common use case of accessing a single column in the Row.\n # Bypass the TableColumns __getitem__ since that does more testing\n # and allows a list of tuple or str, which is not the right thing here.\n out = OrderedDict.__getitem__(self._table.columns, item)[self._index]\n except (KeyError, TypeError):\n if self._table._is_list_or_tuple_of_str(item):\n cols = [self._table[name] for name in item]\n out = self._table.__class__(cols, copy=False)[self._index]\n else:\n # This is only to raise an exception\n out = self._table.columns[item][self._index]\n return out\n\n def __setitem__(self, item, val):\n if self._table._is_list_or_tuple_of_str(item):\n self._table._set_row(self._index, colnames=item, vals=val)\n else:\n self._table.columns[item][self._index] = val\n\n def _ipython_key_completions_(self):\n return self.colnames\n\n def __eq__(self, other):\n if self._table.masked:\n # Sent bug report to numpy-discussion group on 2012-Oct-21, subject:\n # \"Comparing rows in a structured masked array raises exception\"\n # No response, so this is still unresolved.\n raise ValueError(\n \"Unable to compare rows for masked table due to numpy.ma bug\"\n )\n return self.as_void() == other\n\n def __ne__(self, other):\n if self._table.masked:\n raise ValueError(\n \"Unable to compare rows for masked table due to numpy.ma bug\"\n )\n return self.as_void() != other\n\n def __array__(self, dtype=None):\n \"\"\"Support converting Row to np.array via np.array(table).\n\n Coercion to a different dtype via np.array(table, dtype) is not\n supported and will raise a ValueError.\n\n If the parent table is masked then the mask information is dropped.\n \"\"\"\n if dtype is not None:\n raise ValueError(\"Datatype coercion is not allowed\")\n\n return np.asarray(self.as_void())\n\n def __len__(self):\n return len(self._table.columns)\n\n def __iter__(self):\n index = self._index\n for col in self._table.columns.values():\n yield col[index]\n\n def keys(self):\n return self._table.columns.keys()\n\n def values(self):\n return self.__iter__()\n\n @property\n def table(self):\n return self._table\n\n @property\n def index(self):\n return self._index\n\n def as_void(self):\n \"\"\"\n Returns a *read-only* copy of the row values in the form of np.void or\n np.ma.mvoid objects. This corresponds to the object types returned for\n row indexing of a pure numpy structured array or masked array. 
This\n method is slow and its use is discouraged when possible.\n\n Returns\n -------\n void_row : ``numpy.void`` or ``numpy.ma.mvoid``\n Copy of row values.\n ``numpy.void`` if unmasked, ``numpy.ma.mvoid`` else.\n \"\"\"\n index = self._index\n cols = self._table.columns.values()\n vals = tuple(np.asarray(col)[index] for col in cols)\n if self._table.masked:\n mask = tuple(\n col.mask[index] if hasattr(col, \"mask\") else False for col in cols\n )\n void_row = np.ma.array([vals], mask=[mask], dtype=self.dtype)[0]\n else:\n void_row = np.array([vals], dtype=self.dtype)[0]\n return void_row\n\n @property\n def meta(self):\n return self._table.meta\n\n @property\n def columns(self):\n return self._table.columns\n\n @property\n def colnames(self):\n return self._table.colnames\n\n @property\n def dtype(self):\n return self._table.dtype\n\n def _base_repr_(self, html=False):\n \"\"\"\n Display row as a single-line table but with appropriate header line.\n \"\"\"\n index = self.index if (self.index >= 0) else self.index + len(self._table)\n table = self._table[index : index + 1]\n descr_vals = [self.__class__.__name__, f\"index={self.index}\"]\n if table.masked:\n descr_vals.append(\"masked=True\")\n\n return table._base_repr_(\n html, descr_vals, max_width=-1, tableid=f\"table{id(self._table)}\"\n )\n\n def _repr_html_(self):\n return self._base_repr_(html=True)\n\n def __repr__(self):\n return self._base_repr_(html=False)\n\n def __str__(self):\n index = self.index if (self.index >= 0) else self.index + len(self._table)\n return \"\\n\".join(self.table[index : index + 1].pformat(max_width=-1))\n\n def __bytes__(self):\n return str(self).encode(\"utf-8\")\n\n\ncollections.abc.Sequence.register(Row)\n", "docs/changes/table/14878.feature.rst": null, "docs/table/access_table.rst": ".. _access_table:\n\nAccessing a Table\n*****************\n\nAccessing table properties and data is generally consistent with the basic\ninterface for ``numpy`` `structured arrays\n<https://numpy.org/doc/stable/user/basics.rec.html>`_.\n\nBasics\n======\n\nFor a quick overview, the code below shows the basics of accessing table data.\nWhere relevant, there is a comment about what sort of object is returned.\nExcept where noted, table access returns objects that can be modified in order\nto update the original table data or properties. 
See also the section on\n:ref:`copy_versus_reference` to learn more about this topic.\n\n**Make a table**\n::\n\n from astropy.table import Table\n import numpy as np\n\n arr = np.arange(15).reshape(5, 3)\n t = Table(arr, names=('a', 'b', 'c'), meta={'keywords': {'key1': 'val1'}})\n\n**Table properties**\n::\n\n t.columns # Dict of table columns (access by column name, index, or slice)\n t.colnames # List of column names\n t.meta # Dict of meta-data\n len(t) # Number of table rows\n\n**Access table data**\n::\n\n t['a'] # Column 'a'\n t['a'][1] # Row 1 of column 'a'\n t[1] # Row 1\n t[1]['a'] # Column 'a' of row 1\n t[2:5] # Table object with rows 2:5\n t[[1, 3, 4]] # Table object with rows 1, 3, 4 (copy)\n t[np.array([1, 3, 4])] # Table object with rows 1, 3, 4 (copy)\n t[[]] # Same table definition but with no rows of data\n t['a', 'c'] # Table with cols 'a', 'c' (copy)\n dat = np.array(t) # Copy table data to numpy structured array object\n t['a'].quantity # an astropy.units.Quantity for Column 'a'\n t['a'].to('km') # an astropy.units.Quantity for Column 'a' in units of kilometers\n t.columns[1] # Column 1 (which is the 'b' column)\n t.columns[0:2] # New table with columns 0 and 1\n\n.. Note::\n Although they appear nearly equivalent, there is a factor of two performance\n difference between ``t[1]['a']`` (slower, because an intermediate |Row|\n object gets created) versus ``t['a'][1]`` (faster). Always use the latter\n when possible.\n\n**Print table or column**\n::\n\n print(t) # Print formatted version of table to the screen\n t.pprint() # Same as above\n t.pprint(show_unit=True) # Show column unit\n t.pprint(show_name=False) # Do not show column names\n t.pprint_all() # Print full table no matter how long / wide it is (same as t.pprint(max_lines=-1, max_width=-1))\n\n t.more() # Interactively scroll through table like Unix \"more\"\n\n print(t['a']) # Formatted column values\n t['a'].pprint() # Same as above, with same options as Table.pprint()\n t['a'].more() # Interactively scroll through column\n t['a', 'c'].pprint() # Print columns 'a' and 'c' of table\n\n lines = t.pformat() # Formatted table as a list of lines (same options as pprint)\n lines = t['a'].pformat() # Formatted column values as a list\n\n\nDetails\n=======\n\nFor all of the following examples it is assumed that the table has been created\nas follows::\n\n >>> from astropy.table import Table, Column\n >>> import numpy as np\n >>> import astropy.units as u\n\n >>> arr = np.arange(15, dtype=np.int32).reshape(5, 3)\n >>> t = Table(arr, names=('a', 'b', 'c'), meta={'keywords': {'key1': 'val1'}})\n >>> t['a'].format = \"{:.3f}\" # print with 3 digits after decimal point\n >>> t['a'].unit = 'm sec^-1'\n >>> t['a'].description = 'unladen swallow velocity'\n >>> print(t)\n a b c\n m sec^-1\n -------- --- ---\n 0.000 1 2\n 3.000 4 5\n 6.000 7 8\n 9.000 10 11\n 12.000 13 14\n\n.. Note::\n\n In the example above the ``format``, ``unit``, and ``description``\n attributes of the |Column| were set directly. For :ref:`mixin_columns` like\n |Quantity| you must set via the ``info`` attribute, for example,\n ``t['a'].info.format = \"{:.3f}\"``. You can use the ``info`` attribute with\n |Column| objects as well, so the general solution that works with any table\n column is to set via the ``info`` attribute. See :ref:`mixin_attributes` for\n more information.\n\n.. 
_table-summary-information:\n\nSummary Information\n-------------------\n\nYou can get summary information about the table as follows::\n\n >>> t.info\n <Table length=5>\n name dtype unit format description\n ---- ----- -------- ------ ------------------------\n a int32 m sec^-1 {:.3f} unladen swallow velocity\n b int32\n c int32\n\nIf called as a function then you can supply an ``option`` that specifies\nthe type of information to return. The built-in ``option`` choices are\n``'attributes'`` (column attributes, which is the default) or ``'stats'``\n(basic column statistics). The ``option`` argument can also be a list\nof available options::\n\n >>> t.info('stats') # doctest: +FLOAT_CMP\n <Table length=5>\n name mean std min max\n ---- ---- ------- --- ---\n a 6 4.24264 0 12\n b 7 4.24264 1 13\n c 8 4.24264 2 14\n\n >>> t.info(['attributes', 'stats']) # doctest: +FLOAT_CMP\n <Table length=5>\n name dtype unit format description mean std min max\n ---- ----- -------- ------ ------------------------ ---- ------- --- ---\n a int32 m sec^-1 {:.3f} unladen swallow velocity 6 4.24264 0 12\n b int32 7 4.24264 1 13\n c int32 8 4.24264 2 14\n\nColumns also have an ``info`` property that has the same behavior and\narguments, but provides information about a single column::\n\n >>> t['a'].info\n name = a\n dtype = int32\n unit = m sec^-1\n format = {:.3f}\n description = unladen swallow velocity\n class = Column\n n_bad = 0\n length = 5\n\n >>> t['a'].info('stats') # doctest: +FLOAT_CMP\n name = a\n mean = 6\n std = 4.24264\n min = 0\n max = 12\n n_bad = 0\n length = 5\n\n\nAccessing Properties\n--------------------\n\nThe code below shows accessing the table columns as a |TableColumns| object,\ngetting the column names, table metadata, and number of table rows. The table\nmetadata is an `~collections.OrderedDict` by default.\n::\n\n >>> t.columns\n <TableColumns names=('a','b','c')>\n\n >>> t.colnames\n ['a', 'b', 'c']\n\n >>> t.meta # Dict of meta-data\n {'keywords': {'key1': 'val1'}}\n\n >>> len(t)\n 5\n\n\nAccessing Data\n--------------\n\nAs expected you can access a table column by name and get an element from that\ncolumn with a numerical index::\n\n >>> t['a'] # Column 'a'\n <Column name='a' dtype='int32' unit='m sec^-1' format='{:.3f}' description='unladen swallow velocity' length=5>\n 0.000\n 3.000\n 6.000\n 9.000\n 12.000\n\n\n >>> t['a'][1] # Row 1 of column 'a'\n 3\n\nWhen a table column is printed, it is formatted according to the ``format``\nattribute (see :ref:`table_format_string`). Note the difference between the\ncolumn representation above and how it appears via ``print()`` or ``str()``::\n\n >>> print(t['a'])\n a\n m sec^-1\n --------\n 0.000\n 3.000\n 6.000\n 9.000\n 12.000\n\nLikewise a table row and a column from that row can be selected::\n\n >>> t[1] # Row object corresponding to row 1\n <Row index=1>\n a b c\n m sec^-1\n int32 int32 int32\n -------- ----- -----\n 3.000 4 5\n\n >>> t[1]['a'] # Column 'a' of row 1\n 3\n\nA |Row| object has the same columns and metadata as its parent table::\n\n >>> t[1].columns\n <TableColumns names=('a','b','c')>\n\n >>> t[1].meta\n {'keywords': {'key1': 'val1'}}\n\nSlicing a table returns a new table object with references to the original\ndata within the slice region (See :ref:`copy_versus_reference`). 
The table\nmetadata and column definitions are copied.\n::\n\n >>> t[2:5] # Table object with rows 2:5 (reference)\n <Table length=3>\n a b c\n m sec^-1\n int32 int32 int32\n -------- ----- -----\n 6.000 7 8\n 9.000 10 11\n 12.000 13 14\n\nIt is possible to select table rows with an array of indexes or by specifying\nmultiple column names. This returns a copy of the original table for the\nselected rows or columns. ::\n\n >>> print(t[[1, 3, 4]]) # Table object with rows 1, 3, 4 (copy)\n a b c\n m sec^-1\n -------- --- ---\n 3.000 4 5\n 9.000 10 11\n 12.000 13 14\n\n\n >>> print(t[np.array([1, 3, 4])]) # Table object with rows 1, 3, 4 (copy)\n a b c\n m sec^-1\n -------- --- ---\n 3.000 4 5\n 9.000 10 11\n 12.000 13 14\n\n\n >>> print(t['a', 'c']) # or t[['a', 'c']] or t[('a', 'c')]\n ... # Table with cols 'a', 'c' (copy)\n a c\n m sec^-1\n -------- ---\n 0.000 2\n 3.000 5\n 6.000 8\n 9.000 11\n 12.000 14\n\nWe can select rows from a table using conditionals to create boolean masks. A\ntable indexed with a boolean array will only return rows where the mask array\nelement is `True`. Different conditionals can be combined using the bitwise\noperators. ::\n\n >>> mask = (t['a'] > 4) & (t['b'] > 8) # Table rows where column a > 4\n >>> print(t[mask]) # and b > 8\n ...\n a b c\n m sec^-1\n -------- --- ---\n 9.000 10 11\n 12.000 13 14\n\nFinally, you can access the underlying table data as a native ``numpy``\nstructured array by creating a copy or reference with :func:`numpy.array`::\n\n >>> data = np.array(t) # copy of data in t as a structured array\n >>> data = np.array(t, copy=False) # reference to data in t\n\n\nTable Equality\n--------------\n\nWe can check table data equality using two different methods:\n\n- The ``==`` comparison operator. This returns a `True` or `False` for\n each row if the *entire row* matches. This is the same as the behavior of\n ``numpy`` structured arrays.\n- Table :meth:`~astropy.table.Table.values_equal` to compare table values\n element-wise. This returns a boolean `True` or `False` for each table\n *element*, so you get a `~astropy.table.Table` of values.\n\nExamples\n^^^^^^^^\n\n.. EXAMPLE START: Checking Table Equality\n\nTo check table equality::\n\n >>> t1 = Table(rows=[[1, 2, 3],\n ... [4, 5, 6],\n ... [7, 7, 9]], names=['a', 'b', 'c'])\n >>> t2 = Table(rows=[[1, 2, -1],\n ... [4, -1, 6],\n ... [7, 7, 9]], names=['a', 'b', 'c'])\n\n >>> t1 == t2\n array([False, False, True])\n\n >>> t1.values_equal(t2) # Compare to another table\n <Table length=3>\n a b c\n bool bool bool\n ---- ----- -----\n True True False\n True False True\n True True True\n\n >>> t1.values_equal([2, 4, 7]) # Compare to an array column-wise\n <Table length=3>\n a b c\n bool bool bool\n ----- ----- -----\n False True False\n True False False\n True True False\n\n >>> t1.values_equal(7) # Compare to a scalar column-wise\n <Table length=3>\n a b c\n bool bool bool\n ----- ----- -----\n False False False\n False False False\n True True False\n\n.. 
EXAMPLE END\n\nFormatted Printing\n------------------\n\nThe values in a table or column can be printed or retrieved as a formatted\ntable using one of several methods:\n\n- `print()` function.\n- `Table.more() <astropy.table.Table.more>` or `Column.more()\n <astropy.table.Column.more>` methods to interactively scroll through\n table values.\n- `Table.pprint() <astropy.table.Table.pprint>` or `Column.pprint()\n <astropy.table.Column.pprint>` methods to print a formatted version of\n the table to the screen.\n- `Table.pformat() <astropy.table.Table.pformat>` or `Column.pformat()\n <astropy.table.Column.pformat>` methods to return the formatted table\n or column as a list of fixed-width strings. This could be used as a quick way\n to save a table.\n\nThese methods use :ref:`table_format_string`\nif available and strive to make the output readable.\nBy default, table and column printing will\nnot print the table larger than the available interactive screen size. If the\nscreen size cannot be determined (in a non-interactive environment or on\nWindows) then a default size of 25 rows by 80 columns is used. If a table is\ntoo large, then rows and/or columns are cut from the middle so it fits.\n\nExample\n^^^^^^^\n\n.. EXAMPLE START: Printing Formatted Tables\n\nTo print a formatted table::\n\n >>> arr = np.arange(3000).reshape(100, 30) # 100 rows x 30 columns array\n >>> t = Table(arr)\n >>> print(t)\n col0 col1 col2 col3 col4 col5 col6 ... col23 col24 col25 col26 col27 col28 col29\n ---- ---- ---- ---- ---- ---- ---- ... ----- ----- ----- ----- ----- ----- -----\n 0 1 2 3 4 5 6 ... 23 24 25 26 27 28 29\n 30 31 32 33 34 35 36 ... 53 54 55 56 57 58 59\n 60 61 62 63 64 65 66 ... 83 84 85 86 87 88 89\n 90 91 92 93 94 95 96 ... 113 114 115 116 117 118 119\n 120 121 122 123 124 125 126 ... 143 144 145 146 147 148 149\n 150 151 152 153 154 155 156 ... 173 174 175 176 177 178 179\n 180 181 182 183 184 185 186 ... 203 204 205 206 207 208 209\n 210 211 212 213 214 215 216 ... 233 234 235 236 237 238 239\n 240 241 242 243 244 245 246 ... 263 264 265 266 267 268 269\n 270 271 272 273 274 275 276 ... 293 294 295 296 297 298 299\n ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...\n 2670 2671 2672 2673 2674 2675 2676 ... 2693 2694 2695 2696 2697 2698 2699\n 2700 2701 2702 2703 2704 2705 2706 ... 2723 2724 2725 2726 2727 2728 2729\n 2730 2731 2732 2733 2734 2735 2736 ... 2753 2754 2755 2756 2757 2758 2759\n 2760 2761 2762 2763 2764 2765 2766 ... 2783 2784 2785 2786 2787 2788 2789\n 2790 2791 2792 2793 2794 2795 2796 ... 2813 2814 2815 2816 2817 2818 2819\n 2820 2821 2822 2823 2824 2825 2826 ... 2843 2844 2845 2846 2847 2848 2849\n 2850 2851 2852 2853 2854 2855 2856 ... 2873 2874 2875 2876 2877 2878 2879\n 2880 2881 2882 2883 2884 2885 2886 ... 2903 2904 2905 2906 2907 2908 2909\n 2910 2911 2912 2913 2914 2915 2916 ... 2933 2934 2935 2936 2937 2938 2939\n 2940 2941 2942 2943 2944 2945 2946 ... 2963 2964 2965 2966 2967 2968 2969\n 2970 2971 2972 2973 2974 2975 2976 ... 2993 2994 2995 2996 2997 2998 2999\n Length = 100 rows\n\n.. EXAMPLE END\n\nmore() method\n^^^^^^^^^^^^^\n\nIn order to browse all rows of a table or column use the `Table.more()\n<astropy.table.Table.more>` or `Column.more() <astropy.table.Column.more>`\nmethods. These let you interactively scroll through the rows much like the Unix\n``more`` command. 
Once part of the table or column is displayed the supported\nnavigation keys are:\n\n| **f, space** : forward one page\n| **b** : back one page\n| **r** : refresh same page\n| **n** : next row\n| **p** : previous row\n| **<** : go to beginning\n| **>** : go to end\n| **q** : quit browsing\n| **h** : print this help\n\npprint() method\n^^^^^^^^^^^^^^^\n\nIn order to fully control the print output use the `Table.pprint()\n<astropy.table.Table.pprint>` or `Column.pprint()\n<astropy.table.Column.pprint>` methods. These have keyword arguments\n``max_lines``, ``max_width``, ``show_name``, ``show_unit``, and\n``show_dtype``, with meanings as shown below::\n\n >>> arr = np.arange(3000, dtype=float).reshape(100, 30)\n >>> t = Table(arr)\n >>> t['col0'].format = '%e'\n >>> t['col0'].unit = 'km**2'\n >>> t['col29'].unit = 'kg sec m**-2'\n\n >>> t.pprint(max_lines=8, max_width=40)\n col0 ... col29\n km2 ... kg sec m**-2\n ------------ ... ------------\n 0.000000e+00 ... 29.0\n ... ... ...\n 2.940000e+03 ... 2969.0\n 2.970000e+03 ... 2999.0\n Length = 100 rows\n\n >>> t.pprint(max_lines=8, max_width=40, show_unit=False)\n col0 ... col29\n ------------ ... ------\n 0.000000e+00 ... 29.0\n ... ... ...\n 2.940000e+03 ... 2969.0\n 2.970000e+03 ... 2999.0\n Length = 100 rows\n\n >>> t.pprint(max_lines=8, max_width=40, show_name=False)\n km2 ... kg sec m**-2\n ------------ ... ------------\n 0.000000e+00 ... 29.0\n 3.000000e+01 ... 59.0\n ... ... ...\n 2.940000e+03 ... 2969.0\n 2.970000e+03 ... 2999.0\n Length = 100 rows\n\n >>> t.pprint(max_lines=8, max_width=40, show_dtype=True)\n col0 col1 ... col29\n km2 ... kg sec m**-2\n float64 float64 ... float64\n ------------ ------- ... ------------\n 0.000000e+00 1.0 ... 29.0\n ... ... ... ...\n 2.970000e+03 2971.0 ... 2999.0\n Length = 100 rows\n\nIn order to force printing all values regardless of the output length or width\nuse :meth:`~astropy.table.Table.pprint_all`, which is equivalent to setting\n``max_lines`` and ``max_width`` to ``-1`` in :meth:`~astropy.table.Table.pprint`.\n:meth:`~astropy.table.Table.pprint_all` takes the same arguments as :meth:`~astropy.table.Table.pprint`.\nFor the wide table in this example you see six lines of wrapped output like the\nfollowing::\n\n >>> t.pprint_all(max_lines=8) # doctest: +SKIP\n col0 col1 col2 col3 col4 col5 col6 col7 col8 col9 col10 col11 col12 col13 col14 col15 col16 col17 col18 col19 col20 col21 col22 col23 col24 col25 col26 col27 col28 col29\n km2 kg sec m**-2\n ------------ ----------- ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------------\n 0.000000e+00 1.000000 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 10.0 11.0 12.0 13.0 14.0 15.0 16.0 17.0 18.0 19.0 20.0 21.0 22.0 23.0 24.0 25.0 26.0 27.0 28.0 29.0\n ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... 
...\n 2.940000e+03 2941.000000 2942.0 2943.0 2944.0 2945.0 2946.0 2947.0 2948.0 2949.0 2950.0 2951.0 2952.0 2953.0 2954.0 2955.0 2956.0 2957.0 2958.0 2959.0 2960.0 2961.0 2962.0 2963.0 2964.0 2965.0 2966.0 2967.0 2968.0 2969.0\n 2.970000e+03 2971.000000 2972.0 2973.0 2974.0 2975.0 2976.0 2977.0 2978.0 2979.0 2980.0 2981.0 2982.0 2983.0 2984.0 2985.0 2986.0 2987.0 2988.0 2989.0 2990.0 2991.0 2992.0 2993.0 2994.0 2995.0 2996.0 2997.0 2998.0 2999.0\n Length = 100 rows\n\nFor columns, the syntax and behavior of :func:`~astropy.table.Column.pprint` is\nthe same except that there is no ``max_width`` keyword argument::\n\n >>> t['col3'].pprint(max_lines=8)\n col3\n ------\n 3.0\n 33.0\n ...\n 2943.0\n 2973.0\n Length = 100 rows\n\nColumn alignment\n^^^^^^^^^^^^^^^^\n\nIndividual columns have the ability to be aligned in a number of different\nways for an enhanced viewing experience::\n\n >>> t1 = Table()\n >>> t1['long column name 1'] = [1, 2, 3]\n >>> t1['long column name 2'] = [4, 5, 6]\n >>> t1['long column name 3'] = [7, 8, 9]\n >>> t1['long column name 4'] = [700000, 800000, 900000]\n >>> t1['long column name 2'].info.format = '<'\n >>> t1['long column name 3'].info.format = '0='\n >>> t1['long column name 4'].info.format = '^'\n >>> t1.pprint()\n long column name 1 long column name 2 long column name 3 long column name 4\n ------------------ ------------------ ------------------ ------------------\n 1 4 000000000000000007 700000\n 2 5 000000000000000008 800000\n 3 6 000000000000000009 900000\n\nConveniently, alignment can be handled another way — by passing a list to the\nkeyword argument ``align``::\n\n >>> t1 = Table()\n >>> t1['column1'] = [1, 2, 3]\n >>> t1['column2'] = [2, 4, 6]\n >>> t1.pprint(align=['<', '0='])\n column1 column2\n ------- -------\n 1 0000002\n 2 0000004\n 3 0000006\n\nIt is also possible to set the alignment of all columns with a single\nstring value::\n\n >>> t1.pprint(align='^')\n column1 column2\n ------- -------\n 1 2\n 2 4\n 3 6\n\nThe fill character for justification can be set as a prefix to the\nalignment character (see `Format Specification Mini-Language\n<https://docs.python.org/3/library/string.html#format-specification-mini-language>`_\nfor additional explanation). This can be done both in the ``align`` argument\nand in the column ``format`` attribute. Note the interesting interaction below::\n\n >>> t1 = Table([[1.0, 2.0], [1, 2]], names=['column1', 'column2'])\n\n >>> t1['column1'].format = '#^.2f'\n >>> t1.pprint()\n column1 column2\n ------- -------\n ##1.00# 1\n ##2.00# 2\n\nNow if we set a global align, it seems like our original column format\ngot lost::\n\n >>> t1.pprint(align='!<')\n column1 column2\n ------- -------\n 1.00!!! 1!!!!!!\n 2.00!!! 2!!!!!!\n\nThe way to avoid this is to explicitly specify the alignment strings\nfor every column and use `None` where the column format should be\nused::\n\n >>> t1.pprint(align=[None, '!<'])\n column1 column2\n ------- -------\n ##1.00# 1!!!!!!\n ##2.00# 2!!!!!!\n\npformat() method\n^^^^^^^^^^^^^^^^\n\nIn order to get the formatted output for manipulation or writing to a file use\nthe `Table.pformat() <astropy.table.Table.pformat>` or `Column.pformat()\n<astropy.table.Column.pformat>` methods. These behave just as for\n:meth:`~astropy.table.Table.pprint` but return a list corresponding to each\nformatted line in the :meth:`~astropy.table.Table.pprint` output. 
The\n:meth:`~astropy.table.Table.pformat_all` method can be used to return a list\nfor all lines in the |Table|.\n\n >>> lines = t['col3'].pformat(max_lines=8)\n\nHiding columns\n^^^^^^^^^^^^^^\n\nThe |Table| class has functionality to selectively show or hide certain columns\nwithin the table when using any of the print methods. This can be useful for\ncolumns that are very wide or else \"uninteresting\" for various reasons. The\nspecification of which columns are outputted is associated with the table itself\nso that it persists through slicing, copying, and serialization (e.g. saving to\n:ref:`ecsv_format`). One use case is for specialized table subclasses that\ncontain auxiliary columns that are not typically useful to the user.\n\nThe specification of which columns to include when printing is handled through\ntwo complementary |Table| attributes:\n\n- `~astropy.table.Table.pprint_include_names`: column names to include, where\n the default value of `None` implies including all columns.\n- `~astropy.table.Table.pprint_exclude_names`: column names to exclude, where\n the default value of `None` implies excluding no columns.\n\nTypically you should use just one of the two attributes at a time. However,\nboth can be set at once and the set of columns that actually gets printed\nis conceptually expressed in this pseudo-code::\n\n include_names = (set(table.pprint_include_names() or table.colnames)\n - set(table.pprint_exclude_names() or ())\n\nExamples\n\"\"\"\"\"\"\"\"\nLet's start with defining a simple table with one row and six columns::\n\n >>> from astropy.table.table_helpers import simple_table\n >>> t = simple_table(size=1, cols=6)\n >>> print(t)\n a b c d e f\n --- --- --- --- --- ---\n 1 1.0 c 4 4.0 f\n\nNow you can get the value of the ``pprint_include_names`` attribute by calling\nit as a function, and then include some names for printing::\n\n >>> print(t.pprint_include_names())\n None\n >>> t.pprint_include_names = ('a', 'c', 'e')\n >>> print(t.pprint_include_names())\n ('a', 'c', 'e')\n >>> print(t)\n a c e\n --- --- ---\n 1 c 4.0\n\nNow you can instead exclude some columns from printing. Note that for both\ninclude and exclude, you can add column names that do not exist in the table.\nThis allows pre-defining the attributes before the table has been fully\nconstructed.\n::\n\n >>> t.pprint_include_names = None # Revert to printing all columns\n >>> t.pprint_exclude_names = ('a', 'c', 'e', 'does-not-exist')\n >>> print(t)\n b d f\n --- --- ---\n 1.0 4 f\n\nNext you can ``add`` or ``remove`` names from the attribute::\n\n >>> t = simple_table(size=1, cols=6) # Start with a fresh table\n >>> t.pprint_exclude_names.add('b') # Single name\n >>> t.pprint_exclude_names.add(['d', 'f']) # List or tuple of names\n >>> t.pprint_exclude_names.remove('f') # Single name or list/tuple of names\n >>> t.pprint_exclude_names()\n ('b', 'd')\n\nFinally, you can temporarily set the attributes within a `context manager\n<https://docs.python.org/3/reference/datamodel.html#context-managers>`_. For\nexample::\n\n >>> t = simple_table(size=1, cols=6)\n >>> t.pprint_include_names = ('a', 'b')\n >>> print(t)\n a b\n --- ---\n 1 1.0\n\n >>> # Show all (for pprint_include_names the value of None => all columns)\n >>> with t.pprint_include_names.set(None):\n ... print(t)\n a b c d e f\n --- --- --- --- --- ---\n 1 1.0 c 4 4.0 f\n\nThe specification of names for these attributes can include Unix-style globs\nlike ``*`` and ``?``. 
See `fnmatch` for details (and in particular how to\nescape those characters if needed). For example::\n\n >>> t = Table()\n >>> t.pprint_exclude_names = ['boring*']\n >>> t['a'] = [1]\n >>> t['b'] = ['b']\n >>> t['boring_ra'] = [122.0]\n >>> t['boring_dec'] = [89.9]\n >>> print(t)\n a b\n --- ---\n 1 b\n\nMultidimensional columns\n^^^^^^^^^^^^^^^^^^^^^^^^\n\nIf a column has more than one dimension then each element of the column is\nitself an array. In the example below there are three rows, each of which is a\n``2 x 2`` array. The formatted output for such a column shows only the first\nand last value of each row element and indicates the array dimensions in the\ncolumn name header::\n\n >>> t = Table()\n >>> arr = [ np.array([[ 1., 2.],\n ... [10., 20.]]),\n ... np.array([[ 3., 4.],\n ... [30., 40.]]),\n ... np.array([[ 5., 6.],\n ... [50., 60.]]) ]\n >>> t['a'] = arr\n >>> t['a'].shape\n (3, 2, 2)\n >>> t.pprint()\n a\n -----------\n 1.0 .. 20.0\n 3.0 .. 40.0\n 5.0 .. 60.0\n\nIn order to see all of the data values for a multidimensional column use the\ncolumn representation. This uses the standard ``numpy`` mechanism for printing\nany array::\n\n >>> t['a'].data\n array([[[ 1., 2.],\n [10., 20.]],\n [[ 3., 4.],\n [30., 40.]],\n [[ 5., 6.],\n [50., 60.]]])\n\n.. _format_stuctured_array_columns:\n\nStructured array columns\n^^^^^^^^^^^^^^^^^^^^^^^^\n\n.. EXAMPLE START: Creating a formatted Astropy Table with a Structured Column\n\nFor columns which are structured arrays, the format string must be a a string\nthat uses `\"new style\" format strings\n<https://docs.python.org/3/library/string.html#format-string-syntax>`_ with\nparameter substitutions corresponding to the field names in the structured\narray. Consider the example below including a column of parameters values where\nthe value, min and max are stored in the in the column as fields named ``val``,\n``min``, and ``max``. By default the field values are shown as a tuple::\n\n >>> pars = np.array(\n ... [(1.2345678, -20, 3),\n ... (12.345678, 4.5678, 33)],\n ... dtype=[('val', 'f8'), ('min', 'f8'), ('max', 'f8')]\n ... )\n >>> t = Table()\n >>> t['a'] = [1, 2]\n >>> t['par'] = pars\n >>> print(t)\n a par [val, min, max]\n --- ------------------------\n 1 (1.2345678, -20., 3.)\n 2 (12.345678, 4.5678, 33.)\n\n\nHowever, setting the format string appropriately allows formatting each of the\nfield values and controlling the overall output::\n\n >>> t['par'].info.format = '{val:6.2f} ({min:5.1f}, {max:5.1f})'\n >>> print(t)\n a par [val, min, max]\n --- ---------------------\n 1 1.23 (-20.0, 3.0)\n 2 12.35 ( 4.6, 33.0)\n\n.. EXAMPLE END\n\n.. _columns_with_units:\n\nColumns with Units\n^^^^^^^^^^^^^^^^^^\n\n.. note::\n\n |Table| and |QTable| instances handle entries with units differently. The\n following describes |Table|. :ref:`quantity_and_qtable` explains how a\n |QTable| differs from a |Table|.\n\nA |Column| object with units within a standard |Table| has certain\nquantity-related conveniences available. To begin with, it can be converted\nexplicitly to a |Quantity| object via the\n:attr:`~astropy.table.Column.quantity` property and the\n:meth:`~astropy.table.Column.to` method::\n\n >>> data = [[1., 2., 3.], [40000., 50000., 60000.]]\n >>> t = Table(data, names=('a', 'b'))\n >>> t['a'].unit = u.m\n >>> t['b'].unit = 'km/s'\n >>> t['a'].quantity # doctest: +FLOAT_CMP\n <Quantity [1., 2., 3.] 
m>\n >>> t['b'].to(u.kpc/u.Myr) # doctest: +FLOAT_CMP\n <Quantity [40.9084866 , 51.13560825, 61.3627299 ] kpc / Myr>\n\nNote that the :attr:`~astropy.table.Column.quantity` property is actually\na *view* of the data in the column, not a copy. Hence, you can set the\nvalues of a column in a way that respects units by making in-place\nchanges to the :attr:`~astropy.table.Column.quantity` property::\n\n >>> t['b']\n <Column name='b' dtype='float64' unit='km / s' length=3>\n 40000.0\n 50000.0\n 60000.0\n\n >>> t['b'].quantity[0] = 45000000*u.m/u.s\n >>> t['b']\n <Column name='b' dtype='float64' unit='km / s' length=3>\n 45000.0\n 50000.0\n 60000.0\n\nEven without explicit conversion, columns with units can be treated like a\n|Quantity| in *some* arithmetic expressions (see the warning below for caveats\nto this)::\n\n >>> t['a'] + .005*u.km # doctest: +FLOAT_CMP\n <Quantity [6., 7., 8.] m>\n >>> from astropy.constants import c\n >>> (t['b'] / c).decompose() # doctest: +FLOAT_CMP\n <Quantity [0.15010384, 0.16678205, 0.20013846]>\n\n.. warning::\n\n |Table| columns do *not* always behave the same as |Quantity|. |Table|\n columns act more like regular ``numpy`` arrays unless either explicitly\n converted to a |Quantity| or combined with a |Quantity| using an arithmetic\n operator. For example, the following does not work in the way you would\n expect::\n\n >>> data = [[30, 90]]\n >>> t = Table(data, names=('angle',))\n >>> t['angle'].unit = 'deg'\n >>> np.sin(t['angle']) # doctest: +FLOAT_CMP\n <Column name='angle' dtype='float64' unit='deg' length=2>\n -0.988031624093\n 0.893996663601\n\n This is wrong both in that it says the result is in degrees, *and*\n `~numpy.sin` treated the values as radians rather than degrees. If at all in\n doubt that you will get the right result, the safest choice is to either use\n |QTable| or to explicitly convert to |Quantity|::\n\n >>> np.sin(t['angle'].quantity) # doctest: +FLOAT_CMP\n <Quantity [0.5, 1. ]>\n\n.. _bytestring-columns-python-3:\n\nBytestring Columns\n^^^^^^^^^^^^^^^^^^\n\nUsing bytestring columns (``numpy`` ``'S'`` dtype) is possible\nwith ``astropy`` tables since they can be compared with the natural\nPython string (``str``) type. See `The bytes/str dichotomy in Python 3\n<https://eli.thegreenplace.net/2012/01/30/the-bytesstr-dichotomy-in-python-3>`_\nfor a very brief overview of the difference.\n\nThe standard method of representing strings in ``numpy`` is via the\nunicode ``'U'`` dtype. The problem is that this requires 4 bytes per\ncharacter, and if you have a very large number of strings this could\nfill memory and impact performance. A very common use case is that these\nstrings are actually ASCII and can be represented with 1 byte per character.\nIn ``astropy`` it is possible to work directly and conveniently with\nbytestring data in |Table| and |Column| operations.\n\nNote that the bytestring issue is a particular problem when dealing with HDF5\nfiles, where character data are read as bytestrings (``'S'`` dtype) when using\nthe :ref:`table_io`. Since HDF5 files are frequently used to store very large\ndatasets, the memory bloat associated with conversion to ``'U'`` dtype is\nunacceptable.\n\n\nExamples\n\"\"\"\"\"\"\"\"\n\n.. 
EXAMPLE START: Bytestring Data in Astropy Tables\n\nThe examples below illustrate dealing with bytestring data in ``astropy``::\n\n >>> t = Table([['abc', 'def']], names=['a'], dtype=['S'])\n\n >>> t['a'] == 'abc' # Gives expected answer\n array([ True, False])\n\n >>> t['a'] == b'abc' # Still gives expected answer\n array([ True, False])\n\n >>> t['a'][0] == 'abc' # Expected answer\n True\n\n >>> t['a'][0] == b'abc' # Cannot compare to bytestring\n False\n\n >>> t['a'][0] = 'bä'\n >>> t\n <Table length=2>\n a\n bytes3\n ------\n bä\n def\n\n >>> t['a'] == 'bä'\n array([ True, False])\n\n.. doctest-skip::\n\n >>> # Round trip unicode strings through HDF5\n >>> t.write('test.hdf5', format='hdf5', path='data', overwrite=True)\n >>> t2 = Table.read('test.hdf5', format='hdf5', path='data')\n >>> t2\n <Table length=2>\n col0\n bytes3\n ------\n bä\n def\n\n.. EXAMPLE END\n"}
|
diff --git a/docs/changes/table/14878.feature.rst b/docs/changes/table/14878.feature.rst
new file mode 100644
index 000000000000..768574409629
--- /dev/null
+++ b/docs/changes/table/14878.feature.rst
@@ -0,0 +1,3 @@
+The new ``Row.get()`` method, analogous to ``dict.get()``, returns the value of
+the specified column from the row if the column present, otherwise it returns a
+fallback value, which by default is ``None``.
diff --git a/docs/table/access_table.rst b/docs/table/access_table.rst
index 24d0f561cab2..263ecafc9713 100644
--- a/docs/table/access_table.rst
+++ b/docs/table/access_table.rst
@@ -315,6 +315,36 @@ structured array by creating a copy or reference with :func:`numpy.array`::
>>> data = np.array(t, copy=False) # reference to data in t
+Possibly missing columns
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+In some cases it might not be guaranteed that a column is present in a table,
+but there does exist a good default value that can be used if it is not. The
+columns of a |Table| can be represented as a :class:`dict` subclass instance
+through the ``columns`` attribute, which means that a replacement for missing
+columns can be provided using the :meth:`dict.get` method::
+
+ >>> t.columns.get("b", np.zeros(len(t)))
+ <Column name='b' dtype='int32' length=5>
+ 1
+ 4
+ 7
+ 10
+ 13
+ >>> t.columns.get("x", np.zeros(len(t)))
+ array([0., 0., 0., 0., 0.])
+
+In case of a single |Row| it is possible to use its
+:meth:`~astropy.table.Row.get` method without having to go through
+``columns``::
+
+ >>> row = t[2]
+ >>> row.get("c", -1)
+ 8
+ >>> row.get("y", -1)
+ -1
+
+
Table Equality
--------------
|
{"astropy/table/row.py": [{"type": "function", "name": "Row.get", "lines": [109, 136], "signature": "def get(self, key, default=None, /):", "doc": "Return the value for key if key is in the columns, else default.\n\nParameters\n----------\nkey : `str`, positional-only\n The name of the column to look for.\ndefault : `object`, optional, positional-only\n The value to return if the ``key`` is not among the columns.\n\nReturns\n-------\n`object`\n The value in the ``key`` column of the row if present,\n ``default`` otherwise.\n\nExamples\n--------\n>>> from astropy.table import Table\n>>> t = Table({\"a\": [2, 3, 5], \"b\": [7, 11, 13]})\n>>> t[0].get(\"a\")\n2\n>>> t[1].get(\"b\", 0)\n11\n>>> t[2].get(\"c\", 0)\n0"}]}
|
5.2
|
["astropy/table/tests/test_row.py::test_row_get"]
|
["astropy/table/tests/test_row.py::test_masked_row_with_object_col", "astropy/table/tests/test_row.py::TestRow::test_subclass[unmasked]", "astropy/table/tests/test_row.py::TestRow::test_subclass[masked]", "astropy/table/tests/test_row.py::TestRow::test_subclass[subclass]", "astropy/table/tests/test_row.py::TestRow::test_values[unmasked]", "astropy/table/tests/test_row.py::TestRow::test_values[masked]", "astropy/table/tests/test_row.py::TestRow::test_values[subclass]", "astropy/table/tests/test_row.py::TestRow::test_ref[unmasked]", "astropy/table/tests/test_row.py::TestRow::test_ref[masked]", "astropy/table/tests/test_row.py::TestRow::test_ref[subclass]", "astropy/table/tests/test_row.py::TestRow::test_left_equal[unmasked]", "astropy/table/tests/test_row.py::TestRow::test_left_equal[masked]", "astropy/table/tests/test_row.py::TestRow::test_left_equal[subclass]", "astropy/table/tests/test_row.py::TestRow::test_left_not_equal[unmasked]", "astropy/table/tests/test_row.py::TestRow::test_left_not_equal[masked]", "astropy/table/tests/test_row.py::TestRow::test_left_not_equal[subclass]", "astropy/table/tests/test_row.py::TestRow::test_right_equal[unmasked]", "astropy/table/tests/test_row.py::TestRow::test_right_equal[masked]", "astropy/table/tests/test_row.py::TestRow::test_right_equal[subclass]", "astropy/table/tests/test_row.py::TestRow::test_convert_numpy_array[unmasked]", "astropy/table/tests/test_row.py::TestRow::test_convert_numpy_array[masked]", "astropy/table/tests/test_row.py::TestRow::test_convert_numpy_array[subclass]", "astropy/table/tests/test_row.py::TestRow::test_format_row[unmasked]", "astropy/table/tests/test_row.py::TestRow::test_format_row[masked]", "astropy/table/tests/test_row.py::TestRow::test_format_row[subclass]", "astropy/table/tests/test_row.py::TestRow::test_as_void[unmasked]", "astropy/table/tests/test_row.py::TestRow::test_as_void[masked]", "astropy/table/tests/test_row.py::TestRow::test_as_void[subclass]", "astropy/table/tests/test_row.py::TestRow::test_row_and_as_void_with_objects[unmasked]", "astropy/table/tests/test_row.py::TestRow::test_row_and_as_void_with_objects[masked]", "astropy/table/tests/test_row.py::TestRow::test_row_and_as_void_with_objects[subclass]", "astropy/table/tests/test_row.py::TestRow::test_bounds_checking[unmasked]", "astropy/table/tests/test_row.py::TestRow::test_bounds_checking[masked]", "astropy/table/tests/test_row.py::TestRow::test_bounds_checking[subclass]", "astropy/table/tests/test_row.py::TestRow::test_create_rows_from_list[unmasked]", "astropy/table/tests/test_row.py::TestRow::test_create_rows_from_list[masked]", "astropy/table/tests/test_row.py::TestRow::test_create_rows_from_list[subclass]", "astropy/table/tests/test_row.py::TestRow::test_row_keys_values[unmasked]", "astropy/table/tests/test_row.py::TestRow::test_row_keys_values[masked]", "astropy/table/tests/test_row.py::TestRow::test_row_keys_values[subclass]", "astropy/table/tests/test_row.py::TestRow::test_row_as_mapping[unmasked]", "astropy/table/tests/test_row.py::TestRow::test_row_as_mapping[masked]", "astropy/table/tests/test_row.py::TestRow::test_row_as_mapping[subclass]", "astropy/table/tests/test_row.py::TestRow::test_row_as_sequence[unmasked]", "astropy/table/tests/test_row.py::TestRow::test_row_as_sequence[masked]", "astropy/table/tests/test_row.py::TestRow::test_row_as_sequence[subclass]", "astropy/table/tests/test_row.py::test_row_tuple_column_slice", "astropy/table/tests/test_row.py::test_row_tuple_column_slice_transaction", 
"astropy/table/tests/test_row.py::test_uint_indexing"]
|
32bd0ffb6898c6a57b9ec1a55c13c7a0efe8d273
|
{"first_commit_time": 1685467555.0, "pr_title": "Implement `astropy.table.Row.get()`", "pr_body": "### Description\r\n\r\nCurrently code that tries to access a column in a `Row` when the presence of the column is not guaranteed either has to check for the presence of the column beforehand or handle the resulting `KeyError` explicitly. In many cases (see e.g. https://github.com/astropy/astropy/pull/14780#discussion_r1200857265) it might be appealing to use `Row.columns.get()` instead, which allows specifying a fallback value to be used if the column is not present, but (surprisingly) `Row.columns` actually returns the `columns` from the parent table of the `Row`: https://github.com/astropy/astropy/blob/88790514bdf248e43c2fb15ee18cfd3390846145/astropy/table/row.py#L152-L154\r\n\r\nChanging the behavior of `Row.columns` would be a backwards-incompatible change, so it is safer to implement a new `Row.get()` method.\r\n\r\n~~When implementing the unit tests for the new feature I thought that it would be better if the test table would be a module level fixture so that it would not be recreated for every instance of the parametrized tests, and then I found it useful to share the test table with an already existing test so that it could be reused even more. Because of this I also refactored and parametrized an already existing unit test for `Row`.~~", "pr_timeline": [{"time": 1685612750.0, "comment": "Thank you for your contribution to Astropy! \ud83c\udf0c This checklist is meant to remind the package maintainers who will review this pull request of some common things to look for.\n\n - [x] Do the proposed changes actually accomplish desired goals?\n - [x] Do the proposed changes follow the [Astropy coding guidelines](https://docs.astropy.org/en/latest/development/codeguide.html)?\n - [x] Are tests added/updated as required? If so, do they follow the [Astropy testing guidelines](https://docs.astropy.org/en/latest/development/testguide.html)?\n - [x] Are docs added/updated as required? If so, do they follow the [Astropy documentation guidelines](https://docs.astropy.org/en/latest/development/docguide.html#astropy-documentation-rules-and-guidelines)?\n - [x] Is rebase and/or squash necessary? If so, please provide the author with appropriate instructions. Also see [\"When to rebase and squash commits\"](https://docs.astropy.org/en/latest/development/when_to_rebase.html).\n - [x] Did the CI pass? If no, are the failures related? If you need to run daily and weekly cron jobs as part of the PR, please apply the \"Extra CI\" label. Codestyle issues can be fixed by the [bot](https://docs.astropy.org/en/latest/development/workflow/development_workflow.html#pre-commit).\n - [x] Is a change log needed? If yes, did the change log check pass? If no, add the \"no-changelog-entry-needed\" label. If this is a manual backport, use the \"skip-changelog-checks\" label unless special changelog handling is necessary.\n - [x] Is this a big PR that makes a \"What's new?\" entry worthwhile and if so, is (1) a \"what's new\" entry included in this PR and (2) the \"whatsnew-needed\" label applied?\n - [x] Is a milestone set? 
Milestone must be set but we cannot check for it on Actions; do not let the green checkmark fool you.\n - [x] At the time of adding the milestone, if the milestone set requires a backport to release branch(es), apply the appropriate \"backport-X.Y.x\" label(s) *before* merge."}, {"time": 1685398605.0, "comment": "I had to force-push the second commit because an issue and a pull request were opened while I was writing the opening message, so I had to update the change log entry filename to match the actual pull request number."}, {"time": 1685449980.0, "comment": "From the sidelines: Definitely happy with making `Row` a bit more `dict`-like. I was wondering if `Table` should similarly get a `get` method, but I guess having to pass in a default column makes less sense."}, {"time": 1685469077.0, "comment": "> I was wondering if `Table` should similarly get a `get` method\r\n\r\nThere would also be the question if an integer key like `t.get(0)` should return a row.\r\n\r\n`Table.columns.get()` already works and is in my opinion simple enough to use and simple enough to understand if it is encountered in code. The main motivation for implementing a separate `Row.get()` was that `Row.columns.get()` does not produce the expected outcome."}], "issues": {}}
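A minimal sketch of the dict-like access this PR discusses. It assumes the proposed Row.get() lands with the usual (key, default) signature; the table, row, and column names below are purely illustrative and not part of the PR.

from astropy.table import Table

t = Table({"a": [1, 2], "b": [3.0, 4.0]})
row = t[0]

# Row.columns points back at the parent table's columns (see the quoted source
# in the PR body), which is why row.columns.get("a") surprisingly yields the
# whole Column rather than this row's value.
assert row.columns is t.columns

# Proposed Row.get(): the row's value for an existing column, a fallback otherwise.
assert row.get("a") == 1
assert row.get("missing", -1) == -1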
|
astropy/astropy
| 16135
|
https://github.com/astropy/astropy/pull/16135
|
astropy__astropy-16135
|
[]
|
ea875472867f296eee3ed75989ed402d55587940
|
diff --git a/astropy/coordinates/representation/cylindrical.py b/astropy/coordinates/representation/cylindrical.py
index 9127fb2dcb08..acd9ab936953 100644
--- a/astropy/coordinates/representation/cylindrical.py
+++ b/astropy/coordinates/representation/cylindrical.py
@@ -11,7 +11,7 @@
 
 from .base import BaseDifferential, BaseRepresentation
 from .cartesian import CartesianRepresentation
-from .spherical import _spherical_op_funcs
+from .spherical import PhysicsSphericalRepresentation, _spherical_op_funcs
 
 
 class CylindricalRepresentation(BaseRepresentation):
@@ -135,6 +135,22 @@ def _scale_operation(self, op, *args):
             result.differentials[key] = differential.__class__(*new_comps, copy=False)
         return result
 
+    def represent_as(self, other_class, differential_class=None):
+        if isinstance(other_class, type):
+            if issubclass(other_class, PhysicsSphericalRepresentation):
+                diffs = self._re_represent_differentials(
+                    other_class, differential_class
+                )
+                r = np.hypot(self.rho, self.z)
+                return other_class(
+                    r=r,
+                    theta=np.arccos(self.z / r),
+                    phi=self.phi,
+                    differentials=diffs,
+                )
+
+        return super().represent_as(other_class, differential_class)
+
 
 class CylindricalDifferential(BaseDifferential):
     """Differential(s) of points in cylindrical coordinates.
diff --git a/astropy/coordinates/representation/spherical.py b/astropy/coordinates/representation/spherical.py
index 5ef93c8f4a00..dba9c7e1f9bc 100644
--- a/astropy/coordinates/representation/spherical.py
+++ b/astropy/coordinates/representation/spherical.py
@@ -750,6 +750,19 @@ def represent_as(self, other_class, differential_class=None):
                     differentials=diffs,
                     copy=False,
                 )
+            from .cylindrical import CylindricalRepresentation
+
+            if issubclass(other_class, CylindricalRepresentation):
+                diffs = self._re_represent_differentials(
+                    other_class, differential_class
+                )
+                return other_class(
+                    rho=self.r * np.sin(self.theta),
+                    phi=self.phi,
+                    z=self.r * np.cos(self.theta),
+                    differentials=diffs,
+                    copy=False,
+                )
 
         return super().represent_as(other_class, differential_class)
 
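A quick numerical check (plain numpy, units stripped; not part of the patch) of the conversion formulas used by the two new shortcuts above: r = hypot(rho, z) and theta = arccos(z / r) going cylindrical -> physics-spherical, and rho = r * sin(theta), z = r * cos(theta) coming back.

import numpy as np

rho, phi, z = 1.0, np.deg2rad(2.0), 3.0   # sample cylindrical point

r = np.hypot(rho, z)                      # forward: cylindrical -> physics-spherical
theta = np.arccos(z / r)                  # inclination measured from the +z axis

rho_back = r * np.sin(theta)              # inverse: physics-spherical -> cylindrical
z_back = r * np.cos(theta)

assert np.isclose(rho_back, rho) and np.isclose(z_back, z)
# phi is shared by both systems, so both shortcuts carry it over unchanged.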
|
diff --git a/astropy/coordinates/tests/test_representation.py b/astropy/coordinates/tests/test_representation.py
index d2d257e30a6f..2f84236bdd5e 100644
--- a/astropy/coordinates/tests/test_representation.py
+++ b/astropy/coordinates/tests/test_representation.py
@@ -842,6 +842,25 @@ def test_representation_shortcuts(self):
         )
         assert representation_equal_up_to_angular_type(got, expected)
 
+        got = sph.represent_as(CylindricalRepresentation, CylindricalDifferential)
+        assert np.may_share_memory(sph.phi, got.phi)
+        expected = BaseRepresentation.represent_as(
+            sph, CylindricalRepresentation, CylindricalDifferential
+        )
+        assert_allclose_quantity(got.rho, expected.rho, atol=5e-17 * u.kpc)
+        assert_allclose_quantity(got.phi, expected.phi, atol=3e-16 * u.deg)
+        assert_array_equal(got.z, expected.z)
+
+    def test_to_cylindrical_at_the_origin(self):
+        """Test that the transformation to cylindrical at the origin preserves phi."""
+        sph = PhysicsSphericalRepresentation(
+            phi=270 * u.deg, theta=45 * u.deg, r=0 * u.kpc
+        )
+        cyl = sph.represent_as(CylindricalRepresentation)
+        assert cyl.rho == 0.0 * u.kpc
+        assert cyl.z == 0.0 * u.kpc
+        assert cyl.phi == 270 * u.deg  # phi is preserved exactly
+
     def test_initialize_with_nan(self):
         # Regression test for gh-11558: initialization used to fail.
         psr = PhysicsSphericalRepresentation(
@@ -1380,6 +1399,39 @@ def test_transform(self):
         assert_allclose_quantity(s3.z, expected.z)
         assert_allclose_quantity(s3.rho, expected.rho)
 
+    def test_representation_shortcuts(self):
+        """Test that shortcuts in ``represent_as`` don't fail."""
+        difs = CylindricalDifferential(
+            d_rho=4 * u.km / u.s, d_phi=5 * u.mas / u.yr, d_z=6 * u.km / u.s
+        )
+        cyl = CylindricalRepresentation(
+            rho=1 * u.kpc, phi=2 * u.deg, z=3 * u.kpc, differentials={"s": difs}
+        )
+
+        # PhysicsSpherical Representation
+        got = cyl.represent_as(
+            PhysicsSphericalRepresentation, PhysicsSphericalDifferential
+        )
+        expected = BaseRepresentation.represent_as(
+            cyl, PhysicsSphericalRepresentation, PhysicsSphericalDifferential
+        )
+        assert_allclose_quantity(got.r, expected.r)
+        assert_allclose_quantity(got.phi, expected.phi)
+        assert_allclose_quantity(got.theta, expected.theta)
+        assert representation_equal_up_to_angular_type(got, expected)
+
+    def test_to_physicsspherical_at_the_origin(self):
+        """Test that the transformation to physicsspherical at the origin preserves phi."""
+        cyl = CylindricalRepresentation(
+            rho=0 * u.kpc,
+            phi=23.5 * u.deg,
+            z=3 * u.kpc,
+        )
+        sph = cyl.represent_as(PhysicsSphericalRepresentation)
+        assert_allclose(sph.r, 3 * u.kpc)
+        assert_allclose(sph.theta, 0 * u.deg)
+        assert cyl.phi == 23.5 * u.deg  # phi is preserved exactly
+
 
 class TestUnitSphericalCosLatDifferential:
     @pytest.mark.parametrize("matrix", list(matrices.values()))
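Both origin tests above guard the same corner case. A short standalone illustration (plain numpy, hypothetical values; not taken from the test patch) of why the generic route through Cartesian coordinates would lose phi at the origin, which the new shortcuts avoid:

import numpy as np

phi_in = np.deg2rad(270.0)
rho = 0.0                                   # point on the z axis, so x = y = 0
x, y = rho * np.cos(phi_in), rho * np.sin(phi_in)
phi_back = np.arctan2(y, x)                 # arctan2(0, 0) is 0: the original azimuth is gone
print(np.rad2deg(phi_back))                 # prints 0.0, not 270.0

# The direct represent_as() shortcuts simply reuse self.phi, which is why the
# tests can assert exact preservation (270 deg and 23.5 deg respectively).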
| 2024-02-29T23:40:43
|
{}
|
{"astropy/coordinates/representation/cylindrical.py": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\"\"\"Cylindrical representations and differentials.\"\"\"\n\nimport operator\n\nimport numpy as np\n\nimport astropy.units as u\nfrom astropy.coordinates.angles import Angle\nfrom astropy.utils.compat import COPY_IF_NEEDED\n\nfrom .base import BaseDifferential, BaseRepresentation\nfrom .cartesian import CartesianRepresentation\nfrom .spherical import _spherical_op_funcs\n\n\nclass CylindricalRepresentation(BaseRepresentation):\n \"\"\"\n Representation of points in 3D cylindrical coordinates.\n\n Parameters\n ----------\n rho : `~astropy.units.Quantity`\n The distance from the z axis to the point(s).\n\n phi : `~astropy.units.Quantity` or str\n The azimuth of the point(s), in angular units, which will be wrapped\n to an angle between 0 and 360 degrees. This can also be instances of\n `~astropy.coordinates.Angle`,\n\n z : `~astropy.units.Quantity`\n The z coordinate(s) of the point(s)\n\n differentials : dict, `~astropy.coordinates.CylindricalDifferential`, optional\n Any differential classes that should be associated with this\n representation. The input must either be a single\n `~astropy.coordinates.CylindricalDifferential` instance, or a dictionary of of differential\n instances with keys set to a string representation of the SI unit with\n which the differential (derivative) is taken. For example, for a\n velocity differential on a positional representation, the key would be\n ``'s'`` for seconds, indicating that the derivative is a time\n derivative.\n\n copy : bool, optional\n If `True` (default), arrays will be copied. If `False`, arrays will\n be references, though possibly broadcast to ensure matching shapes.\n \"\"\"\n\n attr_classes = {\"rho\": u.Quantity, \"phi\": Angle, \"z\": u.Quantity}\n\n def __init__(self, rho, phi=None, z=None, differentials=None, copy=True):\n super().__init__(rho, phi, z, copy=copy, differentials=differentials)\n\n if not self._rho.unit.is_equivalent(self._z.unit):\n raise u.UnitsError(\"rho and z should have matching physical types\")\n\n @property\n def rho(self):\n \"\"\"\n The distance of the point(s) from the z-axis.\n \"\"\"\n return self._rho\n\n @property\n def phi(self):\n \"\"\"\n The azimuth of the point(s).\n \"\"\"\n return self._phi\n\n @property\n def z(self):\n \"\"\"\n The height of the point(s).\n \"\"\"\n return self._z\n\n def unit_vectors(self):\n sinphi, cosphi = np.sin(self.phi), np.cos(self.phi)\n l = np.broadcast_to(1.0, self.shape)\n return {\n \"rho\": CartesianRepresentation(cosphi, sinphi, 0, copy=COPY_IF_NEEDED),\n \"phi\": CartesianRepresentation(-sinphi, cosphi, 0, copy=COPY_IF_NEEDED),\n \"z\": CartesianRepresentation(0, 0, l, unit=u.one, copy=COPY_IF_NEEDED),\n }\n\n def scale_factors(self):\n rho = self.rho / u.radian\n l = np.broadcast_to(1.0 * u.one, self.shape, subok=True)\n return {\"rho\": l, \"phi\": rho, \"z\": l}\n\n @classmethod\n def from_cartesian(cls, cart):\n \"\"\"\n Converts 3D rectangular cartesian coordinates to cylindrical polar\n coordinates.\n \"\"\"\n rho = np.hypot(cart.x, cart.y)\n phi = np.arctan2(cart.y, cart.x)\n z = cart.z\n\n return cls(rho=rho, phi=phi, z=z, copy=False)\n\n def to_cartesian(self):\n \"\"\"\n Converts cylindrical polar coordinates to 3D rectangular cartesian\n coordinates.\n \"\"\"\n x = self.rho * np.cos(self.phi)\n y = self.rho * np.sin(self.phi)\n z = self.z\n\n return CartesianRepresentation(x=x, y=y, z=z, copy=False)\n\n def _scale_operation(self, 
op, *args):\n if any(\n differential.base_representation is not self.__class__\n for differential in self.differentials.values()\n ):\n return super()._scale_operation(op, *args)\n\n phi_op, _, rho_op = _spherical_op_funcs(op, *args)\n z_op = lambda x: op(x, *args)\n\n result = self.__class__(\n rho_op(self.rho), phi_op(self.phi), z_op(self.z), copy=COPY_IF_NEEDED\n )\n for key, differential in self.differentials.items():\n new_comps = (\n op(getattr(differential, comp))\n for op, comp in zip(\n (rho_op, operator.pos, z_op), differential.components\n )\n )\n result.differentials[key] = differential.__class__(*new_comps, copy=False)\n return result\n\n\nclass CylindricalDifferential(BaseDifferential):\n \"\"\"Differential(s) of points in cylindrical coordinates.\n\n Parameters\n ----------\n d_rho : `~astropy.units.Quantity` ['speed']\n The differential cylindrical radius.\n d_phi : `~astropy.units.Quantity` ['angular speed']\n The differential azimuth.\n d_z : `~astropy.units.Quantity` ['speed']\n The differential height.\n copy : bool, optional\n If `True` (default), arrays will be copied. If `False`, arrays will\n be references, though possibly broadcast to ensure matching shapes.\n \"\"\"\n\n base_representation = CylindricalRepresentation\n\n def __init__(self, d_rho, d_phi=None, d_z=None, copy=True):\n super().__init__(d_rho, d_phi, d_z, copy=copy)\n if not self._d_rho.unit.is_equivalent(self._d_z.unit):\n raise u.UnitsError(\"d_rho and d_z should have equivalent units.\")\n", "astropy/coordinates/representation/spherical.py": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\"\"\"Spherical representations and differentials.\"\"\"\nimport operator\n\nimport numpy as np\nfrom erfa import ufunc as erfa_ufunc\n\nimport astropy.units as u\nfrom astropy.coordinates.angles import Angle, Latitude, Longitude\nfrom astropy.coordinates.distances import Distance\nfrom astropy.coordinates.matrix_utilities import is_O3\nfrom astropy.utils import classproperty\nfrom astropy.utils.compat import COPY_IF_NEEDED\n\nfrom .base import BaseDifferential, BaseRepresentation\nfrom .cartesian import CartesianRepresentation\n\n\nclass UnitSphericalRepresentation(BaseRepresentation):\n \"\"\"\n Representation of points on a unit sphere.\n\n Parameters\n ----------\n lon, lat : `~astropy.units.Quantity` ['angle'] or str\n The longitude and latitude of the point(s), in angular units. The\n latitude should be between -90 and 90 degrees, and the longitude will\n be wrapped to an angle between 0 and 360 degrees. These can also be\n instances of `~astropy.coordinates.Angle`,\n `~astropy.coordinates.Longitude`, or `~astropy.coordinates.Latitude`.\n\n differentials : dict, `~astropy.coordinates.BaseDifferential`, optional\n Any differential classes that should be associated with this\n representation. The input must either be a single `~astropy.coordinates.BaseDifferential`\n instance (see `._compatible_differentials` for valid types), or a\n dictionary of of differential instances with keys set to a string\n representation of the SI unit with which the differential (derivative)\n is taken. For example, for a velocity differential on a positional\n representation, the key would be ``'s'`` for seconds, indicating that\n the derivative is a time derivative.\n\n copy : bool, optional\n If `True` (default), arrays will be copied. 
If `False`, arrays will\n be references, though possibly broadcast to ensure matching shapes.\n \"\"\"\n\n attr_classes = {\"lon\": Longitude, \"lat\": Latitude}\n\n @classproperty\n def _dimensional_representation(cls):\n return SphericalRepresentation\n\n def __init__(self, lon, lat=None, differentials=None, copy=True):\n super().__init__(lon, lat, differentials=differentials, copy=copy)\n\n @classproperty\n def _compatible_differentials(cls):\n return [\n UnitSphericalDifferential,\n UnitSphericalCosLatDifferential,\n SphericalDifferential,\n SphericalCosLatDifferential,\n RadialDifferential,\n ]\n\n # Could let the metaclass define these automatically, but good to have\n # a bit clearer docstrings.\n @property\n def lon(self):\n \"\"\"\n The longitude of the point(s).\n \"\"\"\n return self._lon\n\n @property\n def lat(self):\n \"\"\"\n The latitude of the point(s).\n \"\"\"\n return self._lat\n\n def unit_vectors(self):\n sinlon, coslon = np.sin(self.lon), np.cos(self.lon)\n sinlat, coslat = np.sin(self.lat), np.cos(self.lat)\n return {\n \"lon\": CartesianRepresentation(-sinlon, coslon, 0.0, copy=COPY_IF_NEEDED),\n \"lat\": CartesianRepresentation(\n -sinlat * coslon, -sinlat * sinlon, coslat, copy=COPY_IF_NEEDED\n ),\n }\n\n def scale_factors(self, omit_coslat=False):\n sf_lat = np.broadcast_to(1.0 / u.radian, self.shape, subok=True)\n sf_lon = sf_lat if omit_coslat else np.cos(self.lat) / u.radian\n return {\"lon\": sf_lon, \"lat\": sf_lat}\n\n def to_cartesian(self):\n \"\"\"\n Converts spherical polar coordinates to 3D rectangular cartesian\n coordinates.\n \"\"\"\n # erfa s2c: Convert [unit]spherical coordinates to Cartesian.\n p = erfa_ufunc.s2c(self.lon, self.lat)\n return CartesianRepresentation(p, xyz_axis=-1, copy=False)\n\n @classmethod\n def from_cartesian(cls, cart):\n \"\"\"\n Converts 3D rectangular cartesian coordinates to spherical polar\n coordinates.\n \"\"\"\n p = cart.get_xyz(xyz_axis=-1)\n # erfa c2s: P-vector to [unit]spherical coordinates.\n return cls(*erfa_ufunc.c2s(p), copy=False)\n\n def represent_as(self, other_class, differential_class=None):\n # Take a short cut if the other class is a spherical representation\n # TODO! for differential_class. This cannot (currently) be implemented\n # like in the other Representations since `_re_represent_differentials`\n # keeps differentials' unit keys, but this can result in a mismatch\n # between the UnitSpherical expected key (e.g. \"s\") and that expected\n # in the other class (here \"s / m\"). 
For more info, see PR #11467\n if isinstance(other_class, type) and not differential_class:\n if issubclass(other_class, PhysicsSphericalRepresentation):\n return other_class(\n phi=self.lon,\n theta=90 * u.deg - self.lat,\n r=1.0,\n copy=COPY_IF_NEEDED,\n )\n elif issubclass(other_class, SphericalRepresentation):\n return other_class(\n lon=self.lon,\n lat=self.lat,\n distance=1.0,\n copy=COPY_IF_NEEDED,\n )\n\n return super().represent_as(other_class, differential_class)\n\n def transform(self, matrix):\n r\"\"\"Transform the unit-spherical coordinates using a 3x3 matrix.\n\n This returns a new representation and does not modify the original one.\n Any differentials attached to this representation will also be\n transformed.\n\n Parameters\n ----------\n matrix : (3,3) array-like\n A 3x3 matrix, such as a rotation matrix (or a stack of matrices).\n\n Returns\n -------\n `~astropy.coordinates.UnitSphericalRepresentation` or `~astropy.coordinates.SphericalRepresentation`\n If ``matrix`` is O(3) -- :math:`M \\dot M^T = I` -- like a rotation,\n then the result is a `~astropy.coordinates.UnitSphericalRepresentation`.\n All other matrices will change the distance, so the dimensional\n representation is used instead.\n\n \"\"\"\n # the transformation matrix does not need to be a rotation matrix,\n # so the unit-distance is not guaranteed. For speed, we check if the\n # matrix is in O(3) and preserves lengths.\n if np.all(is_O3(matrix)): # remain in unit-rep\n xyz = erfa_ufunc.s2c(self.lon, self.lat)\n p = erfa_ufunc.rxp(matrix, xyz)\n lon, lat = erfa_ufunc.c2s(p)\n rep = self.__class__(lon=lon, lat=lat)\n # handle differentials\n new_diffs = {\n k: d.transform(matrix, self, rep) for k, d in self.differentials.items()\n }\n rep = rep.with_differentials(new_diffs)\n\n else: # switch to dimensional representation\n rep = self._dimensional_representation(\n lon=self.lon, lat=self.lat, distance=1, differentials=self.differentials\n ).transform(matrix)\n\n return rep\n\n def _scale_operation(self, op, *args):\n return self._dimensional_representation(\n lon=self.lon, lat=self.lat, distance=1.0, differentials=self.differentials\n )._scale_operation(op, *args)\n\n def __neg__(self):\n if any(\n differential.base_representation is not self.__class__\n for differential in self.differentials.values()\n ):\n return super().__neg__()\n\n result = self.__class__(self.lon + 180.0 * u.deg, -self.lat, copy=False)\n for key, differential in self.differentials.items():\n new_comps = (\n op(getattr(differential, comp))\n for op, comp in zip(\n (operator.pos, operator.neg), differential.components\n )\n )\n result.differentials[key] = differential.__class__(*new_comps, copy=False)\n return result\n\n def norm(self):\n \"\"\"Vector norm.\n\n The norm is the standard Frobenius norm, i.e., the square root of the\n sum of the squares of all components with non-angular units, which is\n always unity for vectors on the unit sphere.\n\n Returns\n -------\n norm : `~astropy.units.Quantity` ['dimensionless']\n Dimensionless ones, with the same shape as the representation.\n \"\"\"\n return u.Quantity(np.ones(self.shape), u.dimensionless_unscaled, copy=False)\n\n def _combine_operation(self, op, other, reverse=False):\n self._raise_if_has_differentials(op.__name__)\n\n result = self.to_cartesian()._combine_operation(op, other, reverse)\n if result is NotImplemented:\n return NotImplemented\n else:\n return self._dimensional_representation.from_cartesian(result)\n\n def mean(self, *args, **kwargs):\n \"\"\"Vector mean.\n\n 
The representation is converted to cartesian, the means of the x, y,\n and z components are calculated, and the result is converted to a\n `~astropy.coordinates.SphericalRepresentation`.\n\n Refer to `~numpy.mean` for full documentation of the arguments, noting\n that ``axis`` is the entry in the ``shape`` of the representation, and\n that the ``out`` argument cannot be used.\n \"\"\"\n self._raise_if_has_differentials(\"mean\")\n return self._dimensional_representation.from_cartesian(\n self.to_cartesian().mean(*args, **kwargs)\n )\n\n def sum(self, *args, **kwargs):\n \"\"\"Vector sum.\n\n The representation is converted to cartesian, the sums of the x, y,\n and z components are calculated, and the result is converted to a\n `~astropy.coordinates.SphericalRepresentation`.\n\n Refer to `~numpy.sum` for full documentation of the arguments, noting\n that ``axis`` is the entry in the ``shape`` of the representation, and\n that the ``out`` argument cannot be used.\n \"\"\"\n self._raise_if_has_differentials(\"sum\")\n return self._dimensional_representation.from_cartesian(\n self.to_cartesian().sum(*args, **kwargs)\n )\n\n def cross(self, other):\n \"\"\"Cross product of two representations.\n\n The calculation is done by converting both ``self`` and ``other``\n to `~astropy.coordinates.CartesianRepresentation`, and converting the\n result back to `~astropy.coordinates.SphericalRepresentation`.\n\n Parameters\n ----------\n other : `~astropy.coordinates.BaseRepresentation` subclass instance\n The representation to take the cross product with.\n\n Returns\n -------\n cross_product : `~astropy.coordinates.SphericalRepresentation`\n With vectors perpendicular to both ``self`` and ``other``.\n \"\"\"\n self._raise_if_has_differentials(\"cross\")\n return self._dimensional_representation.from_cartesian(\n self.to_cartesian().cross(other)\n )\n\n\nclass RadialRepresentation(BaseRepresentation):\n \"\"\"\n Representation of the distance of points from the origin.\n\n Note that this is mostly intended as an internal helper representation.\n It can do little else but being used as a scale in multiplication.\n\n Parameters\n ----------\n distance : `~astropy.units.Quantity` ['length']\n The distance of the point(s) from the origin.\n\n differentials : dict, `~astropy.coordinates.BaseDifferential`, optional\n Any differential classes that should be associated with this\n representation. The input must either be a single `~astropy.coordinates.BaseDifferential`\n instance (see `._compatible_differentials` for valid types), or a\n dictionary of of differential instances with keys set to a string\n representation of the SI unit with which the differential (derivative)\n is taken. For example, for a velocity differential on a positional\n representation, the key would be ``'s'`` for seconds, indicating that\n the derivative is a time derivative.\n\n copy : bool, optional\n If `True` (default), arrays will be copied. 
If `False`, arrays will\n be references, though possibly broadcast to ensure matching shapes.\n \"\"\"\n\n attr_classes = {\"distance\": u.Quantity}\n\n def __init__(self, distance, differentials=None, copy=True):\n super().__init__(distance, differentials=differentials, copy=copy)\n\n @property\n def distance(self):\n \"\"\"\n The distance from the origin to the point(s).\n \"\"\"\n return self._distance\n\n def unit_vectors(self):\n \"\"\"Cartesian unit vectors are undefined for radial representation.\"\"\"\n raise NotImplementedError(\n f\"Cartesian unit vectors are undefined for {self.__class__} instances\"\n )\n\n def scale_factors(self):\n l = np.broadcast_to(1.0 * u.one, self.shape, subok=True)\n return {\"distance\": l}\n\n def to_cartesian(self):\n \"\"\"Cannot convert radial representation to cartesian.\"\"\"\n raise NotImplementedError(\n f\"cannot convert {self.__class__} instance to cartesian.\"\n )\n\n @classmethod\n def from_cartesian(cls, cart):\n \"\"\"\n Converts 3D rectangular cartesian coordinates to radial coordinate.\n \"\"\"\n return cls(distance=cart.norm(), copy=False)\n\n def __mul__(self, other):\n if isinstance(other, BaseRepresentation):\n return self.distance * other\n else:\n return super().__mul__(other)\n\n def norm(self):\n \"\"\"Vector norm.\n\n Just the distance itself.\n\n Returns\n -------\n norm : `~astropy.units.Quantity` ['dimensionless']\n Dimensionless ones, with the same shape as the representation.\n \"\"\"\n return self.distance\n\n def _combine_operation(self, op, other, reverse=False):\n return NotImplemented\n\n def transform(self, matrix):\n \"\"\"Radial representations cannot be transformed by a Cartesian matrix.\n\n Parameters\n ----------\n matrix : array-like\n The transformation matrix in a Cartesian basis.\n Must be a multiplication: a diagonal matrix with identical elements.\n Must have shape (..., 3, 3), where the last 2 indices are for the\n matrix on each other axis. Make sure that the matrix shape is\n compatible with the shape of this representation.\n\n Raises\n ------\n ValueError\n If the matrix is not a multiplication.\n\n \"\"\"\n scl = matrix[..., 0, 0]\n # check that the matrix is a scaled identity matrix on the last 2 axes.\n if np.any(matrix != scl[..., np.newaxis, np.newaxis] * np.identity(3)):\n raise ValueError(\n \"Radial representations can only be \"\n \"transformed by a scaled identity matrix\"\n )\n\n return self * scl\n\n\ndef _spherical_op_funcs(op, *args):\n \"\"\"For given operator, return functions that adjust lon, lat, distance.\"\"\"\n if op is operator.neg:\n return lambda x: x + 180 * u.deg, operator.neg, operator.pos\n\n try:\n scale_sign = np.sign(args[0])\n except Exception:\n # This should always work, even if perhaps we get a negative distance.\n return operator.pos, operator.pos, lambda x: op(x, *args)\n\n scale = abs(args[0])\n return (\n lambda x: x + 180 * u.deg * np.signbit(scale_sign),\n lambda x: x * scale_sign,\n lambda x: op(x, scale),\n )\n\n\nclass SphericalRepresentation(BaseRepresentation):\n \"\"\"\n Representation of points in 3D spherical coordinates.\n\n Parameters\n ----------\n lon, lat : `~astropy.units.Quantity` ['angle']\n The longitude and latitude of the point(s), in angular units. The\n latitude should be between -90 and 90 degrees, and the longitude will\n be wrapped to an angle between 0 and 360 degrees. 
These can also be\n instances of `~astropy.coordinates.Angle`,\n `~astropy.coordinates.Longitude`, or `~astropy.coordinates.Latitude`.\n\n distance : `~astropy.units.Quantity` ['length']\n The distance to the point(s). If the distance is a length, it is\n passed to the :class:`~astropy.coordinates.Distance` class, otherwise\n it is passed to the :class:`~astropy.units.Quantity` class.\n\n differentials : dict, `~astropy.coordinates.BaseDifferential`, optional\n Any differential classes that should be associated with this\n representation. The input must either be a single `~astropy.coordinates.BaseDifferential`\n instance (see `._compatible_differentials` for valid types), or a\n dictionary of of differential instances with keys set to a string\n representation of the SI unit with which the differential (derivative)\n is taken. For example, for a velocity differential on a positional\n representation, the key would be ``'s'`` for seconds, indicating that\n the derivative is a time derivative.\n\n copy : bool, optional\n If `True` (default), arrays will be copied. If `False`, arrays will\n be references, though possibly broadcast to ensure matching shapes.\n \"\"\"\n\n attr_classes = {\"lon\": Longitude, \"lat\": Latitude, \"distance\": u.Quantity}\n _unit_representation = UnitSphericalRepresentation\n\n def __init__(self, lon, lat=None, distance=None, differentials=None, copy=True):\n super().__init__(lon, lat, distance, copy=copy, differentials=differentials)\n if (\n not isinstance(self._distance, Distance)\n and self._distance.unit.physical_type == \"length\"\n ):\n try:\n self._distance = Distance(self._distance, copy=False)\n except ValueError as e:\n if e.args[0].startswith(\"distance must be >= 0\"):\n raise ValueError(\n \"Distance must be >= 0. 
To allow negative distance values, you\"\n \" must explicitly pass in a `Distance` object with the \"\n \"argument 'allow_negative=True'.\"\n ) from e\n raise\n\n @classproperty\n def _compatible_differentials(cls):\n return [\n UnitSphericalDifferential,\n UnitSphericalCosLatDifferential,\n SphericalDifferential,\n SphericalCosLatDifferential,\n RadialDifferential,\n ]\n\n @property\n def lon(self):\n \"\"\"\n The longitude of the point(s).\n \"\"\"\n return self._lon\n\n @property\n def lat(self):\n \"\"\"\n The latitude of the point(s).\n \"\"\"\n return self._lat\n\n @property\n def distance(self):\n \"\"\"\n The distance from the origin to the point(s).\n \"\"\"\n return self._distance\n\n def unit_vectors(self):\n sinlon, coslon = np.sin(self.lon), np.cos(self.lon)\n sinlat, coslat = np.sin(self.lat), np.cos(self.lat)\n return {\n \"lon\": CartesianRepresentation(-sinlon, coslon, 0.0, copy=COPY_IF_NEEDED),\n \"lat\": CartesianRepresentation(\n -sinlat * coslon, -sinlat * sinlon, coslat, copy=COPY_IF_NEEDED\n ),\n \"distance\": CartesianRepresentation(\n coslat * coslon, coslat * sinlon, sinlat, copy=COPY_IF_NEEDED\n ),\n }\n\n def scale_factors(self, omit_coslat=False):\n sf_lat = self.distance / u.radian\n sf_lon = sf_lat if omit_coslat else sf_lat * np.cos(self.lat)\n sf_distance = np.broadcast_to(1.0 * u.one, self.shape, subok=True)\n return {\"lon\": sf_lon, \"lat\": sf_lat, \"distance\": sf_distance}\n\n def represent_as(self, other_class, differential_class=None):\n # Take a short cut if the other class is a spherical representation\n\n if isinstance(other_class, type):\n if issubclass(other_class, PhysicsSphericalRepresentation):\n diffs = self._re_represent_differentials(\n other_class, differential_class\n )\n return other_class(\n phi=self.lon,\n theta=90 * u.deg - self.lat,\n r=self.distance,\n differentials=diffs,\n copy=False,\n )\n\n elif issubclass(other_class, UnitSphericalRepresentation):\n diffs = self._re_represent_differentials(\n other_class, differential_class\n )\n return other_class(\n lon=self.lon, lat=self.lat, differentials=diffs, copy=False\n )\n\n return super().represent_as(other_class, differential_class)\n\n def to_cartesian(self):\n \"\"\"\n Converts spherical polar coordinates to 3D rectangular cartesian\n coordinates.\n \"\"\"\n # We need to convert Distance to Quantity to allow negative values.\n if isinstance(self.distance, Distance):\n d = self.distance.view(u.Quantity)\n else:\n d = self.distance\n\n # erfa s2p: Convert spherical polar coordinates to p-vector.\n p = erfa_ufunc.s2p(self.lon, self.lat, d)\n\n return CartesianRepresentation(p, xyz_axis=-1, copy=False)\n\n @classmethod\n def from_cartesian(cls, cart):\n \"\"\"\n Converts 3D rectangular cartesian coordinates to spherical polar\n coordinates.\n \"\"\"\n p = cart.get_xyz(xyz_axis=-1)\n # erfa p2s: P-vector to spherical polar coordinates.\n return cls(*erfa_ufunc.p2s(p), copy=False)\n\n def transform(self, matrix):\n \"\"\"Transform the spherical coordinates using a 3x3 matrix.\n\n This returns a new representation and does not modify the original one.\n Any differentials attached to this representation will also be\n transformed.\n\n Parameters\n ----------\n matrix : (3,3) array-like\n A 3x3 matrix, such as a rotation matrix (or a stack of matrices).\n\n \"\"\"\n xyz = erfa_ufunc.s2c(self.lon, self.lat)\n p = erfa_ufunc.rxp(matrix, xyz)\n lon, lat, ur = erfa_ufunc.p2s(p)\n rep = self.__class__(lon=lon, lat=lat, distance=self.distance * ur)\n\n # handle differentials\n new_diffs = {\n 
k: d.transform(matrix, self, rep) for k, d in self.differentials.items()\n }\n return rep.with_differentials(new_diffs)\n\n def norm(self):\n \"\"\"Vector norm.\n\n The norm is the standard Frobenius norm, i.e., the square root of the\n sum of the squares of all components with non-angular units. For\n spherical coordinates, this is just the absolute value of the distance.\n\n Returns\n -------\n norm : `astropy.units.Quantity`\n Vector norm, with the same shape as the representation.\n \"\"\"\n return np.abs(self.distance)\n\n def _scale_operation(self, op, *args):\n # TODO: expand special-casing to UnitSpherical and RadialDifferential.\n if any(\n differential.base_representation is not self.__class__\n for differential in self.differentials.values()\n ):\n return super()._scale_operation(op, *args)\n\n lon_op, lat_op, distance_op = _spherical_op_funcs(op, *args)\n\n result = self.__class__(\n lon_op(self.lon),\n lat_op(self.lat),\n distance_op(self.distance),\n copy=COPY_IF_NEEDED,\n )\n for key, differential in self.differentials.items():\n new_comps = (\n op(getattr(differential, comp))\n for op, comp in zip(\n (operator.pos, lat_op, distance_op), differential.components\n )\n )\n result.differentials[key] = differential.__class__(*new_comps, copy=False)\n return result\n\n\nclass PhysicsSphericalRepresentation(BaseRepresentation):\n \"\"\"\n Representation of points in 3D spherical coordinates (using the physics\n convention of using ``phi`` and ``theta`` for azimuth and inclination\n from the pole).\n\n Parameters\n ----------\n phi, theta : `~astropy.units.Quantity` or str\n The azimuth and inclination of the point(s), in angular units. The\n inclination should be between 0 and 180 degrees, and the azimuth will\n be wrapped to an angle between 0 and 360 degrees. These can also be\n instances of `~astropy.coordinates.Angle`. If ``copy`` is False, `phi`\n will be changed inplace if it is not between 0 and 360 degrees.\n\n r : `~astropy.units.Quantity`\n The distance to the point(s). If the distance is a length, it is\n passed to the :class:`~astropy.coordinates.Distance` class, otherwise\n it is passed to the :class:`~astropy.units.Quantity` class.\n\n differentials : dict, `~astropy.coordinates.PhysicsSphericalDifferential`, optional\n Any differential classes that should be associated with this\n representation. The input must either be a single\n `~astropy.coordinates.PhysicsSphericalDifferential` instance, or a dictionary of of\n differential instances with keys set to a string representation of the\n SI unit with which the differential (derivative) is taken. For example,\n for a velocity differential on a positional representation, the key\n would be ``'s'`` for seconds, indicating that the derivative is a time\n derivative.\n\n copy : bool, optional\n If `True` (default), arrays will be copied. 
If `False`, arrays will\n be references, though possibly broadcast to ensure matching shapes.\n \"\"\"\n\n attr_classes = {\"phi\": Angle, \"theta\": Angle, \"r\": u.Quantity}\n\n def __init__(self, phi, theta=None, r=None, differentials=None, copy=True):\n super().__init__(phi, theta, r, copy=copy, differentials=differentials)\n\n # Wrap/validate phi/theta\n # Note that _phi already holds our own copy if copy=True.\n self._phi.wrap_at(360 * u.deg, inplace=True)\n\n if np.any(self._theta < 0.0 * u.deg) or np.any(self._theta > 180.0 * u.deg):\n raise ValueError(\n \"Inclination angle(s) must be within 0 deg <= angle <= 180 deg, \"\n f\"got {theta.to(u.degree)}\"\n )\n\n if self._r.unit.physical_type == \"length\":\n self._r = self._r.view(Distance)\n\n @property\n def phi(self):\n \"\"\"\n The azimuth of the point(s).\n \"\"\"\n return self._phi\n\n @property\n def theta(self):\n \"\"\"\n The elevation of the point(s).\n \"\"\"\n return self._theta\n\n @property\n def r(self):\n \"\"\"\n The distance from the origin to the point(s).\n \"\"\"\n return self._r\n\n def unit_vectors(self):\n sinphi, cosphi = np.sin(self.phi), np.cos(self.phi)\n sintheta, costheta = np.sin(self.theta), np.cos(self.theta)\n return {\n \"phi\": CartesianRepresentation(-sinphi, cosphi, 0.0, copy=COPY_IF_NEEDED),\n \"theta\": CartesianRepresentation(\n costheta * cosphi, costheta * sinphi, -sintheta, copy=COPY_IF_NEEDED\n ),\n \"r\": CartesianRepresentation(\n sintheta * cosphi, sintheta * sinphi, costheta, copy=COPY_IF_NEEDED\n ),\n }\n\n def scale_factors(self):\n r = self.r / u.radian\n sintheta = np.sin(self.theta)\n l = np.broadcast_to(1.0 * u.one, self.shape, subok=True)\n return {\"phi\": r * sintheta, \"theta\": r, \"r\": l}\n\n def represent_as(self, other_class, differential_class=None):\n # Take a short cut if the other class is a spherical representation\n\n if isinstance(other_class, type):\n if issubclass(other_class, SphericalRepresentation):\n diffs = self._re_represent_differentials(\n other_class, differential_class\n )\n return other_class(\n lon=self.phi,\n lat=90 * u.deg - self.theta,\n distance=self.r,\n differentials=diffs,\n copy=False,\n )\n elif issubclass(other_class, UnitSphericalRepresentation):\n diffs = self._re_represent_differentials(\n other_class, differential_class\n )\n return other_class(\n lon=self.phi,\n lat=90 * u.deg - self.theta,\n differentials=diffs,\n copy=False,\n )\n\n return super().represent_as(other_class, differential_class)\n\n def to_cartesian(self):\n \"\"\"\n Converts spherical polar coordinates to 3D rectangular cartesian\n coordinates.\n \"\"\"\n # We need to convert Distance to Quantity to allow negative values.\n if isinstance(self.r, Distance):\n d = self.r.view(u.Quantity)\n else:\n d = self.r\n\n x = d * np.sin(self.theta) * np.cos(self.phi)\n y = d * np.sin(self.theta) * np.sin(self.phi)\n z = d * np.cos(self.theta)\n\n return CartesianRepresentation(x=x, y=y, z=z, copy=False)\n\n @classmethod\n def from_cartesian(cls, cart):\n \"\"\"\n Converts 3D rectangular cartesian coordinates to spherical polar\n coordinates.\n \"\"\"\n s = np.hypot(cart.x, cart.y)\n r = np.hypot(s, cart.z)\n\n phi = np.arctan2(cart.y, cart.x)\n theta = np.arctan2(s, cart.z)\n\n return cls(phi=phi, theta=theta, r=r, copy=False)\n\n def transform(self, matrix):\n \"\"\"Transform the spherical coordinates using a 3x3 matrix.\n\n This returns a new representation and does not modify the original one.\n Any differentials attached to this representation will also be\n transformed.\n\n 
Parameters\n ----------\n matrix : (3,3) array-like\n A 3x3 matrix, such as a rotation matrix (or a stack of matrices).\n\n \"\"\"\n # apply transformation in unit-spherical coordinates\n xyz = erfa_ufunc.s2c(self.phi, 90 * u.deg - self.theta)\n p = erfa_ufunc.rxp(matrix, xyz)\n lon, lat, ur = erfa_ufunc.p2s(p) # `ur` is transformed unit-`r`\n # create transformed physics-spherical representation,\n # reapplying the distance scaling\n rep = self.__class__(phi=lon, theta=90 * u.deg - lat, r=self.r * ur)\n\n new_diffs = {\n k: d.transform(matrix, self, rep) for k, d in self.differentials.items()\n }\n return rep.with_differentials(new_diffs)\n\n def norm(self):\n \"\"\"Vector norm.\n\n The norm is the standard Frobenius norm, i.e., the square root of the\n sum of the squares of all components with non-angular units. For\n spherical coordinates, this is just the absolute value of the radius.\n\n Returns\n -------\n norm : `astropy.units.Quantity`\n Vector norm, with the same shape as the representation.\n \"\"\"\n return np.abs(self.r)\n\n def _scale_operation(self, op, *args):\n if any(\n differential.base_representation is not self.__class__\n for differential in self.differentials.values()\n ):\n return super()._scale_operation(op, *args)\n\n phi_op, adjust_theta_sign, r_op = _spherical_op_funcs(op, *args)\n # Also run phi_op on theta to ensure theta remains between 0 and 180:\n # any time the scale is negative, we do -theta + 180 degrees.\n result = self.__class__(\n phi_op(self.phi),\n phi_op(adjust_theta_sign(self.theta)),\n r_op(self.r),\n copy=COPY_IF_NEEDED,\n )\n for key, differential in self.differentials.items():\n new_comps = (\n op(getattr(differential, comp))\n for op, comp in zip(\n (operator.pos, adjust_theta_sign, r_op), differential.components\n )\n )\n result.differentials[key] = differential.__class__(*new_comps, copy=False)\n return result\n\n\nclass BaseSphericalDifferential(BaseDifferential):\n def _d_lon_coslat(self, base):\n \"\"\"Convert longitude differential d_lon to d_lon_coslat.\n\n Parameters\n ----------\n base : instance of ``cls.base_representation``\n The base from which the latitude will be taken.\n \"\"\"\n self._check_base(base)\n return self.d_lon * np.cos(base.lat)\n\n @classmethod\n def _get_d_lon(cls, d_lon_coslat, base):\n \"\"\"Convert longitude differential d_lon_coslat to d_lon.\n\n Parameters\n ----------\n d_lon_coslat : `~astropy.units.Quantity`\n Longitude differential that includes ``cos(lat)``.\n base : instance of ``cls.base_representation``\n The base from which the latitude will be taken.\n \"\"\"\n cls._check_base(base)\n return d_lon_coslat / np.cos(base.lat)\n\n def _combine_operation(self, op, other, reverse=False):\n \"\"\"Combine two differentials, or a differential with a representation.\n\n If ``other`` is of the same differential type as ``self``, the\n components will simply be combined. 
If both are different parts of\n a `~astropy.coordinates.SphericalDifferential` (e.g., a\n `~astropy.coordinates.UnitSphericalDifferential` and a\n `~astropy.coordinates.RadialDifferential`), they will combined\n appropriately.\n\n If ``other`` is a representation, it will be used as a base for which\n to evaluate the differential, and the result is a new representation.\n\n Parameters\n ----------\n op : `~operator` callable\n Operator to apply (e.g., `~operator.add`, `~operator.sub`, etc.\n other : `~astropy.coordinates.BaseRepresentation` subclass instance\n The other differential or representation.\n reverse : bool\n Whether the operands should be reversed (e.g., as we got here via\n ``self.__rsub__`` because ``self`` is a subclass of ``other``).\n \"\"\"\n if (\n isinstance(other, BaseSphericalDifferential)\n and not isinstance(self, type(other))\n or isinstance(other, RadialDifferential)\n ):\n all_components = set(self.components) | set(other.components)\n first, second = (self, other) if not reverse else (other, self)\n result_args = {\n c: op(getattr(first, c, 0.0), getattr(second, c, 0.0))\n for c in all_components\n }\n return SphericalDifferential(**result_args)\n\n return super()._combine_operation(op, other, reverse)\n\n\nclass UnitSphericalDifferential(BaseSphericalDifferential):\n \"\"\"Differential(s) of points on a unit sphere.\n\n Parameters\n ----------\n d_lon, d_lat : `~astropy.units.Quantity`\n The longitude and latitude of the differentials.\n copy : bool, optional\n If `True` (default), arrays will be copied. If `False`, arrays will\n be references, though possibly broadcast to ensure matching shapes.\n \"\"\"\n\n base_representation = UnitSphericalRepresentation\n\n @classproperty\n def _dimensional_differential(cls):\n return SphericalDifferential\n\n def __init__(self, d_lon, d_lat=None, copy=True):\n super().__init__(d_lon, d_lat, copy=copy)\n if not self._d_lon.unit.is_equivalent(self._d_lat.unit):\n raise u.UnitsError(\"d_lon and d_lat should have equivalent units.\")\n\n @classmethod\n def from_cartesian(cls, other, base):\n # Go via the dimensional equivalent, so that the longitude and latitude\n # differentials correctly take into account the norm of the base.\n dimensional = cls._dimensional_differential.from_cartesian(other, base)\n return dimensional.represent_as(cls)\n\n def to_cartesian(self, base):\n if isinstance(base, SphericalRepresentation):\n scale = base.distance\n elif isinstance(base, PhysicsSphericalRepresentation):\n scale = base.r\n else:\n return super().to_cartesian(base)\n\n base = base.represent_as(UnitSphericalRepresentation)\n return scale * super().to_cartesian(base)\n\n def represent_as(self, other_class, base=None):\n # Only have enough information to represent other unit-spherical.\n if issubclass(other_class, UnitSphericalCosLatDifferential):\n return other_class(self._d_lon_coslat(base), self.d_lat)\n\n return super().represent_as(other_class, base)\n\n @classmethod\n def from_representation(cls, representation, base=None):\n # All spherical differentials can be done without going to Cartesian,\n # though CosLat needs base for the latitude.\n if isinstance(representation, SphericalDifferential):\n return cls(representation.d_lon, representation.d_lat)\n elif isinstance(\n representation,\n (SphericalCosLatDifferential, UnitSphericalCosLatDifferential),\n ):\n d_lon = cls._get_d_lon(representation.d_lon_coslat, base)\n return cls(d_lon, representation.d_lat)\n elif isinstance(representation, PhysicsSphericalDifferential):\n 
return cls(representation.d_phi, -representation.d_theta)\n\n return super().from_representation(representation, base)\n\n def transform(self, matrix, base, transformed_base):\n \"\"\"Transform differential using a 3x3 matrix in a Cartesian basis.\n\n This returns a new differential and does not modify the original one.\n\n Parameters\n ----------\n matrix : (3,3) array-like\n A 3x3 (or stack thereof) matrix, such as a rotation matrix.\n base : instance of ``cls.base_representation``\n Base relative to which the differentials are defined. If the other\n class is a differential representation, the base will be converted\n to its ``base_representation``.\n transformed_base : instance of ``cls.base_representation``\n Base relative to which the transformed differentials are defined.\n If the other class is a differential representation, the base will\n be converted to its ``base_representation``.\n \"\"\"\n # the transformation matrix does not need to be a rotation matrix,\n # so the unit-distance is not guaranteed. For speed, we check if the\n # matrix is in O(3) and preserves lengths.\n if np.all(is_O3(matrix)): # remain in unit-rep\n # TODO! implement without Cartesian intermediate step.\n # some of this can be moved to the parent class.\n diff = super().transform(matrix, base, transformed_base)\n\n else: # switch to dimensional representation\n du = self.d_lon.unit / base.lon.unit # derivative unit\n diff = self._dimensional_differential(\n d_lon=self.d_lon, d_lat=self.d_lat, d_distance=0 * du\n ).transform(matrix, base, transformed_base)\n\n return diff\n\n def _scale_operation(self, op, *args, scaled_base=False):\n if scaled_base:\n return self.copy()\n else:\n return super()._scale_operation(op, *args)\n\n\nclass SphericalDifferential(BaseSphericalDifferential):\n \"\"\"Differential(s) of points in 3D spherical coordinates.\n\n Parameters\n ----------\n d_lon, d_lat : `~astropy.units.Quantity`\n The differential longitude and latitude.\n d_distance : `~astropy.units.Quantity`\n The differential distance.\n copy : bool, optional\n If `True` (default), arrays will be copied. 
If `False`, arrays will\n be references, though possibly broadcast to ensure matching shapes.\n \"\"\"\n\n base_representation = SphericalRepresentation\n _unit_differential = UnitSphericalDifferential\n\n def __init__(self, d_lon, d_lat=None, d_distance=None, copy=True):\n super().__init__(d_lon, d_lat, d_distance, copy=copy)\n if not self._d_lon.unit.is_equivalent(self._d_lat.unit):\n raise u.UnitsError(\"d_lon and d_lat should have equivalent units.\")\n\n def represent_as(self, other_class, base=None):\n # All spherical differentials can be done without going to Cartesian,\n # though CosLat needs base for the latitude.\n if issubclass(other_class, UnitSphericalDifferential):\n return other_class(self.d_lon, self.d_lat)\n elif issubclass(other_class, RadialDifferential):\n return other_class(self.d_distance)\n elif issubclass(other_class, SphericalCosLatDifferential):\n return other_class(self._d_lon_coslat(base), self.d_lat, self.d_distance)\n elif issubclass(other_class, UnitSphericalCosLatDifferential):\n return other_class(self._d_lon_coslat(base), self.d_lat)\n elif issubclass(other_class, PhysicsSphericalDifferential):\n return other_class(self.d_lon, -self.d_lat, self.d_distance)\n else:\n return super().represent_as(other_class, base)\n\n @classmethod\n def from_representation(cls, representation, base=None):\n # Other spherical differentials can be done without going to Cartesian,\n # though CosLat needs base for the latitude.\n if isinstance(representation, SphericalCosLatDifferential):\n d_lon = cls._get_d_lon(representation.d_lon_coslat, base)\n return cls(d_lon, representation.d_lat, representation.d_distance)\n elif isinstance(representation, PhysicsSphericalDifferential):\n return cls(\n representation.d_phi, -representation.d_theta, representation.d_r\n )\n\n return super().from_representation(representation, base)\n\n def _scale_operation(self, op, *args, scaled_base=False):\n if scaled_base:\n return self.__class__(self.d_lon, self.d_lat, op(self.d_distance, *args))\n else:\n return super()._scale_operation(op, *args)\n\n\nclass BaseSphericalCosLatDifferential(BaseDifferential):\n \"\"\"Differentials from points on a spherical base representation.\n\n With cos(lat) assumed to be included in the longitude differential.\n \"\"\"\n\n @classmethod\n def _get_base_vectors(cls, base):\n \"\"\"Get unit vectors and scale factors from (unit)spherical base.\n\n Parameters\n ----------\n base : instance of ``self.base_representation``\n The points for which the unit vectors and scale factors should be\n retrieved.\n\n Returns\n -------\n unit_vectors : dict of `~astropy.coordinates.CartesianRepresentation`\n In the directions of the coordinates of base.\n scale_factors : dict of `~astropy.units.Quantity`\n Scale factors for each of the coordinates. 
The scale factor for\n longitude does not include the cos(lat) factor.\n\n Raises\n ------\n TypeError : if the base is not of the correct type\n \"\"\"\n cls._check_base(base)\n return base.unit_vectors(), base.scale_factors(omit_coslat=True)\n\n def _d_lon(self, base):\n \"\"\"Convert longitude differential with cos(lat) to one without.\n\n Parameters\n ----------\n base : instance of ``cls.base_representation``\n The base from which the latitude will be taken.\n \"\"\"\n self._check_base(base)\n return self.d_lon_coslat / np.cos(base.lat)\n\n @classmethod\n def _get_d_lon_coslat(cls, d_lon, base):\n \"\"\"Convert longitude differential d_lon to d_lon_coslat.\n\n Parameters\n ----------\n d_lon : `~astropy.units.Quantity`\n Value of the longitude differential without ``cos(lat)``.\n base : instance of ``cls.base_representation``\n The base from which the latitude will be taken.\n \"\"\"\n cls._check_base(base)\n return d_lon * np.cos(base.lat)\n\n def _combine_operation(self, op, other, reverse=False):\n \"\"\"Combine two differentials, or a differential with a representation.\n\n If ``other`` is of the same differential type as ``self``, the\n components will simply be combined. If both are different parts of\n a `~astropy.coordinates.SphericalDifferential` (e.g., a\n `~astropy.coordinates.UnitSphericalDifferential` and a\n `~astropy.coordinates.RadialDifferential`), they will combined\n appropriately.\n\n If ``other`` is a representation, it will be used as a base for which\n to evaluate the differential, and the result is a new representation.\n\n Parameters\n ----------\n op : `~operator` callable\n Operator to apply (e.g., `~operator.add`, `~operator.sub`, etc.\n other : `~astropy.coordinates.BaseRepresentation` subclass instance\n The other differential or representation.\n reverse : bool\n Whether the operands should be reversed (e.g., as we got here via\n ``self.__rsub__`` because ``self`` is a subclass of ``other``).\n \"\"\"\n if (\n isinstance(other, BaseSphericalCosLatDifferential)\n and not isinstance(self, type(other))\n or isinstance(other, RadialDifferential)\n ):\n all_components = set(self.components) | set(other.components)\n first, second = (self, other) if not reverse else (other, self)\n result_args = {\n c: op(getattr(first, c, 0.0), getattr(second, c, 0.0))\n for c in all_components\n }\n return SphericalCosLatDifferential(**result_args)\n\n return super()._combine_operation(op, other, reverse)\n\n\nclass UnitSphericalCosLatDifferential(BaseSphericalCosLatDifferential):\n \"\"\"Differential(s) of points on a unit sphere.\n\n Parameters\n ----------\n d_lon_coslat, d_lat : `~astropy.units.Quantity`\n The longitude and latitude of the differentials.\n copy : bool, optional\n If `True` (default), arrays will be copied. 
If `False`, arrays will\n be references, though possibly broadcast to ensure matching shapes.\n \"\"\"\n\n base_representation = UnitSphericalRepresentation\n attr_classes = {\"d_lon_coslat\": u.Quantity, \"d_lat\": u.Quantity}\n\n @classproperty\n def _dimensional_differential(cls):\n return SphericalCosLatDifferential\n\n def __init__(self, d_lon_coslat, d_lat=None, copy=True):\n super().__init__(d_lon_coslat, d_lat, copy=copy)\n if not self._d_lon_coslat.unit.is_equivalent(self._d_lat.unit):\n raise u.UnitsError(\"d_lon_coslat and d_lat should have equivalent units.\")\n\n @classmethod\n def from_cartesian(cls, other, base):\n # Go via the dimensional equivalent, so that the longitude and latitude\n # differentials correctly take into account the norm of the base.\n dimensional = cls._dimensional_differential.from_cartesian(other, base)\n return dimensional.represent_as(cls)\n\n def to_cartesian(self, base):\n if isinstance(base, SphericalRepresentation):\n scale = base.distance\n elif isinstance(base, PhysicsSphericalRepresentation):\n scale = base.r\n else:\n return super().to_cartesian(base)\n\n base = base.represent_as(UnitSphericalRepresentation)\n return scale * super().to_cartesian(base)\n\n def represent_as(self, other_class, base=None):\n # Only have enough information to represent other unit-spherical.\n if issubclass(other_class, UnitSphericalDifferential):\n return other_class(self._d_lon(base), self.d_lat)\n\n return super().represent_as(other_class, base)\n\n @classmethod\n def from_representation(cls, representation, base=None):\n # All spherical differentials can be done without going to Cartesian,\n # though w/o CosLat needs base for the latitude.\n if isinstance(representation, SphericalCosLatDifferential):\n return cls(representation.d_lon_coslat, representation.d_lat)\n elif isinstance(\n representation, (SphericalDifferential, UnitSphericalDifferential)\n ):\n d_lon_coslat = cls._get_d_lon_coslat(representation.d_lon, base)\n return cls(d_lon_coslat, representation.d_lat)\n elif isinstance(representation, PhysicsSphericalDifferential):\n d_lon_coslat = cls._get_d_lon_coslat(representation.d_phi, base)\n return cls(d_lon_coslat, -representation.d_theta)\n\n return super().from_representation(representation, base)\n\n def transform(self, matrix, base, transformed_base):\n \"\"\"Transform differential using a 3x3 matrix in a Cartesian basis.\n\n This returns a new differential and does not modify the original one.\n\n Parameters\n ----------\n matrix : (3,3) array-like\n A 3x3 (or stack thereof) matrix, such as a rotation matrix.\n base : instance of ``cls.base_representation``\n Base relative to which the differentials are defined. If the other\n class is a differential representation, the base will be converted\n to its ``base_representation``.\n transformed_base : instance of ``cls.base_representation``\n Base relative to which the transformed differentials are defined.\n If the other class is a differential representation, the base will\n be converted to its ``base_representation``.\n \"\"\"\n # the transformation matrix does not need to be a rotation matrix,\n # so the unit-distance is not guaranteed. For speed, we check if the\n # matrix is in O(3) and preserves lengths.\n if np.all(is_O3(matrix)): # remain in unit-rep\n # TODO! 
implement without Cartesian intermediate step.\n diff = super().transform(matrix, base, transformed_base)\n\n else: # switch to dimensional representation\n du = self.d_lat.unit / base.lat.unit # derivative unit\n diff = self._dimensional_differential(\n d_lon_coslat=self.d_lon_coslat, d_lat=self.d_lat, d_distance=0 * du\n ).transform(matrix, base, transformed_base)\n\n return diff\n\n def _scale_operation(self, op, *args, scaled_base=False):\n if scaled_base:\n return self.copy()\n else:\n return super()._scale_operation(op, *args)\n\n\nclass SphericalCosLatDifferential(BaseSphericalCosLatDifferential):\n \"\"\"Differential(s) of points in 3D spherical coordinates.\n\n Parameters\n ----------\n d_lon_coslat, d_lat : `~astropy.units.Quantity`\n The differential longitude (with cos(lat) included) and latitude.\n d_distance : `~astropy.units.Quantity`\n The differential distance.\n copy : bool, optional\n If `True` (default), arrays will be copied. If `False`, arrays will\n be references, though possibly broadcast to ensure matching shapes.\n \"\"\"\n\n base_representation = SphericalRepresentation\n _unit_differential = UnitSphericalCosLatDifferential\n attr_classes = {\n \"d_lon_coslat\": u.Quantity,\n \"d_lat\": u.Quantity,\n \"d_distance\": u.Quantity,\n }\n\n def __init__(self, d_lon_coslat, d_lat=None, d_distance=None, copy=True):\n super().__init__(d_lon_coslat, d_lat, d_distance, copy=copy)\n if not self._d_lon_coslat.unit.is_equivalent(self._d_lat.unit):\n raise u.UnitsError(\"d_lon_coslat and d_lat should have equivalent units.\")\n\n def represent_as(self, other_class, base=None):\n # All spherical differentials can be done without going to Cartesian,\n # though some need base for the latitude to remove cos(lat).\n if issubclass(other_class, UnitSphericalCosLatDifferential):\n return other_class(self.d_lon_coslat, self.d_lat)\n elif issubclass(other_class, RadialDifferential):\n return other_class(self.d_distance)\n elif issubclass(other_class, SphericalDifferential):\n return other_class(self._d_lon(base), self.d_lat, self.d_distance)\n elif issubclass(other_class, UnitSphericalDifferential):\n return other_class(self._d_lon(base), self.d_lat)\n elif issubclass(other_class, PhysicsSphericalDifferential):\n return other_class(self._d_lon(base), -self.d_lat, self.d_distance)\n\n return super().represent_as(other_class, base)\n\n @classmethod\n def from_representation(cls, representation, base=None):\n # Other spherical differentials can be done without going to Cartesian,\n # though we need base for the latitude to remove coslat.\n if isinstance(representation, SphericalDifferential):\n d_lon_coslat = cls._get_d_lon_coslat(representation.d_lon, base)\n return cls(d_lon_coslat, representation.d_lat, representation.d_distance)\n elif isinstance(representation, PhysicsSphericalDifferential):\n d_lon_coslat = cls._get_d_lon_coslat(representation.d_phi, base)\n return cls(d_lon_coslat, -representation.d_theta, representation.d_r)\n\n return super().from_representation(representation, base)\n\n def _scale_operation(self, op, *args, scaled_base=False):\n if scaled_base:\n return self.__class__(\n self.d_lon_coslat, self.d_lat, op(self.d_distance, *args)\n )\n else:\n return super()._scale_operation(op, *args)\n\n\nclass RadialDifferential(BaseDifferential):\n \"\"\"Differential(s) of radial distances.\n\n Parameters\n ----------\n d_distance : `~astropy.units.Quantity`\n The differential distance.\n copy : bool, optional\n If `True` (default), arrays will be copied. 
If `False`, arrays will\n be references, though possibly broadcast to ensure matching shapes.\n \"\"\"\n\n base_representation = RadialRepresentation\n\n def to_cartesian(self, base):\n unit_vec = base.represent_as(UnitSphericalRepresentation).to_cartesian()\n return self.d_distance * unit_vec\n\n def norm(self, base=None):\n return self.d_distance\n\n @classmethod\n def from_cartesian(cls, other, base):\n return cls(\n other.dot(base.represent_as(UnitSphericalRepresentation)), copy=False\n )\n\n @classmethod\n def from_representation(cls, representation, base=None):\n if isinstance(\n representation, (SphericalDifferential, SphericalCosLatDifferential)\n ):\n return cls(representation.d_distance)\n elif isinstance(representation, PhysicsSphericalDifferential):\n return cls(representation.d_r)\n else:\n return super().from_representation(representation, base)\n\n def _combine_operation(self, op, other, reverse=False):\n if isinstance(other, self.base_representation):\n if reverse:\n first, second = other.distance, self.d_distance\n else:\n first, second = self.d_distance, other.distance\n return other.__class__(op(first, second), copy=False)\n elif isinstance(\n other, (BaseSphericalDifferential, BaseSphericalCosLatDifferential)\n ):\n all_components = set(self.components) | set(other.components)\n first, second = (self, other) if not reverse else (other, self)\n result_args = {\n c: op(getattr(first, c, 0.0), getattr(second, c, 0.0))\n for c in all_components\n }\n return SphericalDifferential(**result_args)\n\n else:\n return super()._combine_operation(op, other, reverse)\n\n\nclass PhysicsSphericalDifferential(BaseDifferential):\n \"\"\"Differential(s) of 3D spherical coordinates using physics convention.\n\n Parameters\n ----------\n d_phi, d_theta : `~astropy.units.Quantity`\n The differential azimuth and inclination.\n d_r : `~astropy.units.Quantity`\n The differential radial distance.\n copy : bool, optional\n If `True` (default), arrays will be copied. If `False`, arrays will\n be references, though possibly broadcast to ensure matching shapes.\n \"\"\"\n\n base_representation = PhysicsSphericalRepresentation\n\n def __init__(self, d_phi, d_theta=None, d_r=None, copy=True):\n super().__init__(d_phi, d_theta, d_r, copy=copy)\n if not self._d_phi.unit.is_equivalent(self._d_theta.unit):\n raise u.UnitsError(\"d_phi and d_theta should have equivalent units.\")\n\n def represent_as(self, other_class, base=None):\n # All spherical differentials can be done without going to Cartesian,\n # though CosLat needs base for the latitude. 
For those, explicitly\n # do the equivalent of self._d_lon_coslat in SphericalDifferential.\n if issubclass(other_class, SphericalDifferential):\n return other_class(self.d_phi, -self.d_theta, self.d_r)\n elif issubclass(other_class, UnitSphericalDifferential):\n return other_class(self.d_phi, -self.d_theta)\n elif issubclass(other_class, SphericalCosLatDifferential):\n self._check_base(base)\n d_lon_coslat = self.d_phi * np.sin(base.theta)\n return other_class(d_lon_coslat, -self.d_theta, self.d_r)\n elif issubclass(other_class, UnitSphericalCosLatDifferential):\n self._check_base(base)\n d_lon_coslat = self.d_phi * np.sin(base.theta)\n return other_class(d_lon_coslat, -self.d_theta)\n elif issubclass(other_class, RadialDifferential):\n return other_class(self.d_r)\n\n return super().represent_as(other_class, base)\n\n @classmethod\n def from_representation(cls, representation, base=None):\n # Other spherical differentials can be done without going to Cartesian,\n # though we need base for the latitude to remove coslat. For that case,\n # do the equivalent of cls._d_lon in SphericalDifferential.\n if isinstance(representation, SphericalDifferential):\n return cls(\n representation.d_lon, -representation.d_lat, representation.d_distance\n )\n elif isinstance(representation, SphericalCosLatDifferential):\n cls._check_base(base)\n d_phi = representation.d_lon_coslat / np.sin(base.theta)\n return cls(d_phi, -representation.d_lat, representation.d_distance)\n\n return super().from_representation(representation, base)\n\n def _scale_operation(self, op, *args, scaled_base=False):\n if scaled_base:\n return self.__class__(self.d_phi, self.d_theta, op(self.d_r, *args))\n else:\n return super()._scale_operation(op, *args)\n"}
|
{"astropy/coordinates/representation/cylindrical.py": [{"type": "function", "name": "CylindricalRepresentation.represent_as", "lines": [138, 152], "signature": "def represent_as(self, other_class, differential_class=None):", "doc": ""}]}
|
v5.3
|
["astropy/coordinates/tests/test_representation.py::TestPhysicsSphericalRepresentation::test_representation_shortcuts", "astropy/coordinates/tests/test_representation.py::TestPhysicsSphericalRepresentation::test_to_cylindrical_at_the_origin"]
|
["astropy/coordinates/tests/test_representation.py::TestRadialRepresentation::test_transform", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_name", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_empty_init", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_init_quantity", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_init_no_mutate_input", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_init_lonlat", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_init_subclass", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_init_array", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_init_array_nocopy", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_init_float32_array", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_reprobj", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_broadcasting", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_broadcasting_mismatch", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_broadcasting_and_nocopy", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_readonly", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_getitem_len_iterable", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_getitem_len_iterable_scalar", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_setitem", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_negative_distance", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_nan_distance", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_raise_on_extra_arguments", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_representation_shortcuts", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_transform", "astropy/coordinates/tests/test_representation.py::TestSphericalRepresentation::test_transform_with_NaN", "astropy/coordinates/tests/test_representation.py::TestUnitSphericalRepresentation::test_name", "astropy/coordinates/tests/test_representation.py::TestUnitSphericalRepresentation::test_empty_init", "astropy/coordinates/tests/test_representation.py::TestUnitSphericalRepresentation::test_init_quantity", "astropy/coordinates/tests/test_representation.py::TestUnitSphericalRepresentation::test_init_lonlat", "astropy/coordinates/tests/test_representation.py::TestUnitSphericalRepresentation::test_init_array", "astropy/coordinates/tests/test_representation.py::TestUnitSphericalRepresentation::test_init_array_nocopy", "astropy/coordinates/tests/test_representation.py::TestUnitSphericalRepresentation::test_reprobj", "astropy/coordinates/tests/test_representation.py::TestUnitSphericalRepresentation::test_broadcasting", "astropy/coordinates/tests/test_representation.py::TestUnitSphericalRepresentation::test_broadcasting_mismatch", "astropy/coordinates/tests/test_representation.py::TestUnitSphericalRepresentation::test_readonly", 
"astropy/coordinates/tests/test_representation.py::TestUnitSphericalRepresentation::test_getitem", "astropy/coordinates/tests/test_representation.py::TestUnitSphericalRepresentation::test_getitem_scalar", "astropy/coordinates/tests/test_representation.py::TestUnitSphericalRepresentation::test_representation_shortcuts", "astropy/coordinates/tests/test_representation.py::TestUnitSphericalRepresentation::test_transform", "astropy/coordinates/tests/test_representation.py::TestPhysicsSphericalRepresentation::test_name", "astropy/coordinates/tests/test_representation.py::TestPhysicsSphericalRepresentation::test_empty_init", "astropy/coordinates/tests/test_representation.py::TestPhysicsSphericalRepresentation::test_init_quantity", "astropy/coordinates/tests/test_representation.py::TestPhysicsSphericalRepresentation::test_init_phitheta", "astropy/coordinates/tests/test_representation.py::TestPhysicsSphericalRepresentation::test_init_array", "astropy/coordinates/tests/test_representation.py::TestPhysicsSphericalRepresentation::test_init_array_nocopy", "astropy/coordinates/tests/test_representation.py::TestPhysicsSphericalRepresentation::test_reprobj", "astropy/coordinates/tests/test_representation.py::TestPhysicsSphericalRepresentation::test_broadcasting", "astropy/coordinates/tests/test_representation.py::TestPhysicsSphericalRepresentation::test_broadcasting_mismatch", "astropy/coordinates/tests/test_representation.py::TestPhysicsSphericalRepresentation::test_readonly", "astropy/coordinates/tests/test_representation.py::TestPhysicsSphericalRepresentation::test_getitem", "astropy/coordinates/tests/test_representation.py::TestPhysicsSphericalRepresentation::test_getitem_scalar", "astropy/coordinates/tests/test_representation.py::TestPhysicsSphericalRepresentation::test_initialize_with_nan", "astropy/coordinates/tests/test_representation.py::TestPhysicsSphericalRepresentation::test_transform", "astropy/coordinates/tests/test_representation.py::TestPhysicsSphericalRepresentation::test_transform_with_NaN", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_name", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_empty_init", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_init_quantity", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_init_singleunit", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_init_array", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_init_one_array", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_init_one_array_size_fail", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_init_xyz_but_more_than_one_array_fail", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_init_one_array_yz_fail", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_init_array_nocopy", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_xyz_is_view_if_possible", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_reprobj", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_broadcasting", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_broadcasting_mismatch", 
"astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_readonly", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_xyz", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_unit_mismatch", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_unit_non_length", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_getitem", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_getitem_scalar", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_transform", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentation::test_transform_non_contiguous_matrix", "astropy/coordinates/tests/test_representation.py::TestCylindricalRepresentation::test_name", "astropy/coordinates/tests/test_representation.py::TestCylindricalRepresentation::test_empty_init", "astropy/coordinates/tests/test_representation.py::TestCylindricalRepresentation::test_init_quantity", "astropy/coordinates/tests/test_representation.py::TestCylindricalRepresentation::test_init_array", "astropy/coordinates/tests/test_representation.py::TestCylindricalRepresentation::test_init_array_nocopy", "astropy/coordinates/tests/test_representation.py::TestCylindricalRepresentation::test_reprobj", "astropy/coordinates/tests/test_representation.py::TestCylindricalRepresentation::test_broadcasting", "astropy/coordinates/tests/test_representation.py::TestCylindricalRepresentation::test_broadcasting_mismatch", "astropy/coordinates/tests/test_representation.py::TestCylindricalRepresentation::test_readonly", "astropy/coordinates/tests/test_representation.py::TestCylindricalRepresentation::test_getitem", "astropy/coordinates/tests/test_representation.py::TestCylindricalRepresentation::test_getitem_scalar", "astropy/coordinates/tests/test_representation.py::TestCylindricalRepresentation::test_transform", "astropy/coordinates/tests/test_representation.py::TestCylindricalRepresentation::test_representation_shortcuts", "astropy/coordinates/tests/test_representation.py::TestCylindricalRepresentation::test_to_physicsspherical_at_the_origin", "astropy/coordinates/tests/test_representation.py::TestUnitSphericalCosLatDifferential::test_transform[matrix0]", "astropy/coordinates/tests/test_representation.py::TestUnitSphericalCosLatDifferential::test_transform[matrix1]", "astropy/coordinates/tests/test_representation.py::test_cartesian_spherical_roundtrip", "astropy/coordinates/tests/test_representation.py::test_cartesian_setting_with_other", "astropy/coordinates/tests/test_representation.py::test_cartesian_physics_spherical_roundtrip", "astropy/coordinates/tests/test_representation.py::test_spherical_physics_spherical_roundtrip", "astropy/coordinates/tests/test_representation.py::test_cartesian_cylindrical_roundtrip", "astropy/coordinates/tests/test_representation.py::test_unit_spherical_roundtrip", "astropy/coordinates/tests/test_representation.py::test_no_unnecessary_copies", "astropy/coordinates/tests/test_representation.py::test_representation_repr", "astropy/coordinates/tests/test_representation.py::test_representation_repr_multi_d", "astropy/coordinates/tests/test_representation.py::test_representation_str", "astropy/coordinates/tests/test_representation.py::test_representation_str_multi_d", "astropy/coordinates/tests/test_representation.py::test_subclass_representation", 
"astropy/coordinates/tests/test_representation.py::test_minimal_subclass", "astropy/coordinates/tests/test_representation.py::test_duplicate_warning", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentationWithDifferential::test_init_differential", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentationWithDifferential::test_init_differential_compatible", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentationWithDifferential::test_init_differential_multiple_equivalent_keys", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentationWithDifferential::test_init_array_broadcasting", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentationWithDifferential::test_reprobj", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentationWithDifferential::test_readonly", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentationWithDifferential::test_represent_as", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentationWithDifferential::test_represent_as_unit_spherical_with_diff[SphericalDifferential-UnitSphericalDifferential]", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentationWithDifferential::test_represent_as_unit_spherical_with_diff[SphericalCosLatDifferential-UnitSphericalCosLatDifferential]", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentationWithDifferential::test_getitem", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentationWithDifferential::test_setitem", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentationWithDifferential::test_transform", "astropy/coordinates/tests/test_representation.py::TestCartesianRepresentationWithDifferential::test_with_differentials", "astropy/coordinates/tests/test_representation.py::test_repr_with_differentials", "astropy/coordinates/tests/test_representation.py::test_to_cartesian", "astropy/coordinates/tests/test_representation.py::test_unitphysics", "astropy/coordinates/tests/test_representation.py::test_distance_warning", "astropy/coordinates/tests/test_representation.py::test_dtype_preservation_in_indexing", "astropy/coordinates/tests/test_representation.py::TestInfo::test_info_unit", "astropy/coordinates/tests/test_representation.py::TestInfo::test_roundtrip[rep]", "astropy/coordinates/tests/test_representation.py::TestInfo::test_roundtrip[diff]", "astropy/coordinates/tests/test_representation.py::TestInfo::test_roundtrip[rep_w_diff]", "astropy/coordinates/tests/test_representation.py::test_differential_norm_noncartesian[SphericalDifferential]", "astropy/coordinates/tests/test_representation.py::test_differential_norm_noncartesian[SphericalCosLatDifferential]", "astropy/coordinates/tests/test_representation.py::test_differential_norm_noncartesian[CylindricalDifferential]", "astropy/coordinates/tests/test_representation.py::test_differential_norm_noncartesian[PhysicsSphericalDifferential]", "astropy/coordinates/tests/test_representation.py::test_differential_norm_noncartesian[UnitSphericalDifferential]", "astropy/coordinates/tests/test_representation.py::test_differential_norm_noncartesian[UnitSphericalCosLatDifferential]", "astropy/coordinates/tests/test_representation.py::test_differential_norm_radial"]
|
2d281019494aaebf522f6626c0dae37510c16688
|
{"first_commit_time": 1709249972.0, "pr_title": "feat: fast-path physicsspherical to cylindrical", "pr_body": "- [ ] By checking this box, the PR author has requested that maintainers do **NOT** use the \"Squash and Merge\" button. Maintainers should respect this when possible; however, the final decision is at the discretion of the maintainer that merges the PR.\r\n", "pr_timeline": [{"time": 1709250069.0, "comment": "Thank you for your contribution to Astropy! \ud83c\udf0c This checklist is meant to remind the package maintainers who will review this pull request of some common things to look for.\n\n - [ ] Do the proposed changes actually accomplish desired goals?\n - [ ] Do the proposed changes follow the [Astropy coding guidelines](https://docs.astropy.org/en/latest/development/codeguide.html)?\n - [ ] Are tests added/updated as required? If so, do they follow the [Astropy testing guidelines](https://docs.astropy.org/en/latest/development/testguide.html)?\n - [ ] Are docs added/updated as required? If so, do they follow the [Astropy documentation guidelines](https://docs.astropy.org/en/latest/development/docguide.html#astropy-documentation-rules-and-guidelines)?\n - [ ] Is rebase and/or squash necessary? If so, please provide the author with appropriate instructions. Also see instructions for [rebase](https://docs.astropy.org/en/latest/development/workflow/development_workflow.html#rebase-if-necessary) and [squash](https://docs.astropy.org/en/latest/development/workflow/development_workflow.html#squash-if-necessary).\n - [ ] Did the CI pass? If no, are the failures related? If you need to run daily and weekly cron jobs as part of the PR, please apply the \"Extra CI\" label. Codestyle issues can be fixed by the [bot](https://docs.astropy.org/en/latest/development/workflow/development_workflow.html#pre-commit).\n - [ ] Is a change log needed? If yes, did the change log check pass? If no, add the \"no-changelog-entry-needed\" label. If this is a manual backport, use the \"skip-changelog-checks\" label unless special changelog handling is necessary.\n - [ ] Is this a big PR that makes a \"What's new?\" entry worthwhile and if so, is (1) a \"what's new\" entry included in this PR and (2) the \"whatsnew-needed\" label applied?\n - [ ] At the time of adding the milestone, if the milestone set requires a backport to release branch(es), apply the appropriate \"backport-X.Y.x\" label(s) *before* merge."}, {"time": 1709250073.0, "comment": "\ud83d\udc4b Thank you for your draft pull request! Do you know that you can use `[ci skip]` or `[skip ci]` in your commit messages to skip running continuous integration tests until you are ready?"}, {"time": 1709264446.0, "comment": "Including the reverse makes sense to me. I just encountered this in my research and noted it was going through Cartesian, which seemed unnecessary since only cos and sin theta need to be computed to actually do the transform."}, {"time": 1709304393.0, "comment": "Why coverage bot says your new code isn't covered?\r\n\r\np.s. Oooo I wonder if #15779 would be useful here."}, {"time": 1709309074.0, "comment": "> Why coverage bot says your new code isn't covered?\r\n\r\nIt might actually be that we do not explicitly test PhysicsSpherical to Cylindrical...\r\n\r\n> p.s. 
Oooo I wonder if #15779 would be useful here.\r\n\r\nYes, I'm looking forward very much to have benchmarks again!\r\n"}, {"time": 1709497744.0, "comment": "Note that the failures are real - need to use `allclose` since the two different paths no longer give exactly identical results."}, {"time": 1709523934.0, "comment": "Yikes. CI failure is complete and not due to this PR. I rebased and picked up an upstream issue."}, {"time": 1709548313.0, "comment": "@nstarman this is due to pytest 8.1, see #16146 "}, {"time": 1709579901.0, "comment": "Re: CI -- pytest yanked 8.1. I restarted the jobs and unless you broke it, they should pass now (except devdeps)."}, {"time": 1710554457.0, "comment": "I just applied a label so this runs benchmark implemented in https://github.com/astropy/astropy/pull/15779 (cc @MridulS) assuming this functionality is already benchmarked over at https://github.com/astropy/astropy-benchmarks/tree/main/benchmarks ?"}, {"time": 1710556154.0, "comment": "Does the benchmark log help at all?\r\n\r\nhttps://github.com/astropy/astropy/actions/runs/8304528883/job/22730265639?pr=16135"}, {"time": 1710556273.0, "comment": "@MridulS , the log actually says\r\n\r\n```\r\nSOME BENCHMARKS HAVE CHANGED SIGNIFICANTLY.\r\nError: PERFORMANCE DECREASED.\r\n```\r\n\r\nbut the job is green. Is this expected?\r\n\r\nhttps://github.com/astropy/astropy/blob/4d436bc2730c059be3fa3ad17f7b7f42cc5cef4d/.github/workflows/ci_benchmark.yml#L84-L88"}, {"time": 1710559736.0, "comment": "Ahhhh. No it's supposed to give a red cross. I guess in the new iterations I removed that logic. I'll send a PR tomorrow to fix that. "}, {"time": 1710780213.0, "comment": "Was looking back at this PR to see its status: @eerovaher - are you happy with @nstarman's changes following your comments?\r\n\r\nOn the benchmarks, in principle it would be bad if this *decreased* performance, but looking at the log, I see\r\n```\r\n| Change | Before [d3a39fa1] | After [ce2f5fbb] | Ratio | Benchmark (Parameter) |\r\n|----------|----------------------|---------------------|---------|--------------------------------|\r\n| + | 54.7\u00b110\u03bcs | 209\u00b18\u03bcs | 3.81 | units.time_quantity_times_unit |\r\n```\r\nThis is indeed a large change but has absolutely nothing to do with this PR. Which in itself is perhaps worrying? (Perhaps best discussed elsewhere, though!)"}, {"time": 1710780438.0, "comment": "Hmm. I don't see a big jump here:\r\n\r\nhttps://spacetelescope.github.io/bench/astropy-benchmarks/#units.time_quantity_times_unit"}, {"time": 1710780467.0, "comment": "Might wanna rebase again, just to be sure."}, {"time": 1710781213.0, "comment": "Sounds like it was a fluke. Since the test was in `units`, unrelated to the coordinates stuff here, I think there is no need to re-test. Though we probably need some representation tests, so one can see the benefit of this PR!"}, {"time": 1710786740.0, "comment": "Rebased :)\r\n"}, {"time": 1710865249.0, "comment": "I'm not seeing any performance increases in the benchmark suite. 
The most likely explanation is that the `PhysicsSphericalRepresentation` to `CylindricalRepresentation` conversion is not included in the benchmarks, but I don't know what is actually included and because the pull request that implemented benchmarking was merged without any documentation then I can't look it up either."}, {"time": 1710867436.0, "comment": "The PR benchmark basically runs https://github.com/astropy/astropy-benchmarks/tree/main/benchmarks but in a *relative* way instead of absolute.\r\n\r\nNot sure how much this issue helps to explain but it has links: https://github.com/astropy/astropy-benchmarks/issues/106"}, {"time": 1710867476.0, "comment": "> not included in the benchmarks\r\n\r\nWell, that is a problem... We cannot benchmark something that isn't written."}, {"time": 1711025493.0, "comment": "I measured the performance of the code locally using the following code (based on the new test code):\r\n```python\r\nPython 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]\r\nType 'copyright', 'credits' or 'license' for more information\r\nIPython 8.15.0 -- An enhanced Interactive Python. Type '?' for help.\r\n\r\nIn [1]: from astropy.coordinates import (\r\n ...: PhysicsSphericalRepresentation,\r\n ...: PhysicsSphericalDifferential,\r\n ...: CylindricalDifferential,\r\n ...: CylindricalRepresentation,\r\n ...: )\r\n\r\nIn [2]: from astropy import units as u\r\n\r\nIn [3]: difs = CylindricalDifferential(\r\n ...: d_rho=4 * u.km / u.s, d_phi=5 * u.mas / u.yr, d_z=6 * u.km / u.s\r\n ...: )\r\n\r\nIn [4]: cyl = CylindricalRepresentation(\r\n ...: rho=1 * u.kpc, phi=2 * u.deg, z=3 * u.kpc, differentials={\"s\": difs}\r\n ...: )\r\n\r\nIn [5]: %timeit cyl.represent_as(PhysicsSphericalRepresentation, PhysicsSphericalDifferential)\r\n```\r\nThe problem is I am not seeing any difference in performance between this branch and `main`. @nstarman, have you timed the code?"}, {"time": 1711035750.0, "comment": "On my computer I consistently get a 5% speed boost."}, {"time": 1711036217.0, "comment": "Did you see the 5% boost with the code I posted or did you run something else?"}, {"time": 1711036501.0, "comment": "Also, I discovered there's another good reason for this PR.\r\n\r\nPre this PR\r\n```python\r\nimport astropy.units as u\r\nimport astropy.coordinates as cs\r\n\r\n>>> cyl = cs.CylindricalRepresentation(rho=0 * u.kpc, phi=270 * u.deg, z=3 * u.kpc)\r\n>>> cyl.represent_as(cs.PhysicsSphericalRepresentation).phi.deg\r\n180.0\r\n```\r\n\r\nWith this PR\r\n```python\r\nimport astropy.units as u\r\nimport astropy.coordinates as cs\r\n\r\n>>> cyl = cs.CylindricalRepresentation(rho=0 * u.kpc, phi=270 * u.deg, z=3 * u.kpc)\r\n>>> cyl.represent_as(cs.PhysicsSphericalRepresentation).phi.deg\r\n270.0\r\n```\r\n\r\nConsider the transformation (https://en.wikipedia.org/wiki/Cylindrical_coordinate_system)\r\n\r\n<img src=\"https://github.com/astropy/astropy/assets/8949649/980b922a-4492-44da-bf6a-261e466614b4\" width=\"300\" />\r\n\r\nThe `phi` value is conserved through the transformation. Technically at `r=0`, `phi` is not well defined, however I think that keeping `phi` is better than changing it to a value determined by the `Cartesian -> Spherical` conventions. \r\n\r\n"}, {"time": 1711036895.0, "comment": "I agree with that reasoning. Do add that example to the tests."}, {"time": 1711054985.0, "comment": "On the performance: I think there is a reasonable benefit only when no differentials are involved - those needs a lot more calculations and are not improved by this PR."}], "issues": {}}
|
|
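As an illustrative aside on the record above (a minimal sketch, not taken from the dataset itself): the fast path in that record relies on the standard cylindrical-to-physics-spherical relations r = hypot(rho, z), theta = arctan2(rho, z) measured from the +z axis, with phi carried through unchanged. The example values below are assumed for demonstration only.

import numpy as np
import astropy.units as u

# Standard cylindrical -> physics-spherical relations used by the shortcut:
#   r = hypot(rho, z), theta = arctan2(rho, z) (inclination from +z), phi unchanged.
rho, phi, z = 1 * u.kpc, 2 * u.deg, 3 * u.kpc   # assumed example values
r = np.hypot(rho, z)
theta = np.arctan2(rho, z)
print(r, theta.to(u.deg), phi)   # phi passes through, even when rho == 0

Because phi is carried through directly, this is consistent with the behaviour discussed in the timeline above, where phi = 270 deg is preserved at rho = 0.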
astropy/astropy
| 16516
|
https://github.com/astropy/astropy/pull/16516
|
astropy__astropy-16516
|
[]
|
fb4ff546ebca64c4d92a573e400873aeba1f5766
|
diff --git a/astropy/units/equivalencies.py b/astropy/units/equivalencies.py
index f77b195df97d..b6968fc701ea 100644
--- a/astropy/units/equivalencies.py
+++ b/astropy/units/equivalencies.py
@@ -34,6 +34,7 @@
"molar_mass_amu",
"pixel_scale",
"plate_scale",
+ "magnetic_flux_field",
"Equivalency",
]
@@ -873,3 +874,47 @@ def plate_scale(platescale):
"plate_scale",
{"platescale": platescale},
)
+
+
+def magnetic_flux_field(mu_r=1):
+ r"""
+ Convert magnetic field between magnetic field strength :math:`(\mathbf{H})` and
+ magnetic flux density :math:`(\mathbf{B})` using the relationship:
+
+ .. math::
+
+ \mathbf{B} = \mu_r \mu_0 \mathbf{H}
+
+ where:
+ - :math:`\mu_0` is the vacuum permeability, a physical constant.
+ - :math:`\mu_r` is the relative permeability of the medium, a dimensionless
+ quantity.
+
+ The default setting (:math:`\mu_r=1`) represents conditions in a vacuum.
+
+ Parameters
+ ----------
+ mu_r : float, optional
+ The relative magnetic permeability of the medium. This is a dimensionless quantity
+ and has a default value of :math:`\mu_r=1` which corresponds to free space (vacuum).
+
+ Examples
+ --------
+ >>> import astropy.units as u
+ >>> H = 1 * u.Oe
+ >>> H.to(u.G, equivalencies=u.magnetic_flux_field()) # doctest: +FLOAT_CMP
+ <Quantity 1. G>
+ >>> H.to(u.G, equivalencies=u.magnetic_flux_field(mu_r=0.8)) # doctest: +FLOAT_CMP
+ <Quantity 0.8 G>
+ >>> B = 1 * u.T
+ >>> B.to(u.A / u.m, equivalencies=u.magnetic_flux_field()) # doctest: +FLOAT_CMP
+ <Quantity 795774.71502628 A / m>
+ >>> B.to(u.A / u.m, equivalencies=u.magnetic_flux_field(mu_r=0.8)) # doctest: +FLOAT_CMP
+ <Quantity 994718.39378285 A / m>
+
+ """
+ mu0 = _si.mu0.value
+ return Equivalency(
+ [(si.T, si.A / si.m, lambda x: x / (mu_r * mu0), lambda x: x * mu_r * mu0)],
+ "magnetic_flux_field",
+ )
diff --git a/docs/changes/units/16516.feature.rst b/docs/changes/units/16516.feature.rst
new file mode 100644
index 000000000000..4fa419d8579b
--- /dev/null
+++ b/docs/changes/units/16516.feature.rst
@@ -0,0 +1,2 @@
+Add ``magnetic_flux_field`` equivalency to convert magnetic field between
+magnetic field strength (H) and magnetic flux density (B).
diff --git a/docs/units/equivalencies.rst b/docs/units/equivalencies.rst
index 6093d1c529e5..20647e927f01 100644
--- a/docs/units/equivalencies.rst
+++ b/docs/units/equivalencies.rst
@@ -529,6 +529,46 @@ for more information.
.. EXAMPLE END
+Magnetic Flux Density and Field Strength Equivalency
+----------------------------------------------------
+
+The :func:`~astropy.units.equivalencies.magnetic_flux_field` equivalency allows
+conversion between Magnetic Flux Density (B) and its equivalent Magnetic Field
+Strength (H), governed by the equation
+
+.. math::
+
+ \mathbf{B} = \mu_r \mu_0 \mathbf{H}.
+
+Where :math:`\mu_0` is the vacuum permeability and :math:`\mu_r` is the
+relative permeability of the medium. For a vacuum, :math:`\mu_r=1`.
+
+Example
+^^^^^^^
+
+.. EXAMPLE START: Magnetic Flux Density Magnetic Field Strength Equivalency
+
+To convert between Magnetic Flux Density (B) and its equivalent Magnetic Field
+Strength (H) in a vacuum.
+
+ >>> H = 1 * u.Oe
+ >>> H.to(u.G, u.magnetic_flux_field()) # doctest: +FLOAT_CMP
+ <Quantity 1. G>
+ >>> H.to(u.T, u.magnetic_flux_field()) # doctest: +FLOAT_CMP
+ <Quantity 0.0001 T>
+ >>> B = 1 * u.T
+ >>> B.to(u.A / u.m, equivalencies=u.magnetic_flux_field()) # doctest: +FLOAT_CMP
+ <Quantity 795774.71502628 A / m>
+
+Conversion in a medium with :math:`\mu_r=0.9`::
+
+ >>> H.to(u.G, u.magnetic_flux_field(mu_r=0.9)) # doctest: +FLOAT_CMP
+ <Quantity 0.9 G>
+ >>> B.to(u.A / u.m, equivalencies=u.magnetic_flux_field(mu_r=0.9)) # doctest: +FLOAT_CMP
+ <Quantity 884194.12780697 A / m>
+
+.. EXAMPLE END
+
Writing New Equivalencies
=========================
|
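As an illustrative sketch (assuming an astropy build that already includes the patch above, since magnetic_flux_field is introduced by it), the new equivalency can be used as follows; the values mirror the doctest examples in the diff.

import astropy.units as u

H = 1 * u.Oe
print(H.to(u.G, equivalencies=u.magnetic_flux_field()))          # 1.0 G (vacuum, mu_r = 1)
print(H.to(u.G, equivalencies=u.magnetic_flux_field(mu_r=0.8)))  # 0.8 G in a medium

B = 1 * u.T
print(B.to(u.A / u.m, equivalencies=u.magnetic_flux_field()))    # ~7.96e5 A / m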
diff --git a/astropy/units/tests/test_equivalencies.py b/astropy/units/tests/test_equivalencies.py
index bd13257194eb..7aab96eb16cd 100644
--- a/astropy/units/tests/test_equivalencies.py
+++ b/astropy/units/tests/test_equivalencies.py
@@ -1013,3 +1013,17 @@ def test_spectral_density_factor_deprecation():
u.erg / u.Hz / u.cm**2 / u.s, 1, u.spectral_density(u.AA, factor=3500)
)
assert_quantity_allclose(a, 4.086160166177361e-12)
+
+
+def test_magnetic_flux_field():
+ H = 1 * u.A / u.m
+ B = (H * constants.mu0).to(u.T)
+ assert_allclose(H.to_value(u.T, u.magnetic_flux_field()), B.value)
+ assert_allclose(B.to_value(u.A / u.m, u.magnetic_flux_field()), H.value)
+
+ H = 1 * u.Oe
+ B = 1 * u.G
+ assert_allclose(H.to_value(u.G, u.magnetic_flux_field()), 1)
+ assert_allclose(B.to_value(u.Oe, u.magnetic_flux_field()), 1)
+ assert_allclose(H.to_value(u.G, u.magnetic_flux_field(mu_r=0.8)), 0.8)
+ assert_allclose(B.to_value(u.Oe, u.magnetic_flux_field(mu_r=0.8)), 1.25)
| 2024-05-30T15:25:34
|
{}
|
{"astropy/units/equivalencies.py": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\"\"\"A set of standard astronomical equivalencies.\"\"\"\n\nfrom collections import UserList\n\n# THIRD-PARTY\nimport numpy as np\n\n# LOCAL\nfrom astropy.constants import si as _si\nfrom astropy.utils import deprecated_renamed_argument\nfrom astropy.utils.misc import isiterable\n\nfrom . import astrophys, cgs, dimensionless_unscaled, misc, si\nfrom .core import Unit, UnitsError\nfrom .function import units as function_units\n\n__all__ = [\n \"parallax\",\n \"spectral\",\n \"spectral_density\",\n \"doppler_radio\",\n \"doppler_optical\",\n \"doppler_relativistic\",\n \"doppler_redshift\",\n \"mass_energy\",\n \"brightness_temperature\",\n \"thermodynamic_temperature\",\n \"beam_angular_area\",\n \"dimensionless_angles\",\n \"logarithmic\",\n \"temperature\",\n \"temperature_energy\",\n \"molar_mass_amu\",\n \"pixel_scale\",\n \"plate_scale\",\n \"Equivalency\",\n]\n\n\nclass Equivalency(UserList):\n \"\"\"\n A container for a units equivalency.\n\n Attributes\n ----------\n name: `str`\n The name of the equivalency.\n kwargs: `dict`\n Any positional or keyword arguments used to make the equivalency.\n \"\"\"\n\n def __init__(self, equiv_list, name=\"\", kwargs=None):\n self.data = equiv_list\n self.name = [name]\n self.kwargs = [kwargs] if kwargs is not None else [{}]\n\n def __add__(self, other):\n if isinstance(other, Equivalency):\n new = super().__add__(other)\n new.name = self.name[:] + other.name\n new.kwargs = self.kwargs[:] + other.kwargs\n return new\n else:\n return self.data.__add__(other)\n\n def __eq__(self, other):\n return (\n isinstance(other, self.__class__)\n and self.name == other.name\n and self.kwargs == other.kwargs\n )\n\n\ndef dimensionless_angles():\n \"\"\"Allow angles to be equivalent to dimensionless (with 1 rad = 1 m/m = 1).\n\n It is special compared to other equivalency pairs in that it\n allows this independent of the power to which the angle is raised,\n and independent of whether it is part of a more complicated unit.\n \"\"\"\n return Equivalency([(si.radian, None)], \"dimensionless_angles\")\n\n\ndef logarithmic():\n \"\"\"Allow logarithmic units to be converted to dimensionless fractions.\"\"\"\n return Equivalency(\n [(dimensionless_unscaled, function_units.dex, np.log10, lambda x: 10.0**x)],\n \"logarithmic\",\n )\n\n\ndef parallax():\n \"\"\"\n Returns a list of equivalence pairs that handle the conversion\n between parallax angle and distance.\n \"\"\"\n\n def parallax_converter(x):\n x = np.asanyarray(x)\n d = 1 / x\n\n if isiterable(d):\n d[d < 0] = np.nan\n return d\n\n else:\n if d < 0:\n return np.array(np.nan)\n else:\n return d\n\n return Equivalency(\n [(si.arcsecond, astrophys.parsec, parallax_converter)], \"parallax\"\n )\n\n\ndef spectral():\n \"\"\"\n Returns a list of equivalence pairs that handle spectral\n wavelength, wave number, frequency, and energy equivalencies.\n\n Allows conversions between wavelength units, wave number units,\n frequency units, and energy units as they relate to light.\n\n There are two types of wave number:\n\n * spectroscopic - :math:`1 / \\\\lambda` (per meter)\n * angular - :math:`2 \\\\pi / \\\\lambda` (radian per meter)\n\n \"\"\"\n c = _si.c.value\n h = _si.h.value\n hc = h * c\n two_pi = 2.0 * np.pi\n inv_m_spec = si.m**-1\n inv_m_ang = si.radian / si.m\n\n return Equivalency(\n [\n (si.m, si.Hz, lambda x: c / x),\n (si.m, si.J, lambda x: hc / x),\n (si.Hz, si.J, lambda x: h * x, lambda x: x / h),\n 
(si.m, inv_m_spec, lambda x: 1.0 / x),\n (si.Hz, inv_m_spec, lambda x: x / c, lambda x: c * x),\n (si.J, inv_m_spec, lambda x: x / hc, lambda x: hc * x),\n (inv_m_spec, inv_m_ang, lambda x: x * two_pi, lambda x: x / two_pi),\n (si.m, inv_m_ang, lambda x: two_pi / x),\n (si.Hz, inv_m_ang, lambda x: two_pi * x / c, lambda x: c * x / two_pi),\n (si.J, inv_m_ang, lambda x: x * two_pi / hc, lambda x: hc * x / two_pi),\n ],\n \"spectral\",\n )\n\n\n@deprecated_renamed_argument(\n \"factor\", None, since=\"7.0\", alternative='\"wav\" as a \"Quantity\"'\n)\ndef spectral_density(wav, factor=None):\n \"\"\"\n Returns a list of equivalence pairs that handle spectral density\n with regard to wavelength and frequency.\n\n Parameters\n ----------\n wav : `~astropy.units.Quantity`\n `~astropy.units.Quantity` associated with values being converted\n (e.g., wavelength or frequency).\n factor : array_like\n If ``wav`` is a |Unit| instead of a |Quantity| then ``factor``\n is the value ``wav`` will be multiplied with to convert it to\n a |Quantity|.\n\n .. deprecated:: 7.0\n\n ``factor`` is deprecated. Pass in ``wav`` as a |Quantity|,\n not as a |Unit|.\n \"\"\"\n from .core import UnitBase\n\n if isinstance(wav, UnitBase):\n if factor is None:\n raise ValueError(\"If `wav` is specified as a unit, `factor` should be set\")\n wav = factor * wav # Convert to Quantity\n c_Aps = _si.c.to_value(si.AA / si.s) # Angstrom/s\n h_cgs = _si.h.cgs.value # erg * s\n hc = c_Aps * h_cgs\n\n # flux density\n f_la = cgs.erg / si.angstrom / si.cm**2 / si.s\n f_nu = cgs.erg / si.Hz / si.cm**2 / si.s\n nu_f_nu = cgs.erg / si.cm**2 / si.s\n la_f_la = nu_f_nu\n phot_f_la = astrophys.photon / (si.cm**2 * si.s * si.AA)\n phot_f_nu = astrophys.photon / (si.cm**2 * si.s * si.Hz)\n la_phot_f_la = astrophys.photon / (si.cm**2 * si.s)\n\n # luminosity density\n L_nu = cgs.erg / si.s / si.Hz\n L_la = cgs.erg / si.s / si.angstrom\n nu_L_nu = cgs.erg / si.s\n la_L_la = nu_L_nu\n phot_L_la = astrophys.photon / (si.s * si.AA)\n phot_L_nu = astrophys.photon / (si.s * si.Hz)\n\n # surface brightness (flux equiv)\n S_la = cgs.erg / si.angstrom / si.cm**2 / si.s / si.sr\n S_nu = cgs.erg / si.Hz / si.cm**2 / si.s / si.sr\n nu_S_nu = cgs.erg / si.cm**2 / si.s / si.sr\n la_S_la = nu_S_nu\n phot_S_la = astrophys.photon / (si.cm**2 * si.s * si.AA * si.sr)\n phot_S_nu = astrophys.photon / (si.cm**2 * si.s * si.Hz * si.sr)\n\n # surface brightness (luminosity equiv)\n SL_nu = cgs.erg / si.s / si.Hz / si.sr\n SL_la = cgs.erg / si.s / si.angstrom / si.sr\n nu_SL_nu = cgs.erg / si.s / si.sr\n la_SL_la = nu_SL_nu\n phot_SL_la = astrophys.photon / (si.s * si.AA * si.sr)\n phot_SL_nu = astrophys.photon / (si.s * si.Hz * si.sr)\n\n def f_la_to_f_nu(x):\n return x * (wav.to_value(si.AA, spectral()) ** 2 / c_Aps)\n\n def f_la_from_f_nu(x):\n return x / (wav.to_value(si.AA, spectral()) ** 2 / c_Aps)\n\n def f_nu_to_nu_f_nu(x):\n return x * wav.to_value(si.Hz, spectral())\n\n def f_nu_from_nu_f_nu(x):\n return x / wav.to_value(si.Hz, spectral())\n\n def f_la_to_la_f_la(x):\n return x * wav.to_value(si.AA, spectral())\n\n def f_la_from_la_f_la(x):\n return x / wav.to_value(si.AA, spectral())\n\n def phot_f_la_to_f_la(x):\n return hc * x / wav.to_value(si.AA, spectral())\n\n def phot_f_la_from_f_la(x):\n return x * wav.to_value(si.AA, spectral()) / hc\n\n def phot_f_la_to_f_nu(x):\n return h_cgs * x * wav.to_value(si.AA, spectral())\n\n def phot_f_la_from_f_nu(x):\n return x / (wav.to_value(si.AA, spectral()) * h_cgs)\n\n def phot_f_la_to_phot_f_nu(x):\n return x * 
wav.to_value(si.AA, spectral()) ** 2 / c_Aps\n\n def phot_f_la_from_phot_f_nu(x):\n return c_Aps * x / wav.to_value(si.AA, spectral()) ** 2\n\n phot_f_nu_to_f_nu = phot_f_la_to_f_la\n phot_f_nu_from_f_nu = phot_f_la_from_f_la\n\n def phot_f_nu_to_f_la(x):\n return x * hc * c_Aps / wav.to_value(si.AA, spectral()) ** 3\n\n def phot_f_nu_from_f_la(x):\n return x * wav.to_value(si.AA, spectral()) ** 3 / (hc * c_Aps)\n\n # for luminosity density\n L_nu_to_nu_L_nu = f_nu_to_nu_f_nu\n L_nu_from_nu_L_nu = f_nu_from_nu_f_nu\n L_la_to_la_L_la = f_la_to_la_f_la\n L_la_from_la_L_la = f_la_from_la_f_la\n\n phot_L_la_to_L_la = phot_f_la_to_f_la\n phot_L_la_from_L_la = phot_f_la_from_f_la\n phot_L_la_to_L_nu = phot_f_la_to_f_nu\n phot_L_la_from_L_nu = phot_f_la_from_f_nu\n phot_L_la_to_phot_L_nu = phot_f_la_to_phot_f_nu\n phot_L_la_from_phot_L_nu = phot_f_la_from_phot_f_nu\n phot_L_nu_to_L_nu = phot_f_nu_to_f_nu\n phot_L_nu_from_L_nu = phot_f_nu_from_f_nu\n phot_L_nu_to_L_la = phot_f_nu_to_f_la\n phot_L_nu_from_L_la = phot_f_nu_from_f_la\n\n return Equivalency(\n [\n # flux\n (f_la, f_nu, f_la_to_f_nu, f_la_from_f_nu),\n (f_nu, nu_f_nu, f_nu_to_nu_f_nu, f_nu_from_nu_f_nu),\n (f_la, la_f_la, f_la_to_la_f_la, f_la_from_la_f_la),\n (phot_f_la, f_la, phot_f_la_to_f_la, phot_f_la_from_f_la),\n (phot_f_la, f_nu, phot_f_la_to_f_nu, phot_f_la_from_f_nu),\n (phot_f_la, phot_f_nu, phot_f_la_to_phot_f_nu, phot_f_la_from_phot_f_nu),\n (phot_f_nu, f_nu, phot_f_nu_to_f_nu, phot_f_nu_from_f_nu),\n (phot_f_nu, f_la, phot_f_nu_to_f_la, phot_f_nu_from_f_la),\n # integrated flux\n (la_phot_f_la, la_f_la, phot_f_la_to_f_la, phot_f_la_from_f_la),\n # luminosity\n (L_la, L_nu, f_la_to_f_nu, f_la_from_f_nu),\n (L_nu, nu_L_nu, L_nu_to_nu_L_nu, L_nu_from_nu_L_nu),\n (L_la, la_L_la, L_la_to_la_L_la, L_la_from_la_L_la),\n (phot_L_la, L_la, phot_L_la_to_L_la, phot_L_la_from_L_la),\n (phot_L_la, L_nu, phot_L_la_to_L_nu, phot_L_la_from_L_nu),\n (phot_L_la, phot_L_nu, phot_L_la_to_phot_L_nu, phot_L_la_from_phot_L_nu),\n (phot_L_nu, L_nu, phot_L_nu_to_L_nu, phot_L_nu_from_L_nu),\n (phot_L_nu, L_la, phot_L_nu_to_L_la, phot_L_nu_from_L_la),\n # surface brightness (flux equiv)\n (S_la, S_nu, f_la_to_f_nu, f_la_from_f_nu),\n (S_nu, nu_S_nu, f_nu_to_nu_f_nu, f_nu_from_nu_f_nu),\n (S_la, la_S_la, f_la_to_la_f_la, f_la_from_la_f_la),\n (phot_S_la, S_la, phot_f_la_to_f_la, phot_f_la_from_f_la),\n (phot_S_la, S_nu, phot_f_la_to_f_nu, phot_f_la_from_f_nu),\n (phot_S_la, phot_S_nu, phot_f_la_to_phot_f_nu, phot_f_la_from_phot_f_nu),\n (phot_S_nu, S_nu, phot_f_nu_to_f_nu, phot_f_nu_from_f_nu),\n (phot_S_nu, S_la, phot_f_nu_to_f_la, phot_f_nu_from_f_la),\n # surface brightness (luminosity equiv)\n (SL_la, SL_nu, f_la_to_f_nu, f_la_from_f_nu),\n (SL_nu, nu_SL_nu, L_nu_to_nu_L_nu, L_nu_from_nu_L_nu),\n (SL_la, la_SL_la, L_la_to_la_L_la, L_la_from_la_L_la),\n (phot_SL_la, SL_la, phot_L_la_to_L_la, phot_L_la_from_L_la),\n (phot_SL_la, SL_nu, phot_L_la_to_L_nu, phot_L_la_from_L_nu),\n (phot_SL_la, phot_SL_nu, phot_L_la_to_phot_L_nu, phot_L_la_from_phot_L_nu),\n (phot_SL_nu, SL_nu, phot_L_nu_to_L_nu, phot_L_nu_from_L_nu),\n (phot_SL_nu, SL_la, phot_L_nu_to_L_la, phot_L_nu_from_L_la),\n ],\n \"spectral_density\",\n {\"wav\": wav, \"factor\": factor},\n )\n\n\ndef doppler_radio(rest):\n r\"\"\"\n Return the equivalency pairs for the radio convention for velocity.\n\n The radio convention for the relation between velocity and frequency is:\n\n :math:`V = c \\frac{f_0 - f}{f_0} ; f(V) = f_0 ( 1 - V/c )`\n\n Parameters\n ----------\n rest : 
`~astropy.units.Quantity`\n Any quantity supported by the standard spectral equivalencies\n (wavelength, energy, frequency, wave number).\n\n References\n ----------\n `NRAO site defining the conventions <https://www.gb.nrao.edu/~fghigo/gbtdoc/doppler.html>`_\n\n Examples\n --------\n >>> import astropy.units as u\n >>> CO_restfreq = 115.27120*u.GHz # rest frequency of 12 CO 1-0 in GHz\n >>> radio_CO_equiv = u.doppler_radio(CO_restfreq)\n >>> measured_freq = 115.2832*u.GHz\n >>> radio_velocity = measured_freq.to(u.km/u.s, equivalencies=radio_CO_equiv)\n >>> radio_velocity # doctest: +FLOAT_CMP\n <Quantity -31.209092088877583 km / s>\n \"\"\"\n assert_is_spectral_unit(rest)\n\n ckms = _si.c.to_value(\"km/s\")\n\n def to_vel_freq(x):\n restfreq = rest.to_value(si.Hz, equivalencies=spectral())\n return (restfreq - x) / (restfreq) * ckms\n\n def from_vel_freq(x):\n restfreq = rest.to_value(si.Hz, equivalencies=spectral())\n voverc = x / ckms\n return restfreq * (1 - voverc)\n\n def to_vel_wav(x):\n restwav = rest.to_value(si.AA, spectral())\n return (x - restwav) / (x) * ckms\n\n def from_vel_wav(x):\n restwav = rest.to_value(si.AA, spectral())\n return restwav * ckms / (ckms - x)\n\n def to_vel_en(x):\n resten = rest.to_value(si.eV, equivalencies=spectral())\n return (resten - x) / (resten) * ckms\n\n def from_vel_en(x):\n resten = rest.to_value(si.eV, equivalencies=spectral())\n voverc = x / ckms\n return resten * (1 - voverc)\n\n return Equivalency(\n [\n (si.Hz, si.km / si.s, to_vel_freq, from_vel_freq),\n (si.AA, si.km / si.s, to_vel_wav, from_vel_wav),\n (si.eV, si.km / si.s, to_vel_en, from_vel_en),\n ],\n \"doppler_radio\",\n {\"rest\": rest},\n )\n\n\ndef doppler_optical(rest):\n r\"\"\"\n Return the equivalency pairs for the optical convention for velocity.\n\n The optical convention for the relation between velocity and frequency is:\n\n :math:`V = c \\frac{f_0 - f}{f } ; f(V) = f_0 ( 1 + V/c )^{-1}`\n\n Parameters\n ----------\n rest : `~astropy.units.Quantity`\n Any quantity supported by the standard spectral equivalencies\n (wavelength, energy, frequency, wave number).\n\n References\n ----------\n `NRAO site defining the conventions <https://www.gb.nrao.edu/~fghigo/gbtdoc/doppler.html>`_\n\n Examples\n --------\n >>> import astropy.units as u\n >>> CO_restfreq = 115.27120*u.GHz # rest frequency of 12 CO 1-0 in GHz\n >>> optical_CO_equiv = u.doppler_optical(CO_restfreq)\n >>> measured_freq = 115.2832*u.GHz\n >>> optical_velocity = measured_freq.to(u.km/u.s, equivalencies=optical_CO_equiv)\n >>> optical_velocity # doctest: +FLOAT_CMP\n <Quantity -31.20584348799674 km / s>\n \"\"\"\n assert_is_spectral_unit(rest)\n\n ckms = _si.c.to_value(\"km/s\")\n\n def to_vel_freq(x):\n restfreq = rest.to_value(si.Hz, equivalencies=spectral())\n return ckms * (restfreq - x) / x\n\n def from_vel_freq(x):\n restfreq = rest.to_value(si.Hz, equivalencies=spectral())\n voverc = x / ckms\n return restfreq / (1 + voverc)\n\n def to_vel_wav(x):\n restwav = rest.to_value(si.AA, spectral())\n return ckms * (x / restwav - 1)\n\n def from_vel_wav(x):\n restwav = rest.to_value(si.AA, spectral())\n voverc = x / ckms\n return restwav * (1 + voverc)\n\n def to_vel_en(x):\n resten = rest.to_value(si.eV, equivalencies=spectral())\n return ckms * (resten - x) / x\n\n def from_vel_en(x):\n resten = rest.to_value(si.eV, equivalencies=spectral())\n voverc = x / ckms\n return resten / (1 + voverc)\n\n return Equivalency(\n [\n (si.Hz, si.km / si.s, to_vel_freq, from_vel_freq),\n (si.AA, si.km / si.s, to_vel_wav, 
from_vel_wav),\n (si.eV, si.km / si.s, to_vel_en, from_vel_en),\n ],\n \"doppler_optical\",\n {\"rest\": rest},\n )\n\n\ndef doppler_relativistic(rest):\n r\"\"\"\n Return the equivalency pairs for the relativistic convention for velocity.\n\n The full relativistic convention for the relation between velocity and frequency is:\n\n :math:`V = c \\frac{f_0^2 - f^2}{f_0^2 + f^2} ; f(V) = f_0 \\frac{\\left(1 - (V/c)^2\\right)^{1/2}}{(1+V/c)}`\n\n Parameters\n ----------\n rest : `~astropy.units.Quantity`\n Any quantity supported by the standard spectral equivalencies\n (wavelength, energy, frequency, wave number).\n\n References\n ----------\n `NRAO site defining the conventions <https://www.gb.nrao.edu/~fghigo/gbtdoc/doppler.html>`_\n\n Examples\n --------\n >>> import astropy.units as u\n >>> CO_restfreq = 115.27120*u.GHz # rest frequency of 12 CO 1-0 in GHz\n >>> relativistic_CO_equiv = u.doppler_relativistic(CO_restfreq)\n >>> measured_freq = 115.2832*u.GHz\n >>> relativistic_velocity = measured_freq.to(u.km/u.s, equivalencies=relativistic_CO_equiv)\n >>> relativistic_velocity # doctest: +FLOAT_CMP\n <Quantity -31.207467619351537 km / s>\n >>> measured_velocity = 1250 * u.km/u.s\n >>> relativistic_frequency = measured_velocity.to(u.GHz, equivalencies=relativistic_CO_equiv)\n >>> relativistic_frequency # doctest: +FLOAT_CMP\n <Quantity 114.79156866993588 GHz>\n >>> relativistic_wavelength = measured_velocity.to(u.mm, equivalencies=relativistic_CO_equiv)\n >>> relativistic_wavelength # doctest: +FLOAT_CMP\n <Quantity 2.6116243681798923 mm>\n \"\"\"\n assert_is_spectral_unit(rest)\n\n ckms = _si.c.to_value(\"km/s\")\n\n def to_vel_freq(x):\n restfreq = rest.to_value(si.Hz, equivalencies=spectral())\n return (restfreq**2 - x**2) / (restfreq**2 + x**2) * ckms\n\n def from_vel_freq(x):\n restfreq = rest.to_value(si.Hz, equivalencies=spectral())\n voverc = x / ckms\n return restfreq * ((1 - voverc) / (1 + (voverc))) ** 0.5\n\n def to_vel_wav(x):\n restwav = rest.to_value(si.AA, spectral())\n return (x**2 - restwav**2) / (restwav**2 + x**2) * ckms\n\n def from_vel_wav(x):\n restwav = rest.to_value(si.AA, spectral())\n voverc = x / ckms\n return restwav * ((1 + voverc) / (1 - voverc)) ** 0.5\n\n def to_vel_en(x):\n resten = rest.to_value(si.eV, spectral())\n return (resten**2 - x**2) / (resten**2 + x**2) * ckms\n\n def from_vel_en(x):\n resten = rest.to_value(si.eV, spectral())\n voverc = x / ckms\n return resten * ((1 - voverc) / (1 + (voverc))) ** 0.5\n\n return Equivalency(\n [\n (si.Hz, si.km / si.s, to_vel_freq, from_vel_freq),\n (si.AA, si.km / si.s, to_vel_wav, from_vel_wav),\n (si.eV, si.km / si.s, to_vel_en, from_vel_en),\n ],\n \"doppler_relativistic\",\n {\"rest\": rest},\n )\n\n\ndef doppler_redshift():\n \"\"\"\n Returns the equivalence between Doppler redshift (unitless) and radial velocity.\n\n .. 
note::\n\n This equivalency is not compatible with cosmological\n redshift in `astropy.cosmology.units`.\n\n \"\"\"\n rv_unit = si.km / si.s\n C_KMS = _si.c.to_value(rv_unit)\n\n def convert_z_to_rv(z):\n zponesq = (1 + z) ** 2\n return C_KMS * (zponesq - 1) / (zponesq + 1)\n\n def convert_rv_to_z(rv):\n beta = rv / C_KMS\n return np.sqrt((1 + beta) / (1 - beta)) - 1\n\n return Equivalency(\n [(dimensionless_unscaled, rv_unit, convert_z_to_rv, convert_rv_to_z)],\n \"doppler_redshift\",\n )\n\n\ndef molar_mass_amu():\n \"\"\"\n Returns the equivalence between amu and molar mass.\n \"\"\"\n return Equivalency([(si.g / si.mol, misc.u)], \"molar_mass_amu\")\n\n\ndef mass_energy():\n \"\"\"\n Returns a list of equivalence pairs that handle the conversion\n between mass and energy.\n \"\"\"\n c2 = _si.c.value**2\n return Equivalency(\n [\n (si.kg, si.J, lambda x: x * c2, lambda x: x / c2),\n (si.kg / si.m**2, si.J / si.m**2, lambda x: x * c2, lambda x: x / c2),\n (si.kg / si.m**3, si.J / si.m**3, lambda x: x * c2, lambda x: x / c2),\n (si.kg / si.s, si.J / si.s, lambda x: x * c2, lambda x: x / c2),\n ],\n \"mass_energy\",\n )\n\n\ndef brightness_temperature(frequency, beam_area=None):\n r\"\"\"\n Defines the conversion between Jy/sr and \"brightness temperature\",\n :math:`T_B`, in Kelvins. The brightness temperature is a unit very\n commonly used in radio astronomy. See, e.g., \"Tools of Radio Astronomy\"\n (Wilson 2009) eqn 8.16 and eqn 8.19 (these pages are available on `google\n books\n <https://books.google.com/books?id=9KHw6R8rQEMC&pg=PA179&source=gbs_toc_r&cad=4#v=onepage&q&f=false>`__).\n\n :math:`T_B \\equiv S_\\nu / \\left(2 k \\nu^2 / c^2 \\right)`\n\n If the input is in Jy/beam or Jy (assuming it came from a single beam), the\n beam area is essential for this computation: the brightness temperature is\n inversely proportional to the beam area.\n\n Parameters\n ----------\n frequency : `~astropy.units.Quantity`\n The observed ``spectral`` equivalent `~astropy.units.Unit` (e.g.,\n frequency or wavelength). The variable is named 'frequency' because it\n is more commonly used in radio astronomy.\n BACKWARD COMPATIBILITY NOTE: previous versions of the brightness\n temperature equivalency used the keyword ``disp``, which is no longer\n supported.\n beam_area : `~astropy.units.Quantity` ['solid angle']\n Beam area in angular units, i.e. 
steradian equivalent\n\n Examples\n --------\n Arecibo C-band beam::\n\n >>> import numpy as np\n >>> from astropy import units as u\n >>> beam_sigma = 50*u.arcsec\n >>> beam_area = 2*np.pi*(beam_sigma)**2\n >>> freq = 5*u.GHz\n >>> equiv = u.brightness_temperature(freq)\n >>> (1*u.Jy/beam_area).to(u.K, equivalencies=equiv) # doctest: +FLOAT_CMP\n <Quantity 3.526295144567176 K>\n\n VLA synthetic beam::\n\n >>> bmaj = 15*u.arcsec\n >>> bmin = 15*u.arcsec\n >>> fwhm_to_sigma = 1./(8*np.log(2))**0.5\n >>> beam_area = 2.*np.pi*(bmaj*bmin*fwhm_to_sigma**2)\n >>> freq = 5*u.GHz\n >>> equiv = u.brightness_temperature(freq)\n >>> (u.Jy/beam_area).to(u.K, equivalencies=equiv) # doctest: +FLOAT_CMP\n <Quantity 217.2658703625732 K>\n\n Any generic surface brightness:\n\n >>> surf_brightness = 1e6*u.MJy/u.sr\n >>> surf_brightness.to(u.K, equivalencies=u.brightness_temperature(500*u.GHz)) # doctest: +FLOAT_CMP\n <Quantity 130.1931904778803 K>\n \"\"\"\n nu = frequency.to(si.GHz, spectral())\n factor_Jy = (2 * _si.k_B * si.K * nu**2 / _si.c**2).to(astrophys.Jy).value\n factor_K = (astrophys.Jy / (2 * _si.k_B * nu**2 / _si.c**2)).to(si.K).value\n\n if beam_area is not None:\n beam = beam_area.to_value(si.sr)\n\n def convert_Jy_to_K(x_jybm):\n return x_jybm / beam / factor_Jy\n\n def convert_K_to_Jy(x_K):\n return x_K * beam / factor_K\n\n return Equivalency(\n [\n (astrophys.Jy, si.K, convert_Jy_to_K, convert_K_to_Jy),\n (astrophys.Jy / astrophys.beam, si.K, convert_Jy_to_K, convert_K_to_Jy),\n ],\n \"brightness_temperature\",\n {\"frequency\": frequency, \"beam_area\": beam_area},\n )\n else:\n\n def convert_JySr_to_K(x_jysr):\n return x_jysr / factor_Jy\n\n def convert_K_to_JySr(x_K):\n return x_K / factor_K # multiplied by 1x for 1 steradian\n\n return Equivalency(\n [(astrophys.Jy / si.sr, si.K, convert_JySr_to_K, convert_K_to_JySr)],\n \"brightness_temperature\",\n {\"frequency\": frequency, \"beam_area\": beam_area},\n )\n\n\ndef beam_angular_area(beam_area):\n \"\"\"\n Convert between the ``beam`` unit, which is commonly used to express the area\n of a radio telescope resolution element, and an area on the sky.\n This equivalency also supports direct conversion between ``Jy/beam`` and\n ``Jy/steradian`` units, since that is a common operation.\n\n Parameters\n ----------\n beam_area : unit-like\n The area of the beam in angular area units (e.g., steradians)\n Must have angular area equivalent units.\n \"\"\"\n return Equivalency(\n [\n (astrophys.beam, Unit(beam_area)),\n (astrophys.beam**-1, Unit(beam_area) ** -1),\n (astrophys.Jy / astrophys.beam, astrophys.Jy / Unit(beam_area)),\n ],\n \"beam_angular_area\",\n {\"beam_area\": beam_area},\n )\n\n\ndef thermodynamic_temperature(frequency, T_cmb=None):\n r\"\"\"Defines the conversion between Jy/sr and \"thermodynamic temperature\",\n :math:`T_{CMB}`, in Kelvins. The thermodynamic temperature is a unit very\n commonly used in cosmology. See eqn 8 in [1].\n\n :math:`K_{CMB} \\equiv I_\\nu / \\left(2 k \\nu^2 / c^2 f(\\nu) \\right)`\n\n with :math:`f(\\nu) = \\frac{ x^2 e^x}{(e^x - 1 )^2}`\n where :math:`x = h \\nu / k T`\n\n Parameters\n ----------\n frequency : `~astropy.units.Quantity`\n The observed `spectral` equivalent `~astropy.units.Unit` (e.g.,\n frequency or wavelength). Must have spectral units.\n T_cmb : `~astropy.units.Quantity` ['temperature'] or None\n The CMB temperature at z=0. If `None`, the default cosmology will be\n used to get this temperature. 
Must have units of temperature.\n\n Notes\n -----\n For broad band receivers, this conversion do not hold\n as it highly depends on the frequency\n\n References\n ----------\n .. [1] Planck 2013 results. IX. HFI spectral response\n https://arxiv.org/abs/1303.5070\n\n Examples\n --------\n Planck HFI 143 GHz::\n\n >>> from astropy import units as u\n >>> from astropy.cosmology import Planck15\n >>> freq = 143 * u.GHz\n >>> equiv = u.thermodynamic_temperature(freq, Planck15.Tcmb0)\n >>> (1. * u.mK).to(u.MJy / u.sr, equivalencies=equiv) # doctest: +FLOAT_CMP\n <Quantity 0.37993172 MJy / sr>\n\n \"\"\"\n nu = frequency.to(si.GHz, spectral())\n\n if T_cmb is None:\n from astropy.cosmology import default_cosmology\n\n T_cmb = default_cosmology.get().Tcmb0\n\n def f(nu, T_cmb=T_cmb):\n x = _si.h * nu / _si.k_B / T_cmb\n return x**2 * np.exp(x) / np.expm1(x) ** 2\n\n def convert_Jy_to_K(x_jybm):\n factor = (f(nu) * 2 * _si.k_B * si.K * nu**2 / _si.c**2).to_value(astrophys.Jy)\n return x_jybm / factor\n\n def convert_K_to_Jy(x_K):\n factor = (astrophys.Jy / (f(nu) * 2 * _si.k_B * nu**2 / _si.c**2)).to_value(\n si.K\n )\n return x_K / factor\n\n return Equivalency(\n [(astrophys.Jy / si.sr, si.K, convert_Jy_to_K, convert_K_to_Jy)],\n \"thermodynamic_temperature\",\n {\"frequency\": frequency, \"T_cmb\": T_cmb},\n )\n\n\ndef temperature():\n \"\"\"Convert between Kelvin, Celsius, Rankine and Fahrenheit here because\n Unit and CompositeUnit cannot do addition or subtraction properly.\n \"\"\"\n from .imperial import deg_F as F\n from .imperial import deg_R as R\n\n K = si.K\n C = si.deg_C\n\n return Equivalency(\n [\n (K, C, lambda x: x - 273.15, lambda x: x + 273.15),\n (C, F, lambda x: x * 1.8 + 32.0, lambda x: (x - 32.0) / 1.8),\n (K, F, lambda x: x * 1.8 - 459.67, lambda x: (x + 459.67) / 1.8),\n (R, F, lambda x: x - 459.67, lambda x: x + 459.67),\n (R, C, lambda x: (x - 491.67) * (5 / 9), lambda x: x * 1.8 + 491.67),\n (R, K, lambda x: x * (5 / 9), lambda x: x * 1.8),\n ],\n \"temperature\",\n )\n\n\ndef temperature_energy():\n \"\"\"Convert between Kelvin and keV(eV) to an equivalent amount.\"\"\"\n e = _si.e.value\n k_B = _si.k_B.value\n return Equivalency(\n [(si.K, si.eV, lambda x: x / (e / k_B), lambda x: x * (e / k_B))],\n \"temperature_energy\",\n )\n\n\ndef assert_is_spectral_unit(value):\n try:\n value.to(si.Hz, spectral())\n except (AttributeError, UnitsError) as ex:\n raise UnitsError(\n \"The 'rest' value must be a spectral equivalent \"\n \"(frequency, wavelength, or energy).\"\n )\n\n\ndef pixel_scale(pixscale):\n \"\"\"\n Convert between pixel distances (in units of ``pix``) and other units,\n given a particular ``pixscale``.\n\n Parameters\n ----------\n pixscale : `~astropy.units.Quantity`\n The pixel scale either in units of <unit>/pixel or pixel/<unit>.\n \"\"\"\n decomposed = pixscale.unit.decompose()\n dimensions = dict(zip(decomposed.bases, decomposed.powers))\n pix_power = dimensions.get(misc.pix, 0)\n\n if pix_power == -1:\n physical_unit = Unit(pixscale * misc.pix)\n elif pix_power == 1:\n physical_unit = Unit(misc.pix / pixscale)\n else:\n raise UnitsError(\n \"The pixel scale unit must have pixel dimensionality of 1 or -1.\"\n )\n\n return Equivalency(\n [(misc.pix, physical_unit)], \"pixel_scale\", {\"pixscale\": pixscale}\n )\n\n\ndef plate_scale(platescale):\n \"\"\"\n Convert between lengths (to be interpreted as lengths in the focal plane)\n and angular units with a specified ``platescale``.\n\n Parameters\n ----------\n platescale : `~astropy.units.Quantity`\n The 
pixel scale either in units of distance/pixel or distance/angle.\n \"\"\"\n if platescale.unit.is_equivalent(si.arcsec / si.m):\n platescale_val = platescale.to_value(si.radian / si.m)\n elif platescale.unit.is_equivalent(si.m / si.arcsec):\n platescale_val = (1 / platescale).to_value(si.radian / si.m)\n else:\n raise UnitsError(\"The pixel scale must be in angle/distance or distance/angle\")\n\n return Equivalency(\n [(si.m, si.radian, lambda d: d * platescale_val, lambda a: a / platescale_val)],\n \"plate_scale\",\n {\"platescale\": platescale},\n )\n", "docs/changes/units/16516.feature.rst": null, "docs/units/equivalencies.rst": ".. _unit_equivalencies:\n\nEquivalencies\n*************\n\nThe unit module has machinery for supporting equivalences between\ndifferent units in certain contexts, namely when equations can\nuniquely relate a value in one unit to a different unit. A good\nexample is the equivalence between wavelength, frequency, and energy\nfor specifying a wavelength of radiation. Normally these units are not\nconvertible, but when understood as representing light, they are\nconvertible in certain contexts. Here we describe how to use the\nequivalencies included in `astropy.units` and how to\ndefine new equivalencies.\n\nEquivalencies are used by passing a list of equivalency pairs to the\n``equivalencies`` keyword argument of `Quantity.to()\n<astropy.units.quantity.Quantity.to>` `Unit.to()\n<astropy.units.core.UnitBase.to>` or `Unit.get_converter()\n<astropy.units.core.UnitBase.get_converter>` methods.\nThe list can be supplied directly,\nbut ``astropy`` contains several functions that return appropriate lists so\nconstructing them is often not necessary. Alternatively, if a larger piece of\ncode needs the same equivalencies, you can set them for a :ref:`given context\n<equivalency-context>`.\n\nBuilt-In Equivalencies\n======================\n\nHow to Convert Parallax to Distance\n-----------------------------------\n\nThe length unit *parsec* is defined such that a star one parsec away\nwill exhibit a 1-arcsecond parallax. (Think of the name as a contraction\nbetween *parallax* and *arcsecond*.)\n\nThe :func:`~astropy.units.equivalencies.parallax` function handles\nconversions between parallax angles and length.\n\n.. EXAMPLE START: Converting Parallax to Distance\n\nIn general, you should not be able to change units of length into\nangles or vice versa, so :meth:`~astropy.units.core.UnitBase.to`\nraises an exception::\n\n >>> from astropy import units as u\n >>> (0.8 * u.arcsec).to(u.parsec) # doctest: +IGNORE_EXCEPTION_DETAIL\n Traceback (most recent call last):\n ...\n UnitConversionError: 'arcsec' (angle) and 'pc' (length) are not convertible\n\nTo trigger the conversion between parallax angle and distance, provide\n:func:`~astropy.units.equivalencies.parallax` as the optional keyword\nargument (``equivalencies=``) to the\n:meth:`~astropy.units.core.UnitBase.to` method.\n\n >>> (0.8 * u.arcsec).to(u.parsec, equivalencies=u.parallax())\n <Quantity 1.25 pc>\n\n.. EXAMPLE END\n\nAngles as Dimensionless Units\n-----------------------------\n\nAngles are treated as a physically distinct type, which usually helps to avoid\nmistakes. However, this is not very handy when working with units related to\nrotational energy or the small angle approximation. (Indeed, this\ndouble-sidedness underlies why radians went from a `supplementary to derived unit\n<https://www.bipm.org/en/committees/cg/cgpm/20-1995/resolution-8>`__.) 
The function\n:func:`~astropy.units.equivalencies.dimensionless_angles` provides the required\nequivalency list that helps convert between angles and dimensionless units. It\nis somewhat different from all others in that it allows an arbitrary change in\nthe number of powers to which radians is raised (i.e., including zero and\nthus dimensionless).\n\nExamples\n^^^^^^^^\n\n.. EXAMPLE START: Angles as Dimensionless Units\n\nNormally the following would raise exceptions::\n\n >>> u.degree.to('') # doctest: +IGNORE_EXCEPTION_DETAIL\n Traceback (most recent call last):\n ...\n UnitConversionError: 'deg' (angle) and '' (dimensionless) are not convertible\n >>> (u.kg * u.m**2 * (u.cycle / u.s)**2).to(u.J) # doctest: +IGNORE_EXCEPTION_DETAIL\n Traceback (most recent call last):\n ...\n UnitConversionError: 'cycle2 kg m2 / s2' and 'J' (energy) are not convertible\n\nBut when passing the proper conversion function,\n:func:`~astropy.units.equivalencies.dimensionless_angles`, it works.\n\n >>> u.deg.to('', equivalencies=u.dimensionless_angles()) # doctest: +FLOAT_CMP\n 0.017453292519943295\n >>> (0.5e38 * u.kg * u.m**2 * (u.cycle / u.s)**2).to(u.J,\n ... equivalencies=u.dimensionless_angles()) # doctest: +FLOAT_CMP\n <Quantity 1.9739208802178715e+39 J>\n >>> import numpy as np\n >>> np.exp((1j*0.125*u.cycle).to('', equivalencies=u.dimensionless_angles())) # doctest: +FLOAT_CMP\n <Quantity 0.70710678+0.70710678j>\n\n.. EXAMPLE END\n\nIn an example with complex numbers you may well be doing a fair\nnumber of similar calculations. For such situations, there is the\noption to :ref:`set default equivalencies <equivalency-context>`.\n\nIn some situations, this equivalency may behave differently than\nanticipated. For instance, it might at first seem reasonable to use it\nfor converting from an angular velocity :math:`\\omega` in radians per\nsecond to the corresponding frequency :math:`f` in hertz (i.e., to\nimplement :math:`f=\\omega/2\\pi`). However, attempting this yields:\n\n >>> (1*u.rad/u.s).to(u.Hz, equivalencies=u.dimensionless_angles()) # doctest: +FLOAT_CMP\n <Quantity 1. Hz>\n >>> (1*u.cycle/u.s).to(u.Hz, equivalencies=u.dimensionless_angles()) # doctest: +FLOAT_CMP\n <Quantity 6.283185307179586 Hz>\n\nHere, we might have expected ~0.159 Hz in the first example and 1 Hz in\nthe second. However, :func:`~astropy.units.equivalencies.dimensionless_angles`\nconverts to radians per second and then drops radians as a unit. The\nimplicit mistake made in these examples is that the unit Hz is taken to be\nequivalent to cycles per second, which it is not (it is just \"per second\").\nThis realization also leads to the solution: to use an explicit equivalency\nbetween cycles per second and hertz:\n\n >>> (1*u.rad/u.s).to(u.Hz, equivalencies=[(u.cy/u.s, u.Hz)]) # doctest: +FLOAT_CMP\n <Quantity 0.15915494309189535 Hz>\n >>> (1*u.cy/u.s).to(u.Hz, equivalencies=[(u.cy/u.s, u.Hz)]) # doctest: +FLOAT_CMP\n <Quantity 1. Hz>\n\n.. _astropy-units-spectral-equivalency:\n\nSpectral Units\n--------------\n\n:func:`~astropy.units.equivalencies.spectral` is a function that returns\nan equivalency list to handle conversions between wavelength,\nfrequency, energy, and wave number.\n\n.. 
EXAMPLE START: Using Spectral Units for Conversions\n\nAs mentioned with parallax units, we pass a list of equivalencies (in this case,\nthe result of :func:`~astropy.units.equivalencies.spectral`) as the second\nargument to the :meth:`~astropy.units.quantity.Quantity.to` method and\nwavelength, and then frequency and energy can be converted.\n\n >>> ([1000, 2000] * u.nm).to(u.Hz, equivalencies=u.spectral()) # doctest: +FLOAT_CMP\n <Quantity [2.99792458e+14, 1.49896229e+14] Hz>\n >>> ([1000, 2000] * u.nm).to(u.eV, equivalencies=u.spectral()) # doctest: +FLOAT_CMP\n <Quantity [1.23984193, 0.61992096] eV>\n\nThese equivalencies even work with non-base units::\n\n >>> # Inches to calories\n >>> from astropy.units import imperial\n >>> imperial.inch.to(imperial.Cal, equivalencies=u.spectral()) # doctest: +FLOAT_CMP\n np.float64(1.869180759162485e-27)\n\n.. EXAMPLE END\n\n.. _astropy-units-doppler-equivalencies:\n\nSpectral (Doppler) Equivalencies\n--------------------------------\n\nSpectral equivalencies allow you to convert between wavelength,\nfrequency, energy, and wave number, but not to velocity, which is\nfrequently the quantity of interest.\n\nIt is fairly convenient to define the equivalency, but note that there are\ndifferent `conventions <https://www.gb.nrao.edu/~fghigo/gbtdoc/doppler.html>`__.\nIn these conventions :math:`f_0` is the rest frequency, :math:`f` is the\nobserved frequency, :math:`V` is the velocity, and :math:`c` is the speed of\nlight:\n\n * Radio :math:`V = c \\frac{f_0 - f}{f_0} ; f(V) = f_0 ( 1 - V/c )`\n * Optical :math:`V = c \\frac{f_0 - f}{f } ; f(V) = f_0 ( 1 + V/c )^{-1}`\n * Relativistic :math:`V = c \\frac{f_0^2 - f^2}{f_0^2 + f^2} ; f(V) = f_0 \\frac{\\left(1 - (V/c)^2\\right)^{1/2}}{(1+V/c)}`\n\nThese three conventions are implemented in\n:mod:`astropy.units.equivalencies` as\n:func:`~astropy.units.equivalencies.doppler_optical`,\n:func:`~astropy.units.equivalencies.doppler_radio`, and\n:func:`~astropy.units.equivalencies.doppler_relativistic`.\n\nExample\n^^^^^^^\n\n.. EXAMPLE START: Using Spectral (Doppler) Equivalencies\n\nTo define an equivalency::\n\n >>> restfreq = 115.27120 * u.GHz # rest frequency of 12 CO 1-0 in GHz\n >>> freq_to_vel = u.doppler_radio(restfreq)\n >>> (116e9 * u.Hz).to(u.km / u.s, equivalencies=freq_to_vel) # doctest: +FLOAT_CMP\n <Quantity -1895.4321928669085 km / s>\n\n.. EXAMPLE END\n\nSpectral Flux and Luminosity Density Units\n------------------------------------------\n\nThere is also support for spectral flux and luminosity density units,\ntheir equivalent surface brightness units, and integrated flux units. Their use\nis more complex, since it is necessary to also supply the location in the\nspectrum for which the conversions will be done, and the units of those spectral\nlocations. The function that handles these unit conversions is\n:func:`~astropy.units.equivalencies.spectral_density`. This function takes as\nits arguments the |Quantity| for the spectral location.\n\nExample\n^^^^^^^\n\n.. EXAMPLE START: Converting Spectral Flux and Luminosity Density Units\n\nTo perform unit conversions with\n:func:`~astropy.units.equivalencies.spectral_density`::\n\n >>> (1.5 * u.Jy).to(u.photon / u.cm**2 / u.s / u.Hz,\n ... equivalencies=u.spectral_density(3500 * u.AA)) # doctest: +FLOAT_CMP\n <Quantity 2.6429112e-12 ph / (Hz s cm2)>\n >>> (1.5 * u.Jy).to(u.photon / u.cm**2 / u.s / u.micron,\n ... equivalencies=u.spectral_density(3500 * u.AA)) # doctest: +FLOAT_CMP\n <Quantity 6467.95791275 ph / (micron s cm2)>\n >>> a = 1. 
* (u.photon / u.s / u.angstrom)\n >>> a.to(u.erg / u.s / u.Hz,\n ... equivalencies=u.spectral_density(5500 * u.AA)) # doctest: +FLOAT_CMP\n <Quantity 3.6443382634999996e-23 erg / (Hz s)>\n >>> w = 5000 * u.AA\n >>> a = 1. * (u.erg / u.cm**2 / u.s)\n >>> b = a.to(u.photon / u.cm**2 / u.s, u.spectral_density(w))\n >>> b # doctest: +FLOAT_CMP\n <Quantity 2.51705828e+11 ph / (s cm2)>\n >>> b.to(a.unit, u.spectral_density(w)) # doctest: +FLOAT_CMP\n <Quantity 1. erg / (s cm2)>\n\n.. EXAMPLE END\n\nBrightness Temperature and Surface Brightness Equivalency\n---------------------------------------------------------\n\nThere is an equivalency between surface brightness (flux density per area) and\nbrightness temperature. This equivalency is often referred to as \"Antenna Gain\"\nsince, at a given frequency, telescope brightness sensitivity is unrelated to\naperture size, but flux density sensitivity is, so this equivalency is only\ndependent on the aperture size. See `Tools of Radio Astronomy\n<https://books.google.com/books?id=9KHw6R8rQEMC&pg=PA179&source=gbs_toc_r&cad=4#v=onepage&q&f=false>`_\nfor details.\n\n.. note:: The brightness temperature mentioned here is the Rayleigh-Jeans\n equivalent temperature, which results in a linear relation between\n flux and temperature. This is the convention that is most often used\n in relation to observations, but if you are interested in computing\n the *exact* temperature of a blackbody function that would produce a\n given flux, you should not use this equivalency.\n\nExamples\n^^^^^^^^\n\n.. EXAMPLE START: Converting Brightness Temperature and Surface Brightness\n Equivalency\n\nThe :func:`~astropy.units.equivalencies.brightness_temperature` equivalency\nrequires the beam area and frequency as arguments. Recalling that the area of a\n2D Gaussian is :math:`2 \\pi \\sigma^2` (see `wikipedia\n<https://en.wikipedia.org/wiki/Gaussian_function#Two-dimensional_Gaussian_function>`_),\nhere is an example::\n\n >>> beam_sigma = 50*u.arcsec\n >>> omega_B = 2 * np.pi * beam_sigma**2\n >>> freq = 5 * u.GHz\n >>> (1*u.Jy/omega_B).to(u.K, equivalencies=u.brightness_temperature(freq)) # doctest: +FLOAT_CMP\n <Quantity 3.526295144567176 K>\n\nIf you have beam full-width half-maxima (FWHM), which are often quoted and are\nthe values stored in the FITS header keywords BMAJ and BMIN, a more appropriate\nexample converts the FWHM to sigma::\n\n >>> beam_fwhm = 50*u.arcsec\n >>> fwhm_to_sigma = 1. / (8 * np.log(2))**0.5\n >>> beam_sigma = beam_fwhm * fwhm_to_sigma\n >>> omega_B = 2 * np.pi * beam_sigma**2\n >>> (1*u.Jy/omega_B).to(u.K, equivalencies=u.brightness_temperature(freq)) # doctest: +FLOAT_CMP\n <Quantity 19.553932298231704 K>\n\nYou can also convert between ``Jy/beam`` and ``K`` by specifying the beam area::\n\n >>> (1*u.Jy/u.beam).to(u.K, u.brightness_temperature(freq, beam_area=omega_B)) # doctest: +FLOAT_CMP\n <Quantity 19.553932298231704 K>\n\n.. EXAMPLE END\n\nBeam Equivalency\n----------------\n\nRadio data, especially from interferometers, is often produced in units of\n``Jy/beam``. Converting this number to a beam-independent value (e.g.,\n``Jy/sr``), can be done with the\n:func:`~astropy.units.equivalencies.beam_angular_area` equivalency.\n\nExample\n^^^^^^^\n\n.. EXAMPLE START: Converting Radio Data to a Beam-Independent Value\n\nTo convert units of ``Jy/beam`` to ``Jy/sr``::\n\n >>> beam_fwhm = 50*u.arcsec\n >>> fwhm_to_sigma = 1. 
/ (8 * np.log(2))**0.5\n >>> beam_sigma = beam_fwhm * fwhm_to_sigma\n >>> omega_B = 2 * np.pi * beam_sigma**2\n >>> (1*u.Jy/u.beam).to(u.MJy/u.sr, equivalencies=u.beam_angular_area(omega_B)) # doctest: +FLOAT_CMP\n <Quantity 15.019166691021288 MJy / sr>\n\n\nNote that the `radio_beam <https://github.com/radio-astro-tools/radio-beam>`_\npackage deals with beam input/output and various operations more directly.\n\n.. EXAMPLE END\n\nTemperature Energy Equivalency\n------------------------------\n\nThe :func:`~astropy.units.equivalencies.temperature_energy` equivalency allows\nconversion between temperature and its equivalent in energy (i.e., the\ntemperature multiplied by the Boltzmann constant), usually expressed in\nelectronvolts. This is used frequently for observations at high-energy, be it\nfor solar or X-ray astronomy.\n\nExample\n^^^^^^^\n\n.. EXAMPLE START: Temperature Energy Equivalency\n\nTo convert between temperature and its equivalent in energy::\n\n >>> t_k = 1e6 * u.K\n >>> t_k.to(u.eV, equivalencies=u.temperature_energy()) # doctest: +FLOAT_CMP\n <Quantity 86.17332384960955 eV>\n\n.. EXAMPLE END\n\n.. _tcmb-equivalency:\n\nThermodynamic Temperature Equivalency\n-------------------------------------\n\nThis :func:`~astropy.units.equivalencies.thermodynamic_temperature`\nequivalency allows conversion between ``Jy/beam`` and \"thermodynamic\ntemperature\", :math:`T_{CMB}`, in Kelvins.\n\nExamples\n^^^^^^^^\n\n.. EXAMPLE START: Thermodynamic Temperature Equivalency\n\nTo convert between ``Jy/beam`` and thermodynamic temperature::\n\n >>> nu = 143 * u.GHz\n >>> t_k = 0.002632051878 * u.K\n >>> t_k.to(u.MJy / u.sr, equivalencies=u.thermodynamic_temperature(nu)) # doctest: +FLOAT_CMP\n <Quantity 1. MJy / sr>\n\nBy default, this will use the :math:`T_{CMB}` value for the default\n:ref:`cosmology <astropy-cosmology>` in ``astropy``, but it is possible to\nspecify a custom :math:`T_{CMB}` value for a specific cosmology as the second\nargument to the equivalency::\n\n >>> from astropy.cosmology import WMAP9\n >>> t_k.to(u.MJy / u.sr, equivalencies=u.thermodynamic_temperature(nu, T_cmb=WMAP9.Tcmb0)) # doctest: +FLOAT_CMP\n <Quantity 0.99982392 MJy / sr>\n\n.. EXAMPLE END\n\nMolar Mass AMU Equivalency\n--------------------------\n\nThe :func:`~astropy.units.equivalencies.molar_mass_amu` equivalency allows\nconversion between the atomic mass unit and the equivalent g/mol. For context,\nrefer to the `NIST definition of SI Base Units\n<https://www.nist.gov/si-redefinition/definitions-si-base-units>`_.\n\nExample\n^^^^^^^\n\n.. EXAMPLE START: Molar Mass AMU Equivalency\n\nTo convert between atomic mass unit and the equivalent g/mol::\n\n >>> x = 1 * (u.g / u.mol)\n >>> y = 1 * u.u\n >>> x.to(u.u, equivalencies=u.molar_mass_amu()) # doctest: +FLOAT_CMP\n <Quantity 1.0 u>\n >>> y.to(u.g/u.mol, equivalencies=u.molar_mass_amu()) # doctest: +FLOAT_CMP\n <Quantity 1.0 g / mol>\n\n.. EXAMPLE END\n\nPixel and Plate Scale Equivalencies\n-----------------------------------\n\nThese equivalencies are for converting between angular scales and either linear\nscales in the focal plane or distances in units of the number of pixels.\n\nExamples\n^^^^^^^^\n\n.. 
EXAMPLE START: Pixel and Plate Scale Equivalencies\n\nSuppose you are working with cutouts from the Sloan Digital Sky Survey,\nwhich defaults to a pixel scale of 0.4 arcseconds per pixel, and want to know\nthe true size of something that you measure to be 240 pixels across in the\ncutout image::\n\n >>> sdss_pixelscale = u.pixel_scale(0.4*u.arcsec/u.pixel)\n >>> (240*u.pixel).to(u.arcmin, sdss_pixelscale) # doctest: +FLOAT_CMP\n <Quantity 1.6 arcmin>\n\nOr maybe you are designing an instrument for a telescope that someone told you\nhas an inverse plate scale of 7.8 meters per radian (for your desired focus),\nand you want to know how big your pixels need to be to cover half an arcsecond.\nUsing :func:`~astropy.units.equivalencies.plate_scale`::\n\n >>> tel_platescale = u.plate_scale(7.8*u.m/u.radian)\n >>> (0.5*u.arcsec).to(u.micron, tel_platescale) # doctest: +FLOAT_CMP\n <Quantity 18.9077335632719 micron>\n\nThe :func:`~astropy.units.equivalencies.pixel_scale` equivalency can also work\nin more general context, where the scale is specified as any quantity that is\nreducible to ``<composite unit>/u.pix`` or ``u.pix/<composite unit>`` (that is,\nthe dimensionality of ``u.pix`` is 1 or -1). For instance, you may define the\ndots per inch (DPI) for a digital image to calculate its physical size::\n\n >>> dpi = u.pixel_scale(100 * u.pix / u.imperial.inch)\n >>> (1024 * u.pix).to(u.cm, dpi) # doctest: +FLOAT_CMP\n <Quantity 26.0096 cm>\n\n.. EXAMPLE END\n\nPhotometric Zero Point Equivalency\n----------------------------------\n\nThe :func:`~astropy.units.zero_point_flux` equivalency provides a way to move\nbetween photometric systems (i.e., those defined relative to a particular\nzero-point flux) and absolute fluxes. This is most useful in conjunction with\nsupport for :ref:`logarithmic_units`.\n\nExample\n^^^^^^^\n\n.. EXAMPLE START: Photometric Zero Point Equivalency\n\nSuppose you are observing a target with a filter with a reported standard zero\npoint of 3631.1 Jy::\n\n >>> target_flux = 1.2 * u.nanomaggy\n >>> zero_point_star_equiv = u.zero_point_flux(3631.1 * u.Jy)\n >>> u.Magnitude(target_flux.to(u.AB, zero_point_star_equiv)) # doctest: +FLOAT_CMP\n <Magnitude 22.30195136 mag(AB)>\n\n.. EXAMPLE END\n\nTemperature Equivalency\n-----------------------\n\nThe :func:`~astropy.units.temperature` equivalency allows conversion\nbetween the Celsius, Fahrenheit, Rankine and Kelvin.\n\nExample\n^^^^^^^\n\n.. EXAMPLE START: Using the Temperature Equivalency\n\nTo convert between temperature scales::\n\n >>> temp_C = 0 * u.Celsius\n >>> temp_Kelvin = temp_C.to(u.K, equivalencies=u.temperature())\n >>> temp_Kelvin # doctest: +FLOAT_CMP\n <Quantity 273.15 K>\n >>> temp_F = temp_C.to(u.imperial.deg_F, equivalencies=u.temperature())\n >>> temp_F # doctest: +FLOAT_CMP\n <Quantity 32. deg_F>\n >>> temp_R = temp_C.to(u.imperial.deg_R, equivalencies=u.temperature())\n >>> temp_R # doctest: +FLOAT_CMP\n <Quantity 491.67 deg_R>\n\n.. note:: You can also use ``u.deg_C`` instead of ``u.Celsius``.\n\n.. EXAMPLE END\n\nMass-Energy Equivalency\n-----------------------\n\n.. EXAMPLE START: Using the Mass-Energy Equivalency\n\nIn a special relativity context it can be convenient to use the\n:func:`~astropy.units.equivalencies.mass_energy` equivalency. For instance::\n\n >>> (1 * u.g).to(u.eV, u.mass_energy()) # doctest: +FLOAT_CMP\n <Quantity 5.60958865e+32 eV>\n\n.. 
EXAMPLE END\n\nDoppler Redshift Equivalency\n----------------------------\n\nConversion between Doppler redshift and radial velocity can be done with the\n:func:`~astropy.units.equivalencies.doppler_redshift` equivalency.\n\nExample\n^^^^^^^\n\n.. EXAMPLE START: Converting Doppler redshift to radial velocity\n\nTo convert Doppler redshift (unitless) to ``km/s``::\n\n >>> z = 0.1 * u.dimensionless_unscaled\n >>> z.to(u.km / u.s, u.doppler_redshift()) # doctest: +FLOAT_CMP\n <Quantity 28487.0661448 km / s>\n\nHowever, it cannot take the cosmological redshift unit from `astropy.cosmology.units`\nbecause the latter should not be interpreted the same since the recessional\nvelocity from the expansion of space can exceed the speed of light; see\n`Hubble's law: Redshift velocity and recessional velocity <https://en.wikipedia.org/wiki/Hubble%27s_law#Redshift_velocity_and_recessional_velocity>`_\nfor more information.\n\n.. EXAMPLE END\n\nWriting New Equivalencies\n=========================\n\nAn equivalence list is a :class:`list` of tuples, where each :class:`tuple` has\nfour elements::\n\n (from_unit, to_unit, forward, backward)\n\n``from_unit`` and ``to_unit`` are the equivalent units. ``forward`` and\n``backward`` are functions that convert values between those units. ``forward``\nand ``backward`` are optional, and if omitted then the equivalency declares\nthat the two units should be taken as equivalent. The functions must take and\nreturn non-|Quantity| objects to avoid infinite recursion; See\n:ref:`complicated-equiv-example` for more details.\n\nExamples\n--------\n\n.. EXAMPLE START: Writing New Equivalencies\n\nUntil 1964, the metric liter was defined as the volume of 1kg of water at 4°C at\n760mm mercury pressure. Volumes and masses are not normally directly\nconvertible, but if we hold the constants in the 1964 definition of the liter as\ntrue, we could build an equivalency for them::\n\n >>> liters_water = [\n ... (u.l, u.g, lambda x: 1000.0 * x, lambda x: x / 1000.0)\n ... ]\n >>> u.l.to(u.kg, 1, equivalencies=liters_water)\n 1.0\n\nNote that the equivalency can be used with any other compatible unit::\n\n >>> imperial.gallon.to(imperial.pound, 1, equivalencies=liters_water) # doctest: +FLOAT_CMP\n 8.345404463333525\n\nAnd it also works in the other direction::\n\n >>> imperial.lb.to(imperial.pint, 1, equivalencies=liters_water) # doctest: +FLOAT_CMP\n 0.9586114172355459\n\n.. EXAMPLE END\n\n.. _complicated-equiv-example:\n\nA More Complex Example: Spectral Doppler Equivalencies\n------------------------------------------------------\n\n.. EXAMPLE START: Writing Spectral Doppler Equivalencies\n\nWe show how to define an equivalency using the radio convention for CO 1-0.\nThis function is already defined in\n:func:`~astropy.units.equivalencies.doppler_radio`, but this example is\nillustrative::\n\n >>> from astropy.constants import si\n >>> restfreq = 115.27120 # rest frequency of 12 CO 1-0 in GHz\n >>> freq_to_vel = [(u.GHz, u.km/u.s,\n ... lambda x: (restfreq-x) / restfreq * si.c.to_value('km/s'),\n ... lambda x: (1-x/si.c.to_value('km/s')) * restfreq )]\n >>> u.Hz.to(u.km / u.s, 116e9, equivalencies=freq_to_vel) # doctest: +FLOAT_CMP\n np.float64(-1895.4321928669262)\n >>> (116e9 * u.Hz).to(u.km / u.s, equivalencies=freq_to_vel) # doctest: +FLOAT_CMP\n <Quantity -1895.4321928669262 km / s>\n\n.. EXAMPLE END\n\nNote that once this is defined for GHz and km/s, it will work for all other\nunits of frequency and velocity. 
``x`` is converted from the input frequency\nunit (e.g., Hz) to GHz before being passed to ``lambda x:``. Similarly, the\nreturn value is assumed to be in units of ``km/s``, which is why the ``value``\nof ``c`` is used instead of the :class:`~astropy.constants.Constant`.\n\nDisplaying Available Equivalencies\n==================================\n\nThe :meth:`~astropy.units.core.UnitBase.find_equivalent_units` method also\nunderstands equivalencies.\n\nExample\n-------\n\n.. EXAMPLE START: Displaying Available Equivalencies\n\nWithout passing equivalencies, there are three compatible units for ``Hz`` in\nthe standard set::\n\n >>> u.Hz.find_equivalent_units()\n Primary name | Unit definition | Aliases\n [\n Bq | 1 / s | becquerel ,\n Ci | 3.7e+10 / s | curie ,\n Hz | 1 / s | Hertz, hertz ,\n ]\n\nHowever, when passing the spectral equivalency, you can see there are\nall kinds of things that ``Hz`` can be converted to::\n\n >>> u.Hz.find_equivalent_units(equivalencies=u.spectral())\n Primary name | Unit definition | Aliases\n [\n AU | 1.49598e+11 m | au, astronomical_unit ,\n Angstrom | 1e-10 m | AA, angstrom ,\n Bq | 1 / s | becquerel ,\n Ci | 3.7e+10 / s | curie ,\n Hz | 1 / s | Hertz, hertz ,\n J | m2 kg / s2 | Joule, joule ,\n Ry | 2.17987e-18 m2 kg / s2 | rydberg ,\n cm | 0.01 m | centimeter ,\n eV | 1.60218e-19 m2 kg / s2 | electronvolt ,\n earthRad | 6.3781e+06 m | R_earth, Rearth ,\n erg | 1e-07 m2 kg / s2 | ,\n foe | 1e+44 m2 kg / s2 | Bethe, bethe ,\n jupiterRad | 7.1492e+07 m | R_jup, Rjup, R_jupiter, Rjupiter ,\n k | 100 / m | Kayser, kayser ,\n lsec | 2.99792e+08 m | lightsecond ,\n lyr | 9.46073e+15 m | lightyear ,\n m | irreducible | meter ,\n micron | 1e-06 m | ,\n pc | 3.08568e+16 m | parsec ,\n solRad | 6.957e+08 m | R_sun, Rsun ,\n ]\n\n.. EXAMPLE END\n\n.. _equivalency-context:\n\nUsing Equivalencies in Larger Pieces of Code\n============================================\n\nSometimes you may have an involved calculation where you are regularly switching\nback and forth between equivalent units. For these cases, you can set\nequivalencies that will by default be used, in a way similar to how you can\n:ref:`enable other units <enabling-other-units>`.\n\nExamples\n--------\n\n.. EXAMPLE START: Using Equivalencies in Larger Pieces of Code\n\nTo enable radians to be treated as a dimensionless unit use\n:func:`~astropy.units.set_enabled_equivalencies` as a `context manager\n<https://docs.python.org/3/reference/datamodel.html#context-managers>`_::\n\n >>> with u.set_enabled_equivalencies(u.dimensionless_angles()):\n ... phase = 0.5 * u.cycle\n ... c = np.exp(1j*phase)\n >>> c # doctest: +FLOAT_CMP\n <Quantity -1.+1.2246468e-16j>\n\nTo permanently and globally enable radians to be treated as a dimensionless\nunit use :func:`~astropy.units.set_enabled_equivalencies` not as a context\nmanager:\n\n.. doctest-skip::\n\n >>> u.set_enabled_equivalencies(u.dimensionless_angles())\n <astropy.units.core._UnitContext object at ...>\n >>> u.deg.to('') # doctest: +FLOAT_CMP\n 0.017453292519943295\n\nThe disadvantage of the above approach is that you may forget to turn the\ndefault off (done by giving an empty argument).\n\n:func:`~astropy.units.set_enabled_equivalencies` accepts any list of\nequivalencies, so you could add, for example,\n:func:`~astropy.units.equivalencies.spectral` and\n:func:`~astropy.units.equivalencies.spectral_density` (since these return\nlists, they should indeed be combined by adding them together).\n\n.. EXAMPLE END\n"}
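The documentation excerpt above ends by noting that equivalencies such as ``u.spectral()`` and ``u.spectral_density()`` return list-like objects and can therefore be combined by simple addition before being passed to ``to()`` or ``set_enabled_equivalencies()``. A minimal sketch of that pattern (the specific wavelength and flux values are arbitrary):

    # Combine two built-in equivalency lists and use them together.
    import astropy.units as u

    combined = u.spectral() + u.spectral_density(3500 * u.AA)

    # Wavelength -> frequency comes from the spectral() part ...
    print((500 * u.nm).to(u.GHz, equivalencies=combined))
    # ... and the flux-density conversion from the spectral_density() part.
    print((1.5 * u.Jy).to(u.photon / u.cm**2 / u.s / u.Hz, equivalencies=combined))

    # The same combined list can be enabled for a whole block of code.
    with u.set_enabled_equivalencies(combined):
        print((500 * u.nm).to(u.eV))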
|
diff --git a/docs/changes/units/16516.feature.rst b/docs/changes/units/16516.feature.rst
new file mode 100644
index 000000000000..4fa419d8579b
--- /dev/null
+++ b/docs/changes/units/16516.feature.rst
@@ -0,0 +1,2 @@
+Add ``magnetic_flux_field`` equivalency to convert magnetic field between
+magnetic field strength (H) and magnetic flux density (B).
diff --git a/docs/units/equivalencies.rst b/docs/units/equivalencies.rst
index 6093d1c529e5..20647e927f01 100644
--- a/docs/units/equivalencies.rst
+++ b/docs/units/equivalencies.rst
@@ -529,6 +529,46 @@ for more information.
.. EXAMPLE END
+Magnetic Flux Density and Field Strength Equivalency
+----------------------------------------------------
+
+The :func:`~astropy.units.equivalencies.magnetic_flux_field` equivalency allows
+conversion between Magnetic Flux Density (B) and its equivalent Magnetic Field
+Strength (H), governed by the equation
+
+.. math::
+
+ \mathbf{B} = \mu_r \mu_0 \mathbf{H}.
+
+Where :math:`\mu_0` is the vacuum permeability and :math:`\mu_r` is the
+relative permeability of the medium. For a vacuum, :math:`\mu_r=1`.
+
+Example
+^^^^^^^
+
+.. EXAMPLE START: Magnetic Flux Density Magnetic Field Strength Equivalency
+
+To convert between Magnetic Flux Density (B) and its equivalent Magnetic Field
+Strength (H) in a vacuum.
+
+ >>> H = 1 * u.Oe
+ >>> H.to(u.G, u.magnetic_flux_field()) # doctest: +FLOAT_CMP
+ <Quantity 1. G>
+ >>> H.to(u.T, u.magnetic_flux_field()) # doctest: +FLOAT_CMP
+ <Quantity 0.0001 T>
+ >>> B = 1 * u.T
+ >>> B.to(u.A / u.m, equivalencies=u.magnetic_flux_field()) # doctest: +FLOAT_CMP
+ <Quantity 795774.71502628 A / m>
+
+Conversion in a medium with :math:`\mu_r=0.9`::
+
+ >>> H.to(u.G, u.magnetic_flux_field(mu_r=0.9)) # doctest: +FLOAT_CMP
+ <Quantity 0.9 G>
+ >>> B.to(u.A / u.m, equivalencies=u.magnetic_flux_field(mu_r=0.9)) # doctest: +FLOAT_CMP
+ <Quantity 884194.12780697 A / m>
+
+.. EXAMPLE END
+
Writing New Equivalencies
=========================
|
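The vacuum figures quoted in the doctests of the documentation diff above follow directly from the defining relation; this is a hand check using textbook values, not anything extracted from the patch:

    \[
      \mu_0 \approx 4\pi \times 10^{-7}\ \mathrm{T\,m\,A^{-1}},
      \qquad
      1\ \mathrm{Oe} = \frac{10^{3}}{4\pi}\ \mathrm{A\,m^{-1}} \approx 79.577\ \mathrm{A\,m^{-1}} .
    \]
    \[
      B = \mu_0 H
        = 4\pi \times 10^{-7} \cdot \frac{10^{3}}{4\pi}\ \mathrm{T}
        = 10^{-4}\ \mathrm{T} = 1\ \mathrm{G},
      \qquad
      H = \frac{1\ \mathrm{T}}{\mu_0} \approx 7.9577 \times 10^{5}\ \mathrm{A\,m^{-1}} .
    \]

This reproduces the ``1 Oe -> 1 G`` and ``1 T -> 795774.715 A/m`` doctest outputs; astropy uses the CODATA value of the vacuum permeability, hence the slight difference in the trailing digits.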
{"astropy/units/equivalencies.py": [{"type": "function", "name": "magnetic_flux_field", "lines": [879, 919], "signature": "def magnetic_flux_field(mu_r=1):", "doc": "Convert magnetic field between magnetic field strength :math:`(\\mathbf{H})` and\nmagnetic flux density :math:`(\\mathbf{B})` using the relationship:\n\n.. math::\n\n \\mathbf{B} = \\mu_r \\mu_0 \\mathbf{H}\n\nwhere:\n - :math:`\\mu_0` is the vacuum permeability, a physical constant.\n - :math:`\\mu_r` is the relative permeability of the medium, a dimensionless\n quantity.\n\nThe default setting (:math:`\\mu_r=1`) represents conditions in a vacuum.\n\nParameters\n----------\nmu_r : float, optional\n The relative magnetic permeability of the medium. This is a dimensionless quantity\n and has a default value of :math:`\\mu_r=1` which corresponds to free space (vacuum).\n\nExamples\n--------\n>>> import astropy.units as u\n>>> H = 1 * u.Oe\n>>> H.to(u.G, equivalencies=u.magnetic_flux_field()) # doctest: +FLOAT_CMP\n<Quantity 1. G>\n>>> H.to(u.G, equivalencies=u.magnetic_flux_field(mu_r=0.8)) # doctest: +FLOAT_CMP\n<Quantity 0.8 G>\n>>> B = 1 * u.T\n>>> B.to(u.A / u.m, equivalencies=u.magnetic_flux_field()) # doctest: +FLOAT_CMP\n<Quantity 795774.71502628 A / m>\n>>> B.to(u.A / u.m, equivalencies=u.magnetic_flux_field(mu_r=0.8)) # doctest: +FLOAT_CMP\n<Quantity 994718.39378285 A / m>"}]}
|
v5.3
|
["astropy/units/tests/test_equivalencies.py::test_magnetic_flux_field"]
|
["astropy/units/tests/test_equivalencies.py::test_find_equivalent_units", "astropy/units/tests/test_equivalencies.py::test_dimensionless_angles", "astropy/units/tests/test_equivalencies.py::test_logarithmic[log_unit0]", "astropy/units/tests/test_equivalencies.py::test_logarithmic[log_unit1]", "astropy/units/tests/test_equivalencies.py::test_logarithmic[log_unit2]", "astropy/units/tests/test_equivalencies.py::test_doppler_frequency_0[doppler_optical]", "astropy/units/tests/test_equivalencies.py::test_doppler_frequency_0[doppler_radio]", "astropy/units/tests/test_equivalencies.py::test_doppler_frequency_0[doppler_relativistic]", "astropy/units/tests/test_equivalencies.py::test_doppler_wavelength_0[doppler_optical]", "astropy/units/tests/test_equivalencies.py::test_doppler_wavelength_0[doppler_radio]", "astropy/units/tests/test_equivalencies.py::test_doppler_wavelength_0[doppler_relativistic]", "astropy/units/tests/test_equivalencies.py::test_doppler_energy_0[doppler_optical]", "astropy/units/tests/test_equivalencies.py::test_doppler_energy_0[doppler_radio]", "astropy/units/tests/test_equivalencies.py::test_doppler_energy_0[doppler_relativistic]", "astropy/units/tests/test_equivalencies.py::test_doppler_frequency_circle[doppler_optical]", "astropy/units/tests/test_equivalencies.py::test_doppler_frequency_circle[doppler_radio]", "astropy/units/tests/test_equivalencies.py::test_doppler_frequency_circle[doppler_relativistic]", "astropy/units/tests/test_equivalencies.py::test_doppler_wavelength_circle[doppler_optical]", "astropy/units/tests/test_equivalencies.py::test_doppler_wavelength_circle[doppler_radio]", "astropy/units/tests/test_equivalencies.py::test_doppler_wavelength_circle[doppler_relativistic]", "astropy/units/tests/test_equivalencies.py::test_doppler_energy_circle[doppler_optical]", "astropy/units/tests/test_equivalencies.py::test_doppler_energy_circle[doppler_radio]", "astropy/units/tests/test_equivalencies.py::test_doppler_energy_circle[doppler_relativistic]", "astropy/units/tests/test_equivalencies.py::test_30kms[doppler_optical-999.899940784289]", "astropy/units/tests/test_equivalencies.py::test_30kms[doppler_radio-999.8999307714406]", "astropy/units/tests/test_equivalencies.py::test_30kms[doppler_relativistic-999.8999357778647]", "astropy/units/tests/test_equivalencies.py::test_bad_restfreqs[doppler_optical-5]", "astropy/units/tests/test_equivalencies.py::test_bad_restfreqs[doppler_radio-value1]", "astropy/units/tests/test_equivalencies.py::test_bad_restfreqs[doppler_relativistic-None]", "astropy/units/tests/test_equivalencies.py::test_doppler_redshift[0-rv_ans0]", "astropy/units/tests/test_equivalencies.py::test_doppler_redshift[0.001-rv_ans1]", "astropy/units/tests/test_equivalencies.py::test_doppler_redshift[-1-rv_ans2]", "astropy/units/tests/test_equivalencies.py::test_doppler_redshift_no_cosmology", "astropy/units/tests/test_equivalencies.py::test_massenergy", "astropy/units/tests/test_equivalencies.py::test_is_equivalent", "astropy/units/tests/test_equivalencies.py::test_parallax", "astropy/units/tests/test_equivalencies.py::test_parallax2", "astropy/units/tests/test_equivalencies.py::test_spectral", "astropy/units/tests/test_equivalencies.py::test_spectral2", "astropy/units/tests/test_equivalencies.py::test_spectral3", "astropy/units/tests/test_equivalencies.py::test_spectral4[in_val0-in_unit0]", "astropy/units/tests/test_equivalencies.py::test_spectral4[in_val1-in_unit1]", "astropy/units/tests/test_equivalencies.py::test_spectral4[in_val2-in_unit2]", 
"astropy/units/tests/test_equivalencies.py::test_spectral4[in_val3-in_unit3]", "astropy/units/tests/test_equivalencies.py::test_spectraldensity2[wav0]", "astropy/units/tests/test_equivalencies.py::test_spectraldensity2[wav1]", "astropy/units/tests/test_equivalencies.py::test_spectraldensity2[wav2]", "astropy/units/tests/test_equivalencies.py::test_spectraldensity2[wav3]", "astropy/units/tests/test_equivalencies.py::test_spectraldensity3", "astropy/units/tests/test_equivalencies.py::test_spectraldensity4", "astropy/units/tests/test_equivalencies.py::test_spectraldensity5", "astropy/units/tests/test_equivalencies.py::test_spectraldensity6", "astropy/units/tests/test_equivalencies.py::test_spectraldensity_not_allowed[from_unit0-to_unit0]", "astropy/units/tests/test_equivalencies.py::test_spectraldensity_not_allowed[from_unit1-to_unit1]", "astropy/units/tests/test_equivalencies.py::test_spectraldensity_not_allowed[from_unit2-to_unit2]", "astropy/units/tests/test_equivalencies.py::test_spectraldensity_not_allowed[from_unit3-to_unit3]", "astropy/units/tests/test_equivalencies.py::test_equivalent_units", "astropy/units/tests/test_equivalencies.py::test_equivalent_units2", "astropy/units/tests/test_equivalencies.py::test_trivial_equivalency", "astropy/units/tests/test_equivalencies.py::test_invalid_equivalency", "astropy/units/tests/test_equivalencies.py::test_irrelevant_equivalency", "astropy/units/tests/test_equivalencies.py::test_brightness_temperature", "astropy/units/tests/test_equivalencies.py::test_surfacebrightness", "astropy/units/tests/test_equivalencies.py::test_beam", "astropy/units/tests/test_equivalencies.py::test_thermodynamic_temperature", "astropy/units/tests/test_equivalencies.py::test_equivalency_context", "astropy/units/tests/test_equivalencies.py::test_equivalency_context_manager", "astropy/units/tests/test_equivalencies.py::test_temperature", "astropy/units/tests/test_equivalencies.py::test_temperature_energy", "astropy/units/tests/test_equivalencies.py::test_molar_mass_amu", "astropy/units/tests/test_equivalencies.py::test_compose_equivalencies", "astropy/units/tests/test_equivalencies.py::test_pixel_scale", "astropy/units/tests/test_equivalencies.py::test_pixel_scale_invalid_scale_unit", "astropy/units/tests/test_equivalencies.py::test_pixel_scale_acceptable_scale_unit", "astropy/units/tests/test_equivalencies.py::test_plate_scale", "astropy/units/tests/test_equivalencies.py::test_equivelency", "astropy/units/tests/test_equivalencies.py::test_add_equivelencies", "astropy/units/tests/test_equivalencies.py::test_pprint", "astropy/units/tests/test_equivalencies.py::test_spectral_density_factor_deprecation"]
|
2d281019494aaebf522f6626c0dae37510c16688
|
{"first_commit_time": 1717080179.0, "pr_title": "Magnetic flux field equivalency", "pr_body": "<!-- These comments are hidden when you submit the pull request,\r\nso you do not need to remove them! -->\r\n\r\n<!-- Please be sure to check out our contributing guidelines,\r\nhttps://github.com/astropy/astropy/blob/main/CONTRIBUTING.md .\r\nPlease be sure to check out our code of conduct,\r\nhttps://github.com/astropy/astropy/blob/main/CODE_OF_CONDUCT.md . -->\r\n\r\n<!-- If you are new or need to be re-acquainted with Astropy\r\ncontributing workflow, please see\r\nhttp://docs.astropy.org/en/latest/development/workflow/development_workflow.html .\r\nThere is even a practical example at\r\nhttps://docs.astropy.org/en/latest/development/workflow/git_edit_workflow_examples.html#astropy-fix-example . -->\r\n\r\n<!-- Please just have a quick search on GitHub to see if a similar\r\npull request has already been posted.\r\nWe have old closed pull requests that might provide useful code or ideas\r\nthat directly tie in with your pull request. -->\r\n\r\n<!-- We have several automatic features that run when a pull request is open.\r\nThey can appear daunting but do not worry because maintainers will help\r\nyou navigate them, if necessary. -->\r\n\r\n### Description\r\n<!-- Provide a general description of what your pull request does.\r\nComplete the following sentence and add relevant details as you see fit. -->\r\n\r\n<!-- In addition please ensure that the pull request title is descriptive\r\nand allows maintainers to infer the applicable subpackage(s). -->\r\n\r\n<!-- READ THIS FOR MANUAL BACKPORT FROM A MAINTAINER:\r\nApply \"skip-basebranch-check\" label **before** you open the PR! -->\r\n\r\nAdd functionality to Astropy to convert between two very common measures of magnetic field: magnetic flux density and magnetic field strength. The relation between these two quantities is straight forward in a vacuum:\r\n\r\n```math\r\n\\mathbf{B} = \\mu_0 \\mathbf{H}\r\n```\r\n\r\nReferences: \r\nhttps://www.e-magnetica.pl/doku.php/confusion_between_b_and_h (Summary of equations)\r\n\r\n\r\n<!-- If the pull request closes any open issues you can add this.\r\nIf you replace <Issue Number> with a number, GitHub will automatically link it.\r\nIf this pull request is unrelated to any issues, please remove\r\nthe following line. -->\r\n\r\nFixes #<Issue Number>\r\n\r\n<!-- Optional opt-out -->\r\n\r\n- [ ] By checking this box, the PR author has requested that maintainers do **NOT** use the \"Squash and Merge\" button. Maintainers should respect this when possible; however, the final decision is at the discretion of the maintainer that merges the PR.\r\n", "pr_timeline": [{"time": 1717082764.0, "comment": "Thank you for your contribution to Astropy! \ud83c\udf0c This checklist is meant to remind the package maintainers who will review this pull request of some common things to look for.\n\n - [ ] Do the proposed changes actually accomplish desired goals?\n - [ ] Do the proposed changes follow the [Astropy coding guidelines](https://docs.astropy.org/en/latest/development/codeguide.html)?\n - [ ] Are tests added/updated as required? If so, do they follow the [Astropy testing guidelines](https://docs.astropy.org/en/latest/development/testguide.html)?\n - [ ] Are docs added/updated as required? If so, do they follow the [Astropy documentation guidelines](https://docs.astropy.org/en/latest/development/docguide.html#astropy-documentation-rules-and-guidelines)?\n - [ ] Is rebase and/or squash necessary? 
If so, please provide the author with appropriate instructions. Also see instructions for [rebase](https://docs.astropy.org/en/latest/development/workflow/development_workflow.html#rebase-if-necessary) and [squash](https://docs.astropy.org/en/latest/development/workflow/development_workflow.html#squash-if-necessary).\n - [ ] Did the CI pass? If no, are the failures related? If you need to run daily and weekly cron jobs as part of the PR, please apply the \"Extra CI\" label. Codestyle issues can be fixed by the [bot](https://docs.astropy.org/en/latest/development/workflow/development_workflow.html#pre-commit).\n - [ ] Is a change log needed? If yes, did the change log check pass? If no, add the \"no-changelog-entry-needed\" label. If this is a manual backport, use the \"skip-changelog-checks\" label unless special changelog handling is necessary.\n - [ ] Is this a big PR that makes a \"What's new?\" entry worthwhile and if so, is (1) a \"what's new\" entry included in this PR and (2) the \"whatsnew-needed\" label applied?\n - [ ] At the time of adding the milestone, if the milestone set requires a backport to release branch(es), apply the appropriate \"backport-X.Y.x\" label(s) *before* merge."}, {"time": 1717082962.0, "comment": "Thanks! Definitely needs a change log but I'll let units maintainers decide if what's new also needed."}, {"time": 1717083001.0, "comment": "Can you also please edit the PR description to provide your motivation for this PR? Thanks!"}, {"time": 1717083911.0, "comment": "Thanks for the PR. I guess it is the promised follow-up of #15962!\r\n\r\nBut like @pllim asked, it would still be useful to have a bit of a justification for adding the equivalency, especially as it is one that one can relatively easily do oneself, in a way that arguably is clearer for readers of the code (since they'll see an explicit division by `const.mu0`; though there are similar existing ones, like `temperature_energy`). Also, as I noted in #15962, it is not obvious this is something encountered frequently, and hence it is not obvious that it is worth the cost of making the list of equivalencies yet longer and therefore the likelihood that any equivalency gets found smaller...\r\n\r\nIs the main benefit that one does not have to worry about working in SI or cgs in the conversion?\r\n\r\nNote that if we pursue this, there definitely needs to be an entry in `docs/units/equivalencies.rst` explaining the use (this gets rendered in the documentation at https://docs.astropy.org/en/stable/units/equivalencies.html)"}, {"time": 1717085977.0, "comment": "@mhvk you are correct this is the follow up to #15962! I have added a short justification in the PR description detailing the conversion with a reference. Let me know if you want me to extend it.\r\n\r\nMore than happy to have the conversation if it is worth while to add this to the project. You are absolutely correct that this is a relatively straight forward conversion (simply a factor of $\\mu_0$). From experience, there is often confusion between these two quantities even though they are strictly defined. A benefit that you mentioned is the SI/CGS conversion but also I think it should reduce user error/speed. If users need to do the simple conversion manually each time then they either need to memorize the equation or look it up. While this is a relatively trivial task for experienced users, we have had people in the past get this wrong due to the confusion that non-experts often have between B and H. 
I understand it is difficult to weigh it up against having too many but this convenience may be similar to the reason you have the other \"simplistic\" equivalencies such as `mass_energy` and `temperature_energy`?\r\n\r\nAnother option for a more complete version of the conversion is to have the full definition in matter\r\n\r\n```math\r\n\\mathbf{B} = \\mu_r \\mu_0 \\mathbf{H}\r\n```\r\nwhere `\\mu_r` is the relative permeability. We could have this as an input parameter which take a default value of 1 (the vacuum condition). However, while having more functionality, this additional functionality would likely be used less than the default version. \r\n\r\nI'll add to the `docs/units/equivalencies.rst` file while we decide if this is something you would like to support or not."}, {"time": 1717086375.0, "comment": "So there are a few converters online which I know colleagues often use to do this:\r\n\r\n- https://maurermagnetic.com/en/demagnetizing/technology/convert-magnetic-units/\r\n- https://www.kjmagnetics.com/unit.converter.asp\r\n- https://www.bussi-demagnetizers.com/en/tecnologia-smagnetizzazione/italiano-convertitore-principali-unita-di-misura-del-magnetismo-e-del-magnetismo-residuo"}, {"time": 1717418419.0, "comment": "Thanks, I enabled auto-merged, so this should soon be in. Thanks, @samjrholt!"}, {"time": 1717419959.0, "comment": "@mhvk Actually auto-merge might never complete: circleci is being awaited but was skipped because the last commit had `[skip CI]` in its message. "}, {"time": 1717423628.0, "comment": "I squash-merged it. Thanks, all!"}, {"time": 1717425077.0, "comment": "Thanks everyone!"}], "issues": {}}
|
astropy/astropy
| 16677
|
https://github.com/astropy/astropy/pull/16677
|
astropy__astropy-16677
|
[]
|
f5126c765a6a8db8abc7504275d9c2e90ffbd526
|
diff --git a/astropy/modeling/core.py b/astropy/modeling/core.py
index b469f32cdf72..f0fd3619f2d4 100644
--- a/astropy/modeling/core.py
+++ b/astropy/modeling/core.py
@@ -747,6 +747,10 @@ def __init__(self, *args, meta=None, name=None, **kwargs):
self._initialize_slices()
self._initialize_unit_support()
+ # Initialize the cache for the constraints (used primarily when
+ # sync_constraints is False)
+ self._constraints_cache = {}
+
def _default_inputs_outputs(self):
if self.n_inputs == 1 and self.n_outputs == 1:
self._inputs = ("x",)
@@ -1293,14 +1297,30 @@ def sync_constraints(self, value):
raise ValueError("sync_constraints only accepts True or False as values")
self._sync_constraints = value
+ # We need to invalidate the cache whenever sync_constraints is changed.
+ # If we are setting sync_constraints to True, then this will ensure
+ # that we recompute the properties next time they are called, and if
+ # setting to False, it will allow us to make sure the cache is up-to-date
+ # below before disabling syncing.
+ self._constraints_cache.clear()
+
+ # If setting to False, cache all the values with the present state
+ # to make sure we don't ever update the cache once the syncing is
+ # disabled. Note that these will automatically then cause 'fixed',
+ # 'bounds' and 'tied' to be called.
+ if not value:
+ _ = self.has_fixed
+ _ = self.has_bounds
+ _ = self.has_tied
+
@property
def fixed(self):
"""
A ``dict`` mapping parameter names to their fixed constraint.
"""
- if not hasattr(self, "_fixed") or self.sync_constraints:
- self._fixed = _ConstraintsDict(self, "fixed")
- return self._fixed
+ if "fixed" not in self._constraints_cache or self.sync_constraints:
+ self._constraints_cache["fixed"] = _ConstraintsDict(self, "fixed")
+ return self._constraints_cache["fixed"]
@property
def bounds(self):
@@ -1308,18 +1328,47 @@ def bounds(self):
A ``dict`` mapping parameter names to their upper and lower bounds as
``(min, max)`` tuples or ``[min, max]`` lists.
"""
- if not hasattr(self, "_bounds") or self.sync_constraints:
- self._bounds = _ConstraintsDict(self, "bounds")
- return self._bounds
+ if "bounds" not in self._constraints_cache or self.sync_constraints:
+ self._constraints_cache["bounds"] = _ConstraintsDict(self, "bounds")
+ return self._constraints_cache["bounds"]
@property
def tied(self):
"""
A ``dict`` mapping parameter names to their tied constraint.
"""
- if not hasattr(self, "_tied") or self.sync_constraints:
- self._tied = _ConstraintsDict(self, "tied")
- return self._tied
+ if "tied" not in self._constraints_cache or self.sync_constraints:
+ self._constraints_cache["tied"] = _ConstraintsDict(self, "tied")
+ return self._constraints_cache["tied"]
+
+ @property
+ def has_fixed(self):
+ """
+ Whether the model has any fixed constraints.
+ """
+ if "has_fixed" not in self._constraints_cache or self.sync_constraints:
+ self._constraints_cache["has_fixed"] = any(self.fixed.values())
+ return self._constraints_cache["has_fixed"]
+
+ @property
+ def has_bounds(self):
+ """
+ Whether the model has any bounds constraints.
+ """
+ if "has_bounds" not in self._constraints_cache or self.sync_constraints:
+ self._constraints_cache["has_bounds"] = any(
+ b != (None, None) for b in self.bounds.values()
+ )
+ return self._constraints_cache["has_bounds"]
+
+ @property
+ def has_tied(self):
+ """
+ Whether the model has any tied constraints.
+ """
+ if "has_tied" not in self._constraints_cache or self.sync_constraints:
+ self._constraints_cache["has_tied"] = any(self.tied.values())
+ return self._constraints_cache["has_tied"]
@property
def eqcons(self):
@@ -3171,6 +3220,10 @@ def __init__(self, op, left, right, name=None):
self.n_left_params = len(self.left.parameters)
self._map_parameters()
+ # Initialize the cache for the constraints (used primarily when
+ # sync_constraints is False)
+ self._constraints_cache = {}
+
def _get_left_inputs_from_args(self, args):
return args[: self.left.n_inputs]
diff --git a/astropy/modeling/fitting.py b/astropy/modeling/fitting.py
index 84d8866d0260..a0c6c34b45a6 100644
--- a/astropy/modeling/fitting.py
+++ b/astropy/modeling/fitting.py
@@ -1176,7 +1176,7 @@ def _wrap_deriv(params, model, weights, x, y, z=None):
if weights is None:
weights = 1.0
- if any(model.fixed.values()) or any(model.tied.values()):
+ if model.has_fixed or model.has_tied:
# update the parameters with the current values from the fitter
fitter_to_model_params(model, params)
if z is None:
@@ -2002,9 +2002,9 @@ def fitter_to_model_params(model, fps, use_min_max_bounds=True):
"""
_, fit_param_indices, _ = model_to_fit_params(model)
- has_tied = any(model.tied.values())
- has_fixed = any(model.fixed.values())
- has_bound = any(b != (None, None) for b in model.bounds.values())
+ has_tied = model.has_tied
+ has_fixed = model.has_fixed
+ has_bound = model.has_bounds
parameters = model.parameters
if not (has_tied or has_fixed or has_bound):
@@ -2069,7 +2069,7 @@ def model_to_fit_params(model):
fitparam_indices = list(range(len(model.param_names)))
model_params = model.parameters
model_bounds = list(model.bounds.values())
- if any(model.fixed.values()) or any(model.tied.values()):
+ if model.has_fixed or model.has_tied:
params = list(model_params)
param_metrics = model._param_metrics
for idx, name in list(enumerate(model.param_names))[::-1]:
@@ -2100,16 +2100,13 @@ def _validate_constraints(supported_constraints, model):
"""Make sure model constraints are supported by the current fitter."""
message = "Optimizer cannot handle {0} constraints."
- if any(model.fixed.values()) and "fixed" not in supported_constraints:
+ if model.has_fixed and "fixed" not in supported_constraints:
raise UnsupportedConstraintError(message.format("fixed parameter"))
- if any(model.tied.values()) and "tied" not in supported_constraints:
+ if model.has_tied and "tied" not in supported_constraints:
raise UnsupportedConstraintError(message.format("tied parameter"))
- if (
- any(tuple(b) != (None, None) for b in model.bounds.values())
- and "bounds" not in supported_constraints
- ):
+ if model.has_bounds and "bounds" not in supported_constraints:
raise UnsupportedConstraintError(message.format("bound parameter"))
if model.eqcons and "eqcons" not in supported_constraints:
diff --git a/docs/changes/modeling/16677.feature.rst b/docs/changes/modeling/16677.feature.rst
new file mode 100644
index 000000000000..8c7d14e8555d
--- /dev/null
+++ b/docs/changes/modeling/16677.feature.rst
@@ -0,0 +1,3 @@
+Added ``Model.has_tied``, ``Model.has_fixed``, and ``Model.has_bounds`` attributes to make
+it easy to check whether models have various kinds of constraints set without having to
+inspect ``Model.tied``, ``Model.fixed``, and ``Model.bounds`` in detail.
|
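The patch above adds cached ``has_fixed``, ``has_tied`` and ``has_bounds`` properties and switches the checks in ``fitting.py`` from ``any(model.fixed.values())``-style scans to these properties, so that when ``sync_constraints`` is ``False`` the fitter hot path reads a small cache instead of rebuilding ``_ConstraintsDict`` views on every iteration. A short usage sketch, assuming a build of astropy that includes this patch (the expected values mirror the behaviour exercised by the tests in the next field):

    from astropy.modeling import models

    g = models.Gaussian1D()                       # stddev carries a default bound
    print(g.has_fixed, g.has_tied, g.has_bounds)  # False False True

    g.amplitude.fixed = True
    print(g.has_fixed)                            # True

    # With sync_constraints=False the has_* values come from a cache filled at
    # the moment syncing was disabled, so later constraint changes only show up
    # again once sync_constraints is switched back on.
    g.sync_constraints = False
    g.mean.tied = lambda model: model.amplitude
    print(g.has_tied)                             # False (served from the cache)
    g.sync_constraints = True
    print(g.has_tied)                             # True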
diff --git a/astropy/modeling/tests/test_core.py b/astropy/modeling/tests/test_core.py
index d1dbe011a61e..097a35cd60d2 100644
--- a/astropy/modeling/tests/test_core.py
+++ b/astropy/modeling/tests/test_core.py
@@ -1505,3 +1505,68 @@ def test_model_string_indexing():
assert compound["Model1"] == gauss
assert compound["Model2"] == airy
+
+
+def test_has_constraints():
+ model1 = models.Gaussian1D()
+
+ assert not model1.has_tied
+ assert not model1.has_fixed
+ assert model1.has_bounds
+
+ model1.amplitude.fixed = True
+
+ assert model1.has_fixed
+
+ model1.mean.tied = lambda model: model.amplitude
+
+ assert model1.has_tied
+
+ model2 = models.Linear1D()
+
+ assert not model2.has_tied
+ assert not model2.has_fixed
+ assert not model2.has_bounds
+
+ model2.slope.bounds = (1, 2)
+
+ assert model2.has_bounds
+
+
+def test_has_constraints_with_sync_constraints():
+ # Check that has_tied/has_fixed/has_bounds works when sync_constraints is used
+
+ model = models.Linear1D()
+
+ assert not model.has_tied
+ assert not model.has_fixed
+ assert not model.has_bounds
+
+ model.sync_constraints = False
+
+ model.slope.fixed = True
+ model.intercept.tied = lambda model: model.slope
+ model.intercept.bounds = (1, 2)
+
+ assert not model.has_tied
+ assert not model.has_fixed
+ assert not model.has_bounds
+
+ model.slope.fixed = False
+
+ model.sync_constraints = True
+
+ assert model.has_tied
+ assert not model.has_fixed
+ assert model.has_bounds
+
+ model.slope.fixed = True
+
+ # If we set sync_constraints to False, model.has_fixed should then still
+ # return the correct result because the above line was called before
+ # sync_constraints was set to False. Basically we need any change in
+ # sync_constraints to invalidate the cache.
+
+ model.sync_constraints = False
+
+ assert model.has_fixed
| 2024-07-04T20:45:35
|
{}
|
{"astropy/modeling/core.py": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\n\"\"\"\nThis module defines base classes for all models. The base class of all\nmodels is `~astropy.modeling.Model`. `~astropy.modeling.FittableModel` is\nthe base class for all fittable models. Fittable models can be linear or\nnonlinear in a regression analysis sense.\n\nAll models provide a `__call__` method which performs the transformation in\na purely mathematical way, i.e. the models are unitless. Model instances can\nrepresent either a single model, or a \"model set\" representing multiple copies\nof the same type of model, but with potentially different values of the\nparameters in each model making up the set.\n\"\"\"\n\n# pylint: disable=invalid-name, protected-access, redefined-outer-name\nimport abc\nimport copy\nimport functools\nimport inspect\nimport operator\nimport re\nimport warnings\nfrom collections import defaultdict, deque\nfrom inspect import signature\nfrom textwrap import indent\n\nimport numpy as np\n\nfrom astropy.nddata.utils import add_array, extract_array\nfrom astropy.table import Table\nfrom astropy.units import Quantity, UnitsError, dimensionless_unscaled\nfrom astropy.units.utils import quantity_asanyarray\nfrom astropy.utils import (\n find_current_module,\n isiterable,\n metadata,\n sharedmethod,\n)\nfrom astropy.utils.codegen import make_function_with_signature\nfrom astropy.utils.compat import COPY_IF_NEEDED\nfrom astropy.utils.exceptions import _add_note_to_exception\n\nfrom .bounding_box import CompoundBoundingBox, ModelBoundingBox\nfrom .parameters import InputParameterError, Parameter, _tofloat, param_repr_oneline\nfrom .utils import (\n _combine_equivalency_dict,\n _ConstraintsDict,\n _SpecialOperatorsDict,\n combine_labels,\n get_inputs_and_params,\n make_binary_operator_eval,\n)\n\n__all__ = [\n \"Model\",\n \"FittableModel\",\n \"Fittable1DModel\",\n \"Fittable2DModel\",\n \"CompoundModel\",\n \"fix_inputs\",\n \"custom_model\",\n \"ModelDefinitionError\",\n \"bind_bounding_box\",\n \"bind_compound_bounding_box\",\n]\n\n\ndef _model_oper(oper, **kwargs):\n \"\"\"\n Returns a function that evaluates a given Python arithmetic operator\n between two models. The operator should be given as a string, like ``'+'``\n or ``'**'``.\n \"\"\"\n return lambda left, right: CompoundModel(oper, left, right, **kwargs)\n\n\nclass ModelDefinitionError(TypeError):\n \"\"\"Used for incorrect models definitions.\"\"\"\n\n\nclass _ModelMeta(abc.ABCMeta):\n \"\"\"\n Metaclass for Model.\n\n Currently just handles auto-generating the param_names list based on\n Parameter descriptors declared at the class-level of Model subclasses.\n \"\"\"\n\n _is_dynamic = False\n \"\"\"\n This flag signifies whether this class was created in the \"normal\" way,\n with a class statement in the body of a module, as opposed to a call to\n `type` or some other metaclass constructor, such that the resulting class\n does not belong to a specific module. 
This is important for pickling of\n dynamic classes.\n\n This flag is always forced to False for new classes, so code that creates\n dynamic classes should manually set it to True on those classes when\n creating them.\n \"\"\"\n\n # Default empty dict for _parameters_, which will be empty on model\n # classes that don't have any Parameters\n\n def __new__(mcls, name, bases, members, **kwds):\n # See the docstring for _is_dynamic above\n if \"_is_dynamic\" not in members:\n members[\"_is_dynamic\"] = mcls._is_dynamic\n opermethods = [\n (\"__add__\", _model_oper(\"+\")),\n (\"__sub__\", _model_oper(\"-\")),\n (\"__mul__\", _model_oper(\"*\")),\n (\"__truediv__\", _model_oper(\"/\")),\n (\"__pow__\", _model_oper(\"**\")),\n (\"__or__\", _model_oper(\"|\")),\n (\"__and__\", _model_oper(\"&\")),\n (\"_fix_inputs\", _model_oper(\"fix_inputs\")),\n ]\n\n members[\"_parameters_\"] = {\n k: v for k, v in members.items() if isinstance(v, Parameter)\n }\n\n for opermethod, opercall in opermethods:\n members[opermethod] = opercall\n cls = super().__new__(mcls, name, bases, members, **kwds)\n\n param_names = list(members[\"_parameters_\"])\n\n # Need to walk each base MRO to collect all parameter names\n for base in bases:\n for tbase in base.__mro__:\n if issubclass(tbase, Model):\n # Preserve order of definitions\n param_names = list(tbase._parameters_) + param_names\n # Remove duplicates (arising from redefinition in subclass).\n param_names = list(dict.fromkeys(param_names))\n if cls._parameters_:\n if hasattr(cls, \"_param_names\"):\n # Slight kludge to support compound models, where\n # cls.param_names is a property; could be improved with a\n # little refactoring but fine for now\n cls._param_names = tuple(param_names)\n else:\n cls.param_names = tuple(param_names)\n\n return cls\n\n def __init__(cls, name, bases, members, **kwds):\n super().__init__(name, bases, members, **kwds)\n cls._create_inverse_property(members)\n cls._create_bounding_box_property(members)\n pdict = {}\n for base in bases:\n for tbase in base.__mro__:\n if issubclass(tbase, Model):\n for parname, val in cls._parameters_.items():\n pdict[parname] = val\n cls._handle_special_methods(members, pdict)\n\n def __repr__(cls):\n \"\"\"\n Custom repr for Model subclasses.\n \"\"\"\n return cls._format_cls_repr()\n\n def _repr_pretty_(cls, p, cycle):\n \"\"\"\n Repr for IPython's pretty printer.\n\n By default IPython \"pretty prints\" classes, so we need to implement\n this so that IPython displays the custom repr for Models.\n \"\"\"\n p.text(repr(cls))\n\n def __reduce__(cls):\n if not cls._is_dynamic:\n # Just return a string specifying where the class can be imported\n # from\n return cls.__name__\n members = dict(cls.__dict__)\n # Delete any ABC-related attributes--these will be restored when\n # the class is reconstructed:\n for key in list(members):\n if key.startswith(\"_abc_\"):\n del members[key]\n\n # Delete custom __init__ and __call__ if they exist:\n for key in (\"__init__\", \"__call__\"):\n if key in members:\n del members[key]\n\n return (type(cls), (cls.__name__, cls.__bases__, members))\n\n @property\n def name(cls):\n \"\"\"\n The name of this model class--equivalent to ``cls.__name__``.\n\n This attribute is provided for symmetry with the `Model.name` attribute\n of model instances.\n \"\"\"\n return cls.__name__\n\n @property\n def _is_concrete(cls):\n \"\"\"\n A class-level property that determines whether the class is a concrete\n implementation of a Model--i.e. 
it is not some abstract base class or\n internal implementation detail (i.e. begins with '_').\n \"\"\"\n return not (cls.__name__.startswith(\"_\") or inspect.isabstract(cls))\n\n def rename(cls, name=None, inputs=None, outputs=None):\n \"\"\"\n Creates a copy of this model class with a new name, inputs or outputs.\n\n The new class is technically a subclass of the original class, so that\n instance and type checks will still work. For example::\n\n >>> from astropy.modeling.models import Rotation2D\n >>> SkyRotation = Rotation2D.rename('SkyRotation')\n >>> SkyRotation\n <class 'astropy.modeling.core.SkyRotation'>\n Name: SkyRotation (Rotation2D)\n N_inputs: 2\n N_outputs: 2\n Fittable parameters: ('angle',)\n >>> issubclass(SkyRotation, Rotation2D)\n True\n >>> r = SkyRotation(90)\n >>> isinstance(r, Rotation2D)\n True\n \"\"\"\n mod = find_current_module(2)\n if mod:\n modname = mod.__name__\n else:\n modname = \"__main__\"\n\n if name is None:\n name = cls.name\n if inputs is None:\n inputs = cls.inputs\n else:\n if not isinstance(inputs, tuple):\n raise TypeError(\"Expected 'inputs' to be a tuple of strings.\")\n elif len(inputs) != len(cls.inputs):\n raise ValueError(f\"{cls.name} expects {len(cls.inputs)} inputs\")\n if outputs is None:\n outputs = cls.outputs\n else:\n if not isinstance(outputs, tuple):\n raise TypeError(\"Expected 'outputs' to be a tuple of strings.\")\n elif len(outputs) != len(cls.outputs):\n raise ValueError(f\"{cls.name} expects {len(cls.outputs)} outputs\")\n new_cls = type(name, (cls,), {\"inputs\": inputs, \"outputs\": outputs})\n new_cls.__module__ = modname\n new_cls.__qualname__ = name\n\n return new_cls\n\n def _create_inverse_property(cls, members):\n inverse = members.get(\"inverse\")\n if inverse is None or cls.__bases__[0] is object:\n # The latter clause is the prevent the below code from running on\n # the Model base class, which implements the default getter and\n # setter for .inverse\n return\n\n if isinstance(inverse, property):\n # We allow the @property decorator to be omitted entirely from\n # the class definition, though its use should be encouraged for\n # clarity\n inverse = inverse.fget\n\n # Store the inverse getter internally, then delete the given .inverse\n # attribute so that cls.inverse resolves to Model.inverse instead\n cls._inverse = inverse\n del cls.inverse\n\n def _create_bounding_box_property(cls, members):\n \"\"\"\n Takes any bounding_box defined on a concrete Model subclass (either\n as a fixed tuple or a property or method) and wraps it in the generic\n getter/setter interface for the bounding_box attribute.\n \"\"\"\n # TODO: Much of this is verbatim from _create_inverse_property--I feel\n # like there could be a way to generify properties that work this way,\n # but for the time being that would probably only confuse things more.\n bounding_box = members.get(\"bounding_box\")\n if bounding_box is None or cls.__bases__[0] is object:\n return\n\n if isinstance(bounding_box, property):\n bounding_box = bounding_box.fget\n\n if not callable(bounding_box):\n # See if it's a hard-coded bounding_box (as a sequence) and\n # normalize it\n try:\n bounding_box = ModelBoundingBox.validate(\n cls, bounding_box, _preserve_ignore=True\n )\n except ValueError as exc:\n raise ModelDefinitionError(exc.args[0])\n else:\n sig = signature(bounding_box)\n # May be a method that only takes 'self' as an argument (like a\n # property, but the @property decorator was forgotten)\n #\n # However, if the method takes additional arguments then 
this is a\n # parameterized bounding box and should be callable\n if len(sig.parameters) > 1:\n bounding_box = cls._create_bounding_box_subclass(bounding_box, sig)\n\n # See the Model.bounding_box getter definition for how this attribute\n # is used\n cls._bounding_box = bounding_box\n del cls.bounding_box\n\n def _create_bounding_box_subclass(cls, func, sig):\n \"\"\"\n For Models that take optional arguments for defining their bounding\n box, we create a subclass of ModelBoundingBox with a ``__call__`` method\n that supports those additional arguments.\n\n Takes the function's Signature as an argument since that is already\n computed in _create_bounding_box_property, so no need to duplicate that\n effort.\n \"\"\"\n # TODO: Might be convenient if calling the bounding box also\n # automatically sets the _user_bounding_box. So that\n #\n # >>> model.bounding_box(arg=1)\n #\n # in addition to returning the computed bbox, also sets it, so that\n # it's a shortcut for\n #\n # >>> model.bounding_box = model.bounding_box(arg=1)\n #\n # Not sure if that would be non-obvious / confusing though...\n\n def __call__(self, **kwargs):\n return func(self._model, **kwargs)\n\n kwargs = []\n for idx, param in enumerate(sig.parameters.values()):\n if idx == 0:\n # Presumed to be a 'self' argument\n continue\n\n if param.default is param.empty:\n raise ModelDefinitionError(\n f\"The bounding_box method for {cls.name} is not correctly \"\n \"defined: If defined as a method all arguments to that \"\n \"method (besides self) must be keyword arguments with \"\n \"default values that can be used to compute a default \"\n \"bounding box.\"\n )\n\n kwargs.append((param.name, param.default))\n\n __call__.__signature__ = sig\n\n return type(\n f\"{cls.name}ModelBoundingBox\", (ModelBoundingBox,), {\"__call__\": __call__}\n )\n\n def _handle_special_methods(cls, members, pdict):\n # Handle init creation from inputs\n def update_wrapper(wrapper, cls):\n # Set up the new __call__'s metadata attributes as though it were\n # manually defined in the class definition\n # A bit like functools.update_wrapper but uses the class instead of\n # the wrapped function\n wrapper.__module__ = cls.__module__\n wrapper.__doc__ = getattr(cls, wrapper.__name__).__doc__\n if hasattr(cls, \"__qualname__\"):\n wrapper.__qualname__ = f\"{cls.__qualname__}.{wrapper.__name__}\"\n\n if (\n \"__call__\" not in members\n and \"n_inputs\" in members\n and isinstance(members[\"n_inputs\"], int)\n and members[\"n_inputs\"] > 0\n ):\n # Don't create a custom __call__ for classes that already have one\n # explicitly defined (this includes the Model base class, and any\n # other classes that manually override __call__\n\n def __call__(self, *inputs, **kwargs):\n \"\"\"Evaluate this model on the supplied inputs.\"\"\"\n return super(cls, self).__call__(*inputs, **kwargs)\n\n # When called, models can take two optional keyword arguments:\n #\n # * model_set_axis, which indicates (for multi-dimensional input)\n # which axis is used to indicate different models\n #\n # * equivalencies, a dictionary of equivalencies to be applied to\n # the input values, where each key should correspond to one of\n # the inputs.\n #\n # The following code creates the __call__ function with these\n # two keyword arguments.\n\n args = (\"self\",)\n kwargs = {\n \"model_set_axis\": None,\n \"with_bounding_box\": False,\n \"fill_value\": np.nan,\n \"equivalencies\": None,\n \"inputs_map\": None,\n }\n\n new_call = make_function_with_signature(\n __call__, args, kwargs, 
varargs=\"inputs\", varkwargs=\"new_inputs\"\n )\n\n # The following makes it look like __call__\n # was defined in the class\n update_wrapper(new_call, cls)\n\n cls.__call__ = new_call\n\n if (\n \"__init__\" not in members\n and not inspect.isabstract(cls)\n and cls._parameters_\n ):\n # Build list of all parameters including inherited ones\n\n # If *all* the parameters have default values we can make them\n # keyword arguments; otherwise they must all be positional\n # arguments\n if all(p.default is not None for p in pdict.values()):\n args = (\"self\",)\n kwargs = []\n for param_name, param_val in pdict.items():\n default = param_val.default\n unit = param_val.unit\n # If the unit was specified in the parameter but the\n # default is not a Quantity, attach the unit to the\n # default.\n if unit is not None:\n default = Quantity(\n default, unit, copy=COPY_IF_NEEDED, subok=True\n )\n kwargs.append((param_name, default))\n else:\n args = (\"self\",) + tuple(pdict.keys())\n kwargs = {}\n\n def __init__(self, *params, **kwargs):\n return super(cls, self).__init__(*params, **kwargs)\n\n new_init = make_function_with_signature(\n __init__, args, kwargs, varkwargs=\"kwargs\"\n )\n update_wrapper(new_init, cls)\n cls.__init__ = new_init\n\n # *** Arithmetic operators for creating compound models ***\n __add__ = _model_oper(\"+\")\n __sub__ = _model_oper(\"-\")\n __mul__ = _model_oper(\"*\")\n __truediv__ = _model_oper(\"/\")\n __pow__ = _model_oper(\"**\")\n __or__ = _model_oper(\"|\")\n __and__ = _model_oper(\"&\")\n _fix_inputs = _model_oper(\"fix_inputs\")\n\n # *** Other utilities ***\n\n def _format_cls_repr(cls, keywords=[]):\n \"\"\"\n Internal implementation of ``__repr__``.\n\n This is separated out for ease of use by subclasses that wish to\n override the default ``__repr__`` while keeping the same basic\n formatting.\n \"\"\"\n # For the sake of familiarity start the output with the standard class\n # __repr__\n parts = [super().__repr__()]\n\n if not cls._is_concrete:\n return parts[0]\n\n def format_inheritance(cls):\n bases = []\n for base in cls.mro()[1:]:\n if not issubclass(base, Model):\n continue\n elif inspect.isabstract(base) or base.__name__.startswith(\"_\"):\n break\n bases.append(base.name)\n if bases:\n return f\"{cls.name} ({' -> '.join(bases)})\"\n return cls.name\n\n try:\n default_keywords = [\n (\"Name\", format_inheritance(cls)),\n (\"N_inputs\", cls.n_inputs),\n (\"N_outputs\", cls.n_outputs),\n ]\n\n if cls.param_names:\n default_keywords.append((\"Fittable parameters\", cls.param_names))\n\n for keyword, value in default_keywords + keywords:\n if value is not None:\n parts.append(f\"{keyword}: {value}\")\n\n return \"\\n\".join(parts)\n except Exception:\n # If any of the above formatting fails fall back on the basic repr\n # (this is particularly useful in debugging)\n return parts[0]\n\n\nclass Model(metaclass=_ModelMeta):\n \"\"\"\n Base class for all models.\n\n This is an abstract class and should not be instantiated directly.\n\n The following initialization arguments apply to the majority of Model\n subclasses by default (exceptions include specialized utility models\n like `~astropy.modeling.mappings.Mapping`). 
Parametric models take all\n their parameters as arguments, followed by any of the following optional\n keyword arguments:\n\n Parameters\n ----------\n name : str, optional\n A human-friendly name associated with this model instance\n (particularly useful for identifying the individual components of a\n compound model).\n\n meta : dict, optional\n An optional dict of user-defined metadata to attach to this model.\n How this is used and interpreted is up to the user or individual use\n case.\n\n n_models : int, optional\n If given an integer greater than 1, a *model set* is instantiated\n instead of a single model. This affects how the parameter arguments\n are interpreted. In this case each parameter must be given as a list\n or array--elements of this array are taken along the first axis (or\n ``model_set_axis`` if specified), such that the Nth element is the\n value of that parameter for the Nth model in the set.\n\n See the section on model sets in the documentation for more details.\n\n model_set_axis : int, optional\n This argument only applies when creating a model set (i.e. ``n_models >\n 1``). It changes how parameter values are interpreted. Normally the\n first axis of each input parameter array (properly the 0th axis) is\n taken as the axis corresponding to the model sets. However, any axis\n of an input array may be taken as this \"model set axis\". This accepts\n negative integers as well--for example use ``model_set_axis=-1`` if the\n last (most rapidly changing) axis should be associated with the model\n sets. Also, ``model_set_axis=False`` can be used to tell that a given\n input should be used to evaluate all the models in the model set.\n\n fixed : dict, optional\n Dictionary ``{parameter_name: bool}`` setting the fixed constraint\n for one or more parameters. `True` means the parameter is held fixed\n during fitting and is prevented from updates once an instance of the\n model has been created.\n\n Alternatively the `~astropy.modeling.Parameter.fixed` property of a\n parameter may be used to lock or unlock individual parameters.\n\n tied : dict, optional\n Dictionary ``{parameter_name: callable}`` of parameters which are\n linked to some other parameter. The dictionary values are callables\n providing the linking relationship.\n\n Alternatively the `~astropy.modeling.Parameter.tied` property of a\n parameter may be used to set the ``tied`` constraint on individual\n parameters.\n\n bounds : dict, optional\n A dictionary ``{parameter_name: value}`` of lower and upper bounds of\n parameters. Keys are parameter names. Values are a list or a tuple\n of length 2 giving the desired range for the parameter.\n\n Alternatively the `~astropy.modeling.Parameter.min` and\n `~astropy.modeling.Parameter.max` or\n ~astropy.modeling.Parameter.bounds` properties of a parameter may be\n used to set bounds on individual parameters.\n\n eqcons : list, optional\n List of functions of length n such that ``eqcons[j](x0, *args) == 0.0``\n in a successfully optimized problem.\n\n ineqcons : list, optional\n List of functions of length n such that ``ieqcons[j](x0, *args) >=\n 0.0`` is a successfully optimized problem.\n\n Examples\n --------\n >>> from astropy.modeling import models\n >>> def tie_center(model):\n ... mean = 50 * model.stddev\n ... return mean\n >>> tied_parameters = {'mean': tie_center}\n\n Specify that ``'mean'`` is a tied parameter in one of two ways:\n\n >>> g1 = models.Gaussian1D(amplitude=10, mean=5, stddev=.3,\n ... 
tied=tied_parameters)\n\n or\n\n >>> g1 = models.Gaussian1D(amplitude=10, mean=5, stddev=.3)\n >>> g1.mean.tied\n False\n >>> g1.mean.tied = tie_center\n >>> g1.mean.tied\n <function tie_center at 0x...>\n\n Fixed parameters:\n\n >>> g1 = models.Gaussian1D(amplitude=10, mean=5, stddev=.3,\n ... fixed={'stddev': True})\n >>> g1.stddev.fixed\n True\n\n or\n\n >>> g1 = models.Gaussian1D(amplitude=10, mean=5, stddev=.3)\n >>> g1.stddev.fixed\n False\n >>> g1.stddev.fixed = True\n >>> g1.stddev.fixed\n True\n \"\"\"\n\n parameter_constraints = Parameter.constraints\n \"\"\"\n Primarily for informational purposes, these are the types of constraints\n that can be set on a model's parameters.\n \"\"\"\n\n model_constraints = (\"eqcons\", \"ineqcons\")\n \"\"\"\n Primarily for informational purposes, these are the types of constraints\n that constrain model evaluation.\n \"\"\"\n\n param_names = ()\n \"\"\"\n Names of the parameters that describe models of this type.\n\n The parameters in this tuple are in the same order they should be passed in\n when initializing a model of a specific type. Some types of models, such\n as polynomial models, have a different number of parameters depending on\n some other property of the model, such as the degree.\n\n When defining a custom model class the value of this attribute is\n automatically set by the `~astropy.modeling.Parameter` attributes defined\n in the class body.\n \"\"\"\n\n n_inputs = 0\n \"\"\"The number of inputs.\"\"\"\n n_outputs = 0\n \"\"\" The number of outputs.\"\"\"\n\n standard_broadcasting = True\n fittable = False\n linear = True\n _separable = None\n \"\"\" A boolean flag to indicate whether a model is separable.\"\"\"\n meta = metadata.MetaData()\n \"\"\"A dict-like object to store optional information.\"\"\"\n\n # By default models either use their own inverse property or have no\n # inverse at all, but users may also assign a custom inverse to a model,\n # optionally; in that case it is of course up to the user to determine\n # whether their inverse is *actually* an inverse to the model they assign\n # it to.\n _inverse = None\n _user_inverse = None\n\n _bounding_box = None\n _user_bounding_box = None\n\n _has_inverse_bounding_box = False\n\n # Default n_models attribute, so that __len__ is still defined even when a\n # model hasn't completed initialization yet\n _n_models = 1\n\n # New classes can set this as a boolean value.\n # It is converted to a dictionary mapping input name to a boolean value.\n _input_units_strict = False\n\n # Allow dimensionless input (and corresponding output). If this is True,\n # input values to evaluate will gain the units specified in input_units. If\n # this is a dictionary then it should map input name to a bool to allow\n # dimensionless numbers for that input.\n # Only has an effect if input_units is defined.\n _input_units_allow_dimensionless = False\n\n # Default equivalencies to apply to input values. If set, this should be a\n # dictionary where each key is a string that corresponds to one of the\n # model inputs. 
Only has an effect if input_units is defined.\n input_units_equivalencies = None\n\n # Covariance matrix can be set by fitter if available.\n # If cov_matrix is available, then std will set as well\n _cov_matrix = None\n _stds = None\n\n def __init_subclass__(cls, **kwargs):\n super().__init_subclass__()\n\n def __init__(self, *args, meta=None, name=None, **kwargs):\n super().__init__()\n self._default_inputs_outputs()\n if meta is not None:\n self.meta = meta\n self._name = name\n # add parameters to instance level by walking MRO list\n mro = self.__class__.__mro__\n for cls in mro:\n if issubclass(cls, Model):\n for parname, val in cls._parameters_.items():\n newpar = copy.deepcopy(val)\n newpar.model = self\n if parname not in self.__dict__:\n self.__dict__[parname] = newpar\n\n self._initialize_constraints(kwargs)\n kwargs = self._initialize_setters(kwargs)\n # Remaining keyword args are either parameter values or invalid\n # Parameter values must be passed in as keyword arguments in order to\n # distinguish them\n self._initialize_parameters(args, kwargs)\n self._initialize_slices()\n self._initialize_unit_support()\n\n def _default_inputs_outputs(self):\n if self.n_inputs == 1 and self.n_outputs == 1:\n self._inputs = (\"x\",)\n self._outputs = (\"y\",)\n elif self.n_inputs == 2 and self.n_outputs == 1:\n self._inputs = (\"x\", \"y\")\n self._outputs = (\"z\",)\n else:\n try:\n self._inputs = tuple(\"x\" + str(idx) for idx in range(self.n_inputs))\n self._outputs = tuple(\"x\" + str(idx) for idx in range(self.n_outputs))\n except TypeError:\n # self.n_inputs and self.n_outputs are properties\n # This is the case when subclasses of Model do not define\n # ``n_inputs``, ``n_outputs``, ``inputs`` or ``outputs``.\n self._inputs = ()\n self._outputs = ()\n\n def _initialize_setters(self, kwargs):\n \"\"\"\n This exists to inject defaults for settable properties for models\n originating from `custom_model`.\n \"\"\"\n if hasattr(self, \"_settable_properties\"):\n setters = {\n name: kwargs.pop(name, default)\n for name, default in self._settable_properties.items()\n }\n for name, value in setters.items():\n setattr(self, name, value)\n\n return kwargs\n\n @property\n def inputs(self):\n return self._inputs\n\n @inputs.setter\n def inputs(self, val):\n if len(val) != self.n_inputs:\n raise ValueError(\n f\"Expected {self.n_inputs} number of inputs, got {len(val)}.\"\n )\n self._inputs = val\n self._initialize_unit_support()\n\n @property\n def outputs(self):\n return self._outputs\n\n @outputs.setter\n def outputs(self, val):\n if len(val) != self.n_outputs:\n raise ValueError(\n f\"Expected {self.n_outputs} number of outputs, got {len(val)}.\"\n )\n self._outputs = val\n\n @property\n def n_inputs(self):\n # TODO: remove the code in the ``if`` block when support\n # for models with ``inputs`` as class variables is removed.\n if hasattr(self.__class__, \"n_inputs\") and isinstance(\n self.__class__.n_inputs, property\n ):\n try:\n return len(self.__class__.inputs)\n except TypeError:\n try:\n return len(self.inputs)\n except AttributeError:\n return 0\n\n return self.__class__.n_inputs\n\n @property\n def n_outputs(self):\n # TODO: remove the code in the ``if`` block when support\n # for models with ``outputs`` as class variables is removed.\n if hasattr(self.__class__, \"n_outputs\") and isinstance(\n self.__class__.n_outputs, property\n ):\n try:\n return len(self.__class__.outputs)\n except TypeError:\n try:\n return len(self.outputs)\n except AttributeError:\n return 0\n\n return 
self.__class__.n_outputs\n\n def _calculate_separability_matrix(self):\n \"\"\"\n This is a hook which customises the behavior of modeling.separable.\n\n This allows complex subclasses to customise the separability matrix.\n If it returns `NotImplemented` the default behavior is used.\n \"\"\"\n return NotImplemented\n\n def _initialize_unit_support(self):\n \"\"\"\n Convert self._input_units_strict and\n self.input_units_allow_dimensionless to dictionaries\n mapping input name to a boolean value.\n \"\"\"\n if isinstance(self._input_units_strict, bool):\n self._input_units_strict = {\n key: self._input_units_strict for key in self.inputs\n }\n\n if isinstance(self._input_units_allow_dimensionless, bool):\n self._input_units_allow_dimensionless = {\n key: self._input_units_allow_dimensionless for key in self.inputs\n }\n\n @property\n def input_units_strict(self):\n \"\"\"\n Enforce strict units on inputs to evaluate. If this is set to True,\n input values to evaluate will be in the exact units specified by\n input_units. If the input quantities are convertible to input_units,\n they are converted. If this is a dictionary then it should map input\n name to a bool to set strict input units for that parameter.\n \"\"\"\n val = self._input_units_strict\n if isinstance(val, bool):\n return {key: val for key in self.inputs}\n return dict(zip(self.inputs, val.values()))\n\n @property\n def input_units_allow_dimensionless(self):\n \"\"\"\n Allow dimensionless input (and corresponding output). If this is True,\n input values to evaluate will gain the units specified in input_units. If\n this is a dictionary then it should map input name to a bool to allow\n dimensionless numbers for that input.\n Only has an effect if input_units is defined.\n \"\"\"\n val = self._input_units_allow_dimensionless\n if isinstance(val, bool):\n return {key: val for key in self.inputs}\n return dict(zip(self.inputs, val.values()))\n\n @property\n def uses_quantity(self):\n \"\"\"\n True if this model has been created with `~astropy.units.Quantity`\n objects or if there are no parameters.\n\n This can be used to determine if this model should be evaluated with\n `~astropy.units.Quantity` or regular floats.\n \"\"\"\n pisq = [isinstance(p, Quantity) for p in self._param_sets(units=True)]\n return (len(pisq) == 0) or any(pisq)\n\n def __repr__(self):\n return self._format_repr()\n\n def __str__(self):\n return self._format_str()\n\n def __len__(self):\n return self._n_models\n\n @staticmethod\n def _strip_ones(intup):\n return tuple(item for item in intup if item != 1)\n\n def __setattr__(self, attr, value):\n if isinstance(self, CompoundModel):\n param_names = self._param_names\n param_names = self.param_names\n\n if param_names is not None and attr in self.param_names:\n param = self.__dict__[attr]\n value = _tofloat(value)\n if param._validator is not None:\n param._validator(self, value)\n # check consistency with previous shape and size\n eshape = self._param_metrics[attr][\"shape\"]\n if eshape == ():\n eshape = (1,)\n vshape = np.array(value).shape\n if vshape == ():\n vshape = (1,)\n esize = self._param_metrics[attr][\"size\"]\n if np.size(value) != esize or self._strip_ones(vshape) != self._strip_ones(\n eshape\n ):\n raise InputParameterError(\n f\"Value for parameter {attr} does not match shape or size\\nexpected\"\n f\" by model ({vshape}, {np.size(value)}) vs ({eshape}, {esize})\"\n )\n if param.unit is None:\n if isinstance(value, Quantity):\n param._unit = value.unit\n param.value = value.value\n else:\n 
param.value = value\n else:\n if not isinstance(value, Quantity):\n raise UnitsError(\n f\"The '{param.name}' parameter should be given as a\"\n \" Quantity because it was originally \"\n \"initialized as a Quantity\"\n )\n param._unit = value.unit\n param.value = value.value\n else:\n if attr in [\"fittable\", \"linear\"]:\n self.__dict__[attr] = value\n else:\n super().__setattr__(attr, value)\n\n def _pre_evaluate(self, *args, **kwargs):\n \"\"\"\n Model specific input setup that needs to occur prior to model evaluation.\n \"\"\"\n # Broadcast inputs into common size\n inputs, broadcasted_shapes = self.prepare_inputs(*args, **kwargs)\n\n # Setup actual model evaluation method\n parameters = self._param_sets(raw=True, units=True)\n\n def evaluate(_inputs):\n return self.evaluate(*_inputs, *parameters)\n\n return evaluate, inputs, broadcasted_shapes, kwargs\n\n def get_bounding_box(self, with_bbox=True):\n \"\"\"\n Return the ``bounding_box`` of a model if it exists or ``None``\n otherwise.\n\n Parameters\n ----------\n with_bbox :\n The value of the ``with_bounding_box`` keyword argument\n when calling the model. Default is `True` for usage when\n looking up the model's ``bounding_box`` without risk of error.\n \"\"\"\n bbox = None\n\n if not isinstance(with_bbox, bool) or with_bbox:\n try:\n bbox = self.bounding_box\n except NotImplementedError:\n pass\n\n if isinstance(bbox, CompoundBoundingBox) and not isinstance(\n with_bbox, bool\n ):\n bbox = bbox[with_bbox]\n\n return bbox\n\n @property\n def _argnames(self):\n \"\"\"The inputs used to determine input_shape for bounding_box evaluation.\"\"\"\n return self.inputs\n\n def _validate_input_shape(\n self, _input, idx, argnames, model_set_axis, check_model_set_axis\n ):\n \"\"\"Perform basic validation of a single model input's shape.\n\n The shape has the minimum dimensions for the given model_set_axis.\n\n Returns the shape of the input if validation succeeds.\n \"\"\"\n input_shape = np.shape(_input)\n # Ensure that the input's model_set_axis matches the model's\n # n_models\n if input_shape and check_model_set_axis:\n # Note: Scalar inputs *only* get a pass on this\n if len(input_shape) < model_set_axis + 1:\n raise ValueError(\n f\"For model_set_axis={model_set_axis}, all inputs must be at \"\n f\"least {model_set_axis + 1}-dimensional.\"\n )\n if input_shape[model_set_axis] != self._n_models:\n try:\n argname = argnames[idx]\n except IndexError:\n # the case of model.inputs = ()\n argname = str(idx)\n\n raise ValueError(\n f\"Input argument '{argname}' does not have the correct dimensions\"\n f\" in model_set_axis={model_set_axis} for a model set with\"\n f\" n_models={self._n_models}.\"\n )\n\n return input_shape\n\n def _validate_input_shapes(self, inputs, argnames, model_set_axis):\n \"\"\"\n Perform basic validation of model inputs\n --that they are mutually broadcastable and that they have\n the minimum dimensions for the given model_set_axis.\n\n If validation succeeds, returns the total shape that will result from\n broadcasting the input arrays with each other.\n \"\"\"\n check_model_set_axis = self._n_models > 1 and model_set_axis is not False\n\n all_shapes = []\n for idx, _input in enumerate(inputs):\n all_shapes.append(\n self._validate_input_shape(\n _input, idx, argnames, model_set_axis, check_model_set_axis\n )\n )\n\n try:\n input_shape = np.broadcast_shapes(*all_shapes)\n except ValueError as exc:\n _add_note_to_exception(\n exc, \"All inputs must have identical shapes or must be scalars.\"\n )\n raise exc\n\n 
return input_shape\n\n def input_shape(self, inputs):\n \"\"\"Get input shape for bounding_box evaluation.\"\"\"\n return self._validate_input_shapes(inputs, self._argnames, self.model_set_axis)\n\n def _generic_evaluate(self, evaluate, _inputs, fill_value, with_bbox):\n \"\"\"Generic model evaluation routine.\n\n Selects and evaluates model with or without bounding_box enforcement.\n \"\"\"\n # Evaluate the model using the prepared evaluation method either\n # enforcing the bounding_box or not.\n bbox = self.get_bounding_box(with_bbox)\n if (not isinstance(with_bbox, bool) or with_bbox) and bbox is not None:\n outputs = bbox.evaluate(evaluate, _inputs, fill_value)\n else:\n outputs = evaluate(_inputs)\n return outputs\n\n def _post_evaluate(self, inputs, outputs, broadcasted_shapes, with_bbox, **kwargs):\n \"\"\"\n Model specific post evaluation processing of outputs.\n \"\"\"\n if self.get_bounding_box(with_bbox) is None and self.n_outputs == 1:\n outputs = (outputs,)\n\n outputs = self.prepare_outputs(broadcasted_shapes, *outputs, **kwargs)\n outputs = self._process_output_units(inputs, outputs)\n\n if self.n_outputs == 1:\n return outputs[0]\n return outputs\n\n @property\n def bbox_with_units(self):\n return not isinstance(self, CompoundModel)\n\n def __call__(self, *args, **kwargs):\n \"\"\"\n Evaluate this model using the given input(s) and the parameter values\n that were specified when the model was instantiated.\n \"\"\"\n # Turn any keyword arguments into positional arguments.\n args, kwargs = self._get_renamed_inputs_as_positional(*args, **kwargs)\n\n # Read model evaluation related parameters\n with_bbox = kwargs.pop(\"with_bounding_box\", False)\n fill_value = kwargs.pop(\"fill_value\", np.nan)\n\n # prepare for model evaluation (overridden in CompoundModel)\n evaluate, inputs, broadcasted_shapes, kwargs = self._pre_evaluate(\n *args, **kwargs\n )\n\n outputs = self._generic_evaluate(evaluate, inputs, fill_value, with_bbox)\n\n # post-process evaluation results (overridden in CompoundModel)\n return self._post_evaluate(\n inputs, outputs, broadcasted_shapes, with_bbox, **kwargs\n )\n\n def _get_renamed_inputs_as_positional(self, *args, **kwargs):\n def _keyword2positional(kwargs):\n # Inputs were passed as keyword (not positional) arguments.\n # Because the signature of the ``__call__`` is defined at\n # the class level, the name of the inputs cannot be changed at\n # the instance level and the old names are always present in the\n # signature of the method. 
In order to use the new names of the\n # inputs, the old names are taken out of ``kwargs``, the input\n # values are sorted in the order of self.inputs and passed as\n # positional arguments to ``__call__``.\n\n # These are the keys that are always present as keyword arguments.\n keys = [\n \"model_set_axis\",\n \"with_bounding_box\",\n \"fill_value\",\n \"equivalencies\",\n \"inputs_map\",\n ]\n\n new_inputs = {}\n # kwargs contain the names of the new inputs + ``keys``\n allkeys = list(kwargs.keys())\n # Remove the names of the new inputs from kwargs and save them\n # to a dict ``new_inputs``.\n for key in allkeys:\n if key not in keys:\n new_inputs[key] = kwargs[key]\n del kwargs[key]\n return new_inputs, kwargs\n\n n_args = len(args)\n\n new_inputs, kwargs = _keyword2positional(kwargs)\n n_all_args = n_args + len(new_inputs)\n\n if n_all_args < self.n_inputs:\n raise ValueError(\n f\"Missing input arguments - expected {self.n_inputs}, got {n_all_args}\"\n )\n elif n_all_args > self.n_inputs:\n raise ValueError(\n f\"Too many input arguments - expected {self.n_inputs}, got {n_all_args}\"\n )\n if n_args == 0:\n # Create positional arguments from the keyword arguments in ``new_inputs``.\n new_args = []\n for k in self.inputs:\n new_args.append(new_inputs[k])\n elif n_args != self.n_inputs:\n # Some inputs are passed as positional, others as keyword arguments.\n args = list(args)\n\n # Create positional arguments from the keyword arguments in ``new_inputs``.\n new_args = []\n for k in self.inputs:\n if k in new_inputs:\n new_args.append(new_inputs[k])\n else:\n new_args.append(args[0])\n del args[0]\n else:\n new_args = args\n return new_args, kwargs\n\n # *** Properties ***\n @property\n def name(self):\n \"\"\"User-provided name for this model instance.\"\"\"\n return self._name\n\n @name.setter\n def name(self, val):\n \"\"\"Assign a (new) name to this model.\"\"\"\n self._name = val\n\n @property\n def model_set_axis(self):\n \"\"\"\n The index of the model set axis--that is the axis of a parameter array\n that pertains to which model a parameter value pertains to--as\n specified when the model was initialized.\n\n See the documentation on :ref:`astropy:modeling-model-sets`\n for more details.\n \"\"\"\n return self._model_set_axis\n\n @property\n def param_sets(self):\n \"\"\"\n Return parameters as a pset.\n\n This is a list with one item per parameter set, which is an array of\n that parameter's values across all parameter sets, with the last axis\n associated with the parameter set.\n \"\"\"\n return self._param_sets()\n\n @property\n def parameters(self):\n \"\"\"\n A flattened array of all parameter values in all parameter sets.\n\n Fittable parameters maintain this list and fitters modify it.\n \"\"\"\n # Currently the sequence of a model's parameters must be contiguous\n # within the _parameters array (which may be a view of a larger array,\n # for example when taking a sub-expression of a compound model), so\n # the assumption here is reliable:\n if not self.param_names:\n # Trivial, but not unheard of\n return self._parameters\n\n self._parameters_to_array()\n start = self._param_metrics[self.param_names[0]][\"slice\"].start\n stop = self._param_metrics[self.param_names[-1]][\"slice\"].stop\n\n return self._parameters[start:stop]\n\n @parameters.setter\n def parameters(self, value):\n \"\"\"\n Assigning to this attribute updates the parameters array rather than\n replacing it.\n \"\"\"\n if not self.param_names:\n return\n\n start = 
self._param_metrics[self.param_names[0]][\"slice\"].start\n stop = self._param_metrics[self.param_names[-1]][\"slice\"].stop\n\n try:\n value = np.array(value).flatten()\n self._parameters[start:stop] = value\n except ValueError as e:\n raise InputParameterError(\n \"Input parameter values not compatible with the model \"\n f\"parameters array: {e!r}\"\n )\n self._array_to_parameters()\n\n @property\n def sync_constraints(self):\n \"\"\"\n This is a boolean property that indicates whether or not accessing constraints\n automatically check the constituent models current values. It defaults to True\n on creation of a model, but for fitting purposes it should be set to False\n for performance reasons.\n \"\"\"\n if not hasattr(self, \"_sync_constraints\"):\n self._sync_constraints = True\n return self._sync_constraints\n\n @sync_constraints.setter\n def sync_constraints(self, value):\n if not isinstance(value, bool):\n raise ValueError(\"sync_constraints only accepts True or False as values\")\n self._sync_constraints = value\n\n @property\n def fixed(self):\n \"\"\"\n A ``dict`` mapping parameter names to their fixed constraint.\n \"\"\"\n if not hasattr(self, \"_fixed\") or self.sync_constraints:\n self._fixed = _ConstraintsDict(self, \"fixed\")\n return self._fixed\n\n @property\n def bounds(self):\n \"\"\"\n A ``dict`` mapping parameter names to their upper and lower bounds as\n ``(min, max)`` tuples or ``[min, max]`` lists.\n \"\"\"\n if not hasattr(self, \"_bounds\") or self.sync_constraints:\n self._bounds = _ConstraintsDict(self, \"bounds\")\n return self._bounds\n\n @property\n def tied(self):\n \"\"\"\n A ``dict`` mapping parameter names to their tied constraint.\n \"\"\"\n if not hasattr(self, \"_tied\") or self.sync_constraints:\n self._tied = _ConstraintsDict(self, \"tied\")\n return self._tied\n\n @property\n def eqcons(self):\n \"\"\"List of parameter equality constraints.\"\"\"\n return self._mconstraints[\"eqcons\"]\n\n @property\n def ineqcons(self):\n \"\"\"List of parameter inequality constraints.\"\"\"\n return self._mconstraints[\"ineqcons\"]\n\n def has_inverse(self):\n \"\"\"\n Returns True if the model has an analytic or user\n inverse defined.\n \"\"\"\n try:\n self.inverse # noqa: B018\n except NotImplementedError:\n return False\n\n return True\n\n @property\n def inverse(self):\n \"\"\"\n Returns a new `~astropy.modeling.Model` instance which performs the\n inverse transform, if an analytic inverse is defined for this model.\n\n Even on models that don't have an inverse defined, this property can be\n set with a manually-defined inverse, such a pre-computed or\n experimentally determined inverse (often given as a\n `~astropy.modeling.polynomial.PolynomialModel`, but not by\n requirement).\n\n A custom inverse can be deleted with ``del model.inverse``. In this\n case the model's inverse is reset to its default, if a default exists\n (otherwise the default is to raise `NotImplementedError`).\n\n Note to authors of `~astropy.modeling.Model` subclasses: To define an\n inverse for a model simply override this property to return the\n appropriate model representing the inverse. 
The machinery that will\n make the inverse manually-overridable is added automatically by the\n base class.\n \"\"\"\n if self._user_inverse is not None:\n return self._user_inverse\n elif self._inverse is not None:\n result = self._inverse()\n if result is not NotImplemented:\n if not self._has_inverse_bounding_box:\n result.bounding_box = None\n return result\n\n raise NotImplementedError(\n \"No analytical or user-supplied inverse transform \"\n \"has been implemented for this model.\"\n )\n\n @inverse.setter\n def inverse(self, value):\n if not isinstance(value, (Model, type(None))):\n raise ValueError(\n \"The ``inverse`` attribute may be assigned a `Model` \"\n \"instance or `None` (where `None` explicitly forces the \"\n \"model to have no inverse.\"\n )\n\n self._user_inverse = value\n\n @inverse.deleter\n def inverse(self):\n \"\"\"\n Resets the model's inverse to its default (if one exists, otherwise\n the model will have no inverse).\n \"\"\"\n try:\n del self._user_inverse\n except AttributeError:\n pass\n\n @property\n def has_user_inverse(self):\n \"\"\"\n A flag indicating whether or not a custom inverse model has been\n assigned to this model by a user, via assignment to ``model.inverse``.\n \"\"\"\n return self._user_inverse is not None\n\n @property\n def bounding_box(self):\n r\"\"\"\n A `tuple` of length `n_inputs` defining the bounding box limits, or\n raise `NotImplementedError` for no bounding_box.\n\n The default limits are given by a ``bounding_box`` property or method\n defined in the class body of a specific model. If not defined then\n this property just raises `NotImplementedError` by default (but may be\n assigned a custom value by a user). ``bounding_box`` can be set\n manually to an array-like object of shape ``(model.n_inputs, 2)``. For\n further usage, see :ref:`astropy:bounding-boxes`\n\n The limits are ordered according to the `numpy` ``'C'`` indexing\n convention, and are the reverse of the model input order,\n e.g. for inputs ``('x', 'y', 'z')``, ``bounding_box`` is defined:\n\n * for 1D: ``(x_low, x_high)``\n * for 2D: ``((y_low, y_high), (x_low, x_high))``\n * for 3D: ``((z_low, z_high), (y_low, y_high), (x_low, x_high))``\n\n Examples\n --------\n Setting the ``bounding_box`` limits for a 1D and 2D model:\n\n >>> from astropy.modeling.models import Gaussian1D, Gaussian2D\n >>> model_1d = Gaussian1D()\n >>> model_2d = Gaussian2D(x_stddev=1, y_stddev=1)\n >>> model_1d.bounding_box = (-5, 5)\n >>> model_2d.bounding_box = ((-6, 6), (-5, 5))\n\n Setting the bounding_box limits for a user-defined 3D `custom_model`:\n\n >>> from astropy.modeling.models import custom_model\n >>> def const3d(x, y, z, amp=1):\n ... 
return amp\n ...\n >>> Const3D = custom_model(const3d)\n >>> model_3d = Const3D()\n >>> model_3d.bounding_box = ((-6, 6), (-5, 5), (-4, 4))\n\n To reset ``bounding_box`` to its default limits just delete the\n user-defined value--this will reset it back to the default defined\n on the class:\n\n >>> del model_1d.bounding_box\n\n To disable the bounding box entirely (including the default),\n set ``bounding_box`` to `None`:\n\n >>> model_1d.bounding_box = None\n >>> model_1d.bounding_box # doctest: +IGNORE_EXCEPTION_DETAIL\n Traceback (most recent call last):\n NotImplementedError: No bounding box is defined for this model\n (note: the bounding box was explicitly disabled for this model;\n use `del model.bounding_box` to restore the default bounding box,\n if one is defined for this model).\n \"\"\"\n if self._user_bounding_box is not None:\n if self._user_bounding_box is NotImplemented:\n raise NotImplementedError(\n \"No bounding box is defined for this model (note: the \"\n \"bounding box was explicitly disabled for this model; \"\n \"use `del model.bounding_box` to restore the default \"\n \"bounding box, if one is defined for this model).\"\n )\n return self._user_bounding_box\n elif self._bounding_box is None:\n raise NotImplementedError(\"No bounding box is defined for this model.\")\n elif isinstance(self._bounding_box, ModelBoundingBox):\n # This typically implies a hard-coded bounding box. This will\n # probably be rare, but it is an option\n return self._bounding_box\n elif inspect.ismethod(self._bounding_box):\n return ModelBoundingBox.validate(self, self._bounding_box())\n else:\n # The only other allowed possibility is that it's a ModelBoundingBox\n # subclass, so we call it with its default arguments and return an\n # instance of it (that can be called to recompute the bounding box\n # with any optional parameters)\n # (In other words, in this case self._bounding_box is a *class*)\n bounding_box = self._bounding_box((), model=self)()\n return self._bounding_box(bounding_box, model=self)\n\n @bounding_box.setter\n def bounding_box(self, bounding_box):\n \"\"\"\n Assigns the bounding box limits.\n \"\"\"\n if bounding_box is None:\n cls = None\n # We use this to explicitly set an unimplemented bounding box (as\n # opposed to no user bounding box defined)\n bounding_box = NotImplemented\n elif isinstance(bounding_box, (CompoundBoundingBox, dict)):\n cls = CompoundBoundingBox\n elif isinstance(self._bounding_box, type) and issubclass(\n self._bounding_box, ModelBoundingBox\n ):\n cls = self._bounding_box\n else:\n cls = ModelBoundingBox\n\n if cls is not None:\n try:\n bounding_box = cls.validate(self, bounding_box, _preserve_ignore=True)\n except ValueError as exc:\n raise ValueError(exc.args[0])\n\n self._user_bounding_box = bounding_box\n\n def set_slice_args(self, *args):\n if isinstance(self._user_bounding_box, CompoundBoundingBox):\n self._user_bounding_box.slice_args = args\n else:\n raise RuntimeError(\"The bounding_box for this model is not compound\")\n\n @bounding_box.deleter\n def bounding_box(self):\n self._user_bounding_box = None\n\n @property\n def has_user_bounding_box(self):\n \"\"\"\n A flag indicating whether or not a custom bounding_box has been\n assigned to this model by a user, via assignment to\n ``model.bounding_box``.\n \"\"\"\n return self._user_bounding_box is not None\n\n @property\n def cov_matrix(self):\n \"\"\"\n Fitter should set covariance matrix, if available.\n \"\"\"\n return self._cov_matrix\n\n @cov_matrix.setter\n def cov_matrix(self, 
cov):\n self._cov_matrix = cov\n\n unfix_untied_params = [\n p\n for p in self.param_names\n if (self.fixed[p] is False) and (self.tied[p] is False)\n ]\n if type(cov) == list: # model set\n param_stds = []\n for c in cov:\n param_stds.append(\n [np.sqrt(x) if x > 0 else None for x in np.diag(c.cov_matrix)]\n )\n for p, param_name in enumerate(unfix_untied_params):\n par = getattr(self, param_name)\n par.std = [item[p] for item in param_stds]\n setattr(self, param_name, par)\n else:\n param_stds = [\n np.sqrt(x) if x > 0 else None for x in np.diag(cov.cov_matrix)\n ]\n for param_name in unfix_untied_params:\n par = getattr(self, param_name)\n par.std = param_stds.pop(0)\n setattr(self, param_name, par)\n\n @property\n def stds(self):\n \"\"\"\n Standard deviation of parameters, if covariance matrix is available.\n \"\"\"\n return self._stds\n\n @stds.setter\n def stds(self, stds):\n self._stds = stds\n\n @property\n def separable(self):\n \"\"\"A flag indicating whether a model is separable.\"\"\"\n if self._separable is not None:\n return self._separable\n raise NotImplementedError(\n 'The \"separable\" property is not defined for '\n f\"model {self.__class__.__name__}\"\n )\n\n # *** Public methods ***\n\n def without_units_for_data(self, **kwargs):\n \"\"\"\n Return an instance of the model for which the parameter values have\n been converted to the right units for the data, then the units have\n been stripped away.\n\n The input and output Quantity objects should be given as keyword\n arguments.\n\n Notes\n -----\n This method is needed in order to be able to fit models with units in\n the parameters, since we need to temporarily strip away the units from\n the model during the fitting (which might be done by e.g. scipy\n functions).\n\n The units that the parameters should be converted to are not\n necessarily the units of the input data, but are derived from them.\n Model subclasses that want fitting to work in the presence of\n quantities need to define a ``_parameter_units_for_data_units`` method\n that takes the input and output units (as two dictionaries) and\n returns a dictionary giving the target units for each parameter.\n\n \"\"\"\n model = self.copy()\n\n inputs_unit = {\n inp: getattr(kwargs[inp], \"unit\", dimensionless_unscaled)\n for inp in self.inputs\n if kwargs[inp] is not None\n }\n\n outputs_unit = {\n out: getattr(kwargs[out], \"unit\", dimensionless_unscaled)\n for out in self.outputs\n if kwargs[out] is not None\n }\n parameter_units = self._parameter_units_for_data_units(\n inputs_unit, outputs_unit\n )\n for name, unit in parameter_units.items():\n parameter = getattr(model, name)\n if parameter.unit is not None:\n parameter.value = parameter.quantity.to(unit).value\n parameter._set_unit(None, force=True)\n\n if isinstance(model, CompoundModel):\n model.strip_units_from_tree()\n\n return model\n\n def output_units(self, **kwargs):\n \"\"\"\n Return a dictionary of output units for this model given a dictionary\n of fitting inputs and outputs.\n\n The input and output Quantity objects should be given as keyword\n arguments.\n\n Notes\n -----\n This method is needed in order to be able to fit models with units in\n the parameters, since we need to temporarily strip away the units from\n the model during the fitting (which might be done by e.g. scipy\n functions).\n\n This method will force extra model evaluations, which maybe computationally\n expensive. 
To avoid this, one can add a return_units property to the model,\n see :ref:`astropy:models_return_units`.\n \"\"\"\n units = self.return_units\n\n if units is None or units == {}:\n inputs = {inp: kwargs[inp] for inp in self.inputs}\n\n values = self(**inputs)\n if self.n_outputs == 1:\n values = (values,)\n\n units = {\n out: getattr(values[index], \"unit\", dimensionless_unscaled)\n for index, out in enumerate(self.outputs)\n }\n\n return units\n\n def strip_units_from_tree(self):\n for item in self._leaflist:\n for parname in item.param_names:\n par = getattr(item, parname)\n par._set_unit(None, force=True)\n\n def with_units_from_data(self, **kwargs):\n \"\"\"\n Return an instance of the model which has units for which the parameter\n values are compatible with the data units specified.\n\n The input and output Quantity objects should be given as keyword\n arguments.\n\n Notes\n -----\n This method is needed in order to be able to fit models with units in\n the parameters, since we need to temporarily strip away the units from\n the model during the fitting (which might be done by e.g. scipy\n functions).\n\n The units that the parameters will gain are not necessarily the units\n of the input data, but are derived from them. Model subclasses that\n want fitting to work in the presence of quantities need to define a\n ``_parameter_units_for_data_units`` method that takes the input and output\n units (as two dictionaries) and returns a dictionary giving the target\n units for each parameter.\n \"\"\"\n model = self.copy()\n inputs_unit = {\n inp: getattr(kwargs[inp], \"unit\", dimensionless_unscaled)\n for inp in self.inputs\n if kwargs[inp] is not None\n }\n\n outputs_unit = {\n out: getattr(kwargs[out], \"unit\", dimensionless_unscaled)\n for out in self.outputs\n if kwargs[out] is not None\n }\n\n parameter_units = self._parameter_units_for_data_units(\n inputs_unit, outputs_unit\n )\n\n # We are adding units to parameters that already have a value, but we\n # don't want to convert the parameter, just add the unit directly,\n # hence the call to ``_set_unit``.\n for name, unit in parameter_units.items():\n parameter = getattr(model, name)\n parameter._set_unit(unit, force=True)\n\n return model\n\n @property\n def _has_units(self):\n # Returns True if any of the parameters have units\n return any(getattr(self, param).unit is not None for param in self.param_names)\n\n @property\n def _supports_unit_fitting(self):\n # If the model has a ``_parameter_units_for_data_units`` method, this\n # indicates that we have enough information to strip the units away\n # and add them back after fitting, when fitting quantities\n return hasattr(self, \"_parameter_units_for_data_units\")\n\n @abc.abstractmethod\n def evaluate(self, *args, **kwargs):\n \"\"\"Evaluate the model on some input variables.\"\"\"\n\n def sum_of_implicit_terms(self, *args, **kwargs):\n \"\"\"\n Evaluate the sum of any implicit model terms on some input variables.\n This includes any fixed terms used in evaluating a linear model that\n do not have corresponding parameters exposed to the user. The\n prototypical case is `astropy.modeling.functional_models.Shift`, which\n corresponds to a function y = a + bx, where b=1 is intrinsically fixed\n by the type of model, such that sum_of_implicit_terms(x) == x. This\n method is needed by linear fitters to correct the dependent variable\n for the implicit term(s) when solving for the remaining terms\n (ie. 
a = y - bx).\n \"\"\"\n\n def render(self, out=None, coords=None):\n \"\"\"\n Evaluate a model at fixed positions, respecting the ``bounding_box``.\n\n The key difference relative to evaluating the model directly is that\n this method is limited to a bounding box if the `Model.bounding_box`\n attribute is set.\n\n Parameters\n ----------\n out : `numpy.ndarray`, optional\n An array that the evaluated model will be added to. If this is not\n given (or given as ``None``), a new array will be created.\n coords : array-like, optional\n An array to be used to translate from the model's input coordinates\n to the ``out`` array. It should have the property that\n ``self(coords)`` yields the same shape as ``out``. If ``out`` is\n not specified, ``coords`` will be used to determine the shape of\n the returned array. If this is not provided (or None), the model\n will be evaluated on a grid determined by `Model.bounding_box`.\n\n Returns\n -------\n out : `numpy.ndarray`\n The model added to ``out`` if ``out`` is not ``None``, or else a\n new array from evaluating the model over ``coords``.\n If ``out`` and ``coords`` are both `None`, the returned array is\n limited to the `Model.bounding_box` limits. If\n `Model.bounding_box` is `None`, ``arr`` or ``coords`` must be\n passed.\n\n Raises\n ------\n ValueError\n If ``coords`` are not given and the `Model.bounding_box` of\n this model is not set.\n\n Examples\n --------\n :ref:`astropy:bounding-boxes`\n \"\"\"\n try:\n bbox = self.bounding_box\n except NotImplementedError:\n bbox = None\n\n if isinstance(bbox, ModelBoundingBox):\n bbox = bbox.bounding_box()\n\n ndim = self.n_inputs\n\n if (coords is None) and (out is None) and (bbox is None):\n raise ValueError(\"If no bounding_box is set, coords or out must be input.\")\n\n # for consistent indexing\n if ndim == 1:\n if coords is not None:\n coords = [coords]\n if bbox is not None:\n bbox = [bbox]\n\n if coords is not None:\n coords = np.asanyarray(coords, dtype=float)\n # Check dimensions match out and model\n assert len(coords) == ndim\n if out is not None:\n if coords[0].shape != out.shape:\n raise ValueError(\"inconsistent shape of the output.\")\n else:\n out = np.zeros(coords[0].shape)\n\n if out is not None:\n out = np.asanyarray(out)\n if out.ndim != ndim:\n raise ValueError(\n \"the array and model must have the same number of dimensions.\"\n )\n\n if bbox is not None:\n # Assures position is at center pixel,\n # important when using add_array.\n pd = (\n np.array([(np.mean(bb), np.ceil((bb[1] - bb[0]) / 2)) for bb in bbox])\n .astype(int)\n .T\n )\n pos, delta = pd\n\n if coords is not None:\n sub_shape = tuple(delta * 2 + 1)\n sub_coords = np.array(\n [extract_array(c, sub_shape, pos) for c in coords]\n )\n else:\n limits = [slice(p - d, p + d + 1, 1) for p, d in pd.T]\n sub_coords = np.mgrid[limits]\n\n sub_coords = sub_coords[::-1]\n\n if out is None:\n out = self(*sub_coords)\n else:\n try:\n out = add_array(out, self(*sub_coords), pos)\n except ValueError:\n raise ValueError(\n \"The `bounding_box` is larger than the input out in \"\n \"one or more dimensions. 
Set \"\n \"`model.bounding_box = None`.\"\n )\n else:\n if coords is None:\n im_shape = out.shape\n limits = [slice(i) for i in im_shape]\n coords = np.mgrid[limits]\n\n coords = coords[::-1]\n\n out += self(*coords)\n\n return out\n\n @property\n def input_units(self):\n \"\"\"\n This property is used to indicate what units or sets of units the\n evaluate method expects, and returns a dictionary mapping inputs to\n units (or `None` if any units are accepted).\n\n Model sub-classes can also use function annotations in evaluate to\n indicate valid input units, in which case this property should\n not be overridden since it will return the input units based on the\n annotations.\n \"\"\"\n if hasattr(self, \"_input_units\"):\n return self._input_units\n elif hasattr(self.evaluate, \"__annotations__\"):\n annotations = self.evaluate.__annotations__.copy()\n annotations.pop(\"return\", None)\n if annotations:\n # If there are not annotations for all inputs this will error.\n return {name: annotations[name] for name in self.inputs}\n else:\n # None means any unit is accepted\n return None\n\n @property\n def return_units(self):\n \"\"\"\n This property is used to indicate what units or sets of units the\n output of evaluate should be in, and returns a dictionary mapping\n outputs to units (or `None` if any units are accepted).\n\n Model sub-classes can also use function annotations in evaluate to\n indicate valid output units, in which case this property should not be\n overridden since it will return the return units based on the\n annotations.\n \"\"\"\n if hasattr(self, \"_return_units\"):\n return self._return_units\n elif hasattr(self.evaluate, \"__annotations__\"):\n return self.evaluate.__annotations__.get(\"return\", None)\n else:\n # None means any unit is accepted\n return None\n\n def _prepare_inputs_single_model(self, params, inputs, **kwargs):\n broadcasts = []\n for idx, _input in enumerate(inputs):\n input_shape = _input.shape\n\n # Ensure that array scalars are always upgrade to 1-D arrays for the\n # sake of consistency with how parameters work. They will be cast back\n # to scalars at the end\n if not input_shape:\n inputs[idx] = _input.reshape((1,))\n\n if not params:\n max_broadcast = input_shape\n else:\n max_broadcast = ()\n\n for param in params:\n try:\n if self.standard_broadcasting:\n broadcast = np.broadcast_shapes(input_shape, param.shape)\n else:\n broadcast = input_shape\n except ValueError as exc:\n _add_note_to_exception(\n exc,\n f\"self input argument {self.inputs[idx]!r} of shape\"\n f\" {input_shape!r} cannot be broadcast with parameter\"\n f\" {param.name!r} of shape {param.shape!r}.\",\n )\n raise exc\n\n if len(broadcast) > len(max_broadcast):\n max_broadcast = broadcast\n elif len(broadcast) == len(max_broadcast):\n max_broadcast = max(max_broadcast, broadcast)\n\n broadcasts.append(max_broadcast)\n\n if self.n_outputs > self.n_inputs:\n extra_outputs = self.n_outputs - self.n_inputs\n if not broadcasts:\n # If there were no inputs then the broadcasts list is empty\n # just add a None since there is no broadcasting of outputs and\n # inputs necessary (see _prepare_outputs_single_self)\n broadcasts.append(None)\n broadcasts.extend([broadcasts[0]] * extra_outputs)\n\n return inputs, (broadcasts,)\n\n @staticmethod\n def _remove_axes_from_shape(shape, axis):\n \"\"\"\n Given a shape tuple as the first input, construct a new one by removing\n that particular axis from the shape and all preceding axes. 
Negative axis\n numbers are permittted, where the axis is relative to the last axis.\n \"\"\"\n if len(shape) == 0:\n return shape\n if axis < 0:\n axis = len(shape) + axis\n return shape[:axis] + shape[axis + 1 :]\n if axis >= len(shape):\n axis = len(shape) - 1\n shape = shape[axis + 1 :]\n return shape\n\n def _prepare_inputs_model_set(self, params, inputs, model_set_axis_input, **kwargs):\n reshaped = []\n pivots = []\n\n model_set_axis_param = self.model_set_axis # needed to reshape param\n for idx, _input in enumerate(inputs):\n max_param_shape = ()\n if self._n_models > 1 and model_set_axis_input is not False:\n # Use the shape of the input *excluding* the model axis\n input_shape = (\n _input.shape[:model_set_axis_input]\n + _input.shape[model_set_axis_input + 1 :]\n )\n else:\n input_shape = _input.shape\n\n for param in params:\n try:\n np.broadcast_shapes(\n input_shape,\n self._remove_axes_from_shape(param.shape, model_set_axis_param),\n )\n except ValueError as exc:\n _add_note_to_exception(\n exc,\n f\"Model input argument {self.inputs[idx]!r} of shape\"\n f\" {input_shape!r} \"\n f\"cannot be broadcast with parameter {param.name!r} of shape \"\n f\"{self._remove_axes_from_shape(param.shape, model_set_axis_param)!r}.\",\n )\n raise exc\n\n if len(param.shape) - 1 > len(max_param_shape):\n max_param_shape = self._remove_axes_from_shape(\n param.shape, model_set_axis_param\n )\n\n # We've now determined that, excluding the model_set_axis, the\n # input can broadcast with all the parameters\n input_ndim = len(input_shape)\n if model_set_axis_input is False:\n if len(max_param_shape) > input_ndim:\n # Just needs to prepend new axes to the input\n n_new_axes = 1 + len(max_param_shape) - input_ndim\n new_axes = (1,) * n_new_axes\n new_shape = new_axes + _input.shape\n pivot = model_set_axis_param\n else:\n pivot = input_ndim - len(max_param_shape)\n new_shape = _input.shape[:pivot] + (1,) + _input.shape[pivot:]\n new_input = _input.reshape(new_shape)\n else:\n if len(max_param_shape) >= input_ndim:\n n_new_axes = len(max_param_shape) - input_ndim\n pivot = self.model_set_axis\n new_axes = (1,) * n_new_axes\n new_shape = (\n _input.shape[: pivot + 1] + new_axes + _input.shape[pivot + 1 :]\n )\n new_input = _input.reshape(new_shape)\n else:\n pivot = _input.ndim - len(max_param_shape) - 1\n new_input = np.rollaxis(_input, model_set_axis_input, pivot + 1)\n pivots.append(pivot)\n reshaped.append(new_input)\n\n if self.n_inputs < self.n_outputs:\n pivots.extend([model_set_axis_input] * (self.n_outputs - self.n_inputs))\n\n return reshaped, (pivots,)\n\n def prepare_inputs(\n self, *inputs, model_set_axis=None, equivalencies=None, **kwargs\n ):\n \"\"\"\n This method is used in `~astropy.modeling.Model.__call__` to ensure\n that all the inputs to the model can be broadcast into compatible\n shapes (if one or both of them are input as arrays), particularly if\n there are more than one parameter sets. 
This also makes sure that (if\n applicable) the units of the input will be compatible with the evaluate\n method.\n \"\"\"\n # When we instantiate the model class, we make sure that __call__ can\n # take the following two keyword arguments: model_set_axis and\n # equivalencies.\n if model_set_axis is None:\n # By default the model_set_axis for the input is assumed to be the\n # same as that for the parameters the model was defined with\n # TODO: Ensure that negative model_set_axis arguments are respected\n model_set_axis = self.model_set_axis\n\n params = [getattr(self, name) for name in self.param_names]\n inputs = [np.asanyarray(_input, dtype=float) for _input in inputs]\n\n self._validate_input_shapes(inputs, self.inputs, model_set_axis)\n\n inputs_map = kwargs.get(\"inputs_map\", None)\n\n inputs = self._validate_input_units(inputs, equivalencies, inputs_map)\n\n # The input formatting required for single models versus a multiple\n # model set are different enough that they've been split into separate\n # subroutines\n if self._n_models == 1:\n return self._prepare_inputs_single_model(params, inputs, **kwargs)\n else:\n return self._prepare_inputs_model_set(\n params, inputs, model_set_axis, **kwargs\n )\n\n def _validate_input_units(self, inputs, equivalencies=None, inputs_map=None):\n inputs = list(inputs)\n name = self.name or self.__class__.__name__\n # Check that the units are correct, if applicable\n\n if self.input_units is not None:\n # If a leaflist is provided that means this is in the context of\n # a compound model and it is necessary to create the appropriate\n # alias for the input coordinate name for the equivalencies dict\n if inputs_map:\n edict = {}\n for mod, mapping in inputs_map:\n if self is mod:\n edict[mapping[0]] = equivalencies[mapping[1]]\n else:\n edict = equivalencies\n # We combine any instance-level input equivalencies with user\n # specified ones at call-time.\n input_units_equivalencies = _combine_equivalency_dict(\n self.inputs, edict, self.input_units_equivalencies\n )\n\n # We now iterate over the different inputs and make sure that their\n # units are consistent with those specified in input_units.\n for i in range(len(inputs)):\n input_name = self.inputs[i]\n input_unit = self.input_units.get(input_name, None)\n\n if input_unit is None:\n continue\n\n if isinstance(inputs[i], Quantity):\n # We check for consistency of the units with input_units,\n # taking into account any equivalencies\n\n if inputs[i].unit.is_equivalent(\n input_unit, equivalencies=input_units_equivalencies[input_name]\n ):\n # If equivalencies have been specified, we need to\n # convert the input to the input units - this is\n # because some equivalencies are non-linear, and\n # we need to be sure that we evaluate the model in\n # its own frame of reference. 
If input_units_strict\n # is set, we also need to convert to the input units.\n if (\n len(input_units_equivalencies) > 0\n or self.input_units_strict[input_name]\n ):\n inputs[i] = inputs[i].to(\n input_unit,\n equivalencies=input_units_equivalencies[input_name],\n )\n\n else:\n # We consider the following two cases separately so as\n # to be able to raise more appropriate/nicer exceptions\n\n if input_unit is dimensionless_unscaled:\n raise UnitsError(\n f\"{name}: Units of input '{self.inputs[i]}', \"\n f\"{inputs[i].unit} ({inputs[i].unit.physical_type}),\"\n \"could not be converted to \"\n \"required dimensionless \"\n \"input\"\n )\n else:\n raise UnitsError(\n f\"{name}: Units of input '{self.inputs[i]}', \"\n f\"{inputs[i].unit} ({inputs[i].unit.physical_type}),\"\n \" could not be \"\n \"converted to required input\"\n f\" units of {input_unit} ({input_unit.physical_type})\"\n )\n else:\n # If we allow dimensionless input, we add the units to the\n # input values without conversion, otherwise we raise an\n # exception.\n\n if (\n not self.input_units_allow_dimensionless[input_name]\n and input_unit is not dimensionless_unscaled\n and input_unit is not None\n ):\n if np.any(inputs[i] != 0):\n raise UnitsError(\n f\"{name}: Units of input '{self.inputs[i]}',\"\n \" (dimensionless), could not be converted to required \"\n f\"input units of {input_unit} \"\n f\"({input_unit.physical_type})\"\n )\n return inputs\n\n def _process_output_units(self, inputs, outputs):\n inputs_are_quantity = any(isinstance(i, Quantity) for i in inputs)\n if self.return_units and inputs_are_quantity:\n # We allow a non-iterable unit only if there is one output\n if self.n_outputs == 1 and not isiterable(self.return_units):\n return_units = {self.outputs[0]: self.return_units}\n else:\n return_units = self.return_units\n\n outputs = tuple(\n Quantity(out, return_units.get(out_name, None), subok=True)\n for out, out_name in zip(outputs, self.outputs)\n )\n return outputs\n\n @staticmethod\n def _prepare_output_single_model(output, broadcast_shape):\n if broadcast_shape is not None:\n if not broadcast_shape:\n return output.item()\n else:\n try:\n return output.reshape(broadcast_shape)\n except ValueError:\n try:\n return output.item()\n except ValueError:\n return output\n\n return output\n\n def _prepare_outputs_single_model(self, outputs, broadcasted_shapes):\n outputs = list(outputs)\n shapes = broadcasted_shapes[0]\n for idx, output in enumerate(outputs):\n if None in shapes:\n # Previously, we used our own function (check_broadcast) instead\n # of np.broadcast_shapes in the following try block\n # - check_broadcast raised an exception when passed a None.\n # - as of numpy 1.26, np.broadcast raises a deprecation warning\n # when passed a `None` value, but returns an empty tuple.\n #\n # Since () and None have different effects downstream of this function,\n # and to preserve backward-compatibility, we handle this special here\n broadcast_shape = shapes[idx]\n else:\n try:\n broadcast_shape = np.broadcast_shapes(*shapes)\n except Exception:\n broadcast_shape = shapes[idx]\n\n outputs[idx] = self._prepare_output_single_model(output, broadcast_shape)\n\n return tuple(outputs)\n\n def _prepare_outputs_model_set(self, outputs, broadcasted_shapes, model_set_axis):\n pivots = broadcasted_shapes[0]\n # If model_set_axis = False was passed then use\n # self._model_set_axis to format the output.\n if model_set_axis is None or model_set_axis is False:\n model_set_axis = self.model_set_axis\n outputs = 
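A sketch of the validation behaviour implemented above: inputs with incompatible units raise `UnitsError` unless suitable equivalencies are supplied at call time (the `equivalencies` keyword maps input names to equivalency lists). `Gaussian1D` and the specific quantities here are illustrative only.

    import astropy.units as u
    from astropy.modeling.models import Gaussian1D

    g = Gaussian1D(amplitude=3 * u.Jy, mean=500 * u.nm, stddev=10 * u.nm)
    # g(1 * u.s) would raise UnitsError: seconds cannot be converted to nm.
    # A frequency works once a spectral equivalency is given for input 'x':
    g(600 * u.THz, equivalencies={"x": u.spectral()})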
list(outputs)\n for idx, output in enumerate(outputs):\n pivot = pivots[idx]\n if pivot < output.ndim and pivot != model_set_axis:\n outputs[idx] = np.rollaxis(output, pivot, model_set_axis)\n return tuple(outputs)\n\n def prepare_outputs(self, broadcasted_shapes, *outputs, **kwargs):\n model_set_axis = kwargs.get(\"model_set_axis\", None)\n\n if len(self) == 1:\n return self._prepare_outputs_single_model(outputs, broadcasted_shapes)\n else:\n return self._prepare_outputs_model_set(\n outputs, broadcasted_shapes, model_set_axis\n )\n\n def copy(self):\n \"\"\"\n Return a copy of this model.\n\n Uses a deep copy so that all model attributes, including parameter\n values, are copied as well.\n \"\"\"\n return copy.deepcopy(self)\n\n def deepcopy(self):\n \"\"\"\n Return a deep copy of this model.\n\n \"\"\"\n return self.copy()\n\n @sharedmethod\n def rename(self, name):\n \"\"\"\n Return a copy of this model with a new name.\n \"\"\"\n new_model = self.copy()\n new_model._name = name\n return new_model\n\n def coerce_units(\n self,\n input_units=None,\n return_units=None,\n input_units_equivalencies=None,\n input_units_allow_dimensionless=False,\n ):\n \"\"\"\n Attach units to this (unitless) model.\n\n Parameters\n ----------\n input_units : dict or tuple, optional\n Input units to attach. If dict, each key is the name of a model input,\n and the value is the unit to attach. If tuple, the elements are units\n to attach in order corresponding to `Model.inputs`.\n return_units : dict or tuple, optional\n Output units to attach. If dict, each key is the name of a model output,\n and the value is the unit to attach. If tuple, the elements are units\n to attach in order corresponding to `Model.outputs`.\n input_units_equivalencies : dict, optional\n Default equivalencies to apply to input values. If set, this should be a\n dictionary where each key is a string that corresponds to one of the\n model inputs.\n input_units_allow_dimensionless : bool or dict, optional\n Allow dimensionless input. If this is True, input values to evaluate will\n gain the units specified in input_units. If this is a dictionary then it\n should map input name to a bool to allow dimensionless numbers for that\n input.\n\n Returns\n -------\n `CompoundModel`\n A `CompoundModel` composed of the current model plus\n `~astropy.modeling.mappings.UnitsMapping` model(s) that attach the units.\n\n Raises\n ------\n ValueError\n If the current model already has units.\n\n Examples\n --------\n Wrapping a unitless model to require and convert units:\n\n >>> from astropy.modeling.models import Polynomial1D\n >>> from astropy import units as u\n >>> poly = Polynomial1D(1, c0=1, c1=2)\n >>> model = poly.coerce_units((u.m,), (u.s,))\n >>> model(u.Quantity(10, u.m)) # doctest: +FLOAT_CMP\n <Quantity 21. s>\n >>> model(u.Quantity(1000, u.cm)) # doctest: +FLOAT_CMP\n <Quantity 21. s>\n >>> model(u.Quantity(10, u.cm)) # doctest: +FLOAT_CMP\n <Quantity 1.2 s>\n\n Wrapping a unitless model but still permitting unitless input:\n\n >>> from astropy.modeling.models import Polynomial1D\n >>> from astropy import units as u\n >>> poly = Polynomial1D(1, c0=1, c1=2)\n >>> model = poly.coerce_units((u.m,), (u.s,), input_units_allow_dimensionless=True)\n >>> model(u.Quantity(10, u.m)) # doctest: +FLOAT_CMP\n <Quantity 21. s>\n >>> model(10) # doctest: +FLOAT_CMP\n <Quantity 21. 
s>\n \"\"\"\n from .mappings import UnitsMapping\n\n result = self\n\n if input_units is not None:\n if self.input_units is not None:\n model_units = self.input_units\n else:\n model_units = {}\n\n for unit in [model_units.get(i) for i in self.inputs]:\n if unit is not None and unit != dimensionless_unscaled:\n raise ValueError(\n \"Cannot specify input_units for model with existing input units\"\n )\n\n if isinstance(input_units, dict):\n if input_units.keys() != set(self.inputs):\n message = (\n f\"\"\"input_units keys ({\", \".join(input_units.keys())}) \"\"\"\n f\"\"\"do not match model inputs ({\", \".join(self.inputs)})\"\"\"\n )\n raise ValueError(message)\n input_units = [input_units[i] for i in self.inputs]\n\n if len(input_units) != self.n_inputs:\n message = (\n \"input_units length does not match n_inputs: \"\n f\"expected {self.n_inputs}, received {len(input_units)}\"\n )\n raise ValueError(message)\n\n mapping = tuple(\n (unit, model_units.get(i)) for i, unit in zip(self.inputs, input_units)\n )\n input_mapping = UnitsMapping(\n mapping,\n input_units_equivalencies=input_units_equivalencies,\n input_units_allow_dimensionless=input_units_allow_dimensionless,\n )\n input_mapping.inputs = self.inputs\n input_mapping.outputs = self.inputs\n result = input_mapping | result\n\n if return_units is not None:\n if self.return_units is not None:\n model_units = self.return_units\n else:\n model_units = {}\n\n for unit in [model_units.get(i) for i in self.outputs]:\n if unit is not None and unit != dimensionless_unscaled:\n raise ValueError(\n \"Cannot specify return_units for model \"\n \"with existing output units\"\n )\n\n if isinstance(return_units, dict):\n if return_units.keys() != set(self.outputs):\n message = (\n f\"\"\"return_units keys ({\", \".join(return_units.keys())}) \"\"\"\n f\"\"\"do not match model outputs ({\", \".join(self.outputs)})\"\"\"\n )\n raise ValueError(message)\n return_units = [return_units[i] for i in self.outputs]\n\n if len(return_units) != self.n_outputs:\n message = (\n \"return_units length does not match n_outputs: \"\n f\"expected {self.n_outputs}, received {len(return_units)}\"\n )\n raise ValueError(message)\n\n mapping = tuple(\n (model_units.get(i), unit)\n for i, unit in zip(self.outputs, return_units)\n )\n return_mapping = UnitsMapping(mapping)\n return_mapping.inputs = self.outputs\n return_mapping.outputs = self.outputs\n result = result | return_mapping\n\n return result\n\n @property\n def n_submodels(self):\n \"\"\"\n Return the number of components in a single model, which is\n obviously 1.\n \"\"\"\n return 1\n\n def _initialize_constraints(self, kwargs):\n \"\"\"\n Pop parameter constraint values off the keyword arguments passed to\n `Model.__init__` and store them in private instance attributes.\n \"\"\"\n # Pop any constraints off the keyword arguments\n for constraint in self.parameter_constraints:\n values = kwargs.pop(constraint, {})\n for ckey, cvalue in values.items():\n param = getattr(self, ckey)\n setattr(param, constraint, cvalue)\n self._mconstraints = {}\n for constraint in self.model_constraints:\n values = kwargs.pop(constraint, [])\n self._mconstraints[constraint] = values\n\n def _initialize_parameters(self, args, kwargs):\n \"\"\"\n Initialize the _parameters array that stores raw parameter values for\n all parameter sets for use with vectorized fitting algorithms; on\n FittableModels the _param_name attributes actually just reference\n slices of this array.\n \"\"\"\n n_models = kwargs.pop(\"n_models\", None)\n\n 
if not (\n n_models is None\n or (isinstance(n_models, (int, np.integer)) and n_models >= 1)\n ):\n raise ValueError(\n \"n_models must be either None (in which case it is \"\n \"determined from the model_set_axis of the parameter initial \"\n \"values) or it must be a positive integer \"\n f\"(got {n_models!r})\"\n )\n\n model_set_axis = kwargs.pop(\"model_set_axis\", None)\n if model_set_axis is None:\n if n_models is not None and n_models > 1:\n # Default to zero\n model_set_axis = 0\n else:\n # Otherwise disable\n model_set_axis = False\n else:\n if not (\n model_set_axis is False\n or np.issubdtype(type(model_set_axis), np.integer)\n ):\n raise ValueError(\n \"model_set_axis must be either False or an integer \"\n \"specifying the parameter array axis to map to each \"\n f\"model in a set of models (got {model_set_axis!r}).\"\n )\n\n # Process positional arguments by matching them up with the\n # corresponding parameters in self.param_names--if any also appear as\n # keyword arguments this presents a conflict\n params = set()\n if len(args) > len(self.param_names):\n raise TypeError(\n f\"{self.__class__.__name__}.__init__() takes at most \"\n f\"{len(self.param_names)} positional arguments ({len(args)} given)\"\n )\n\n self._model_set_axis = model_set_axis\n self._param_metrics = defaultdict(dict)\n\n for idx, arg in enumerate(args):\n if arg is None:\n # A value of None implies using the default value, if exists\n continue\n # We use quantity_asanyarray here instead of np.asanyarray because\n # if any of the arguments are quantities, we need to return a\n # Quantity object not a plain Numpy array.\n param_name = self.param_names[idx]\n params.add(param_name)\n if not isinstance(arg, Parameter):\n value = quantity_asanyarray(arg, dtype=float)\n else:\n value = arg\n self._initialize_parameter_value(param_name, value)\n\n # At this point the only remaining keyword arguments should be\n # parameter names; any others are in error.\n for param_name in self.param_names:\n if param_name in kwargs:\n if param_name in params:\n raise TypeError(\n f\"{self.__class__.__name__}.__init__() got multiple values for\"\n f\" parameter {param_name!r}\"\n )\n value = kwargs.pop(param_name)\n if value is None:\n continue\n # We use quantity_asanyarray here instead of np.asanyarray\n # because if any of the arguments are quantities, we need\n # to return a Quantity object not a plain Numpy array.\n value = quantity_asanyarray(value, dtype=float)\n params.add(param_name)\n self._initialize_parameter_value(param_name, value)\n # Now deal with case where param_name is not supplied by args or kwargs\n for param_name in self.param_names:\n if param_name not in params:\n self._initialize_parameter_value(param_name, None)\n\n if kwargs:\n # If any keyword arguments were left over at this point they are\n # invalid--the base class should only be passed the parameter\n # values, constraints, and param_dim\n for kwarg in kwargs:\n # Just raise an error on the first unrecognized argument\n raise TypeError(\n f\"{self.__class__.__name__}.__init__() got an unrecognized\"\n f\" parameter {kwarg!r}\"\n )\n\n # Determine the number of model sets: If the model_set_axis is\n # None then there is just one parameter set; otherwise it is determined\n # by the size of that axis on the first parameter--if the other\n # parameters don't have the right number of axes or the sizes of their\n # model_set_axis don't match an error is raised\n if model_set_axis is not False and n_models != 1 and params:\n max_ndim = 0\n if 
model_set_axis < 0:\n min_ndim = abs(model_set_axis)\n else:\n min_ndim = model_set_axis + 1\n\n for name in self.param_names:\n value = getattr(self, name)\n param_ndim = np.ndim(value)\n if param_ndim < min_ndim:\n raise InputParameterError(\n \"All parameter values must be arrays of dimension at least\"\n f\" {min_ndim} for model_set_axis={model_set_axis} (the value\"\n f\" given for {name!r} is only {param_ndim}-dimensional)\"\n )\n\n max_ndim = max(max_ndim, param_ndim)\n\n if n_models is None:\n # Use the dimensions of the first parameter to determine\n # the number of model sets\n n_models = value.shape[model_set_axis]\n elif value.shape[model_set_axis] != n_models:\n raise InputParameterError(\n f\"Inconsistent dimensions for parameter {name!r} for\"\n f\" {n_models} model sets. The length of axis\"\n f\" {model_set_axis} must be the same for all input parameter\"\n \" values\"\n )\n\n self._check_param_broadcast(max_ndim)\n else:\n if n_models is None:\n n_models = 1\n\n self._check_param_broadcast(None)\n\n self._n_models = n_models\n # now validate parameters\n for name in params:\n param = getattr(self, name)\n if param._validator is not None:\n param._validator(self, param.value)\n\n def _initialize_parameter_value(self, param_name, value):\n \"\"\"Mostly deals with consistency checks and determining unit issues.\"\"\"\n if isinstance(value, Parameter):\n self.__dict__[param_name] = value\n return\n param = getattr(self, param_name)\n # Use default if value is not provided\n if value is None:\n default = param.default\n if default is None:\n # No value was supplied for the parameter and the\n # parameter does not have a default, therefore the model\n # is underspecified\n raise TypeError(\n f\"{self.__class__.__name__}.__init__() requires a value for \"\n f\"parameter {param_name!r}\"\n )\n value = default\n unit = param.unit\n else:\n if isinstance(value, Quantity):\n unit = value.unit\n value = value.value\n else:\n unit = None\n if unit is None and param.unit is not None:\n raise InputParameterError(\n f\"{self.__class__.__name__}.__init__() requires a Quantity for\"\n f\" parameter {param_name!r}\"\n )\n\n param._unit = unit\n param._set_unit(unit, force=True)\n param.internal_unit = None\n if param._setter is not None:\n if unit is not None:\n _val = param._setter(value * unit)\n else:\n _val = param._setter(value)\n if isinstance(_val, Quantity):\n param.internal_unit = _val.unit\n param._internal_value = np.array(_val.value)\n else:\n param.internal_unit = None\n param._internal_value = np.array(_val)\n else:\n param._value = np.array(value)\n\n def _initialize_slices(self):\n param_metrics = self._param_metrics\n total_size = 0\n\n for name in self.param_names:\n param = getattr(self, name)\n value = param.value\n param_size = np.size(value)\n param_shape = np.shape(value)\n param_slice = slice(total_size, total_size + param_size)\n param_metrics[name][\"slice\"] = param_slice\n param_metrics[name][\"shape\"] = param_shape\n param_metrics[name][\"size\"] = param_size\n total_size += param_size\n self._parameters = np.empty(total_size, dtype=np.float64)\n\n def _parameters_to_array(self):\n # Now set the parameter values (this will also fill\n # self._parameters)\n param_metrics = self._param_metrics\n for name in self.param_names:\n param = getattr(self, name)\n value = param.value\n if not isinstance(value, np.ndarray):\n value = np.array([value])\n self._parameters[param_metrics[name][\"slice\"]] = value.ravel()\n\n # Finally validate all the parameters; we do this 
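The `n_models` / `model_set_axis` machinery above is what lets a single instance hold a set of models whose parameter values are stacked along one axis; a minimal sketch using `Polynomial1D` (from astropy's model library, not part of this excerpt):

    import numpy as np
    from astropy.modeling.models import Polynomial1D

    # Two linear models in one set: each parameter holds one value per model
    # along model_set_axis 0.
    pset = Polynomial1D(degree=1, c0=[1.0, 2.0], c1=[3.0, 4.0], n_models=2)
    x = np.linspace(0.0, 1.0, 5)
    y = pset(x, model_set_axis=False)   # same x for both models -> shape (2, 5)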
last so that\n # validators that depend on one of the other parameters' values will\n # work\n\n def _array_to_parameters(self):\n param_metrics = self._param_metrics\n for name in self.param_names:\n param = getattr(self, name)\n value = self._parameters[param_metrics[name][\"slice\"]]\n value.shape = param_metrics[name][\"shape\"]\n param.value = value\n\n def _check_param_broadcast(self, max_ndim):\n \"\"\"\n This subroutine checks that all parameter arrays can be broadcast\n against each other, and determines the shapes parameters must have in\n order to broadcast correctly.\n\n If model_set_axis is None this merely checks that the parameters\n broadcast and returns an empty dict if so. This mode is only used for\n single model sets.\n \"\"\"\n all_shapes = []\n model_set_axis = self._model_set_axis\n\n for name in self.param_names:\n param = getattr(self, name)\n value = param.value\n param_shape = np.shape(value)\n param_ndim = len(param_shape)\n if max_ndim is not None and param_ndim < max_ndim:\n # All arrays have the same number of dimensions up to the\n # model_set_axis dimension, but after that they may have a\n # different number of trailing axes. The number of trailing\n # axes must be extended for mutual compatibility. For example\n # if max_ndim = 3 and model_set_axis = 0, an array with the\n # shape (2, 2) must be extended to (2, 1, 2). However, an\n # array with shape (2,) is extended to (2, 1).\n new_axes = (1,) * (max_ndim - param_ndim)\n\n if model_set_axis < 0:\n # Just need to prepend axes to make up the difference\n broadcast_shape = new_axes + param_shape\n else:\n broadcast_shape = (\n param_shape[: model_set_axis + 1]\n + new_axes\n + param_shape[model_set_axis + 1 :]\n )\n self._param_metrics[name][\"broadcast_shape\"] = broadcast_shape\n all_shapes.append(broadcast_shape)\n else:\n all_shapes.append(param_shape)\n\n # Now check mutual broadcastability of all shapes\n try:\n np.broadcast_shapes(*all_shapes)\n except ValueError as exc:\n # In a previous version, we used to have our own version of\n # np.broadcast_shapes (check_broadcast). In order to preserve\n # backward compatibility, we now have to go the extra mile and\n # parse an error message controlled by numpy.\n base_message = (\n \"All parameter arrays \"\n \"must have shapes that are mutually compatible according \"\n \"to the broadcasting rules.\"\n )\n broadcast_shapes_error_re = re.compile(\n r\"shape mismatch: objects cannot be broadcast to a single shape\\. \"\n r\"Mismatch is between \"\n r\"arg (?P<argno_a>\\d+) with shape (?P<shape_a>\\((\\d+(, ?)?)+\\)) and \"\n r\"arg (?P<argno_b>\\d+) with shape (?P<shape_b>\\((\\d+(, ?)?)+\\))\\.\"\n )\n if (match := broadcast_shapes_error_re.fullmatch(str(exc))) is not None:\n shape_a = match.group(\"shape_a\")\n shape_b = match.group(\"shape_b\")\n shape_a_idx = int(match.group(\"argno_a\"))\n shape_b_idx = int(match.group(\"argno_b\"))\n param_a = self.param_names[shape_a_idx]\n param_b = self.param_names[shape_b_idx]\n message = (\n f\"Parameter {param_a!r} of shape {shape_a} cannot be broadcast with \"\n f\"parameter {param_b!r} of shape {shape_b}.\"\n )\n else:\n warnings.warn(\n \"Failed to parse error message from np.broadcast_shapes. 
\"\n \"Please report this at \"\n \"https://github.com/astropy/astropy/issues/new/choose\",\n category=RuntimeWarning,\n stacklevel=1,\n )\n message = \"Some parameters failed to broadcast with each other.\"\n\n raise InputParameterError(f\"{message} {base_message}\") from None\n\n def _param_sets(self, raw=False, units=False):\n \"\"\"\n Implementation of the Model.param_sets property.\n\n This internal implementation has a ``raw`` argument which controls\n whether or not to return the raw parameter values (i.e. the values that\n are actually stored in the ._parameters array, as opposed to the values\n displayed to users. In most cases these are one in the same but there\n are currently a few exceptions.\n\n Note: This is notably an overcomplicated device and may be removed\n entirely in the near future.\n \"\"\"\n values = []\n shapes = []\n for name in self.param_names:\n param = getattr(self, name)\n\n if raw and param._setter:\n value = param._internal_value\n else:\n value = param.value\n\n broadcast_shape = self._param_metrics[name].get(\"broadcast_shape\")\n if broadcast_shape is not None:\n value = value.reshape(broadcast_shape)\n\n shapes.append(np.shape(value))\n\n if len(self) == 1:\n # Add a single param set axis to the parameter's value (thus\n # converting scalars to shape (1,) array values) for\n # consistency\n value = np.array([value])\n\n if units:\n if raw and param.internal_unit is not None:\n unit = param.internal_unit\n else:\n unit = param.unit\n if unit is not None:\n value = Quantity(value, unit, subok=True)\n\n values.append(value)\n\n if len(set(shapes)) != 1 or units:\n # If the parameters are not all the same shape, converting to an\n # array is going to produce an object array\n # However the way Numpy creates object arrays is tricky in that it\n # will recurse into array objects in the list and break them up\n # into separate objects. Doing things this way ensures a 1-D\n # object array the elements of which are the individual parameter\n # arrays. 
There's not much reason to do this over returning a list\n # except for consistency\n psets = np.empty(len(values), dtype=object)\n psets[:] = values\n return psets\n\n return np.array(values)\n\n def _format_repr(self, args=[], kwargs={}, defaults={}):\n \"\"\"\n Internal implementation of ``__repr__``.\n\n This is separated out for ease of use by subclasses that wish to\n override the default ``__repr__`` while keeping the same basic\n formatting.\n \"\"\"\n parts = [repr(a) for a in args]\n\n parts.extend(\n f\"{name}={param_repr_oneline(getattr(self, name))}\"\n for name in self.param_names\n )\n\n if self.name is not None:\n parts.append(f\"name={self.name!r}\")\n\n for kwarg, value in kwargs.items():\n if kwarg in defaults and defaults[kwarg] == value:\n continue\n parts.append(f\"{kwarg}={value!r}\")\n\n if len(self) > 1:\n parts.append(f\"n_models={len(self)}\")\n\n return f\"<{self.__class__.__name__}({', '.join(parts)})>\"\n\n def _format_str(self, keywords=[], defaults={}):\n \"\"\"\n Internal implementation of ``__str__``.\n\n This is separated out for ease of use by subclasses that wish to\n override the default ``__str__`` while keeping the same basic\n formatting.\n \"\"\"\n default_keywords = [\n (\"Model\", self.__class__.__name__),\n (\"Name\", self.name),\n (\"Inputs\", self.inputs),\n (\"Outputs\", self.outputs),\n (\"Model set size\", len(self)),\n ]\n\n parts = [\n f\"{keyword}: {value}\"\n for keyword, value in default_keywords\n if value is not None\n ]\n\n for keyword, value in keywords:\n if keyword.lower() in defaults and defaults[keyword.lower()] == value:\n continue\n parts.append(f\"{keyword}: {value}\")\n parts.append(\"Parameters:\")\n\n if len(self) == 1:\n columns = [[getattr(self, name).value] for name in self.param_names]\n else:\n columns = [getattr(self, name).value for name in self.param_names]\n\n if columns:\n param_table = Table(columns, names=self.param_names)\n # Set units on the columns\n for name in self.param_names:\n param_table[name].unit = getattr(self, name).unit\n parts.append(indent(str(param_table), 4 * \" \"))\n\n return \"\\n\".join(parts)\n\n\nclass FittableModel(Model):\n \"\"\"\n Base class for models that can be fitted using the built-in fitting\n algorithms.\n \"\"\"\n\n linear = False\n # derivative with respect to parameters\n fit_deriv = None\n \"\"\"\n Function (similar to the model's `~Model.evaluate`) to compute the\n derivatives of the model with respect to its parameters, for use by fitting\n algorithms. 
In other words, this computes the Jacobian matrix with respect\n to the model's parameters.\n \"\"\"\n # Flag that indicates if the model derivatives with respect to parameters\n # are given in columns or rows\n col_fit_deriv = True\n fittable = True\n\n\nclass Fittable1DModel(FittableModel):\n \"\"\"\n Base class for one-dimensional fittable models.\n\n This class provides an easier interface to defining new models.\n Examples can be found in `astropy.modeling.functional_models`.\n \"\"\"\n\n n_inputs = 1\n n_outputs = 1\n _separable = True\n\n\nclass Fittable2DModel(FittableModel):\n \"\"\"\n Base class for two-dimensional fittable models.\n\n This class provides an easier interface to defining new models.\n Examples can be found in `astropy.modeling.functional_models`.\n \"\"\"\n\n n_inputs = 2\n n_outputs = 1\n\n\ndef _make_arithmetic_operator(oper):\n # We don't bother with tuple unpacking here for efficiency's sake, but for\n # documentation purposes:\n #\n # f_eval, f_n_inputs, f_n_outputs = f\n #\n # and similarly for g\n def op(f, g):\n return (make_binary_operator_eval(oper, f[0], g[0]), f[1], f[2])\n\n return op\n\n\ndef _composition_operator(f, g):\n # We don't bother with tuple unpacking here for efficiency's sake, but for\n # documentation purposes:\n #\n # f_eval, f_n_inputs, f_n_outputs = f\n #\n # and similarly for g\n return (lambda inputs, params: g[0](f[0](inputs, params), params), f[1], g[2])\n\n\ndef _join_operator(f, g):\n # We don't bother with tuple unpacking here for efficiency's sake, but for\n # documentation purposes:\n #\n # f_eval, f_n_inputs, f_n_outputs = f\n #\n # and similarly for g\n return (\n lambda inputs, params: (\n f[0](inputs[: f[1]], params) + g[0](inputs[f[1] :], params)\n ),\n f[1] + g[1],\n f[2] + g[2],\n )\n\n\nBINARY_OPERATORS = {\n \"+\": _make_arithmetic_operator(operator.add),\n \"-\": _make_arithmetic_operator(operator.sub),\n \"*\": _make_arithmetic_operator(operator.mul),\n \"/\": _make_arithmetic_operator(operator.truediv),\n \"**\": _make_arithmetic_operator(operator.pow),\n \"|\": _composition_operator,\n \"&\": _join_operator,\n}\n\nSPECIAL_OPERATORS = _SpecialOperatorsDict()\n\n\ndef _add_special_operator(sop_name, sop):\n return SPECIAL_OPERATORS.add(sop_name, sop)\n\n\nclass CompoundModel(Model):\n \"\"\"\n Base class for compound models.\n\n While it can be used directly, the recommended way\n to combine models is through the model operators.\n \"\"\"\n\n def __init__(self, op, left, right, name=None):\n self.__dict__[\"_param_names\"] = None\n self._n_submodels = None\n self.op = op\n self.left = left\n self.right = right\n self._bounding_box = None\n self._user_bounding_box = None\n self._leaflist = None\n self._tdict = None\n self._parameters = None\n self._parameters_ = None\n self._param_metrics = None\n\n if op != \"fix_inputs\" and len(left) != len(right):\n raise ValueError(\"Both operands must have equal values for n_models\")\n self._n_models = len(left)\n\n if op != \"fix_inputs\" and (\n (left.model_set_axis != right.model_set_axis) or left.model_set_axis\n ): # not False and not 0\n raise ValueError(\n \"model_set_axis must be False or 0 and consistent for operands\"\n )\n self._model_set_axis = left.model_set_axis\n\n if op in [\"+\", \"-\", \"*\", \"/\", \"**\"] or op in SPECIAL_OPERATORS:\n if left.n_inputs != right.n_inputs or left.n_outputs != right.n_outputs:\n raise ModelDefinitionError(\n \"Both operands must match numbers of inputs and outputs\"\n )\n self.n_inputs = left.n_inputs\n self.n_outputs = 
left.n_outputs\n self.inputs = left.inputs\n self.outputs = left.outputs\n elif op == \"&\":\n self.n_inputs = left.n_inputs + right.n_inputs\n self.n_outputs = left.n_outputs + right.n_outputs\n self.inputs = combine_labels(left.inputs, right.inputs)\n self.outputs = combine_labels(left.outputs, right.outputs)\n elif op == \"|\":\n if left.n_outputs != right.n_inputs:\n raise ModelDefinitionError(\n \"Unsupported operands for |:\"\n f\" {left.name} (n_inputs={left.n_inputs},\"\n f\" n_outputs={left.n_outputs}) and\"\n f\" {right.name} (n_inputs={right.n_inputs},\"\n f\" n_outputs={right.n_outputs}); n_outputs for the left-hand model\"\n \" must match n_inputs for the right-hand model.\"\n )\n\n self.n_inputs = left.n_inputs\n self.n_outputs = right.n_outputs\n self.inputs = left.inputs\n self.outputs = right.outputs\n elif op == \"fix_inputs\":\n if not isinstance(left, Model):\n raise ValueError(\n 'First argument to \"fix_inputs\" must be an instance of '\n \"an astropy Model.\"\n )\n if not isinstance(right, dict):\n raise ValueError(\n 'Expected a dictionary for second argument of \"fix_inputs\".'\n )\n\n # Dict keys must match either possible indices\n # for model on left side, or names for inputs.\n self.n_inputs = left.n_inputs - len(right)\n # Assign directly to the private attribute (instead of using the setter)\n # to avoid asserting the new number of outputs matches the old one.\n self._outputs = left.outputs\n self.n_outputs = left.n_outputs\n newinputs = list(left.inputs)\n keys = right.keys()\n input_ind = []\n for key in keys:\n if np.issubdtype(type(key), np.integer):\n if key >= left.n_inputs or key < 0:\n raise ValueError(\n \"Substitution key integer value \"\n \"not among possible input choices.\"\n )\n if key in input_ind:\n raise ValueError(\n \"Duplicate specification of same input (index/name).\"\n )\n input_ind.append(key)\n elif isinstance(key, str):\n if key not in left.inputs:\n raise ValueError(\n \"Substitution key string not among possible input choices.\"\n )\n # Check to see it doesn't match positional\n # specification.\n ind = left.inputs.index(key)\n if ind in input_ind:\n raise ValueError(\n \"Duplicate specification of same input (index/name).\"\n )\n input_ind.append(ind)\n # Remove substituted inputs\n input_ind.sort()\n input_ind.reverse()\n for ind in input_ind:\n del newinputs[ind]\n self.inputs = tuple(newinputs)\n # Now check to see if the input model has bounding_box defined.\n # If so, remove the appropriate dimensions and set it for this\n # instance.\n try:\n self.bounding_box = self.left.bounding_box.fix_inputs(self, right)\n except NotImplementedError:\n pass\n\n else:\n raise ModelDefinitionError(\"Illegal operator: \", self.op)\n self.name = name\n self._fittable = None\n self.fit_deriv = None\n self.col_fit_deriv = None\n if op in (\"|\", \"+\", \"-\"):\n self.linear = left.linear and right.linear\n else:\n self.linear = False\n self.eqcons = []\n self.ineqcons = []\n self.n_left_params = len(self.left.parameters)\n self._map_parameters()\n\n def _get_left_inputs_from_args(self, args):\n return args[: self.left.n_inputs]\n\n def _get_right_inputs_from_args(self, args):\n op = self.op\n if op == \"&\":\n # Args expected to look like (*left inputs, *right inputs, *left params, *right params)\n return args[self.left.n_inputs : self.left.n_inputs + self.right.n_inputs]\n elif op == \"|\" or op == \"fix_inputs\":\n return None\n else:\n return args[: self.left.n_inputs]\n\n def _get_left_params_from_args(self, args):\n op = self.op\n if op 
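A brief sketch of what the operator branches above mean in practice: arithmetic and composition operators produce `CompoundModel` instances, and `fix_inputs` freezes selected inputs. The component models (`Const1D`, `Gaussian1D`, `Gaussian2D`) come from astropy's model library and are not part of this excerpt.

    from astropy.modeling import fix_inputs
    from astropy.modeling.models import Const1D, Gaussian1D, Gaussian2D

    line = Const1D(amplitude=0.2) + Gaussian1D(amplitude=1.0, mean=0.0, stddev=0.5)
    line(0.0)          # 0.2 + 1.0 = 1.2
    line.param_names   # ('amplitude_0', 'amplitude_1', 'mean_1', 'stddev_1')

    g2 = Gaussian2D(amplitude=1.0, x_mean=0.0, y_mean=0.0, x_stddev=1.0, y_stddev=2.0)
    profile = fix_inputs(g2, {"y": 0.0})   # one remaining free input: x
    profile(0.0)                           # 1.0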
== \"&\":\n # Args expected to look like (*left inputs, *right inputs, *left params, *right params)\n n_inputs = self.left.n_inputs + self.right.n_inputs\n return args[n_inputs : n_inputs + self.n_left_params]\n else:\n return args[self.left.n_inputs : self.left.n_inputs + self.n_left_params]\n\n def _get_right_params_from_args(self, args):\n op = self.op\n if op == \"fix_inputs\":\n return None\n if op == \"&\":\n # Args expected to look like (*left inputs, *right inputs, *left params, *right params)\n return args[self.left.n_inputs + self.right.n_inputs + self.n_left_params :]\n else:\n return args[self.left.n_inputs + self.n_left_params :]\n\n def _get_kwarg_model_parameters_as_positional(self, args, kwargs):\n # could do it with inserts but rebuilding seems like simpilist way\n\n # TODO: Check if any param names are in kwargs maybe as an intersection of sets?\n if self.op == \"&\":\n new_args = list(args[: self.left.n_inputs + self.right.n_inputs])\n args_pos = self.left.n_inputs + self.right.n_inputs\n else:\n new_args = list(args[: self.left.n_inputs])\n args_pos = self.left.n_inputs\n\n for param_name in self.param_names:\n kw_value = kwargs.pop(param_name, None)\n if kw_value is not None:\n value = kw_value\n else:\n try:\n value = args[args_pos]\n except IndexError:\n raise IndexError(\"Missing parameter or input\")\n\n args_pos += 1\n new_args.append(value)\n\n return new_args, kwargs\n\n def _apply_operators_to_value_lists(self, leftval, rightval, **kw):\n op = self.op\n if op == \"+\":\n return binary_operation(operator.add, leftval, rightval)\n elif op == \"-\":\n return binary_operation(operator.sub, leftval, rightval)\n elif op == \"*\":\n return binary_operation(operator.mul, leftval, rightval)\n elif op == \"/\":\n return binary_operation(operator.truediv, leftval, rightval)\n elif op == \"**\":\n return binary_operation(operator.pow, leftval, rightval)\n elif op == \"&\":\n if not isinstance(leftval, tuple):\n leftval = (leftval,)\n if not isinstance(rightval, tuple):\n rightval = (rightval,)\n return leftval + rightval\n elif op in SPECIAL_OPERATORS:\n return binary_operation(SPECIAL_OPERATORS[op], leftval, rightval)\n else:\n raise ModelDefinitionError(\"Unrecognized operator {op}\")\n\n def evaluate(self, *args, **kw):\n op = self.op\n args, kw = self._get_kwarg_model_parameters_as_positional(args, kw)\n left_inputs = self._get_left_inputs_from_args(args)\n left_params = self._get_left_params_from_args(args)\n\n if op == \"fix_inputs\":\n pos_index = dict(zip(self.left.inputs, range(self.left.n_inputs)))\n fixed_inputs = {\n key if np.issubdtype(type(key), np.integer) else pos_index[key]: value\n for key, value in self.right.items()\n }\n left_inputs = [\n fixed_inputs[ind] if ind in fixed_inputs.keys() else inp\n for ind, inp in enumerate(left_inputs)\n ]\n\n leftval = self.left.evaluate(*left_inputs, *left_params)\n\n if op == \"fix_inputs\":\n return leftval\n\n right_inputs = self._get_right_inputs_from_args(args)\n right_params = self._get_right_params_from_args(args)\n\n if op == \"|\":\n if isinstance(leftval, tuple):\n return self.right.evaluate(*leftval, *right_params)\n else:\n return self.right.evaluate(leftval, *right_params)\n else:\n rightval = self.right.evaluate(*right_inputs, *right_params)\n\n return self._apply_operators_to_value_lists(leftval, rightval, **kw)\n\n @property\n def n_submodels(self):\n if self._leaflist is None:\n self._make_leaflist()\n return len(self._leaflist)\n\n @property\n def submodel_names(self):\n \"\"\"Return the names of 
submodels in a ``CompoundModel``.\"\"\"\n if self._leaflist is None:\n self._make_leaflist()\n names = [item.name for item in self._leaflist]\n nonecount = 0\n newnames = []\n for item in names:\n if item is None:\n newnames.append(f\"None_{nonecount}\")\n nonecount += 1\n else:\n newnames.append(item)\n return tuple(newnames)\n\n def _pre_evaluate(self, *args, **kwargs):\n \"\"\"\n CompoundModel specific input setup that needs to occur prior to\n model evaluation.\n\n Note\n ----\n All of the _pre_evaluate for each component model will be\n performed at the time that the individual model is evaluated.\n \"\"\"\n # If equivalencies are provided, necessary to map parameters and pass\n # the leaflist as a keyword input for use by model evaluation so that\n # the compound model input names can be matched to the model input\n # names.\n if \"equivalencies\" in kwargs:\n # Restructure to be useful for the individual model lookup\n kwargs[\"inputs_map\"] = [\n (value[0], (value[1], key)) for key, value in self.inputs_map().items()\n ]\n\n # Setup actual model evaluation method\n def evaluate(_inputs):\n return self._evaluate(*_inputs, **kwargs)\n\n return evaluate, args, None, kwargs\n\n @property\n def _argnames(self):\n \"\"\"\n No inputs should be used to determine input_shape when handling compound models.\n \"\"\"\n return ()\n\n def _post_evaluate(self, inputs, outputs, broadcasted_shapes, with_bbox, **kwargs):\n \"\"\"\n CompoundModel specific post evaluation processing of outputs.\n\n Note\n ----\n All of the _post_evaluate for each component model will be\n performed at the time that the individual model is evaluated.\n \"\"\"\n if self.get_bounding_box(with_bbox) is not None and self.n_outputs == 1:\n return outputs[0]\n return outputs\n\n def _evaluate(self, *args, **kw):\n op = self.op\n if op != \"fix_inputs\":\n if op != \"&\":\n leftval = self.left(*args, **kw)\n if op != \"|\":\n rightval = self.right(*args, **kw)\n else:\n rightval = None\n\n else:\n leftval = self.left(*(args[: self.left.n_inputs]), **kw)\n rightval = self.right(*(args[self.left.n_inputs :]), **kw)\n\n if op != \"|\":\n return self._apply_operators_to_value_lists(leftval, rightval, **kw)\n\n elif op == \"|\":\n if isinstance(leftval, tuple):\n return self.right(*leftval, **kw)\n else:\n return self.right(leftval, **kw)\n\n else:\n subs = self.right\n newargs = list(args)\n subinds = []\n subvals = []\n for key in subs.keys():\n if np.issubdtype(type(key), np.integer):\n subinds.append(key)\n elif isinstance(key, str):\n ind = self.left.inputs.index(key)\n subinds.append(ind)\n subvals.append(subs[key])\n # Turn inputs specified in kw into positional indices.\n # Names for compound inputs do not propagate to sub models.\n kwind = []\n kwval = []\n for kwkey in list(kw.keys()):\n if kwkey in self.inputs:\n ind = self.inputs.index(kwkey)\n if ind < len(args):\n raise ValueError(\n \"Keyword argument duplicates positional value supplied.\"\n )\n kwind.append(ind)\n kwval.append(kw[kwkey])\n del kw[kwkey]\n # Build new argument list\n # Append keyword specified args first\n if kwind:\n kwargs = list(zip(kwind, kwval))\n kwargs.sort()\n kwindsorted, kwvalsorted = list(zip(*kwargs))\n newargs = newargs + list(kwvalsorted)\n if subinds:\n subargs = list(zip(subinds, subvals))\n subargs.sort()\n # subindsorted, subvalsorted = list(zip(*subargs))\n # The substitutions must be inserted in order\n for ind, val in subargs:\n newargs.insert(ind, val)\n return self.left(*newargs, **kw)\n\n @property\n def param_names(self):\n 
\"\"\"An ordered list of parameter names.\"\"\"\n return self._param_names\n\n def _make_leaflist(self):\n tdict = {}\n leaflist = []\n make_subtree_dict(self, \"\", tdict, leaflist)\n self._leaflist = leaflist\n self._tdict = tdict\n\n def __getattr__(self, name):\n \"\"\"\n If someone accesses an attribute not already defined, map the\n parameters, and then see if the requested attribute is one of\n the parameters.\n \"\"\"\n # The following test is needed to avoid infinite recursion\n # caused by deepcopy. There may be other such cases discovered.\n if name == \"__setstate__\":\n raise AttributeError\n if name in self._param_names:\n return self.__dict__[name]\n else:\n raise AttributeError(f'Attribute \"{name}\" not found')\n\n def __getitem__(self, index):\n if self._leaflist is None:\n self._make_leaflist()\n leaflist = self._leaflist\n tdict = self._tdict\n if isinstance(index, slice):\n if index.step:\n raise ValueError(\"Steps in slices not supported for compound models\")\n if index.start is not None:\n if isinstance(index.start, str):\n start = self._str_index_to_int(index.start)\n else:\n start = index.start\n else:\n start = 0\n if index.stop is not None:\n if isinstance(index.stop, str):\n stop = self._str_index_to_int(index.stop)\n else:\n stop = index.stop - 1\n else:\n stop = len(leaflist) - 1\n if index.stop == 0:\n raise ValueError(\"Slice endpoint cannot be 0\")\n if start < 0:\n start = len(leaflist) + start\n if stop < 0:\n stop = len(leaflist) + stop\n # now search for matching node:\n if stop == start: # only single value, get leaf instead in code below\n index = start\n else:\n for key in tdict:\n node, leftind, rightind = tdict[key]\n if leftind == start and rightind == stop:\n return node\n raise IndexError(\"No appropriate subtree matches slice\")\n if np.issubdtype(type(index), np.integer):\n return leaflist[index]\n elif isinstance(index, str):\n return leaflist[self._str_index_to_int(index)]\n else:\n raise TypeError(\"index must be integer, slice, or model name string\")\n\n def _str_index_to_int(self, str_index):\n # Search through leaflist for item with that name\n found = []\n for nleaf, leaf in enumerate(self._leaflist):\n if getattr(leaf, \"name\", None) == str_index:\n found.append(nleaf)\n if len(found) == 0:\n raise IndexError(f\"No component with name '{str_index}' found\")\n if len(found) > 1:\n raise IndexError(\n f\"Multiple components found using '{str_index}' as name\\n\"\n f\"at indices {found}\"\n )\n return found[0]\n\n @property\n def n_inputs(self):\n \"\"\"The number of inputs of a model.\"\"\"\n return self._n_inputs\n\n @n_inputs.setter\n def n_inputs(self, value):\n self._n_inputs = value\n\n @property\n def n_outputs(self):\n \"\"\"The number of outputs of a model.\"\"\"\n return self._n_outputs\n\n @n_outputs.setter\n def n_outputs(self, value):\n self._n_outputs = value\n\n @property\n def eqcons(self):\n return self._eqcons\n\n @eqcons.setter\n def eqcons(self, value):\n self._eqcons = value\n\n @property\n def ineqcons(self):\n return self._eqcons\n\n @ineqcons.setter\n def ineqcons(self, value):\n self._eqcons = value\n\n def traverse_postorder(self, include_operator=False):\n \"\"\"Postorder traversal of the CompoundModel tree.\"\"\"\n res = []\n if isinstance(self.left, CompoundModel):\n res = res + self.left.traverse_postorder(include_operator)\n else:\n res = res + [self.left]\n if isinstance(self.right, CompoundModel):\n res = res + self.right.traverse_postorder(include_operator)\n else:\n res = res + [self.right]\n if 
include_operator:\n res.append(self.op)\n else:\n res.append(self)\n return res\n\n def _format_expression(self, format_leaf=None):\n leaf_idx = 0\n operands = deque()\n\n if format_leaf is None:\n format_leaf = lambda i, l: f\"[{i}]\"\n\n for node in self.traverse_postorder():\n if not isinstance(node, CompoundModel):\n operands.append(format_leaf(leaf_idx, node))\n leaf_idx += 1\n continue\n\n right = operands.pop()\n left = operands.pop()\n if node.op in OPERATOR_PRECEDENCE:\n oper_order = OPERATOR_PRECEDENCE[node.op]\n\n if isinstance(node, CompoundModel):\n if (\n isinstance(node.left, CompoundModel)\n and OPERATOR_PRECEDENCE[node.left.op] < oper_order\n ):\n left = f\"({left})\"\n if (\n isinstance(node.right, CompoundModel)\n and OPERATOR_PRECEDENCE[node.right.op] < oper_order\n ):\n right = f\"({right})\"\n\n operands.append(f\"{left} {node.op} {right}\")\n else:\n left = f\"(({left}),\"\n right = f\"({right}))\"\n operands.append(\" \".join((node.op[0], left, right)))\n\n return \"\".join(operands)\n\n def _format_components(self):\n if self._parameters_ is None:\n self._map_parameters()\n return \"\\n\\n\".join(f\"[{idx}]: {m!r}\" for idx, m in enumerate(self._leaflist))\n\n def __str__(self):\n expression = self._format_expression()\n components = self._format_components()\n keywords = [\n (\"Expression\", expression),\n (\"Components\", \"\\n\" + indent(components, 4 * \" \")),\n ]\n return super()._format_str(keywords=keywords)\n\n def rename(self, name):\n self.name = name\n return self\n\n @property\n def isleaf(self):\n return False\n\n @property\n def inverse(self):\n if self.op == \"|\":\n return self.right.inverse | self.left.inverse\n elif self.op == \"&\":\n return self.left.inverse & self.right.inverse\n else:\n return NotImplemented\n\n @property\n def fittable(self):\n \"\"\"Set the fittable attribute on a compound model.\"\"\"\n if self._fittable is None:\n if self._leaflist is None:\n self._map_parameters()\n self._fittable = all(m.fittable for m in self._leaflist)\n return self._fittable\n\n __add__ = _model_oper(\"+\")\n __sub__ = _model_oper(\"-\")\n __mul__ = _model_oper(\"*\")\n __truediv__ = _model_oper(\"/\")\n __pow__ = _model_oper(\"**\")\n __or__ = _model_oper(\"|\")\n __and__ = _model_oper(\"&\")\n\n def _map_parameters(self):\n \"\"\"\n Map all the constituent model parameters to the compound object,\n renaming as necessary by appending a suffix number.\n\n This can be an expensive operation, particularly for a complex\n expression tree.\n\n All the corresponding parameter attributes are created that one\n expects for the Model class.\n\n The parameter objects that the attributes point to are the same\n objects as in the constiutent models. Changes made to parameter\n values to either are seen by both.\n\n Prior to calling this, none of the associated attributes will\n exist. 
This method must be called to make the model usable by\n fitting engines.\n\n If oldnames=True, then parameters are named as in the original\n implementation of compound models.\n \"\"\"\n if self._parameters is not None:\n # do nothing\n return\n if self._leaflist is None:\n self._make_leaflist()\n self._parameters_ = {}\n param_map = {}\n self._param_names = []\n for lindex, leaf in enumerate(self._leaflist):\n if not isinstance(leaf, dict):\n for param_name in leaf.param_names:\n param = getattr(leaf, param_name)\n new_param_name = f\"{param_name}_{lindex}\"\n self.__dict__[new_param_name] = param\n self._parameters_[new_param_name] = param\n self._param_names.append(new_param_name)\n param_map[new_param_name] = (lindex, param_name)\n self._param_metrics = {}\n self._param_map = param_map\n self._param_map_inverse = {v: k for k, v in param_map.items()}\n self._initialize_slices()\n self._param_names = tuple(self._param_names)\n\n def _initialize_slices(self):\n param_metrics = self._param_metrics\n total_size = 0\n\n for name in self.param_names:\n param = getattr(self, name)\n value = param.value\n param_size = np.size(value)\n param_shape = np.shape(value)\n param_slice = slice(total_size, total_size + param_size)\n param_metrics[name] = {}\n param_metrics[name][\"slice\"] = param_slice\n param_metrics[name][\"shape\"] = param_shape\n param_metrics[name][\"size\"] = param_size\n total_size += param_size\n self._parameters = np.empty(total_size, dtype=np.float64)\n\n @staticmethod\n def _recursive_lookup(branch, adict, key):\n if isinstance(branch, CompoundModel):\n return adict[key]\n return branch, key\n\n def inputs_map(self):\n \"\"\"\n Map the names of the inputs to this ExpressionTree to the inputs to the leaf models.\n \"\"\"\n inputs_map = {}\n if not isinstance(\n self.op, str\n ): # If we don't have an operator the mapping is trivial\n return {inp: (self, inp) for inp in self.inputs}\n\n elif self.op == \"|\":\n if isinstance(self.left, CompoundModel):\n l_inputs_map = self.left.inputs_map()\n for inp in self.inputs:\n if isinstance(self.left, CompoundModel):\n inputs_map[inp] = l_inputs_map[inp]\n else:\n inputs_map[inp] = self.left, inp\n elif self.op == \"&\":\n if isinstance(self.left, CompoundModel):\n l_inputs_map = self.left.inputs_map()\n if isinstance(self.right, CompoundModel):\n r_inputs_map = self.right.inputs_map()\n for i, inp in enumerate(self.inputs):\n if i < len(self.left.inputs): # Get from left\n if isinstance(self.left, CompoundModel):\n inputs_map[inp] = l_inputs_map[self.left.inputs[i]]\n else:\n inputs_map[inp] = self.left, self.left.inputs[i]\n else: # Get from right\n if isinstance(self.right, CompoundModel):\n inputs_map[inp] = r_inputs_map[\n self.right.inputs[i - len(self.left.inputs)]\n ]\n else:\n inputs_map[inp] = (\n self.right,\n self.right.inputs[i - len(self.left.inputs)],\n )\n elif self.op == \"fix_inputs\":\n fixed_ind = list(self.right.keys())\n ind = [\n list(self.left.inputs).index(i) if isinstance(i, str) else i\n for i in fixed_ind\n ]\n inp_ind = list(range(self.left.n_inputs))\n for i in ind:\n inp_ind.remove(i)\n for i in inp_ind:\n inputs_map[self.left.inputs[i]] = self.left, self.left.inputs[i]\n else:\n if isinstance(self.left, CompoundModel):\n l_inputs_map = self.left.inputs_map()\n for inp in self.left.inputs:\n if isinstance(self.left, CompoundModel):\n inputs_map[inp] = l_inputs_map[inp]\n else:\n inputs_map[inp] = self.left, inp\n return inputs_map\n\n def _parameter_units_for_data_units(self, input_units, 
output_units):\n if self._leaflist is None:\n self._map_parameters()\n units_for_data = {}\n for imodel, model in enumerate(self._leaflist):\n units_for_data_leaf = model._parameter_units_for_data_units(\n input_units, output_units\n )\n for param_leaf in units_for_data_leaf:\n param = self._param_map_inverse[(imodel, param_leaf)]\n units_for_data[param] = units_for_data_leaf[param_leaf]\n return units_for_data\n\n @property\n def input_units(self):\n inputs_map = self.inputs_map()\n input_units_dict = {\n key: inputs_map[key][0].input_units[orig_key]\n for key, (mod, orig_key) in inputs_map.items()\n if inputs_map[key][0].input_units is not None\n }\n if input_units_dict:\n return input_units_dict\n return None\n\n @property\n def input_units_equivalencies(self):\n inputs_map = self.inputs_map()\n input_units_equivalencies_dict = {\n key: inputs_map[key][0].input_units_equivalencies[orig_key]\n for key, (mod, orig_key) in inputs_map.items()\n if inputs_map[key][0].input_units_equivalencies is not None\n }\n if not input_units_equivalencies_dict:\n return None\n\n return input_units_equivalencies_dict\n\n @property\n def input_units_allow_dimensionless(self):\n inputs_map = self.inputs_map()\n return {\n key: inputs_map[key][0].input_units_allow_dimensionless[orig_key]\n for key, (mod, orig_key) in inputs_map.items()\n }\n\n @property\n def input_units_strict(self):\n inputs_map = self.inputs_map()\n return {\n key: inputs_map[key][0].input_units_strict[orig_key]\n for key, (mod, orig_key) in inputs_map.items()\n }\n\n @property\n def return_units(self):\n outputs_map = self.outputs_map()\n return {\n key: outputs_map[key][0].return_units[orig_key]\n for key, (mod, orig_key) in outputs_map.items()\n if outputs_map[key][0].return_units is not None\n }\n\n def outputs_map(self):\n \"\"\"\n Map the names of the outputs to this ExpressionTree to the outputs to the leaf models.\n \"\"\"\n outputs_map = {}\n if not isinstance(\n self.op, str\n ): # If we don't have an operator the mapping is trivial\n return {out: (self, out) for out in self.outputs}\n\n elif self.op == \"|\":\n if isinstance(self.right, CompoundModel):\n r_outputs_map = self.right.outputs_map()\n for out in self.outputs:\n if isinstance(self.right, CompoundModel):\n outputs_map[out] = r_outputs_map[out]\n else:\n outputs_map[out] = self.right, out\n\n elif self.op == \"&\":\n if isinstance(self.left, CompoundModel):\n l_outputs_map = self.left.outputs_map()\n if isinstance(self.right, CompoundModel):\n r_outputs_map = self.right.outputs_map()\n for i, out in enumerate(self.outputs):\n if i < len(self.left.outputs): # Get from left\n if isinstance(self.left, CompoundModel):\n outputs_map[out] = l_outputs_map[self.left.outputs[i]]\n else:\n outputs_map[out] = self.left, self.left.outputs[i]\n else: # Get from right\n if isinstance(self.right, CompoundModel):\n outputs_map[out] = r_outputs_map[\n self.right.outputs[i - len(self.left.outputs)]\n ]\n else:\n outputs_map[out] = (\n self.right,\n self.right.outputs[i - len(self.left.outputs)],\n )\n elif self.op == \"fix_inputs\":\n return self.left.outputs_map()\n else:\n if isinstance(self.left, CompoundModel):\n l_outputs_map = self.left.outputs_map()\n for out in self.left.outputs:\n if isinstance(self.left, CompoundModel):\n outputs_map[out] = l_outputs_map()[out]\n else:\n outputs_map[out] = self.left, out\n return outputs_map\n\n @property\n def has_user_bounding_box(self):\n \"\"\"\n A flag indicating whether or not a custom bounding_box has been\n assigned to this model by a 
user, via assignment to\n ``model.bounding_box``.\n \"\"\"\n return self._user_bounding_box is not None\n\n def render(self, out=None, coords=None):\n \"\"\"\n Evaluate a model at fixed positions, respecting the ``bounding_box``.\n\n The key difference relative to evaluating the model directly is that\n this method is limited to a bounding box if the `Model.bounding_box`\n attribute is set.\n\n Parameters\n ----------\n out : `numpy.ndarray`, optional\n An array that the evaluated model will be added to. If this is not\n given (or given as ``None``), a new array will be created.\n coords : array-like, optional\n An array to be used to translate from the model's input coordinates\n to the ``out`` array. It should have the property that\n ``self(coords)`` yields the same shape as ``out``. If ``out`` is\n not specified, ``coords`` will be used to determine the shape of\n the returned array. If this is not provided (or None), the model\n will be evaluated on a grid determined by `Model.bounding_box`.\n\n Returns\n -------\n out : `numpy.ndarray`\n The model added to ``out`` if ``out`` is not ``None``, or else a\n new array from evaluating the model over ``coords``.\n If ``out`` and ``coords`` are both `None`, the returned array is\n limited to the `Model.bounding_box` limits. If\n `Model.bounding_box` is `None`, ``arr`` or ``coords`` must be\n passed.\n\n Raises\n ------\n ValueError\n If ``coords`` are not given and the `Model.bounding_box` of\n this model is not set.\n\n Examples\n --------\n :ref:`astropy:bounding-boxes`\n \"\"\"\n bbox = self.get_bounding_box()\n\n ndim = self.n_inputs\n\n if (coords is None) and (out is None) and (bbox is None):\n raise ValueError(\"If no bounding_box is set, coords or out must be input.\")\n\n # for consistent indexing\n if ndim == 1:\n if coords is not None:\n coords = [coords]\n if bbox is not None:\n bbox = [bbox]\n\n if coords is not None:\n coords = np.asanyarray(coords, dtype=float)\n # Check dimensions match out and model\n assert len(coords) == ndim\n if out is not None:\n if coords[0].shape != out.shape:\n raise ValueError(\"inconsistent shape of the output.\")\n else:\n out = np.zeros(coords[0].shape)\n\n if out is not None:\n out = np.asanyarray(out)\n if out.ndim != ndim:\n raise ValueError(\n \"the array and model must have the same number of dimensions.\"\n )\n\n if bbox is not None:\n # Assures position is at center pixel, important when using\n # add_array.\n pd = (\n np.array([(np.mean(bb), np.ceil((bb[1] - bb[0]) / 2)) for bb in bbox])\n .astype(int)\n .T\n )\n pos, delta = pd\n\n if coords is not None:\n sub_shape = tuple(delta * 2 + 1)\n sub_coords = np.array(\n [extract_array(c, sub_shape, pos) for c in coords]\n )\n else:\n limits = [slice(p - d, p + d + 1, 1) for p, d in pd.T]\n sub_coords = np.mgrid[limits]\n\n sub_coords = sub_coords[::-1]\n\n if out is None:\n out = self(*sub_coords)\n else:\n try:\n out = add_array(out, self(*sub_coords), pos)\n except ValueError:\n raise ValueError(\n \"The `bounding_box` is larger than the input out in \"\n \"one or more dimensions. 
Set \"\n \"`model.bounding_box = None`.\"\n )\n else:\n if coords is None:\n im_shape = out.shape\n limits = [slice(i) for i in im_shape]\n coords = np.mgrid[limits]\n\n coords = coords[::-1]\n\n out += self(*coords)\n\n return out\n\n def replace_submodel(self, name, model):\n \"\"\"\n Construct a new `~astropy.modeling.CompoundModel` instance from an\n existing CompoundModel, replacing the named submodel with a new model.\n\n In order to ensure that inverses and names are kept/reconstructed, it's\n necessary to rebuild the CompoundModel from the replaced node all the\n way back to the base. The original CompoundModel is left untouched.\n\n Parameters\n ----------\n name : str\n name of submodel to be replaced\n model : `~astropy.modeling.Model`\n replacement model\n \"\"\"\n submodels = [\n m for m in self.traverse_postorder() if getattr(m, \"name\", None) == name\n ]\n if submodels:\n if len(submodels) > 1:\n raise ValueError(f\"More than one submodel named {name}\")\n\n old_model = submodels.pop()\n if len(old_model) != len(model):\n raise ValueError(\n \"New and old models must have equal values for n_models\"\n )\n\n # Do this check first in order to raise a more helpful Exception,\n # although it would fail trying to construct the new CompoundModel\n if (\n old_model.n_inputs != model.n_inputs\n or old_model.n_outputs != model.n_outputs\n ):\n raise ValueError(\n \"New model must match numbers of inputs and \"\n \"outputs of existing model\"\n )\n\n tree = _get_submodel_path(self, name)\n while tree:\n branch = self.copy()\n for node in tree[:-1]:\n branch = getattr(branch, node)\n setattr(branch, tree[-1], model)\n model = CompoundModel(\n branch.op, branch.left, branch.right, name=branch.name\n )\n tree = tree[:-1]\n return model\n\n else:\n raise ValueError(f\"No submodels found named {name}\")\n\n def _set_sub_models_and_parameter_units(self, left, right):\n \"\"\"\n Provides a work-around to properly set the sub models and respective\n parameters's units/values when using ``without_units_for_data``\n or ``without_units_for_data`` methods.\n \"\"\"\n model = CompoundModel(self.op, left, right)\n\n self.left = left\n self.right = right\n\n for name in model.param_names:\n model_parameter = getattr(model, name)\n parameter = getattr(self, name)\n\n parameter.value = model_parameter.value\n parameter._set_unit(model_parameter.unit, force=True)\n\n def without_units_for_data(self, **kwargs):\n \"\"\"\n See `~astropy.modeling.Model.without_units_for_data` for overview\n of this method.\n\n Notes\n -----\n This modifies the behavior of the base method to account for the\n case where the sub-models of a compound model have different output\n units. This is only valid for compound * and / compound models as\n in that case it is reasonable to mix the output units. 
It does this\n by modifying the output units of each sub model by using the output\n units of the other sub model so that we can apply the original function\n and get the desired result.\n\n Additional data has to be output in the mixed output unit case\n so that the units can be properly rebuilt by\n `~astropy.modeling.CompoundModel.with_units_from_data`.\n\n Outside the mixed output units, this method is identical to the\n base method.\n \"\"\"\n if self.op in [\"*\", \"/\"]:\n model = self.copy()\n inputs = {inp: kwargs[inp] for inp in self.inputs}\n\n left_units = self.left.output_units(**kwargs)\n right_units = self.right.output_units(**kwargs)\n\n if self.op == \"*\":\n left_kwargs = {\n out: kwargs[out] / right_units[out]\n for out in self.left.outputs\n if kwargs[out] is not None\n }\n right_kwargs = {\n out: kwargs[out] / left_units[out]\n for out in self.right.outputs\n if kwargs[out] is not None\n }\n else:\n left_kwargs = {\n out: kwargs[out] * right_units[out]\n for out in self.left.outputs\n if kwargs[out] is not None\n }\n right_kwargs = {\n out: 1 / kwargs[out] * left_units[out]\n for out in self.right.outputs\n if kwargs[out] is not None\n }\n\n left_kwargs.update(inputs.copy())\n right_kwargs.update(inputs.copy())\n\n left = self.left.without_units_for_data(**left_kwargs)\n if isinstance(left, tuple):\n left_kwargs[\"_left_kwargs\"] = left[1]\n left_kwargs[\"_right_kwargs\"] = left[2]\n left = left[0]\n\n right = self.right.without_units_for_data(**right_kwargs)\n if isinstance(right, tuple):\n right_kwargs[\"_left_kwargs\"] = right[1]\n right_kwargs[\"_right_kwargs\"] = right[2]\n right = right[0]\n\n model._set_sub_models_and_parameter_units(left, right)\n\n return model, left_kwargs, right_kwargs\n else:\n return super().without_units_for_data(**kwargs)\n\n def with_units_from_data(self, **kwargs):\n \"\"\"\n See `~astropy.modeling.Model.with_units_from_data` for overview\n of this method.\n\n Notes\n -----\n This modifies the behavior of the base method to account for the\n case where the sub-models of a compound model have different output\n units. This is only valid for compound * and / compound models as\n in that case it is reasonable to mix the output units. In order to\n do this it requires some additional information output by\n `~astropy.modeling.CompoundModel.without_units_for_data` passed as\n keyword arguments under the keywords ``_left_kwargs`` and ``_right_kwargs``.\n\n Outside the mixed output units, this method is identical to the\n base method.\n \"\"\"\n if self.op in [\"*\", \"/\"]:\n left_kwargs = kwargs.pop(\"_left_kwargs\")\n right_kwargs = kwargs.pop(\"_right_kwargs\")\n\n left = self.left.with_units_from_data(**left_kwargs)\n right = self.right.with_units_from_data(**right_kwargs)\n\n model = self.copy()\n model._set_sub_models_and_parameter_units(left, right)\n\n return model\n else:\n return super().with_units_from_data(**kwargs)\n\n\ndef _get_submodel_path(model, name):\n \"\"\"Find the route down a CompoundModel's tree to the model with the\n specified name (whether it's a leaf or not).\n \"\"\"\n if getattr(model, \"name\", None) == name:\n return []\n try:\n return [\"left\"] + _get_submodel_path(model.left, name)\n except (AttributeError, TypeError):\n pass\n try:\n return [\"right\"] + _get_submodel_path(model.right, name)\n except (AttributeError, TypeError):\n pass\n\n\ndef binary_operation(binoperator, left, right):\n \"\"\"\n Perform binary operation. 
Operands may be matching tuples of operands.\n \"\"\"\n if isinstance(left, tuple) and isinstance(right, tuple):\n return tuple(binoperator(item[0], item[1]) for item in zip(left, right))\n return binoperator(left, right)\n\n\ndef get_ops(tree, opset):\n \"\"\"\n Recursive function to collect operators used.\n \"\"\"\n if isinstance(tree, CompoundModel):\n opset.add(tree.op)\n get_ops(tree.left, opset)\n get_ops(tree.right, opset)\n else:\n return\n\n\ndef make_subtree_dict(tree, nodepath, tdict, leaflist):\n \"\"\"Traverse a tree noting each node by a key.\n\n The key indicates all the left/right choices necessary to reach that node.\n Each key will reference a tuple that contains:\n\n - reference to the compound model for that node.\n - left most index contained within that subtree\n (relative to all indices for the whole tree)\n - right most index contained within that subtree\n \"\"\"\n # if this is a leaf, just append it to the leaflist\n if not hasattr(tree, \"isleaf\"):\n leaflist.append(tree)\n else:\n leftmostind = len(leaflist)\n make_subtree_dict(tree.left, nodepath + \"l\", tdict, leaflist)\n make_subtree_dict(tree.right, nodepath + \"r\", tdict, leaflist)\n rightmostind = len(leaflist) - 1\n tdict[nodepath] = (tree, leftmostind, rightmostind)\n\n\n_ORDER_OF_OPERATORS = [(\"fix_inputs\",), (\"|\",), (\"&\",), (\"+\", \"-\"), (\"*\", \"/\"), (\"**\",)]\nOPERATOR_PRECEDENCE = {}\nfor idx, ops in enumerate(_ORDER_OF_OPERATORS):\n for op in ops:\n OPERATOR_PRECEDENCE[op] = idx\ndel idx, op, ops\n\n\ndef fix_inputs(modelinstance, values, bounding_boxes=None, selector_args=None):\n \"\"\"\n This function creates a compound model with one or more of the input\n values of the input model assigned fixed values (scalar or array).\n\n Parameters\n ----------\n modelinstance : `~astropy.modeling.Model` instance\n This is the model that one or more of the\n model input values will be fixed to some constant value.\n values : dict\n A dictionary where the key identifies which input to fix\n and its value is the value to fix it at. 
The key may either be the\n name of the input or a number reflecting its order in the inputs.\n\n Examples\n --------\n >>> from astropy.modeling.models import Gaussian2D\n >>> g = Gaussian2D(1, 2, 3, 4, 5)\n >>> gv = fix_inputs(g, {0: 2.5})\n\n Results in a 1D function equivalent to Gaussian2D(1, 2, 3, 4, 5)(x=2.5, y)\n \"\"\"\n model = CompoundModel(\"fix_inputs\", modelinstance, values)\n if bounding_boxes is not None:\n if selector_args is None:\n selector_args = tuple((key, True) for key in values.keys())\n bbox = CompoundBoundingBox.validate(\n modelinstance, bounding_boxes, selector_args\n )\n _selector = bbox.selector_args.get_fixed_values(modelinstance, values)\n\n new_bbox = bbox[_selector]\n new_bbox = new_bbox.__class__.validate(model, new_bbox)\n\n model.bounding_box = new_bbox\n return model\n\n\ndef bind_bounding_box(modelinstance, bounding_box, ignored=None, order=\"C\"):\n \"\"\"\n Set a validated bounding box to a model instance.\n\n Parameters\n ----------\n modelinstance : `~astropy.modeling.Model` instance\n This is the model that the validated bounding box will be set on.\n bounding_box : tuple\n A bounding box tuple, see :ref:`astropy:bounding-boxes` for details\n ignored : list\n List of the inputs to be ignored by the bounding box.\n order : str, optional\n The ordering of the bounding box tuple, can be either ``'C'`` or\n ``'F'``.\n \"\"\"\n modelinstance.bounding_box = ModelBoundingBox.validate(\n modelinstance, bounding_box, ignored=ignored, order=order\n )\n\n\ndef bind_compound_bounding_box(\n modelinstance,\n bounding_boxes,\n selector_args,\n create_selector=None,\n ignored=None,\n order=\"C\",\n):\n \"\"\"\n Add a validated compound bounding box to a model instance.\n\n Parameters\n ----------\n modelinstance : `~astropy.modeling.Model` instance\n This is the model that the validated compound bounding box will be set on.\n bounding_boxes : dict\n A dictionary of bounding box tuples, see :ref:`astropy:bounding-boxes`\n for details.\n selector_args : list\n List of selector argument tuples to define selection for compound\n bounding box, see :ref:`astropy:bounding-boxes` for details.\n create_selector : callable, optional\n An optional callable with interface (selector_value, model) which\n can generate a bounding box based on a selector value and model if\n there is no bounding box in the compound bounding box listed under\n that selector value. Default is ``None``, meaning new bounding\n box entries will not be automatically generated.\n ignored : list\n List of the inputs to be ignored by the bounding box.\n order : str, optional\n The ordering of the bounding box tuple, can be either ``'C'`` or\n ``'F'``.\n \"\"\"\n modelinstance.bounding_box = CompoundBoundingBox.validate(\n modelinstance,\n bounding_boxes,\n selector_args,\n create_selector=create_selector,\n ignored=ignored,\n order=order,\n )\n\n\ndef custom_model(*args, fit_deriv=None):\n \"\"\"\n Create a model from a user defined function. The inputs and parameters of\n the model will be inferred from the arguments of the function.\n\n This can be used either as a function or as a decorator. See below for\n examples of both usages.\n\n The model is separable only if there is a single input.\n\n .. note::\n\n All model parameters have to be defined as keyword arguments with\n default values in the model function. 
Use `None` as a default argument\n value if you do not want to have a default value for that parameter.\n\n The standard settable model properties can be configured by default\n using keyword arguments matching the name of the property; however,\n these values are not set as model \"parameters\". Moreover, users\n cannot use keyword arguments matching non-settable model properties,\n with the exception of ``n_outputs`` which should be set to the number of\n outputs of your function.\n\n Parameters\n ----------\n func : function\n Function which defines the model. It should take N positional\n arguments where ``N`` is dimensions of the model (the number of\n independent variable in the model), and any number of keyword arguments\n (the parameters). It must return the value of the model (typically as\n an array, but can also be a scalar for scalar inputs). This\n corresponds to the `~astropy.modeling.Model.evaluate` method.\n fit_deriv : function, optional\n Function which defines the Jacobian derivative of the model. I.e., the\n derivative with respect to the *parameters* of the model. It should\n have the same argument signature as ``func``, but should return a\n sequence where each element of the sequence is the derivative\n with respect to the corresponding argument. This corresponds to the\n :meth:`~astropy.modeling.FittableModel.fit_deriv` method.\n\n Examples\n --------\n Define a sinusoidal model function as a custom 1D model::\n\n >>> from astropy.modeling.models import custom_model\n >>> import numpy as np\n >>> def sine_model(x, amplitude=1., frequency=1.):\n ... return amplitude * np.sin(2 * np.pi * frequency * x)\n >>> def sine_deriv(x, amplitude=1., frequency=1.):\n ... return 2 * np.pi * amplitude * np.cos(2 * np.pi * frequency * x)\n >>> SineModel = custom_model(sine_model, fit_deriv=sine_deriv)\n\n Create an instance of the custom model and evaluate it::\n\n >>> model = SineModel()\n >>> model(0.25) # doctest: +FLOAT_CMP\n 1.0\n\n This model instance can now be used like a usual astropy model.\n\n The next example demonstrates a 2D Moffat function model, and also\n demonstrates the support for docstrings (this example could also include\n a derivative, but it has been omitted for simplicity)::\n\n >>> @custom_model\n ... def Moffat2D(x, y, amplitude=1.0, x_0=0.0, y_0=0.0, gamma=1.0,\n ... alpha=1.0):\n ... \\\"\\\"\\\"Two dimensional Moffat function.\\\"\\\"\\\"\n ... rr_gg = ((x - x_0) ** 2 + (y - y_0) ** 2) / gamma ** 2\n ... return amplitude * (1 + rr_gg) ** (-alpha)\n ...\n >>> print(Moffat2D.__doc__)\n Two dimensional Moffat function.\n >>> model = Moffat2D()\n >>> model(1, 1) # doctest: +FLOAT_CMP\n 0.3333333333333333\n \"\"\"\n if len(args) == 1 and callable(args[0]):\n return _custom_model_wrapper(args[0], fit_deriv=fit_deriv)\n elif not args:\n return functools.partial(_custom_model_wrapper, fit_deriv=fit_deriv)\n else:\n raise TypeError(\n f\"{__name__} takes at most one positional argument (the callable/\"\n \"function to be turned into a model. 
When used as a decorator \"\n \"it should be passed keyword arguments only (if \"\n \"any).\"\n )\n\n\ndef _custom_model_inputs(func):\n \"\"\"\n Processes the inputs to the `custom_model`'s function into the appropriate\n categories.\n\n Parameters\n ----------\n func : callable\n\n Returns\n -------\n inputs : list\n list of evaluation inputs\n special_params : dict\n dictionary of model properties which require special treatment\n settable_params : dict\n dictionary of defaults for settable model properties\n params : dict\n dictionary of model parameters set by `custom_model`'s function\n \"\"\"\n inputs, parameters = get_inputs_and_params(func)\n\n special = [\"n_outputs\"]\n settable = [\n attr\n for attr, value in vars(Model).items()\n if isinstance(value, property) and value.fset is not None\n ]\n properties = [\n attr\n for attr, value in vars(Model).items()\n if isinstance(value, property) and value.fset is None and attr not in special\n ]\n\n special_params = {}\n settable_params = {}\n params = {}\n for param in parameters:\n if param.name in special:\n special_params[param.name] = param.default\n elif param.name in settable:\n settable_params[param.name] = param.default\n elif param.name in properties:\n raise ValueError(\n f\"Parameter '{param.name}' cannot be a model property: {properties}.\"\n )\n else:\n params[param.name] = param.default\n\n return inputs, special_params, settable_params, params\n\n\ndef _custom_model_wrapper(func, fit_deriv=None):\n \"\"\"\n Internal implementation `custom_model`.\n\n When `custom_model` is called as a function its arguments are passed to\n this function, and the result of this function is returned.\n\n When `custom_model` is used as a decorator a partial evaluation of this\n function is returned by `custom_model`.\n \"\"\"\n if not callable(func):\n raise ModelDefinitionError(\n \"func is not callable; it must be a function or other callable object\"\n )\n\n if fit_deriv is not None and not callable(fit_deriv):\n raise ModelDefinitionError(\n \"fit_deriv not callable; it must be a function or other callable object\"\n )\n\n model_name = func.__name__\n\n inputs, special_params, settable_params, params = _custom_model_inputs(func)\n\n if fit_deriv is not None and len(fit_deriv.__defaults__) != len(params):\n raise ModelDefinitionError(\n \"derivative function should accept same number of parameters as func.\"\n )\n\n params = {\n param: Parameter(param, default=default) for param, default in params.items()\n }\n\n mod = find_current_module(2)\n if mod:\n modname = mod.__name__\n else:\n modname = \"__main__\"\n\n members = {\n \"__module__\": str(modname),\n \"__doc__\": func.__doc__,\n \"n_inputs\": len(inputs),\n \"n_outputs\": special_params.pop(\"n_outputs\", 1),\n \"evaluate\": staticmethod(func),\n \"_settable_properties\": settable_params,\n }\n\n if fit_deriv is not None:\n members[\"fit_deriv\"] = staticmethod(fit_deriv)\n\n members.update(params)\n\n cls = type(model_name, (FittableModel,), members)\n cls._separable = len(inputs) == 1\n return cls\n\n\ndef render_model(model, arr=None, coords=None):\n \"\"\"\n Evaluates a model on an input array. 
Evaluation is limited to\n a bounding box if the `Model.bounding_box` attribute is set.\n\n Parameters\n ----------\n model : `Model`\n Model to be evaluated.\n arr : `numpy.ndarray`, optional\n Array on which the model is evaluated.\n coords : array-like, optional\n Coordinate arrays mapping to ``arr``, such that\n ``arr[coords] == arr``.\n\n Returns\n -------\n array : `numpy.ndarray`\n The model evaluated on the input ``arr`` or a new array from\n ``coords``.\n If ``arr`` and ``coords`` are both `None`, the returned array is\n limited to the `Model.bounding_box` limits. If\n `Model.bounding_box` is `None`, ``arr`` or ``coords`` must be passed.\n\n Examples\n --------\n :ref:`astropy:bounding-boxes`\n \"\"\"\n bbox = model.bounding_box\n\n if (coords is None) & (arr is None) & (bbox is None):\n raise ValueError(\"If no bounding_box is set, coords or arr must be input.\")\n\n # for consistent indexing\n if model.n_inputs == 1:\n if coords is not None:\n coords = [coords]\n if bbox is not None:\n bbox = [bbox]\n\n if arr is not None:\n arr = arr.copy()\n # Check dimensions match model\n if arr.ndim != model.n_inputs:\n raise ValueError(\n \"number of array dimensions inconsistent with number of model inputs.\"\n )\n if coords is not None:\n # Check dimensions match arr and model\n coords = np.array(coords)\n if len(coords) != model.n_inputs:\n raise ValueError(\n \"coordinate length inconsistent with the number of model inputs.\"\n )\n if arr is not None:\n if coords[0].shape != arr.shape:\n raise ValueError(\"coordinate shape inconsistent with the array shape.\")\n else:\n arr = np.zeros(coords[0].shape)\n\n if bbox is not None:\n # assures position is at center pixel, important when using add_array\n pd = pos, delta = (\n np.array([(np.mean(bb), np.ceil((bb[1] - bb[0]) / 2)) for bb in bbox])\n .astype(int)\n .T\n )\n\n if coords is not None:\n sub_shape = tuple(delta * 2 + 1)\n sub_coords = np.array([extract_array(c, sub_shape, pos) for c in coords])\n else:\n limits = [slice(p - d, p + d + 1, 1) for p, d in pd.T]\n sub_coords = np.mgrid[limits]\n\n sub_coords = sub_coords[::-1]\n\n if arr is None:\n arr = model(*sub_coords)\n else:\n try:\n arr = add_array(arr, model(*sub_coords), pos)\n except ValueError:\n raise ValueError(\n \"The `bounding_box` is larger than the input\"\n \" arr in one or more dimensions. Set \"\n \"`model.bounding_box = None`.\"\n )\n else:\n if coords is None:\n im_shape = arr.shape\n limits = [slice(i) for i in im_shape]\n coords = np.mgrid[limits]\n\n arr += model(*coords[::-1])\n\n return arr\n\n\ndef hide_inverse(model):\n \"\"\"\n This is a convenience function intended to disable automatic generation\n of the inverse in compound models by disabling one of the constituent\n model's inverse. This is to handle cases where user provided inverse\n functions are not compatible within an expression.\n\n For example::\n\n compound_model.inverse = hide_inverse(m1) + m2 + m3\n\n This will insure that the defined inverse itself won't attempt to\n build its own inverse, which would otherwise fail in this example\n (e.g., m = m1 + m2 + m3 happens to raises an exception for this\n reason.)\n\n Note that this permanently disables it. 
To prevent that either copy\n the model or restore the inverse later.\n \"\"\"\n del model.inverse\n return model\n", "astropy/modeling/fitting.py": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\n\"\"\"\nThis module implements classes (called Fitters) which combine optimization\nalgorithms (typically from `scipy.optimize`) with statistic functions to perform\nfitting. Fitters are implemented as callable classes. In addition to the data\nto fit, the ``__call__`` method takes an instance of\n`~astropy.modeling.core.FittableModel` as input, and returns a copy of the\nmodel with its parameters determined by the optimizer.\n\nOptimization algorithms, called \"optimizers\" are implemented in\n`~astropy.modeling.optimizers` and statistic functions are in\n`~astropy.modeling.statistic`. The goal is to provide an easy to extend\nframework and allow users to easily create new fitters by combining statistics\nwith optimizers.\n\nThere are two exceptions to the above scheme.\n`~astropy.modeling.fitting.LinearLSQFitter` uses Numpy's `~numpy.linalg.lstsq`\nfunction. `~astropy.modeling.fitting.LevMarLSQFitter` uses\n`~scipy.optimize.leastsq` which combines optimization and statistic in one\nimplementation.\n\"\"\"\n# pylint: disable=invalid-name\n\nimport abc\nimport inspect\nimport operator\nimport warnings\nfrom functools import reduce, wraps\nfrom importlib.metadata import entry_points\n\nimport numpy as np\n\nfrom astropy.units import Quantity\nfrom astropy.utils.exceptions import AstropyUserWarning\n\nfrom .optimizers import DEFAULT_ACC, DEFAULT_EPS, DEFAULT_MAXITER, SLSQP, Simplex\nfrom .spline import (\n SplineExactKnotsFitter,\n SplineInterpolateFitter,\n SplineSmoothingFitter,\n SplineSplrepFitter,\n)\nfrom .statistic import leastsquare\nfrom .utils import _combine_equivalency_dict, poly_map_domain\n\n__all__ = [\n \"LinearLSQFitter\",\n \"LevMarLSQFitter\",\n \"TRFLSQFitter\",\n \"DogBoxLSQFitter\",\n \"LMLSQFitter\",\n \"FittingWithOutlierRemoval\",\n \"SLSQPLSQFitter\",\n \"SimplexLSQFitter\",\n \"JointFitter\",\n \"Fitter\",\n \"ModelLinearityError\",\n \"ModelsError\",\n \"SplineExactKnotsFitter\",\n \"SplineInterpolateFitter\",\n \"SplineSmoothingFitter\",\n \"SplineSplrepFitter\",\n]\n\n\n# Statistic functions implemented in `astropy.modeling.statistic.py\nSTATISTICS = [leastsquare]\n\n# Optimizers implemented in `astropy.modeling.optimizers.py\nOPTIMIZERS = [Simplex, SLSQP]\n\n\nclass NonFiniteValueError(RuntimeError):\n \"\"\"\n Error raised when attempting to a non-finite value.\n \"\"\"\n\n\nclass Covariance:\n \"\"\"Class for covariance matrix calculated by fitter.\"\"\"\n\n def __init__(self, cov_matrix, param_names):\n self.cov_matrix = cov_matrix\n self.param_names = param_names\n\n def pprint(self, max_lines, round_val):\n # Print and label lower triangle of covariance matrix\n # Print rows for params up to `max_lines`, round floats to 'round_val'\n longest_name = max(len(x) for x in self.param_names)\n ret_str = \"parameter variances / covariances \\n\"\n fstring = f'{\"\": <{longest_name}}| {{0}}\\n'\n for i, row in enumerate(self.cov_matrix):\n if i <= max_lines - 1:\n param = self.param_names[i]\n ret_str += fstring.replace(\" \" * len(param), param, 1).format(\n repr(np.round(row[: i + 1], round_val))[7:-2]\n )\n else:\n ret_str += \"...\"\n return ret_str.rstrip()\n\n def __repr__(self):\n return self.pprint(max_lines=10, round_val=3)\n\n def __getitem__(self, params):\n # index covariance matrix by parameter names or indices\n if len(params) != 2:\n 
raise ValueError(\"Covariance must be indexed by two values.\")\n if all(isinstance(item, str) for item in params):\n i1, i2 = (\n self.param_names.index(params[0]),\n self.param_names.index(params[1]),\n )\n elif all(isinstance(item, int) for item in params):\n i1, i2 = params\n else:\n raise TypeError(\n \"Covariance can be indexed by two parameter names or integer indices.\"\n )\n return self.cov_matrix[i1][i2]\n\n\nclass StandardDeviations:\n \"\"\"Class for fitting uncertainties.\"\"\"\n\n def __init__(self, cov_matrix, param_names):\n self.param_names = param_names\n self.stds = self._calc_stds(cov_matrix)\n\n def _calc_stds(self, cov_matrix):\n # sometimes scipy lstsq returns a non-sensical negative vals in the\n # diagonals of the cov_x it computes.\n stds = [np.sqrt(x) if x > 0 else None for x in np.diag(cov_matrix)]\n return stds\n\n def pprint(self, max_lines, round_val):\n longest_name = max(len(x) for x in self.param_names)\n ret_str = \"standard deviations\\n\"\n for i, std in enumerate(self.stds):\n if i <= max_lines - 1:\n param = self.param_names[i]\n ret_str += (\n f\"{param}{' ' * (longest_name - len(param))}| \"\n f\"{np.round(std, round_val)}\\n\"\n )\n else:\n ret_str += \"...\"\n return ret_str.rstrip()\n\n def __repr__(self):\n return self.pprint(max_lines=10, round_val=3)\n\n def __getitem__(self, param):\n if isinstance(param, str):\n i = self.param_names.index(param)\n elif isinstance(param, int):\n i = param\n else:\n raise TypeError(\n \"Standard deviation can be indexed by parameter name or integer.\"\n )\n return self.stds[i]\n\n\nclass ModelsError(Exception):\n \"\"\"Base class for model exceptions.\"\"\"\n\n\nclass ModelLinearityError(ModelsError):\n \"\"\"Raised when a non-linear model is passed to a linear fitter.\"\"\"\n\n\nclass UnsupportedConstraintError(ModelsError, ValueError):\n \"\"\"\n Raised when a fitter does not support a type of constraint.\n \"\"\"\n\n\nclass _FitterMeta(abc.ABCMeta):\n \"\"\"\n Currently just provides a registry for all Fitter classes.\n \"\"\"\n\n registry = set()\n\n def __new__(mcls, name, bases, members):\n cls = super().__new__(mcls, name, bases, members)\n\n if not inspect.isabstract(cls) and not name.startswith(\"_\"):\n mcls.registry.add(cls)\n\n return cls\n\n\ndef fitter_unit_support(func):\n \"\"\"\n This is a decorator that can be used to add support for dealing with\n quantities to any __call__ method on a fitter which may not support\n quantities itself. This is done by temporarily removing units from all\n parameters then adding them back once the fitting has completed.\n \"\"\"\n\n @wraps(func)\n def wrapper(self, model, x, y, z=None, **kwargs):\n equivalencies = kwargs.pop(\"equivalencies\", None)\n\n data_has_units = (\n isinstance(x, Quantity)\n or isinstance(y, Quantity)\n or isinstance(z, Quantity)\n )\n\n model_has_units = model._has_units\n\n if data_has_units or model_has_units:\n if model._supports_unit_fitting:\n # We now combine any instance-level input equivalencies with user\n # specified ones at call-time.\n\n input_units_equivalencies = _combine_equivalency_dict(\n model.inputs, equivalencies, model.input_units_equivalencies\n )\n\n # If input_units is defined, we transform the input data into those\n # expected by the model. 
We hard-code the input names 'x', and 'y'\n # here since FittableModel instances have input names ('x',) or\n # ('x', 'y')\n\n if model.input_units is not None:\n if isinstance(x, Quantity):\n x = x.to(\n model.input_units[model.inputs[0]],\n equivalencies=input_units_equivalencies[model.inputs[0]],\n )\n if isinstance(y, Quantity) and z is not None:\n y = y.to(\n model.input_units[model.inputs[1]],\n equivalencies=input_units_equivalencies[model.inputs[1]],\n )\n\n # Create a dictionary mapping the real model inputs and outputs\n # names to the data. This remapping of names must be done here, after\n # the input data is converted to the correct units.\n rename_data = {model.inputs[0]: x}\n if z is not None:\n rename_data[model.outputs[0]] = z\n rename_data[model.inputs[1]] = y\n else:\n rename_data[model.outputs[0]] = y\n rename_data[\"z\"] = None\n\n # We now strip away the units from the parameters, taking care to\n # first convert any parameters to the units that correspond to the\n # input units (to make sure that initial guesses on the parameters)\n # are in the right unit system\n model = model.without_units_for_data(**rename_data)\n if isinstance(model, tuple):\n rename_data[\"_left_kwargs\"] = model[1]\n rename_data[\"_right_kwargs\"] = model[2]\n model = model[0]\n\n # We strip away the units from the input itself\n add_back_units = False\n\n if isinstance(x, Quantity):\n add_back_units = True\n xdata = x.value\n else:\n xdata = np.asarray(x)\n\n if isinstance(y, Quantity):\n add_back_units = True\n ydata = y.value\n else:\n ydata = np.asarray(y)\n\n if z is not None:\n if isinstance(z, Quantity):\n add_back_units = True\n zdata = z.value\n else:\n zdata = np.asarray(z)\n # We run the fitting\n if z is None:\n model_new = func(self, model, xdata, ydata, **kwargs)\n else:\n model_new = func(self, model, xdata, ydata, zdata, **kwargs)\n\n # And finally we add back units to the parameters\n if add_back_units:\n model_new = model_new.with_units_from_data(**rename_data)\n return model_new\n\n else:\n raise NotImplementedError(\n \"This model does not support being fit to data with units.\"\n )\n\n else:\n return func(self, model, x, y, z=z, **kwargs)\n\n return wrapper\n\n\nclass Fitter(metaclass=_FitterMeta):\n \"\"\"\n Base class for all fitters.\n\n Parameters\n ----------\n optimizer : callable\n A callable implementing an optimization algorithm\n statistic : callable\n Statistic function\n\n \"\"\"\n\n supported_constraints = []\n\n def __init__(self, optimizer, statistic):\n if optimizer is None:\n raise ValueError(\"Expected an optimizer.\")\n if statistic is None:\n raise ValueError(\"Expected a statistic function.\")\n if isinstance(optimizer, type):\n # a callable class\n self._opt_method = optimizer()\n elif inspect.isfunction(optimizer):\n self._opt_method = optimizer\n else:\n raise ValueError(\"Expected optimizer to be a callable class or a function.\")\n if isinstance(statistic, type):\n self._stat_method = statistic()\n else:\n self._stat_method = statistic\n\n def objective_function(self, fps, *args):\n \"\"\"\n Function to minimize.\n\n Parameters\n ----------\n fps : list\n parameters returned by the fitter\n args : list\n [model, [other_args], [input coordinates]]\n other_args may include weights or any other quantities specific for\n a statistic\n\n Notes\n -----\n The list of arguments (args) is set in the `__call__` method.\n Fitters may overwrite this method, e.g. 
when statistic functions\n require other arguments.\n\n \"\"\"\n model = args[0]\n meas = args[-1]\n fitter_to_model_params(model, fps)\n res = self._stat_method(meas, model, *args[1:-1])\n return res\n\n @staticmethod\n def _add_fitting_uncertainties(*args):\n \"\"\"\n When available, calculate and sets the parameter covariance matrix\n (model.cov_matrix) and standard deviations (model.stds).\n \"\"\"\n return None\n\n @abc.abstractmethod\n def __call__(self):\n \"\"\"\n This method performs the actual fitting and modifies the parameter list\n of a model.\n Fitter subclasses should implement this method.\n \"\"\"\n raise NotImplementedError(\"Subclasses should implement this method.\")\n\n\n# TODO: I have ongoing branch elsewhere that's refactoring this module so that\n# all the fitter classes in here are Fitter subclasses. In the meantime we\n# need to specify that _FitterMeta is its metaclass.\nclass LinearLSQFitter(metaclass=_FitterMeta):\n \"\"\"\n A class performing a linear least square fitting.\n Uses `numpy.linalg.lstsq` to do the fitting.\n Given a model and data, fits the model to the data and changes the\n model's parameters. Keeps a dictionary of auxiliary fitting information.\n\n Notes\n -----\n Note that currently LinearLSQFitter does not support compound models.\n \"\"\"\n\n supported_constraints = [\"fixed\"]\n supports_masked_input = True\n\n def __init__(self, calc_uncertainties=False):\n self.fit_info = {\n \"residuals\": None,\n \"rank\": None,\n \"singular_values\": None,\n \"params\": None,\n }\n self._calc_uncertainties = calc_uncertainties\n\n @staticmethod\n def _is_invertible(m):\n \"\"\"Check if inverse of matrix can be obtained.\"\"\"\n if m.shape[0] != m.shape[1]:\n return False\n if np.linalg.matrix_rank(m) < m.shape[0]:\n return False\n return True\n\n def _add_fitting_uncertainties(self, model, a, n_coeff, x, y, z=None, resids=None):\n \"\"\"\n Calculate and parameter covariance matrix and standard deviations\n and set `cov_matrix` and `stds` attributes.\n \"\"\"\n x_dot_x_prime = np.dot(a.T, a)\n masked = False or hasattr(y, \"mask\")\n\n # check if invertible. if not, can't calc covariance.\n if not self._is_invertible(x_dot_x_prime):\n return model\n inv_x_dot_x_prime = np.linalg.inv(x_dot_x_prime)\n\n if z is None: # 1D models\n if len(model) == 1: # single model\n mask = None\n if masked:\n mask = y.mask\n xx = np.ma.array(x, mask=mask)\n RSS = [(1 / (xx.count() - n_coeff)) * resids]\n\n if len(model) > 1: # model sets\n RSS = [] # collect sum residuals squared for each model in set\n for j in range(len(model)):\n mask = None\n if masked:\n mask = y.mask[..., j].flatten()\n xx = np.ma.array(x, mask=mask)\n eval_y = model(xx, model_set_axis=False)\n eval_y = np.rollaxis(eval_y, model.model_set_axis)[j]\n RSS.append(\n (1 / (xx.count() - n_coeff)) * np.sum((y[..., j] - eval_y) ** 2)\n )\n\n else: # 2D model\n if len(model) == 1:\n mask = None\n if masked:\n warnings.warn(\n \"Calculation of fitting uncertainties \"\n \"for 2D models with masked values not \"\n \"currently supported.\\n\",\n AstropyUserWarning,\n )\n return\n xx, _ = np.ma.array(x, mask=mask), np.ma.array(y, mask=mask)\n # len(xx) instead of xx.count. 
this will break if values are masked?\n RSS = [(1 / (len(xx) - n_coeff)) * resids]\n else:\n RSS = []\n for j in range(len(model)):\n eval_z = model(x, y, model_set_axis=False)\n mask = None # need to figure out how to deal w/ masking here.\n if model.model_set_axis == 1:\n # model_set_axis passed when evaluating only refers to input shapes\n # so output must be reshaped for model_set_axis=1.\n eval_z = np.rollaxis(eval_z, 1)\n eval_z = eval_z[j]\n RSS.append(\n [(1 / (len(x) - n_coeff)) * np.sum((z[j] - eval_z) ** 2)]\n )\n\n covs = [inv_x_dot_x_prime * r for r in RSS]\n free_param_names = [\n x\n for x in model.fixed\n if (model.fixed[x] is False) and (model.tied[x] is False)\n ]\n\n if len(covs) == 1:\n model.cov_matrix = Covariance(covs[0], model.param_names)\n model.stds = StandardDeviations(covs[0], free_param_names)\n else:\n model.cov_matrix = [Covariance(cov, model.param_names) for cov in covs]\n model.stds = [StandardDeviations(cov, free_param_names) for cov in covs]\n\n @staticmethod\n def _deriv_with_constraints(model, param_indices, x=None, y=None):\n if y is None:\n d = np.array(model.fit_deriv(x, *model.parameters))\n else:\n d = np.array(model.fit_deriv(x, y, *model.parameters))\n\n if model.col_fit_deriv:\n return d[param_indices]\n else:\n return d[..., param_indices]\n\n def _map_domain_window(self, model, x, y=None):\n \"\"\"\n Maps domain into window for a polynomial model which has these\n attributes.\n \"\"\"\n if y is None:\n if hasattr(model, \"domain\") and model.domain is None:\n model.domain = [x.min(), x.max()]\n if hasattr(model, \"window\") and model.window is None:\n model.window = [-1, 1]\n return poly_map_domain(x, model.domain, model.window)\n else:\n if hasattr(model, \"x_domain\") and model.x_domain is None:\n model.x_domain = [x.min(), x.max()]\n if hasattr(model, \"y_domain\") and model.y_domain is None:\n model.y_domain = [y.min(), y.max()]\n if hasattr(model, \"x_window\") and model.x_window is None:\n model.x_window = [-1.0, 1.0]\n if hasattr(model, \"y_window\") and model.y_window is None:\n model.y_window = [-1.0, 1.0]\n\n xnew = poly_map_domain(x, model.x_domain, model.x_window)\n ynew = poly_map_domain(y, model.y_domain, model.y_window)\n return xnew, ynew\n\n @fitter_unit_support\n def __call__(self, model, x, y, z=None, weights=None, rcond=None):\n \"\"\"\n Fit data to this model.\n\n Parameters\n ----------\n model : `~astropy.modeling.FittableModel`\n model to fit to x, y, z\n x : array\n Input coordinates\n y : array-like\n Input coordinates\n z : array-like, optional\n Input coordinates.\n If the dependent (``y`` or ``z``) coordinate values are provided\n as a `numpy.ma.MaskedArray`, any masked points are ignored when\n fitting. Note that model set fitting is significantly slower when\n there are masked points (not just an empty mask), as the matrix\n equation has to be solved for each model separately when their\n coordinate grids differ.\n weights : array, optional\n Weights for fitting.\n For data with Gaussian uncertainties, the weights should be\n 1/sigma.\n rcond : float, optional\n Cut-off ratio for small singular values of ``a``.\n Singular values are set to zero if they are smaller than ``rcond``\n times the largest singular value of ``a``.\n equivalencies : list or None, optional, keyword-only\n List of *additional* equivalencies that are should be applied in\n case x, y and/or z have units. 
Default is None.\n\n Returns\n -------\n model_copy : `~astropy.modeling.FittableModel`\n a copy of the input model with parameters set by the fitter\n\n \"\"\"\n if not model.fittable:\n raise ValueError(\"Model must be a subclass of FittableModel\")\n\n if not model.linear:\n raise ModelLinearityError(\n \"Model is not linear in parameters, \"\n \"linear fit methods should not be used.\"\n )\n\n if hasattr(model, \"submodel_names\"):\n raise ValueError(\"Model must be simple, not compound\")\n\n _validate_constraints(self.supported_constraints, model)\n\n model_copy = model.copy()\n model_copy.sync_constraints = False\n _, fitparam_indices, _ = model_to_fit_params(model_copy)\n\n if model_copy.n_inputs == 2 and z is None:\n raise ValueError(\"Expected x, y and z for a 2 dimensional model.\")\n\n farg = _convert_input(\n x, y, z, n_models=len(model_copy), model_set_axis=model_copy.model_set_axis\n )\n\n n_fixed = sum(model_copy.fixed.values())\n\n # This is also done by _convert_inputs, but we need it here to allow\n # checking the array dimensionality before that gets called:\n if weights is not None:\n weights = np.asarray(weights, dtype=float)\n\n if n_fixed:\n # The list of fixed params is the complement of those being fitted:\n fixparam_indices = [\n idx\n for idx in range(len(model_copy.param_names))\n if idx not in fitparam_indices\n ]\n\n # Construct matrix of user-fixed parameters that can be dotted with\n # the corresponding fit_deriv() terms, to evaluate corrections to\n # the dependent variable in order to fit only the remaining terms:\n fixparams = np.asarray(\n [\n getattr(model_copy, model_copy.param_names[idx]).value\n for idx in fixparam_indices\n ]\n )\n\n if len(farg) == 2:\n x, y = farg\n\n if weights is not None:\n # If we have separate weights for each model, apply the same\n # conversion as for the data, otherwise check common weights\n # as if for a single model:\n _, weights = _convert_input(\n x,\n weights,\n n_models=len(model_copy) if weights.ndim == y.ndim else 1,\n model_set_axis=model_copy.model_set_axis,\n )\n\n # map domain into window\n if hasattr(model_copy, \"domain\"):\n x = self._map_domain_window(model_copy, x)\n if n_fixed:\n lhs = np.asarray(\n self._deriv_with_constraints(model_copy, fitparam_indices, x=x)\n )\n fixderivs = self._deriv_with_constraints(\n model_copy, fixparam_indices, x=x\n )\n else:\n lhs = np.asarray(model_copy.fit_deriv(x, *model_copy.parameters))\n sum_of_implicit_terms = model_copy.sum_of_implicit_terms(x)\n rhs = y\n else:\n x, y, z = farg\n\n if weights is not None:\n # If we have separate weights for each model, apply the same\n # conversion as for the data, otherwise check common weights\n # as if for a single model:\n _, _, weights = _convert_input(\n x,\n y,\n weights,\n n_models=len(model_copy) if weights.ndim == z.ndim else 1,\n model_set_axis=model_copy.model_set_axis,\n )\n\n # map domain into window\n if hasattr(model_copy, \"x_domain\"):\n x, y = self._map_domain_window(model_copy, x, y)\n\n if n_fixed:\n lhs = np.asarray(\n self._deriv_with_constraints(model_copy, fitparam_indices, x=x, y=y)\n )\n fixderivs = self._deriv_with_constraints(\n model_copy, fixparam_indices, x=x, y=y\n )\n else:\n lhs = np.asanyarray(model_copy.fit_deriv(x, y, *model_copy.parameters))\n sum_of_implicit_terms = model_copy.sum_of_implicit_terms(x, y)\n\n if len(model_copy) > 1:\n # Just to be explicit (rather than baking in False == 0):\n model_axis = model_copy.model_set_axis or 0\n\n if z.ndim > 2:\n # For higher-dimensional z, 
flatten all the axes except the\n # dimension along which models are stacked and transpose so\n # the model axis is *last* (I think this resolves Erik's\n # pending generalization from 80a6f25a):\n rhs = np.rollaxis(z, model_axis, z.ndim)\n rhs = rhs.reshape(-1, rhs.shape[-1])\n else:\n # This \"else\" seems to handle the corner case where the\n # user has already flattened x/y before attempting a 2D fit\n # but z has a second axis for the model set. NB. This is\n # ~5-10x faster than using rollaxis.\n rhs = z.T if model_axis == 0 else z\n\n if weights is not None:\n # Same for weights\n if weights.ndim > 2:\n # Separate 2D weights for each model:\n weights = np.rollaxis(weights, model_axis, weights.ndim)\n weights = weights.reshape(-1, weights.shape[-1])\n elif weights.ndim == z.ndim:\n # Separate, flattened weights for each model:\n weights = weights.T if model_axis == 0 else weights\n else:\n # Common weights for all the models:\n weights = weights.flatten()\n else:\n rhs = z.flatten()\n if weights is not None:\n weights = weights.flatten()\n\n # If the derivative is defined along rows (as with non-linear models)\n if model_copy.col_fit_deriv:\n lhs = np.asarray(lhs).T\n\n # Some models (eg. Polynomial1D) don't flatten multi-dimensional inputs\n # when constructing their Vandermonde matrix, which can lead to obscure\n # failures below. Ultimately, np.linalg.lstsq can't handle >2D matrices,\n # so just raise a slightly more informative error when this happens:\n if np.asanyarray(lhs).ndim > 2:\n raise ValueError(\n f\"{type(model_copy).__name__} gives unsupported >2D \"\n \"derivative matrix for this x/y\"\n )\n\n # Subtract any terms fixed by the user from (a copy of) the RHS, in\n # order to fit the remaining terms correctly:\n if n_fixed:\n if model_copy.col_fit_deriv:\n fixderivs = np.asarray(fixderivs).T # as for lhs above\n rhs = rhs - fixderivs.dot(fixparams) # evaluate user-fixed terms\n\n # Subtract any terms implicit in the model from the RHS, which, like\n # user-fixed terms, affect the dependent variable but are not fitted:\n if sum_of_implicit_terms is not None:\n # If we have a model set, the extra axis must be added to\n # sum_of_implicit_terms as its innermost dimension, to match the\n # dimensionality of rhs after _convert_input \"rolls\" it as needed\n # by np.linalg.lstsq. The vector then gets broadcast to the right\n # number of sets (columns). This assumes all the models share the\n # same input coordinates, as is currently the case.\n if len(model_copy) > 1:\n sum_of_implicit_terms = sum_of_implicit_terms[..., np.newaxis]\n rhs = rhs - sum_of_implicit_terms\n\n if weights is not None:\n if rhs.ndim == 2:\n if weights.shape == rhs.shape:\n # separate weights for multiple models case: broadcast\n # lhs to have more dimension (for each model)\n lhs = lhs[..., np.newaxis] * weights[:, np.newaxis]\n rhs = rhs * weights\n else:\n lhs *= weights[:, np.newaxis]\n # Don't modify in-place in case rhs was the original\n # dependent variable array\n rhs = rhs * weights[:, np.newaxis]\n else:\n lhs *= weights[:, np.newaxis]\n rhs = rhs * weights\n\n scl = (lhs * lhs).sum(0)\n lhs /= scl\n\n masked = np.any(np.ma.getmask(rhs))\n if weights is not None and not masked and np.any(np.isnan(lhs)):\n raise ValueError(\n \"Found NaNs in the coefficient matrix, which \"\n \"should not happen and would crash the lapack \"\n \"routine. 
Maybe check that weights are not null.\"\n )\n\n a = None # need for calculating covariance\n\n if (masked and len(model_copy) > 1) or (\n weights is not None and weights.ndim > 1\n ):\n # Separate masks or weights for multiple models case: Numpy's\n # lstsq supports multiple dimensions only for rhs, so we need to\n # loop manually on the models. This may be fixed in the future\n # with https://github.com/numpy/numpy/pull/15777.\n\n # Initialize empty array of coefficients and populate it one model\n # at a time. The shape matches the number of coefficients from the\n # Vandermonde matrix and the number of models from the RHS:\n lacoef = np.zeros(lhs.shape[1:2] + rhs.shape[-1:], dtype=rhs.dtype)\n\n # Arrange the lhs as a stack of 2D matrices that we can iterate\n # over to get the correctly-orientated lhs for each model:\n if lhs.ndim > 2:\n lhs_stack = np.rollaxis(lhs, -1, 0)\n else:\n lhs_stack = np.broadcast_to(lhs, rhs.shape[-1:] + lhs.shape)\n\n # Loop over the models and solve for each one. By this point, the\n # model set axis is the second of two. Transpose rather than using,\n # say, np.moveaxis(array, -1, 0), since it's slightly faster and\n # lstsq can't handle >2D arrays anyway. This could perhaps be\n # optimized by collecting together models with identical masks\n # (eg. those with no rejected points) into one operation, though it\n # will still be relatively slow when calling lstsq repeatedly.\n for model_lhs, model_rhs, model_lacoef in zip(lhs_stack, rhs.T, lacoef.T):\n # Cull masked points on both sides of the matrix equation:\n good = ~model_rhs.mask if masked else slice(None)\n model_lhs = model_lhs[good]\n model_rhs = model_rhs[good][..., np.newaxis]\n a = model_lhs\n\n # Solve for this model:\n t_coef, resids, rank, sval = np.linalg.lstsq(\n model_lhs, model_rhs, rcond\n )\n model_lacoef[:] = t_coef.T\n\n else:\n # If we're fitting one or more models over a common set of points,\n # we only have to solve a single matrix equation, which is an order\n # of magnitude faster than calling lstsq() once per model below:\n\n good = ~rhs.mask if masked else slice(None) # latter is a no-op\n a = lhs[good]\n # Solve for one or more models:\n lacoef, resids, rank, sval = np.linalg.lstsq(lhs[good], rhs[good], rcond)\n\n self.fit_info[\"residuals\"] = resids\n self.fit_info[\"rank\"] = rank\n self.fit_info[\"singular_values\"] = sval\n\n lacoef /= scl[:, np.newaxis] if scl.ndim < rhs.ndim else scl\n self.fit_info[\"params\"] = lacoef\n\n fitter_to_model_params(model_copy, lacoef.flatten())\n\n # TODO: Only Polynomial models currently have an _order attribute;\n # maybe change this to read isinstance(model, PolynomialBase)\n if (\n hasattr(model_copy, \"_order\")\n and len(model_copy) == 1\n and rank < (model_copy._order - n_fixed)\n ):\n warnings.warn(\"The fit may be poorly conditioned\\n\", AstropyUserWarning)\n\n # calculate and set covariance matrix and standard devs. 
on model\n if self._calc_uncertainties:\n if len(y) > len(lacoef):\n self._add_fitting_uncertainties(\n model_copy, a * scl, len(lacoef), x, y, z, resids\n )\n model_copy.sync_constraints = True\n return model_copy\n\n\nclass FittingWithOutlierRemoval:\n \"\"\"\n This class combines an outlier removal technique with a fitting procedure.\n Basically, given a maximum number of iterations ``niter``, outliers are\n removed and fitting is performed for each iteration, until no new outliers\n are found or ``niter`` is reached.\n\n Parameters\n ----------\n fitter : `Fitter`\n An instance of any Astropy fitter, i.e., LinearLSQFitter,\n LevMarLSQFitter, SLSQPLSQFitter, SimplexLSQFitter, JointFitter. For\n model set fitting, this must understand masked input data (as\n indicated by the fitter class attribute ``supports_masked_input``).\n outlier_func : callable\n A function for outlier removal.\n If this accepts an ``axis`` parameter like the `numpy` functions, the\n appropriate value will be supplied automatically when fitting model\n sets (unless overridden in ``outlier_kwargs``), to find outliers for\n each model separately; otherwise, the same filtering must be performed\n in a loop over models, which is almost an order of magnitude slower.\n niter : int, optional\n Maximum number of iterations.\n outlier_kwargs : dict, optional\n Keyword arguments for outlier_func.\n\n Attributes\n ----------\n fit_info : dict\n The ``fit_info`` (if any) from the last iteration of the wrapped\n ``fitter`` during the most recent fit. An entry is also added with the\n keyword ``niter`` that records the actual number of fitting iterations\n performed (as opposed to the user-specified maximum).\n \"\"\"\n\n def __init__(self, fitter, outlier_func, niter=3, **outlier_kwargs):\n self.fitter = fitter\n self.outlier_func = outlier_func\n self.niter = niter\n self.outlier_kwargs = outlier_kwargs\n self.fit_info = {\"niter\": None}\n\n def __str__(self):\n return (\n f\"Fitter: {self.fitter.__class__.__name__}\\n\"\n f\"Outlier function: {self.outlier_func.__name__}\\n\"\n f\"Num. of iterations: {self.niter}\\n\"\n f\"Outlier func. args.: {self.outlier_kwargs}\"\n )\n\n def __repr__(self):\n return (\n f\"{self.__class__.__name__}(fitter: {self.fitter.__class__.__name__}, \"\n f\"outlier_func: {self.outlier_func.__name__},\"\n f\" niter: {self.niter}, outlier_kwargs: {self.outlier_kwargs})\"\n )\n\n def __call__(self, model, x, y, z=None, weights=None, **kwargs):\n \"\"\"\n Parameters\n ----------\n model : `~astropy.modeling.FittableModel`\n An analytic model which will be fit to the provided data.\n This also contains the initial guess for an optimization\n algorithm.\n x : array-like\n Input coordinates.\n y : array-like\n Data measurements (1D case) or input coordinates (2D case).\n z : array-like, optional\n Data measurements (2D case).\n weights : array-like, optional\n Weights to be passed to the fitter.\n kwargs : dict, optional\n Keyword arguments to be passed to the fitter.\n\n Returns\n -------\n fitted_model : `~astropy.modeling.FittableModel`\n Fitted model after outlier removal.\n mask : `numpy.ndarray`\n Boolean mask array, identifying which points were used in the final\n fitting iteration (False) and which were found to be outliers or\n were masked in the input (True).\n \"\"\"\n # For single models, the data get filtered here at each iteration and\n # then passed to the fitter, which is the historical behavior and\n # works even for fitters that don't understand masked arrays. 
For model\n # sets, the fitter must be able to filter masked data internally,\n # because fitters require a single set of x/y coordinates whereas the\n # eliminated points can vary between models. To avoid this limitation,\n # we could fall back to looping over individual model fits, but it\n # would likely be fiddly and involve even more overhead (and the\n # non-linear fitters don't work with model sets anyway, as of writing).\n\n if len(model) == 1:\n model_set_axis = None\n else:\n if (\n not hasattr(self.fitter, \"supports_masked_input\")\n or self.fitter.supports_masked_input is not True\n ):\n raise ValueError(\n f\"{type(self.fitter).__name__} cannot fit model sets with masked \"\n \"values\"\n )\n\n # Fitters use their input model's model_set_axis to determine how\n # their input data are stacked:\n model_set_axis = model.model_set_axis\n # Construct input coordinate tuples for fitters & models that are\n # appropriate for the dimensionality being fitted:\n if z is None:\n coords = (x,)\n data = y\n else:\n coords = x, y\n data = z\n\n # For model sets, construct a numpy-standard \"axis\" tuple for the\n # outlier function, to treat each model separately (if supported):\n if model_set_axis is not None:\n if model_set_axis < 0:\n model_set_axis += data.ndim\n\n if \"axis\" not in self.outlier_kwargs: # allow user override\n # This also works for False (like model instantiation):\n self.outlier_kwargs[\"axis\"] = tuple(\n n for n in range(data.ndim) if n != model_set_axis\n )\n\n loop = False\n\n # Starting fit, prior to any iteration and masking:\n fitted_model = self.fitter(model, x, y, z, weights=weights, **kwargs)\n filtered_data = np.ma.masked_array(data)\n if filtered_data.mask is np.ma.nomask:\n filtered_data.mask = False\n filtered_weights = weights\n last_n_masked = filtered_data.mask.sum()\n n = 0 # (allow recording no. 
of iterations when 0)\n\n # Perform the iterative fitting:\n for n in range(1, self.niter + 1):\n # (Re-)evaluate the last model:\n model_vals = fitted_model(*coords, model_set_axis=False)\n\n # Determine the outliers:\n if not loop:\n # Pass axis parameter if outlier_func accepts it, otherwise\n # prepare for looping over models:\n try:\n filtered_data = self.outlier_func(\n filtered_data - model_vals, **self.outlier_kwargs\n )\n # If this happens to catch an error with a parameter other\n # than axis, the next attempt will fail accordingly:\n except TypeError:\n if model_set_axis is None:\n raise\n else:\n self.outlier_kwargs.pop(\"axis\", None)\n loop = True\n\n # Construct MaskedArray to hold filtered values:\n filtered_data = np.ma.masked_array(\n filtered_data,\n dtype=np.result_type(filtered_data, model_vals),\n copy=True,\n )\n # Make sure the mask is an array, not just nomask:\n if filtered_data.mask is np.ma.nomask:\n filtered_data.mask = False\n\n # Get views transposed appropriately for iteration\n # over the set (handling data & mask separately due to\n # NumPy issue #8506):\n data_T = np.rollaxis(filtered_data, model_set_axis, 0)\n mask_T = np.rollaxis(filtered_data.mask, model_set_axis, 0)\n\n if loop:\n model_vals_T = np.rollaxis(model_vals, model_set_axis, 0)\n for row_data, row_mask, row_mod_vals in zip(\n data_T, mask_T, model_vals_T\n ):\n masked_residuals = self.outlier_func(\n row_data - row_mod_vals, **self.outlier_kwargs\n )\n row_data.data[:] = masked_residuals.data\n row_mask[:] = masked_residuals.mask\n\n # Issue speed warning after the fact, so it only shows up when\n # the TypeError is genuinely due to the axis argument.\n warnings.warn(\n \"outlier_func did not accept axis argument; \"\n \"reverted to slow loop over models.\",\n AstropyUserWarning,\n )\n\n # Recombine newly-masked residuals with model to get masked values:\n filtered_data += model_vals\n\n # Re-fit the data after filtering, passing masked/unmasked values\n # for single models / sets, respectively:\n if model_set_axis is None:\n good = ~filtered_data.mask\n\n if weights is not None:\n filtered_weights = weights[good]\n\n fitted_model = self.fitter(\n fitted_model,\n *(c[good] for c in coords),\n filtered_data.data[good],\n weights=filtered_weights,\n **kwargs,\n )\n else:\n fitted_model = self.fitter(\n fitted_model,\n *coords,\n filtered_data,\n weights=filtered_weights,\n **kwargs,\n )\n\n # Stop iteration if the masked points are no longer changing (with\n # cumulative rejection we only need to compare how many there are):\n this_n_masked = filtered_data.mask.sum() # (minimal overhead)\n if this_n_masked == last_n_masked:\n break\n last_n_masked = this_n_masked\n\n self.fit_info = {\"niter\": n}\n self.fit_info.update(getattr(self.fitter, \"fit_info\", {}))\n\n return fitted_model, filtered_data.mask\n\n\nclass _NonLinearLSQFitter(metaclass=_FitterMeta):\n \"\"\"\n Base class for Non-Linear least-squares fitters.\n\n Parameters\n ----------\n calc_uncertainties : bool\n If the covariance matrix should be computed and set in the fit_info.\n Default: False\n use_min_max_bounds : bool\n If the set parameter bounds for a model will be enforced each given\n parameter while fitting via a simple min/max condition.\n Default: True\n \"\"\"\n\n supported_constraints = [\"fixed\", \"tied\", \"bounds\"]\n \"\"\"\n The constraint types supported by this fitter type.\n \"\"\"\n\n def __init__(self, calc_uncertainties=False, use_min_max_bounds=True):\n self.fit_info = None\n self._calc_uncertainties = 
calc_uncertainties\n self._use_min_max_bounds = use_min_max_bounds\n super().__init__()\n\n def objective_function(self, fps, *args):\n \"\"\"\n Function to minimize.\n\n Parameters\n ----------\n fps : list\n parameters returned by the fitter\n args : list\n [model, [weights], [input coordinates]]\n\n \"\"\"\n model = args[0]\n weights = args[1]\n fitter_to_model_params(model, fps, self._use_min_max_bounds)\n meas = args[-1]\n\n if weights is None:\n value = np.ravel(model(*args[2:-1]) - meas)\n else:\n value = np.ravel(weights * (model(*args[2:-1]) - meas))\n\n if not np.all(np.isfinite(value)):\n raise NonFiniteValueError(\n \"Objective function has encountered a non-finite value, \"\n \"this will cause the fit to fail!\\n\"\n \"Please remove non-finite values from your input data before \"\n \"fitting to avoid this error.\"\n )\n\n return value\n\n @staticmethod\n def _add_fitting_uncertainties(model, cov_matrix):\n \"\"\"\n Set ``cov_matrix`` and ``stds`` attributes on model with parameter\n covariance matrix returned by ``optimize.leastsq``.\n \"\"\"\n free_param_names = [\n x\n for x in model.fixed\n if (model.fixed[x] is False) and (model.tied[x] is False)\n ]\n\n model.cov_matrix = Covariance(cov_matrix, free_param_names)\n model.stds = StandardDeviations(cov_matrix, free_param_names)\n\n @staticmethod\n def _wrap_deriv(params, model, weights, x, y, z=None):\n \"\"\"\n Wraps the method calculating the Jacobian of the function to account\n for model constraints.\n `scipy.optimize.leastsq` expects the function derivative to have the\n above signature (parlist, (argtuple)). In order to accommodate model\n constraints, instead of using p directly, we set the parameter list in\n this function.\n \"\"\"\n if weights is None:\n weights = 1.0\n\n if any(model.fixed.values()) or any(model.tied.values()):\n # update the parameters with the current values from the fitter\n fitter_to_model_params(model, params)\n if z is None:\n full = np.array(model.fit_deriv(x, *model.parameters))\n if not model.col_fit_deriv:\n full_deriv = np.ravel(weights) * full.T\n else:\n full_deriv = np.ravel(weights) * full\n else:\n full = np.array(\n [np.ravel(_) for _ in model.fit_deriv(x, y, *model.parameters)]\n )\n if not model.col_fit_deriv:\n full_deriv = np.ravel(weights) * full.T\n else:\n full_deriv = np.ravel(weights) * full\n\n pars = [getattr(model, name) for name in model.param_names]\n fixed = [par.fixed for par in pars]\n tied = [par.tied for par in pars]\n tied = list(np.where([par.tied is not False for par in pars], True, tied))\n fix_and_tie = np.logical_or(fixed, tied)\n ind = np.logical_not(fix_and_tie)\n\n if not model.col_fit_deriv:\n residues = np.asarray(full_deriv[np.nonzero(ind)]).T\n else:\n residues = full_deriv[np.nonzero(ind)]\n\n return [np.ravel(_) for _ in residues]\n else:\n if z is None:\n fit_deriv = np.array(model.fit_deriv(x, *params))\n try:\n output = np.array(\n [np.ravel(_) for _ in np.array(weights) * fit_deriv]\n )\n if output.shape != fit_deriv.shape:\n output = np.array(\n [np.ravel(_) for _ in np.atleast_2d(weights).T * fit_deriv]\n )\n return output\n except ValueError:\n return np.array(\n [\n np.ravel(_)\n for _ in np.array(weights) * np.moveaxis(fit_deriv, -1, 0)\n ]\n ).transpose()\n else:\n if not model.col_fit_deriv:\n return [\n np.ravel(_)\n for _ in (\n np.ravel(weights)\n * np.array(model.fit_deriv(x, y, *params)).T\n ).T\n ]\n return [\n np.ravel(_)\n for _ in weights * np.array(model.fit_deriv(x, y, *params))\n ]\n\n def _compute_param_cov(\n self, model, 
y, init_values, cov_x, fitparams, farg, weights=None\n ):\n # now try to compute the true covariance matrix\n if (len(y) > len(init_values)) and cov_x is not None:\n self.fit_info[\"param_cov\"] = cov_x\n if weights is None:\n # if there are no measurement uncertainties given in `weights`,\n # fall back on the default behavior in scipy.optimize.curve_fit\n # when `absolute_sigma == False`. If there are uncertainties,\n # assume they are \"absolute\" and not \"relative\".\n # For details, see curve_fit:\n # https://github.com/scipy/scipy/blob/\n # c1ed5ece8ffbf05356a22a8106affcd11bd3aee0/scipy/\n # optimize/_minpack_py.py#L591-L602\n sum_sqrs = np.sum(self.objective_function(fitparams, *farg) ** 2)\n dof = len(y) - len(init_values)\n self.fit_info[\"param_cov\"] *= sum_sqrs / dof\n else:\n self.fit_info[\"param_cov\"] = None\n if self._calc_uncertainties is True:\n if self.fit_info[\"param_cov\"] is not None:\n self._add_fitting_uncertainties(model, self.fit_info[\"param_cov\"])\n\n def _run_fitter(self, model, farg, maxiter, acc, epsilon, estimate_jacobian):\n return None, None, None\n\n def _filter_non_finite(self, x, y, z=None, weights=None):\n \"\"\"\n Filter out non-finite values in x, y, z.\n\n Returns\n -------\n x, y, z : ndarrays\n x, y, and z with non-finite values filtered out.\n \"\"\"\n MESSAGE = \"Non-Finite input data has been removed by the fitter.\"\n\n mask = np.ones_like(x, dtype=bool) if weights is None else np.isfinite(weights)\n mask &= np.isfinite(y) if z is None else np.isfinite(z)\n\n if not np.all(mask):\n warnings.warn(MESSAGE, AstropyUserWarning)\n\n return (\n x[mask],\n y[mask],\n None if z is None else z[mask],\n None if weights is None else weights[mask],\n )\n\n @fitter_unit_support\n def __call__(\n self,\n model,\n x,\n y,\n z=None,\n weights=None,\n maxiter=DEFAULT_MAXITER,\n acc=DEFAULT_ACC,\n epsilon=DEFAULT_EPS,\n estimate_jacobian=False,\n filter_non_finite=False,\n ):\n \"\"\"\n Fit data to this model.\n\n Parameters\n ----------\n model : `~astropy.modeling.FittableModel`\n model to fit to x, y, z\n x : array\n input coordinates\n y : array\n input coordinates\n z : array, optional\n input coordinates\n weights : array, optional\n Weights for fitting. For data with Gaussian uncertainties, the weights\n should be 1/sigma.\n\n .. versionchanged:: 5.3\n Calculate parameter covariances while accounting for ``weights``\n as \"absolute\" inverse uncertainties. To recover the old behavior,\n choose ``weights=None``.\n\n maxiter : int\n maximum number of iterations\n acc : float\n Relative error desired in the approximate solution\n epsilon : float\n A suitable step length for the forward-difference\n approximation of the Jacobian (if model.fjac=None). If\n epsfcn is less than the machine precision, it is\n assumed that the relative errors in the functions are\n of the order of the machine precision.\n estimate_jacobian : bool\n If False (default) and if the model has a fit_deriv method,\n it will be used. Otherwise the Jacobian will be estimated.\n If True, the Jacobian will be estimated in any case.\n equivalencies : list or None, optional, keyword-only\n List of *additional* equivalencies that are should be applied in\n case x, y and/or z have units. Default is None.\n filter_non_finite : bool, optional\n Whether or not to filter data with non-finite values. 
Default is False\n\n Returns\n -------\n model_copy : `~astropy.modeling.FittableModel`\n a copy of the input model with parameters set by the fitter\n\n \"\"\"\n model_copy = _validate_model(model, self.supported_constraints)\n model_copy.sync_constraints = False\n\n if filter_non_finite:\n x, y, z, weights = self._filter_non_finite(x, y, z, weights)\n farg = (\n model_copy,\n weights,\n ) + _convert_input(x, y, z)\n\n init_values, fitparams, cov_x = self._run_fitter(\n model_copy, farg, maxiter, acc, epsilon, estimate_jacobian\n )\n\n self._compute_param_cov(\n model_copy, y, init_values, cov_x, fitparams, farg, weights\n )\n\n model_copy.sync_constraints = True\n return model_copy\n\n\nclass LevMarLSQFitter(_NonLinearLSQFitter):\n \"\"\"\n Levenberg-Marquardt algorithm and least squares statistic.\n\n Parameters\n ----------\n calc_uncertainties : bool\n If the covariance matrix should be computed and set in the fit_info.\n Default: False\n\n Attributes\n ----------\n fit_info : dict\n The `scipy.optimize.leastsq` result for the most recent fit (see\n notes).\n\n Notes\n -----\n The ``fit_info`` dictionary contains the values returned by\n `scipy.optimize.leastsq` for the most recent fit, including the values from\n the ``infodict`` dictionary it returns. See the `scipy.optimize.leastsq`\n documentation for details on the meaning of these values. Note that the\n ``x`` return value is *not* included (as it is instead the parameter values\n of the returned model).\n Additionally, one additional element of ``fit_info`` is computed whenever a\n model is fit, with the key 'param_cov'. The corresponding value is the\n covariance matrix of the parameters as a 2D numpy array. The order of the\n matrix elements matches the order of the parameters in the fitted model\n (i.e., the same order as ``model.param_names``).\n\n \"\"\"\n\n def __init__(self, calc_uncertainties=False):\n super().__init__(calc_uncertainties)\n self.fit_info = {\n \"nfev\": None,\n \"fvec\": None,\n \"fjac\": None,\n \"ipvt\": None,\n \"qtf\": None,\n \"message\": None,\n \"ierr\": None,\n \"param_jac\": None,\n \"param_cov\": None,\n }\n\n def _run_fitter(self, model, farg, maxiter, acc, epsilon, estimate_jacobian):\n from scipy import optimize\n\n if model.fit_deriv is None or estimate_jacobian:\n dfunc = None\n else:\n dfunc = self._wrap_deriv\n init_values, _, _ = model_to_fit_params(model)\n fitparams, cov_x, dinfo, mess, ierr = optimize.leastsq(\n self.objective_function,\n init_values,\n args=farg,\n Dfun=dfunc,\n col_deriv=model.col_fit_deriv,\n maxfev=maxiter,\n epsfcn=epsilon,\n xtol=acc,\n full_output=True,\n )\n fitter_to_model_params(model, fitparams)\n self.fit_info.update(dinfo)\n self.fit_info[\"cov_x\"] = cov_x\n self.fit_info[\"message\"] = mess\n self.fit_info[\"ierr\"] = ierr\n if ierr not in [1, 2, 3, 4]:\n warnings.warn(\n \"The fit may be unsuccessful; check \"\n \"fit_info['message'] for more information.\",\n AstropyUserWarning,\n )\n\n return init_values, fitparams, cov_x\n\n\nclass _NLLSQFitter(_NonLinearLSQFitter):\n \"\"\"\n Wrapper class for `scipy.optimize.least_squares` method, which provides:\n - Trust Region Reflective\n - dogbox\n - Levenberg-Marquardt\n algorithms using the least squares statistic.\n\n Parameters\n ----------\n method : str\n ‘trf’ : Trust Region Reflective algorithm, particularly suitable\n for large sparse problems with bounds. Generally robust method.\n ‘dogbox’ : dogleg algorithm with rectangular trust regions, typical\n use case is small problems with bounds. 
Not recommended for\n problems with rank-deficient Jacobian.\n ‘lm’ : Levenberg-Marquardt algorithm as implemented in MINPACK.\n Doesn’t handle bounds and sparse Jacobians. Usually the most\n efficient method for small unconstrained problems.\n calc_uncertainties : bool\n If the covariance matrix should be computed and set in the fit_info.\n Default: False\n use_min_max_bounds: bool\n If the set parameter bounds for a model will be enforced each given\n parameter while fitting via a simple min/max condition. A True setting\n will replicate how LevMarLSQFitter enforces bounds.\n Default: False\n\n Attributes\n ----------\n fit_info :\n A `scipy.optimize.OptimizeResult` class which contains all of\n the most recent fit information\n \"\"\"\n\n def __init__(self, method, calc_uncertainties=False, use_min_max_bounds=False):\n super().__init__(calc_uncertainties, use_min_max_bounds)\n self._method = method\n\n def _run_fitter(self, model, farg, maxiter, acc, epsilon, estimate_jacobian):\n from scipy import optimize\n from scipy.linalg import svd\n\n if model.fit_deriv is None or estimate_jacobian:\n dfunc = \"2-point\"\n else:\n\n def _dfunc(params, model, weights, x, y, z=None):\n if model.col_fit_deriv:\n return np.transpose(\n self._wrap_deriv(params, model, weights, x, y, z)\n )\n else:\n return self._wrap_deriv(params, model, weights, x, y, z)\n\n dfunc = _dfunc\n\n init_values, _, bounds = model_to_fit_params(model)\n\n # Note, if use_min_max_bounds is True we are defaulting to enforcing bounds\n # using the old method employed by LevMarLSQFitter, this is different\n # from the method that optimize.least_squares employs to enforce bounds\n # thus we override the bounds being passed to optimize.least_squares so\n # that it will not enforce any bounding.\n if self._use_min_max_bounds:\n bounds = (-np.inf, np.inf)\n\n self.fit_info = optimize.least_squares(\n self.objective_function,\n init_values,\n args=farg,\n jac=dfunc,\n max_nfev=maxiter,\n diff_step=np.sqrt(epsilon),\n xtol=acc,\n method=self._method,\n bounds=bounds,\n )\n\n # Adapted from ~scipy.optimize.minpack, see:\n # https://github.com/scipy/scipy/blob/47bb6febaa10658c72962b9615d5d5aa2513fa3a/scipy/optimize/minpack.py#L795-L816\n # Do Moore-Penrose inverse discarding zero singular values.\n _, s, VT = svd(self.fit_info.jac, full_matrices=False)\n threshold = np.finfo(float).eps * max(self.fit_info.jac.shape) * s[0]\n s = s[s > threshold]\n VT = VT[: s.size]\n cov_x = np.dot(VT.T / s**2, VT)\n\n fitter_to_model_params(model, self.fit_info.x, False)\n if not self.fit_info.success:\n warnings.warn(\n f\"The fit may be unsuccessful; check: \\n {self.fit_info.message}\",\n AstropyUserWarning,\n )\n\n return init_values, self.fit_info.x, cov_x\n\n\nclass TRFLSQFitter(_NLLSQFitter):\n \"\"\"\n Trust Region Reflective algorithm and least squares statistic.\n\n Parameters\n ----------\n calc_uncertainties : bool\n If the covariance matrix should be computed and set in the fit_info.\n Default: False\n use_min_max_bounds: bool\n If the set parameter bounds for a model will be enforced each given\n parameter while fitting via a simple min/max condition. 
A True setting\n will replicate how LevMarLSQFitter enforces bounds.\n Default: False\n\n Attributes\n ----------\n fit_info :\n A `scipy.optimize.OptimizeResult` class which contains all of\n the most recent fit information\n \"\"\"\n\n def __init__(self, calc_uncertainties=False, use_min_max_bounds=False):\n super().__init__(\"trf\", calc_uncertainties, use_min_max_bounds)\n\n\nclass DogBoxLSQFitter(_NLLSQFitter):\n \"\"\"\n DogBox algorithm and least squares statistic.\n\n Parameters\n ----------\n calc_uncertainties : bool\n If the covariance matrix should be computed and set in the fit_info.\n Default: False\n use_min_max_bounds: bool\n If the set parameter bounds for a model will be enforced each given\n parameter while fitting via a simple min/max condition. A True setting\n will replicate how LevMarLSQFitter enforces bounds.\n Default: False\n\n Attributes\n ----------\n fit_info :\n A `scipy.optimize.OptimizeResult` class which contains all of\n the most recent fit information\n \"\"\"\n\n def __init__(self, calc_uncertainties=False, use_min_max_bounds=False):\n super().__init__(\"dogbox\", calc_uncertainties, use_min_max_bounds)\n\n\nclass LMLSQFitter(_NLLSQFitter):\n \"\"\"\n `scipy.optimize.least_squares` Levenberg-Marquardt algorithm and least squares statistic.\n\n Parameters\n ----------\n calc_uncertainties : bool\n If the covariance matrix should be computed and set in the fit_info.\n Default: False\n\n Attributes\n ----------\n fit_info :\n A `scipy.optimize.OptimizeResult` class which contains all of\n the most recent fit information\n \"\"\"\n\n def __init__(self, calc_uncertainties=False):\n super().__init__(\"lm\", calc_uncertainties, True)\n\n\nclass SLSQPLSQFitter(Fitter):\n \"\"\"\n Sequential Least Squares Programming (SLSQP) optimization algorithm and\n least squares statistic.\n\n Raises\n ------\n ModelLinearityError\n A linear model is passed to a nonlinear fitter\n\n Notes\n -----\n See also the `~astropy.modeling.optimizers.SLSQP` optimizer.\n\n \"\"\"\n\n supported_constraints = SLSQP.supported_constraints\n\n def __init__(self):\n super().__init__(optimizer=SLSQP, statistic=leastsquare)\n self.fit_info = {}\n\n @fitter_unit_support\n def __call__(self, model, x, y, z=None, weights=None, **kwargs):\n \"\"\"\n Fit data to this model.\n\n Parameters\n ----------\n model : `~astropy.modeling.FittableModel`\n model to fit to x, y, z\n x : array\n input coordinates\n y : array\n input coordinates\n z : array, optional\n input coordinates\n weights : array, optional\n Weights for fitting.\n For data with Gaussian uncertainties, the weights should be\n 1/sigma.\n kwargs : dict\n optional keyword arguments to be passed to the optimizer or the statistic\n verblevel : int\n 0-silent\n 1-print summary upon completion,\n 2-print summary after each iteration\n maxiter : int\n maximum number of iterations\n epsilon : float\n the step size for finite-difference derivative estimates\n acc : float\n Requested accuracy\n equivalencies : list or None, optional, keyword-only\n List of *additional* equivalencies that are should be applied in\n case x, y and/or z have units. 
Default is None.\n\n Returns\n -------\n model_copy : `~astropy.modeling.FittableModel`\n a copy of the input model with parameters set by the fitter\n\n \"\"\"\n model_copy = _validate_model(model, self._opt_method.supported_constraints)\n model_copy.sync_constraints = False\n farg = _convert_input(x, y, z)\n farg = (\n model_copy,\n weights,\n ) + farg\n init_values, _, _ = model_to_fit_params(model_copy)\n fitparams, self.fit_info = self._opt_method(\n self.objective_function, init_values, farg, **kwargs\n )\n fitter_to_model_params(model_copy, fitparams)\n\n model_copy.sync_constraints = True\n return model_copy\n\n\nclass SimplexLSQFitter(Fitter):\n \"\"\"\n Simplex algorithm and least squares statistic.\n\n Raises\n ------\n `ModelLinearityError`\n A linear model is passed to a nonlinear fitter\n\n \"\"\"\n\n supported_constraints = Simplex.supported_constraints\n\n def __init__(self):\n super().__init__(optimizer=Simplex, statistic=leastsquare)\n self.fit_info = {}\n\n @fitter_unit_support\n def __call__(self, model, x, y, z=None, weights=None, **kwargs):\n \"\"\"\n Fit data to this model.\n\n Parameters\n ----------\n model : `~astropy.modeling.FittableModel`\n model to fit to x, y, z\n x : array\n input coordinates\n y : array\n input coordinates\n z : array, optional\n input coordinates\n weights : array, optional\n Weights for fitting.\n For data with Gaussian uncertainties, the weights should be\n 1/sigma.\n kwargs : dict\n optional keyword arguments to be passed to the optimizer or the statistic\n maxiter : int\n maximum number of iterations\n acc : float\n Relative error in approximate solution\n equivalencies : list or None, optional, keyword-only\n List of *additional* equivalencies that are should be applied in\n case x, y and/or z have units. 
Default is None.\n\n Returns\n -------\n model_copy : `~astropy.modeling.FittableModel`\n a copy of the input model with parameters set by the fitter\n\n \"\"\"\n model_copy = _validate_model(model, self._opt_method.supported_constraints)\n model_copy.sync_constraints = False\n farg = _convert_input(x, y, z)\n farg = (\n model_copy,\n weights,\n ) + farg\n\n init_values, _, _ = model_to_fit_params(model_copy)\n\n fitparams, self.fit_info = self._opt_method(\n self.objective_function, init_values, farg, **kwargs\n )\n fitter_to_model_params(model_copy, fitparams)\n model_copy.sync_constraints = True\n return model_copy\n\n\nclass JointFitter(metaclass=_FitterMeta):\n \"\"\"\n Fit models which share a parameter.\n For example, fit two gaussians to two data sets but keep\n the FWHM the same.\n\n Parameters\n ----------\n models : list\n a list of model instances\n jointparameters : list\n a list of joint parameters\n initvals : list\n a list of initial values\n\n \"\"\"\n\n def __init__(self, models, jointparameters, initvals):\n self.models = list(models)\n self.initvals = list(initvals)\n self.jointparams = jointparameters\n self._verify_input()\n self.fitparams = self.model_to_fit_params()\n\n # a list of model.n_inputs\n self.modeldims = [m.n_inputs for m in self.models]\n # sum all model dimensions\n self.ndim = np.sum(self.modeldims)\n\n def model_to_fit_params(self):\n fparams = []\n fparams.extend(self.initvals)\n for model in self.models:\n params = model.parameters.tolist()\n joint_params = self.jointparams[model]\n param_metrics = model._param_metrics\n for param_name in joint_params:\n slice_ = param_metrics[param_name][\"slice\"]\n del params[slice_]\n fparams.extend(params)\n return fparams\n\n def objective_function(self, fps, *args):\n \"\"\"\n Function to minimize.\n\n Parameters\n ----------\n fps : list\n the fitted parameters - result of an one iteration of the\n fitting algorithm\n args : dict\n tuple of measured and input coordinates\n args is always passed as a tuple from optimize.leastsq\n\n \"\"\"\n lstsqargs = list(args)\n fitted = []\n fitparams = list(fps)\n numjp = len(self.initvals)\n # make a separate list of the joint fitted parameters\n jointfitparams = fitparams[:numjp]\n del fitparams[:numjp]\n\n for model in self.models:\n joint_params = self.jointparams[model]\n margs = lstsqargs[: model.n_inputs + 1]\n del lstsqargs[: model.n_inputs + 1]\n # separate each model separately fitted parameters\n numfp = len(model._parameters) - len(joint_params)\n mfparams = fitparams[:numfp]\n\n del fitparams[:numfp]\n # recreate the model parameters\n mparams = []\n param_metrics = model._param_metrics\n for param_name in model.param_names:\n if param_name in joint_params:\n index = joint_params.index(param_name)\n # should do this with slices in case the\n # parameter is not a number\n mparams.extend([jointfitparams[index]])\n else:\n slice_ = param_metrics[param_name][\"slice\"]\n plen = slice_.stop - slice_.start\n mparams.extend(mfparams[:plen])\n del mfparams[:plen]\n modelfit = model.evaluate(margs[:-1], *mparams)\n fitted.extend(modelfit - margs[-1])\n return np.ravel(fitted)\n\n def _verify_input(self):\n if len(self.models) <= 1:\n raise TypeError(f\"Expected >1 models, {len(self.models)} is given\")\n if len(self.jointparams.keys()) < 2:\n raise TypeError(\n \"At least two parameters are expected, \"\n f\"{len(self.jointparams.keys())} is given\"\n )\n for j in self.jointparams.keys():\n if len(self.jointparams[j]) != len(self.initvals):\n raise TypeError(\n 
f\"{len(self.jointparams[j])} parameter(s) \"\n f\"provided but {len(self.initvals)} expected\"\n )\n\n def __call__(self, *args):\n \"\"\"\n Fit data to these models keeping some of the parameters common to the\n two models.\n \"\"\"\n from scipy import optimize\n\n if len(args) != reduce(lambda x, y: x + 1 + y + 1, self.modeldims):\n raise ValueError(\n f\"Expected {reduce(lambda x, y: x + 1 + y + 1, self.modeldims)} \"\n f\"coordinates in args but {len(args)} provided\"\n )\n\n self.fitparams[:], _ = optimize.leastsq(\n self.objective_function, self.fitparams, args=args\n )\n\n fparams = self.fitparams[:]\n numjp = len(self.initvals)\n # make a separate list of the joint fitted parameters\n jointfitparams = fparams[:numjp]\n del fparams[:numjp]\n\n for model in self.models:\n # extract each model's fitted parameters\n joint_params = self.jointparams[model]\n numfp = len(model._parameters) - len(joint_params)\n mfparams = fparams[:numfp]\n\n del fparams[:numfp]\n # recreate the model parameters\n mparams = []\n param_metrics = model._param_metrics\n for param_name in model.param_names:\n if param_name in joint_params:\n index = joint_params.index(param_name)\n # should do this with slices in case the parameter\n # is not a number\n mparams.extend([jointfitparams[index]])\n else:\n slice_ = param_metrics[param_name][\"slice\"]\n plen = slice_.stop - slice_.start\n mparams.extend(mfparams[:plen])\n del mfparams[:plen]\n model.parameters = np.array(mparams)\n\n\ndef _convert_input(x, y, z=None, n_models=1, model_set_axis=0):\n \"\"\"Convert inputs to float arrays.\"\"\"\n x = np.asanyarray(x, dtype=float)\n y = np.asanyarray(y, dtype=float)\n\n if z is not None:\n z = np.asanyarray(z, dtype=float)\n data_ndim, data_shape = z.ndim, z.shape\n else:\n data_ndim, data_shape = y.ndim, y.shape\n\n # For compatibility with how the linear fitter code currently expects to\n # work, shift the dependent variable's axes to the expected locations\n if n_models > 1 or data_ndim > x.ndim:\n if (model_set_axis or 0) >= data_ndim:\n raise ValueError(\"model_set_axis out of range\")\n if data_shape[model_set_axis] != n_models:\n raise ValueError(\n \"Number of data sets (y or z array) is expected to equal \"\n \"the number of parameter sets\"\n )\n if z is None:\n # For a 1-D model the y coordinate's model-set-axis is expected to\n # be last, so that its first dimension is the same length as the x\n # coordinates. This is in line with the expectations of\n # numpy.linalg.lstsq:\n # https://numpy.org/doc/stable/reference/generated/numpy.linalg.lstsq.html\n # That is, each model should be represented by a column. 
TODO:\n # Obviously this is a detail of np.linalg.lstsq and should be\n # handled specifically by any fitters that use it...\n y = np.rollaxis(y, model_set_axis, y.ndim)\n data_shape = y.shape[:-1]\n else:\n # Shape of z excluding model_set_axis\n data_shape = z.shape[:model_set_axis] + z.shape[model_set_axis + 1 :]\n\n if z is None:\n if data_shape != x.shape:\n raise ValueError(\"x and y should have the same shape\")\n farg = (x, y)\n else:\n if not (x.shape == y.shape == data_shape):\n raise ValueError(\"x, y and z should have the same shape\")\n farg = (x, y, z)\n return farg\n\n\n# TODO: These utility functions are really particular to handling\n# bounds/tied/fixed constraints for scipy.optimize optimizers that do not\n# support them inherently; this needs to be reworked to be clear about this\n# distinction (and the fact that these are not necessarily applicable to any\n# arbitrary fitter--as evidenced for example by the fact that JointFitter has\n# its own versions of these)\n# TODO: Most of this code should be entirely rewritten; it should not be as\n# inefficient as it is.\ndef fitter_to_model_params(model, fps, use_min_max_bounds=True):\n \"\"\"\n Constructs the full list of model parameters from the fitted and\n constrained parameters.\n\n Parameters\n ----------\n model :\n The model being fit\n fps :\n The fit parameter values to be assigned\n use_min_max_bounds: bool\n If the set parameter bounds for model will be enforced on each\n parameter with bounds.\n Default: True\n \"\"\"\n _, fit_param_indices, _ = model_to_fit_params(model)\n\n has_tied = any(model.tied.values())\n has_fixed = any(model.fixed.values())\n has_bound = any(b != (None, None) for b in model.bounds.values())\n parameters = model.parameters\n\n if not (has_tied or has_fixed or has_bound):\n # We can just assign directly\n model.parameters = fps\n return\n\n fit_param_indices = set(fit_param_indices)\n offset = 0\n param_metrics = model._param_metrics\n for idx, name in enumerate(model.param_names):\n if idx not in fit_param_indices:\n continue\n\n slice_ = param_metrics[name][\"slice\"]\n shape = param_metrics[name][\"shape\"]\n # This is determining which range of fps (the fitted parameters) maps\n # to parameters of the model\n size = reduce(operator.mul, shape, 1)\n\n values = fps[offset : offset + size]\n\n # Check bounds constraints\n if model.bounds[name] != (None, None) and use_min_max_bounds:\n _min, _max = model.bounds[name]\n if _min is not None:\n values = np.fmax(values, _min)\n if _max is not None:\n values = np.fmin(values, _max)\n\n parameters[slice_] = values\n offset += size\n\n # Update model parameters before calling ``tied`` constraints.\n model._array_to_parameters()\n\n # This has to be done in a separate loop due to how tied parameters are\n # currently evaluated (the fitted parameters need to actually be *set* on\n # the model first, for use in evaluating the \"tied\" expression--it might be\n # better to change this at some point\n if has_tied:\n for idx, name in enumerate(model.param_names):\n if model.tied[name]:\n value = model.tied[name](model)\n slice_ = param_metrics[name][\"slice\"]\n\n # To handle multiple tied constraints, model parameters\n # need to be updated after each iteration.\n parameters[slice_] = value\n model._array_to_parameters()\n\n\ndef model_to_fit_params(model):\n \"\"\"\n Convert a model instance's parameter array to an array that can be used\n with a fitter that doesn't natively support fixed or tied parameters.\n In particular, it removes fixed/tied 
parameters from the parameter\n array.\n These may be a subset of the model parameters, if some of them are held\n constant or tied.\n \"\"\"\n fitparam_indices = list(range(len(model.param_names)))\n model_params = model.parameters\n model_bounds = list(model.bounds.values())\n if any(model.fixed.values()) or any(model.tied.values()):\n params = list(model_params)\n param_metrics = model._param_metrics\n for idx, name in list(enumerate(model.param_names))[::-1]:\n if model.fixed[name] or model.tied[name]:\n slice_ = param_metrics[name][\"slice\"]\n del params[slice_]\n del model_bounds[slice_]\n del fitparam_indices[idx]\n model_params = np.array(params)\n\n for idx, bound in enumerate(model_bounds):\n if bound[0] is None:\n lower = -np.inf\n else:\n lower = bound[0]\n\n if bound[1] is None:\n upper = np.inf\n else:\n upper = bound[1]\n\n model_bounds[idx] = (lower, upper)\n model_bounds = tuple(zip(*model_bounds))\n return model_params, fitparam_indices, model_bounds\n\n\ndef _validate_constraints(supported_constraints, model):\n \"\"\"Make sure model constraints are supported by the current fitter.\"\"\"\n message = \"Optimizer cannot handle {0} constraints.\"\n\n if any(model.fixed.values()) and \"fixed\" not in supported_constraints:\n raise UnsupportedConstraintError(message.format(\"fixed parameter\"))\n\n if any(model.tied.values()) and \"tied\" not in supported_constraints:\n raise UnsupportedConstraintError(message.format(\"tied parameter\"))\n\n if (\n any(tuple(b) != (None, None) for b in model.bounds.values())\n and \"bounds\" not in supported_constraints\n ):\n raise UnsupportedConstraintError(message.format(\"bound parameter\"))\n\n if model.eqcons and \"eqcons\" not in supported_constraints:\n raise UnsupportedConstraintError(message.format(\"equality\"))\n\n if model.ineqcons and \"ineqcons\" not in supported_constraints:\n raise UnsupportedConstraintError(message.format(\"inequality\"))\n\n\ndef _validate_model(model, supported_constraints):\n \"\"\"\n Check that model and fitter are compatible and return a copy of the model.\n \"\"\"\n if not model.fittable:\n raise ValueError(\"Model does not appear to be fittable.\")\n if model.linear:\n warnings.warn(\n \"Model is linear in parameters; consider using linear fitting methods.\",\n AstropyUserWarning,\n )\n elif len(model) != 1:\n # for now only single data sets ca be fitted\n raise ValueError(\"Non-linear fitters can only fit one data set at a time.\")\n _validate_constraints(supported_constraints, model)\n\n model_copy = model.copy()\n return model_copy\n\n\ndef populate_entry_points(entry_points):\n \"\"\"\n This injects entry points into the `astropy.modeling.fitting` namespace.\n This provides a means of inserting a fitting routine without requirement\n of it being merged into astropy's core.\n\n Parameters\n ----------\n entry_points : list of `~importlib.metadata.EntryPoint`\n entry_points are objects which encapsulate importable objects and\n are defined on the installation of a package.\n\n Notes\n -----\n An explanation of entry points can be found `here\n <http://setuptools.readthedocs.io/en/latest/setuptools.html#dynamic-discovery-of-services-and-plugins>`_\n \"\"\"\n for entry_point in entry_points:\n name = entry_point.name\n try:\n entry_point = entry_point.load()\n except Exception as e:\n # This stops the fitting from choking if an entry_point produces an error.\n warnings.warn(\n AstropyUserWarning(\n f\"{type(e).__name__} error occurred in entry point {name}.\"\n )\n )\n else:\n if not 
isinstance(entry_point, type):\n warnings.warn(\n AstropyUserWarning(\n f\"Modeling entry point {name} expected to be a Class.\"\n )\n )\n else:\n if issubclass(entry_point, Fitter):\n name = entry_point.__name__\n globals()[name] = entry_point\n __all__.append(name) # noqa: PYI056\n else:\n warnings.warn(\n AstropyUserWarning(\n f\"Modeling entry point {name} expected to extend \"\n \"astropy.modeling.Fitter\"\n )\n )\n\n\npopulate_entry_points(entry_points().select(group=\"astropy.modeling\"))\n", "docs/changes/modeling/16677.feature.rst": null}
|
diff --git a/docs/changes/modeling/16677.feature.rst b/docs/changes/modeling/16677.feature.rst
new file mode 100644
index 000000000000..8c7d14e8555d
--- /dev/null
+++ b/docs/changes/modeling/16677.feature.rst
@@ -0,0 +1,3 @@
+Added ``Model.has_tied``, ``Model.has_fixed``, and ``Model.has_bounds`` attributes to make
+it easy to check whether models have various kinds of constraints set without having to
+inspect ``Model.tied``, ``Model.fixed``, and ``Model.bounds`` in detail.
|
{"astropy/modeling/core.py": [{"type": "function", "name": "Model.has_fixed", "lines": [1345, 1351], "signature": "def has_fixed(self):", "doc": "Whether the model has any fixed constraints."}, {"type": "function", "name": "Model.has_bounds", "lines": [1354, 1362], "signature": "def has_bounds(self):", "doc": "Whether the model has any bounds constraints."}, {"type": "function", "name": "Model.has_tied", "lines": [1365, 1371], "signature": "def has_tied(self):", "doc": "Whether the model has any tied constraints."}]}
|
v5.3
|
["astropy/modeling/tests/test_core.py::test_has_constraints", "astropy/modeling/tests/test_core.py::test_has_constraints_with_sync_constraints"]
|
["astropy/modeling/tests/test_core.py::test_Model_instance_repr_and_str", "astropy/modeling/tests/test_core.py::test_Model_array_parameter", "astropy/modeling/tests/test_core.py::test_inputless_model", "astropy/modeling/tests/test_core.py::test_ParametericModel", "astropy/modeling/tests/test_core.py::test_custom_model_signature", "astropy/modeling/tests/test_core.py::test_custom_model_subclass", "astropy/modeling/tests/test_core.py::test_custom_model_parametrized_decorator", "astropy/modeling/tests/test_core.py::test_custom_model_n_outputs", "astropy/modeling/tests/test_core.py::test_custom_model_settable_parameters", "astropy/modeling/tests/test_core.py::test_custom_model_regected_parameters", "astropy/modeling/tests/test_core.py::test_custom_inverse", "astropy/modeling/tests/test_core.py::test_custom_inverse_reset", "astropy/modeling/tests/test_core.py::test_render_model_2d", "astropy/modeling/tests/test_core.py::test_render_model_1d", "astropy/modeling/tests/test_core.py::test_render_model_3d", "astropy/modeling/tests/test_core.py::test_render_model_out_dtype", "astropy/modeling/tests/test_core.py::test_custom_bounding_box_1d", "astropy/modeling/tests/test_core.py::test_n_submodels_in_single_models", "astropy/modeling/tests/test_core.py::test_compound_deepcopy", "astropy/modeling/tests/test_core.py::test_rename_path", "astropy/modeling/tests/test_core.py::test_rename_1d[Gaussian1D]", "astropy/modeling/tests/test_core.py::test_rename_1d[Polynomial1D]", "astropy/modeling/tests/test_core.py::test_rename_1d[Shift]", "astropy/modeling/tests/test_core.py::test_rename_1d[Tabular1D]", "astropy/modeling/tests/test_core.py::test_rename_2d[Gaussian2D]", "astropy/modeling/tests/test_core.py::test_rename_2d[Polynomial2D]", "astropy/modeling/tests/test_core.py::test_rename_2d[Tabular2D]", "astropy/modeling/tests/test_core.py::test_fix_inputs_integer", "astropy/modeling/tests/test_core.py::test_fix_inputs_empty_dict", "astropy/modeling/tests/test_core.py::test_rename_inputs_outputs", "astropy/modeling/tests/test_core.py::test__prepare_output_single_model", "astropy/modeling/tests/test_core.py::test_prepare_outputs_mixed_broadcast", "astropy/modeling/tests/test_core.py::test_prepare_outputs_complex_reshape", "astropy/modeling/tests/test_core.py::test_prepare_outputs_single_entry_vector", "astropy/modeling/tests/test_core.py::test_coerce_units", "astropy/modeling/tests/test_core.py::test_bounding_box_general_inverse", "astropy/modeling/tests/test_core.py::test__add_special_operator", "astropy/modeling/tests/test_core.py::test_print_special_operator_CompoundModel", "astropy/modeling/tests/test_core.py::test__validate_input_shape", "astropy/modeling/tests/test_core.py::test__validate_input_shapes", "astropy/modeling/tests/test_core.py::test__remove_axes_from_shape", "astropy/modeling/tests/test_core.py::test_get_bounding_box", "astropy/modeling/tests/test_core.py::test_compound_bounding_box", "astropy/modeling/tests/test_core.py::test_bind_bounding_box", "astropy/modeling/tests/test_core.py::test_bind_compound_bounding_box_using_with_bounding_box_select", "astropy/modeling/tests/test_core.py::test_fix_inputs_compound_bounding_box", "astropy/modeling/tests/test_core.py::test_model_copy_with_bounding_box", "astropy/modeling/tests/test_core.py::test_compound_model_copy_with_bounding_box", "astropy/modeling/tests/test_core.py::test_model_copy_with_compound_bounding_box", "astropy/modeling/tests/test_core.py::test_compound_model_copy_with_compound_bounding_box", 
"astropy/modeling/tests/test_core.py::test_compound_model_copy_user_attribute", "astropy/modeling/tests/test_core.py::test_model_mixed_array_scalar_bounding_box", "astropy/modeling/tests/test_core.py::test_compound_model_mixed_array_scalar_bounding_box", "astropy/modeling/tests/test_core.py::test_model_with_bounding_box_true_and_single_output", "astropy/modeling/tests/test_core.py::test_compound_model_with_bounding_box_true_and_single_output", "astropy/modeling/tests/test_core.py::test_bounding_box_pass_with_ignored", "astropy/modeling/tests/test_core.py::test_compound_bounding_box_pass_with_ignored", "astropy/modeling/tests/test_core.py::test_model_integer_indexing[int]", "astropy/modeling/tests/test_core.py::test_model_integer_indexing[int32]", "astropy/modeling/tests/test_core.py::test_model_integer_indexing[int64]", "astropy/modeling/tests/test_core.py::test_model_integer_indexing[uint32]", "astropy/modeling/tests/test_core.py::test_model_integer_indexing[uint64]", "astropy/modeling/tests/test_core.py::test_model_string_indexing"]
|
2d281019494aaebf522f6626c0dae37510c16688
|
{"first_commit_time": 1720086629.0, "pr_title": "Add ``has_tied``, ``has_fixed`` and ``has_bounds`` properties to ``Model``", "pr_body": "*This has been split out from #16673 for ease of review*\r\n\r\nModels include ``tied``, ``fixed``, and ``bounds`` properties that return dictionaries with a summary of which parameters have constraints. However, sometimes one wants to quickly know whether there are any specified ``tied``, ``fixed``, or ``bounds`` constraints, and the main way to do that is:\r\n\r\n```python\r\n any(model.fixed.values())\r\n any(model.tied.values())\r\n any(b != (None, None) for b in model.bounds.values())\r\n```\r\n\r\nThese appear notably in the fitting code - this turns out to be reasonably expensive operations especially in the fitting code, where they might be called every time the objective function is called, and it is also clunky to repeat this logic several times in different places.\r\n\r\nThis PR adds:\r\n\r\n```python\r\nModel.has_fixed\r\nModel.has_tied\r\nModel.has_bounds\r\n```\r\n\r\nwhich simplifies this. In addition, these properties, like ``fixed``, ``tied``, and ``bounds`` are cached and the cached version is used when ``sync_constraints`` is ``False`` (typically during fitting), removing any performance impact. But beyond this, these properties could be generically useful to users, similarly to how we have e.g. ``has_units``, hence why I made these public.\r\n\r\n- [ ] By checking this box, the PR author has requested that maintainers do **NOT** use the \"Squash and Merge\" button. Maintainers should respect this when possible; however, the final decision is at the discretion of the maintainer that merges the PR.\r\n", "pr_timeline": [{"time": 1720125960.0, "comment": "Thank you for your contribution to Astropy! \ud83c\udf0c This checklist is meant to remind the package maintainers who will review this pull request of some common things to look for.\n\n - [ ] Do the proposed changes actually accomplish desired goals?\n - [ ] Do the proposed changes follow the [Astropy coding guidelines](https://docs.astropy.org/en/latest/development/codeguide.html)?\n - [ ] Are tests added/updated as required? If so, do they follow the [Astropy testing guidelines](https://docs.astropy.org/en/latest/development/testguide.html)?\n - [ ] Are docs added/updated as required? If so, do they follow the [Astropy documentation guidelines](https://docs.astropy.org/en/latest/development/docguide.html#astropy-documentation-rules-and-guidelines)?\n - [ ] Is rebase and/or squash necessary? If so, please provide the author with appropriate instructions. Also see instructions for [rebase](https://docs.astropy.org/en/latest/development/workflow/development_workflow.html#rebase-if-necessary) and [squash](https://docs.astropy.org/en/latest/development/workflow/development_workflow.html#squash-if-necessary).\n - [ ] Did the CI pass? If no, are the failures related? If you need to run daily and weekly cron jobs as part of the PR, please apply the \"Extra CI\" label. Codestyle issues can be fixed by the [bot](https://docs.astropy.org/en/latest/development/workflow/development_workflow.html#pre-commit).\n - [ ] Is a change log needed? If yes, did the change log check pass? If no, add the \"no-changelog-entry-needed\" label. 
If this is a manual backport, use the \"skip-changelog-checks\" label unless special changelog handling is necessary.\n - [ ] Is this a big PR that makes a \"What's new?\" entry worthwhile and if so, is (1) a \"what's new\" entry included in this PR and (2) the \"whatsnew-needed\" label applied?\n - [ ] At the time of adding the milestone, if the milestone set requires a backport to release branch(es), apply the appropriate \"backport-X.Y.x\" label(s) *before* merge."}, {"time": 1720125962.0, "comment": "\ud83d\udc4b Thank you for your draft pull request! Do you know that you can use `[ci skip]` or `[skip ci]` in your commit messages to skip running continuous integration tests until you are ready?"}], "issues": {}}
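As a usage illustration of the checks described in the problem statement above (a sketch only: it assumes an astropy version that already contains this change, and the model and constraint values chosen here are arbitrary):

```python
# Illustration only: requires an astropy release that includes has_fixed /
# has_tied / has_bounds; the model and constraint choices are arbitrary.
from astropy.modeling import models

m = models.Gaussian1D(amplitude=1.0, mean=0.0, stddev=1.0)
m.stddev.fixed = True                 # fix one parameter
m.amplitude.bounds = (0.0, None)      # bound another

# The dict-based checks quoted in the PR body:
print(any(m.fixed.values()))                                # True
print(any(m.tied.values()))                                 # False
print(any(b != (None, None) for b in m.bounds.values()))    # True

# The equivalent one-liners added by this change:
print(m.has_fixed, m.has_tied, m.has_bounds)                # True False True
```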
|
astropy/astropy
| 7970
|
https://github.com/astropy/astropy/pull/7970
|
astropy__astropy-7970
|
[]
|
e222ffffbab7e38aec2134a31552c99aa90b0047
|
diff --git a/CHANGES.rst b/CHANGES.rst
index bbd4ad7014dc..ca55537547af 100644
--- a/CHANGES.rst
+++ b/CHANGES.rst
@@ -224,6 +224,8 @@ astropy.units
- ``AB`` and ``ST`` are now enabled by default, and have alternate names
``ABflux`` and ``STflux`` [#7891]
+- Added ``littleh`` unit and associated ``with_H0`` equivalency. [#7970]
+
astropy.utils
^^^^^^^^^^^^^
diff --git a/astropy/units/astrophys.py b/astropy/units/astrophys.py
index e894402a5b1c..a57ab925e9d5 100644
--- a/astropy/units/astrophys.py
+++ b/astropy/units/astrophys.py
@@ -160,6 +160,14 @@
def_unit(['beam'], namespace=_ns, prefixes=True)
def_unit(['electron'], doc="Number of electrons", namespace=_ns,
format={'latex': r'e^{-}', 'unicode': 'e⁻'})
+# This is not formally a unit, but is used in that way in many contexts, and
+# an appropriate equivalency is only possible if it's treated as a unit (see
+# https://arxiv.org/pdf/1308.4150.pdf for more)
+# Also note that h or h100 or h_100 would be a better name, but they either
+# conflict or have numbers in them, which is apparently disallowed
+def_unit(['littleh'], namespace=_ns, prefixes=False,
+ doc="Reduced/\"dimensionless\" Hubble constant",
+ format={'latex': r'h_{100}'})
###########################################################################
diff --git a/astropy/units/equivalencies.py b/astropy/units/equivalencies.py
index d416243f04e0..2a9bbda3b4b0 100644
--- a/astropy/units/equivalencies.py
+++ b/astropy/units/equivalencies.py
@@ -20,7 +20,7 @@
'brightness_temperature', 'thermodynamic_temperature',
'beam_angular_area', 'dimensionless_angles', 'logarithmic',
'temperature', 'temperature_energy', 'molar_mass_amu',
- 'pixel_scale', 'plate_scale']
+ 'pixel_scale', 'plate_scale', 'with_H0']
def dimensionless_angles():
@@ -687,3 +687,29 @@ def plate_scale(platescale):
"distance/angle")
return [(si.m, si.radian, lambda d: d*platescale_val, lambda rad: rad/platescale_val)]
+
+
+def with_H0(H0=None):
+ """
+ Convert between quantities with little-h and the equivalent physical units.
+
+ Parameters
+ ----------
+ H0 : `None` or `~astropy.units.Quantity`
+ The value of the Hubble constant to assume. If a `~astropy.units.Quantity`,
+ will assume the quantity *is* ``H0``. If `None` (default), use the
+ ``H0`` attribute from the default `astropy.cosmology` cosmology.
+
+ References
+ ----------
+ For an illuminating discussion on why you may or may not want to use
+ little-h at all, see https://arxiv.org/pdf/1308.4150.pdf
+ """
+
+ if H0 is None:
+ from .. import cosmology
+ H0 = cosmology.default_cosmology.get().H0
+
+ h100_val_unit = Unit(H0.to((si.km/si.s)/astrophys.Mpc).value/100 * astrophys.littleh)
+
+ return [(h100_val_unit, None)]
diff --git a/docs/units/equivalencies.rst b/docs/units/equivalencies.rst
index 86206968e1e9..7236ad0f8a66 100644
--- a/docs/units/equivalencies.rst
+++ b/docs/units/equivalencies.rst
@@ -347,8 +347,8 @@ and you want to know how big your pixels need to be to cover half an arcsecond::
>>> (0.5*u.arcsec).to(u.micron, tel_platescale) # doctest: +FLOAT_CMP
<Quantity 18.9077335632719 micron>
-Photometric Zero Point Equivalencies
-------------------------------------
+Photometric Zero Point Equivalency
+----------------------------------
This equivalency provides an easy way to move between photometric systems (i.e.,
those defined relative to a particular zero-point flux) and absolute fluxes.
@@ -361,6 +361,47 @@ standard zero point of 3631.1 Jy::
>>> u.Magnitude(target_flux.to(u.AB, zero_point_star_equiv)) # doctest: +FLOAT_CMP
<Magnitude 22.30195136 mag(AB)>
+Reduced Hubble constant/"little-h" Equivalency
+----------------------------------------------
+
+The dimensionless version of the Hubble constant - often known as "little h" -
+is a frequently-used quantity in extragalactic astrophysics. It is also widely
+known as the bane of beginners' existence in such fields (See e.g., the title of
+`this paper <https://doi.org/10.1017/pasa.2013.31>`__, which also provides
+valuable advice on the use of little h). Astropy provides an equivalency that
+helps keep this straight in at least some of these cases, by providing a way to
+convert to/from physical to "little h" units. Two example conversions:
+
+ >>> import astropy.units as u
+ >>> H0_70 = 70 * u.km/u.s / u.Mpc
+ >>> distance = 100 * (u.Mpc/u.littleh)
+ >>> distance.to(u.Mpc, u.with_H0(H0_70)) # doctest: +FLOAT_CMP
+ <Quantity 70.0 Mpc>
+ >>> luminosity = 1 * u.Lsun * u.littleh**-2
+ >>> luminosity.to(u.Lsun, u.with_H0(H0_70)) # doctest: +FLOAT_CMP
+ <Quantity 0.49 solLum>
+
+Note the unit name ``littleh`` - while this unit is usually expressed in the
+literature as just ``h``, here it is ``littleh`` to not cause confusion with
+"hours".
+
+If no argument is given (or the argument is `None`), this equivalency assumes
+the ``H0`` from the current default cosmology:
+
+ >>> distance = 100 * (u.Mpc/u.littleh)
+ >>> distance.to(u.Mpc, u.with_H0()) # doctest: +FLOAT_CMP
+ <Quantity 69.32 Mpc>
+
+This equivalency also allows the common magnitude formulation of little h
+scaling:
+
+ >>> mag_quantity = 12 * (u.mag + u.MagUnit(u.littleh**2))
+ >>> mag_quantity # doctest: +FLOAT_CMP
+ <Magnitude 12. mag(littleh2)>
+ >>> mag_quantity.to(u.mag, u.with_H0(H0_70)) # doctest: +FLOAT_CMP
+ <Quantity 11.2254902 mag>
+
+
Writing new equivalencies
=========================
|
diff --git a/astropy/units/tests/test_equivalencies.py b/astropy/units/tests/test_equivalencies.py
index 1ddd556627a3..5b7b2a7fe434 100644
--- a/astropy/units/tests/test_equivalencies.py
+++ b/astropy/units/tests/test_equivalencies.py
@@ -10,7 +10,7 @@
# LOCAL
from ... import units as u
-from ... import constants
+from ... import constants, cosmology
from ...tests.helper import assert_quantity_allclose
@@ -752,3 +752,26 @@ def test_plate_scale():
assert_quantity_allclose(asec.to(u.mm, u.plate_scale(platescale)), mm)
assert_quantity_allclose(asec.to(u.mm, u.plate_scale(platescale2)), mm)
+
+
+def test_littleh():
+ H0_70 = 70*u.km/u.s/u.Mpc
+ h100dist = 100 * u.Mpc/u.littleh
+
+ assert_quantity_allclose(h100dist.to(u.Mpc, u.with_H0(H0_70)), 70*u.Mpc)
+
+ # make sure using the default cosmology works
+ H0_default_cosmo = cosmology.default_cosmology.get().H0
+ assert_quantity_allclose(h100dist.to(u.Mpc, u.with_H0()),
+ H0_default_cosmo.value*u.Mpc)
+
+ # Now try a luminosity scaling
+ h1lum = 1 * u.Lsun * u.littleh**-2
+ assert_quantity_allclose(h1lum.to(u.Lsun, u.with_H0(H0_70)), .49*u.Lsun)
+
+ # And the trickiest one: magnitudes. Using H0=10 here for the round numbers
+ H0_10 = 10*u.km/u.s/u.Mpc
+ # assume the "true" magnitude M = 12.
+ # Then M - 5*log_10(h) = M + 5 = 17
+ withlittlehmag = 17 * (u.mag + u.MagUnit(u.littleh**2))
+ assert_quantity_allclose(withlittlehmag.to(u.mag, u.with_H0(H0_10)), 12*u.mag)
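The arithmetic behind that final magnitude assertion, spelled out (an illustrative snippet, assuming an astropy installation that includes the ``littleh`` unit and ``with_H0`` equivalency from this patch): with ``H0 = 10 km/s/Mpc``, ``h = 0.1``, so a magnitude carrying ``mag(littleh2)`` differs from the plain magnitude by ``-5*log10(h) = +5``.

```python
# Illustrative check of the round numbers used in the magnitude test above
# (assumes astropy with the littleh unit and with_H0 equivalency available).
import math

import astropy.units as u

h = 0.1                        # H0 = 10 km/s/Mpc  ->  h = 0.1
offset = -5 * math.log10(h)    # +5 mag for h = 0.1
print(offset)                  # 5.0, so a "true" 12 mag appears as 17 mag(littleh2)

H0_10 = 10 * u.km / u.s / u.Mpc
withlittlehmag = 17 * (u.mag + u.MagUnit(u.littleh**2))
print(withlittlehmag.to(u.mag, u.with_H0(H0_10)))   # ~12 mag, matching the assert
```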
| 2018-10-25T17:06:00
|
{}
|
{"CHANGES.rst": "3.1 (unreleased)\n================\n\nNew Features\n------------\n\nastropy.config\n^^^^^^^^^^^^^^\n\nastropy.constants\n^^^^^^^^^^^^^^^^^\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- ``convolve`` now accepts any array-like input, not just ``numpy.ndarray`` or\n lists. [#7303]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- The ``SkyCoord.from_name`` constructor now has the ability to create\n coordinate objects by parsing object catalogue names that have embedded\n J-coordinates. [#7830]\n\n- The new function ``make_transform_graph_docs`` can be used to create a\n docstring graph from a custom ``TransformGraph`` object. [#7135]\n\n- ``KDTree`` for catalog matching is now built with sliding midpoint rule\n rather than standard. In code, this means setting ``compact_nodes=False``\n and ``balanced_tree=False`` in ``cKDTree``. The sliding midpoint rule is much\n more suitable for catalog matching, and results in 1000x speedup in some\n cases. [#7324]\n\n- Additional information about a site loaded from the Astropy sites registry is\n now available in ``EarthLocation.info.meta``. [#7857]\n\n- Added a ``concatenate_representations`` function to combine coordinate\n representation data and any associated differentials. [#7922]\n\n- ``BaseCoordinateFrame`` will now check for a method named\n ``_astropy_repr_in_frame`` when constructing the string forms of attributes.\n Allowing any class to control how ``BaseCoordinateFrame`` represents it when\n it is an attribute of a frame. [#7745]\n\n- Some rarely-changed attributes of frame classes are now cached, resulting in\n speedups (up to 50% in some cases) when creating new scalar frame or\n ``SkyCoord`` objects. [#7949]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- Distance calculations with ``LambaCDM`` with no radiation (T_CMB0=0)\n are now 20x faster by using elliptic integrals for non-flat cases\n [#7155]\n\n- Distance calculations with ``FlatLambaCDM`` with no radiation (T_CMB0=0)\n are now 20x faster by using the hypergeometric function solution\n for this special case. [#7087]\n\n- Age calculations with ``FlatLambdaCDM`` with no radiation (Tcmb0=0)\n are now 1000x faster by using analytic solutions instead of integrating.\n [#7117]\n\nastropy.extern\n^^^^^^^^^^^^^^\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Latex reader now ignores ``\\toprule``, ``\\midrule``, and ``\\bottomrule``\n commands [#7349]\n\n- Added the RST (Restructured-text) table format and the fast version of the\n RDB reader to the set of formats that are guessed by default. [#5578]\n\n- The read trace (used primarily for debugging) now includes guess argument\n sets that were skipped entirely e.g. for not supporting user-supplied kwargs.\n All guesses thus removed from ``filtered_guess_kwargs`` are now listed as\n \"Disabled\" at the beginning of the trace. [#5578]\n\n- Emit a warning when reading an ECSV file without specifying the ``format``\n and without PyYAML installed. Previously this silently fell through to\n parsing as a basic format file and the file metadata was lost. [#7580]\n\n- Optionally allow writing masked columns to ECSV with the mask explicitly\n specified as a separate column instead of marking masked elements with \"\"\n (empty string). This allows handling the case of a masked string column\n with \"\" data rows. [#7481]\n\nastropy.io.misc\n^^^^^^^^^^^^^^^\n\n- Added support for saving all representation classes and many coordinate\n frames to the asdf format. 
[#7079]\n\n- Added support for saving models with units to the asdf format. [#7237]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- ``HDUList.pop()`` now accepts string and tuple extension name\n specifications. [#7236]\n\n- Add an ``ignore_hdus`` keyword to ``FITSDiff`` to allow ignoring HDUs by\n NAME when diffing two FITS files [#7538]\n\n- Optionally allow writing masked columns to FITS with the mask explicitly\n specified as a separate column instead of using the FITS standard of\n certain embedded null values (``NaN`` for float, ``TNULL`` for integers).\n This can be used to work around limitations in the FITS standard. [#7481]\n\n- All time coordinates can now be written to and read from FITS binary tables,\n including those with vectorized locations. [#7430]\n\n- The ``fitsheader`` command line tool now supports a ``dfits+fitsort`` mode,\n and the dotted notation for keywords (e.g. ``ESO.INS.ID``). [#7240]\n\n- Fall back to reading arrays using mode='denywrite' if mode='readonly' fails\n when using memory-mapping. This solves cases on some platforms when the\n available address space was less than the file size (even when using memory\n mapping). [#7926]\n\nastropy.io.registry\n^^^^^^^^^^^^^^^^^^^\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Add a ``Multiply`` model which preserves unit through evaluate, unlike\n ``Scale`` which is dimensionless. [#7210]\n\n- Add a ``uses_quantity`` property to ``Model`` which allows introspection of if\n the ``Model`` can accept ``Quantity`` objects. [#7417]\n\n- Add a ``separability_matrix`` function which returns the correlation matrix\n of inputs and outputs. [#7803]\n\n- Fixed compatibility of ``JointFitter`` with the latest version of Numpy. [#7984]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- ``NDUncertainty`` objects now have a ``quantity`` attribute for simple\n conversion to quantities. [#7704]\n\nastropy.samp\n^^^^^^^^^^^^\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- The ``SigmaClip`` class and ``sigma_clip`` and\n ``sigma_clipped_stats`` functions are now significantly faster.\n [#7478]\n\n- Add an ``astropy.stats.bls`` module with an implementation of the \"box least\n squares\" periodogram that is commonly used for discovering transiting\n exoplanets and eclipsing binaries. [#7391]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Added support for full use of ``Time`` mixin column for join, hstack, and\n vstack table operations. [#6888]\n\n- Added a new table index engine, ``SCEngine``, based on the Sorted Containers\n package. [#7574]\n\n- Add a new keyword argument ``serialize_method`` to ``Table.write`` to\n control how ``Time`` and ``MaskedColumn`` columns are written. [#7481]\n\n- Allow mixin columns to be used in table ``group`` and ``unique``\n functions. This applies to both the key columns and the other data\n columns. [#7712]\n\n- Added support for stacking ``Column``, mixin column (e.g. ``Quantity``,\n ``Time``) or column-like objects. [#7674]\n\nastropy.tests\n^^^^^^^^^^^^^\n\n- Added an option ``--readonly`` to the test command to change the\n permissions on the temporary installation location to read-only. [#7598]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Allow array-valued ``Time`` object to be modified in place. [#6028]\n\n- Added support for missing values (masking) to the ``Time`` class. [#6028]\n\n- Added supper for a 'local' time scale (for free-running clocks, etc.),\n and round-tripping to the corresponding FITS time scale. [#7122]\n\n- Added `datetime.timedelta` format class for ``TimeDelta``. 
[#7441]\n\n- Added ``strftime`` and ``strptime`` methods to ``Time`` class.\n These methods are similar to those in the Python standard library\n `time` package and provide flexible input and output formatting. [#7323]\n\n- Added ``datetime64`` format to the ``Time`` class to support working with\n ``numpy.datetime64`` dtype arrays. [#7361]\n\n- Add fractional second support for ``strftime`` and ``strptime`` methods\n of ``Time`` class. [#7705]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Add complex numbers support for ``Quantity._repr_latex_``. [#7676]\n\n- Add ``thermodynamic_temperature`` equivalency to convert between\n Jy/beam and \"thermodynamic temperature\" for cosmology. [#7054]\n\n- Add millibar unit. [#7863]\n\n- Add maggy and nanomaggy unit, as well as associated ``zero_point_flux``\n equivalency. [#7891]\n\n- ``AB`` and ``ST`` are now enabled by default, and have alternate names\n ``ABflux`` and ``STflux`` [#7891]\n\nastropy.utils\n^^^^^^^^^^^^^\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\nastropy.visualization.wcsaxes\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n- Add support for setting ``set_separator(None)`` to use default\n\n- Added ``imshow_norm`` function, which combines imshow and creation of a\n ``ImageNormalize`` object. [#7785]\n\n- Add support for setting ``set_separator(None)`` in WCSAxes to use default\n separators. [#7570]\n\n- Added two keyword argument options to ``CoordinateHelper.set_format_unit``:\n ``decimal`` can be used to specify whether to use decimal formatting for the\n labels (by default this is False for degrees and hours and True otherwise),\n and ``show_decimal_unit`` can be used to determine whether the units should be\n shown for decimal labels. [#7318]\n\n- Added documentation for ``transform=`` and ``coord_meta=``. [#7698]\n\n- Allow ``coord_meta=`` to optionally include ``format_unit=``. [#7848]\n\n- Add support for more rcParams related to the grid, ticks, and labels, and\n should work with most built-in Matplotlib styles. [#7961]\n\n- Improved rendering of outward-facing ticks. [#7961]\n\n- Add support for ``tick_params`` (which is a standard Matplotlib\n function/method) on both the ``WCSAxes`` class and the individual\n ``CoordinateHelper`` classes. Note that this is provided for compatibility\n with Matplotlib syntax users may be familiar with, but it is not the\n preferred way to change settings. Instead, methods such as ``set_ticks``\n should be preferred. [#7969]\n\n- Moved the argument ``exclude_overlapping`` from ``set_ticks`` to\n ``set_ticklabel``. [#7969]\n\n- Added a ``pad=`` argument to ``set_ticklabel`` to provide a way to control\n the padding between ticks and tick labels. [#7969]\n\n- Added support for setting the tick direction in ``set_ticks`` using the\n ``direction=`` keyword argument. [#7969]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Map ITRS frames to terrestrial WCS coordinates. This will make it possible to\n use WCSAxes to make figures that combine both celestial and terrestrial\n features. An example is plotting the coordinates of an astronomical transient\n over an all- sky satellite image to illustrate the position relative to the\n Earth at the time of the event. The ITRS frame is identified with WCSs that\n use the ``TLON-`` and ``TLAT-`` coordinate types. 
There are several examples\n of WCSs where this syntax is used to describe terrestrial coordinate systems:\n Section 7.4.1 of `WCS in FITS \"Paper II\" <http://adsabs.harvard.edu/abs/2002A%26A...395.1077C>`_\n and the `WCSTools documentation <http://tdc-www.harvard.edu/software/wcstools/wcstools.multiwcs.html>`_.\n [#6990]\n\n- Added the abstract base class for the low-level WCS API described in APE 14\n (https://doi.org/10.5281/zenodo.1188875). [#7325]\n- Add ``WCS.contains()`` function to check if the WCS footprint contains a given sky coordinate. [#7273]\n\n- Added the abstract base class for the high-level WCS API described in APE 14\n (https://doi.org/10.5281/zenodo.1188875). [#7325]\n\n- Added the high-level wrapper class for low-level WCS objects as described in\n APE 14 (https://doi.org/10.5281/zenodo.1188875). [#7326]\n\n- Added a new property ``WCS.has_distortion``. [#7326]\n\n- Deprecated ``_naxis1`` and ``_naxis2`` in favor of ``pixel_shape``. [#7973]\n\n\nAPI Changes\n-----------\n\nastropy.config\n^^^^^^^^^^^^^^\n\nastropy.constants\n^^^^^^^^^^^^^^^^^\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- ``kernel`` can now be a tuple. [#7561]\n\n- Not technically an API changes, however, the doc string indicated that ``boundary=None``\n was the default when actually it is ``boundary='fill'``. The doc string has been corrected,\n however, someone may interpret this as an API change not realising that nothing has actually\n changed. [#7293]\n\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Fixed ``astropy.coordinates.concatenate`` to include velocity data in the\n concatenation. [#7922]\n\n- Changed the name of the single argument to ``Frame.realize_frame()`` from the\n (incorrect) ``representation_type`` to ``data``. [#7923]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\nastropy.extern\n^^^^^^^^^^^^^^\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- If a fast reader is explicitly selected (e.g. ``fast_reader='force'``) and\n options which are incompatible with the fast reader are provided\n (e.g. ``quotechar='##'``) then now a ``ParameterError`` exception will be\n raised. [#5578]\n\n- The fast readers will now raise ``InconsistentTableError`` instead of\n ``CParserError`` if the number of data and header columns do not match.\n [#5578]\n\n- Changed a number of ``ValueError`` exceptions to ``InconsistentTableError``\n in places where the exception is related to parsing a table which is\n inconsistent with the specified table format. Note that\n ``InconsistentTableError`` inherits from ``ValueError`` so no user code\n changes are required. [#7425]\n\nastropy.io.misc\n^^^^^^^^^^^^^^^\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n- The ``fits.table_to_hdu()`` function will translate any column ``format``\n attributes to a TDISPn format string, if possible, and store it as a TDISPn\n keyword in the ``HDU`` header. [#7226]\n\nastropy.io.registry\n^^^^^^^^^^^^^^^^^^^\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Change the order of the return values from ``FittingWithOutlierRemoval``,\n such that ``fitted_model`` comes first, for consistency with other fitters.\n For the second value, return only a boolean outlier ``mask``, instead of the\n previous ``MaskedArray`` (which included a copy of the input data that was\n both redundant and inadvertently corrupted at masked points). Return a\n consistent type for the second value when ``niter=0``. [#7407]\n\n- Set the minimum value for the ``bolometric_flux`` parameter of the\n ``BlackBody1D`` model to zero. 
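To make the new ``FittingWithOutlierRemoval`` return order concrete, a short, hedged sketch (the data values and the choice of a simple linear model are illustrative only)::

    import numpy as np
    from astropy.modeling import models, fitting
    from astropy.stats import sigma_clip

    x = np.linspace(0, 10, 50)
    y = 2.0 * x + 1.0
    y[10] += 100.0  # inject an outlier

    fit = fitting.FittingWithOutlierRemoval(fitting.LinearLSQFitter(),
                                            sigma_clip, niter=3, sigma=3.0)
    # The fitted model now comes first, followed by a boolean outlier mask.
    fitted_model, outlier_mask = fit(models.Polynomial1D(degree=1), x, y)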
[#7045]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- Add two new uncertainty classes, ``astropy.nddata.VarianceUncertainty`` and\n ``astropy.nddata.InverseVariance``. [#6971]\n\nastropy.samp\n^^^^^^^^^^^^\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- String values can now be used for the ``cenfunc`` and ``stdfunc``\n keywords in the ``SigmaClip`` class and ``sigma_clip`` and\n ``sigma_clipped_stats`` functions. [#7478]\n\n- The ``SigmaClip`` class and ``sigma_clip`` and\n ``sigma_clipped_stats`` functions now have a ``masked`` keyword,\n which can be used to return either a masked array (default) or an\n ndarray with the min/max values. [#7478]\n\n- The ``iters`` keyword has been renamed (and deprecated) to\n ``maxiters`` in the ``SigmaClip`` class and ``sigma_clip`` and\n ``sigma_clipped_stats`` functions. [#7478]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- ``Table.read()`` on a FITS binary table file will convert any TDISPn header\n keywords to a Python formatting string when possible, and store it in the\n column ``format`` attribute. [#7226]\n\n- No values provided to stack will now raise ``ValueError`` rather than\n ``TypeError``. [#7674]\n\nastropy.tests\n^^^^^^^^^^^^^\n\n- ``from astropy.tests.helper import *`` no longer includes\n ``quantity_allclose``. However,\n ``from astropy.tests.helper import quantity_allclose`` would still work.\n [#7381]\n\n- ``warnings_to_ignore_by_pyver`` option in\n ``enable_deprecations_as_exceptions()`` now takes ``None`` as key.\n Any deprecation message that is mapped to ``None`` will be ignored\n regardless of the Python version. [#7790]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Added the ability to use ``local`` as time scale in ``Time`` and\n ``TimeDelta``. [#6487]\n\n- Comparisons, addition, and subtraction of ``Time`` instances with non-time\n instances will now return ``NotImplemented`` rather than raise the\n ``Time``-specific ``OperandTypeError``. This will generally lead to a\n regular ``TypeError``. As a result, ``OperandTypeError`` now only occurs if\n the operation is between ``Time`` instances of incompatible type or scale.\n [#7584]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- In ``UnitBase.compose()``, if a sequence (list|tuple) is passed in to\n ``units``, the default for ``include_prefix_units`` is set to\n `True`, so that no units get ignored. [#6957]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- ``InheritDocstrings`` now also works on class properties. [#7166]\n\n- ``diff_values()``, ``report_diff_values()``, and ``where_not_allclose()``\n utility functions are moved from ``astropy.io.fits.diff``. [#7444]\n\n- ``invalidate_caches()`` has been removed from the\n ``astropy.utils.compat`` namespace, use it directly from ``importlib``. [#7872]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- In ``ImageNormalize``, the default for ``clip`` is set to ``True``.\n [#7800]\n\n- Changed ``AsymmetricPercentileInterval`` and ``MinMaxInterval`` to\n ignore NaN values in arrays. [#7360]\n\n- Automatically default to using ``grid_type='contours'`` in WCSAxes when using\n a custom ``Transform`` object if the transform has no inverse. [#7847]\n\nastropy.wcs\n^^^^^^^^^^^\n\nPerformance Improvements\n------------------------\n\n- Reduced import time by more cautious use of the standard library. [#7647]\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- Major performance overhaul to ``convolve()``. 
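A brief, hedged illustration of the ``SigmaClip``/``sigma_clip`` options described above (``maxiters``, ``masked`` and string ``cenfunc``/``stdfunc``; the input array is arbitrary)::

    import numpy as np
    from astropy.stats import SigmaClip, sigma_clip

    data = np.concatenate([np.random.normal(0.0, 1.0, 100), [50.0, -40.0]])

    sigclip = SigmaClip(sigma=3.0, maxiters=5, cenfunc='median', stdfunc='std')
    clipped = sigclip(data)                                          # masked array by default
    bounded = sigma_clip(data, sigma=3.0, maxiters=5, masked=False)  # plain ndarray instead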
[#7293]\n\n- ``convolve()``: Boundaries ``fill``, ``extend``, and ``wrap`` now use a single\n implementation that pads the image with the correct boundary values before convolving.\n The runtimes of these three were significantly skewed. They now have\n equivalent runtimes that are also faster than before due to performant contiguous\n memory access. However, this does increase the memory footprint as an entire\n new image array is required plus that needed for the padded region. [#7293]\n\n- ``convolve()``: Core computation ported from Cython to C. Several optimization\n techniques have been implemented to achieve performance gains, e.g. compiler\n hoisting and vectorization. Compiler optimization level ``-O2`` is required for\n hoisting and ``-O3`` for vectorization. [#7293]\n\n- ``convolve()``: ``nan_treatment='interpolate'`` was slow to compute irrespective of\n whether any NaN values exist within the array. The input array is now\n checked for NaN values and interpolation is disabled if none are found. This is a\n significant performance boost for arrays without NaN values. [#7293]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Sped up creating SkyCoord objects by a factor of ~2 in some cases. [#7615]\n\n- Sped up getting xyz vectors from ``CartesianRepresentation`` (which\n is used a lot internally). [#7638]\n\n- Sped up transformations and some representation methods by replacing\n python code with (compiled) ``erfa`` ufuncs. [#7639]\n\n- Sped up adding differential (velocity) data to representations by a factor of\n ~20, which improves the speed of frame and SkyCoord initialization. [#7924]\n\n- Refactored ``SkyCoord`` initializer to improve performance and code clarity.\n [#7958]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Sped up creating new composite units, and raising units to some power.\n [#7549]\n\n- Sped up Unit.to when target unit is the same as the original unit.\n [#7643]\n\n- Lazy-load ``scipy.special`` to shorten ``astropy.units`` import time.\n [#7636]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Significantly sped up drawing of contours in WCSAxes. [#7568]\n\nBug Fixes\n---------\n\nastropy.config\n^^^^^^^^^^^^^^\n\nastropy.constants\n^^^^^^^^^^^^^^^^^\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- ``EarthLocation.of_address`` now uses the OpenStreetMap geocoding API by\n default to retrieve coordinates, with the Google API (which now requires an\n API key) as an option. [#7918]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\nastropy.extern\n^^^^^^^^^^^^^^\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Fixed a problem when ``guess=True`` that ``fast_reader`` options\n could be dropped after the first fast reader class was tried. [#5578]\n\n- Units in CDS-formatted tables are now parsed correctly by the units\n module. [#7348]\n\nastropy.io.misc\n^^^^^^^^^^^^^^^\n\n- Fixed bug when writing a table with masked columns to HDF5. Previously\n the mask was being silently dropped. If the ``serialize_meta`` option is\n enabled the data mask will now be written as an additional column and the\n masked columns will round-trip correctly. [#7481]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Added support for ``copy.copy`` and ``copy.deepcopy`` for ``HDUList``. [#7218]\n\n- Override ``HDUList.copy()`` to return a shallow HDUList instance. 
[#7218]\n\nastropy.io.registry\n^^^^^^^^^^^^^^^^^^^\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Fix behaviour of certain models with units, by making certain unit-related\n attributes readonly. [#7210]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- Fixed rounding behavior in ``overlap_slices`` for even-sized small\n arrays. [#7859]\n\nastropy.samp\n^^^^^^^^^^^^\n\nastropy.stats\n^^^^^^^^^^^^^\n\nastropy.table\n^^^^^^^^^^^^^\n\nastropy.tests\n^^^^^^^^^^^^^\n\nastropy.time\n^^^^^^^^^^^^\n\n- Fix a bug when setting a ``TimeDelta`` array item with plain float value(s).\n This was always interpreted as a JD (day) value regardless of the\n ``TimeDelta`` format. [#7990]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- To simplify fast creation of ``Quantity`` instances from arrays, one can now\n write ``array << unit`` (equivalent to ``Quantity(array, unit, copy=False)``).\n If ``array`` is already a ``Quantity``, this will convert the quantity to the\n requested units; in-place conversion can be done with ``quantity <<= unit``.\n [#7734]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Fixed a bug due to which ``report_diff_values()`` was reporting incorrect\n number of differences when comparing two ``numpy.ndarray``. [#7470]\n\n- The download progress bar is now only displayed in terminals, to avoid\n polluting piped output. [#7577]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Right ascension coordinates are now shown in hours by default, and the\n ``set_format_unit`` method on ``CoordinateHelper`` now works correctly\n with angle coordinates. [#7215]\n\nastropy.wcs\n^^^^^^^^^^^\n\nOther Changes and Additions\n---------------------------\n\n- The documentation build now uses the Sphinx configuration from sphinx-astropy\n rather than from astropy-helpers. [#7139]\n\n- Versions of Numpy <1.13 are no longer supported. [#7058]\n\n- Running tests now suppresses the output of the installation stage by default,\n to allow easier viewing of the test results. To re-enable the output as\n before, use ``python setup.py test --verbose-install``. [#7512]\n\n- The ERFA functions are now wrapped in ufuncs instead of custom C code,\n leading to some speed improvements, and setting the stage for allowing\n overrides with ``__array_ufunc__``. [#7502]\n\n\n\n3.0.6 (unreleased)\n==================\n\nBug Fixes\n---------\n\nastropy.config\n^^^^^^^^^^^^^^\n\nastropy.constants\n^^^^^^^^^^^^^^^^^\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\nastropy.extern\n^^^^^^^^^^^^^^\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\nastropy.io.misc\n^^^^^^^^^^^^^^^\n\nastropy.io.registry\n^^^^^^^^^^^^^^^^^^^\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\nastropy.samp\n^^^^^^^^^^^^\n\nastropy.stats\n^^^^^^^^^^^^^\n\nastropy.table\n^^^^^^^^^^^^^\n\nastropy.tests\n^^^^^^^^^^^^^\n\nastropy.time\n^^^^^^^^^^^^\n\nastropy.units\n^^^^^^^^^^^^^\n\nastropy.utils\n^^^^^^^^^^^^^\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\nastropy.wcs\n^^^^^^^^^^^\n\n\nOther Changes and Additions\n---------------------------\n\n\n\n\n3.0.5 (2018-10-14)\n==================\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Fixed bug in which consecutive ``StaticMatrixTransform``'s in a frame\n transform path would be combined in the incorrect order. 
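A minimal sketch of the ``<<`` shorthand mentioned above (any array and unit will do)::

    import numpy as np
    import astropy.units as u

    q = np.arange(3.0) << u.m    # equivalent to Quantity(arr, u.m, copy=False)
    q <<= u.cm                   # in-place conversion to the requested unit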
[#7707]\n\nastropy.tests\n^^^^^^^^^^^^^\n\n- Fixing bug that doctests were not picked up from the narrative\n documentation when tests were run for all modules. [#7767]\n\n\n\n3.0.4 (2018-08-02)\n==================\n\nAPI Changes\n-----------\n\nastropy.table\n^^^^^^^^^^^^^\n\n- The private ``_parent`` attribute in the ``info`` attribute of table\n columns was changed from a direct reference to the parent column to a weak\n reference. This was in response to a memory leak caused by having a\n circular reference cycle. This change means that expressions like\n ``col[3:5].info`` will now fail because at the point of the ``info``\n property being evaluated the ``col[3:5]`` weak reference is dead. Instead\n force a reference with ``c = col[3:5]`` followed by\n ``c.info.indices``. [#6277, #7448]\n\n\nBug Fixes\n---------\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- Fixed an bug when creating the ``WCS`` of a cutout (see ``nddata.Cutout2D``)\n when input image's ``WCS`` contains ``SIP`` distortion corrections by\n adjusting the ``crpix`` of the ``astropy.wcs.Sip`` (in addition to\n adjusting the ``crpix`` of the ``astropy.wcs.WCS`` object). This bug\n had the potential to produce large errors in ``WCS`` coordinate\n transformations depending on the position of the cutout relative\n to the input image's ``crpix``. [#7556, #7550]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fix memory leak where updating a table column or deleting a table\n object was not releasing the memory due to a reference cycle\n in the column ``info`` attributes. [#6277, #7448]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Fixed an bug when creating the ``WCS`` slice (see ``WCS.slice()``)\n when ``WCS`` contains ``SIP`` distortion corrections by\n adjusting the ``WCS.sip.crpix`` in addition to adjusting\n ``WCS.wcs.crpix``. This bug had the potential to produce large errors in\n ``WCS`` coordinate transformations depending on the position of the slice\n relative to ``WCS.wcs.crpix``. [#7556, #7550]\n\n\nOther Changes and Additions\n---------------------------\n\n- Updated bundled wcslib to v 5.19.1 [#7688]\n\n\n3.0.3 (2018-06-01)\n==================\n\nBug Fixes\n---------\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Fix stripping correct (header) comment line from ``meta['comments']``\n in the ``CommentedHeader`` reader for all ``header_start`` settings. [#7508]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Raise error when attempting to open gzipped FITS file in 'append' mode.\n [#7473]\n\n- Fix a bug when writing to FITS a table that has a column description\n with embedded blank lines. [#7482]\n\nastropy.tests\n^^^^^^^^^^^^^\n\n- Enabling running tests for multiple packages when specified comma\n separated. [#7463]\n\n\n3.0.2 (2018-04-23)\n==================\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Computing a 3D separation between two ``SkyCoord`` objects (with the\n ``separation_3d`` method) now works with or without velocity data attached to\n the objects. [#7387]\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- Fix validate with xmllint=True. [#7255, #7283]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- ``FittingWithOutlierRemoval`` now handles model sets, as long as the\n underlying fitter supports masked values. [#7199]\n\n- Remove assumption that ``model_set_axis == 0`` for 2D models in\n ``LinearLSQFitter``. [#7317, #7199]\n\n- Fix the shape of the outputs when a model set is evaluated with\n ``model_set_axis=False`` . 
[#7317]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- Accept a tuple for the ``axis`` parameter in ``sigma_clip``, like the\n underlying ``numpy`` functions and some other functions in ``stats``. [#7199]\n\nastropy.tests\n^^^^^^^^^^^^^\n\n- The function ``quantity_allclose`` was moved to the ``units`` package with\n the new, shorter name ``allclose``. This eliminates a runtime dependency on\n ``pytest`` which was causing issues for some affiliated packages. The old\n import will continue to work but may be deprecated in the future. [#7252]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Added a units-aware ``allclose`` function (this was previously available in\n the ``tests`` module as ``quantity_allclose``). To complement ``allclose``,\n a new ``isclose`` function is also added and backported. [#7252]\n\n\n3.0.1 (2018-03-12)\n==================\n\nBug Fixes\n---------\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Fix a unicode decode error when reading a table with non-ASCII characters.\n The fast C reader cannot handle unicode so the code now uses the pure-Python\n reader in this case. [#7103]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Updated the bundled CFITSIO library to 3.430. This is to remedy a critical\n security vulnerability that was identified by NASA. See\n ``cextern/cfitsio/docs/changes.txt`` for additional information. [#7274]\n\nastropy.io.misc\n^^^^^^^^^^^^^^^\n\n- Make sure that a sufficiently recent version of ASDF is installed when\n running test suite against ASDF tags and schemas. [#7205]\n\nastropy.io.registry\n^^^^^^^^^^^^^^^^^^^\n\n- Fix reading files with serialized metadata when using a Table subclass. [#7213]\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- Fix lookup fields by ID. [#7208]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Fix model set evaluation over common input when model_set_axis > 0. [#7222]\n\n- Fixed the evaluation of compound models with units. This required adding the\n ability to have ``input_units_strict`` and ``input_units_allow_dimensionless``\n be dictionaries with input names as keys. [#6952]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- ``quantity_helper`` no longer requires ``scipy>=0.18``. [#7219]\n\n\n3.0 (2018-02-12)\n================\n\nNew Features\n------------\n\nastropy.constants\n^^^^^^^^^^^^^^^^^\n\n- New context manager ``set_enabled_constants`` to temporarily use an older\n version. [#7008]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- The ``Distance`` object now accepts ``parallax`` as a keyword in the\n initializer, and supports retrieving a parallax (as an ``Angle``) via\n the ``.parallax`` attributes. [#6855]\n\n- The coordinate frame classes (subclasses of ``BaseCoordinateFrame``) now\n always have ``.velocity``, ``.proper_motion``, and ``.radial_velocity``\n properties that provide shorthands to the full-space Cartesian velocity as\n a ``CartesianDifferential``, the 2D proper motion as a ``Quantity``, and the\n radial or line-of-sight velocity as a ``Quantity``. [#6869]\n\n- ``SkyCoord`` objects now support storing and tranforming differentials - i.e.,\n both radial velocities and proper motions. [#6944]\n\n- All frame classes now automatically get sensible representation mappings for\n velocity components. For example, ``d_x``, ``d_y``, ``d_z`` are all\n automatically mapped to frame component namse ``v_x``, ``v_y``, ``v_z``.\n [#6856]\n\n- ``SkyCoord`` objects now support updating the position of a source given its\n space motion and a new time or time difference. 
[#6872]\n\n- The frame classes now accept a representation class or differential class, or\n string names for either, through the keyword arguments ``representation_type``\n and ``differential_type`` instead of ``representation`` and\n ``differential_cls``. [#6873]\n\n- The frame classes (and ``SkyCoord``) now give more useful error messages when\n incorrect attribute names are given. Instead of using the representation\n attribute names, they use the frame attribute names. [#7106]\n\n- ``EarthLocation`` now has a method to compute the gravitational redshift due\n due to solar system bodies. [#6861, #6935]\n\n- ``EarthLocation`` now has a ``get_gcrs`` convenience method to get the\n location in GCRS coordinates. [#6861, #6935]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Expanded the FITS ``Column`` interface to accept attributes pertaining to the FITS\n World Coordinate System, which includes spatial(celestial) and time coordinates. [#6359]\n\n- Added ``ver`` attribute to set the ``EXTVER`` header keyword to ``ImageHDU``\n and ``TableHDU``. [#6454]\n\n- The performance for reading FITS tables has been significantly improved,\n in particular for cases where the tables contain one or more string columns\n and when done through ``Table.read``. [#6821]\n\n- The performance for writing tables from ``Table.write`` has now been\n significantly improved for tables containing one or more string columns. [#6920]\n\n- The ``Table.read`` now supports a ``memmap=`` keyword argument to control\n whether or not to use memory mapping when reading the table. [#6821]\n\n- When reading FITS tables with ``fits.open``, a new keyword argument\n ``character_as_bytes`` can be passed - when set to `True`, character columns\n are returned as Numpy byte arrays (Numpy type S) while when set to `False`,\n the same columns are decoded to Unicode strings (Numpy type U) which uses more\n memory. [#6821]\n\n- The ``table_to_hdu`` function and the ``BinTableHDU.from_columns`` and\n ``FITS_rec.from_columns`` methods now include a ``character_as_bytes``\n keyword argument - if set to `True`, then when string columns are accessed,\n byte columns will be returned, which can provide significantly improved\n performance. [#6920]\n\n- Added support for writing and reading back a table which has \"mixin columns\"\n such as ``SkyCoord`` or ``EarthLocation`` with no loss of information. [#6912]\n\n- Enable tab-completion for ``FITS_rec`` column names and ``Header`` keywords\n with IPython 5 and later. [#7071]\n\nastropy.io.misc\n^^^^^^^^^^^^^^^\n\n- When writing to HDF5 files, the serialized metadata are now saved in a new\n dataset, instead of the HDF5 dataset attributes. This allows for metadata of\n any dimensions. [#6304]\n\n- Added support in HDF5 for writing and reading back a table which has \"mixin\n columns\" such as ``SkyCoord`` or ``EarthLocation`` with no loss of\n information. [#7007]\n\n- Add implementations of astropy-specific ASDF tag types. [#6790]\n\n- Add ASDF tag and schema for ICRSCoord. [#6904]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Add unit support for tabular models. [#6529]\n\n- A ``deepcopy()`` method was added to models. [#6515]\n\n- Added units support to ``AffineTransformation``. [#6853]\n\n- Added ``is_separable`` function to modeling to test the\n separability of a model. [#6746]\n\n- Added ``Model.separable`` property. It returns a boolean value or\n ``None`` if not set. [#6746]\n\n- Support masked array values in ``LinearLSQFitter`` (instead of silently\n ignorning the mask). 
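To illustrate the velocity support described in the coordinates entries above, a hedged sketch (all numbers are arbitrary)::

    import astropy.units as u
    from astropy.coordinates import SkyCoord

    c = SkyCoord(ra=10.0 * u.deg, dec=20.0 * u.deg, distance=100.0 * u.pc,
                 pm_ra_cosdec=5.0 * u.mas / u.yr, pm_dec=-3.0 * u.mas / u.yr,
                 radial_velocity=25.0 * u.km / u.s)
    c.galactic            # velocity components transform along with the position
    c.radial_velocity     # convenience accessor on the underlying frame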
[#6927]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- Added false alarm probability computation to ``astropy.stats.LombScargle``\n [#6488]\n\n- Implemented Kuiper functions in ``astropy.stats`` [#3724, #6565]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Added support for reading and writing ``astropy.time.Time`` Table columns\n to and from FITS tables, to the extent supported by the FITS standard. [#6176]\n\n- Improved exception handling and error messages when column ``format``\n attribute is incorrect for the column type. [#6385]\n\n- Allow to pass ``htmldict`` option to the jsviewer writer. [#6551]\n\n- Added new table operation ``astropy.table.setdiff`` that returns the set\n difference of table rows for two tables. [#6443]\n\n- Added support for reading time columns in FITS compliant binary tables\n as ``astropy.time.Time`` Table columns. [#6442]\n\n- Allowed to remove table rows through the ``__delitem__`` method. [#5839]\n\n- Added a new ``showtable`` command-line script to view binary or ASCII table\n files. [#6859]\n\n- Added new table property ``astropy.table.Table.loc_indices`` that returns the\n location of rows by indexes. [#6831]\n\n- Allow updating of table by indices through the property ``astropy.table.Table.loc``. [#6831]\n\n- Enable tab-completion for column names with IPython 5 and later. [#7071]\n\n- Allow getting and setting a table Row using multiple column names. [#7107]\n\nastropy.tests\n^^^^^^^^^^^^^\n\n- Split pytest plugins into separate modules. Move remotedata, openfiles,\n doctestplus plugins to standalone repositories. [#6384, #6606]\n\n- When testing, astropy (or the package being tested) is now installed to\n a temporary directory instead of copying the build. This allows\n entry points to work correctly. [#6890]\n\n- The tests_require setting in setup.py now works properly when running\n 'python setup.py test'. [#6892]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Deprecated conversion of quantities to truth values. Currently, the expression\n ``bool(0 * u.dimensionless_unscaled)`` evaluates to ``True``. In the future,\n attempting to convert a ``Quantity`` to a ``bool`` will raise ``ValueError``.\n [#6580, #6590]\n\n- Modify the ``brightness_temperature`` equivalency to provide a surface\n brightness equivalency instead of the awkward assumed-per-beam equivalency\n that previously existed [#5173, #6663]\n\n- Support was added for a number of ``scipy.special`` functions. [#6852]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- The ``astropy.utils.console.ProgressBar.map`` class method now supports the\n ``ipython_widget`` option. You can now pass it both ``multiprocess=True`` and\n ``ipython_widget=True`` to get both multiprocess speedup and a progress bar\n widget in an IPython Notebook. [#6368]\n\n- The ``astropy.utils.compat.funcsigs`` module has now been deprecated. Use the\n Python 'inspect' module directly instead. [#6598]\n\n- The ``astropy.utils.compat.futures`` module has now been deprecated. Use the\n Python 'concurrent.futures' module directly instead. [#6598]\n\n- ``JsonCustomEncoder`` is expanded to handle ``Quantity`` and ``UnitBase``.\n [#5471]\n\n- Added a ``dcip_xy`` method to IERS that interpolates along the dX_2000A and\n dY_2000A columns of the IERS table. Hence, the data for the CIP offsets is\n now available for use in coordinate frame conversion. [#5837]\n\n- The functions ``matmul``, ``broadcast_arrays``, ``broadcast_to`` of the\n ``astropy.utils.compat.numpy`` module have been deprecated. Use the\n NumPy functions directly. 
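A small sketch of the new ``setdiff`` table operation mentioned above (toy tables, for illustration only)::

    from astropy.table import Table, setdiff

    t1 = Table({'a': [1, 2, 3], 'b': ['x', 'y', 'z']})
    t2 = Table({'a': [2, 3], 'b': ['y', 'z']})
    setdiff(t1, t2)   # rows of t1 that do not appear in t2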
[#6691]\n\n- The ``astropy.utils.console.ProgressBar.map`` class method now returns\n results in sequential order. Previously, if you set ``multiprocess=True``,\n then the results could arrive in any arbitrary order, which could be a nasty\n shock. Although the function will still be evaluated on the items in\n arbitrary order, the return values will arrive in the same order in which the\n input items were provided. The method is now a thin wrapper around\n ``astropy.utils.console.ProgressBar.map_unordered``, which preserves the old\n behavior. [#6439]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Enable Matplotlib's subtraction shorthand syntax for composing and\n inverting trasformations for the ``WCSWorld2PixelTransform`` and\n ``WCSPixel2WorldTransform`` classes by setting ``has_inverse`` to ``True``.\n In order to implement a unit test, also implement the equality comparison\n operator for both classes. [#6531]\n\n- Added automatic hiding of axes labels when no tick labels are drawn on that\n axis. This parameter can be configured with\n ``WCSAxes.coords[*].set_axislabel_visibility_rule`` so that labels are automatically\n hidden when no ticks are drawn or always shown. [#6774]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Added a new function ``celestial_frame_to_wcs`` to convert from\n coordinate frames to WCS (the opposite of what ``wcs_to_celestial_frame``\n currently does. [#6481]\n\n- ``wcslib`` was updated to v 5.18. [#7066]\n\n\nAPI Changes\n-----------\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- ``Gaussian2DKernel`` now accepts ``x_stddev`` in place of ``stddev`` with\n an option for ``y_stddev``, if different. It also accepts ``theta`` like\n ``Gaussian2D`` model. [#3605, #6748]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Deprecated ``recommended_units`` for representations. These were used to\n ensure that any angle was presented in degrees in sky coordinates and\n frames. This is more logically done in the frame itself. [#6858]\n\n- As noted above, the frame class attributes ``representation`` and\n ``differential_cls`` are being replaced by ``representation_type`` and\n ``differential_type``. In the next version, using ``representation`` will raise\n a deprecation warning. [#6873]\n\n- Coordinate frame classes now can't be added to the frame transform graph if\n they have frame attribute names that conflict with any component names. This\n is so ``SkyCoord`` can uniquely identify and distinguish frame attributes from\n frame components. [#6871]\n\n- Slicing and reshaping of ``SkyCoord`` and coordinate frames no longer passes\n the new object through ``__init__``, but directly sets atttributes on a new\n instance. This speeds up those methods by an order of magnitude, but means\n that any customization done in ``__init__`` is by-passed. [#6941]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Allow ECSV files to be auto-identified by ``Table.read`` or\n ``Table.write`` based on the ``.ecsv`` file name suffix. In this case it\n is not required to provide the ``format`` keyword. [#6552]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Automatically detect and handle compression in FITS files that are opened by\n passing a file handle to ``fits.open`` [#6373]\n\n- Remove the ``nonstandard`` checksum option. [#6571]\n\nastropy.io.misc\n^^^^^^^^^^^^^^^\n\n- When writing to HDF5 files, the serialized metadata are now saved in a new\n dataset instead of the HDF5 dataset attributes. This allows for metadata of\n any dimensions. 
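As a short, hedged example of the new ``Gaussian2DKernel`` arguments noted above::

    from astropy.convolution import Gaussian2DKernel

    # x_stddev replaces the old stddev argument; y_stddev and theta are optional.
    kernel = Gaussian2DKernel(x_stddev=2.0, y_stddev=4.0, theta=0.3)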
[#6304]\n\n- Deprecated the ``usecPickle`` kwarg of ``fnunpickle`` and ``fnpickle`` as\n it was needed only for Python2 usage. [#6655]\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- Add handling of ``tree.Group`` elements to ``tree.Resource``. Unified I/O\n or conversion to astropy tables is not affected. [#6262]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Removed deprecated ``GaussianAbsorption1D`` model.\n Use ``Const1D - Gaussian1D`` instead. [#6542]\n\n- Removed the registry from modeling. [#6706]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- When setting the column ``format`` attribute the value is now immediately\n validated. Previously one could set to any value and it was only checked\n when actually formatting the column. [#6385]\n\n- Deprecated the ``python3_only`` kwarg of the\n ``convert_bytestring_to_unicode`` and ``convert_unicode_to_bytestring``\n methods it was needed only for Python2 usage. [#6655]\n\n- When reading in FITS tables with ``Table.read``, string columns are now\n represented using Numpy byte (dtype ``S``) arrays rather than Numpy\n unicode arrays (dtype ``U``). The ``Column`` class then ensures the\n bytes are automatically converted to string as needed. [#6821]\n\n- When getting a table row using multiple column names, if one of the\n names is not a valid column name then a ``KeyError`` exception is\n now raised (previously ``ValueError``). When setting a table row,\n if the right hand side is not a sequence with the correct length\n then a ``ValueError`` is now raised (previously in certain cases\n a ``TypeError`` was raised). [#7107]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- ``download_files_in_parallel`` now always uses ``cache=True`` to make the\n function work on Windows. [#6671]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- The Astropy matplotlib plot style has been deprecated. It will continue to\n work in future but is no longer documented. [#6991]\n\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Frame objects now use the default differential even if the representation is\n explicitly provided as long as the representation provided is the same type as\n the default representation. [#6944]\n\n- Coordinate frame classes now raise an error when they are added to the frame\n transform graph if they have frame attribute names that conflict with any\n component names. [#6871]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Added support for reading very large tables in chunks to reduce memory\n usage. [#6458]\n\n- Strip leading/trailing white-space from latex lines to avoid issues when\n matching ``\\begin{tabular}`` statements. This is done by introducing a new\n ``LatexInputter`` class to override the ``BaseInputter``. [#6311]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Properly handle opening of FITS files from ``http.client.HTTPResponse`` (i.e.\n it now works correctly when passing the results of ``urllib.request.urlopen``\n to ``fits.open``). [#6378]\n\n- Fix the ``fitscheck`` script for updating invalid checksums, or removing\n checksums. [#6571]\n\n- Fixed potential problems with the compression module [#6732]\n\n- Always use the 'D' format for floating point values in ascii tables. [#6938]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fix getting a table row when using multiple column names (for example\n ``t[3]['a', 'b', 'c']``). Also fix a problem when setting an entire row:\n if setting one of the right-hand side values failed this could result in\n a partial update of the referenced parent table before the exception is\n raised. 
[#7107]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Initialization of ``Time`` instances with bytes or arrays with dtype ``S``\n will now automatically attempt to decode as ASCII. This ensures ``Column``\n instances with ASCII strings stored with dtype ``S`` can be used.\n [#6823, #6903]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Fixed a bug that caused PLY files to not be generated correctly in Python 3.\n [#7174]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- The ``deprecated`` decorator applied to a class will now modify the class\n itself, rather than to create a class that just looks and behave like the\n original. This is needed so that the Python 3 ``super`` without arguments\n works for decorated classes. [#6615]\n\n- Fixed ``HomogeneousList`` when setting one item or a slice. [#6773]\n\n- Also check the type when creating a new instance of\n ``HomogeneousList``. [#6773]\n\n- Make ``HomogeneousList`` work with iterators and generators when creating the\n instance, extending it, or using when setting a slice. [#6773]\n\n\nOther Changes and Additions\n---------------------------\n\n- Versions of Python <3.5 are no longer supported. [#6556]\n\n- Versions of Pytest <3.1 are no longer supported. [#6419]\n\n- Versions of Numpy <1.10 are no longer supported. [#6593]\n\n- The bundled CFITSIO was updated to version 3.41 [#6477]\n\n- ``analytic_functions`` sub-package is removed.\n Use ``astropy.modeling.blackbody``. [#6541]\n\n- ``astropy.vo`` sub-package is removed. Use ``astropy.samp`` for SAMP and\n ``astroquery`` for VO cone search. [#6540]\n\n- The guide to setting up Emacs for code development was simplified, and\n updated to recommend ``flycheck`` and ``flake8`` for syntax checks. [#6692]\n\n- The bundled version of PLY was updated to 3.10. [#7174]\n\n\n\n\n2.0.10 (unreleased)\n===================\n\nBug Fixes\n---------\n\nastropy.config\n^^^^^^^^^^^^^^\n\nastropy.constants\n^^^^^^^^^^^^^^^^^\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\nastropy.extern\n^^^^^^^^^^^^^^\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\nastropy.io.misc\n^^^^^^^^^^^^^^^\n\nastropy.io.registry\n^^^^^^^^^^^^^^^^^^^\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\nastropy.stats\n^^^^^^^^^^^^^\n\nastropy.table\n^^^^^^^^^^^^^\n\nastropy.tests\n^^^^^^^^^^^^^\n\nastropy.time\n^^^^^^^^^^^^\n\nastropy.units\n^^^^^^^^^^^^^\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Fixed the spelling of the 'luminous emittance/illuminance' physical\n property. [#7942]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Fixed a bug that caused origin to be incorrect if not specified. [#7927]\n\n- Fixed a bug that caused an error when plotting grids multiple times\n with grid_type='contours'. [#7927]\n\nastropy.vo\n^^^^^^^^^^\n\nastropy.wcs\n^^^^^^^^^^^\n\n\nOther Changes and Additions\n---------------------------\n\n\n\n2.0.9 (2018-10-14)\n==================\n\nBug Fixes\n---------\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Fix reading of big files with the fast reader. [#7885]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- ``HDUList.__contains__()`` now works with ``HDU`` arguments. That is,\n ``hdulist[0] in hdulist`` now works as expected. 
[#7282]\n\n- ``HDUList`` objects can now be written to streams in Python 3. [#7850]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- Fixed a bug in CCDData.read when the HDU is not specified and the first one\n is empty, so that the function searches for the first HDU with data, which may not\n have an image extension. [#7739]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- Fixed bugs in biweight statistics functions where a constant data\n array (or if using the axis keyword, constant along an axis) would\n return NaN. [#7737]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fixed a bug in ``to_pandas()`` where integer type masked columns were always\n getting converted to float. This could cause loss of precision. Now this only\n occurs if there are actually masked data values, in which case ``pandas``\n does require the values to be float so that ``NaN`` can be used to mark the\n masked values. [#7741, #7747]\n\nastropy.tests\n^^^^^^^^^^^^^\n\n- Change the name of the configuration variable controlling the location of the\n Astropy cache in the Pytest plugin from ``cache_dir`` to\n ``astropy_cache_dir``. The command line flag also changed to\n ``--astropy-cache-dir``. This prevents a conflict with the ``cache_dir``\n variable provided by pytest itself. Also made similar change to\n ``config_dir`` option as a precaution. [#7721]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- ``UnrecognizedUnit`` instances can now be compared to any other object\n without raising `TypeError`. [#7606]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Fix compatibility with Matplotlib 3.0. [#7839]\n\n- Fix an issue that caused a crash when using WCSAxes with a custom Transform\n object and when using ``grid_type='contours'`` to plot a grid. [#7846]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Instead of raising an error ``astropy.wcs`` now returns the input when\n the input has zero size. [#7746]\n\n- Fix ``malloc(0)`` bug in ``pipeline_all_pixel2world()`` and\n ``pipeline_pix2foc()``. They now raise an exception for input with\n zero coordinates, i.e. shape = (0, n). [#7806]\n\n- Fixed an issue with scalar input when WCS.naxis is one. [#7858]\n\nOther Changes and Additions\n---------------------------\n\n- Added a new ``astropy.__citation__`` attribute which gives a citation\n for Astropy in bibtex format. Made sure that both this and\n ``astropy.__bibtex__`` work outside the source environment, too. [#7718]\n\n\n\n2.0.8 (2018-08-02)\n==================\n\nBug Fixes\n---------\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- Correct data type conversion for non-float masked kernels. [#7542]\n\n- Fix non-float or masked, zero sum kernels when ``normalize_kernel=False``.\n Non-floats would yield a type error and masked kernels were not being filled.\n [#7541]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Ensure that relative humidities can be given as Quantities, rather than taking\n any quantity and just stripping its unit. [#7668]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- Fixed ``Cutout2D`` output WCS NAXIS values to reflect the cutout\n image size. [#7552]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fixed a bug in ``add_columns`` method where ``rename_duplicate=True`` would\n cause an error if there were no duplicates. [#7540]\n\nastropy.tests\n^^^^^^^^^^^^^\n\n- Fixed bug in ``python setup.py test --coverage`` on Windows machines. [#7673]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Avoid rounding errors when converting ``Quantity`` to ``TimeDelta``. 
[#7625]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Fixed a bug that caused the position of the tick values in decimal mode\n to be incorrectly determined. [#7332]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Fixed a bug that caused ``wcs_to_celestial_frame``, ``skycoord_to_pixel``, and\n ``pixel_to_skycoord`` to raise an error if the axes of the celestial WCS were\n swapped. [#7691]\n\n\n2.0.7 (2018-06-01)\n==================\n\nBug Fixes\n---------\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Fixed ``Tabular`` models to not change the shape of data. [#7411]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- In ``freedman_bin_width``, if the data has too small IQR,\n raise ``ValueError``. [#7248, #7402]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fix a performance issue in ``MaskedColumn`` where initialization was\n extremely slow for large arrays with the default ``mask=None``. [#7422]\n\n- Fix printing table row indexed with unsigned integer. [#7469]\n\n- Fix copy of mask when copying a Table, as this is no more done systematically\n by Numpy since version 1.14. Also fixed a problem when MaskedColumn was\n initialized with ``mask=np.ma.nomask``. [#7486]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Fixed a bug in Time that raised an error when initializing a subclass of Time\n with a Time object. [#7453]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Fixed a bug that improperly handled unicode case of URL mirror in Python 2.\n [#7493]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Fixed a bug that prevented legends from being added to plots done with\n units. [#7510]\n\n\nOther Changes and Additions\n---------------------------\n\n- Bundled ``pytest-remotedata`` plugin is upgraded to 0.3. [#7493]\n\n\n2.0.6 (2018-04-23)\n==================\n\nBug Fixes\n---------\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- convolve(boundary=None) requires the kernel to be smaller than the image.\n This was never actually checked, it now is and an exception is raised.\n [#7313]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- ``u.quantity_input`` no longer errors if the return annotation for a\n function is ``None``. [#7336, #7380]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Explicitly default to origin='lower' in WCSAxes. [#7331]\n\n- Lists of units are now converted in the Matplotlib unit converter. This means\n that for Matplotlib versions later than 2.2, more plotting functions now work\n with units (e.g. errorbar). [#7037]\n\n\nOther Changes and Additions\n---------------------------\n\n- Updated the bundled CFITSIO library to 3.44. This is to remedy another\n critical security vulnerability that was identified by NASA. See\n ``cextern/cfitsio/docs/changes.txt`` for additional information. [#7370]\n\n\n2.0.5 (2018-03-12)\n==================\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Add a workaround for a bug in the einsum function in Numpy 1.14.0. [#7187]\n\n- Fix problems with printing ``Angle`` instances under numpy 1.14.1. [#7234]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Fixed the ``fitsdiff`` script for matching fits file with one in a\n directory path. [#7085]\n\n- Make sure that lazily-loaded ``HDUList`` is automatically loaded when calling\n ``hdulist.pop``. [#7186]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Propagate weights to underlying fitter in ``FittingWithOutlierRemoval`` [#7249]\n\nastropy.tests\n^^^^^^^^^^^^^\n\n- Support dotted package names as namespace packages when gathering test\n coverage. 
[#7170]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Matplotlib axes have the ``axisbelow`` property to control the z-order of\n ticks, tick labels, and grid lines. WCSAxes will now respect this property.\n This is useful for drawing scale bars or inset boxes, which should have a\n z-order that places them above all ticks and gridlines. [#7098]\n\n\nOther Changes and Additions\n---------------------------\n\n- Updated the bundled CFITSIO library to 3.430. This is to remedy a critical\n security vulnerability that was identified by NASA. See\n ``cextern/cfitsio/docs/changes.txt`` for additional information. [#7274, #7275]\n\n\n2.0.4 (2018-02-06)\n==================\n\nBug Fixes\n---------\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- Fixed IndexError when ``preserve_nan=True`` in ``convolve_fft``. Added\n testing with ``preserve_nan=True``. [#7000]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- The ``sites.json`` file is now parsed explicitly with a UTF-8 encoding. This\n means that future revisions to the file with unicode observatory names can\n be done without breaking the site registry parser. [#7082]\n\n- Working around a bug in Numpy 1.14.0 that broke some coordinate\n transformations. [#7105]\n\n- Fixed a bug where negative angles could be rounded wrongly when converting\n to a string with seconds omitted. [#7148]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- When datafile is missing, fits.tabledump uses input file name to build\n output file name. Fixed how it gets input file name from HDUList. [#6976]\n\n- Fix in-place updates to scaled columns. [#6956]\n\nastropy.io.registry\n^^^^^^^^^^^^^^^^^^^\n\n- Fixed bug in identifying inherited registrations from multiple ancestors [#7156]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Fixed a bug in ``LevMarLSQFitter`` when fitting 2D models with constraints. [#6705]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- ``download_file`` function will check for cache downloaded from mirror URL\n first before attempting actual download if primary URL is unavailable. [#6987]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Fixed test failures for ``astropy.visualization.wcsaxes`` which were due to\n local matplotlibrc files being taken into account. [#7132]\n\n\nOther Changes and Additions\n---------------------------\n\n- Fixed broken links in the documentation. [#6745]\n\n- Substantial performance improvement (potentially >1000x for some cases) when\n converting non-scalar ``coordinates.Angle`` objects to strings. [#7004]\n\n\n2.0.3 (2017-12-13)\n==================\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Ecliptic frame classes now support attributes ``v_x``, ``v_y``, ``v_z`` when\n using with a Cartesian representation. [#6569]\n\n- Added a nicer error message when accidentally calling ``frame.representation``\n instead of ``frame.data`` in the context of methods that use ``._apply()``.\n [#6561]\n\n- Creating a new ``SkyCoord`` from a list of multiple ``SkyCoord`` objects now\n yield the correct type of frame, and works at all for non-equatorial frames.\n [#6612]\n\n- Improved accuracy of velocity calculation in ``EarthLocation.get_gcrs_posvel``.\n [#6699]\n\n- Improved accuracy of radial velocity corrections in\n ``SkyCoord.radial_velocity_correction```. [#6861]\n\n- The precision of ecliptic frames is now much better, after removing the\n nutation from the rotation and fixing the computation of the position of the\n Sun. 
[#6508]\n\nastropy.extern\n^^^^^^^^^^^^^^\n\n- Version 0.2.1 of ``pytest-astropy`` is included as an external package.\n [#6918]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Fix writing the result of ``fitsdiff`` to file with ``--output-file``. [#6621]\n\n- Fix a minor bug where ``FITS_rec`` instances can not be indexed with tuples\n and other sequences that end up with a scalar. [#6955, #6966]\n\nastropy.io.misc\n^^^^^^^^^^^^^^^\n\n- Fix ``ImportError`` when ``hdf5`` is imported first in a fresh Python\n interpreter in Python 3. [#6604, #6610]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- Suppress errors during WCS creation in CCDData.read(). [#6500]\n\n- Fixed a problem with ``CCDData.read`` when the extension wasn't given and the\n primary HDU contained no ``data`` but another HDU did. In that case the header\n were not correctly combined. [#6489]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- Fixed an issue where the biweight statistics functions would\n sometimes cause runtime underflow/overflow errors for float32 input\n arrays. [#6905]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fixed a problem when printing a table when a column is deleted and\n garbage-collected, and the format function caching mechanism happens\n to re-use the same cache key. [#6714]\n\n- Fixed a problem when comparing a unicode masked column (on left side) to\n a bytes masked column (on right side). [#6899]\n\n- Fixed a problem in comparing masked columns in bytes and unicode when the\n unicode had masked entries. [#6899]\n\nastropy.tests\n^^^^^^^^^^^^^\n\n- Fixed a bug that causes tests for rst files to not be run on certain\n platforms. [#6555, #6608]\n\n- Fixed a bug that caused the doctestplus plugin to not work nicely with the\n hypothesis package. [#6605, #6609]\n\n- Fixed a bug that meant that the data.astropy.org mirror could not be used when\n using --remote-data=astropy. [#6724]\n\n- Support compatibility with new ``pytest-astropy`` plugins. [#6918]\n\n- When testing, astropy (or the package being tested) is now installed to\n a temporary directory instead of copying the build. This allows\n entry points to work correctly. [#6890]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Initialization of Time instances now is consistent for all formats to\n ensure that ``-0.5 <= jd2 < 0.5``. [#6653]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Ensure that ``Quantity`` slices can be set with objects that have a ``unit``\n attribute (such as ``Column``). [#6123]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- ``download_files_in_parallel`` now respects the given ``timeout`` value.\n [#6658]\n\n- Fixed bugs in remote data handling and also in IERS unit test related to path\n URL, and URI normalization on Windows. [#6651]\n\n- Fixed a bug that caused ``get_pkg_data_fileobj`` to not work correctly when\n used with non-local data from inside packages. [#6724]\n\n- Make sure ``get_pkg_data_fileobj`` fails if the URL can not be read, and\n correctly falls back on the mirror if necessary. [#6767]\n\n- Fix the ``finddiff`` option in ``find_current_module`` to properly deal\n with submodules. [#6767]\n\n- Fixed ``pyreadline`` import in ``utils.console.isatty`` for older IPython\n versions on Windows. [#6800]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Fixed the vertical orientation of the ``fits2bitmap`` output bitmap\n image to match that of the FITS image. [#6844, #6969]\n\n- Added a workaround for a bug in matplotlib so that the ``fits2bitmap``\n script generates the correct output file type. 
[#6969]\n\n\nOther Changes and Additions\n---------------------------\n\n- No longer require LaTeX to build the documentation locally and\n use mathjax instead. [#6701]\n\n- Ensured that all tests use the Astropy data mirror if needed. [#6767]\n\n\n2.0.2 (2017-09-08)\n==================\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Ensure transformations via ICRS also work for coordinates that use cartesian\n representations. [#6440]\n\n- Fixed a bug that was preventing ``SkyCoord`` objects made from lists of other\n coordinate objects from being written out to ECSV files. [#6448]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Support the ``GZIP_2`` FITS image compression algorithm as claimed\n in docs. [#6486]\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- Fixed a bug that wrote out VO table as version 1.2 instead of 1.3. [#6521]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fix a bug when combining unicode columns via join or vstack. The character\n width of the output column was a factor of 4 larger than needed. [#6459]\n\nastropy.tests\n^^^^^^^^^^^^^\n\n- Fixed running the test suite using --parallel. [#6415]\n\n- Added error handling for attempting to run tests in parallel without having\n the ``pytest-xdist`` package installed. [#6416]\n\n- Fixed issue running doctests with pytest>=3.2. [#6423, #6430]\n\n- Fixed issue caused by antivirus software in response to malformed compressed\n files used for testing. [#6522]\n\n- Updated top-level config file to properly ignore top-level directories.\n [#6449]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Quantity._repr_latex_ now respects precision option from numpy\n printoptions. [#6412]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- For the ``deprecated_renamed_argument`` decorator, refer to the deprecation‘s\n caller instead of ``astropy.utils.decorators``, to makes it easier to find\n where the deprecation warnings comes from. [#6422]\n\n\n2.0.1 (2017-07-30)\n==================\n\nBug Fixes\n---------\n\nastropy.constants\n^^^^^^^^^^^^^^^^^\n\n- Fixed Earth radius to be the IAU2015 value for the equatorial radius.\n The polar value had erroneously been used in 2.0. [#6400]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Added old frame attribute classes back to top-level namespace of\n ``astropy.coordinates``. [#6357]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Scaling an image always uses user-supplied values when given. Added\n defaults for scaling when bscale/bzero are not present (float images).\n Fixed a small bug in when to reset ``_orig_bscale``. [#5955]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Fixed a bug in initializing compound models with units. [#6398]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- Updating CCDData.read() to be more flexible with inputs, don't try to\n delete keywords that are missing from the header. [#6388]\n\nastropy.tests\n^^^^^^^^^^^^^\n- Fixed the test command that is run from ``setuptools`` to allow it to\n gracefully handle keyboard interrupts and pass them on to the ``pytest``\n subprocess. This prompts ``pytest`` to teardown and display useful traceback\n and test information [#6369]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Ticks and tick labels are now drawn in front of, rather than behind,\n gridlines in WCS axes. This improves legibility in situations where\n tick labels may be on the interior of the axes frame, such as the right\n ascension axis of an all-sky Aitoff or Mollweide projection. 
[#6361]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Fix the missing wcskey part in _read_sip_kw, this will cause error when reading sip wcs while there is no default CRPIX1 CRPIX2 keywords and only CRPIX1n CRPIX2n in header. [#6372]\n\n\n2.0 (2017-07-07)\n================\n\nNew Features\n------------\n\nastropy.constants\n^^^^^^^^^^^^^^^^^\n\n- Constants are now organized into version modules, with physical CODATA\n constants in the ``codata2010`` and ``codata2014`` sub-modules,\n and astronomical constants defined by the IAU in the ``iau2012`` and\n ``iau2015`` sub-modules. The default constants in ``astropy.constants``\n in Astropy 2.0 have been updated from ``iau2012`` to ``iau2015`` and\n from ``codata2010`` to ``codata2014``. The constants for 1.3 can be\n accessed in the ``astropyconst13`` sub-module and the constants for 2.0\n (the default in ``astropy.constants``) can also be accessed in the\n ``astropyconst20`` sub-module [#6083]\n\n- The GM mass parameters recommended by IAU 2015 Resolution B 3 have been\n added as ``GM_sun``, ``GM_jup``, and ``GM_earth``, for the Sun,\n Jupiter and the Earth. [#6083]\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- Major change in convolution behavior and keyword arguments. Additional\n details are in the API section. [#5782]\n\n- Convolution with un-normalized and un-normalizable kernels is now possible.\n [#5782]\n\n- Add a new argument, ``normalization_rtol``, to ``convolve_fft``, allowing\n the user to specify the relative error tolerance in the normalization of\n the convolution kernel. [#5649, #5177]\n\n- Models can now be convoluted using ``convolve`` or ``convolve_fft``,\n which generates a regular compound model. [#6015]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Frame attributes set on ``SkyCoord`` are now always validated, and any\n ndarray-like operation (like slicing) will also be done on those. [#5751]\n\n- Caching of all possible frame attributes was implemented. This greatly\n speeds up many ``SkyCoord`` operations. [#5703, #5751]\n\n- A class hierarchy was added to allow the representation layer to store\n differentials (i.e., finite derivatives) of coordinates. This is intended\n to enable support for velocities in coordinate frames. [#5871]\n\n- ``replicate_without_data`` and ``replicate`` methods were added to\n coordinate frames that allow copying an existing frame object with various\n reference or copy behaviors and possibly overriding frame attributes. [#6182]\n\n- The representation class instances can now contain differential objects.\n This is primarily useful for internal operations that will provide support\n for transforming velocity components in coordinate frames. [#6169]\n\n- ``EarthLocation.to_geodetic()`` (and ``EarthLocation.geodetic``) now return\n namedtuples instead of regular tuples. [#6237]\n\n- ``EarthLocation`` now has ``lat`` and ``lon`` properties (equivalent to, but\n preferred over, the previous ``latitude`` and ``longitude``). [#6237]\n\n- Added a ``radial_velocity_correction`` method to ``SkyCoord`` to do compute\n barycentric and heliocentric velocity corrections. [#5752]\n\n- Added a new ``AffineTransform`` class for coordinate frame transformations.\n This class supports matrix operations with vector offsets in position or\n any differential quantities (so far, only velocity is supported). The\n matrix transform classes now subclass from the base affine transform.\n [#6218]\n\n- Frame objects now have experimental support for velocity components. 
Most\n frames default to accepting proper motion components and radial velocity,\n and the velocities transform correctly for any transformation that uses\n one of the ``AffineTransform``-type transformations. For other\n transformations a finite-difference velocity transformation is available,\n although it is not as numerically stable as those that use\n ``AffineTransform``-type transformations. [#6219, #6226]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Allow to specify encoding in ``ascii.read``, only for Python 3 and with the\n pure-Python readers. [#5448]\n\n- Writing latex tables with only a ``tabular`` environment is now possible by\n setting ``latexdict['tabletyle']`` to ``None``. [#6205]\n\n- Allow ECSV format to support reading and writing mixin columns like\n ``Time``, ``SkyCoord``, ``Latitude``, and ``EarthLocation``. [#6181]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Checking available disk space before writing out file. [#5550, #4065]\n\n- Change behavior to warn about units that are not FITS-compliant when\n writing a FITS file but not when reading. [#5675]\n\n- Added absolute tolerance parameter when comparing FITS files. [#4729]\n\n- New convenience function ``printdiff`` to print out diff reports. [#5759]\n\n- Allow to instantiate a ``BinTableHDU`` directly from a ``Table`` object.\n [#6139]\n\nastropy.io.misc\n^^^^^^^^^^^^^^^\n\n- YAML representer now also accepts numpy types. [#6077]\n\nastropy.io.registry\n^^^^^^^^^^^^^^^^^^^\n\n- New functions to unregister readers, writers, and identifiers. [#6217]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Added ``SmoothlyBrokenPowerLaw1D`` model. [#5656]\n\n- Add ``n_submodels`` shared method to single and compound models, which\n allows users to get the number of components of a given single (compound)\n model. [#5747]\n\n- Added a ``name`` setter for instances of ``_CompoundModel``. [#5741]\n\n- Added FWHM properties to Gaussian and Moffat models. [#6027]\n\n- Added support for evaluating models and setting the results for inputs\n outside the bounding_box to a user specified ``fill_value``. This\n is controlled by a new optional boolean keyword ``with_bounding_box``. [#6081]\n\n- Added infrastructure support for units on parameters and during\n model evaluation and fitting, added support for units on all\n functional, power-law, polynomial, and rotation models where this\n is appropriate. A new BlackBody1D model has been added. [#4855, #6183,\n #6204, #6235]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- Added an image class, ``CCDData``. [#6173]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- Added ``biweight_midcovariance`` function. [#5777]\n\n- Added ``biweight_scale`` and ``biweight_midcorrelation``\n functions. [#5991]\n\n- ``median_absolute_deviation`` and ``mad_std`` have ``ignore_nan`` option\n that will use ``np.ma.median`` with nans masked out or ``np.nanmedian``\n instead of ``np.median`` when computing the median. [#5232]\n\n- Implemented statistical estimators for Ripley's K Function. [#5712]\n\n- Added ``SigmaClip`` class. [#6206]\n\n- Added ``std_ddof`` keyword option to ``sigma_clipped_stats``.\n [#6066, #6207]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Issue a warning when assigning a string value to a column and\n the string gets truncated. This can occur because numpy string\n arrays are fixed-width and silently drop characters which do not\n fit within the fixed width. [#5624, #5819]\n\n- Added functionality to allow ``astropy.units.Quantity`` to be written\n as a normal column to FITS files. 
[#5910]\n\n- Add support for Quantity columns (within a ``QTable``) in table\n ``join()``, ``hstack()`` and ``vstack()`` operations. [#5841]\n\n- Allow unicode strings to be stored in a Table bytestring column in\n Python 3 using UTF-8 encoding. Allow comparison and assignment of\n Python 3 ``str`` object in a bytestring column (numpy ``'S'`` dtype).\n If comparison with ``str`` instead of ``bytes`` is a problem\n (and ``bytes`` is really more logical), please open an issue on GitHub.\n [#5700]\n\n- Added functionality to allow ``astropy.units.Quantity`` to be read\n from and written to a VOtable file. [#6132]\n\n- Added support for reading and writing a table with mixin columns like\n ``Time``, ``SkyCoord``, ``Latitude``, and ``EarthLocation`` via the\n ASCII ECSV format. [#6181]\n\n- Bug fix for ``MaskedColumn`` insert method, where ``fill_value`` attribute\n was not being passed along to the copy of the ``MaskedColumn`` that was\n returned. [#7585]\n\nastropy.tests\n^^^^^^^^^^^^^\n\n- ``enable_deprecations_as_exceptions`` function now accepts additional\n user-defined module imports and warning messages to ignore. [#6223, #6334]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- The ``astropy.units.quantity_input`` decorator will now convert the output to\n the unit specified as a return annotation under Python 3. [#5606]\n\n- Passing a logarithmic unit to the ``Quantity`` constructor now returns the\n appropriate logarithmic quantity class if ``subok=True``. For instance,\n ``Quantity(1, u.dex(u.m), subok=True)`` yields ``<Dex 1.0 dex(m)>``. [#5928]\n\n- The ``quantity_input`` decorator now accepts a string physical type in\n addition to of a unit object to specify the expected input ``Quantity``'s\n physical type. For example, ``@u.quantity_input(x='angle')`` is now\n functionally the same as ``@u.quantity_input(x=u.degree)``. [#3847]\n\n- The ``quantity_input`` decorator now also supports unit checking for\n optional keyword arguments and accepts iterables of units or physical types\n for specifying multiple valid equivalent inputs. For example,\n ``@u.quantity_input(x=['angle', 'angular speed'])`` or\n ``@u.quantity_input(x=[u.radian, u.radian/u.yr])`` would both allow either\n a ``Quantity`` angle or angular speed passed in to the argument ``x``.\n [#5653]\n\n- Added a new equivalence ``molar_mass_amu`` between g/mol to\n atomic mass units. [#6040, #6113]\n\n- ``Quantity`` has gained a new ``to_value`` method which returns the value\n of the quantity in a given unit. [#6127]\n\n- ``Quantity`` now supports the ``@`` operator for matrix multiplication that\n was introduced in Python 3.5, for all supported versions of numpy. [#6144]\n\n- ``Quantity`` supports the new ``__array_ufunc__`` protocol introduced in\n numpy 1.13. As a result, operations that involve unit conversion will be\n sped up considerably (by up to a factor of two for costly operations such\n as trigonometric ones). [#2583]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Added a new ``dataurl_mirror`` configuration item in ``astropy.utils.data``\n that is used to indicate a mirror for the astropy data server. [#5547]\n\n- Added a new convenience method ``get_cached_urls`` to ``astropy.utils.data``\n for getting a list of the URLs in your cache. [#6242]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Upgraded the included wcslib to version 5.16. 
[#6225]\n\n The minimum required version of wcslib is 5.14.\n\n\nAPI Changes\n-----------\n\nastropy.analytic_functions\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n- This entire sub-package is deprecated because blackbody has been moved to\n ``astropy.modeling.blackbody``. [#6191]\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- Major change in convolution behavior and keyword arguments.\n ``astropy.convolution.convolve_fft`` replaced ``interpolate_nan`` with\n ``nan_treatment``, and ``astropy.convolution.convolve`` received a new\n ``nan_treatment`` argument. ``astropy.convolution.convolve`` also no longer\n double-interpolates over NaNs, although that is now available\n as a separate ``astropy.convolution.interpolate_replace_nans`` function. See\n :ref:`the backwards compatibility note <astropy_convolve_compat>` for more\n on how to get the old behavior (and why you probably don't want to.) [#5782]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- The ``astropy.coordinates.Galactic`` frame previously had the cartesian\n ordering 'w', 'u', 'v' (for 'x', 'y', and 'z', respectively). This was an\n error and against the common convention. The 'x', 'y', and 'z' axes now\n map to 'u', 'v', and 'w', following the right-handed ('u' points to\n the Galactic center) convention. [#6330]\n\n- Removed deprecated ``angles.rotation_matrix`` and\n ``angles.angle_axis``. Use the routines in\n ``coordinates.matrix_utilities`` instead. [#6170]\n\n- ``EarthLocation.latitude`` and ``EarthLocation.longitude`` are now\n deprecated in favor of ``EarthLocation.lat`` and ``EarthLocation.lon``.\n The former will be removed in a future version. [#6237]\n\n- The ``FrameAttribute`` class and subclasses have been renamed to just contain\n ``Attribute``. For example, ``QuantityFrameAttribute`` is now\n ``QuantityAttribute``. [#6300]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- Cosmological models do not include any contribution from neutrinos or photons\n by default -- that is, the default value of Tcmb0 is 0. This does not affect\n built-in models (such as WMAP or Planck). [#6112]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Remove deprecated ``NumCode`` and ``ImgCode`` properties on FITS\n ``_ImageBaseHDU``. Use module-level constants ``BITPIX2DTYPE`` and\n ``DTYPE2BITPIX`` instead. [#4993]\n\n- ``comments`` meta key (which is ``io.ascii``'s table convention) is output\n to ``COMMENT`` instead of ``COMMENTS`` header. Similarly, ``COMMENT``\n headers are read into ``comments`` meta [#6097]\n\n- Remove compatibility code which forced loading all HDUs on close. The old\n behavior can be used with ``lazy_load_hdus=False``. Because of this change,\n trying to access the ``.data`` attribute from an HDU which is not loaded\n now raises an ``IndexError`` instead of a ``ValueError``. [#6082]\n\n- Deprecated ``clobber`` keyword; use ``overwrite``. [#6203]\n\n- Add EXTVER column to the output of ``HDUList.info()``. [#6124]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Removed deprecated ``Redshift`` model; Use ``RedshiftScaleFactor``. [#6053]\n\n- Removed deprecated ``Pix2Sky_AZP.check_mu`` and ``Pix2Sky_SZP.check_mu``\n methods. [#6170]\n\n- Deprecated ``GaussianAbsorption1D`` model, as it can be better represented\n by subtracting ``Gaussian1D`` from ``Const1D``. [#6200]\n\n- Added method ``sum_of_implicit_terms`` to ``Model``, needed when performing\n a linear fit to a model that has built-in terms with no corresponding\n parameters (primarily the ``1*x`` term of ``Shift``). 
[#6174]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- Removed deprecated usage of parameter ``propagate_uncertainties`` as a\n positional keyword. [#6170]\n\n- Removed deprecated ``support_correlated`` attribute. [#6170]\n\n- Removed deprecated ``propagate_add``, ``propagate_subtract``,\n ``propagate_multiply`` and ``propagate_divide`` methods. [#6170]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- Removed the deprecated ``sig`` and ``varfunc`` keywords in the\n ``sigma_clip`` function. [#5715]\n\n- Added ``modify_sample_size`` keyword to ``biweight_midvariance``\n function. [#5991]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- In Python 3, when getting an item from a bytestring Column it is now\n converted to ``str``. This means comparing a single item to a ``bytes``\n object will always fail, and instead one must compare with a ``str``\n object. [#5700]\n\n- Removed the deprecated ``data`` property of Row. [#5729]\n\n- Removed the deprecated functions ``join``, ``hstack``, ``vstack`` and\n ``get_groups`` from np_utils. [#5729]\n\n- Added ``name`` parameter to method ``astropy.table.Table.add_column`` and\n ``names`` parameter to method ``astropy.table.Table.add_columns``, to\n provide the flexibility to add unnamed columns, mixin objects and also to\n specify explicit names. Default names will be used if not\n specified. [#5996]\n\n- Added optional ``axis`` parameter to ``insert`` method for ``Column`` and\n ``MaskedColumn`` classes. [#6092]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Moved ``units.cgs.emu`` to ``units.deprecated.emu`` due to ambiguous\n definition of \"emu\". [#4918, #5906]\n\n- ``jupiterMass``, ``earthMass``, ``jupiterRad``, and ``earthRad`` no longer\n have their prefixed units included in the standard units. If needed, they\n can still be found in ``units.deprecated``. [#5661]\n\n- ``solLum``,``solMass``, and ``solRad`` no longer have their prefixed units\n included in the standard units. If needed, they can still be found in\n ``units.required_by_vounit``, and are enabled by default. [#5661]\n\n- Removed deprecated ``Unit.get_converter``. [#6170]\n\n- Internally, astropy replaced use of ``.to(unit).value`` with the new\n ``to_value(unit)`` method, since this is somewhat faster. Any subclasses\n that overwrote ``.to``, should also overwrite ``.to_value`` (or\n possibly just the private ``._to_value`` method. (If you did this,\n please let us know what was lacking that made this necessary!). [#6137]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Removed the deprecated compatibility modules for Python 2.6 (``argparse``,\n ``fractions``, ``gzip``, ``odict``, ``subprocess``) [#5975,#6157,#6164]\n\n- Removed the deprecated ``zest.releaser`` machinery. [#6282]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Removed the deprecated ``scale_image`` function. [#6170]\n\nastropy.vo\n^^^^^^^^^^\n\n- Cone Search now issues deprecation warning because it is moved to\n Astroquery 0.3.5 and will be removed from Astropy in a future version.\n [#5558, #5904]\n\n- The ``astropy.vo.samp`` package has been moved to ``astropy.samp``, and no\n longer supports HTTPS/SSL. [#6201, #6213]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Removed deprecated ``wcs.rotateCD``. [#6170]\n\n\nBug Fixes\n---------\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- Major change in convolution behavior and keyword arguments:\n ``astropy.convolution.convolve`` was not performing normalized convolution\n in earlier versions of astropy. [#5782]\n\n- Direct convolution previously implemented the wrong definition of\n convolution. 
This error only affects *asymmetric* kernels. [#6267]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- The ``astropy.coordinates.Galactic`` frame had an incorrect ordering for the\n 'u', 'v', and 'w' cartesian coordinates. [#6330]\n\n- The ``astropy.coordinates.search_around_sky``,\n ``astropy.coordinates.search_around_3d``, and ``SkyCoord`` equivalent methods\n now correctly yield an ``astropy.coordinates.Angle`` as the third return type\n even if there are no matches (previously it returned a raw Quantity). [#6347]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\nastropy.extern\n^^^^^^^^^^^^^^\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- ``comments`` meta key (which is ``io.ascii``'s table convention) is output\n to ``COMMENT`` instead of ``COMMENTS`` header. Similarly, ``COMMENT``\n headers are read into ``comments`` meta [#6097]\n\n- Use more sensible fix values for invalid NAXISj header values. [#5935]\n\n- Close file on error to avoid creating a ``ResourceWarning`` warning\n about an unclosed file. [#6168, #6177]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Creating a compound model where one of the submodels is\n a compound model whose parameters were changed now uses the\n updated parameters and not the parameters of the original model. [#5741]\n\n- Allow ``Mapping`` and ``Identity`` to be fittable. [#6018]\n\n- Gaussian models now impose positive ``stddev`` in fitting. [#6019]\n\n- OrthoPolynomialBase (Chebyshev2D / Legendre2D) models were being evaluated\n incorrectly when part of a compound model (using the parameters from the\n original model), which in turn caused fitting to fail as a no-op. [#6085]\n\n- Allow ``Ring2D`` to be defined using ``r_out``. [#6192]\n\n- Make ``LinearLSQFitter`` produce correct results with fixed model\n parameters and allow ``Shift`` and ``Scale`` to be fitted with\n ``LinearLSQFitter`` and ``LevMarLSQFitter``. [#6174]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- Allow to choose which median function is used in ``mad_std`` and\n ``median_absolute_deviation``. And allow to use these functions with\n a multi-dimensional ``axis``. [#5835]\n\n- Fixed ``biweight_midvariance`` so that by default it returns a\n variance that agrees with the standard definition. [#5991]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fix a problem with vstack for bytes columns in Python 3. [#5628]\n\n- Fix QTable add/insert row for multidimensional Quantity. [#6092]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Fixed the initial condition of ``TimeFITS`` to allow scale, FITS scale\n and FITS realization to be checked and equated properly. [#6202]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Fixed a bug that caused the default WCS to return coordinates offset by\n one. [#6339]\n\nastropy.vo\n^^^^^^^^^^\n\n- Fixed a bug in vo.samp when stopping a hub for which a lockfile was\n not created. [#6211]\n\n\nOther Changes and Additions\n---------------------------\n\n- Numpy 1.7 and 1.8 are no longer supported. [#6006]\n\n- Python 3.3 is no longer supported. [#6020]\n\n- The bundled ERFA was updated to version 1.4.0. [#6239]\n\n- The bundled version of pytest has now been removed, but the\n astropy.tests.helper.pytest import will continue to work properly.\n Affiliated packages should nevertheless transition to importing pytest\n directly rather than from astropy.tests.helper. This also means that\n pytest is now a formal requirement for testing for both Astropy and\n for affiliated packages. [#5694]\n
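As a minimal sketch (the module contents below are illustrative only and assume a
typical affiliated-package test file), the import switch described in the entry
above looks roughly like::

    # Previously, many affiliated packages imported the copy of pytest that
    # used to be bundled with astropy:
    #     from astropy.tests.helper import pytest
    # With the bundled copy removed, import pytest directly and list it as a
    # test dependency of the package:
    import pytest


    @pytest.mark.parametrize("value", [1, 2, 3])
    def test_value_is_positive(value):
        # Trivial check, purely to show the decorator in use.
        assert value > 0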
\n\n1.3.3 (2017-05-29)\n==================\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Fixed a bug where ``StaticMatrixTransform`` erroneously copied frame\n attributes from the input coordinate to the output frame. In practice, this\n didn't actually affect any transforms in Astropy but may change behavior for\n users who explicitly used the ``StaticMatrixTransform`` in their own code.\n [#6045]\n\n- Fixed ``get_icrs_coordinates`` to loop through all the urls in case one\n raises an exception. [#5864]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Fix table header not written out properly when ``fits.writeto()``\n convenience function is used. [#6042]\n\n- Fix writing out read-only arrays. [#6036]\n\n- Extension headers are written out properly when the ``fits.update()``\n convenience function is used. [#6058]\n\n- Angstrom, erg, G, and barn are no longer reported as deprecated FITS units.\n [#5929]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fix problem with Table pprint/pformat raising an exception for\n non-UTF-8 compliant bytestring data. [#6117]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Allow strings 'nan' and 'inf' as Quantity inputs. [#5958]\n\n- Add support for ``positive`` and ``divmod`` ufuncs (new in numpy 1.13).\n [#5998, #6020, #6116]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- On systems that do not have ``pkg_resources`` non-numerical additions to\n version numbers like ``dev`` or ``rc1`` are stripped in ``minversion`` to\n avoid a ``TypeError`` in ``distutils.version.LooseVersion`` [#5944]\n\n- Fix ``auto_download`` setting ignored in ``Time.ut1``. [#6033]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Fix bug in ManualInterval which caused the limits to be returned incorrectly\n if set to zero, and fix defaults for ManualInterval in the presence of NaNs.\n [#6088]\n\n- Get rid of warnings that occurred when slicing a cube due to the tick\n locator trying to find ticks for the sliced axis. [#6104]\n\n- Accept normal Matplotlib keyword arguments in set_xlabel and set_ylabel\n functions. [#5686, #5692, #6060]\n\n- Fix a bug that caused labels to be missing from frames with labels that\n could change direction mid-axis, such as EllipticalFrame. Also ensure\n that empty tick labels do not cause any warnings. [#6063]\n\n\n1.3.2 (2017-03-30)\n==================\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Ensure that checking equivalence of ``SkyCoord`` objects works with\n non-scalar attributes [#5884, #5887]\n\n- Ensure that transformation to frames with multi-dimensional attributes\n works as expected [#5890, #5897]\n\n- Make sure all ``BaseRepresentation`` objects can be output as strings.\n [#5889, #5897]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Add support for ``heaviside`` ufunc (new in numpy 1.13). [#5920]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Fix to allow the C-based _fast_iterparse() VOTable XML parser to\n realloc() its buffers instead of overflowing them. [#5824, #5869]\n\n\nOther Changes and Additions\n---------------------------\n\n- File permissions are revised in the released source distribution. [#5912]\n\n\n1.3.1 (2017-03-18)\n==================\n\nNew Features\n------------\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- The ``deprecated_renamed_argument`` decorator got a new ``pending``\n parameter to suppress the deprecation warnings. 
[#5761]\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Changed ``SkyCoord`` so that frame attributes which are not valid for the\n current ``frame`` (but are valid for other frames) are stored on the\n ``SkyCoord`` instance instead of the underlying ``frame`` instance (e.g.,\n setting ``relative_humidity`` on an ICRS ``SkyCoord`` instance.) [#5750]\n\n- Ensured that ``position_angle`` and ``separation`` give correct answers for\n frames with different equinox (see #5722). [#5762]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Fix problem with padding bytes written for BinTable columns converted\n from unicode [#5280, #5287, #5288, #5296].\n\n- Fix out-of-order TUNITn cards when writing tables to FITS. [#5720]\n\n- Recognize PrimaryHDU when non boolean values are present for the\n 'GROUPS' header keyword. [#5808]\n\n- Fix the insertion of new keywords in compressed image headers\n (``CompImageHeader``). [#5866]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Fixed a problem with setting ``bounding_box`` on 1D models. [#5718]\n\n- Fixed a broadcasting problem with weighted fitting of 2D models\n with ``LevMarLSQFitter``. [#5788]\n\n- Fixed a problem with passing kwargs to fitters, specifically ``verblevel``. [#5815]\n\n- Changed FittingWithOutlierRemoval to reject on the residual to the fit [#5831]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- Fix the psd normalization for Lomb-Scargle periodograms in the presence\n of noise. [#5713]\n\n- Fix bug in the autofrequency range when ``minimum_frequency`` is specified\n but ``maximum_frequency`` is not. [#5738]\n\n- Ensure that a masked array is returned when sigma clipping fully masked\n data. [#5711]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fix problem where key for caching column format function was not\n sufficiently unique. [#5803]\n\n- Handle sorting NaNs and masked values in jsviewer. [#4052, #5572]\n\n- Ensure mixin columns can be added to a table using a scalar value for the\n right-hand side if the type supports broadcasting. E.g., for an existing\n ``QTable``, ``t['q'] = 3*u.m`` will now add a column as expected. [#5820]\n\n- Fixes the bug of setting/getting values from rows/columns of a table using\n numpy array scalars. [#5772]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Fixed problem where IrreducibleUnits could fail to unpickle. [#5868]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Avoid importing ``ipython`` in ``utils.console`` until it is necessary, to\n prevent deprecation warnings when importing, e.g., ``Column``. [#5755]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Avoid importing matplotlib.pyplot when importing\n astropy.visualization.wcsaxes. [#5680, #5684]\n\n- Ignore Numpy warnings that happen in coordinate transforms in WCSAxes.\n [#5792]\n\n- Fix compatibility issues between WCSAxes and Matplotlib 2.x. [#5786]\n\n- Fix a bug that caused WCSAxes frame visual properties to not be copied\n over when resetting the WCS. [#5791]\n\nastropy.extern\n^^^^^^^^^^^^^^\n\n- Fixed a bug where PLY was overwriting its generated files. [#5728]\n\nOther Changes and Additions\n---------------------------\n\n- Fixed a deprecation warning that occurred when running tests with\n astropy.test(). [#5689]\n\n- The deprecation of the ``clobber`` argument (originally deprecated in 1.3.0)\n in the ``io.fits`` write functions was changed to a \"pending\" deprecation\n (without displaying warnings) for now. [#5761]\n\n- Updated bundled astropy-helpers to v1.3.1. 
[#5880]\n\n\n1.3 (2016-12-22)\n================\n\nNew Features\n------------\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- The ``convolve`` and ``convolve_fft`` arguments now support a ``mask`` keyword,\n which allows them to also support ``NDData`` objects as inputs. [#5554]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Added an ``of_address`` classmethod to ``EarthLocation`` to enable fast creation of\n ``EarthLocation`` objects given an address by querying the Google maps API [#5154].\n\n- A new routine, ``get_body_barycentric_posvel`` has been added that allows\n one to calculate positions as well as velocities for solar system bodies.\n For JPL kernels, this roughly doubles the execution time, so if one requires\n only the positions, one should use ``get_body_barycentric``. [#5231]\n\n- Transformations between coordinate systems can use the more accurate JPL\n ephemerides. [#5273, #5436]\n\n- Arithmetic on representations, such as addition of two representations,\n multiplication with a ``Quantity``, or calculating the norm via ``abs``,\n has now become possible. Furthermore, there are new methods ``mean``,\n ``sum``, ``dot``, and ``cross``. For all these, the representations are\n treated as vectors in cartesian space (temporarily converting to\n ``CartesianRepresentation`` if necessary). [#5301]\n\n- ``CartesianRepresentation`` can be initialized with plain arrays by passing\n in a ``unit``. Furthermore, for input with a vector array, the coordinates\n no longer have to be in the first dimension, but can be at any ``xyz_axis``.\n To complement the latter, a new ``get_xyz(xyz_axis)`` method allows one to\n get a vector array out along a given axis. [#5439]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Files with \"Fortran-style\" columns (i.e. double-precision scientific notation\n with a character other than \"e\", like ``1.495978707D+13``) can now be parsed by\n the fast reader natively. [#5552]\n\n- Allow round-tripping masked data tables in most formats by using an\n empty string ``''`` as the default representation of masked values\n when writing. [#5347]\n\n- Allow reading HTML tables with unicode column values in Python 2.7. [#5410]\n\n- Check for self-consistency of ECSV header column names. [#5463]\n\n- Produce warnings when writing an IPAC table from an astropy table that\n contains metadata not supported by the IPAC format. [#4700]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- \"Lazy\" loading of HDUs now occurs - when an HDU is requested, the file is\n only read up to the point where that HDU is found. This can mean a\n substantial speedup when accessing files that have many HDUs. [#5065]\n\nastropy.io.misc\n^^^^^^^^^^^^^^^\n\n- Added ``io.misc.yaml`` module to support serializing core astropy objects\n using the YAML protocol. [#5486]\n\nastropy.io.registry\n^^^^^^^^^^^^^^^^^^^\n\n- Added ``delay_doc_updates`` contextmanager to postpone the formatting of\n the documentation for the ``read`` and ``write`` methods of the class to\n optionally reduce the import time. 
[#5275]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Added a class to combine astropy fitters and functions to remove outliers\n e. g., sigma clip. [#4760]\n\n- Added a ``Tabular`` model. [#5105]\n\n- Added ``Hermite1D`` and ``Hermite2D`` polynomial models [#5242]\n\n- Added the injection of EntryPoints into astropy.modeling.fitting if\n they inherit from Fitters class. [#5241]\n\n- Added bounding box to ``Lorentz1D`` and ``MexicanHat1D`` models. [#5393]\n\n- Added ``Planar2D`` functional model. [#5456]\n\n- Updated ``Gaussian2D`` to accept no arguments (will use default x/y_stddev\n and theta). [#5537]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- Added ``keep`` and ``**kwargs`` parameter to ``support_nddata``. [#5477]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- Added ``axis`` keyword to ``biweight_location`` and\n ``biweight_midvariance``. [#5127, #5158]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Allow renaming mixin columns. [#5469]\n\n- Support generalized value formatting for mixin columns in tables. [#5274]\n\n- Support persistence of table indices when pickling and copying table. [#5468]\n\nastropy.tests\n^^^^^^^^^^^^^\n\n- Install both runtime and test dependencies when running the\n ./setup.py test command. These dependencies are specified by the\n install_requires and tests_require keywords via setuptools. [#5092]\n\n- Enable easier subclassing of the TestRunner class. [#5505]\n\nastropy.time\n^^^^^^^^^^^^\n\n- ``light_travel_time`` can now use more accurate JPL ephemerides. [#5273, #5436]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Added ``pixel_scale`` and ``plate_scale`` equivalencies. [#4987]\n\n- The ``spectral_density`` equivalency now supports transformations of\n luminosity density. [#5151]\n\n- ``Quantity`` now accepts strings consisting of a number and unit such\n as '10 km/s'. [#5245]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Added a new decorator: ``deprecated_renamed_argument``. This can be used to\n rename a function argument, while it still allows for the use of the older\n argument name. [#5214]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Added a ``make_lupton_rgb`` function to generate color images from three\n greyscale images, following the algorithm of Lupton et al. (2004). [#5535]\n\n- Added ``data`` and ``interval`` inputs to the ``ImageNormalize``\n class. [#5206]\n\n- Added a new ``simple_norm`` convenience function. [#5206]\n\n- Added a default stretch for the ``Normalization`` class. [#5206].\n\n- Added a default ``vmin/vmax`` for the ``ManualInterval`` class.\n [#5206].\n\n- The ``wcsaxes`` subpackage has now been integrated in astropy as\n ``astropy.visualization.wcsaxes``. This allows plotting of astronomical\n data/coordinate systems in Matplotlib. [#5496]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Improved ``footprint_to_file``: allow to specify the coordinate system, and\n use by default the one from ``RADESYS``. Overwrite the file instead of\n appending to it. [#5494]\n\n\nAPI Changes\n-----------\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- ``discretize_model`` now raises an exception if non-integer ranges are used.\n Previously it had incorrect behavior but did not raise an exception. 
[#5538]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- ``SkyCoord``, ``ICRS``, and other coordinate objects, as well as the\n underlying representations such as ``SphericalRepresentation`` and\n ``CartesianRepresentation`` can now be reshaped using methods named like the\n numpy ones for ``ndarray`` (``reshape``, ``swapaxes``, etc.)\n [#4123, #5254, #5482]\n\n- The ``obsgeoloc`` and ``obsgeovel`` attributes of ``GCRS`` and\n ``PrecessedGeocentric`` frames are now stored and returned as\n ``CartesianRepresentation`` objects, rather than ``Quantity`` objects.\n Similarly, ``EarthLocation.get_gcrs_posvel`` now returns a tuple of\n ``CartesianRepresentation`` objects. [#5253]\n\n- ``search_around_3d`` and ``search_around_sky`` now return units\n for the distance matching their input argument when no match is\n found, instead of ``dimensionless_unscaled``. [#5528]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- ASCII writers now accept an 'overwrite' argument.\n The default behavior is changed so that a warning will be\n issued when overwriting an existing file unless ``overwrite=True``.\n In a future version this will be changed from a warning to an\n exception to prevent accidentally overwriting a file. [#5007]\n\n- The default representation of masked values when writing tables was\n changed from ``'--'`` to the empty string ``''``. Previously any\n user-supplied ``fill_values`` parameter would overwrite the class\n default, but now the values are prepended to the class default. [#5347]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- The old ``Header`` interface, deprecated since Astropy 0.1 (PyFITS 3.1), has\n been removed entirely. See :ref:`header-transition-guide` for explanations\n on this change and help on the transition. [#5310]\n\n- The following functions, classes and methods have been removed:\n ``CardList``, ``Card.key``, ``Card.cardimage``, ``Card.ascardimage``,\n ``create_card``, ``create_card_from_string``, ``upper_key``,\n ``Header.ascard``, ``Header.rename_key``, ``Header.get_history``,\n ``Header.get_comment``, ``Header.toTxtFile``, ``Header.fromTxtFile``,\n ``new_table``, ``tdump``, ``tcreate``, ``BinTableHDU.tdump``,\n ``BinTableHDU.tcreate``.\n\n- Removed ``txtfile`` argument to the ``Header`` constructor.\n\n- Removed usage of ``Header.update`` with ``Header.update(keyword, value,\n comment)`` arguments.\n\n- Removed ``startColumn`` and ``endColumn`` arguments to the ``FITS_record``\n constructor.\n\n- The ``clobber`` argument in FITS writers has been renamed to\n ``overwrite``. This change affects the following functions and\n methods: ``tabledump``, ``writeto``, ``Header.tofile``,\n ``Header.totextfile``, ``_BaseDiff.report``,\n ``_BaseHDU.overwrite``, ``BinTableHDU.dump`` and\n ``HDUList.writeto``. [#5171]\n\n- Added an optional ``copy`` parameter to ``fits.Header`` which controls if\n a copy is made when creating an ``Header`` from another ``Header``.\n [#5005, #5326]\n\nastropy.io.registry\n^^^^^^^^^^^^^^^^^^^\n\n- ``.fts`` and ``.fts.gz`` files will be automatically identified as\n ``io.fits`` files if no explicit ``format`` is given. [#5211]\n\n- Added an optional ``readwrite`` parameter for ``get_formats`` to filter\n formats for read or write. [#5275]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- ``Gaussian2D`` now raises an error if ``theta`` is set at the same time as\n ``cov_matrix`` (previously ``theta`` was silently ignored). [#5537]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Setting an existing table column (e.g. 
``t['a'] = [1, 2, 3]``) now defaults\n to *replacing* the column with a column corresponding to the new value\n (using ``t.replace_column()``) instead of doing an in-place update. Any\n existing meta-data in the column (e.g. the unit) is discarded. An\n in-place update is still done when the new value is not a valid column,\n e.g. ``t['a'] = 0``. To force an in-place update use the pattern\n ``t['a'][:] = [1, 2, 3]``. [#5556]\n\n- Allow ``collections.Mapping``-like ``data`` attribute when initializing a\n ``Table`` object (``dict``-like was already possible). [#5213]\n\nastropy.tests\n^^^^^^^^^^^^^\n\n- The inputs to the ``TestRunner.run_tests()`` method now must be\n keyword arguments (no positional arguments). This applies to the\n ``astropy.test()`` function as well. [#5505]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Renamed ``ignored`` context manager in ``compat.misc`` to ``suppress``\n to be consistent with https://bugs.python.org/issue19266 . [#5003]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Deprecated the ``scale_image`` function. [#5206]\n\n- The ``mpl_normalize`` module (containing the ``ImageNormalize``\n class) is now automatically imported with the ``visualization``\n subpackage. [#5491]\n\nastropy.vo\n^^^^^^^^^^\n\n- The ``clobber`` argument in ``VOSDatabase.to_json()`` has been\n renamed to ``overwrite``. [#5171]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- ``wcs.rotateCD()`` was deprecated without a replacement. [#5240]\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Transformations between CIRS and AltAz now correctly account for the\n location of the observer. [#5591]\n\n- GCRS frames representing a location on Earth with multiple obstimes are now\n allowed. This means that the solar system routines ``get_body``,\n ``get_moon`` and ``get_sun`` now work with non-scalar times and a\n non-geocentric observer. [#5253]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Fix issue with units or other astropy core classes stored in table meta.\n [#5605]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Copying a ``fits.Header`` using ``copy`` or ``deepcopy`` from the ``copy``\n module will use ``Header.copy`` to ensure that modifying the copy will\n not alter the other original Header and vice-versa. [#4990, #5323]\n\n- ``HDUList.info()`` no longer raises ``AttributeError`` in presence of\n ``BZERO``. [#5508]\n\n- Avoid exceptions with numpy 1.10 and up when using scaled integer data\n where ``BZERO`` has float type but integer value. [#4639, #5527]\n\n- Converting a header card to a string now calls ``self.verify('fix+warn')``\n instead of ``self.verify('fix')`` so headers with invalid keywords will\n not raise a ``VerifyError`` on printing. [#887,#5054]\n\n- ``FITS_Record._convert_ascii`` now converts blank fields to 0 when a\n non-blank null column value is set. [#5134, #5394]\n\nastropy.io.registry\n^^^^^^^^^^^^^^^^^^^\n\n- ``read`` now correctly raises an IOError if a file with an unknown\n extension can't be found, instead of raising IORegistryError:\n \"Format could not be identified.\" [#4779]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Ensure ``Time`` instances holding a single ``delta_ut1_utc`` can be copied,\n flattened, etc. [#5225]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Operations involving ``Angle`` or ``Distance``, or any other\n ``SpecificTypeQuantity`` instance, now also keep return an instance of the\n same type if the instance was the second argument (if the resulting unit\n is consistent with the specific type). 
[#5327]\n\n- Inplace operations on ``Angle`` and ``Distance`` instances now raise an\n exception if the final unit is not equivalent to radian and meter, resp.\n Similarly, views as ``Angle`` and ``Distance`` can now only be taken\n from quantities with appropriate units, and views as ``Quantity`` can only\n be taken from logarithmic quanties such as ``Magnitude`` if the physical\n unit is dimensionless. [#5070]\n\n- Conversion from quantities to logarithmic units now correctly causes a\n logarithmic quantity such as ``Magnitude`` to be returned. [#5183]\n\n\nastropy.wcs\n^^^^^^^^^^^\n\n- SIP distortion for an alternate WCS is correctly initialized now by\n looking at the \"CTYPE\" values matching the alternate WCS. [#5443]\n\nOther Changes and Additions\n---------------------------\n\n- The bundled ERFA was updated to version 1.3.0. This includes the\n leap second planned for 2016 Dec 31.\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Initialization of ``Angle`` has been sped up for ``Quantity`` and ``Angle``\n input. [#4970]\n\n- The use of ``np.matrix`` instances in the transformations has been\n deprecated, since this class does not allow stacks of matrices. As a\n result, the semi-public functions ``angles.rotation_matrix`` and\n ``angles.angle_axis`` are also deprecated, in favour of the new routines\n with the same name in ``coordinates.matrix_utilities``. [#5104]\n\n- A new ``BaseCoordinateFrame.cache`` dictionary has been created to expose\n the internal cache. This is useful when modifying representation data\n in-place without using ``realize_frame``. Additionally, documentation for\n in-place operations on coordinates were added. [#5575]\n\n- Coordinates and their representations are printed with a slightly different\n format, following how numpy >= 1.12 prints structured arrays. [#5423]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- The default cosmological model has been changed to Planck 2015,\n and the citation strings have been updated. [#5372]\n\nastropy.extern\n^^^^^^^^^^^^^^\n\n- Updated the bundled ``six`` module to version 1.10.0. [#5521]\n\n- Updated the astropy shipped version of ``PLY`` to version 3.9. [#5526]\n\n- Updated the astropy shipped version of jQuery to v3.3.1, and dataTables\n to v1.10.12. [#5564]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Performance improvements for tables with many columns. [#4985]\n\n- Removed obsolete code that was previously needed to properly\n implement the append mode. [#4793]\n\nastropy.io.registry\n^^^^^^^^^^^^^^^^^^^\n\n- Reduced the time spent in the ``get_formats`` function. This also reduces\n the time it takes to import astropy subpackages, i.e.\n ``astropy.coordinates``. [#5262]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- The functions ``add_enabled_units``, ``set_enabled_equivalencies`` and\n ``add_enabled_equivalencies`` have been sped up by copying the current\n ``_UnitRegistry`` instead of building it from scratch. [#5306]\n\n- To build the documentation, the ``build_sphinx`` command has been deprecated\n in favor of ``build_docs``. [#5179]\n\n- The ``--remote-data`` option to ``python setup.py test`` can now take\n different arguments: ``--remote-data=none`` is the same as not specifying\n ``--remote-data`` (skip all tests that require the internet),\n ``--remote-data=astropy`` skips all tests that need remote data except those\n that require only data from data.astropy.org, and ``--remote-data=any`` is\n the same as ``--remote-data`` (run all tests that use remote data). 
[#5506]\n\n- The pytest ``recwarn`` fixture has been removed from the tests in favor of\n ``utils.catch_warnings``. [#5489]\n\n- Deprecated escape sequences in strings (Python 3.6) have been removed. [#5489]\n\n\n1.2.2 (2016-12-22)\n==================\n\nBug Fixes\n---------\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Fix a bug where the ``fill_values`` parameter was ignored when writing a\n table to HTML format. [#5379]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Handle unicode FITS BinTable column names on Python 2 [#5204, #4805]\n\n- Fix reading of float values from ASCII tables, that could be read as\n float32 instead of float64 (with the E and F formats). These values are now\n always read as float64. [#5362]\n\n- Fixed a memory leak when using the compression module. [#5399, #5464]\n\n- Able to insert and remove lower case HIERARCH keywords in a consistent\n manner [#5313, #5321]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- Fixed broadcasting in ``sigma_clip`` when using negative ``axis``. [#4988]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Assigning a logarithmic unit to a ``QTable`` column that did not have a\n unit yet now correctly turns it into the appropriate function quantity\n subclass (such as ``Magnitude`` or ``Dex``). [#5345]\n\n- Fix default value for ``show_row_index`` in ``Table.show_in_browser``.\n [#5562]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- For inverse trig functions that operate on quantities, catch any warnings\n that occur from evaluating the function on the unscaled quantity value\n between __array_prepare__ and __array_wrap__. [#5153]\n\n- Ensure ``!=`` also works for function units such as ``MagUnit`` [#5345]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Fix use of the ``relax`` keyword in ``to_header`` when used to change the\n output precision. [#5164]\n\n- ``wcs.to_header(relax=True)`` adds a \"-SIP\" suffix to ``CTYPE`` when SIP\n distortion is present in the WCS object. [#5239]\n\n- Improved log messages in ``to_header``. [#5239]\n\nOther Changes and Additions\n---------------------------\n\n- The bundled ERFA was updated to version 1.3.0. This includes the\n leap second planned for 2016 Dec 31.\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- ``poisson_conf_interval`` with ``'kraft-burrows-nousek'`` interval is now\n faster and usable with SciPy versions < 0.14. [#5064, #5290]\n\n\n\n1.2.1 (2016-06-22)\n==================\n\nBug Fixes\n---------\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Fixed a bug that caused TFIELDS to not be in the correct position in\n compressed image HDU headers under certain circumstances, which created\n invalid FITS files. [#5118, #5125]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Fixed an ``ImportError`` that occurred whenever ``astropy.constants`` was\n imported before ``astropy.units``. [#5030, #5121]\n\n- Magnitude zero points used to define ``STmag``, ``ABmag``, ``M_bol`` and\n ``m_bol`` are now collected in ``astropy.units.magnitude_zero_points``.\n They are not enabled as regular units by default, but can be included\n using ``astropy.units.magnitude_zero_points.enable()``. This makes it\n possible to round-trip magnitudes as originally intended. [#5030]\n\n1.2 (2016-06-19)\n================\n\nGeneral\n-------\n\n- Astropy now requires Numpy 1.7.0 or later. [#4784]\n\nNew Features\n------------\n\nastropy.constants\n^^^^^^^^^^^^^^^^^\n\n- Add ``L_bol0``, the luminosity corresponding to absolute bolometric\n magnitude zero. [#4262]\n
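As a minimal sketch of how the new ``L_bol0`` constant can be used (assuming the
defining relation ``M_bol = -2.5 * log10(L / L_bol0)``; the luminosity value below
is illustrative only)::

    import numpy as np
    from astropy import units as u
    from astropy.constants import L_bol0

    # Any Quantity with luminosity units works here; this value is just an example.
    L = 3.828e26 * u.W

    # Absolute bolometric magnitude relative to the zero-point luminosity L_bol0.
    M_bol = -2.5 * np.log10((L / L_bol0).decompose().value)
    print(M_bol)  # about 4.7 for this input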
\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- ``CartesianRepresentation`` now includes a transform() method that can take\n a 3x3 matrix to transform coordinates. [#4860]\n\n- Solar system and lunar ephemerides accessible via ``get_body``,\n ``get_body_barycentric`` and ``get_moon`` functions. [#4890]\n\n- Added astrometric frames (i.e., a frame centered on a particular\n point/object specified in another frame). [#4909, #4941]\n\n- Added ``SkyCoord.spherical_offsets_to`` method. [#4338]\n\n- Recent Earth rotation (IERS) data are now auto-downloaded so that AltAz\n transformations for future dates now use the most accurate available\n rotation values. [#4436]\n\n- Add support for heliocentric coordinate frames. [#4314]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- ``angular_diameter_distance_z1z2`` now supports the computation of\n the angular diameter distance between a scalar and an array-like\n argument. [#4593] The method now supports models with negative\n Omega_k0 (positive curvature universes) [#4661] and allows z2 < z1.\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- File name could be passed as ``Path`` object. [#4606]\n\n- Check that columns in ``formats`` specifier exist in the output table\n when writing. [#4508, #4511]\n\n- Allow trailing whitespace in the IPAC header lines. [#4758]\n\n- Updated to filter out the default parser warning of BeautifulSoup.\n [#4551]\n\n- Added support for reading and writing reStructuredText simple tables.\n [#4812]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- File name could be passed as ``Path`` object. [#4606]\n\n- Header allows a dictionary-like cards argument during creation. [#4663]\n\n- New function ``convenience.table_to_hdu`` to allow creating a FITS\n HDU object directly from an astropy ``Table``. [#4778]\n\n- New optional arguments ``ignore_missing`` and ``remove_all`` are added\n to ``astropy.io.fits.header.remove()``. [#5020]\n\nastropy.io.registry\n^^^^^^^^^^^^^^^^^^^\n\n- Added custom ``IORegistryError``. [#4833]\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- File name could be passed as ``Path`` object. [#4606]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Added the fittable=True attribute to the Scale and Shift models with tests. [#4718]\n\n- Added example plots to docstrings for some built-in models. [#4008]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- ``UnknownUncertainty`` new subclass of ``NDUncertainty`` that can be used to\n save uncertainties that cannot be used for error propagation. [#4272]\n\n- ``NDArithmeticMixin``: ``add``, ``subtract``, ``multiply`` and ``divide``\n can be used as classmethods but require that two operands are given. These\n operands don't need to be NDData instances but they must be convertible to\n NDData. This conversion is done internally. Using it on the instance does\n not require (but also allows) two operands. [#4272, #4851]\n\n- ``NDDataRef`` new subclass that implements ``NDData`` together with all\n currently available mixins. This class does not implement additional\n attributes, methods or a numpy.ndarray-like interface like ``NDDataArray``.\n [#4797]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- Added ``axis`` keyword for ``mad_std`` function. [#4688, #4689]\n\n- Added Bayesian and Akaike Information Criteria. [#4716]\n\n- Added Bayesian upper limits for Poisson count rates. [#4622]\n\n- Added ``circstats``; a module for computing circular statistics. [#3705, #4472]\n\n- Added ``jackknife`` resampling method. 
[#3708, #4439]\n\n- Updated ``bootstrap`` to allow bootstrapping statistics with multiple\n outputs. [#3601]\n\n- Added ``LombScargle`` class to compute Lomb-Scargle periodograms [#4811]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- ``Table.show_in_notebook`` and ``Table.show_in_browser(jsviewer=True)`` now\n yield tables with an \"idx\" column, allowing easy identification of the index\n of a row even when the table is re-sorted in the browser. [#4404]\n\n- Added ``AttributeError`` when trying to set mask on non-masked table. [#4637]\n\n- Allow to use a tuple of keys in ``Table.sort``. [#4671]\n\n- Added ``itercols``; a way to iterate through columns of a table. [#3805,\n #4888]\n\n- ``Table.show_in_notebook`` and the default notebook display (i.e.,\n ``Table._repr_html_``) now use consistent table styles which can be set\n using the ``astropy.table.default_notebook_table_class`` configuration\n item. [#4886]\n\n- Added interface to create ``Table`` directly from any table-like object\n that has an ``__astropy_table__`` method. [#4885]\n\nastropy.tests\n^^^^^^^^^^^^^\n\n- Enable test runner to obtain documentation source files from directory\n other than \"docs\". [#4748]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Added caching of scale and format transformations for improved performance.\n [#4422]\n\n- Recent Earth rotation (IERS) data are now auto-downloaded so that UT1\n transformations for future times now work out of the box. [#4436]\n\n- Add support for barycentric/heliocentric time corrections. [#4314]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- The option to use tuples to indicate fractional powers of units,\n deprecated in 0.3.1, has been removed. [#4449]\n\n- Added slug to imperial units. [#4670]\n\n- Added Earth radius (``R_earth``) and Jupiter radius (``R_jup``) to units.\n [#4818]\n\n- Added a ``represents`` property to allow access to the definition of a\n named unit (e.g., ``u.kpc.represents`` yields ``1000 pc``). [#4806]\n\n- Add bolometric absolute and apparent magnitudes, ``M_bol`` and ``m_bol``.\n [#4262]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- ``Path`` object could be passed to ``get_readable_fileobj``. [#4606]\n\n- Implemented a generic and extensible way of merging metadata. [#4459]\n\n- Added ``format_doc`` decorator which allows to replace and/or format the\n current docstring of an object. [#4242]\n\n- Added a new context manager ``set_locale`` to temporarily set the\n current locale. [#4363]\n\n- Added new IERS_Auto class to auto-download recent IERS (Earth rotation)\n data when required by coordinate or time transformations. [#4436]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Add zscale interval based on Numdisplay's implementation. [#4776]\n\nAPI changes\n-----------\n\nastropy.config\n^^^^^^^^^^^^^^\n\n- The deprecated ``ConfigurationItem`` and ``ConfigAlias`` classes and the\n ``save_config``, ``get_config_items``, and ``generate_all_config_items``\n functions have now been removed. [#2767, #4446]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Removed compatibility layer for pre-v0.4 API. [#4447]\n\n- Added ``copy`` keyword-only argument to allow initialization without\n copying the (possibly large) input coordinate arrays. [#4883]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- Improve documentation of z validity range of cosmology objects [#4882, #4949]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Add a way to control HTML escaping when writing a table as an HTML file. 
[#4423]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Two optional boolean arguments ``ignore_missing`` and ``remove_all`` are\n added to ``Header.remove``. [#5020]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Renamed ``Redshift`` model to ``RedshiftScaleFactor``. [#3672]\n\n- Inputs (``coords`` and ``out``) to ``render`` function in ``Model`` are\n converted to float. [#4697]\n\n- ``RotateNative2Celestial`` and ``RotateCelestial2Native`` are now\n implemented as subclasses of ``EulerAngleRotation``. [#4881, #4940]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- ``NDDataBase`` does not set the private uncertainty property anymore. This\n only affects you if you subclass ``NDDataBase`` directly. [#4270]\n\n- ``NDDataBase``: the ``uncertainty``-setter is removed. A similar one is\n added in ``NDData`` so this also only affects you if you subclassed\n ``NDDataBase`` directly. [#4270]\n\n- ``NDDataBase``: ``uncertainty``-getter returns ``None`` instead of the\n private uncertainty and is now abstract. This getter is moved to\n ``NDData`` so it only affects direct subclasses of ``NDDataBase``. [#4270]\n\n- ``NDData`` accepts a Quantity-like data and an explicitly given unit.\n Before a ValueError was raised in this case. The final instance will use the\n explicitly given unit-attribute but doesn't check if the units are\n convertible and the data will not be scaled. [#4270]\n\n- ``NDData`` : the given mask, explicit or implicit if the data was masked,\n will be saved by the setter. It will not be saved directly as the private\n attribute. [#4879]\n\n- ``NDData`` accepts an additional argument ``copy`` which will copy every\n parameter before it is saved as attribute of the instance. [#4270]\n\n- ``NDData``: added an ``uncertainty.getter`` that returns the private\n attribute. It is equivalent to the old ``NDDataBase.uncertainty``-getter.\n [#4270]\n\n- ``NDData``: added an ``uncertainty.setter``. It is slightly modified with\n respect to the old ``NDDataBase.uncertainty``-setter. The changes include:\n\n- if the uncertainty has no uncertainty_type an info message is printed\n instead of a TypeError and the uncertainty is saved as\n ``UnknownUncertainty`` except the uncertainty is None. [#4270]\n\n- the requirement that the uncertainty_type of the uncertainty needs to be a\n string was removed. [#4270]\n\n- if the uncertainty is a subclass of NDUncertainty the parent_nddata\n attribute will be set so the uncertainty knows to which data it belongs.\n This is also a Bugfix. [#4152, #4270]\n\n- ``NDData``: added a ``meta``-getter, which will set and return an empty\n OrderedDict if no meta was previously set. [#4509, #4469]\n\n- ``NDData``: added an ``meta``-setter. It requires that the meta is\n dictionary-like (it also accepts Headers or ordered dictionaries and others)\n or None. [#4509, #4469, #4921]\n\n- ``NDArithmeticMixin``: The operand in arithmetic methods (``add``, ...)\n doesn't need to be a subclass of ``NDData``. It is sufficient if it can be\n converted to one. This conversion is done internally. [#4272]\n\n- ``NDArithmeticMixin``: The arithmetic methods allow several new arguments to\n control how or if different attributes of the class will be processed during\n the operation. [#4272]\n\n- ``NDArithmeticMixin``: Giving the parameter ``propagate_uncertainties`` as\n positional keyword is deprecated and will be removed in the future. You now\n need to specify it as keyword-parameter. Besides ``True`` and ``False`` also\n ``None`` is now a valid value for this parameter. 
[#4272, #4851]\n\n- ``NDArithmeticMixin``: The wcs attribute of the operands is not compared and\n thus raises no ValueError if they differ, except if a ``compare_wcs``\n parameter is specified. [#4272]\n\n- ``NDArithmeticMixin``: The arithmetic operation was split from a general\n ``_arithmetic`` method to different specialized private methods to allow\n subclasses more control on how the attributes are processed without\n overriding ``_arithmetic``. The ``_arithmetic`` method is now used to call\n these other methods. [#4272]\n\n- ``NDSlicingMixin``: If the attempt at slicing the mask, wcs or uncertainty\n fails with a ``TypeError`` a Warning is issued instead of the TypeError. [#4271]\n\n- ``NDUncertainty``: ``support_correlated`` attribute is deprecated in favor of\n ``supports_correlated`` which is a property. Also affects\n ``StdDevUncertainty``. [#4272]\n\n- ``NDUncertainty``: added the ``__init__`` that was previously implemented in\n ``StdDevUncertainty`` and takes an additional ``unit`` parameter. [#4272]\n\n- ``NDUncertainty``: added a ``unit`` property without setter that returns the\n set unit or if not set the unit of the parent. [#4272]\n\n- ``NDUncertainty``: included a ``parent_nddata`` property similar to the one\n previously implemented in StdDevUncertainty. [#4272]\n\n- ``NDUncertainty``: added an ``array`` property with setter. The setter will\n convert the value to a plain numpy array if it is a list or a subclass of a\n numpy array. [#4272]\n\n- ``NDUncertainty``: ``propagate_multiply`` and similar were removed. Before\n they were abstract properties and replaced by methods with the same name but\n with a leading underscore. The entry point for propagation is a method\n called ``propagate``. [#4272]\n\n- ``NDUncertainty`` and subclasses: implement a representation (``__repr__``).\n [#4787]\n\n- ``StdDevUncertainty``: error propagation allows an explicitly given\n correlation factor, which may be a scalar or an array which will be taken\n into account during propagation.\n This correlation must be determined manually and is not done by the\n uncertainty! [#4272]\n\n- ``StdDevUncertainty``: the ``array`` is converted to a plain numpy array\n only if it's a list or a subclass of numpy.ndarray. Previously it was always\n cast to a numpy array but also allowed subclasses. [#4272]\n\n- ``StdDevUncertainty``: setting the ``parent_nddata`` does not compare if the\n shape of it's array is identical to the parents data shape. [#4272]\n\n- ``StdDevUncertainty``: the ``array.setter`` doesn't compare if the array has\n the same shape as the parents data. [#4272]\n\n- ``StdDevUncertainty``: deprecated ``support_correlated`` in favor of\n ``supports_correlated``. [#4272, #4828]\n\n- ``StdDevUncertainty``: deprecated ``propagate_add`` and similar methods in\n favor of ``propagate``. [#4272, #4828]\n\n- Allow ``data`` to be a named argument in ``NDDataArray``. [#4626]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- ``operations.unique`` now has a ``keep`` parameter, which allows\n one to select whether to keep the first or last row in a set of\n duplicate rows, or to remove all rows that are duplicates. [#4632]\n\n- ``QTable`` now behaves more consistently by making columns act as a\n ``Quantity`` even if they are assigned a unit after the table is\n created. [#4497, #4884]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Remove deprecated ``register`` argument for Unit classes. [#4448]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- The astropy.utils.compat.argparse module has now been deprecated. 
Use the\n Python 'argparse' module directly instead. [#4462]\n\n- The astropy.utils.compat.odict module has now been deprecated. Use the\n Python 'collections' module directly instead. [#4466]\n\n- The astropy.utils.compat.gzip module has now been deprecated. Use the\n Python 'gzip' module directly instead. [#4464]\n\n- The deprecated ``ScienceStateAlias`` class has been removed. [#2767, #4446]\n\n- The astropy.utils.compat.subprocess module has now been deprecated. Use the\n Python 'subprocess' module instead. [#4483]\n\n- The astropy.utils.xml.unescaper module now also unescapes ``'%2F'`` to\n ``'/'`` and ``'&&'`` to ``'&'`` in a given URL. [#4699]\n\n- The astropy.utils.metadata.MetaData descriptor has now two optional\n parameters: doc and copy. [#4921]\n\n- The default IERS (Earth rotation) data now is now auto-downloaded via a\n new class IERS_Auto. When extrapolating UT1-UTC or polar motion values\n outside the available time range, the values are now clipped at the last\n available value instead of being linearly extrapolated. [#4436]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- WCS objects can now be initialized with an ImageHDU or\n PrimaryHDU object. [#4493, #4505]\n\n- astropy.wcs now issues an INFO message when the header has SIP coefficients but\n \"-SIP\" is missing from CTYPE. [#4814]\n\nBug fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Ameliorate a problem with ``get_sun`` not round-tripping due to\n approximations in the light deflection calculation. [#4952]\n\n- Ensure that ``angle_utilities.position_angle`` accepts floats, as stated\n in the docstring. [#3800]\n\n- Ensured that transformations for ``GCRS`` frames are correct for\n non-geocentric observers. [#4986]\n\n- Fixed a problem with the ``Quantity._repr_latex_`` method causing errors\n when showing an ``EarthLocation`` in a Jupyter notebook. [#4542, #5068]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Fix a problem where the fast reader (with use_fast_converter=False) can\n fail on non-US locales. [#4363]\n\n- Fix astropy.io.ascii.read handling of units for IPAC formatted files.\n Columns with no unit are treated as unitless not dimensionless. [#4867,\n #4947]\n\n- Fix problems the header parsing in the sextractor reader. [#4603, #4910]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- ``GroupsHDU.is_image`` property is now set to ``False``. [#4742]\n\n- Ensure scaling keywords are removed from header when unsigned integer data\n is converted to signed type. [#4974, #5053]\n\n- Made TFORMx keyword check more flexible in test of compressed images to\n enable compatibility of the test with cfitsio 3.380. [#4646, #4653]\n\nastropy.io.misc\n^^^^^^^^^^^^^^^\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- The astropy.io.votable.validator.html module is updated to handle division\n by zero when generating validation report. [#4699]\n\n- KeyError when converting Table v1.2 numeric arrays fixed. [#4782]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Refactored ``AiryDisk2D``, ``Sersic1D``, and ``Sersic2D`` models\n to be able to combine them as classes as well as instances. [#4720]\n\n- Modified the \"LevMarLSQFitter\" class to use the weights in the\n calculation of the Jacobian. [#4751]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- ``NDData`` giving masked_Quantities as data-argument will use the\n implicitly passed mask, unit and value. [#4270]\n\n- ``NDData`` using a subclass implementing ``NDData`` with\n ``NDArithmeticMixin`` now allows error propagation. 
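A minimal sketch of the ``WCS``-from-HDU initialisation mentioned above; the file name is hypothetical::

    from astropy.io import fits
    from astropy.wcs import WCS

    with fits.open("image.fits") as hdul:     # hypothetical file
        w = WCS(hdul[0])                      # pass the PrimaryHDU/ImageHDU directly
        w_old_style = WCS(hdul[0].header)     # the previous spelling still works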
[#4270]\n\n- Fixed memory leak that happened when uncertainty of ``NDDataArray`` was\n set. [#4825, #4862]\n\n- ``StdDevUncertainty``: During error propagation the unit of the uncertainty\n is taken into account. [#4272]\n\n- ``NDArithmeticMixin``: ``divide`` and ``multiply`` yield correct\n uncertainties if only one uncertainty is set. [#4152, #4272]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- Fix ``sigma_clipped_stats`` to use the ``axis`` argument. [#4726, #4808]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fixed bug where Tables created from existing Table objects were not\n inheriting the ``primary_key`` attribute. [#4672, #4930]\n\n- Provide more detail in the error message when reading a table fails due to a\n problem converting column string values. [#4759]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Exponentiation using a ``Quantity`` with a unit equivalent to dimensionless\n as base and an ``array``-like exponent yields the correct result. [#4770]\n\n- Ensured that with ``spectral_density`` equivalency one could also convert\n between ``photlam`` and ``STmag``/``ABmag``. [#5017]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- The astropy.utils.compat.fractions module has now been deprecated. Use the\n Python 'fractions' module directly instead. [#4463]\n\n- Added ``format_doc`` decorator which allows to replace and/or format the\n current docstring of an object. [#4242]\n\n- Attributes using the astropy.utils.metadata.MetaData descriptor are now\n included in the sphinx documentation. [#4921]\n\nastropy.vo\n^^^^^^^^^^\n\n- Relaxed expected accuracy of Cone Search prediction test to reduce spurious\n failures. [#4382]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- astropy.wcs.to_header removes \"-SIP\" from CTYPE when SIP coefficients\n are not written out, i.e. ``relax`` is either ``False`` or ``None``.\n astropy.wcs.to_header appends \"-SIP\" to CTYPE when SIP coefficients\n are written out, i.e. ``relax=True``. [#4814]\n\n- Made ``wcs.bounds_check`` call ``wcsprm_python2c``, which means it\n works even if ``wcs.set`` has not been called yet. [#4957, #4966].\n\n- WCS objects can no longer be reverse-indexed, which was technically\n permitted but incorrectly implemented previously [#4962]\n\nOther Changes and Additions\n---------------------------\n\n- Python 2.6 is no longer supported. [#4486]\n\n- The bundled version of py.test has been updated to 2.8.3. [#4349]\n\n- Reduce Astropy's import time (``import astropy``) by almost a factor 2. [#4649]\n\n- Cython prerequisite for building changed to v0.19 in install.rst [#4705,\n #4710, #4719]\n\n- All astropy.modeling functionality that was deprecated in Astropy 1.0 has\n been removed. [#4857]\n\n- Added instructions for installing Astropy into CASA. [#4840]\n\n- Added an example gallery to the docs demonstrating short\n snippets/examples. [#4734]\n\n\n1.1.2 (2016-03-10)\n==================\n\nNew Features\n------------\n\nastropy.wcs\n^^^^^^^^^^^\n\n- The ``astropy.wcs`` module now exposes ``WCSHDO_P*`` constants that can be\n used to allow more control over output precision when using the ``relax``\n keyword argument. [#4616]\n\nBug Fixes\n---------\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Fixed handling of CDS data file when no description is given and also\n included stripping out of markup for missing value from description. [#4437, #4474]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Fixed possible segfault during error handling in FITS tile\n compression. [#4489]\n\n- Fixed crash on pickling of binary table columns with the 'X', 'P', or\n 'Q' format. 
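The ``sigma_clipped_stats`` ``axis`` fix listed above can be exercised like this (random data, purely illustrative)::

    import numpy as np
    from astropy.stats import sigma_clipped_stats

    data = np.random.normal(size=(100, 5))
    data[0, 0] = 1e6                          # one gross outlier

    # With axis=0 each of the five columns gets its own clipped mean/median/stddev.
    mean, median, std = sigma_clipped_stats(data, sigma=3.0, axis=0)
    print(mean.shape)                         # (5,)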
[#4514]\n\n- Fixed memory / reference leak that could occur when copying a ``FITS_rec``\n object (the ``.data`` for table HDUs). [#520]\n\n- Fixed a memory / reference leak in ``FITS_rec`` that occurred in a wide\n range of cases, especially after writing FITS tables to a file, but in\n other cases as well. [#4539]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Fix a bug to allow instantiation of a modeling class having a parameter\n with a custom setter that takes two parameters ``(value, model)`` [#4656]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fixed bug when replacing a table column with a mixin column like\n Quantity or Time. [#4601]\n\n- Disable initial ordering in jsviewer (``show_in_browser``,\n ``show_in_notebook``) to respect the order from the Table. [#4628]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Fixed sphinx issues on plotting quantities. [#4527]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Fixed latex representation of function units. [#4563]\n\n- The ``zest.releaser`` hooks included in Astropy are now injected locally to\n Astropy, rather than being global. [#4650]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Fixed ``fits2bitmap`` script to allow ext flag to contain extension\n names or numbers. [#4468]\n\n- Fixed ``fits2bitmap`` default output filename generation for\n compressed FITS files. [#4468]\n\n- Fixed ``quantity_support`` to ensure its conversion returns ndarray\n instances (needed for numpy >=1.10). [#4654]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Fixed possible exception in handling of SIP headers that was introduced in\n v1.1.1. [#4492]\n\n- Fixed a bug that caused WCS objects with a high dynamic range of values for\n certain parameters to lose precision when converted to a header. This\n occurred for example in cases of spectral cubes, where a spectral axis in\n Hz might have a CRVAL3 value greater than 1e10 but the spatial coordinates\n would have CRVAL1/2 values 8 to 10 orders of magnitude smaller. This bug\n was present in Astropy 1.1 and 1.1.1 but not 1.0.x. This has now been fixed\n by ensuring that all WCS keywords are output with 14 significant figures by\n default. [#4616]\n\nOther Changes and Additions\n---------------------------\n\n- Updated bundled astropy-helpers to v1.1.2. [#4678]\n\n- Updated bundled copy of WCSLIB to 5.14. [#4579]\n\n\n1.1.1 (2016-01-08)\n==================\n\nNew Features\n------------\n\nastropy.io.registry\n^^^^^^^^^^^^^^^^^^^\n\n- Allow ``pathlib.Path`` objects (available in Python 3.4 and later) for\n specifying the file name in registry read / write functions. [#4405]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- ``console.human_file_size`` now accepts quantities with byte-equivalent\n units [#4373]\n\nBug Fixes\n---------\n\nastropy.analytic_functions\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n- Fixed the blackbody functions' handling of overflows on some platforms\n (Windows with MSVC, older Linux versions) with a buggy ``expm1`` function.\n [#4393]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Fixed an bug where updates to string columns in FITS tables were not saved\n on Python 3. [#4452]\n\nOther Changes and Additions\n---------------------------\n\n- Updated bundled astropy-helpers to v1.1.1. [#4413]\n\n\n1.1 (2015-12-11)\n================\n\nNew Features\n------------\n\nastropy.config\n^^^^^^^^^^^^^^\n\n- Added new tools ``set_temp_config`` and ``set_temp_cache`` which can be\n used either as function decorators or context managers to temporarily\n use alternative directories in which to read/write the Astropy config\n files and download caches respectively. 
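The ``pathlib.Path`` support in the registry readers/writers noted above means code like the following works without converting to ``str`` first (file names are invented)::

    from pathlib import Path
    from astropy.table import Table

    path = Path("catalogs") / "sources.ecsv"     # hypothetical location
    t = Table.read(path, format="ascii.ecsv")
    t.write(path.with_suffix(".fits"))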
This is especially useful for\n testing, though ``set_temp_cache`` may also be used as a way to provide\n an alternative (application specific) download cache for large data files,\n rather than relying on the default cache location in users' home\n directories. [#3975]\n\nastropy.constants\n^^^^^^^^^^^^^^^^^\n\n- Added the Thomson scattering cross-section. [#3839]\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- Added Moffat2DKernel. [#3965]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Added ``get_constellation`` function and ``SkyCoord.get_constellation``\n convenience method to determine the constellation that a coordinate\n is in. [#3758]\n\n- Added ``PrecessedGeocentric`` frame, which is based on GCRS, but precessed\n to a specific requested mean equinox. [#3758]\n\n- Added ``Supergalactic`` frame to support de Vaucouleurs supergalactic\n coordinates. [#3892]\n\n- ``SphericalRepresentation`` now has a ``._unit_representation`` class attribute to specify\n an equivalent UnitSphericalRepresentation. This allows subclasses of\n representations to pair up correctly. [#3757]\n\n- Added functionality to support getting the locations of observatories by\n name. See ``astropy.coordinates.EarthLocation.of_site``. [#4042]\n\n- Added ecliptic coordinates, including ``GeocentricTrueEcliptic``,\n ``BarycentricTrueEcliptic``, and ``HeliocentricTrueEcliptic``. [#3749]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- Add Planck 2015 cosmology [#3476]\n\n- Distance calculations now > 20-40x faster for the supplied\n cosmologies due to implementing Cython scalar versions of\n ``FLRW.inv_efunc``.[#4127]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Automatically use ``guess=False`` when reading if the file ``format`` is\n provided and the format parameters are uniquely specified. This update\n also removes duplicate format guesses to improve performance. [#3418]\n\n- Calls to ascii.read() for fixed-width tables may now omit one of the keyword\n arguments ``col_starts`` or ``col_ends``. Columns will be assumed to begin and\n end immediately adjacent to each other. [#3657]\n\n- Add a function ``get_read_trace()`` that returns a traceback of the\n attempted read formats for the last call to ``astropy.io.ascii.read``. [#3688]\n\n- Supports LZMA decompression via ``get_readable_fileobj`` [#3667]\n\n- Allow ``-`` character is Sextractor format column names. [#4168]\n\n- Improve DAOphot reader to read multi-aperture files [#3535, #4207]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Support reading and writing from bzip2 compressed files. i.e. ``.fits.bz2``\n files. [#3789]\n\n- Included a new command-line script called ``fitsinfo`` to display\n a summary of the HDUs in one or more FITS files. [#3677]\n\nastropy.io.misc\n^^^^^^^^^^^^^^^\n\n- Support saving all meta information, description and units of tables and columns\n in HDF5 files [#4103]\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- A new method was added to ``astropy.io.votable.VOTable``,\n ``get_info_by_id`` to conveniently find an ``INFO`` element by its\n ``ID`` attribute. [#3633]\n\n- Instances in the votable tree now have better ``__repr__`` methods. [#3639]\n\nastropy.logger.py\n^^^^^^^^^^^^^^^^^\n\n- Added log levels (e.g., DEBUG, INFO, CRITICAL) to ``astropy.log`` [#3947]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Added a new ``Parameter.validator`` interface for setting a validation\n method on individual model parameters. See the ``Parameter``\n documentation for more details. 
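For the ``set_temp_config``/``set_temp_cache`` tools described above, a minimal usage sketch; the directory and URL are placeholders::

    from astropy.config import set_temp_cache
    from astropy.utils.data import download_file

    # Route downloads into a scratch directory instead of the default ~/.astropy/cache.
    with set_temp_cache("/tmp/astropy-scratch"):
        local_path = download_file("https://data.example.org/big_table.fits",
                                   cache=True)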
[#3910]\n\n- The projection classes that are named based on the 3-letter FITS\n WCS projections (e.g. ``Pix2Sky_TAN``) now have aliases using\n longer, more descriptive names (e.g. ``Pix2Sky_Gnomonic``).\n [#3583]\n\n- All of the standard FITS WCS projection types have been\n implemented in ``astropy.modeling.projections`` (by wrapping\n WCSLIB). [#3906]\n\n- Added ``Sersic1D`` and ``Sersic2D`` model classes. [#3889]\n\n- Added the Voigt profile to existing models. [#3901]\n\n- Added ``bounding_box`` property and ``render_model`` function [#3909]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- Added ``block_reduce`` and ``block_replicate`` functions. [#3453]\n\n- ``extract_array`` now offers different options to deal with array\n boundaries [#3727]\n\n- Added a new ``Cutout2D`` class to create postage stamp image cutouts\n with optional WCS propagation. [#3823]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- Added ``sigma_lower`` and ``sigma_upper`` keywords to\n ``sigma_clip`` to allow for non-symmetric clipping. [#3595]\n\n- Added ``cenfunc``, ``stdfunc``, and ``axis`` keywords to\n ``sigma_clipped_stats``. [#3792]\n\n- ``sigma_clip`` automatically masks invalid input values (NaNs, Infs) before\n performing the clipping [#4051]\n\n- Added the ``histogram`` routine, which is similar to ``np.histogram`` but\n includes several additional options for automatic determination of optimal\n histogram bins. Associated helper routines include ``bayesian_blocks``,\n ``friedman_bin_width``, ``scott_bin_width``, and ``knuth_bin_width``.\n This functionality was ported from the astroML_ library. [#3756]\n\n- Added the ``bayesian_blocks`` routine, which implements a dynamic algorithm\n for locating change-points in various time series. [#3756]\n\n- A new function ``poisson_conf_interval()`` was added to allow easy calculation\n of several standard formulae for the error bars on the mean of a Poisson variable\n estimated from a single sample.\n\nastropy.table\n^^^^^^^^^^^^^\n\n- ``add_column()`` and ``add_columns()`` now have ``rename_duplicate``\n option to rename new column(s) rather than raise exception when its name\n already exists. [#3592]\n\n- Added ``Table.to_pandas`` and ``Table.from_pandas`` for converting to/from\n pandas dataframes. [#3504]\n\n- Initializing a ``Table`` with ``Column`` objects no longer requires\n that the column ``name`` attribute be defined. [#3781]\n\n- Added an ``info`` property to ``Table`` objects which provides configurable\n summary information about the table and its columns. [#3731]\n\n- Added an ``info`` property to column classes (``Column`` or mixins). This\n serves a dual function of providing configurable summary information about\n the column, and acting as a manager of column attributes such as\n name, format, or description. [#3731]\n\n- Updated table and column representation to use the ``dtype_info_name``\n function for the dtype value. Removed the default \"masked=False\"\n from the table representation. [#3868, #3869]\n\n- Updated row representation to be consistent with the corresponding\n table representation for that row. Added HTML representation so a\n row displays nicely in IPython notebook.\n\n- Added a new table indexing engine allowing for the creation of\n indices on one or more columns of a table using ``add_index``. These\n indices enable new functionality such as searching for rows by value\n using ``loc`` and ``iloc``, as well as increased performance for\n certain operations. 
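The new table indexing described above can be tried on a toy table (values invented)::

    from astropy.table import Table

    t = Table({"name": ["ngc1", "ngc2", "ngc3"],
               "flux": [3.0, 1.0, 2.0]})
    t.add_index("name")

    row = t.loc["ngc2"]                  # single row looked up through the index
    subset = t.loc[["ngc1", "ngc3"]]     # several rows come back as a new Table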
[#3915, #4202]\n\n- Added capability to include a structured array or recarray in a table\n as a mixin column. This allows for an approximation of nested tables.\n [#3925]\n\n- Added ``keep_byteorder`` option to ``Table.as_array()``. See the\n \"API Changes\" section below. [#4080]\n\n- Added a new method ``Table.replace_column()`` to replace an existing\n column with a new data column. [#4090]\n\n- Added a ``tableclass`` option to ``Table.pformat()`` to allow specifying\n a list of CSS classes added to the HTML table. [#4131]\n\n- New CSS for jsviewer table [#2917, #2982, #4174]\n\n- Added a new ``Table.show_in_notebook`` method that shows an interactive view\n of a Table (similar to ``Table.show_in_browser(jsviewer=True)``) in an\n Python/Jupyter notebook. [#4197]\n\n- Added column alignment formatting for better pprint viewing\n experience. [#3644]\n\nastropy.tests\n^^^^^^^^^^^^^\n\n- Added new test config options, ``config_dir`` and ``cache_dir`` (these\n can be edited in ``setup.cfg`` or as extra command-line options to\n py.test) for setting the locations to use for the Astropy config files\n and download caches (see also the related ``set_temp_config/cache``\n features added in ``astropy.config``). [#3975]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Add support for FITS standard time strings. [#3547]\n\n- Allow the ``format`` attribute to be updated in place to change the\n default representation of a ``Time`` object. [#3673]\n\n- Add support for shape manipulation (reshape, ravel, etc.). [#3224]\n\n- Add argmin, argmax, argsort, min, max, ptp, sort methods. [#3681]\n\n- Add ``Time.to_datetime`` method for converting ``Time`` objects to\n timezone-aware datetimes. [#4119, #4124]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Added furlong to imperial units. [#3529]\n\n- Added mil to imperial units. [#3716]\n\n- Added stone to imperial units. [#4192]\n\n- Added Earth Mass (``M_earth``) and Jupiter mass (``M_jup``) to units [#3907]\n\n- Added support for functional units, in particular the logarithmic ones\n ``Magnitude``, ``Decibel``, and ``Dex``. [#1894]\n\n- Quantities now work with the unit support in matplotlib. See\n :ref:`plotting-quantities`. [#3981]\n\n- Clarified imperial mass measurements and added pound force (lbf),\n kilopound (kip), and pound per square inch (psi). [#3409]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Added new ``OrderedDescriptor`` and ``OrderedDescriptorContainer`` utility\n classes that make it easier to implement classes with declarative APIs,\n wherein class-level attributes have an inherit \"ordering\" to them that is\n specified by the order in which those attributes are defined in the class\n declaration (by defining them using special descriptors that have\n ``OrderedDescriptor`` as a base class). See the API documentation for\n these classes for more details. Coordinate frames and models now use this\n interface. [#3679]\n\n- The ``get_pkg_data_*`` functions now take an optional ``package`` argument\n which allows specifying any package to read package data filenames or\n content out of, as opposed to only being able to use data from the package\n that the function is called from. [#4079]\n\n- Added function ``dtype_info_name`` to the ``data_info`` module to provide\n the name of a ``dtype`` for human-readable informational purposes. [#3868]\n\n- Added ``classproperty`` decorator--this is to ``property`` as\n ``classmethod`` is to normal instance methods. [#3982]\n\n- ``iers.open`` now handles network URLs, as well as local paths. 
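A short sketch of the ``Time.to_datetime`` conversion added above::

    from astropy.time import Time

    t = Time("2015-12-11 22:00:00", scale="utc")
    dt = t.to_datetime()        # a plain datetime.datetime in UTC

    # Passing a tzinfo object (for example from pytz) as to_datetime(timezone=...)
    # yields a timezone-aware datetime instead.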
[#3850]\n\n- The ``astropy.utils.wraps`` decorator now takes an optional\n ``exclude_args`` argument not shared by the standard library ``wraps``\n decorator (as it is unique to the Astropy version's ability of copying\n the wrapped function's argument signature). ``exclude_args`` allows\n certain arguments on the wrapped function to be excluded from the signature\n of the wrapper function. This is particularly useful when wrapping an\n instance method as a function (to exclude the ``self`` argument). [#4017]\n\n- ``get_readable_fileobj`` can automatically decompress LZMA ('.xz')\n files using the ``lzma`` module of Python 3.3+ or, when available, the\n ``backports.lzma`` package on earlier versions. [#3667]\n\n- The ``resolve_name`` utility now accepts any number of additional\n positional arguments that are automatically dotted together with the\n first ``name`` argument. [#4083]\n\n- Added ``is_url_in_cache`` for resolving paths to cached files via URLS\n and checking if files exist. [#4095]\n\n- Added a ``step`` argument to the ``ProgressBar.map`` method to give\n users control over the update frequency of the progress bar. [#4191]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Added a function / context manager ``quantity_support`` for enabling\n seamless plotting of ``Quantity`` instances in matplotlib. [#3981]\n\n- Added the ``hist`` function, which is similar to ``plt.hist`` but\n includes several additional options for automatic determination of optimal\n histogram bins. This functionality was ported from the astroML_ library.\n [#3756]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- The included version of wcslib has been upgraded to 5.10. [#3992, #4239]\n\n The minimum required version of wcslib in the 4.x series remains 4.24.\n\n The minimum required version of wcslib in the 5.x series is\n 5.8. Building astropy against a wcslib 5.x prior to 5.8\n will raise an ``ImportError`` when ``astropy.wcs`` is imported.\n\n The wcslib changes relevant to astropy are:\n\n- The FITS headers returned by ``astropy.wcs.WCS.to_header`` and\n ``astropy.wcs.WCS.to_header_string`` now include values with\n more precision. This will result in numerical differences in\n your results if you convert ``astropy.wcs.WCS`` objects to FITS\n headers and use the results.\n\n- ``astropy.wcs.WCS`` now recognises the ``TPV``, ``TPD``,\n ``TPU``, ``DSS``, ``TNX`` and ``ZPX`` polynomial distortions.\n\n- Added relaxation flags to allow ``PC0i_0ja``, ``PV0j_0ma``, and\n ``PS0j_0ma`` (i.e. with leading zeroes on the index).\n\n- Tidied up error reporting, particularly relating to translating\n status returns from lower-level functions.\n\n- Changed output formatting of floating point values in\n ``to_header``.\n\n- Enhanced text representation of ``WCS`` objects. [#3604]\n\n- The ``astropy.tests.helper`` module is now part of the public API (and has a\n documentation page). This module was in previous releases of astropy,\n but was not considered part of the public API until now. [#3890]\n\n.. _astroML: http://astroML.org\n\n- There is a new function ``astropy.online_help`` to search the\n astropy documentation and display the result in a web\n browser. [#3642]\n\nAPI changes\n-----------\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- ``FLRW._tfunc`` and ``FLRW._xfunc`` are marked as deprecated. Users\n should use the new public interfaces ``FLRW.lookback_time_integrand``\n and ``FLRW.abs_distance_integrand`` instead. 
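The matplotlib unit support mentioned above (``quantity_support``) can be used as a context manager; this sketch assumes matplotlib is available::

    import numpy as np
    import matplotlib.pyplot as plt
    import astropy.units as u
    from astropy.visualization import quantity_support

    with quantity_support():
        t = np.linspace(0, 10, 50) * u.s
        d = (3 * u.m / u.s) * t        # distances in metres
        plt.plot(t, d)                 # the axes pick up the 's' and 'm' units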
[#3767]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- The default header line processing was made to be consistent with data line\n processing in that it now ignores blank lines that may have whitespace\n characters. Any code that explicitly specifies a ``header_start`` value\n for parsing a file with blank lines in the header containing whitespace will\n need to be updated. [#2654]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- The ``uint`` argument to ``fits.open`` is now True by default; that is,\n arrays using the FITS unsigned integer convention will be detected, and\n read as unsigned integers by default. A new config option for\n ``io.fits``, ``enable_uint``, can be changed to False to revert to the\n original behavior of ignoring the ``uint`` convention unless it is\n explicitly requested with ``uint=True``. [#3916]\n\n- The ``ImageHDU.NumCode`` and ``ImageHDU.ImgCode`` attributes (and same\n for other classes derived from ``_ImageBaseHDU``) are deprecated. Instead,\n the ``astropy.io.fits`` module-level constants ``BITPIX2DTYPE`` and\n ``DTYPE2BITPIX`` can be used. [#3916]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Note: Comparisons of model parameters with array-like values now\n yields a Numpy boolean array as one would get with normal Numpy\n array comparison. Previously this returned a scalar True or False,\n with True only if the comparison was true for all elements compared,\n which could lead to confusing circumstances. [#3912]\n\n- Using ``model.inverse = None`` to reset a model's inverse to its\n default is deprecated. In the future this syntax will explicitly make\n a model not have an inverse (even if it has a default). Instead, use\n ``del model.inverse`` to reset a model's inverse to its default (if it\n has a default, otherwise this just deletes any custom inverse that has\n been assigned to the model and is still equivalent to setting\n ``model.inverse = None``). [#4236]\n\n- Adds a ``model.has_user_inverse`` attribute which indicates whether or not\n a user has assigned a custom inverse to ``model.inverse``. This is just\n for informational purposes, for example, for software that introspects\n model objects. [#4236]\n\n- Renamed the parameters of ``RotateNative2Celestial`` and\n ``RotateCelestial2Native`` from ``phi``, ``theta``, ``psi`` to\n ``lon``, ``lat`` and ``lon_pole``. [#3578]\n\n- Deprecated the ``Pix2Sky_AZP.check_mu`` and ``Sky2Pix_AZP.check_mu``\n methods (these were obscure \"accidentally public\" methods that were\n probably not used by anyone). [#3910]\n\n- Added a phase parameter to the Sine1D model. [#3807]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- Renamed the ``sigma_clip`` ``sig`` keyword as ``sigma``. [#3595]\n\n- Changed the ``sigma_clip`` ``varfunc`` keyword to ``stdfunc``. [#3595]\n\n- Renamed the ``sigma_clipped_stats`` ``mask_val`` keyword to\n ``mask_value``. [#3595]\n\n- Changed the default ``iters`` keyword value to 5 in both the\n ``sigma_clip`` and ``sigma_clipped_stats`` functions. [#4067]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- ``Table.as_array()`` always returns a structured array with each column in\n the system's native byte order. The optional ``keep_byteorder=True``\n option will keep each column's data in its original byteorder. [#4080]\n\n- ``Table.simple_table()`` now creates tables with int64 and float64 types\n instead of int32 and float64. [#4114]\n\n- An empty table can now be initialized without a ``names`` argument as long\n as a valid ``dtype`` argument (with names embedded) is supplied. 
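For the change above about initialising an empty table from a ``dtype`` with embedded names, a minimal sketch::

    from astropy.table import Table

    # No names= argument: the column names come from the dtype itself.
    t = Table(dtype=[("id", "i8"), ("flux", "f8")])
    t.add_row((1, 2.5))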
[#3977]\n\nastropy.time\n^^^^^^^^^^^^\n\n- The ``astropy_time`` attribute and time format has been removed from the\n public interface. Existing code that instantiates a new time object using\n ``format='astropy_time'`` can simply omit the ``format``\n specification. [#3857]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Single-item ``Quantity`` instances with record ``dtype`` will now have\n their ``isscalar`` property return ``True``, consistent with behaviour for\n numpy arrays, where ``np.void`` records are considered scalar. [#3899]\n\n- Three changes relating to the FITS unit format [#3993]:\n\n- The FITS unit format will no longer parse an arbitrary number as a\n scale value. It must be a power of 10 of the form ``10^^k``,\n ``10^k``, ``10+k``, ``10-k`` and ``10(k)``. [#3993]\n\n- Scales that are powers of 10 can be written out. Previously, any\n non-1.0 scale was rejected.\n\n- The ``*`` character is accepted as a separator between the scale\n and the units.\n\n- Unit formatter classes now require the ``parse`` and ``to_string``\n methods are now required to be classmethods (and the formatter\n classes themselves are assumed to be singletons that are not\n instantiated). As unit formatters are mostly an internal implementation\n detail this is not likely to affect any users. [#4001]\n\n- CGS E&M units are now defined separately from SI E&M units, and have\n distinct physical types. [#4255, #4355]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- All of the ``get_pkg_data_*`` functions take an optional ``package``\n argument as their second positional argument. So any code that previously\n passed other arguments to these functions as positional arguments might\n break. Use keyword argument passing instead to mitigate this. [#4079]\n\n- ``astropy.utils.iers`` now uses a ``QTable`` internally, which means that\n the numerical columns are stored as ``Quantity``, with full support for\n units. Furthermore, the ``ut1_utc`` method now returns a ``Quantity``\n instead of a float or an array (as did ``pm_xy`` already). [#3223]\n\n- ``astropy.utils.iers`` now throws an ``IERSRangeError``, a subclass\n of ``IndexError``, rather than a raw ``IndexError``. This allows more\n fine-grained catching of situations where a ``Time`` is beyond the range\n of the loaded IERS tables. [#4302]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- When compiled with wcslib 5.9 or later, the FITS headers returned\n by ``astropy.wcs.WCS.to_header`` and\n ``astropy.wcs.WCS.to_header_string`` now include values with more\n precision. This will result in numerical differences in your\n results if you convert ``astropy.wcs.WCS`` objects to FITS headers\n and use the results.\n\n- If NAXIS1 or NAXIS2 is not passed with the header object to\n WCS.calc_footprint, a ValueError is raised. [#3557]\n\nBug fixes\n---------\n\nastropy.constants\n^^^^^^^^^^^^^^^^^\n\n- The constants ``Ry`` and ``u`` are now properly used inside the\n corresponding units. The latter have changed slightly as a result. [#4229]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Internally, ``coordinates`` now consistently uses the appropriate time\n scales for using ERFA functions. [#4302]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Fix a segfault in the fast C parser when one of the column headers\n is empty [#3545].\n\n- Fix several bugs that prevented the fast readers from being used\n when guessing the file format. Also improved the read trace\n information to better understand format guessing. 
[#4115]\n\n- Fix an underlying problem that resulted in an uncaught TypeError\n exception when reading a CDS-format file with guessing enabled. [#4120]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- ``Simplex`` fitter now correctly passes additional keywords arguments to\n the scipy solver. [#3966]\n\n- The keyword ``acc`` (for accuracy) is now correctly accepted by\n ``Simplex``. [#3966]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- The units ``Ryd`` and ``u`` are no longer hard-coded numbers, but depend\n on the appropriate values in the ``constants`` module. As a result, these\n units now imply slightly different conversions. [#4229]\n\nOther Changes and Additions\n---------------------------\n\n- The ``./setup.py test`` command is now implemented in the ``astropy.tests``\n module again (previously its implementation had been moved into\n astropy-helpers). However, that made it difficult to synchronize changes\n to the Astropy test runner with changes to the ``./setup.py test`` UI.\n astropy-helpers v1.1 and above will detect this implementation of the\n ``test`` command, when present, and use it instead of the old version that\n was included in astropy-helpers (most users will not notice any difference\n as a result of this change). [#4020]\n\n- The repr for ``Table`` no longer displays ``masked=False`` since tables\n are not masked by default anyway. [#3869]\n\n- The version of ``PLY`` that ships with astropy has been updated to 3.6.\n\n- WCSAxes is now required for doc builds. [#4074]\n\n- The migration guide from pre-v0.4 coordinates has been removed to avoid\n cluttering the ``astropy.coordinates`` documentation with increasingly\n irrelevant material. To see the migration guide, we recommend you simply look\n to the archived documentation for previous versions, e.g.\n http://docs.astropy.org/en/v1.0/coordinates/index.html#migrating-from-pre-v0-4-coordinates\n [#4203]\n\n- In ``astropy.coordinates``, the transformations between GCRS, CIRS,\n and ITRS have been adjusted to more logically reflect the order in\n which they actually apply. This should not affect most coordinate\n transformations, but may affect code that is especially sensitive to\n machine precision effects that change when the order in which\n transformations occur is changed. [#4255]\n\n- Astropy v1.1.0 will be the last release series to officially support\n Python 2.6. A deprecation warning will now be issued when using Astropy\n in Python 2.6 (this warning can be disabled through the usual Python warning\n filtering mechanisms). [#3779]\n\n\n1.0.13 (2017-05-29)\n===================\n\nBug Fixes\n---------\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Fix use of quantize level parameter for ``CompImageHDU``. [#6029]\n\n- Prevent crash when a header contains non-ASCII (e.g. UTF-8) characters, to\n allow fixing the problematic cards. [#6084]\n\n\n1.0.12 (2017-03-05)\n===================\n\nBug Fixes\n---------\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- Fixed bug in ``discretize_integrate_2D`` in which x and y coordinates\n where swapped. [#5634]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Fixed a bug where ``get_transform`` could sometimes produce confusing errors\n because of a typo in the input validation. [#5645]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Guard against extremely unlikely problems in compressed images, which\n could lead to memory unmapping errors. 
[#5775]\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- Fixed a bug where stdlib ``realloc()`` was used instead of\n ``PyMem_Realloc()`` [#5696, #4739, #2100]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Fixed ImportError with NumPy < 1.7 and Python 3.x in\n ``_register_patched_dtype_reduce``. [#5848]\n\n\n1.0.11 (2016-12-22)\n===================\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Initialising a SkyCoord from a list containing a single SkyCoord no longer removes\n the distance from the coordinate. [#5270]\n\n- Fix errors in the implementation of the conversion to and from FK4 frames\n without e-terms, which will have affected coordinates not on the unit\n sphere (i.e., with distances). [#4293]\n\n- Fix bug where with cds units enabled it was no longer possible to initialize\n an ``Angle``. [#5483]\n\n- Ensure that ``search_around_sky`` and ``search_around_3d`` return\n integer type index arrays for empty (non) matches. [#4877, #5083]\n\n- Return an empty set of matches for ``search_around_sky`` and\n ``search_around_3d`` when one or both of the input coordinate\n arrays is empty. [#4875, #5083]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Fix a bug with empty value at end of tab-delimited table on Windows. [#5370]\n\n- Fix reading of big ASCII tables (more than 2Gb) with the fast reader.\n [#5319]\n\n- Fix segfault with FastCsv and row with too many columns. [#5534]\n\n- Fix problem reading an AASTex format table that does not have ``\\\\``\n at the end of the last table row. [#5427]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Removed raising of AssertionError that could occur after closing or\n deleting compressed image data. [#4690, #4694, #4948]\n\n- Fixed bug that caused an ignored exception to be displayed under certain\n conditions when terminating a script after using fits.getdata(). [#4977]\n\n- Fixed usage of inplace operations that were raising an exception with\n recent versions of Numpy due to implicit casting. [#5250]\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- Fixed bug of ``Resource.__repr__()`` having undefined attributes and\n variables. [#5382]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- CompoundModel now correctly inherits _n_models, allowing the use of model sets [#5358]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Fixed bug in Ci definition. [#5106]\n\n- Non-ascii cds unit strings are now correctly represented using ``str`` also\n on python2. This solves bugs in parsing coordinates involving strings too.\n [#5355]\n\n- Ensure ``Quantity`` supports ``np.float_power``, which is new in numpy 1.12.\n [#5480]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Fixed AttributeError when calling ``utils.misc.signal_number_to_name`` with\n Python3 [#5430].\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Update the ``_naxis{x}`` attributes when calling ``WCS.slice``. [#5411]\n\n\nOther Changes and Additions\n---------------------------\n\n- The bundled ERFA was updated to version 1.3.0. This includes the\n leap second planned for 2016 Dec 31. [#5418]\n\n1.0.10 (2016-06-09)\n===================\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- ``SkyCoord`` objects created before a new frame which has frame attributes\n is created no longer raise ``AttributeError`` when the new attributes are\n accessed [#5021]\n\n- Fix some errors in the implementation of aberration for ``get_sun``. [#4979]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Fix problem reading a zero-length ECSV table with a bool type column. 
[#5010]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Fix convenience functions (``getdata``, ``getheader``, ``append``,\n ``update``) to close files. [#4786]\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- The astropy.io.votable.validator.html module is updated to handle division\n by zero when generating validation report. [#4699]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fixed a bug where ``pprint()`` sometimes raises ``UnicodeDecodeError``\n in Python 2. [#4946]\n\n- Fix bug when doing outer join on multi-dimensional columns. [#4060]\n\n- Fixed bug where Tables created from existing Table objects were not\n inheriting the ``primary_key`` attribute. [#4672]\n\nastropy.tests\n^^^^^^^^^^^^^\n\n- Fix coverage reporting in Python 3. [#4822]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Duplicates between long and short names are now removed in the ``names``\n and ``aliases`` properties of units. [#5036]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- The astropy.utils.xml.unescaper module now also unescapes ``'%2F'`` to\n ``'/'`` and ``'&&'`` to ``'&'`` in a given URL. [#4699]\n\n- Fix two problems related to the download cache: clear_download_cache() does\n not work in Python 2.7 and downloading in Python 2.7 and then Python 3\n can result in an exception. [#4810]\n\nastropy.vo\n^^^^^^^^^^\n\n- Cache option now properly caches both downloaded JSON database and XML VO\n tables. [#4699]\n\n- The astropy.vo.validator.conf.conesearch_urls listing is updated to reflect\n external changes to some VizieR Cone Search services. [#4699]\n\n- VOSDatabase decodes byte-string to UTF-8 instead of ASCII to avoid\n UnicodeDecodeError for some rare cases. Fixed a Cone Search test that is\n failing as a side-effect of #4699. [#4757]\n\nOther Changes and Additions\n---------------------------\n\n- Updated ``astropy.tests`` test runner code to work with Coverage v4.0 when\n generating test coverage reports. [#4176]\n\n\n1.0.9 (2016-03-10)\n==================\n\nNew Features\n------------\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- ``NDArithmeticMixin`` check for matching WCS now works with\n ``astropy.wcs.WCS`` objects [#4499, #4503]\n\nBug Fixes\n---------\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- Correct a bug in which ``psf_pad`` and ``fft_pad`` would be ignored [#4366]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Fixed addition of new line characters after last row of data in\n ascii.latex.AASTex. [#4561]\n\n- Fixed reading of Latex tables where the ``\\tabular`` tag is in the first\n line. [#4595]\n\n- Fix use of plain format strings with the fast writer. [#4517]\n\n- Fix bug writing space-delimited file when table has empty fields. [#4417]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Fixed possible segfault during error handling in FITS tile\n compression. [#4489]\n\n- Fixed crash on pickling of binary table columns with the 'X', 'P', or\n 'Q' format. [#4514]\n\n- Fixed memory / reference leak that could occur when copying a ``FITS_rec``\n object (the ``.data`` for table HDUs). [#520]\n\n- Fixed a memory / reference leak in ``FITS_rec`` that occurred in a wide\n range of cases, especially after writing FITS tables to a file, but in\n other cases as well. [#4539]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Fixed display of compound model expressions and components when printing\n compound model instances. [#4414, #4482]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- the input for median_absolute_deviation will not be cast to plain numpy\n arrays when given subclasses of numpy arrays\n (like Quantity, numpy.ma.MaskedArray, etc.) 
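The ``median_absolute_deviation`` behaviour described above (inputs are no longer cast to plain ndarrays) means units survive the calculation; the numbers here are made up::

    import astropy.units as u
    from astropy.stats import median_absolute_deviation

    flux = [1.0, 1.2, 0.9, 50.0] * u.mJy
    mad = median_absolute_deviation(flux)   # comes back as a Quantity in mJy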
[#4658]\n\n- Fixed incorrect results when using median_absolute_deviation with masked\n arrays. [#4658]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- The ``zest.releaser`` hooks included in Astropy are now injected locally to\n Astropy, rather than being global. [#4650]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Fixed ``fits2bitmap`` script to allow ext flag to contain extension\n names or numbers. [#4468]\n\n- Fixed ``fits2bitmap`` default output filename generation for\n compressed FITS files. [#4468]\n\n\n1.0.8 (2016-01-08)\n==================\n\nBug Fixes\n---------\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Fixed an bug where updates to string columns in FITS tables were not saved\n on Python 3. [#4452]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- In-place peak-to-peak calculations now work on ``Quantity``. [#4442]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Fixed ``find_api_page`` to work correctly on python 3.x [#4378, #4379]\n\n\n1.0.7 (2015-12-04)\n==================\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Pickling of ``EarthLocation`` instances now also works on Python 2. [#4304]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Fix fast writer so bytestring column output is not prefixed by 'b' in\n Python 3. [#4350]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Fixed a regression that could cause writes of large FITS files to be\n truncated. [#4307]\n\n- Astropy v1.0.6 included a fix (#4228) for an obscure case where the TDIM\n of a table column is smaller than the repeat count of its data format.\n This updates that fix in such a way that it works with Numpy 1.10 as well.\n [#4266]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fix a bug when pickling a Table with mixin columns (e.g. Time). [#4098]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Fix incorrect ``value`` attribute for epoch formats like \"unix\"\n when ``scale`` is different from the class ``epoch_scale``. [#4312]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Fixed an issue where if ipython is installed but ipykernel is not\n installed then importing astropy from the ipython console gave an\n IPython.kernel deprecation warning. [#4279]\n\n- Fixed crash that could occur in ``ProgressBar`` when ``astropy`` is\n imported in an IPython startup script. [#4274]\n\nOther Changes and Additions\n---------------------------\n\n- Updated bundled astropy-helpers to v1.0.6. [#4372]\n\n\n1.0.6 (2015-10-22)\n==================\n\nBug Fixes\n---------\n\nastropy.analytic_functions\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n- Fixed blackbody analytic functions to properly support arrays of\n temperatures. [#4251]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Fixed errors in transformations for objects within a few AU of the\n Earth. Included substantive changes to transformation machinery\n that may change distances at levels ~machine precision for other\n objects. [#4254]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- ``fitsdiff`` and related functions now do a better job reporting differences\n between values that are different types but have the same representation\n (ex: the string '0' versus the number 0). [#4122]\n\n- Miscellaneous fixes for supporting Numpy 1.10. [#4228]\n\n- Fixed an issue where writing a column of unicode strings to a FITS table\n resulted in a quadrupling of size of the column (i.e. 
the format of the\n FITS column was 4 characters for every one in the original strings).\n [#4228]\n\n- Added support for an obscure case (but nonetheless allowed by the FITS\n standard) where a column has some TDIMn keyword, but a repeat count in\n the TFORMn column greater than the number of elements implied by the\n TDIMn. For example TFORMn = 100I, but TDIMn = '(5,5)'. In this case\n the TDIMn implies 5x5 arrays in the column, but the TFORMn implies\n a 100 element 1-D array in the column. In this case the TDIM takes\n precedence, and the remaining bytes in the column are ignored. [#4228]\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- Fixed crash with Python compiler optimization level = 2. [#4231]\n\nastropy.vo\n^^^^^^^^^^\n\n- Fixed ``check_conesearch_sites`` with ``parallel=True`` on Python >= 3.3\n and on Windows (it was broken in both those cases for separate reasons).\n [#2970]\n\nOther Changes and Additions\n---------------------------\n\n- All tests now pass against Numpy v1.10.x. This implies nominal support for\n Numpy 1.10.x moving forward (but there may still be unknown issues). For\n example, there is already a known performance issue with tables containing\n large multi-dimensional columns--for example, tables that contain entire\n images in one or more of their columns. This is a known upstream issue in\n Numpy. [#4259]\n\n\n1.0.5 (2015-10-05)\n==================\n\nBug Fixes\n---------\n\nastropy.constants\n^^^^^^^^^^^^^^^^^\n\n- Rename units -> unit and error -> uncertainty in the ``repr`` and ``str``\n of constants to match attribute names. [#4147]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Fix string representation of ``SkyCoord`` objects transformed into\n the ``AltAz`` frame [#4055, #4057]\n\n- Fix the ``search_around_sky`` function to allow ``storekdtree`` to be\n ``False`` as was intended. [#4082, #4212]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Fix bug when extending one header (without comments) with another\n (with comments). [#3967]\n\n- Somewhat improved resource usage for FITS data--previously a new ``mmap``\n was opened for each HDU of a FITS file accessed through an ``HDUList``.\n Each ``mmap`` used up a single file descriptor, causing problems with\n system resource limits for some users. Now only a single ``mmap`` is\n opened, and shared for the data of all HDUs. Note: The problem still\n persists with using the \"convenience\" functions. For example using\n ``fits.getdata`` will create one ``mmap`` per HDU read this way (as\n opposed to opening the file with ``fits.open`` and accessing the HDUs\n through the ``HDUList`` object). [#4097]\n\n- Fix bug where reading a file without a newline failed with an\n unrelated / unhelpful exception. [#4160]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Cleaned up ``repr`` of models that have no parameters. [#4076]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- Initializing ``NDDataArray`` from another instance now sets ``flags`` as\n expected and no longer fails when ``uncertainty`` is set [#4129].\n Initializing an ``NDData`` subclass from a parent instance\n (eg. ``NDDataArray`` from ``NDData``) now sets the attributes other than\n ``data`` as it should [#4130, #4137].\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fix an issue with setting fill value when column dtype is changed. [#4088]\n\n- Fix bug when unpickling a bare Column where the _parent_table\n attribute was not set. This impacted the Column representation. 
[#4099]\n\n- Fix issue with the web browser opening with an empty page, and ensure that\n the url is correctly formatted for Windows. [#4132]\n\n- Fix NameError in table stack exception message. [#4213]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- ``resolve_name`` no longer causes ``sys.modules`` to be cluttered with\n additional copies of modules under a package imported like\n ``resolve_name('numpy')``. [#4084]\n\n- ``console`` was updated to support IPython 4.x and Jupyter 1.x.\n This should suppress a ShimWarning that was appearing at\n import of astropy with IPython 4.0 or later. [#4078]\n\n- Temporary downloaded files created by ``get_readable_fileobj`` when passed\n a URL are now deleted immediately after the file is closed. [#4198]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- The color for axes labels was set to white. Since white labels on white\n background are hard to read, the label color has been changed to black.\n [#4143]\n\n- ``ImageNormalize`` now automatically determines ``vmin``/``vmax``\n (via the ``autoscale_None`` method) when they have not been set\n explicitly. [#4117]\n\nastropy.vo\n^^^^^^^^^^\n\n- Cone Search validation no longer crashes when the provider gives an\n incomplete test query. It also ensures search radius for a test query\n is not too large to avoid timeout. [#4158, #4159]\n\nOther Changes and Additions\n---------------------------\n\n- Astropy now supports Python 3.5. [#4027]\n\n- Updated bundled version of astropy-helpers to 1.0.5. [#4215]\n\n- Updated tests to support py.test 2.7, and upgraded the bundled copy of\n py.test to v2.7.3. [#4027]\n\n\n1.0.4 (2015-08-11)\n==================\n\nNew Features\n------------\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- Modified Cython functions to release the GIL. This enables convolution\n to be parallelized effectively and gives large speedups when used with\n multithreaded task schedulers such as Dask. [#3949]\n\nAPI Changes\n-----------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Some transformations for an input coordinate that's a scalar now correctly\n return a scalar. This was always the intended behavior, but it may break\n code that has been written to work-around this bug, so it may be viewed as\n an unplanned API change [#3920, #4039]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- The ``astropy_mpl_style`` no longer sets ``interactive`` to ``True``, but\n instead leaves it at the user preference. This makes using the style\n compatible with building docs with Sphinx, and other non-interactive\n contexts. [#4030]\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Fix bug where coordinate representation setting gets reset to default value\n when coordinate array is indexed or sliced. [#3824]\n\n- Fixed confusing warning message shown when using dates outside current IERS\n data. 
[#3844]\n\n- ``get_sun`` now yields a scalar when the input time is a scalar (this was a\n regression in v1.0.3 from v1.0.2) [#3998, #4039]\n\n- Fixed bug where some scalar coordinates were incorrectly being changed to\n length-1 array coordinates after transforming through certain frames.\n [#3920, #4039]\n\n- Fixed bug causing the ``separation`` methods of ``SkyCoord`` and frame\n classes to fail due to infinite recursion [#4033, #4039]\n\n- Made it so that passing in a list of ``SkyCoord`` objects that are in\n UnitSphericalRepresentation to the ``SkyCoord`` constructor appropriately\n yields a new object in UnitSphericalRepresentation [#3938, #4039]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- Fixed wCDM to not ignore the Ob0 parameter on initialization. [#3934]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Fixed crash when updating data in a random groups HDU opened in update\n mode. [#3730]\n\n- Fixed incorrect checksum / datasum being written when re-writing a scaled\n HDU (i.e. non-trivial BSCALE and/or BZERO) with\n ``do_not_scale_image_data=False``. [#3883]\n\n- Fixed stray deprecation warning in ``BinTableHDU.copy()``. [#3798]\n\n- Better handling of the ``BLANK`` keyword when auto-scaling scaled image\n data. The ``BLANK`` keyword is now removed from the header after\n auto-scaling is applied, and it is restored properly (with floating point\n NaNs replaced by the filler value) when updating a file opened with the\n ``scale_back=True`` argument. Invalid usage of the ``BLANK`` keyword is\n also better warned about during validation. [#3865]\n\n- Reading memmaped scaled images won't fail when\n ``do_not_scale_image_data=True`` (that is, since we're just reading the raw\n / physical data there is no reason mmap can't be used). [#3766]\n\n- Fixed a reference cycle that could sometimes cause FITS table-related\n objects (``BinTableHDU``, ``ColDefs``, etc.) to hang around in memory\n longer than expected. [#4012]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Improved support for pickling of compound models, including both compound\n model instances, and new compound model classes. [#3867]\n\n- Added missing default values for ``Ellipse2D`` parameters. [#3903]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Fixed iteration of scalar ``Time`` objects so that ``iter()`` correctly\n raises a ``TypeError`` on them (while still allowing ``Time`` arrays to be\n iterated). [#4048]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Added frequency-equivalency check when declaring doppler equivalencies\n [#3728]\n\n- Define ``floor_divide`` (``//``) for ``Quantity`` to be consistent\n ``divmod``, such that it only works where the quotient is dimensionless.\n This guarantees that ``(q1 // q2) * q2 + (q1 % q2) == q1``. [#3817]\n\n- Fixed the documentation of supported units to correctly report support for\n SI prefixes. Previously the table of supported units incorrectly showed\n several derived unit as not supporting prefixes, when in fact they do.\n [#3835]\n\n- Fix a crash when calling ``astropy.units.cds.enable()``. This will now\n \"set\" rather than \"add\" units to the active set to avoid the namespace\n clash with the default units. [#3873]\n\n- Ensure in-place operations on ``float32`` quantities work. [#4007]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- The ``deprecated`` decorator did not correctly wrap classes that have a\n custom metaclass--the metaclass could be dropped from the deprecated\n version of the class. 
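The ``//`` behaviour for ``Quantity`` described above keeps the usual divmod identity; a small check with invented values::

    import astropy.units as u

    q1 = 23 * u.m
    q2 = 7 * u.m

    quotient = q1 // q2          # dimensionless, because the metres cancel
    remainder = q1 % q2          # 2 m
    assert quotient * q2 + remainder == q1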
[#3997]\n\n- The ``wraps`` decorator would copy the wrapped function's name to the\n wrapper function even when ``'__name__'`` is excluded from the ``assigned``\n argument. [#4016]\n\nMisc\n^^^^\n\n- ``fitscheck`` no longer causes scaled image data to be rescaled when\n adding checksums to existing files. [#3884]\n\n- Fixed an issue where running ``import astropy`` from within the source\n tree did not automatically build the extension modules if the source is\n from a source distribution (as opposed to a git repository). [#3932]\n\n- Fixed multiple instances of a bug that prevented Astropy from being used\n when compiled with the ``python -OO`` flag, due to it causing all\n docstrings to be stripped out. [#3923]\n\n- Removed source code template files that were being installed\n accidentally alongside installed Python modules. [#4014]\n\n- Fixed a bug in the exception logging that caused a crash in the exception\n handler itself on Python 3 when exceptions do not include a message.\n [#4056]\n\n\n1.0.3 (2015-06-05)\n==================\n\nNew Features\n------------\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Greatly improved the speed of printing a large table to the screen when\n only a few rows are being displayed. [#3796]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Add support for the 2015-Jun-30 leap second. [#3794]\n\nAPI Changes\n-----------\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Note that HTML formatted tables will not always be found with guess mode\n unless it passes certain heuristics that strongly suggest the presence of\n HTML in the input. Code that expects to read tables from HTML should\n specify ``format='html'`` explicitly. See bug fixes below for more\n details. [#3693]\n\nBug Fixes\n---------\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- Fix issue with repeated normalizations of ``Kernels``. [#3747]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Fixed ``get_sun`` to yield frames with the ``obstime`` set to what's passed into the function (previously it incorrectly always had J2000). [#3750]\n\n- Fixed ``get_sun`` to account for aberration of light. [#3750]\n\n- Fixed error in the GCRS->ICRS transformation that gave incorrect distances. [#3750]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Remove HTML from the list of automatically-guessed formats when reading if\n the file does not appear to be HTML. This was necessary to avoid a\n commonly-encountered segmentation fault occurring in the libxml parser on\n MacOSX. [#3693]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Fixes to support the upcoming Numpy 1.10. [#3419]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Polynomials are now scaled when used in a compound model. [#3702]\n\n- Fixed the ``Ellipse2D`` model to be consistent with ``Disk2D`` in\n how pixels are included. [#3736]\n\n- Fixed crash when evaluating a model that accepts no inputs. [#3772]\n\nastropy.testing\n^^^^^^^^^^^^^^^\n\n- The Astropy py.test plugins that disable unintentional internet access\n in tests were also blocking use of local UNIX sockets in tests, which\n prevented testing some multiprocessing code--fixed. [#3713]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Supported full SI prefixes for the barn unit (\"picobarn\", \"femtobarn\",\n etc.) [#3753]\n\n- Fix loss of precision when multiplying non-whole-numbered powers\n of units together. For example, before this change, ``(u.m **\n 1.5) ** Fraction(4, 5)`` resulted in an inaccurate floating-point\n power of ``1.2000000000000002``. After this change, the exact\n rational number of ``Fraction(6, 5)`` is maintained. 
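The fractional-power precision fix above cites exactly this case; a sketch of what it implies (inspecting ``powers`` is only for illustration)::

    from fractions import Fraction
    import astropy.units as u

    unit = (u.m ** 1.5) ** Fraction(4, 5)
    # Per the entry above, the combined exponent is kept as the exact rational 6/5
    # rather than drifting to the floating-point 1.2000000000000002.
    print(unit.powers)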
[#3790]\n\n- Fixed printing of object ndarrays containing multiple Quantity\n objects with differing / incompatible units. Note: Unit conversion errors\n now cause a ``UnitConversionError`` exception to be raised. However, this\n is a subclass of the ``UnitsError`` exception used previously, so existing\n code that catches ``UnitsError`` should still work. [#3778]\n\nOther Changes and Additions\n---------------------------\n\n- Added a new ``astropy.__bibtex__`` attribute which gives a citation\n for Astropy in bibtex format. [#3697]\n\n- The bundled version of ERFA was updated to v1.2.0 to address leapsecond\n updates. [#3802]\n\n\n0.4.6 (2015-05-29)\n==================\n\nBug Fixes\n---------\n\nastropy.time\n^^^^^^^^^^^^\n\n- Fixed ERFA code to handle the 2015-Jun-30 leap second. [#3795]\n\n\n1.0.2 (2015-04-16)\n==================\n\nNew Features\n------------\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Added support for polynomials with degree 0 or degree greater than 15.\n [#3574, 3589]\n\nBug Fixes\n---------\n\nastropy.config\n^^^^^^^^^^^^^^\n\n- The pre-astropy-0.4 configuration API has been fixed. It was\n inadvertently broken in 1.0.1. [#3627]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Fixed a severe memory leak that occurred when reading tile compressed\n images. [#3680]\n\n- Fixed bug where column data could be unintentionally byte-swapped when\n copying data from an existing FITS file to a new FITS table with a\n TDIMn keyword for that column. [#3561]\n\n- The ``ColDefs.change_attrib``, ``ColDefs.change_name``, and\n ``ColDefs.change_unit`` methods now work as advertised. It is also\n possible (and preferable) to update attributes directly on ``Column``\n objects (for example setting ``column.name``), and the change will be\n accurately reflected in any associated table data and its FITS header.\n [#3283, #1539, #2618]\n\n- Fixes an issue with the ``FITS_rec`` interface to FITS table data, where a\n ``FITS_rec`` created by copying an existing FITS table but adding new rows\n could not be sliced or masked correctly. [#3641]\n\n- Fixed handling of BINTABLE with TDIMn of size 1. [#3580]\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- Loading a ``TABLE`` element without any ``DATA`` now correctly\n creates a 0-row array. [#3636]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Added workaround to support inverses on compound models when one of the\n sub-models is itself a compound model with a manually-assigned custom\n inverse. [#3542]\n\n- Fixed instantiation of polynomial models with constraints for parameters\n (constraints could still be assigned after instantiation, but not during).\n [#3606]\n\n- Fixed fitting of 2D polynomial models with the ``LeVMarLSQFitter``. [#3606]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Ensure ``QTable`` can be pickled [#3590]\n\n- Some corner cases when instantiating an ``astropy.table.Table``\n with a Numpy array are handled [#3637]. Notably:\n\n- a zero-length array is the same as passing ``None``\n\n- a scalar raises a ``ValueError``\n\n- a one-dimensional array is treated as a single row of a table.\n\n- Ensure a ``Column`` without units is treated as an ``array``, not as an\n dimensionless ``Quantity``. [#3648]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Ensure equivalencies that do more than just scale a ``Quantity`` are\n properly handled also in ``ufunc`` evaluations. 
[#2496, #3586]\n\n- The LaTeX representation of the Angstrom unit has changed from\n ``\\overset{\\circ}{A}`` to ``\\mathring{A}``, which should have\n better support across regular LaTeX, MathJax and matplotlib (as of\n version 1.5) [#3617]\n\nastropy.vo\n^^^^^^^^^^\n\n- Using HTTPS/SSL for communication between SAMP hubs now works\n correctly on all supported versions of Python [#3613]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- When no ``relax`` argument is passed to ``WCS.to_header()`` and\n the result omits non-standard WCS keywords, a warning is\n emitted. [#3652]\n\nOther Changes and Additions\n---------------------------\n\nastropy.vo\n^^^^^^^^^^\n\n- The number of retries for connections in ``astropy.vo.samp`` can now be\n configured by a ``n_retries`` configuration option. [#3612]\n\n- Testing\n\n- Running ``astropy.test()`` from within the IPython prompt has been\n provisionally re-enabled. [#3184]\n\n\n1.0.1 (2015-03-06)\n==================\n\nBug Fixes\n---------\n\nastropy.constants\n^^^^^^^^^^^^^^^^^\n\n- Ensure constants can be turned into ``Quantity`` safely. [#3537, #3538]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Fix a segfault in the fast C parser when one of the column headers\n is empty [#3545].\n\n- Fixed support for reading inf and nan values with the fast reader in\n Windows. Also fixed in the case of using ``use_fast_converter=True``\n with the fast reader. [#3525]\n\n- Fixed use of mmap in the fast reader on Windows. [#3525]\n\n- Fixed issue where commented header would treat comments defining the table\n (i.e. column headers) as purely information comments, leading to problems\n when trying to round-trip the table. [#3562]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Fixed propagation of parameter constraints ('fixed', 'bounds', 'tied')\n between compound models and their components. There is may still be some\n difficulty defining 'tied' constraints properly for use with compound\n models, however. [#3481]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- Restore several properties to the compatibility class ``NDDataArray`` that\n were inadvertently omitted [#3466].\n\nastropy.time\n^^^^^^^^^^^^\n\n- Time objects now always evaluate to ``True``, except when empty. [#3530]\n\nMiscellaneous\n-------------\n\n- The ERFA wrappers are now written directly in the Python/C API\n rather than using Cython, for greater performance. [#3521]\n\n- Improve import time of astropy [#3488].\n\nOther Changes and Additions\n---------------------------\n\n- Updated bundled astropy-helpers version to v1.0.1 to address installation\n issues with some packages that depend on Astropy. [#3541]\n\n\n1.0 (2015-02-18)\n================\n\nGeneral\n-------\n\n- Astropy now requires Numpy 1.6.0 or later.\n\nNew Features\n------------\n\nastropy.analytic_functions\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n- The ``astropy.analytic_functions`` was added to contain analytic functions\n useful for astronomy [#3077].\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- ``astropy.coordinates`` now has a full stack of frames allowing\n transformations from ICRS or other celestial systems down to Alt/Az\n coordinates. [#3217]\n\n- ``astropy.coordinates`` now has a ``get_sun`` function that gives\n the coordinates of the Sun at a specified time. [#3217]\n\n- ``SkyCoord`` now has ``to_pixel`` and ``from_pixel`` methods that convert\n between celestial coordinates as ``SkyCoord`` objects and pixel coordinates\n given an ``astropy.wcs.WCS`` object. 
[#3002]\n\n- ``SkyCoord`` now has ``search_around_sky`` and ``search_around_3d``\n convenience methods that allow searching for all coordinates within\n a certain distance of another ``SkyCoord``. [#2953]\n\n- ``SkyCoord`` can now accept a frame instance for the ``frame=`` keyword\n argument. [#3063]\n\n- ``SkyCoord`` now has a ``guess_from_table`` method that can be used to\n quickly create ``SkyCoord`` objects from an ``astropy.table.Table``\n object. [#2951]\n\n- ``astropy.coordinates`` now has a ``Galactocentric`` frame, a coordinate\n frame centered on a (user specified) center of the Milky Way. [#2761, #3286]\n\n- ``SkyCoord`` now accepts more formats of the coordinate string when the\n representation has ``ra`` and ``dec`` attributes. [#2920]\n\n- ``SkyCoord`` can now accept lists of ``SkyCoord`` objects, frame objects,\n or representation objects and will combine them into a single object.\n [#3285]\n\n- Frames and ``SkyCoord`` instances now have a method ``is_equivalent_frame``\n that can be used to check that two frames are equivalent (ignoring the\n data). [#3330]\n\n- The ``__repr__`` of coordinate objects now shows scalar coordinates in the\n same format as vector coordinates. [#3350, 3448]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- Added ``lookback_distance``, which is ``c * lookback_time``. [#3145]\n\n- Add baryonic matter density and dark matter only density parameters\n to cosmology objects [#2757].\n\n- Add a ``clone`` method to cosmology objects to allow copies\n of cosmological objects to be created with the specified variables\n modified [#2592].\n\n- Increase default numerical precision of ``z_at_value`` following\n the accurate by default, fast by explicit request model [#3074].\n\n- Cosmology functions that take a single (redshift) input now\n broadcast like numpy ufuncs. So, passing an arbitrarily shaped\n array of inputs will produce an output of the same shape. [#3178, #3194]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Simplify the way new Reader classes are defined, allowing custom behavior\n entirely by overriding inherited class attributes instead of setting\n instance attributes in the Reader ``__init__`` method. [#2812]\n\n- There is now a faster C/Cython engine available for reading and writing\n simple ASCII formats like CSV. Both are enabled by default, and fast\n reading will fall back on an ordinary reader in case of a parsing\n failure. Their behavior can be altered with the parameter ``fast_reader``\n in ``read`` and ``fast_writer`` in ``write``. [#2716]\n\n- Make Latex/AASTex tables use unit attribute of Column for output. [#3064]\n\n- Store comment lines encountered during reading in metadata of the\n output table via ``meta['comment_lines']``. [#3222]\n\n- Write comment lines in Table metadata during output for all basic formats,\n IPAC, and fast writers. This functionality can be disabled with\n ``comment=False``. [#3255]\n\n- Add reader / writer for the Enhanced CSV format which stores table and\n column meta data, in particular data type and unit. [#2319]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- The ``fitsdiff`` script ignores some things by default when comparing fits\n files (e.g. empty header lines). This adds a ``--exact`` option where\n nothing is ignored. [#2782, #3110]\n\n- The ``fitsheader`` script now takes a ``--keyword`` option to extract a\n specific keyword from the header of a FITS file, and a ``--table`` option\n to export headers into any of the data formats supported by\n ``astropy.table``. 
[#2555, #2588]\n\n- ``Section`` now supports all advanced indexing features ``ndarray`` does\n (slices with any steps, integer arrays, boolean arrays, None, Ellipsis).\n It also properly returns scalars when this is appropriate. [#3148]\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- ``astropy.io.votable.parse`` now takes a ``datatype_mapping``\n keyword argument to map invalid datatype names to valid ones in\n order to support non-compliant files. [#2675]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Added the capability of creating new \"compound\" models by combining\n existing models using arithmetic operators. See the \"What's New in 1.0\"\n page in the Astropy documentation for more details. [#3231]\n\n- A new ``custom_model`` decorator/factory function has been added for\n converting normal functions to ``Model`` classes that can work within\n the Astropy modeling framework. This replaces the old ``custom_model_1d``\n function which is now deprecated. The new function works the same as\n the old one but is less limited in the types of models it can be used to\n created. [#1763]\n\n- The ``Model`` and ``Fitter`` classes have ``.registry`` attributes which\n provide sets of all loaded ``Model`` and ``Fitter`` classes (this is\n useful for building UIs for models and fitting). [#2725]\n\n- A dict-like ``meta`` member was added to ``Model``. it is to be used to\n store any optional information which is relevant to a project and is not\n in the standard ``Model`` class. [#2189]\n\n- Added ``Ellipse2D`` model. [#3124]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- New array-related utility functions in ``astropy.nddata.utils`` for adding\n and removing arrays from other arrays with different sizes/shapes. [#3201]\n\n- New metaclass ``NDDataBase`` for enforcing the nddata interface in\n subclasses without restricting implementation of the data storage. [#2905]\n\n- New mixin classes ``NDSlicingMixin`` for slicing, ``NDArithmeticMixin``\n for arithmetic operations, and ``NDIOMixin`` for input/ouput in NDData. [#2905]\n\n- Added a decorator ``support_nddata`` that can be used to write functions\n that can either take separate arguments or NDData objects. [#2855]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- Added ``mad_std()`` function. [#3208]\n\n- Added ``gaussian_fwhm_to_sigma`` and ``gaussian_sigma_to_fwhm``\n constants. [#3208]\n\n- New function ``sigma_clipped_stats`` which can be used to quickly get\n common statistics for an array, using sigma clipping at the same time.\n [#3201]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Changed the internal implementation of the ``Table`` class changed so that\n it no longer uses numpy structured arrays as the core table data container.\n [#2790, #3179]\n\n- Tables can now be written to an html file that includes interactive\n browsing capabilities. To write out to this format, use\n ``Table.write('filename.html', format='jsviewer')``. [#2875]\n\n- A ``quantity`` property and ``to`` method were added to ``Table``\n columns that allow the column values to be easily converted to\n ``astropy.units.Quantity`` objects. [#2950]\n\n- Add ``unique`` convenience method to table. [#3185]\n\nastropy.tests\n^^^^^^^^^^^^^\n\n- Added a new Quantity-aware ``assert_quantity_allclose``. [#3273]\n\nastropy.time\n^^^^^^^^^^^^\n\n- ``Time`` can now handle arbitrary array dimensions, with operations\n following standard numpy broadcasting rules. [#3138]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Support for VOUnit has been updated to be compliant with version\n 1.0 of the standard. 
[#2901]\n\n- Added an ``insert`` method to insert values into a ``Quantity`` object.\n This is similar to the ``numpy.insert`` function. [#3049]\n\n- When viewed in IPython, ``Quantity`` objects with array values now render\n using LaTeX and scientific notation. [#2271]\n\n- Added ``units.quantity_input`` decorator to validate quantity inputs to a\n function for unit compatibility. [#3072]\n\n- Added ``units.astronomical_unit`` as a long form for ``units.au``. [#3303]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Added a new decorator ``astropy.utils.wraps`` which acts as a replacement\n for the standard library's ``functools.wraps``, the only difference being\n that the decorated function also preserves the wrapped function's call\n signature. [#2849]\n\n- ``astropy.utils.compat.numpy`` has been revised such that it can include\n patched versions of routines from newer ``numpy`` versions. The first\n addition is a version of ``broadcast_arrays`` that can be used with\n ``Quantity`` and other ``ndarray`` subclasses (using the ``subok=True``\n flag). [#2327]\n\n- Added ``astropy.utils.resolve_name`` which returns a member of a module\n or class given the fully qualified dotted name of that object as a\n string. [#3389]\n\n- Added ``astropy.utils.minversion`` which can be used to check minimum\n version requirements of Python modules (to test for specific features and/\n or bugs and the like). [#3389]\n\nastropy.visualization\n^^^^^^^^^^^^^^^^^^^^^\n\n- Created ``astropy.visualization`` module and added functionality relating\n to image normalization (i.e. stretching and scaling) as well as a new\n script ``fits2bitmap`` that can produce a bitmap image from a FITS file.\n [#3201]\n\n- Added dictionary ``astropy.visualization.mpl_style.astropy_mpl_style``\n which can be used to set a uniform plotstyle specifically for tutorials\n that is improved compared to matplotlib defaults. [#2719, #2787, #3200]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- ``wcslib`` has been upgraded to version 4.25. This brings a\n single new feature:\n\n- ``equinox`` and ``radesys`` will now be given default values\n conforming with the WCS specification if ``EQUINOXa`` and\n ``RADESYSa``, respectively, are not present in the header.\n\n- The minimum required version of ``wcslib`` is now 4.24. [#2503]\n\n- Added a new function ``wcs_to_celestial_frame`` that can be used to find\n the astropy.coordinates celestial frame corresponding to a particular WCS.\n [#2730]\n\n- ``astropy.wcs.WCS.compare`` now supports a ``tolerance`` keyword argument\n to allow for approximate comparison of floating-point values. [#2503]\n\n- added ``pixel_scale_matrix``, ``celestial``, ``is_celestial``, and\n ``has_celestial`` convenience attributes. Added\n ``proj_plane_pixel_scales``, ``proj_plane_pixel_area``, and\n ``non_celestial_pixel_scales`` utility functions for retrieving WCS pixel\n scale and area information [#2832, #3304]\n\n- Added two functions ``pixel_to_skycoord`` and\n ``skycoord_to_pixel`` that make it easy to convert between\n SkyCoord objects and pixel coordinates. [#2885]\n\n- ``all_world2pix`` now uses a much more sophisticated and complete\n algorithm to iteratively compute the inverse WCS transform. [#2816]\n\n- Add ability to use ``WCS`` object to define projections in Matplotlib,\n using the ``WCSAxes`` package. [#3183]\n\n- Added ``is_proj_plane_distorted`` for testing if pixels are\n distorted. [#3329]\n\nMisc\n^^^^\n\n- ``astropy._erfa`` was added as a new subpackage wrapping the functionality\n of the ERFA library in python. 
This is primarily of use for other astropy\n subpackages, but the API may be made more public in the future. [#2992]\n\n\nAPI Changes\n-----------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Subclasses of ``BaseCoordinateFrame`` which define a custom ``repr`` should\n be aware of the format expected in ``SkyCoord.__repr__()``, which changed in\n this release. [#2704, #2882]\n\n- The ``CartesianPoints`` class (deprecated in v0.4) has now been removed.\n [#2990]\n\n- The previous ``astropy.coordinates.builtin_frames`` module is now a\n subpackage. Everything that was in the\n ``astropy.coordinates.builtin_frames`` module is still accessible from the\n new package, but the classes are now in separate modules. This should have\n no direct impact at the user level. [#3120]\n\n- Support for passing a frame as a positional argument in the ``SkyCoord``\n class has now been deprecated, except in the case where a frame with data\n is passed as the sole positional argument. [#3152]\n\n- Improved ``__repr__`` of coordinate objects representing a single\n coordinate point for the sake of easier copy/pasting. [#3350]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- The functional interface to the cosmological routines as well as\n ``set_current`` and ``get_current`` (deprecated in v0.4) have now been\n removed. [#2990]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Added a new argument to ``htmldict`` in the HTML reader named\n ``parser``, which allows the user to specify which parser\n BeautifulSoup should use as a backend. [#2815]\n\n- Add ``FixedWidthTwoLine`` reader to guessing. This will allows to read\n tables that a copied from screen output like ``print my_table`` to be read\n automatically. Discussed in #3025 and #3099 [#3109]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- A new optional argument ``cache`` has been added to\n ``astropy.io.fits.open()``. When opening a FITS file from a URL,\n ``cache`` is a boolean value specifying whether or not to save the\n file locally in Astropy's download cache (``True`` by default). [#3041]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Model classes should now specify ``inputs`` and ``outputs`` class\n attributes instead of the old ``n_inputs`` and ``n_outputs``. These\n should be tuples providing human-readable *labels* for all inputs and\n outputs of the model. The length of the tuple indicates the numbers\n of inputs and outputs. See \"What's New in Astropy 1.0\" for more\n details. [#2835]\n\n- It is no longer necessary to include ``__init__`` or ``__call__``\n definitions in ``Model`` subclasses if all they do is wrap the\n super-method in order to provide a nice call signature to the docs.\n The ``inputs`` class attribute is now used to generate a nice call\n signature, so these methods should only be overridden by ``Model``\n subclasses in order to provide new functionality. [#2835]\n\n- Most models included in Astropy now have sensible default values for most\n or all of their parameters. Call ``help(ModelClass)`` on any model to\n check what those defaults are. Most of them time they should be\n overridden, but some of them are useful (for example spatial offsets are\n always set at the origin by default). Another rule of thumb is that, where\n possible, default parameters are set so that the model is a no-op, or\n close to it, by default. 
[#2932]\n\n- The ``Model.inverse`` method has been changed to a *property*, so that\n now accessing ``model.inverse`` on a model returns a new model that\n implements that model's inverse, and *calling* ``model.inverse(...)``` on\n some independent variable computes the value of the inverse (similar to what\n the old ``Model.invert()`` method was meant to do). [#3024]\n\n- The ``Model.invert()`` method has been removed entirely (it was never\n implemented and there should not be any existing code that relies on it).\n [#3024]\n\n- ``custom_model_1d`` is deprecated in favor of the new ``custom_model``\n (see \"New Features\" above). [#1763]\n\n- The ``Model.param_dim`` property (deprecated in v0.4) has now been removed.\n [#2990]\n\n- The ``Beta1D`` and ``Beta2D`` models have been renamed to ``Moffat1D`` and\n ``Moffat2D``. [#3029]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- ``flags``, ``shape``, ``size``, ``dtype`` and ``ndim`` properties removed\n from ``astropy.nddata.NDData``. [#2905]\n\n- Arithmetic operations, uncertainty propagation, slicing and automatic\n conversion to a numpy array removed from ``astropy.nddata.NDData``. The\n class ``astropy.nddata.NDDataArray`` is functionally equivalent to the\n old ``NDData``. [#2905]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- The ``Column.units`` property (deprecated in v0.3) has now been removed.\n [#2990]\n\n- The ``Row.data`` and ``Table._data`` attributes have been deprecated\n related to the change in Table implementation. They are replaced by\n ``Row.as_void()`` and ``Table.as_array()`` methods, respectively. [#2790]\n\n- The ``Table.create_mask`` method has been removed. This undocumented\n method was a development orphan and would cause corruption of the\n table if called. [#2790]\n\n- The return type for integer item access to a Column (e.g. col[12] or\n t['a'][12]) is now always a numpy scalar, numpy ``ndarray``, or numpy\n ``MaskedArray``. Previously if the column was multidimensional then a\n Column object would be returned. [#3095]\n\n- The representation of Table and Column objects has been changed to\n be formatted similar to the print output. [#3239]\n\nastropy.time\n^^^^^^^^^^^^\n\n- The ``Time.val`` and ``Time.vals`` properties (deprecated in v0.3) and the\n ``Time.lon``, and ``Time.lat`` properties (deprecated in v0.4) have now\n been removed. [#2990]\n\n- Add ``decimalyear`` format that represents time as a decimal year. [#3265]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Support for VOUnit has been updated to be compliant with version\n 1.0 of the standard. This means that some VOUnit strings that were\n rejected before are now acceptable. [#2901] Notably:\n\n- SI prefixes are supported on most units\n\n- Binary prefixes are supported on \"bits\" and \"bytes\"\n\n- Custom units can be defined \"inline\" by placing them between single\n quotes.\n\n- ``Unit.get_converter`` has been deprecated. It is not strictly\n necessary for end users, and it was confusing due to lack of\n support for ``Quantity`` objects. [#3456]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Some members of ``astropy.utils.misc`` were moved into new submodules.\n Specifically:\n\n- ``deprecated``, ``deprecated_attribute``, and ``lazyproperty`` ->\n ``astropy.utils.decorators``\n\n- ``find_current_module``, ``find_mod_objs`` ->\n ``astropy.utils.introspection``\n\n All of these functions can be imported directly from ``astropy.utils``\n which should be preferred over referencing individual submodules of\n ``astropy.utils``. 
[#2857]\n\n- The ProgressBar.iterate class method (deprecated in v0.3) has now been\n removed. [#2990]\n\n- Updated ``astropy/utils/console.py`` ProgressBar() module to\n display output to IPython notebook with the addition of an\n ``interactive`` kwarg. [#2658] [#2789]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- The ``WCS.calcFootprint`` method (deprecated in v0.4) has now been removed.\n [#2990]\n\n- An invalid unit in a ``CUNITn`` keyword now displays a warning and\n returns a ``UnrecognizedUnit`` instance rather than raising an\n exception [#3190]\n\nBug Fixes\n---------\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- ``astropy.convolution.discretize_model`` now handles arbitrary callables\n correctly [#2274].\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- ``Angle.to_string`` now outputs unicode arrays instead of object arrays.\n [#2981]\n\n- ``SkyCoord.to_string`` no longer gives an error when used with an array\n coordinate with more than one dimension. [#3340]\n\n- Fixed support for subclasses of ``UnitSphericalRepresentation`` and\n ``SphericalRepresentation`` [#3354, #3366]\n\n- Fixed latex display of array angles in IPython notebook. [#3480]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- In the ``CommentedHeader`` the ``data_start`` parameter now defaults to\n ``0``, which is the first uncommented line. Discussed in #2692. [#3054]\n\n- Position lines in ``FixedWidthTwoLine`` reader could consist of many characters.\n Now, only one character in addition to the delimiter is allowed. This bug was\n discovered as part of [#3109]\n\n- The IPAC table writer now consistently uses the ``fill_values`` keyword to\n specify the output null values. Previously the behavior was inconsistent\n or incorrect. [#3259]\n\n- The IPAC table reader now correctly interprets abbreviated column types.\n [#3279]\n\n- Tables that look almost, but not quite like DAOPhot tables could cause\n guessing to fail. [#3342]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Fixed the problem in ``fits.open`` of some filenames with colon (``:``) in\n the name being recognized as URLs instead of file names. [#3122]\n\n- Setting ``memmap=True`` in ``fits.open`` and related functions now raises\n a ValueError if opening a file in memory-mapped mode is impossible. [#2298]\n\n- CONTINUE cards no longer end the value of the final card in the series with\n an ampersand, per the specification of the CONTINUE card convention. [#3282]\n\n- Fixed a crash that occurred when reading an ASCII table containing\n zero-precision floating point fields. [#3422]\n\n- When a float field for an ASCII table has zero-precision a decimal point\n (with no digits following it) is still written to the field as long as\n there is space for it, as recommended by the FITS standard. This makes it\n less ambiguous that these columns should be interpreted as floats. [#3422]\n\nastropy.logger\n^^^^^^^^^^^^^^\n\n- Fix a bug that occurred when displaying warnings that produced an error\n message ``dictionary changed size during iteration``. [#3353]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Fixed a bug in ``SLSQPLSQFitter`` where the ``maxiter`` argument was not\n passed correctly to the optimizer. [#3339]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fix a problem where ``table.hstack`` fails to stack multiple references to\n the same table, e.g. ``table.hstack([t, t])``. [#2995]\n\n- Fixed a problem where ``table.vstack`` and ``table.hstack`` failed to stack\n a single table, e.g. ``table.vstack([t])``. [#3313]\n\n- Fix a problem when doing nested iterators on a single table. 
[#3358]\n\n- Fix an error when an empty list, tuple, or ndarray is used for item access\n within a table. This now returns the table with no rows. [#3442]\n\nastropy.time\n^^^^^^^^^^^^\n\n- When creating a Time object from a datetime object the time zone\n info is now correctly used. [#3160]\n\n- For Time objects, it is now checked that numerical input is finite. [#3396]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Added a ``latex_inline`` unit format that returns the units in LaTeX math\n notation with negative exponents instead of fractions [#2622].\n\n- When using a unit that is deprecated in a given unit format,\n non-deprecated alternatives will be suggested. [#2806] For\n example::\n\n >>> import astropy.units as u\n >>> u.Unit('Angstrom', format='fits')\n WARNING: UnitsWarning: The unit 'Angstrom' has been deprecated\n in the FITS standard. Suggested: nm (with data multiplied by\n 0.1). [astropy.units.format.utils]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- ``treat_deprecations_as_exceptions`` has been fixed to recognize Astropy\n deprecation warnings. [#3015]\n\n- Converted representation of progress bar units without suffix\n from float to int in console.human_file_size. [#2201, #2202, #2721, #3299]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- ``astropy.wcs.WCS.sub`` now accepts unicode strings as input on\n Python 2.x [#3356]\n\nMisc\n^^^^\n\n- Some modules and tests that would crash upon import when using a non-final\n release of Numpy (e.g. 1.9.0rc1). [#3471]\n\nOther Changes and Additions\n---------------------------\n\n- The bundled copy of astropy-helpers has been updated to v1.0. [#3515]\n\n- Updated ``astropy.extern.configobj`` to Version 5. Version 5 uses ``six``\n and the same code covers both Python 2 and Python 3. [#3149]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- The ``repr`` of ``SkyCoord`` and coordinate frame classes now separate\n frame attributes and coordinate information. [#2704, #2882]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Overwriting an existing file using the ``clobber=True`` option no longer\n displays a warning message. [#1963]\n\n- ``fits.open`` no longer catches ``OSError`` exceptions on missing or\n unreadable files-- instead it raises the standard Python exceptions in such\n cases. [#2756, #2785]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Sped up setting of ``Column`` slices by an order of magnitude. [#2994, #3020]\n\n- Updated the bundled ``six`` module to version 1.7.3 and made 1.7.3 the\n minimum acceptable version of ``six``. [#2814]\n\n- The version of ERFA included with Astropy is now v1.1.1 [#2971]\n\n- The code base is now fully Python 2 and 3 compatible and no longer requires\n 2to3. [#2033]\n\n- `funcsigs <https://pypi.python.org/pypi/funcsigs>`_ is included in\n utils.compat, but defaults to the inspect module components where available\n (3.3+) [#3151].\n\n- The list of modules displayed in the pytest header can now be customized.\n [#3157]\n\n- `jinja2 <http://jinja.pocoo.org/docs/dev/>`_>=2.7 is now required to build the\n source code from the git repository, in order to allow the ERFA wrappers to\n be generated. [#3166]\n\n\n0.4.5 (2015-02-16)\n==================\n\nBug Fixes\n---------\n\n- Fixed unnecessary attempt to run ``git`` when importing astropy. In\n particular, fixed a crash in Python 3 that could result from this when\n importing Astropy when the the current working directory is an empty git\n repository. [#3475]\n\nOther Changes and Additions\n---------------------------\n\n- Updated bundled copy of astropy-helpers to v0.4.6. 
[#3508]\n\n\n0.4.4 (2015-01-21)\n==================\n\nBug Fixes\n---------\n\nastropy.vo.samp\n^^^^^^^^^^^^^^^\n\n- ``astropy.vo.samp`` is now usable on Python builds that do not\n support the SSLv3 protocol (which depends both on the version of\n Python and the version of OpenSSL or LibreSSL that it is built\n against.) [#3308]\n\nAPI Changes\n-----------\n\nastropy.vo.samp\n^^^^^^^^^^^^^^^\n\n- The default SSL protocol used is now determined from the default\n used in the Python ``ssl`` standard library. This default may be\n different depending on the exact version of Python you are using.\n [#3308]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- WCS allows slices of the form slice(None, x, y), which previously resulted\n in an unsliced copy being returned (note: this was previously incorrectly\n reported as fixed in v0.4.3) [#2909]\n\n\n0.4.3 (2015-01-15)\n==================\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- The ``Distance`` class has been fixed to no longer rely on the deprecated\n cosmology functions. [#2991]\n\n- Ensure ``float32`` values can be used in coordinate representations. [#2983]\n\n- Fix frame attribute inheritance in ``SkyCoord.transform_to()`` method so\n that the default attribute value (e.g. equinox) for the destination frame\n gets used if no corresponding value was explicitly specified. [#3106]\n\n- ``Angle`` accepts hours:mins or deg:mins initializers (without\n seconds). In these cases float minutes are also accepted. [#2843]\n\n- ``astropy.coordinates.SkyCoord`` objects are now copyable. [#2888]\n\n- ``astropy.coordinates.SkyCoord`` object attributes are now\n immutable. It is still technically possible to change the\n internal data for an array-valued coordinate object but this leads\n to inconsistencies [#2889] and should not be done. [#2888]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- The ``ztol`` keyword argument to z_at_value now works correctly [#2993].\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Fix a bug in Python 3 when guessing file format using a file object as\n input. Also improve performance in same situation for Python 2. [#3132]\n\n- Fix a problem where URL was being downloaded for each guess. [#2001]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- The ``in`` operator now works correctly for checking if an extension\n is in an ``HDUList`` (as given via EXTNAME, (EXTNAME, EXTVER) tuples,\n etc.) [#3060]\n\n- Added workaround for bug in MacOS X <= 10.8 that caused np.fromfile to\n fail. [#3078]\n\n- Added support for the ``RICE_ONE`` compression type synonym. [#3115]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Fixed a test failure on Debian/PowerPC and Debian/s390x. [#2708]\n\n- Fixed crash in evaluating models that have more outputs than inputs--this\n case may not be handled as desired for all conceivable models of this\n format (some may have to implement custom ``prepare_inputs`` and\n ``prepare_outputs`` methods). But as long as all outputs can be assumed\n to have a shape determined from the broadcast of all inputs with all\n parameters then this can be used safely. [#3250]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fix a bug that caused join to fail for multi-dimensional columns. [#2984]\n\n- Fix a bug where MaskedColumn attributes which had been changed since\n the object was created were not being carried through when slicing. [#3023]\n\n- Fix a bug that prevented initializing a table from a structured array\n with multi-dimensional columns with copy=True. 
[#3034]\n\n- Fixed unnecessarily large unicode columns when instantiating a table from\n row data on Python 3. [#3052]\n\n- Improved the warning message when unable to aggregate non-numeric\n columns. [#2700]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Operations on quantities with incompatible types now raises a much\n more informative ``TypeError``. [#2934]\n\n- ``Quantity.tolist`` now overrides the ``ndarray`` method to give a\n ``NotImplementedError`` (by renaming the previous ``list`` method). [#3050]\n\n- ``Quantity.round`` now always returns a ``Quantity`` (previously it\n returned an ``ndarray`` for ``decimals>0``). [#3062]\n\n- Ensured ``np.squeeze`` always returns a ``Quantity`` (it only worked if\n no dimensions were removed). [#3045]\n\n- Input to ``Quantity`` with a ``unit`` attribute no longer can get mangled\n with ``copy=False``. [#3051]\n\n- Remove trailing space in ``__format__`` calls for dimensionless quantities.\n [#3097]\n\n- Comparisons between units and non-unit-like objects now works\n correctly. [#3108]\n\n- Units with fractional powers are now correctly multiplied together\n by using rational arithmetic. [#3121]\n\n- Removed a few entries from spectral density equivalencies which did not\n make sense. [#3153]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Fixed an issue with the ``deprecated`` decorator on classes that invoke\n ``super()`` in their ``__init__`` method. [#3004]\n\n- Fixed a bug which caused the ``metadata_conflicts`` parameter to be\n ignored in the ``astropy.utils.metadata.merge`` function. [#3294]\n\nastropy.vo\n^^^^^^^^^^\n\n- Fixed an issue with reconnecting to a SAMP Hub. [#2674]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Invalid or out of range values passed to ``wcs_world2pix`` will\n now be correctly identified and returned as ``nan``\n values. [#2965]\n\n- Fixed an issue which meant that Python thought ``WCS`` objects were\n iterable. [#3066]\n\nMisc\n^^^^\n\n- Astropy will now work if your Python interpreter does not have the\n ``bz2`` module installed. [#3104]\n\n- Fixed ``ResourceWarning`` for ``astropy/extern/bundled/six.py`` that could\n occur sometimes after using Astropy in Python 3.4. [#3156]\n\nOther Changes and Additions\n---------------------------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Improved the agreement of the FK5 <-> Galactic conversion with other\n codes, and with the FK5 <-> FK4 <-> Galactic route. [#3107]\n\n\n0.4.2 (2014-09-23)\n==================\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- ``Angle`` accepts hours:mins or deg:mins initializers (without\n seconds). In these cases float minutes are also accepted.\n\n- The ``repr`` for coordinate frames now displays the frame attributes\n (ex: ra, dec) in a consistent order. It should be noted that as part of\n this fix, the ``BaseCoordinateFrame.get_frame_attr_names()`` method now\n returns an ``OrderedDict`` instead of just a ``dict``. [#2845]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Fixed a crash when reading scaled float data out of a FITS file that was\n loaded from a string (using ``HDUList.fromfile``) rather than from a file.\n [#2710]\n\n- Fixed a crash when reading data from an HDU whose header contained in\n invalid value for the BLANK keyword (e.g., a string value instead of an\n integer as required by the FITS Standard). Invalid BLANK keywords are now\n warned about, but are otherwise ignored. 
[#2711]\n\n- Fixed a crash when reading the header of a tile-compressed HDU if that\n header contained invalid duplicate keywords resulting in a ``KeyError``\n [#2750]\n\n- Fixed crash when reading gzip-compressed FITS tables through the Astropy\n ``Table`` interface. [#2783]\n\n- Fixed corruption when writing new FITS files through to gzipped files.\n [#2794]\n\n- Fixed crash when writing HDUs made with non-contiguous data arrays to\n file-like objects. [#2794]\n\n- It is now possible to create ``astropy.io.fits.BinTableHDU``\n objects with a table with zero rows. [#2916]\n\nastropy.io.misc\n^^^^^^^^^^^^^^^\n\n- Fixed a bug that prevented h5py ``Dataset`` objects from being\n automatically recognized by ``Table.read``. [#2831]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Make ``LevMarLSQFitter`` work with ``weights`` keyword. [#2900]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fixed reference cycle in tables that could prevent ``Table`` objects\n from being freed from memory. [#2879]\n\n- Fixed an issue where ``Table.pprint()`` did not print the header to\n ``stdout`` when ``stdout`` is redirected (say, to a file). [#2878]\n\n- Fixed printing of masked values when a format is specified. [#1026]\n\n- Ensured that numpy ufuncs that return booleans return plain ``ndarray``\n instances, just like the comparison operators. [#2963]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Ensure bigendian input to Time works on a little-endian machine\n (and vice versa). [#2942]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Ensure unit is kept when adding 0 to quantities. [#2968]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Fixed color printing on Windows with IPython 2.0. [#2878]\n\nastropy.vo\n^^^^^^^^^^\n\n- Improved error message on Cone Search time out. [#2687]\n\nOther Changes and Additions\n---------------------------\n\n- Fixed a couple issues with files being inappropriately included and/or\n excluded from the source archive distributions of Astropy. [#2843, #2854]\n\n- As part of fixing the fact that masked elements of table columns could not be\n printed when a format was specified, the column format string options were\n expanded to allow simple specifiers such as ``'5.2f'``. [#2898]\n\n- Ensure numpy 1.9 is supported. [#2917]\n\n- Ensure numpy master is supported, by making ``np.cbrt`` work with quantities.\n [#2937]\n\n0.4.1 (2014-08-08)\n==================\n\nBug Fixes\n---------\n\nastropy.config\n^^^^^^^^^^^^^^\n\n- Fixed a bug where an unedited configuration file from astropy\n 0.3.2 would not be correctly identified as unedited. [#2772] This\n resulted in the warning::\n\n WARNING: ConfigurationChangedWarning: The configuration options\n in astropy 0.4 may have changed, your configuration file was not\n updated in order to preserve local changes. A new configuration\n template has been saved to\n '~/.astropy/config/astropy.0.4.cfg'. [astropy.config.configuration]\n\n- Fixed the error message that is displayed when an old\n configuration item has moved. Before, the destination\n section was wrong. [#2772]\n\n- Added configuration settings for ``io.fits``, ``io.votable`` and\n ``table.jsviewer`` that were missing from the configuration file\n template. [#2772]\n\n- The configuration template is no longer rewritten on every import\n of astropy, causing race conditions. [#2805]\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- Fixed the multiplication of ``Kernel`` with numpy floats. [#2174]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- ``Distance`` can now take a list of quantities. 
[#2261]\n\n- For in-place operations for ``Angle`` instances in which the result unit\n is not an angle, an exception is raised before the instance is corrupted.\n [#2718]\n\n- ``CartesianPoints`` are now deprecated in favor of\n ``CartesianRepresentation``. [#2727]\n\nastropy.io.misc\n^^^^^^^^^^^^^^^\n\n- An existing table within an HDF5 file can be overwritten without affecting\n other datasets in the same HDF5 file by simultaneously using\n ``overwrite=True`` and ``append=True`` arguments to the ``Table.write``\n method. [#2624]\n\nastropy.logger\n^^^^^^^^^^^^^^\n\n- Fixed a crash that could occur in rare cases when (such as in bundled\n apps) where submodules of the ``email`` package are not importable. [#2671]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- ``astropy.nddata.NDData()`` no longer raises a ``ValueError`` when passed\n a numpy masked array which has no masked entries. [#2784]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- When saving a table to a FITS file containing a unit that is not\n supported by the FITS standard, a warning rather than an exception\n is raised. [#2797]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- By default, ``Quantity`` and its subclasses will now convert to float also\n numerical types such as ``decimal.Decimal``, which are stored as objects\n by numpy. [#1419]\n\n- The units ``count``, ``pixel``, ``voxel`` and ``dbyte`` now output\n to FITS, OGIP and VOUnit formats correctly. [#2798]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Restored missing information from deprecation warning messages\n from the ``deprecated`` decorator. [#2811]\n\n- Fixed support for ``staticmethod`` deprecation in the ``deprecated``\n decorator. [#2811]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Fixed a memory leak when ``astropy.wcs.WCS`` objects are copied\n [#2754]\n\n- Fixed a crash when passing ``ra_dec_order=True`` to any of the\n ``*2world`` methods. [#2791]\n\nOther Changes and Additions\n---------------------------\n\n- Bundled copy of astropy-helpers upgraded to v0.4.1. [#2825]\n\n- General improvements to documentation and docstrings [#2722, #2728, #2742]\n\n- Made it easier for third-party packagers to have Astropy use their own\n version of the ``six`` module (so long as it meets the minimum version\n requirement) and remove the copy bundled with Astropy. See the\n astropy/extern/README file in the source tree. [#2623]\n\n\n0.4 (2014-07-16)\n================\n\nNew Features\n------------\n\nastropy.constants\n^^^^^^^^^^^^^^^^^\n\n- Added ``b_wien`` to represent Wien wavelength displacement law constant.\n [#2194]\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- Changed the input parameter in ``Gaussian1DKernel`` and\n ``Gaussian2DKernel`` from ``width`` to ``stddev`` [#2085].\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- The coordinates package has undergone major changes to implement\n `APE5 <https://github.com/astropy/astropy-APEs/blob/master/APE5.rst>`_ .\n These include backwards-incompatible changes, as the underlying framework\n has changed substantially. See the APE5 text and the package documentation\n for more details. [#2422]\n\n- A ``position_angle`` method has been added to the new ``SkyCoord``. [#2487]\n\n- Updated ``Angle.dms`` and ``Angle.hms`` to return ``namedtuple`` -s instead\n of regular tuples, and added ``Angle.signed_dms`` attribute that gives the\n absolute value of the ``d``, ``m``, and ``s`` along with the sign. [#1988]\n\n- By default, ``Distance`` objects are now required to be positive. 
To\n allow negative values, set ``allow_negative=True`` in the ``Distance``\n constructor when creating a ``Distance`` instance.\n\n- ``Longitude`` (resp. ``Latitude``) objects cannot be used any more to\n initialize or set ``Latitude`` (resp. ``Longitude``) objects. An explicit\n conversion to ``Angle`` is now required. [#2461]\n\n- The deprecated functions for pre-0.3 coordinate object names like\n ``ICRSCoordinates`` have been removed. [#2422]\n\n- The ``rotation_matrix`` and ``angle_axis`` functions in\n ``astropy.coordinates.angles`` were made more numerically consistent and\n are now tested explicitly [#2619]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- Added ``z_at_value`` function to find the redshift at which a cosmology\n function matches a desired value. [#1909]\n\n- Added ``FLRW.differential_comoving_volume`` method to give the differential\n comoving volume at redshift z. [#2103]\n\n- The functional interface is now deprecated in favor of the more-explicit\n use of methods on cosmology objects. [#2343]\n\n- Updated documentation to reflect the removal of the functional\n interface. [#2507]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- The ``astropy.io.ascii`` output formats ``latex`` and ``aastex`` accept a\n dictionary called ``latex_dict`` to specify options for LaTeX output. It is\n now possible to specify the table alignment within the text via the\n ``tablealign`` keyword. [#1838]\n\n- If ``header_start`` is specified in a call to ``ascii.get_reader`` or any\n method that calls ``get_reader`` (e.g. ``ascii.read``) but ``data_start``\n is not specified at the same time, then ``data_start`` is calculated so\n that the data starts after the header. Before this, the default was\n that the header line was read again as the first data line\n [#855 and #1844].\n\n- A new ``csv`` format was added as a convenience for handling CSV (comma-\n separated values) data. [#1935]\n This format also recognises rows with an inconsistent number of elements.\n [#1562]\n\n- An option was added to guess the start of data for CDS format files when\n they do not strictly conform to the format standard. [#2241]\n\n- Added an HTML reader and writer to the ``astropy.io.ascii`` package.\n Parsing requires the installation of BeautifulSoup and is therefore\n an optional feature. [#2160]\n\n- Added support for inputting column descriptions and column units\n with the ``io.ascii.SExtractor`` reader. [#2372]\n\n- Allow the use of non-local ReadMe files in the CDS reader. [#2329]\n\n- Provide a mechanism to select how masked values are printed. [#2424]\n\n- Added support for reading multi-aperture daophot file. [#2656]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Included a new command-line script called ``fitsheader`` to display the\n header(s) of a FITS file from the command line. [#2092]\n\n- Added new verification options ``fix+ignore``, ``fix+warn``,\n ``fix+exception``, ``silentfix+ignore``, ``silentfix+warn``, and\n ``silentfix+exception`` which give more control over how to report fixable\n errors as opposed to unfixable errors.\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Prototype implementation of fitters that treat optimization algorithms\n separately from fit statistics, allowing new fitters to be created by\n mixing and matching optimizers and statistic functions. [#1914]\n\n- Slight overhaul to how inputs to and outputs from models are handled with\n respect to array-valued parameters and variables, as well as sets of\n multiple models. 
See the associated PR and the modeling section of the\n v0.4 documentation for more details. [#2634]\n\n- Added a new ``SimplexLSQFitter`` which uses a downhill simplex optimizer\n with a least squares statistic. [#1914]\n\n- Changed ``Gaussian2D`` model such that ``theta`` now increases\n counterclockwise. [#2199]\n\n- Replaced the ``MatrixRotation2D`` model with a new model called simply\n ``Rotation2D`` which requires only an angle to specify the rotation.\n The new ``Rotation2D`` rotates in a counter-clockwise sense whereas\n the old ``MatrixRotation2D`` increased the angle clockwise.\n [#2266, #2269]\n\n- Added a new ``AffineTransformation2D`` model which serves as a\n replacement for the capability of ``MatrixRotation2D`` to accept an\n arbitrary matrix, while also adding a translation capability. [#2269]\n\n- Added ``GaussianAbsorption1D`` model. [#2215]\n\n- New ``Redshift`` model [#2176].\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- Allow initialization ``NDData`` or ``StdDevUncertainty`` with a\n ``Quantity``. [#2380]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- Added flat prior to binom_conf_interval and binned_binom_proportion\n\n- Change default in ``sigma_clip`` from ``np.median`` to ``np.ma.median``.\n [#2582]\n\nastropy.sphinx\n^^^^^^^^^^^^^^\n\n- Note, the following new features are included in astropy-helpers as well:\n\n- The ``automodapi`` and ``automodsumm`` extensions now include sphinx\n configuration options to write out what ``automodapi`` and ``automodsumm``\n generate, mainly for debugging purposes. [#1975, #2022]\n\n- Reference documentation now shows functions/class docstrings at the\n inteded user-facing API location rather than the actual file where\n the implementation is found. [#1826]\n\n- The ``automodsumm`` extension configuration was changed to generate\n documentation of class ``__call__`` member functions. [#1817, #2135]\n\n- ``automodapi`` and ``automodsumm`` now have an ``:allowed-package-names:``\n option that make it possible to document functions and classes that\n are in a different namespace. [#2370]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Improved grouped table aggregation by using the numpy ``reduceat()`` method\n when possible. This can speed up the operation by a factor of at least 10\n to 100 for large unmasked tables and columns with relatively small\n group sizes. [#2625]\n\n- Allow row-oriented data input using a new ``rows`` keyword argument.\n [#850]\n\n- Allow subclassing of ``Table`` and the component classes ``Row``, ``Column``,\n ``MaskedColumn``, ``TableColumns``, and ``TableFormatter``. [#2287]\n\n- Fix to allow numpy integer types as valid indices into tables in\n Python 3.x [#2477]\n\n- Remove transition code related to the order change in ``Column`` and\n ``MaskedColumn`` arguments ``name`` and ``data`` from Astropy 0.2\n to 0.3. [#2511]\n\n- Change HTML table representation in IPython notebook to show all\n table columns instead of restricting to 80 column width. [#2651]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Mean and apparent sidereal time can now be calculated using the\n ``sidereal_time`` method [#1418].\n\n- The time scale now defaults to UTC if no scale is provided. [#2091]\n\n- ``TimeDelta`` objects can have all scales but UTC, as well as, for\n consistency with time-like quantities, undefined scale (where the\n scale is taken from the object one adds to or subtracts from).\n This allows, e.g., to work consistently in TDB. [#1932]\n\n- ``Time`` now supports ISO format strings that end in \"Z\". 
[#2211, #2203]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Support for the unit format `Office of Guest Investigator Programs (OGIP)\n FITS files\n <https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/general/ogip_93_001/>`__\n has been added. [#377]\n\n- The ``spectral`` equivalency can now handle angular wave number. [#1306 and\n #1899]\n\n- Added ``one`` as a shorthand for ``dimensionless_unscaled``. [#1980]\n\n- Added ``dex`` and ``dB`` units. [#1628]\n\n- Added ``temperature()`` equivalencies to support conversion between\n Kelvin, Celsius, and Fahrenheit. [#2209]\n\n- Added ``temperature_energy()`` equivalencies to support conversion\n between electron-volt and Kelvin. [#2637]\n\n- The runtime of ``astropy.units.Unit.compose`` is greatly improved\n (by a factor of 2 in most cases) [#2544]\n\n- Added ``electron`` unit. [#2599]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- ``timer.RunTimePredictor`` now uses ``astropy.modeling`` in its\n ``do_fit()`` method. [#1896]\n\nastropy.vo\n^^^^^^^^^^\n\n- A new sub-package, ``astropy.vo.samp``, is now available (this was\n previously the SAMPy package, which has been refactored for use in\n Astropy). [#1907]\n\n- Enhanced functionalities for ``VOSCatalog`` and ``VOSDatabase``. [#1206]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- astropy now requires wcslib version 4.23. The version of wcslib\n included with astropy has been updated to version 4.23.\n\n- Bounds checking is now performed on native spherical\n coordinates. Any out-of-bounds values will be returned as\n ``NaN``, and marked in the ``stat`` array, if using the\n low-level ``wcslib`` interface such as\n ``astropy.wcs.Wcsprm.p2s``. [#2107]\n\n- A new method, ``astropy.wcs.WCS.compare()``, compares two wcsprm\n structs for equality with varying degrees of strictness. [#2361]\n\n- New ``astropy.wcs.utils`` module, with a handful of tools for manipulating\n WCS objects, including dropping, swapping, and adding axes.\n\nMisc\n^^^^\n\n- Includes the new astropy-helpers package which separates some of Astropy's\n build, installation, and documentation infrastructure out into an\n independent package, making it easier for Affiliated Packages to depend on\n these features. astropy-helpers replaces/deprecates some of the submodules\n in the ``astropy`` package (see API Changes below). See also\n `APE 4 <https://github.com/astropy/astropy-APEs/blob/master/APE4.rst>`_\n for more details on the motivation behind and implementation of\n astropy-helpers. [#1563]\n\n\nAPI Changes\n-----------\n\nastropy.config\n^^^^^^^^^^^^^^\n\n- The configuration system received a major overhaul, as part of APE3. It is\n no longer possible to save configuration items from Python, but instead\n users must edit the configuration file directly. The locations of\n configuration items have moved, and some have been changed to science state\n values. The old locations should continue to work until astropy 0.5, but\n deprecation warnings will be displayed. See the `Configuration transition\n <http://docs.astropy.org/en/v0.4/config/config_0_4_transition.html>`_\n docs for a detailed description of the changes and how to update existing\n code. [#2094]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- The ``astropy.io.fits.new_table`` function is now fully deprecated (though\n will not be removed for a long time, considering how widely it is used).\n\n Instead please use the more explicit ``BinTableHDU.from_columns`` to create\n a new binary table HDU, and the similar ``TableHDU.from_columns`` to create\n a new ASCII table. 
These otherwise accept the same arguments as\n ``new_table`` which is now just a wrapper for these.\n\n- The ``.fromstring`` classmethod of each HDU type has been simplified such\n that, true to its namesake, it only initializes an HDU from a string\n containing its header *and* data.\n\n- Fixed an issue where header wildcard matching (for example\n ``header['DATE*']``) can be used to match *any* characters that might\n appear in a keyword. Previously this only matched keywords containing\n characters in the set ``[0-9A-Za-z_]``. Now this can also match a hyphen\n ``-`` and any other characters, as some conventions like ``HIERARCH`` and\n record-valued keyword cards allow a wider range of valid characters than\n standard FITS keywords.\n\n- This will be the *last* release to support the following APIs that have\n been marked deprecated since Astropy v0.1/PyFITS v3.1:\n\n- The ``CardList`` class, which was part of the old header implementation.\n\n- The ``Card.key`` attribute. Use ``Card.keyword`` instead.\n\n- The ``Card.cardimage`` and ``Card.ascardimage`` attributes. Use simply\n ``Card.image`` or ``str(card)`` instead.\n\n- The ``create_card`` factory function. Simply use the normal ``Card``\n constructor instead.\n\n- The ``create_card_from_string`` factory function. Use ``Card.fromstring``\n instead.\n\n- The ``upper_key`` function. Use ``Card.normalize_keyword`` method\n instead (this is not unlikely to be used outside of PyFITS itself, but it\n was technically public API).\n\n- The usage of ``Header.update`` with ``Header.update(keyword, value,\n comment)`` arguments. ``Header.update`` should only be used analogously\n to ``dict.update``. Use ``Header.set`` instead.\n\n- The ``Header.ascard`` attribute. Use ``Header.cards`` instead for a list\n of all the ``Card`` objects in the header.\n\n- The ``Header.rename_key`` method. Use ``Header.rename_keyword`` instead.\n\n- The ``Header.get_history`` method. Use ``header['HISTORY']`` instead\n (normal keyword lookup).\n\n- The ``Header.get_comment`` method. Use ``header['COMMENT']`` instead.\n\n- The ``Header.toTxtFile`` method. Use ``header.totextfile`` instead.\n\n- The ``Header.fromTxtFile`` method. Use ``Header.fromtextfile`` instead.\n\n- The ``tdump`` and ``tcreate`` functions. Use ``tabledump`` and\n ``tableload`` respectively.\n\n- The ``BinTableHDU.tdump`` and ``tcreate`` methods. Use\n ``BinTableHDU.dump`` and ``BinTableHDU.load`` respectively.\n\n- The ``txtfile`` argument to the ``Header`` constructor. Use\n ``Header.fromfile`` instead.\n\n- The ``startColumn`` and ``endColumn`` arguments to the ``FITS_record``\n constructor. These are unlikely to be used by any user code.\n\n These deprecated interfaces will be removed from the development version of\n Astropy following the v0.4 release (they will still be available in any\n v0.4.x bugfix releases, however).\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- The method computing the derivative of the model with respect\n to parameters was renamed from ``deriv`` to ``fit_deriv``. [#1739]\n\n- ``ParametricModel`` and the associated ``Parametric1DModel`` and\n ``Parametric2DModel`` classes have been renamed ``FittableModel``,\n ``Fittable1DModel``, and ``Fittable2DModel`` respectively. The base\n ``Model`` class has subsumed the functionality of the old\n\n ``ParametricModel`` class so that all models support parameter constraints.\n The only distinction of ``FittableModel`` is that anything which subclasses\n it is assumed \"safe\" to use with Astropy fitters. 
[#2276]\n\n- ``NonLinearLSQFitter`` has been renamed ``LevMarLSQFitter`` to emphasise\n that it uses the Levenberg-Marquardt optimization algorithm with a\n least squares statistic function. [#1914]\n\n- The ``SLSQPFitter`` class has been renamed ``SLSQPLSQFitter`` to emphasize\n that it uses the Sequential Least Squares Programming optimization\n algorithm with a least squares statistic function. [#1914]\n\n- The ``Fitter.errorfunc`` method has been renamed to the more general\n ``Fitter.objective_function``. [#1914]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- Issue warning if unit is changed from a non-trivial value by directly\n setting ``NDData.unit``. [#2411]\n\n- The ``mask`` and ``flag`` attributes of ``astropy.nddata.NDData`` can now\n be set with any array-like object instead of requiring that they be set\n with a ``numpy.ndarray``. [#2419]\n\nastropy.sphinx\n^^^^^^^^^^^^^^\n\n- Use of the ``astropy.sphinx`` module is deprecated; all new development of\n this module is in ``astropy_helpers.sphinx`` which should be used instead\n (therefore documentation builds that made use of any of the utilities in\n ``astropy.sphinx`` now have ``astropy_helpers`` as a documentation\n dependency).\n\nastropy.table\n^^^^^^^^^^^^^\n\n- The default table printing function now shows a table header row for units\n if any columns have the unit attribute set. [#1282]\n\n- Before, an unmasked ``Table`` was automatically converted to a masked\n table if generated from a masked Table or a ``MaskedColumn``.\n Now, this conversion is only done if explicitly requested or if any\n of the input values is actually masked. [#1185]\n\n- The repr() function of ``astropy.table.Table`` now shows the units\n if any columns have the unit attribute set. [#2180]\n\n- The semantics of the config options ``table.max_lines`` and\n ``table.max_width`` has changed slightly. If these values are not\n set in the config file, astropy will try to determine the size\n automatically from the terminal. [#2683]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Correct use of UT in TDB calculation [#1938, #1939].\n\n- ``TimeDelta`` objects can have scales other than TAI [#1932].\n\n- Location information should now be passed on via an ``EarthLocation``\n instance or anything that initialises it, e.g., a tuple containing\n either geocentric or geodetic coordinates. [#1928]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- ``Quantity`` now converts input to float by default, as this is physically\n most sensible for nearly all units [#1776].\n\n- ``Quantity`` comparisons with ``==`` or ``!=`` now always return ``True``\n or ``False``, even if units do not match (for which case a ``UnitsError``\n used to be raised). [#2328]\n\n- Applying ``float`` or ``int`` to a ``Quantity`` now works for all\n dimensionless quantities; they are automatically converted to unscaled\n dimensionless. [#2249]\n\n- The exception ``astropy.units.UnitException``, which was\n deprecated in astropy 0.2, has been removed. Use\n ``astropy.units.UnitError`` instead [#2386]\n\n- Initializing a ``Quantity`` with a valid number/array with a ``unit``\n attribute now interprets that attribute as the units of the input value.\n This makes it possible to initialize a ``Quantity`` from an Astropy\n ``Table`` column and have it correctly pick up the units from the column.\n [#2486]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- ``calcFootprint`` was deprecated. It is replaced by\n ``calc_footprint``. An optional boolean keyword ``center`` was\n added to ``calc_footprint``. 
It controls whether the centers or\n the corners of the pixels are used in the computation. [#2384]\n\n- ``astropy.wcs.WCS.sip_pix2foc`` and\n ``astropy.wcs.WCS.sip_foc2pix`` formerly did not conform to the\n ``SIP`` standard: ``CRPIX`` was added to the ``foc`` result so\n that it could be used as input to \"core FITS WCS\". As of astropy\n 0.4, ``CRPIX`` is no longer added to the result, so the ``foc``\n space is correct as defined in the `SIP convention\n <http://adsabs.harvard.edu/abs/2005ASPC..347..491S>`__. [#2360]\n\n- ``astropy.wcs.UnitConverter``, which was deprecated in astropy\n 0.2, has been removed. Use the ``astropy.units`` module\n instead. [#2386]\n\n- The following methods on ``astropy.wcs.WCS``, which were\n deprecated in astropy 0.1, have been removed [#2386]:\n\n- ``all_pix2sky`` -> ``all_pix2world``\n\n- ``wcs_pix2sky`` -> ``wcs_pix2world``\n\n- ``wcs_sky2pix`` -> ``wcs_world2pix``\n\n- The ``naxis1`` and ``naxis2`` attributes and the ``get_naxis``\n method of ``astropy.wcs.WCS``, which were deprecated in astropy\n 0.2, have been removed. Use the shape of the underlying FITS data\n array instead. [#2386]\n\nMisc\n^^^^\n\n- The ``astropy.setup_helpers`` and ``astropy.version_helpers`` modules are\n deprecated; any non-critical fixes and development to those modules should\n be in ``astropy_helpers`` instead. Packages that use these modules in\n their ``setup.py`` should depend on ``astropy_helpers`` following the same\n pattern as in the Astropy package template.\n\n\nBug Fixes\n---------\n\nastropy.constants\n^^^^^^^^^^^^^^^^^\n\n- ``astropy.constants.Contant`` objects can now be deep\n copied. [#2601]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- The distance modulus function in ``astropy.cosmology`` can now handle\n negative distances, which can occur in certain closed cosmologies. [#2008]\n\n- Removed accidental imports of some extraneous variables in\n ``astropy.cosmology`` [#2025]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- ``astropy.io.ascii.read`` would fail to read lists of strings where some of\n the strings consisted of just a newline (\"\\n\"). [#2648]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Use NaN for missing values in FITS when using Table.write for float\n columns. Earlier the default fill value was close to 1e20.[#2186]\n\n- Fixes for checksums on 32-bit platforms. Results may be different\n if writing or checking checksums in \"nonstandard\" mode. [#2484]\n\n- Additional minor bug fixes ported from PyFITS. [#2575]\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- It is now possible to save an ``astropy.table.Table`` object as a\n VOTable with any of the supported data formats, ``tabledata``,\n ``binary`` and ``binary2``, by using the ``tabledata_format``\n kwarg. [#2138]\n\n- Fixed a crash writing out variable length arrays. [#2577]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- Indexing ``NDData`` in a way that results in a single element returns that\n element. [#2170]\n\n- Change construction of result of arithmetic and unit conversion to allow\n subclasses to require the presence of attribute like unit. [#2300]\n\n- Scale uncertainties to correct units in arithmetic operations and unit\n conversion. [#2393]\n\n- Ensure uncertainty and mask members are copied in arithmetic and\n convert_unit_to. [#2394]\n\n- Mask result of arithmetic if either of the operands is masked. [#2403]\n\n- Copy all attributes of input object if ``astropy.nddata.NDData`` is\n initialized with an ``NDData`` object. [#2406]\n\n- Copy ``flags`` to new object in ``convert_unit_to``. 
[#2409]\n\n- Result of ``NDData`` arithmetic makes a copy of any WCS instead of using\n a reference. [#2410]\n\n- Fix unit handling for multiplication/division and use\n ``astropy.units.Quantity`` for units arithmetic. [#2413]\n\n- A masked ``NDData`` is now converted to a masked array when used in an\n operation or ufunc with a numpy array. [#2414]\n\n- An unmasked ``NDData`` now uses an internal representation of its mask\n state that ``numpy.ma`` expects so that an ``NDData`` behaves as an\n unmasked array. [#2417]\n\nastropy.sphinx\n^^^^^^^^^^^^^^\n\n- Fix crash in smart resolver when the resolution doesn't work. [#2591]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- The ``astropy.table.Column`` object can now use both functions and callable\n objects as formats. [#2313]\n\n- Fixed a problem on 64 bit windows that caused errors\n \"expected 'DTYPE_t' but got 'long long'\" [#2490]\n\n- Fix initialisation of ``TableColumns`` with lists or tuples. [#2647]\n\n- Fix removal of single column using ``remove_columns``. [#2699]\n\n- Fix a problem that setting a row element within a masked table did not\n update the corresponding table element. [#2734]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Correct UT1->UTC->UT1 round-trip being off by 1 second if UT1 is\n on a leap second. [#2077]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- ``Quantity.copy`` now behaves identically to ``ndarray.copy``, and thus\n supports the ``order`` argument (for numpy >=1.6). [#2284]\n\n- Composing base units into identical composite units now works. [#2382]\n\n- Creating and composing/decomposing units is now substantially faster [#2544]\n\n- ``Quantity`` objects now are able to be assigned NaN [#2695]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Astropy now requires wcslib version 4.23. The version of wcslib\n included with astropy has been updated to version 4.23.\n\n- Bug fixes in the projection routines: in ``hpxx2s`` [the\n cartesian-to-spherical operation of the ``HPX`` projection]\n relating to bounds checking, bug introduced at wcslib 4.20; in\n ``parx2s`` and molx2s`` [the cartesion-to-spherical operation of\n the ``PAR`` and ``MOL`` projections respectively] relating to\n setting the stat vector; in ``hpxx2s`` relating to implementation\n of the vector API; and in ``xphx2s`` relating to setting an\n out-of-bounds value of *phi*.\n\n- In the ``PCO`` projection, use alternative projection equations\n for greater numerical precision near theta == 0. In the ``COP``\n projection, return an exact result for theta at the poles.\n Relaxed the tolerance for bounds checking a little in ``SFL``\n projection.\n\n- Fix a bug allocating insufficient memory in\n ``astropy.wcs.WCS.sub`` [#2468]\n\n- A new method, ``Wcsprm.bounds_check`` (corresponding to wcslib's\n ``wcsbchk``) has been added to control what bounds checking is performed by\n wcslib.\n\n- ``WCS.to_header`` will now raise a more meaningful exception when the WCS\n information is invalid or inconsistent in some way. [#1854]\n\n- In ``WCS.to_header``, ``RESTFRQ`` and ``RESTWAV`` are no longer\n rewritten if zero. [#2468]\n\n- In ``WCS.to_header``, floating point values will now always be written\n with an exponent or fractional part, i.e. ``.0`` being appended if necessary\n to acheive this. 
[#2468]\n\n- If the C extension for ``astropy.wcs`` was not built or fails to import for\n any reason, ``import astropy.wcs`` will result in an ``ImportError``,\n rather than getting obscure errors once the ``astropy.wcs`` is used.\n [#2061]\n\n- When the C extension for ``astropy.wcs`` is built using a version of\n ``wscslib`` already present in the system, the package does not try\n to install ``wcslib`` headers under ``astropy/wcs/include``. [#2536]\n\n- Fixes an unresolved external symbol error in the\n ``astropy.wcs._wcs`` C extension on Microsoft Windows when built\n with a Microsoft compiler. [#2478]\n\nMisc\n^^^^\n\n- Running the test suite with ``python setup.py test`` now works if\n the path to the source contains spaces. [#2488]\n\n- The version of ERFA included with Astropy is now v1.1.0 [#2497]\n\n- Removed deprecated option from travis configuration and force use of\n wheels rather than allowing build from source. [#2576]\n\n- The short option ``-n`` to run tests in parallel was broken\n (conflicts with the distutils built-in option of \"dry-run\").\n Changed to ``-j``. [#2566]\n\nOther Changes and Additions\n---------------------------\n\n- python setup.py test --coverage will now give more accurate\n results, because the coverage analysis will include early imports of\n astropy. There doesn't seem to be a way to get this to work when\n doing ``import astropy; astropy.test()``, so the ``coverage``\n keyword to ``astropy.test`` has been removed. Coverage testing now\n depends only on `coverage.py\n <http://coverage.readthedocs.io/en/latest/>`__, not\n ``pytest-cov``. [#2112]\n\n- The included version of py.test has been upgraded to 2.5.1. [#1970]\n\n- The included version of six.py has been upgraded to 1.5.2. [#2006]\n\n- Where appropriate, tests are now run both with and without the\n ``unicode_literals`` option to ensure that we support both cases. [#1962]\n\n- Running the Astropy test suite from within the IPython REPL is disabled for\n now due to bad interaction between the test runner and IPython's logging\n and I/O handler. For now, run the Astropy tests should be run in the basic\n Python interpreter. [#2684]\n\n- Added support for numerical comparison of floating point values appearing in\n the output of doctests using a ``+FLOAT_CMP`` doctest flag. [#2087]\n\n- A monkey patch is performed to fix a bug in Numpy version 1.7 and\n earlier where unicode fill values on masked arrays are not\n supported. This may cause unintended side effects if your\n application also monkey patches ``numpy.ma`` or relies on the broken\n behavior. If unicode support of masked arrays is important to your\n application, upgrade to Numpy 1.8 or later for best results. [#2059]\n\n- The developer documentation has been extensively rearranged and\n rewritten. [#1712]\n\n- The ``human_time`` function in ``astropy.utils`` now returns strings\n without zero padding. [#2420]\n\n- The ``bdist_dmg`` command for ``setup.py`` has now been removed. 
[#2553]\n\n- Many broken API links have been fixed in the documentation, and the\n ``nitpick`` Sphinx option is now used to avoid broken links in future.\n [#1221, #2019, #2109, #2161, #2162, #2192, #2200, #2296, #2448, #2456,\n #2460, #2467, #2476, #2508, #2509]\n\n\n0.3.2 (2014-05-13)\n==================\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- if ``sep`` argument is specified to be a single character in\n ``sexagisimal_to_string``, it now includes seperators only between\n items [#2183]\n\n- Ensure comparisons involving ``Distance`` objects do not raise exceptions;\n also ensure operations that lead to units other than length return\n ``Quantity``. [#2206, #2250]\n\n- Multiplication and division of ``Angle`` objects is now\n supported. [#2273]\n\n- Fixed ``Angle.to_string`` functionality so that negative angles have the\n correct amount of padding when ``pad=True``. [#2337]\n\n- Mixing strings and quantities in the ``Angle`` constructor now\n works. For example: ``Angle(['1d', 1. * u.d])``. [#2398]\n\n- If ``Longitude`` is given a ``Longitude`` as input, use its ``wrap_angle``\n by default [#2705]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- Fixed ``format()`` compatibility with Python 2.6. [#2129]\n\n- Be more careful about converting to floating point internally [#1815, #1818]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- The CDS reader in ``astropy.io.ascii`` can now handle multiple\n description lines in ReadMe files. [#2225]\n\n- When reading a table with values that generate an overflow error during\n type conversion (e.g. overflowing the native C long type), fall through to\n using string. Previously this generated an exception [#2234].\n\n- Recognize any string with one to four dashes as null value. [#1335]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Allow pickling of ``FITS_rec`` objects. [#1597]\n\n- Improved behavior when writing large compressed images on OSX by removing\n an unnecessary check for platform architecture. [#2345]\n\n- Fixed an issue where Astropy ``Table`` objects containing boolean columns\n were not correctly written out to FITS files. [#1953]\n\n- Several other bug fixes ported from PyFITS v3.2.3 [#2368]\n\n- Fixed a crash on Python 2.x when writing a FITS file directly to a\n ``StringIO.StringIO`` object. [#2463]\n\nastropy.io.registry\n^^^^^^^^^^^^^^^^^^^\n\n- Allow readers/writers with the same name to be attached to different\n classes. [#2312]\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- By default, floating point values are now written out using\n ``repr`` rather than ``str`` to preserve precision [#2137]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Fixed the ``SIP`` and ``InverseSIP`` models both so that they work in the\n first place, and so that they return results consistent with the SIP\n functions in ``astropy.wcs``. [#2177]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- Ensure the ``axis`` keyword in ``astropy.stats.funcs`` can now be used for\n all axes. [#2173]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Ensure nameless columns can be printed, using 'None' for the header. [#2213]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Fixed pickling of ``Time`` objects. [#2123]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- ``Quantity._repr_latex_()`` returns ``NotImplementedError`` for quantity\n arrays instead of an uninformative formatting exception. [#2258]\n\n- Ensure ``Quantity.flat`` always returns ``Quantity``. 
[#2251]\n\n- Angstrom unit renders better in MathJax [#2286]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Progress bars will now be displayed inside the IPython\n qtconsole. [#2230]\n\n- ``data.download_file()`` now evaluates ``REMOTE_TIMEOUT()`` at runtime\n rather than import time. Previously, setting ``REMOTE_TIMEOUT`` after\n import had no effect on the function's behavior. [#2302]\n\n- Progressbar will be limited to 100% so that the bar does not exceed the\n terminal width. The numerical display can still exceed 100%, however.\n\nastropy.vo\n^^^^^^^^^^\n\n- Fixed ``format()`` compatibility with Python 2.6. [#2129]\n\n- Cone Search validation no longer raises ``ConeSearchError`` for positive RA.\n [#2240, #2242]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Fixed a bug where calling ``astropy.wcs.Wcsprm.sub`` with\n ``WCSSUB_CELESTIAL`` may cause memory corruption due to\n underallocation of a temporary buffer. [#2350]\n\n- Fixed a memory allocation bug in ``astropy.wcs.Wcsprm.sub`` and\n ``astropy.wcs.Wcsprm.copy``. [#2439]\n\nMisc\n^^^^\n\n- Fixes for compatibility with Python 3.4. [#1945]\n\n- ``import astropy; astropy.test()`` now correctly uses the same test\n configuration as ``python setup.py test`` [#1811]\n\n\n0.3.1 (2014-03-04)\n==================\n\nBug Fixes\n---------\n\nastropy.config\n^^^^^^^^^^^^^^\n\n- Fixed a bug where ``ConfigurationItem.set_temp()`` does not reset to\n default value when exception is raised within ``with`` block. [#2117]\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- Fixed a bug where ``_truncation`` was left undefined for ``CustomKernel``.\n [#2016]\n\n- Fixed a bug with ``_normalization`` when ``CustomKernel`` input array\n sums to zero. [#2016]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Fixed a bug where using ``==`` on two array coordinates wouldn't\n work. [#1832]\n\n- Fixed bug which caused ``len()`` not to work for coordinate objects and\n added a ``.shape`` property to get appropriately array-like behavior.\n [#1761, #2014]\n\n- Fixed a bug where sexagesimal notation would sometimes include\n exponential notation in the last field. [#1908, #1913]\n\n- ``CompositeStaticMatrixTransform`` no longer attempts to reference the\n undefined variable ``self.matrix`` during instantiation. [#1944]\n\n- Fixed pickling of ``Longitude``, ensuring ``wrap_angle`` is preserved\n [#1961]\n\n- Allow ``sep`` argument in ``Angle.to_string`` to be empty (resulting in no\n separators) [#1989]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Allow passing unicode delimiters when reading or writing tables. The\n delimiter must be convertible to pure ASCII. [#1949]\n\n- Fix a problem when reading a table and renaming the columns to names that\n already exist. [#1991]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Ported all bug fixes from PyFITS 3.2.1. See the PyFITS changelog at\n http://pyfits.readthedocs.io/en/v3.2.1/ [#2056]\n\nastropy.io.misc\n^^^^^^^^^^^^^^^\n\n- Fixed issues in the HDF5 Table reader/writer functions that occurred on\n Windows. [#2099]\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- The ``write_null_values`` kwarg to ``VOTable.to_xml``, when set to `False`\n (the default) would produce non-standard VOTable files. Therefore, this\n functionality has been replaced by a better understanding that knows which\n fields in a VOTable may be left empty (only ``char``, ``float`` and\n ``double`` in VOTable 1.1 and 1.2, and all fields in VOTable 1.3). 
The\n kwarg is still accepted but it will be ignored, and a warning is emitted.\n [#1809]\n\n- Printing out a ``astropy.io.votable.tree.Table`` object using `repr` or\n `str` now uses the pretty formatting in ``astropy.table``, so it's possible\n to easily preview the contents of a ``VOTable``. [#1766]\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Fixed bug in computation of model derivatives in ``LinearLSQFitter``.\n [#1903]\n\n- Raise a ``NotImplementedError`` when fitting composite models. [#1915]\n\n- Fixed bug in the computation of the ``Gaussian2D`` model. [#2038]\n\n- Fixed bug in the computation of the ``AiryDisk2D`` model. [#2093]\n\nastropy.sphinx\n^^^^^^^^^^^^^^\n\n- Added slightly more useful debug info for AstropyAutosummary. [#2024]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- The column string representation for n-dimensional cells with only\n one element has been fixed. [#1522]\n\n- Fix a problem that caused ``MaskedColumn.__getitem__`` to not preserve\n column metadata. [#1471, #1872]\n\n- With Numpy prior to version 1.6.2, tables with Unicode columns now\n sort correctly. [#1867]\n\n- ``astropy.table`` can now print out tables with Unicode columns containing\n non-ascii characters. [#1864]\n\n- Columns can now be named with Unicode strings, as long as they contain only\n ascii characters. This makes using ``astropy.table`` easier on Python 2\n when ``from __future__ import unicode_literals`` is used. [#1864]\n\n- Allow pickling of ``Table``, ``Column``, and ``MaskedColumn`` objects. [#792]\n\n- Fix a problem where it was not possible to rename columns after sorting or\n adding a row. [#2039]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Fix a problem where scale conversion problem in TimeFromEpoch\n was not showing a useful error [#2046]\n\n- Fix a problem when converting to one of the formats ``unix``, ``cxcsec``,\n ``gps`` or ``plot_date`` when the time scale is ``UT1``, ``TDB`` or ``TCB``\n [#1732]\n\n- Ensure that ``delta_ut1_utc`` gets calculated when accessed directly,\n instead of failing and giving a rather obscure error message [#1925]\n\n- Fix a bug when computing the TDB to TT offset. The transform routine was\n using meters instead of kilometers for the Earth vector. [#1929]\n\n- Increase ``__array_priority__`` so that ``TimeDelta`` can convert itself\n to a ``Quantity`` also in reverse operations [#1940]\n\n- Correct hop list from TCG to TDB to ensure that conversion is\n possible [#2074]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- ``Quantity`` initialisation rewritten for speed [#1775]\n\n- Fixed minor string formatting issue for dimensionless quantities. [#1772]\n\n- Fix error for inplace operations on non-contiguous quantities [#1834].\n\n- The definition of the unit ``bar`` has been corrected to \"1e5\n Pascal\" from \"100 Pascal\" [#1910]\n\n- For units that are close to known units, but not quite, for\n example due to differences in case, the exception will now include\n recommendations. [#1870]\n\n- The generic and FITS unit parsers now accept multiple slashes in\n the unit string. There are multiple ways to interpret them, but\n the approach taken here is to convert \"m/s/kg\" to \"m s-1 kg-1\".\n Multiple slashes are accepted, but discouraged, by the FITS\n standard, due to the ambiguity of parsing, so a warning is raised\n when it is encountered. [#1911]\n\n- The use of \"angstrom\" (with a lower case \"a\") is now accepted in FITS unit\n strings, since it is in common usage. 
However, since it is not officially\n part of the FITS standard, a warning will be issued when it is encountered.\n [#1911]\n\n- Pickling unrecognized units will not raise a ``AttributeError``. [#2047]\n\n- ``astropy.units`` now correctly preserves the precision of\n fractional powers. [#2070]\n\n- If a ``Unit`` or ``Quantity`` is raised to a floating point power\n that is very close to a rational number with a denominator less\n than or equal to 10, it is converted to a ``Fraction`` object to\n preserve its precision through complex unit conversion operations.\n [#2070]\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Fixed crash in ``timer.RunTimePredictor.do_fit``. [#1905]\n\n- Fixed ``astropy.utils.compat.argparse`` for Python 3.1. [#2017]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- ``astropy.wcs.WCS``, ``astropy.wcs.WCS.fix`` and\n ``astropy.wcs.find_all_wcs`` now have a ``translate_units`` keyword\n argument that is passed down to ``astropy.wcs.Wcsprm.fix``. This can be\n used to specify any unsafe translations of units from rarely used ones to\n more commonly used ones.\n\n Although ``\"S\"`` is commonly used to represent seconds, its translation to\n ``\"s\"`` is potentially unsafe since the standard recognizes ``\"S\"``\n formally as Siemens, however rarely that may be used. The same applies to\n ``\"H\"`` for hours (Henry), and ``\"D\"`` for days (Debye).\n\n When these sorts of changes are performed, a warning is emitted.\n [#1854]\n\n- When a unit is \"fixed\" by ``astropy.wcs.WCS.fix`` or\n ``astropy.wcs.Wcsprm.unitfix``, it now correctly reports the ``CUNIT``\n field that was changed. [#1854]\n\n- ``astropy.wcs.Wcs.printwcs`` will no longer warn that ``cdelt`` is being\n ignored when none was present in the FITS file. [#1845]\n\n- ``astropy.wcs.Wcsprm.set`` is called from within the ``astropy.wcs.WCS``\n constructor, therefore any invalid information in the keywords will be\n raised from the constructor, rather than on a subsequent call to a\n transformation method. [#1918]\n\n- Fix a memory corruption bug when using ``astropy.wcs.Wcs.sub`` with\n ``astropy.wcs.WCSSUB_CELESTIAL``. [#1960]\n\n- Fixed the ``AttributeError`` exception that was raised when using\n ``astropy.wcs.WCS.footprint_to_file``. [#1912]\n\n- Fixed a ``NameError`` exception that was raised when using\n ``astropy.wcs.validate`` or the ``wcslint`` script. [#2053]\n\n- Fixed a bug where named WCSes may be erroneously reported as ``' '`` when\n using ``astropy.wcs.validate`` or the ``wcslint`` script. [#2053]\n\n- Fixed a bug where error messages about incorrect header keywords\n may not be propagated correctly, resulting in a \"NULL error object\n in wcslib\" message. [#2106]\n\nMisc\n^^^^\n\n- There are a number of improvements to make Astropy work better on big\n endian platforms, such as MIPS, PPC, s390x and SPARC. [#1849]\n\n- The test suite will now raise exceptions when a deprecated feature of\n Python or Numpy is used. [#1948]\n\nOther Changes and Additions\n---------------------------\n\n- A new function, ``astropy.wcs.get_include``, has been added to get the\n location of the ``astropy.wcs`` C header files. [#1755]\n\n- The doctests in the ``.rst`` files in the ``docs`` folder are now\n tested along with the other unit tests. This is in addition to the\n testing of doctests in docstrings that was already being performed.\n See ``docs/development/testguide.rst`` for more information. [#1771]\n\n- Fix a problem where import fails on Python 3 if setup.py exists\n in current directory. 
[#1877]\n\n\n0.3 (2013-11-20)\n================\n\nNew Features\n------------\n\n- General\n\n- A top-level configuration item, ``unicode_output`` has been added to\n control whether the Unicode string representation of certain\n objects will contain Unicode characters. For example, when\n ``use_unicode`` is `False` (default)::\n\n >>> from astropy import units as u\n >>> print(unicode(u.degree))\n deg\n\n When ``use_unicode`` is `True`::\n\n >>> from astropy import units as u\n >>> print(unicode(u.degree))\n °\n\n See `handling-unicode\n <http://docs.astropy.org/en/v0.3/development/codeguide.html#unicode-guidelines>`_\n for more information. [#1441]\n\n- ``astropy.utils.misc.find_api_page`` is now imported into the top-level.\n This allows usage like ``astropy.find_api_page(astropy.units.Quantity)``.\n [#1779]\n\nastropy.convolution\n^^^^^^^^^^^^^^^^^^^\n\n- New class-based system for generating kernels, replacing ``make_kernel``.\n [#1255] The ``astropy.nddata.convolution`` sub-package has now been moved\n to ``astropy.convolution``. [#1451]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Two classes ``astropy.coordinates.Longitude`` and\n ``astropy.coordinates.Latitude`` have been added. These are derived from\n the new ``Angle`` class and used for all longitude-like (RA, azimuth,\n galactic L) and latitude-like coordinates (Dec, elevation, galactic B)\n respectively. The ``Longitude`` class provides auto-wrapping capability\n and ``Latitude`` performs bounds checking.\n\n- ``astropy.coordinates.Distance`` supports conversion to and from distance\n modulii. [#1472]\n\n- ``astropy.coordinates.SphericalCoordinateBase`` and derived classes now\n support arrays of coordinates, enabling large speed-ups for some operations\n on multiple coordinates at the same time. These coordinates can also be\n indexed using standard slicing or any Numpy-compatible indexing. [#1535,\n #1615]\n\n- Array coordinates can be matched to other array coordinates, finding the\n closest matches between the two sets of coordinates (see the\n ``astropy.coordinates.matching.match_coordinates_3d`` and\n ``astropy.coordinates.matching.match_coordinates_sky`` functions). [#1535]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- Added support for including massive Neutrinos in the cosmology classes. The\n Planck (2013) cosmology has been updated to use this. [#1364]\n\n- Calculations now use and return ``Quantity`` objects where appropriate.\n [#1237]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Added support for writing IPAC format tables [#1152].\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Added initial support for table columns containing pseudo-unsigned\n integers. This is currently enabled by using the ``uint=True`` option when\n opening files; any table columns with the correct BZERO value will be\n interpreted and returned as arrays of unsigned integers. [#906]\n\n- Upgraded vendored copy of CFITSIO to v3.35, though backwards compatibility\n back to version v3.28 is maintained.\n\n- Added support for reading and writing tables using the Q format for columns.\n The Q format is identical to the P format (variable-length arrays) except\n that it uses 64-bit integers for the data descriptors, allowing more than\n 4 GB of variable-length array data in a single table.\n\n- Some refactoring of the table and ``FITS_rec`` modules in order to better\n separate the details of the FITS binary and ASCII table data structures from\n the HDU data structures that encapsulate them. 
Most of these changes should\n not be apparent to users (but see API Changes below).\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- Updated to support the VOTable 1.3 draft. [#433]\n\n- Added the ability to look up and group elements by their utype attribute.\n [#622]\n\n- The format of the units of a VOTable file can be specified using the\n ``unit_format`` parameter. Note that units are still always written out\n using the CDS format, to ensure compatibility with the standard.\n\nastropy.modeling\n^^^^^^^^^^^^^^^^\n\n- Added a new framework for representing and evaluating mathematical models\n and for fitting data to models. See \"What's New in Astropy 0.3\" in the\n documentation for further details. [#493]\n\nastropy.stats\n^^^^^^^^^^^^^\n\n- Added robust statistics functions\n ``astropy.stats.funcs.median_absolute_deviation``,\n ``astropy.stats.funcs.biweight_location``, and\n ``astropy.stats.funcs.biweight_midvariance``. [#621]\n\n- Added ``astropy.stats.funcs.signal_to_noise_oir_ccd`` for computing the\n signal to noise ratio for source being observed in the optical/IR using a\n CCD. [#870]\n\n- Add ``axis=int`` option to ``stropy.stats.funcs.sigma_clip`` to allow\n clipping along a given axis for multidimensional data. [#1083]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- New columns can be added to a table via assignment to a non-existing\n column by name. [#726]\n\n- Added ``join`` function to perform a database-like join on two tables. This\n includes support for inner, left, right, and outer joins as well as\n metadata merging. [#903]\n\n- Added ``hstack`` and ``vstack`` functions to stack two or more tables.\n [#937]\n\n- Tables now have a ``.copy`` method and include support for ``copy`` and\n ``deepcopy``. [#1208]\n\n- Added support for selecting and manipulating groups within a table with\n a database style ``group_by`` method. [#1424]\n\n- Table ``read`` and ``write`` functions now include rudimentary support\n reading and writing of FITS tables via the unified reading/writing\n interface. [#591]\n\n- The ``units`` and ``dtypes`` attributes and keyword arguments in Column,\n MaskedColumn, Row, and Table are now deprecated in favor of the\n single-tense ``unit`` and ``dtype``. [#1174]\n\n- Setting a column from a Quantity now correctly sets the unit on the Column\n object. [#732]\n\n- Add ``remove_row`` and ``remove_rows`` to remove table rows. [#1230]\n\n- Added a new ``Table.show_in_browser`` method that opens a web browser\n and displays the table rendered as HTML. [#1342]\n\n- New tables can now be instantiated using a single row from an existing\n table. [#1417]\n\nastropy.time\n^^^^^^^^^^^^\n\n- New ``Time`` objects can be instantiated from existing ``Time`` objects\n (but with different format, scale, etc.) [#889]\n\n- Added a ``Time.now`` classmethod that returns the current UTC time,\n similarly to Python's ``datetime.now``. [#1061]\n\n- Update internal time manipulations so that arithmetic with Time and\n TimeDelta objects maintains sub-nanosecond precision over a time span\n longer than the age of the universe. [#1189]\n\n- Use ``astropy.utils.iers`` to provide ``delta_ut1_utc``, so that\n automatic calculation of UT1 becomes possible. [#1145]\n\n- Add ``datetime`` format which allows converting to and from standard\n library ``datetime.datetime`` objects. [#860]\n\n- Add ``plot_date`` format which allows converting to and from the date\n representation used when plotting dates with matplotlib via the\n ``matplotlib.pyplot.plot_date`` function. 
[#860]\n\n- Add ``gps`` format (seconds since 1980-01-01 00:00:00 UTC,\n including leap seconds) [#1164]\n\n- Add array indexing to Time objects [#1132]\n\n- Allow for arithmetic of multi-element and single-element Time and TimeDelta\n objects. [#1081]\n\n- Allow multiplication and division of TimeDelta objects by\n constants and arrays, as well as changing sign (negation) and\n taking the absolute value of TimeDelta objects. [#1082]\n\n- Allow comparisons of Time and TimeDelta objects. [#1171]\n\n- Support interaction of Time and Quantity objects that represent a time\n interval. [#1431]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Added parallax equivalency for length-angle. [#985]\n\n- Added mass-energy equivalency. [#1333]\n\n- Added a new-style format method which will use format specifiers\n (like ``0.03f``) in new-style format strings for the Quantity's value.\n Specifiers which can't be applied to the value will fall back to the\n entire string representation of the quantity. [#1383]\n\n- Added support for complex number values in quantities. [#1384]\n\n- Added new spectroscopic equivalencies for velocity conversions\n (relativistic, optical, and radio conventions are supported) [#1200]\n\n- The ``spectral`` equivalency now also handles wave number.\n\n- The ``spectral_density`` equivalency now also accepts a Quantity for the\n frequency or wavelength. It also handles additional flux units.\n\n- Added Brightness Temperature (antenna gain) equivalency for conversion\n between :math:`T_B` and flux density. [#1327]\n\n- Added percent unit, and allowed any string containing just a number to be\n interpreted as a scaled dimensionless unit. [#1409]\n\n- New-style format strings can be used to set the unit output format. For\n example, ``\"{0:latex}\".format(u.km)`` will print with the latex formatter.\n [#1462]\n\n- The ``Unit.is_equivalent`` method can now take a tuple. In this case, the\n method returns ``True`` if the unit is equivalent to any of the units\n listed in the tuple. [#1521]\n\n- ``def_unit`` can now take a 2-tuple of names of the form (short, long),\n where each entry is a list. This allows for handling strange units that\n might have multiple short names. [#1543]\n\n- Added ``dimensionless_angles`` equivalency, which allows conversion of any\n power of radian to dimensionless. [#1161]\n\n- Added the ability to enable set of units, or equivalencies that are used by\n default. Also provided context managers for these cases. [#1268]\n\n- Imperial units are disabled by default. [#1593, #1662]\n\n- Added an ``astropy.units.add_enabled_units`` context manager, which allows\n creating a temporary context with additional units temporarily enabled in\n the global units namespace. [#1662]\n\n- ``Unit`` instances now have ``.si`` and ``.cgs`` properties a la\n ``Quantity``. These serve as shortcuts for ``Unit.to_system(cgs)[0]``\n etc. [#1610]\n\nastropy.vo\n^^^^^^^^^^\n\n- New package added to support Virtual Observatory Simple Cone Search query\n and service validation. [#552]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Fixed attribute error in ``astropy.wcs.Wcsprm`` (lattype->lattyp) [#1463]\n\n- Included a new command-line script called ``wcslint`` and accompanying API\n for validating the WCS in a given FITS file or header. 
[#580]\n\n- Upgraded included version of WCSLIB to 4.19.\n\nastropy.utils\n^^^^^^^^^^^^^\n\n- Added a new set of utilities in ``astropy.utils.timer`` for analyzing the\n runtime of functions and making runtime predections for larger inputs.\n [#743]\n\n- ``ProgressBar`` and ``Spinner`` classes can now be used directly to return\n generator expressions. [#771]\n\n- Added ``astropy.utils.iers`` which allows reading in of IERS A or IERS B\n bulletins and interpolation in UT1-UTC.\n\n- Added a function ``astropy.utils.find_api_page``--given a class or object\n from the ``astropy`` package, this will open that class's API documentation\n in a web browser. [#663]\n\n- Data download functions such as ``download_file`` now accept a\n ``show_progress`` argument to suppress console output, and a ``timeout``\n argument. [#865, #1258]\n\nastropy.extern.six\n^^^^^^^^^^^^^^^^^^\n\n- Added `six <https://pypi.python.org/pypi/six/>`_ for python2/python3\n compatibility\n\n- Astropy now uses the ERFA library instead of the IAU SOFA library for\n fundamental time transformation routines. The ERFA library is derived, with\n permission, from the IAU SOFA library but is distributed under a BSD license.\n See ``license/ERFA.rst`` for details. [#1293]\n\nastropy.logger\n^^^^^^^^^^^^^^\n\n- The Astropy logger now no longer catches exceptions by default, and also\n only captures warnings emitted by Astropy itself (prior to this change,\n following an import of Astropy, any warning got re-directed through the\n Astropy logger). Logging to the Astropy log file has also been disabled by\n default. However, users of Astropy 0.2 will likely still see the previous\n behavior with Astropy 0.3 for exceptions and logging to file since the\n default configuration file installed by 0.2 set the exception logging to be\n on by default. To get the new behavior, set the ``log_exceptions`` and\n ``log_to_file`` configuration items to ``False`` in the ``astropy.cfg``\n file. [#1331]\n\nAPI Changes\n-----------\n\n- General\n\n- The configuration option ``utils.console.use_unicode`` has been\n moved to the top level and renamed to ``unicode_output``. It now\n not only affects console widgets, such as progress bars, but also\n controls whether calling `unicode` on certain classes will return a\n string containing unicode characters.\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- The ``astropy.coordinates.Angle`` class is now a subclass of\n ``astropy.units.Quantity``. This means it has all of the methods of a\n `numpy.ndarray`. [#1006]\n\n- The ``astropy.coordinates.Distance`` class is now a subclass of\n ``astropy.units.Quantity``. This means it has all of the methods of a\n `numpy.ndarray`. [#1472]\n\n- All angular units are now supported, not just ``radian``, ``degree`` and\n ``hour``, but now ``arcsecond`` and ``arcminute`` as well. The object\n will retain its native unit, so when printing out a value initially\n provided in hours, its ``to_string()`` will, by default, also be\n expressed in hours.\n\n- The ``Angle`` class now supports arrays of angles.\n\n- To be consistent with ``units.Unit``, ``Angle.format`` has been\n deprecated and renamed to ``Angle.to_string``.\n\n- To be consistent with ``astropy.units``, all plural forms of unit names\n have been removed. 
Therefore, the following properties of\n ``astropy.coordinates.Angle`` should be renamed:\n\n- ``radians`` -> ``radian``\n\n- ``degrees`` -> ``degree``\n\n- ``hours`` -> ``hour``\n\n- Multiplication and division of two ``Angle`` objects used to raise\n ``NotImplementedError``. Now they raise ``TypeError``.\n\n- The ``astropy.coordinates.Angle`` class no longer has a ``bounds``\n attribute so there is no bounds-checking or auto-wrapping at this level.\n This allows ``Angle`` objects to be used in arbitrary arithmetic\n expressions (e.g. coordinate distance computation).\n\n- The ``astropy.coordinates.RA`` and ``astropy.coordinates.Dec`` classes have\n been removed and replaced with ``astropy.coordinates.Longitude`` and\n ``astropy.coordinates.Latitude`` respectively. These are now used for the\n components of Galactic and Horizontal (Alt-Az) coordinates as well instead\n of plain ``Angle`` objects.\n\n- ``astropy.coordinates.angles.rotation_matrix`` and\n ``astropy.coordinates.angles.angle_axis`` now take a ``unit`` kwarg instead\n of ``degrees`` kwarg to specify the units of the angles.\n ``rotation_matrix`` will also take the unit from the given ``Angle`` object\n if no unit is provided.\n\n- The ``AngularSeparation`` class has been removed. The output of the\n coordinates ``separation()`` method is now an\n ``astropy.coordinates.Angle``. [#1007]\n\n- The coordinate classes have been renamed in a way that remove the\n ``Coordinates`` at the end of the class names. E.g., ``ICRSCoordinates``\n from previous versions is now called ``ICRS``. [#1614]\n\n- ``HorizontalCoordinates`` are now named ``AltAz``, to reflect more common\n terminology.\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- The Planck (2013) cosmology will likely give slightly different (and more\n accurate) results due to the inclusion of Neutrino masses. [#1364]\n\n- Cosmology class properties now return ``Quantity`` objects instead of\n simple floating-point values. [#1237]\n\n- The names of cosmology instances are now truly optional, and are set to\n ``None`` rather than the name of the class if the user does not provide\n them. [#1705]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- In the ``read`` method of ``astropy.io.ascii``, empty column values in an\n ASCII table are now treated as missing values instead of the previous\n treatment as a zero-length string \"\". This now corresponds to the behavior\n of other table readers like ``numpy.genfromtxt``. To restore the previous\n behavior set ``fill_values=None`` in the call to ``ascii.read()``. [#919]\n\n- The ``read`` and ``write`` methods of ``astropy.io.ascii`` now have a\n ``format`` argument for specifying the file format. This is the preferred\n way to choose the format instead of the ``Reader`` and ``Writer``\n arguments. [#961]\n\n- The ``include_names`` and ``exclude_names`` arguments were removed from\n the ``BaseHeader`` initializer, and now instead handled by the reader and\n writer classes directly. [#1350]\n\n- Allow numeric and otherwise unusual column names when reading a table\n where the ``format`` argument is specified, but other format details such\n as the delimiter or quote character are being guessed. [#1692]\n\n- When reading an ASCII table using the ``Table.read()`` method, the default\n has changed from ``guess=False`` to ``guess=True`` to allow auto-detection\n of file format. 
This matches the default behavior of ``ascii.read()``.\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- The ``astropy.io.fits.new_table`` function is marked \"pending deprecation\".\n This does not mean it will be removed outright or that its functionality\n has changed. It will likely be replaced in the future for a function with\n similar, if not subtly different functionality. A better, if not slightly\n more verbose approach is to use ``pyfits.FITS_rec.from_columns`` to create\n a new ``FITS_rec`` table--this has the same interface as\n ``pyfits.new_table``. The difference is that it returns a plan\n ``FITS_rec`` array, and not an HDU instance. This ``FITS_rec`` object can\n then be used as the data argument in the constructors for ``BinTableHDU``\n (for binary tables) or ``TableHDU`` (for ASCII tables). This is analogous\n to creating an ``ImageHDU`` by passing in an image array.\n ``pyfits.FITS_rec.from_columns`` is just a simpler way of creating a\n FITS-compatible recarray from a FITS column specification.\n\n- The ``updateHeader``, ``updateHeaderData``, and ``updateCompressedData``\n methods of the ``CompDataHDU`` class are pending deprecation and moved to\n internal methods. The operation of these methods depended too much on\n internal state to be used safely by users; instead they are invoked\n automatically in the appropriate places when reading/writing compressed\n image HDUs.\n\n- The ``CompDataHDU.compData`` attribute is pending deprecation in favor of\n the clearer and more PEP-8 compatible ``CompDataHDU.compressed_data``.\n\n- The constructor for ``CompDataHDU`` has been changed to accept new keyword\n arguments. The new keyword arguments are essentially the same, but are in\n underscore_separated format rather than camelCase format. The old\n arguments are still pending deprecation.\n\n- The internal attributes of HDU classes ``_hdrLoc``, ``_datLoc``, and\n ``_datSpan`` have been replaced with ``_header_offset``, ``_data_offset``,\n and ``_data_size`` respectively. The old attribute names are still pending\n deprecation. This should only be of interest to advanced users who have\n created their own HDU subclasses.\n\n- The following previously deprecated functions and methods have been removed\n entirely: ``createCard``, ``createCardFromString``, ``upperKey``,\n ``ColDefs.data``, ``setExtensionNameCaseSensitive``, ``_File.getfile``,\n ``_TableBaseHDU.get_coldefs``, ``Header.has_key``, ``Header.ascardlist``.\n\n- Interfaces that were pending deprecation are now fully deprecated. These\n include: ``create_card``, ``create_card_from_string``, ``upper_key``,\n ``Header.get_history``, and ``Header.get_comment``.\n\n- The ``.name`` attribute on HDUs is now directly tied to the HDU's header, so\n that if ``.header['EXTNAME']`` changes so does ``.name`` and vice-versa.\n\nastropy.io.registry\n^^^^^^^^^^^^^^^^^^^\n\n- Identifier functions for reading/writing Table and NDData objects should\n now accept ``(origin, *args, **kwargs)`` instead of ``(origin, args,\n kwargs)``. [#591]\n\n- Added a new ``astropy.io.registry.get_formats`` function for listing\n registered I/O formats and details about the their readers/writers. [#1669]\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- Added a new option ``use_names_over_ids`` option to use when converting\n from VOTable objects to Astropy Tables. This can prevent a situation where\n column names are not preserved when converting from a VOTable. 
[#609]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- The ``astropy.nddata.convolution`` sub-package has now been moved to\n ``astropy.convolution``, and the ``make_kernel`` function has been removed.\n (the kernel classes should be used instead) [#1451]\n\nastropy.stats.funcs\n^^^^^^^^^^^^^^^^^^^\n\n- For ``sigma_clip``, the ``maout`` optional parameter has been removed, and\n the function now always returns a masked array. A new boolean parameter\n ``copy`` can be used to indicated whether the input data should be copied\n (``copy=True``, default) or used by reference (``copy=False``) in the\n output masked array. [#1083]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- The first argument to the ``Column`` and ``MaskedColumn`` classes is now\n the data array--the ``name`` argument has been changed to an optional\n keyword argument. [#840]\n\n- Added support for instantiating a ``Table`` from a list of dict, each one\n representing a single row with the keys mapping to column names. [#901]\n\n- The plural 'units' and 'dtypes' have been switched to 'unit' and 'dtype'\n where appropriate. The original attributes are still present in this\n version as deprecated attributes, but will be removed in the next version.\n [#1174]\n\n- The ``copy`` methods of ``Column`` and ``MaskedColumn`` were changed so\n that the first argument is now ``order='C'``. This is required for\n compatibility with Numpy 1.8 which is currently in development. [#1250]\n\n- Comparing a column (with == or !=) to a scalar, an array, or another column\n now always returns a boolean Numpy array (which is a masked array if either\n of the arguments in the comparison was masked). This is in contrast to the\n previous behavior, which in some cases returned a boolean Numpy array, and\n in some cases returned a boolean Column object. [#1446]\n\nastropy.time\n^^^^^^^^^^^^\n\n- For consistency with ``Quantity``, the attributes ``val`` and\n ``is_scalar`` have been renamed to ``value`` and ``isscalar``,\n respectively, and the attribute ``vals`` has been dropped. [#767]\n\n- The double-float64 internal representation of time is used more\n efficiently to enable better accuracy. [#366]\n\n- Format and scale arguments are now allowed to be case-insensitive. [#1128]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- The ``Quantity`` class now inherits from the Numpy array class, and\n includes the following API changes [#929]:\n\n- Using ``float(...)``, ``int(...)``, and ``long(...)`` on a quantity will\n now only work if the quantity is dimensionless and unscaled.\n\n- All Numpy ufuncs should now treat units correctly (or raise an exception\n if not supported), rather than extract the value of quantities and\n operate on this, emitting a warning about the implicit loss of units.\n\n- When using relevant Numpy ufuncs on dimensionless quantities (e.g.\n ``np.exp(h * nu / (k_B * T))``), or combining dimensionless quantities\n with Python scalars or plain Numpy arrays ``1 + v / c``, the\n dimensionless Quantity will automatically be converted to an unscaled\n dimensionless Quantity.\n\n- When initializing a quantity from a value with no unit, it is now set to\n be dimensionless and unscaled by default. When initializing a Quantity\n from another Quantity and with no unit specified in the initializer, the\n unit is now taken from the unit of the Quantity being initialized from.\n\n- Strings are no longer allowed as the values for Quantities. 
[#1005]\n\n- Quantities are always comparable with zero regardless of their units.\n [#1254]\n\n- The exception ``astropy.units.UnitsException`` has been renamed to\n ``astropy.units.UnitsError`` to be more consistent with the naming\n of built-in Python exceptions. [#1406]\n\n- Multiplication with and division by a string now always returns a Unit\n (rather than a Quantity when the string was first) [#1408]\n\n- Imperial units are disabled by default.\n\nastropy.wcs\n^^^^^^^^^^^\n\n- For those including the ``astropy.wcs`` C headers in their project, they\n should now include it as:\n\n #include \"astropy_wcs/astropy_wcs_api.h\"\n\n instead of:\n\n #include \"astropy_wcs_api.h\"\n\n [#1631]\n\n- The ``--enable-legacy`` option for ``setup.py`` has been removed. [#1493]\n\nBug Fixes\n---------\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- The ``write()`` function was ignoring the ``fill_values`` argument. [#910]\n\n- Fixed an issue in ``DefaultSplitter.join`` where the delimiter attribute\n was ignored when writing the CSV. [#1020]\n\n- Fixed writing of IPAC tables containing null values. [#1366]\n\n- When a table with no header row was read without specifying the format and\n using the ``names`` argument, then the first row could be dropped. [#1692]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Binary tables containing compressed images may, optionally, contain other\n columns unrelated to the tile compression convention. Although this is an\n uncommon use case, it is permitted by the standard.\n\n- Reworked some of the file I/O routines to allow simpler, more consistent\n mapping between OS-level file modes ('rb', 'wb', 'ab', etc.) and the more\n \"PyFITS-specific\" modes used by PyFITS like \"readonly\" and \"update\". That\n is, if reading a FITS file from an open file object, it doesn't matter as\n much what \"mode\" it was opened in so long as it has the right capabilities\n (read/write/etc.) Also works around bugs in the Python io module in 2.6+\n with regard to file modes.\n\n- Fixed a long-standing issue where writing binary tables did not correctly\n write the TFORMn keywords for variable-length array columns (they omitted\n the max array length parameter of the format). This was thought fixed in\n an earlier version, but it was only fixed for compressed image HDUs and\n not for binary tables in general.\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- Fixed crash when trying to multiple or divide ``NDData`` objects with\n uncertainties. [#1547]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Using a list of strings to index a table now correctly returns a new table\n with the columns named in the list. [#1454]\n\n- Inequality operators now work properly with ``Column`` objects. [#1685]\n\nastropy.time\n^^^^^^^^^^^^\n\n- ``Time`` scale and format attributes are now shown when calling ``dir()``\n on a ``Time`` object. [#1130]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Fixed assignment to string-like WCS attributes on Python 3. [#956]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Fixed a bug that caused the order of multiplication/division of plain\n Numpy arrays with Quantities to matter (i.e. if the plain array comes\n first the units were not preserved in the output). [#899]\n\n- Directly instantiated ``CompositeUnits`` were made printable without\n crashing. [#1576]\n\nMisc\n^^^^\n\n- Fixed various modules that hard-coded ``sys.stdout`` as default arguments\n to functions at import time, rather than using the runtime value of\n ``sys.stdout``. 
[#1648]\n\n- Minor documentation fixes and enhancements [#922, #1034, #1210, #1217,\n #1491, #1492, #1498, #1582, #1608, #1621, #1646, #1670, #1756]\n\n- Fixed a crash that could sometimes occur when running the test suite on\n systems with platform names containing non-ASCII characters. [#1698]\n\nOther Changes and Additions\n---------------------------\n\n- General\n\n- Astropy now follows the PSF Code of Conduct. [#1216]\n\n- Astropy's test suite now tests all doctests in inline docstrings. Support\n for running doctests in the reST documentation is planned to follow in\n v0.3.1.\n\n- Astropy's test suite can be run on multiple CPUs in parallel, often\n greatly improving runtime, using the ``--parallel`` option. [#1040]\n\n- A warning is now issued when using Astropy with Numpy < 1.5--much of\n Astropy may still work in this case but it shouldn't be expected to\n either. [#1479]\n\n- Added automatic download/build/installation of Numpy during Astropy\n installation if not already found. [#1483]\n\n- Handling of metadata for the ``NDData`` and ``Table`` classes has been\n unified by way of a common ``MetaData`` descriptor--it allows instantiating\n an object with metadata of any mapping type, and subsequently prevents\n replacing the mapping stored in the ``.meta`` attribute (only direct\n updates to that object are allowed). [#1686]\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Angles containing out of bounds minutes or seconds (e.g. 60) can be\n parsed--the value modulo 60 is used with carry to the hours/minutes, and a\n warning is issued rather than raising an exception. [#990]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- The new compression code also adds support for the ZQUANTIZ and ZDITHER0\n keywords added in more recent versions of this FITS Tile Compression spec.\n This includes support for lossless compression with GZIP. (#198) By default\n no dithering is used, but the ``SUBTRACTIVE_DITHER_1`` and\n ``SUBTRACTIVE_DITHER_2`` methods can be enabled by passing the correct\n constants to the ``quantize_method`` argument to the ``CompImageHDU``\n constructor. A seed can be manually specified, or automatically generated\n using either the system clock or checksum-based methods via the\n ``dither_seed`` argument. See the documentation for ``CompImageHDU`` for\n more details.\n\n- Images compressed with the Tile Compression standard can now be larger than\n 4 GB through support of the Q format.\n\n- All HDUs now have a ``.ver`` ``.level`` attribute that returns the value of\n the EXTVAL and EXTLEVEL keywords from that HDU's header, if the exist.\n This was added for consistency with the ``.name`` attribute which returns\n the EXTNAME value from the header.\n\n- Then ``Column`` and ``ColDefs`` classes have new ``.dtype`` attributes\n which give the Numpy dtype for the column data in the first case, and the\n full Numpy compound dtype for each table row in the latter case.\n\n- There was an issue where new tables created defaulted the values in all\n string columns to '0.0'. Now string columns are filled with empty strings\n by default--this seems a less surprising default, but it may cause\n differences with tables created with older versions of PyFITS or Astropy.\n\nastropy.io.misc\n^^^^^^^^^^^^^^^\n\n- The HDF5 reader can now refer to groups in the path as well as datasets;\n if given a group, the first dataset in that group is read. [#1159]\n\nastropy.nddata\n^^^^^^^^^^^^^^\n\n- ``NDData`` objects have more helpful, though still rudimentary ``__str__`\n and ``__repr__`` displays. 
[#1313]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Added 'cycle' unit. [#1160]\n\n- Extended units supported by the CDS formatter/parser. [#1468]\n\n- Added unicode an LaTeX symbols for liter. [#1618]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Redundant SCAMP distortion parameters are removed with SIP distortions are\n also present. [#1278]\n\n- Added iterative implementation of ``all_world2pix`` that can be reliably\n inverted. [#1281]\n\n\n0.2.5 (2013-10-25)\n==================\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Fixed incorrect string formatting of Angles using ``precision=0``. [#1319]\n\n- Fixed string formatting of Angles using ``decimal=True`` which ignored the\n ``precision`` argument. [#1323]\n\n- Fixed parsing of format strings using appropriate unicode characters\n instead of the ASCII ``-`` for minus signs. [#1429]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Fixed a crash in the IPAC table reader when the ``include/exclude_names``\n option is set. [#1348]\n\n- Fixed writing AASTex tables to honor the ``tabletype`` option. [#1372]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Improved round-tripping and preservation of manually assigned column\n attributes (``TNULLn``, ``TSCALn``, etc.) in table HDU headers. (Note: This\n issue was previously reported as fixed in Astropy v0.2.2 by mistake; it is\n not fixed until v0.3.) [#996]\n\n- Fixed a bug that could cause a segfault when trying to decompress an\n compressed HDU whose contents are truncated (due to a corrupt file, for\n example). This still causes a Python traceback but better that than a\n segfault. [#1332]\n\n- Newly created ``CompImageHDU`` HDUs use the correct value of the\n ``DEFAULT_COMPRESSION_TYPE`` module-level constant instead of hard-coding\n \"RICE_1\" in the header.\n\n- Fixed a corner case where when extra memory is allocated to compress an\n image, it could lead to unnecessary in-memory copying of the compressed\n image data and a possible memory leak through Numpy.\n\n- Fixed a bug where assigning from an mmap'd array in one FITS file over\n the old (also mmap'd) array in another FITS file failed to update the\n destination file. Corresponds to PyFITS issue 25.\n\n- Some miscellaneous documentation fixes.\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- Added a warning for when a VOTable 1.2 file contains no ``RESOURCES``\n elements (at least one should be present). [#1337]\n\n- Fixed a test failure specific to MIPS architecture caused by an errant\n floating point warning. [#1179]\n\nastropy.nddata.convolution\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n- Prevented in-place modification of the input arrays to ``convolve()``.\n [#1153]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Added HTML escaping for string values in tables when outputting the table\n as HTML. [#1347]\n\n- Added a workaround in a bug in Numpy that could cause a crash when\n accessing a table row in a masked table containing ``dtype=object``\n columns. [#1229]\n\n- Fixed an issue similar to the one in #1229, but specific to unmasked\n tables. [#1403]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Improved error handling for unparseable units and fixed parsing CDS units\n without mantissas in the exponent. [#1288]\n\n- Added a physical type for spectral flux density. [#1410]\n\n- Normalized conversions that should result in a scale of exactly 1.0 to\n round off slight floating point imprecisions. [#1407]\n\n- Added support in the CDS unit parser/formatter for unusual unit prefixes\n that are nonetheless required to be supported by that convention. 
[#1426]\n\n- Fixed the parsing of ``sqrt()`` in unit format strings which was returning\n ``unit ** 2`` instead of ``unit ** 0.5``. [#1458]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- When passing a single array to the wcs transformation functions,\n (``astropy.wcs.Wcs.all_pix2world``, etc.), its second dimension must now\n exactly match the number of dimensions in the transformation. [#1395]\n\n- Improved error message when incorrect arguments are passed to\n ``WCS.wcs_world2pix``. [#1394]\n\n- Fixed a crash when trying to read WCS from FITS headers on Python 3.3\n in Windows. [#1363]\n\n- Only headers that are required as part of the WCSLIB C API are installed\n by the package, per request of system packagers. [#1666]\n\nMisc\n^^^^\n\n- Fixed crash when the ``COLUMNS`` environment variable is set to a\n non-integer value. [#1291]\n\n- Fixed a bug in ``ProgressBar.map`` where ``multiprocess=True`` could cause\n it to hang on waiting for the process pool to be destroyed. [#1381]\n\n- Fixed a crash on Python 3.2 when affiliated packages try to use the\n ``astropy.utils.data.get_pkg_data_*`` functions. [#1256]\n\n- Fixed a minor path normalization issue that could occur on Windows in\n ``astropy.utils.data.get_pkg_data_filename``. [#1444]\n\n- Fixed an annoyance where configuration items intended only for testing\n showed up in users' astropy.cfg files. [#1477]\n\n- Prevented crashes in exception logging in unusual cases where no traceback\n is associated with the exception. [#1518]\n\n- Fixed a crash when running the tests in unusual environments where\n ``sys.stdout.encoding`` is ``None``. [#1530]\n\n- Miscellaneous documentation fixes and improvements [#1308, #1317, #1377,\n #1393, #1362, #1516]\n\nOther Changes and Additions\n---------------------------\n\n- Astropy installation now requests setuptools >= 0.7 during build/installation\n if neither distribute or setuptools >= 0.7 is already installed. In other\n words, if ``import setuptools`` fails, ``ez_setup.py`` is used to bootstrap\n the latest setuptools (rather than using ``distribute_setup.py`` to bootstrap\n the now obsolete distribute package). [#1197]\n\n- When importing Astropy from a source checkout without having built the\n extension modules first an ``ImportError`` is raised rather than a\n ``SystemExit`` exception. [#1269]\n\n\n0.2.4 (2013-07-24)\n==================\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Fixed the angle parser to support parsing the string \"1 degree\". [#1168]\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- Fixed a crash in the ``comoving_volume`` method on non-flat cosmologies\n when passing it an array of redshifts.\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Fixed a bug that prevented saving changes to the comment symbol when\n writing changes to a table. [#1167]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Added a workaround for a bug in 64-bit OSX that could cause truncation when\n writing files greater than 2^32 bytes in size. [#839]\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- Fixed incorrect reading of tables containing multiple ``<RESOURCE>``\n elements. [#1223]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fixed a bug where ``Table.remove_column`` and ``Table.rename_column``\n could cause a masked table to lose its masking. [#1120]\n\n- Fixed bugs where subclasses of ``Table`` did not preserver their class in\n certain operations. [#1142]\n\n- Fixed a bug where slicing a masked table did not preserve the mask. 
[#1187]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Fixed a bug where the ``.si`` and ``.cgs`` properties of dimensionless\n ``Quantity`` objects raised a ``ZeroDivisionError``. [#1150]\n\n- Fixed a bug where multiple subsequent calls to the ``.decompose()`` method\n on array quantities applied a scale factor each time. [#1163]\n\nMisc\n^^^^\n\n- Fixed an installation crash that could occur sometimes on Debian/Ubuntu\n and other \\*NIX systems where ``pkg_resources`` can be installed without\n installing ``setuptools``. [#1150]\n\n- Updated the ``distribute_setup.py`` bootstrapper to use setuptools >= 0.7\n when installing on systems that don't already have an up to date version\n of distribute/setuptools. [#1180]\n\n- Changed the ``version.py`` template so that Astropy affiliated packages can\n (and they should) use their own ``cython_version.py`` and\n ``utils._compiler`` modules where appropriate. This issue only pertains to\n affiliated package maintainers. [#1198]\n\n- Fixed a corner case where the default config file generation could crash\n if building with matplotlib but *not* Sphinx installed in a virtualenv.\n [#1225]\n\n- Fixed a crash that could occur in the logging module on systems that\n don't have a default preferred encoding (in particular this happened\n in some versions of PyCharm). [#1244]\n\n- The Astropy log now supports passing non-string objects (and calling\n ``str()`` on them by default) to the logging methods, in line with Python's\n standard logging API. [#1267]\n\n- Minor documentation fixes [#582, #696, #1154, #1194, #1212, #1213, #1246,\n #1252]\n\nOther Changes and Additions\n---------------------------\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- Added a new ``Plank13`` object representing the Plank 2013 results. [#895]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Performance improvements in initialization of ``Quantity`` objects with\n a large number of elements. [#1231]\n\n\n0.2.3 (2013-05-30)\n==================\n\nBug Fixes\n---------\n\nastropy.time\n^^^^^^^^^^^^\n\n- Fixed inaccurate handling of leap seconds when converting from UTC to UNIX\n timestamps. [#1118]\n\n- Tightened required accuracy in many of the time conversion tests. [#1121]\n\nMisc\n^^^^\n\n- Fixed a regression that was introduced in v0.2.2 by the fix to issue #992\n that was preventing installation of Astropy affiliated packages that use\n Astropy's setup framework. [#1124]\n\n\n0.2.2 (2013-05-21)\n==================\n\nBug Fixes\n---------\n\nastropy.io\n^^^^^^^^^^\n\n- Fixed issues in both the ``fits`` and ``votable`` sub-packages where array\n byte order was not being handled consistently, leading to possible crashes\n especially on big-endian systems. 
[#1003]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- When an error occurs opening a file in fitsdiff the exception message will\n now at least mention which file had the error.\n\n- Fixed a couple cases where creating a new table using TDIMn in some of the\n columns could cause a crash.\n\n- Slightly refactored how tables containing variable-length array columns are\n handled to add two improvements: Fixes an issue where accessing the data\n after a call to the ``astropy.io.fits.getdata`` convenience function caused\n an exception, and allows the VLA data to be read from an existing mmap of\n the FITS file.\n\n- Fixed a bug on Python 3 where attempting to open a non-existent file on\n Python 3 caused a seemingly unrelated traceback.\n\n- Fixed an issue in the tests that caused some tests to fail if Astropy is\n installed with read-only permissions.\n\n- Fixed a bug where instantiating a ``BinTableHDU`` from a numpy array\n containing boolean fields converted all the values to ``False``.\n\n- Fixed an issue where passing an array of integers into the constructor of\n ``Column()`` when the column type is floats of the same byte width caused\n the column array to become garbled.\n\n- Fixed inconsistent behavior in creating CONTINUE cards from byte strings\n versus unicode strings in Python 2--CONTINUE cards can now be created\n properly from unicode strings (so long as they are convertable to ASCII).\n\n- Fixed a bug in parsing HIERARCH keywords that do not have a space after the\n first equals sign (before the value).\n\n- Prevented extra leading whitespace on HIERARCH keywords from being treated\n as part of the keyword.\n\n- Fixed a bug where HIERARCH keywords containing lower-case letters was\n mistakenly marked as invalid during header validation along with an\n ancillary issue where the ``Header.index()`` method id not work correctly\n with HIERARCH keywords containing lower-case letters.\n\n- Disallowed assigning NaN and Inf floating point values as header values,\n since the FITS standard does not define a way to represent them in. Because\n this is undefined, the previous behavior did not make sense and produced\n invalid FITS files. [#954]\n\n- Fixed an obscure issue that can occur on systems that don't have flush to\n memory-mapped files implemented (namely GNU Hurd). [#968]\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- Stopped deprecation warnings from the ``astropy.io.votable`` package that\n could occur during setup. [#970]\n\n- Fixed an issue where INFO elements were being incorrectly dropped when\n occurring inside a TABLE element. [#1000]\n\n- Fixed obscure test failures on MIPS platforms. [#1010]\n\nastropy.nddata.convolution\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n- Fixed an issue in ``make_kernel()`` when using an Airy function kernel.\n Also removed the superfluous 'brickwall' option. [#939]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fixed a crash that could occur when adding a row to an empty (rowless)\n table with masked columns. [#973]\n\n- Made it possible to assign to one table row from the value of another row,\n effectively making it easier to copy rows, for example. [#1019]\n\nastropy.time\n^^^^^^^^^^^^\n\n- Added appropriate ``__copy__`` and ``__deepcopy__`` behavior; this\n omission caused a seemingly unrelated error in FK5 coordinate separation.\n [#891]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Fixed an issue where the ``isiterable()`` utility returned ``True`` for\n quantities with scalar values. 
Added an ``__iter__`` method for the\n ``Quantity`` class and fixed ``isiterable()`` to catch false positives.\n [#878]\n\n- Fixed previously undefined behavior when multiplying a unit by a string.\n [#949]\n\n- Added 'time' as a physical type--this was a simple omission. [#959]\n\n- Fixed issues with pickling unit objects so as to play nicer with the\n multiprocessing module. [#974]\n\n- Made it more difficult to accidentally override existing units with a new\n unit of the same name. [#1070]\n\n- Added several more physical types and units that were previously omitted,\n including 'mass density', 'specific volume', 'molar volume', 'momentum',\n 'angular momentum', 'angular speed', 'angular acceleration', 'electric\n current', 'electric current density', 'electric field strength', 'electric\n flux density', 'electric charge density', 'permittivity', 'electromagnetic\n field strength', 'radiant intensity', 'data quantity', 'bandwidth'; and\n 'knots', 'nautical miles', 'becquerels', and 'curies' respectively. [#1072]\n\nMisc\n^^^^\n\n- Fixed a permission error that could occur when running ``astropy.test()``\n on Python 3 when Astropy is installed as root. [#811]\n\n- Made it easier to filter warnings from the ``convolve()`` function and\n from ``Quantity`` objects. [#853]\n\n- Fixed a crash that could occur in Python 3 when generation of the default\n config file fails during setup. [#952]\n\n- Fixed an unrelated error message that could occur when trying to import\n astropy from a source checkout without having build the extension modules\n first. This issue was claimed to be fixed in v0.2.1, but the fix itself had\n a bug. [#971]\n\n- Fixed a crash that could occur when running the ``build_sphinx`` setup\n command in Python 3. [#977]\n\n- Added a more helpful error message when trying to run the\n ``setup.py build_sphinx`` command when Sphinx is not installed. [#1027]\n\n- Minor documentation fixes and restructuring.\n [#935, #967, #978, #1004, #1028, #1047]\n\nOther Changes and Additions\n---------------------------\n\n- Some performance improvements to the ``astropy.units`` package, in particular\n improving the time it takes to import the sub-package. [#1015]\n\n\n0.2.1 (2013-04-03)\n==================\n\nBug Fixes\n---------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- Fixed encoding errors that could occur when formatting coordinate objects\n in code using ``from __future__ import unicode_literals``. [#817]\n\n- Fixed a bug where the minus sign was dropped when string formatting dms\n coordinates with -0 degrees. [#875]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Properly supports the ZQUANTIZ keyword used to support quantization\n level--this includes working support for lossless GZIP compression of\n images.\n\n- Fixed support for opening gzipped FITS files in a writeable mode. [#256]\n\n- Added a more helpful exception message when trying to read invalid values\n from a table when the required ``TNULLn`` keyword is missing. [#309]\n\n- More refactoring of the tile compression handling to work around a\n potential memory access violation that was particularly prevalent on\n Windows. [#507]\n\n- Fixed an integer size mismatch in the compression module that could affect\n 32-bit systems. 
[#786]\n\n- Fixed malformatting of the ``TFORMn`` keywords when writing compressed\n image tables (they omitted the max array length parameter from the\n variable-length array format).\n\n- Fixed a crash that could occur when writing a table containing multi-\n dimensional array columns from an existing file into a new file.\n\n- Fixed a bug in fitsdiff that reported two header keywords containing NaN\n as having different values.\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- Fixed links to the ``astropy.io.votable`` documentation in the VOTable\n validator output. [#806]\n\n- When reading VOTables containing integers that are out of range for their\n column type, display a warning rather than raising an exception. [#825]\n\n- Changed the default string format for floating point values for better\n round-tripping. [#856]\n\n- Fixed opening VOTables through the ``Table.read()`` interface for tables\n that have no names. [#927]\n\n- Fixed creation of VOTables from an Astropy table that does not have a data\n mask. [#928]\n\n- Minor documentation fixes. [#932]\n\nastropy.nddata.convolution\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n- Added better handling of ``inf`` values to the ``convolve_fft`` family of\n functions. [#893]\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Fixed silent failure to assign values to a row on multiple columns. [#764]\n\n- Fixed various buggy behavior when viewing a table after sorting by one of\n its columns. [#829]\n\n- Fixed using ``numpy.where()`` with table indexing. [#838]\n\n- Fixed a bug where opening a remote table with ``Table.read()`` could cause\n the entire table to be downloaded twice. [#845]\n\n- Fixed a bug where ``MaskedColumn`` no longer worked if the column being\n masked is renamed. [#916]\n\nastropy.units\n^^^^^^^^^^^^^\n\n- Added missing capability for array ``Quantity``\\s to be initializable by\n a list of ``Quantity``\\s. [#835]\n\n- Fixed the definition of year and lightyear to be in terms of Julian year\n per the IAU definition. [#861]\n\n- \"degree\" was removed from the list of SI base units. [#863]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Fixed ``TypeError`` when calling ``WCS.to_header_string()``. [#822]\n\n- Added new method ``WCS.all_world2pix`` for converting from world\n coordinates to pixel space, including inversion of the astrometric\n distortion correction. [#1066, #1281]\n\nMisc\n^^^^\n\n- Fixed a minor issue when installing with ``./setup.py develop`` on a fresh\n git clone. This is likely only of interest to developers on Astropy.\n [#725]\n\n- Fixes a crash with ``ImportError: No module named 'astropy.version'`` when\n running setup.py from a source checkout for the first time on OSX with\n Python 3.3. [#820]\n\n- Fixed an installation issue where running ``./setup.py install`` or when\n installing with pip the ``.astropy`` directory gets created in the home\n directory of the user running the command. The user's ``.astropy``\n directory should only be created when they use Astropy, not when they\n install it. [#867]\n\n- Fixed an exception when creating a ``ProgressBar`` with a \"total\" of 0.\n [#752]\n\n- Added better documentation of behavior that can occur when trying to import\n the astropy package from within a source checkout without first building\n the extension modules. [#795, #864]\n\n- Added link to the installation instructions in the README. [#797]\n\n- Catches segfaults in xmllint which can occur sometimes and is otherwise out\n of our control. [#803]\n\n- Minor changes to the documentation template. 
[#805]\n\n- Fixed a minor exception handling bug in ``download_file()``. [#808]\n\n- Added cleanup of any temporary files if an error occurs in\n ``download_file()``. [#857]\n\n- Filesystem free space is checked for before attempting to download a file\n with ``download_file()``. [#858]\n\n- Fixed package data locating to work across symlinks--required to work with\n some OS packaging layouts. [#827]\n\n- Fixed a bug when building Cython extensions where hidden files containing\n ``.pyx`` extensions could cause the build to crash. This can be an issue\n with software and filesystems that autogenerate hidden files. [#834]\n\n- Fixed bug that could cause a \"script\" called README.rst to be installed\n in a bin directory. [#852]\n\n- Fixed some miscellaneous and mostly rare reference leaks caught by\n cpychecker. [#914]\n\nOther Changes and Additions\n---------------------------\n\n- Added logo and branding for Windows binary installers. [#741]\n\n- Upgraded included version libexpat to 2.1.0. [#781]\n\n- ~25% performance improvement in unit composition/decomposition. [#836]\n\n- Added previously missing LaTeX formatting for ``L_sun`` and ``R_sun``. [#841]\n\n- ConfigurationItem\\s now have a more useful and informative __repr__\n and improved documentation for how to use them. [#855]\n\n- Added a friendlier error message when trying to import astropy from a source\n checkout without first building the extension modules inplace. [#864]\n\n- py.test now outputs more system information for help in debugging issues\n from users. [#869]\n\n- Added unit definitions \"mas\" and \"uas\" for \"milliarcsecond\" and\n \"microarcsecond\" respectively. [#892]\n\n\n0.2 (2013-02-19)\n================\n\nNew Features\n------------\n\nastropy.coordinates\n^^^^^^^^^^^^^^^^^^^\n\n- This new subpackage contains a representation of celestial coordinates,\n and provides a wide range of related functionality. While\n fully-functional, it is a work in progress and parts of the API may\n change in subsequent releases.\n\nastropy.cosmology\n^^^^^^^^^^^^^^^^^\n\n- Update to include cosmologies with variable dark energy equations of state.\n (This introduces some API incompatibilities with the older Cosmology\n objects).\n\n- Added parameters for relativistic species (photons, neutrinos) to the\n astropy.cosmology classes. The current treatment assumes that neutrinos are\n massless. [#365]\n\n- Add a WMAP9 object using the final (9-year) WMAP parameters from\n Hinshaw et al. 2013. It has also been made the default cosmology.\n [#629, #724]\n\n- astropy.table I/O infrastructure for custom readers/writers\n implemented. [#305]\n\n- Added support for reading/writing HDF5 files [#461]\n\n- Added support for masked tables with missing or invalid data [#451]\n\n- New ``astropy.time`` sub-package. [#332]\n\n- New ``astropy.units`` sub-package that includes a class for units\n (``astropy.units.Unit``) and scalar quantities that have units\n (``astropy.units.Quantity``). [#370, #445]\n\n This has the following effects on other sub-packages:\n\n- In ``astropy.wcs``, the ``wcs.cunit`` list now takes and returns\n ``astropy.units.Unit`` objects. [#379]\n\n- In ``astropy.nddata``, units are now stored as ``astropy.units.Unit``\n objects. [#382]\n\n- In ``astropy.table``, units on columns are now stored as\n ``astropy.units.Unit`` objects. [#380]\n\n- In ``astropy.constants``, constants are now stored as\n ``astropy.units.Quantity`` objects. 
[#529]\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Improved integration with the ``astropy.table`` Table class so that\n table and column metadata (e.g. keywords, units, description,\n formatting) are directly available in the output table object. The\n CDS, DAOphot, and IPAC format readers now provide this type of\n integrated metadata.\n\n- Changed to using ``astropy.table`` masked tables instead of NumPy\n masked arrays for tables with missing values.\n\n- Added SExtractor table reader to ``astropy.io.ascii`` [#420]\n\n- Removed the Memory reader class which was used to convert data input\n passed to the ``write`` function into an internal table. Instead\n ``write`` instantiates an astropy Table object using the data\n input to ``write``.\n\n- Removed the NumpyOutputter as the output of reading a table is now\n always a ``Table`` object.\n\n- Removed the option of supplying a function as a column output\n formatter.\n\n- Added a new ``strip_whitespace`` keyword argument to the ``write``\n function. This controls whether whitespace is stripped from\n the left and right sides of table elements before writing.\n Default is True.\n\n- Fixed a bug in reading IPAC tables with null values.\n\n- Generalized I/O infrastructure so that ``astropy.nddata`` can also have\n custom readers/writers [#659]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- From updating the underlying wcslib 4.16:\n\n- When ``astropy.wcs.WCS`` constructs a default coordinate representation\n it will give it the special name \"DEFAULTS\", and will not report \"Found\n one coordinate representation\".\n\nOther Changes and Additions\n---------------------------\n\n- A configuration file with all options set to their defaults is now generated\n when astropy is installed. This file will be pulled in as the users'\n astropy configuration file the first time they ``import astropy``. [#498]\n\n- Astropy doc themes moved into ``astropy.sphinx`` to allow affiliated packages\n to access them.\n\n- Added expanded documentation for the ``astropy.cosmology`` sub-package.\n [#272]\n\n- Added option to disable building of \"legacy\" packages (pyfits, vo, etc.).\n\n- The value of the astronomical unit (au) has been updated to that adopted by\n IAU 2012 Resolution B2, and the values of the pc and kpc constants have been\n updated to reflect this. [#368]\n\n- Added links to the documentation pages to directly edit the documentation on\n GitHub. [#347]\n\n- Several updates merged from ``pywcs`` into ``astropy.wcs`` [#384]:\n\n- Improved the reading of distortion images.\n\n- Added a new option to choose whether or not to write SIP coefficients.\n\n- Uses the ``relax`` option by default so that non-standard keywords are\n allowed. [#585]\n\n\n- Added HTML representation of tables in IPython notebook [#409]\n\n- Rewrote CFITSIO-based backend for handling tile compression of FITS files.\n It now uses a standard CFITSIO instead of heavily modified pieces of CFITSIO\n as before. Astropy ships with its own copy of CFITSIO v3.30, but system\n packagers may choose instead to strip this out in favor of a\n system-installed version of CFITSIO. This corresponds to PyFITS ticket 169.\n [#318]\n\n- Moved ``astropy.config.data`` to ``astropy.utils.data`` and re-factored the\n I/O routines to separate out the generic I/O code that can be used to open\n any file or resource from the code used to access Astropy-related data. 
The\n 'core' I/O routine is now ``get_readable_fileobj``, which can be used to\n access any local as well as remote data, supports caching, and can decompress\n gzip and bzip2 files on-the-fly. [#425]\n\n- Added a classmethod to\n ``astropy.coordinates.coordsystems.SphericalCoordinatesBase`` that performs a\n name resolve query using Sesame to retrieve coordinates for the requested\n object. This works for any subclass of ``SphericalCoordinatesBase``, but\n requires an internet connection. [#556]\n\n- astropy.nddata.convolution removed requirement of PyFFTW3; uses Numpy's\n FFT by default instead with the added ability to specify an FFT\n implementation to use. [#660]\n\n\nBug Fixes\n---------\n\nastropy.io.ascii\n^^^^^^^^^^^^^^^^\n\n- Fixed crash when pprinting a row with INDEF values. [#511]\n\n- Fixed failure when reading DAOphot files with empty keyword values. [#666]\n\nastropy.io.fits\n^^^^^^^^^^^^^^^\n\n- Improved handling of scaled images and pseudo-unsigned integer images in\n compressed image HDUs. They now work more transparently like normal image\n HDUs with support for the ``do_not_scale_image_data`` and ``uint`` options,\n as well as ``scale_back`` and ``save_backup``. The ``.scale()`` method\n works better too. Corresponds to PyFITS ticket 88.\n\n- Permits non-string values for the EXTNAME keyword when reading in a file,\n rather than throwing an exception due to the malformatting. Added\n verification for the format of the EXTNAME keyword when writing.\n Corresponds to PyFITS ticket 96.\n\n- Added support for EXTNAME and EXTVER in PRIMARY HDUs. That is, if EXTNAME\n is specified in the header, it will also be reflected in the ``.name``\n attribute and in ``fits.info()``. These keywords used to be verboten in\n PRIMARY HDUs, but the latest version of the FITS standard allows them.\n Corresponds to PyFITS ticket 151.\n\n- HCOMPRESS can again be used to compress data cubes (and higher-dimensional\n arrays) so long as the tile size is effectively 2-dimensional. In fact,\n compatible tile sizes will automatically be used even if they're not\n explicitly specified. Corresponds to PyFITS ticket 171.\n\n- Fixed a bug that could cause a deadlock in the filesystem on OSX when\n reading the data from certain types of FITS files. This only occurred\n when used in conjunction with Numpy 1.7. [#369]\n\n- Added support for the optional ``endcard`` parameter in the\n ``Header.fromtextfile()`` and ``Header.totextfile()`` methods. Although\n ``endcard=False`` was a reasonable default assumption, there are still text\n dumps of FITS headers that include the END card, so this should have been\n more flexible. Corresponds to PyFITS ticket 176.\n\n- Fixed a crash when running fitsdiff on two empty (that is, zero row) tables.\n Corresponds to PyFITS ticket 178.\n\n- Fixed an issue where opening a FITS file containing a random group HDU in\n update mode could result in an unnecessary rewriting of the file even if\n no changes were made. This corresponds to PyFITS ticket 179.\n\n- Fixed a crash when generating diff reports from diffs using the\n ``ignore_comments`` options. Corresponds to PyFITS ticket 181.\n\n- Fixed some bugs with WCS distortion paper record-valued keyword cards:\n\n- Cards that looked kind of like RVKCs but were not intended to be were\n over-permissively treated as such--commentary keywords like COMMENT and\n HISTORY were particularly affected. 
Corresponds to PyFITS ticket 183.\n\n- Looking up a card in a header by its standard FITS keyword only should\n always return the raw value of that card. That way cards containing\n values that happen to valid RVKCs but were not intended to be will still\n be treated like normal cards. Corresponds to PyFITS ticket 184.\n\n- Looking up a RVKC in a header with only part of the field-specifier (for\n example \"DP1.AXIS\" instead of \"DP1.AXIS.1\") was implicitly treated as a\n wildcard lookup. Corresponds to PyFITS ticket 184.\n\n- Fixed a crash when diffing two FITS files where at least one contains a\n compressed image HDU which was not recognized as an image instead of a\n table. Corresponds to PyFITS ticket 187.\n\n- Fixed a bug where opening a file containing compressed image HDUs in\n 'update' mode and then immediately closing it without making any changes\n caused the file to be rewritten unnecessarily.\n\n- Fixed two memory leaks that could occur when writing compressed image data,\n or in some cases when opening files containing compressed image HDUs in\n 'update' mode.\n\n- Fixed a bug where ``ImageHDU.scale(option='old')`` wasn't working at\n all--it was not restoring the image to its original BSCALE and BZERO\n values.\n\n- Fixed a bug when writing out files containing zero-width table columns,\n where the TFIELDS keyword would be updated incorrectly, leaving the table\n largely unreadable.\n\n- Fixed a minor string formatting issue.\n\n- Fixed bugs in the backwards compatibility layer for the ``CardList.index``\n and ``CardList.count`` methods. Corresponds to PyFITS ticket 190.\n\n- Improved ``__repr__`` and text file representation of cards with long\n values that are split into CONTINUE cards. Corresponds to PyFITS ticket\n 193.\n\n- Fixed a crash when trying to assign a long (> 72 character) value to blank\n ('') keywords. This also changed how blank keywords are represented--there\n are still exactly 8 spaces before any commentary content can begin; this\n *may* affect the exact display of header cards that assumed there could be\n fewer spaces in a blank keyword card before the content begins. However,\n the current approach is more in line with the requirements of the FITS\n standard. Corresponds to PyFITS ticket 194.\n\nastropy.io.votable\n^^^^^^^^^^^^^^^^^^\n\n- The ``Table`` class now maintains a single array object which is a\n Numpy masked array. For variable-length columns, the object that\n is stored there is also a Numpy masked array.\n\n- Changed the ``pedantic`` configuration option to be ``False`` by default\n due to the vast proliferation of non-compliant VO Tables. [#296]\n\n- Renamed ``astropy.io.vo`` to ``astropy.io.votable``.\n\nastropy.table\n^^^^^^^^^^^^^\n\n- Added a workaround for an upstream bug in Numpy 1.6.2 that could cause\n a maximum recursion depth RuntimeError when printing table rows. [#341]\n\nastropy.wcs\n^^^^^^^^^^^\n\n- Updated to wcslib 4.15 [#418]\n\n- Fixed a problem with handling FITS headers on locales that do not use\n dot as a decimal separator. This required an upstream fix to wcslib which\n is included in wcslib 4.14. [#313]\n\n- Fixed some tests that could fail due to missing/incorrect logging\n configuration--ensures that tests don't have any impact on the default log\n location or contents. 
[#291]\n\n- Various minor documentation fixes [#293 and others]\n\n- Fixed a bug where running the tests with the ``py.test`` command still tried\n to replace the system-installed pytest with the one bundled with Astropy.\n [#454]\n\n- Improved multiprocessing compatibility for file downloads. [#615]\n\n- Fixed handling of Cython modules when building from a source checkout of a\n tagged release version. [#594]\n\n- Added a workaround for a bug in Sphinx that could occur when using the\n ``:tocdepth:`` directive. [#595]\n\n- Minor VOTable fixes [#596]\n\n- Fixed how ``setup.py`` uses ``distribute_setup.py`` to prevent possible\n ``VersionConflict`` errors when an older version of distribute is already\n installed on the user's system. [#616][#640]\n\n- Changed use of ``log.warn`` in the logging module to ``log.warning`` since\n the former is deprecated. [#624]\n\n\n0.1 (2012-06-19)\n================\n\n- Initial release.\n", "astropy/units/astrophys.py": "# -*- coding: utf-8 -*-\n# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\n\"\"\"\nThis package defines the astrophysics-specific units. They are also\navailable in the `astropy.units` namespace.\n\"\"\"\n\n\nfrom . import si\nfrom ..constants import si as _si\nfrom .core import (UnitBase, def_unit, si_prefixes, binary_prefixes,\n set_enabled_units)\n\n# To ensure si units of the constants can be interpreted.\nset_enabled_units([si])\n\nimport numpy as _numpy\n\n_ns = globals()\n\n###########################################################################\n# LENGTH\n\ndef_unit((['AU', 'au'], ['astronomical_unit']), _si.au, namespace=_ns, prefixes=True,\n doc=\"astronomical unit: approximately the mean Earth--Sun \"\n \"distance.\")\n\ndef_unit(['pc', 'parsec'], _si.pc, namespace=_ns, prefixes=True,\n doc=\"parsec: approximately 3.26 light-years.\")\n\ndef_unit(['solRad', 'R_sun', 'Rsun'], _si.R_sun, namespace=_ns,\n doc=\"Solar radius\", prefixes=False,\n format={'latex': r'R_{\\odot}', 'unicode': 'R⊙'})\ndef_unit(['jupiterRad', 'R_jup', 'Rjup', 'R_jupiter', 'Rjupiter'],\n _si.R_jup, namespace=_ns, prefixes=False, doc=\"Jupiter radius\",\n # LaTeX jupiter symbol requires wasysym\n format={'latex': r'R_{\\rm J}', 'unicode': 'R♃'})\ndef_unit(['earthRad', 'R_earth', 'Rearth'], _si.R_earth, namespace=_ns,\n prefixes=False, doc=\"Earth radius\",\n # LaTeX earth symbol requires wasysym\n format={'latex': r'R_{\\oplus}', 'unicode': 'R⊕'})\n\ndef_unit(['lyr', 'lightyear'], (_si.c * si.yr).to(si.m),\n namespace=_ns, prefixes=True, doc=\"Light year\")\n\n\n###########################################################################\n# AREAS\n\ndef_unit(['barn', 'barn'], 10 ** -28 * si.m ** 2, namespace=_ns, prefixes=True,\n doc=\"barn: unit of area used in HEP\")\n\n\n###########################################################################\n# ANGULAR MEASUREMENTS\n\ndef_unit(['cycle', 'cy'], 2.0 * _numpy.pi * si.rad,\n namespace=_ns, prefixes=False,\n doc=\"cycle: angular measurement, a full turn or rotation\")\n\n###########################################################################\n# MASS\n\ndef_unit(['solMass', 'M_sun', 'Msun'], _si.M_sun, namespace=_ns,\n prefixes=False, doc=\"Solar mass\",\n format={'latex': r'M_{\\odot}', 'unicode': 'M⊙'})\ndef_unit(['jupiterMass', 'M_jup', 'Mjup', 'M_jupiter', 'Mjupiter'],\n _si.M_jup, namespace=_ns, prefixes=False, doc=\"Jupiter mass\",\n # LaTeX jupiter symbol requires wasysym\n format={'latex': r'M_{\\rm J}', 'unicode': 'M♃'})\ndef_unit(['earthMass', 'M_earth', 'Mearth'], 
_si.M_earth, namespace=_ns,\n prefixes=False, doc=\"Earth mass\",\n # LaTeX earth symbol requires wasysym\n format={'latex': r'M_{\\oplus}', 'unicode': 'M⊕'})\ndef_unit(['M_p'], _si.m_p, namespace=_ns, doc=\"Proton mass\",\n format={'latex': r'M_{p}', 'unicode': 'Mₚ'})\ndef_unit(['M_e'], _si.m_e, namespace=_ns, doc=\"Electron mass\",\n format={'latex': r'M_{e}', 'unicode': 'Mₑ'})\n# Unified atomic mass unit\ndef_unit(['u', 'Da', 'Dalton'], _si.u, namespace=_ns,\n prefixes=True, exclude_prefixes=['a', 'da'],\n doc=\"Unified atomic mass unit\")\n\n##########################################################################\n# ENERGY\n\n# Here, explicitly convert the planck constant to 'eV s' since the constant\n# can override that to give a more precise value that takes into account\n# covariances between e and h. Eventually, this may also be replaced with\n# just `_si.Ryd.to(eV)`.\ndef_unit(['Ry', 'rydberg'],\n (_si.Ryd * _si.c * _si.h.to(si.eV * si.s)).to(si.eV),\n namespace=_ns, prefixes=True,\n doc=\"Rydberg: Energy of a photon whose wavenumber is the Rydberg \"\n \"constant\",\n format={'latex': r'R_{\\infty}', 'unicode': 'R∞'})\n\n\n###########################################################################\n# ILLUMINATION\n\ndef_unit(['solLum', 'L_sun', 'Lsun'], _si.L_sun, namespace=_ns,\n prefixes=False, doc=\"Solar luminance\",\n format={'latex': r'L_{\\odot}', 'unicode': 'L⊙'})\n\n\n###########################################################################\n# SPECTRAL DENSITY\n\ndef_unit((['ph', 'photon'], ['photon']),\n format={'ogip': 'photon', 'vounit': 'photon'},\n namespace=_ns, prefixes=True)\ndef_unit(['Jy', 'Jansky', 'jansky'], 1e-26 * si.W / si.m ** 2 / si.Hz,\n namespace=_ns, prefixes=True,\n doc=\"Jansky: spectral flux density\")\ndef_unit(['R', 'Rayleigh', 'rayleigh'],\n (1e10 / (4 * _numpy.pi)) *\n ph * si.m ** -2 * si.s ** -1 * si.sr ** -1,\n namespace=_ns, prefixes=True,\n doc=\"Rayleigh: photon flux\")\n\n\n###########################################################################\n# MISCELLANEOUS\n\n# Some of these are very FITS-specific and perhaps considered a mistake.\n# Maybe they should be moved into the FITS format class?\n# TODO: This is defined by the FITS standard as \"relative to the sun\".\n# Is that mass, volume, what?\ndef_unit(['Sun'], namespace=_ns)\n\n\n###########################################################################\n# EVENTS\n\ndef_unit((['ct', 'count'], ['count']),\n format={'fits': 'count', 'ogip': 'count', 'vounit': 'count'},\n namespace=_ns, prefixes=True, exclude_prefixes=['p'])\ndef_unit((['pix', 'pixel'], ['pixel']),\n format={'ogip': 'pixel', 'vounit': 'pixel'},\n namespace=_ns, prefixes=True)\n\n\n###########################################################################\n# MISCELLANEOUS\n\ndef_unit(['chan'], namespace=_ns, prefixes=True)\ndef_unit(['bin'], namespace=_ns, prefixes=True)\ndef_unit((['vox', 'voxel'], ['voxel']),\n format={'fits': 'voxel', 'ogip': 'voxel', 'vounit': 'voxel'},\n namespace=_ns, prefixes=True)\ndef_unit((['bit', 'b'], ['bit']), namespace=_ns,\n prefixes=si_prefixes + binary_prefixes)\ndef_unit((['byte', 'B'], ['byte']), 8 * bit, namespace=_ns,\n format={'vounit': 'byte'},\n prefixes=si_prefixes + binary_prefixes,\n exclude_prefixes=['d'])\ndef_unit(['adu'], namespace=_ns, prefixes=True)\ndef_unit(['beam'], namespace=_ns, prefixes=True)\ndef_unit(['electron'], doc=\"Number of electrons\", namespace=_ns,\n format={'latex': r'e^{-}', 'unicode': 
'e⁻'})\n\n\n###########################################################################\n# CLEANUP\n\ndel UnitBase\ndel def_unit\ndel si\n\n\n###########################################################################\n# DOCSTRING\n\n# This generates a docstring for this module that describes all of the\n# standard units defined here.\nfrom .utils import generate_unit_summary as _generate_unit_summary\nif __doc__ is not None:\n __doc__ += _generate_unit_summary(globals())\n", "astropy/units/equivalencies.py": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\"\"\"A set of standard astronomical equivalencies.\"\"\"\n\n# THIRD-PARTY\nimport numpy as np\nimport warnings\n\n# LOCAL\nfrom ..constants import si as _si\nfrom . import si\nfrom . import cgs\nfrom . import astrophys\nfrom .function import units as function_units\nfrom . import dimensionless_unscaled\nfrom .core import UnitsError, Unit\n\n\n__all__ = ['parallax', 'spectral', 'spectral_density', 'doppler_radio',\n 'doppler_optical', 'doppler_relativistic', 'mass_energy',\n 'brightness_temperature', 'thermodynamic_temperature',\n 'beam_angular_area', 'dimensionless_angles', 'logarithmic',\n 'temperature', 'temperature_energy', 'molar_mass_amu',\n 'pixel_scale', 'plate_scale']\n\n\ndef dimensionless_angles():\n \"\"\"Allow angles to be equivalent to dimensionless (with 1 rad = 1 m/m = 1).\n\n It is special compared to other equivalency pairs in that it\n allows this independent of the power to which the angle is raised,\n and independent of whether it is part of a more complicated unit.\n \"\"\"\n return [(si.radian, None)]\n\n\ndef logarithmic():\n \"\"\"Allow logarithmic units to be converted to dimensionless fractions\"\"\"\n return [\n (dimensionless_unscaled, function_units.dex,\n np.log10, lambda x: 10.**x)\n ]\n\n\ndef parallax():\n \"\"\"\n Returns a list of equivalence pairs that handle the conversion\n between parallax angle and distance.\n \"\"\"\n return [\n (si.arcsecond, astrophys.parsec, lambda x: 1. 
/ x)\n ]\n\n\ndef spectral():\n \"\"\"\n Returns a list of equivalence pairs that handle spectral\n wavelength, wave number, frequency, and energy equivalences.\n\n Allows conversions between wavelength units, wave number units,\n frequency units, and energy units as they relate to light.\n\n There are two types of wave number:\n\n * spectroscopic - :math:`1 / \\\\lambda` (per meter)\n * angular - :math:`2 \\\\pi / \\\\lambda` (radian per meter)\n\n \"\"\"\n hc = _si.h.value * _si.c.value\n two_pi = 2.0 * np.pi\n inv_m_spec = si.m ** -1\n inv_m_ang = si.radian / si.m\n\n return [\n (si.m, si.Hz, lambda x: _si.c.value / x),\n (si.m, si.J, lambda x: hc / x),\n (si.Hz, si.J, lambda x: _si.h.value * x, lambda x: x / _si.h.value),\n (si.m, inv_m_spec, lambda x: 1.0 / x),\n (si.Hz, inv_m_spec, lambda x: x / _si.c.value,\n lambda x: _si.c.value * x),\n (si.J, inv_m_spec, lambda x: x / hc, lambda x: hc * x),\n (inv_m_spec, inv_m_ang, lambda x: x * two_pi, lambda x: x / two_pi),\n (si.m, inv_m_ang, lambda x: two_pi / x),\n (si.Hz, inv_m_ang, lambda x: two_pi * x / _si.c.value,\n lambda x: _si.c.value * x / two_pi),\n (si.J, inv_m_ang, lambda x: x * two_pi / hc, lambda x: hc * x / two_pi)\n ]\n\n\ndef spectral_density(wav, factor=None):\n \"\"\"\n Returns a list of equivalence pairs that handle spectral density\n with regard to wavelength and frequency.\n\n Parameters\n ----------\n wav : `~astropy.units.Quantity`\n `~astropy.units.Quantity` associated with values being converted\n (e.g., wavelength or frequency).\n\n Notes\n -----\n The ``factor`` argument is left for backward-compatibility with the syntax\n ``spectral_density(unit, factor)`` but users are encouraged to use\n ``spectral_density(factor * unit)`` instead.\n\n \"\"\"\n from .core import UnitBase\n\n if isinstance(wav, UnitBase):\n if factor is None:\n raise ValueError(\n 'If `wav` is specified as a unit, `factor` should be set')\n wav = factor * wav # Convert to Quantity\n\n c_Aps = _si.c.to_value(si.AA / si.s) # Angstrom/s\n h_cgs = _si.h.cgs.value # erg * s\n hc = c_Aps * h_cgs\n\n # flux density\n f_la = cgs.erg / si.angstrom / si.cm ** 2 / si.s\n f_nu = cgs.erg / si.Hz / si.cm ** 2 / si.s\n nu_f_nu = cgs.erg / si.cm ** 2 / si.s\n la_f_la = nu_f_nu\n phot_f_la = astrophys.photon / (si.cm ** 2 * si.s * si.AA)\n phot_f_nu = astrophys.photon / (si.cm ** 2 * si.s * si.Hz)\n\n # luminosity density\n L_nu = cgs.erg / si.s / si.Hz\n L_la = cgs.erg / si.s / si.angstrom\n nu_L_nu = cgs.erg / si.s\n la_L_la = nu_L_nu\n phot_L_la = astrophys.photon / (si.s * si.AA)\n phot_L_nu = astrophys.photon / (si.s * si.Hz)\n\n def converter(x):\n return x * (wav.to_value(si.AA, spectral()) ** 2 / c_Aps)\n\n def iconverter(x):\n return x / (wav.to_value(si.AA, spectral()) ** 2 / c_Aps)\n\n def converter_f_nu_to_nu_f_nu(x):\n return x * wav.to_value(si.Hz, spectral())\n\n def iconverter_f_nu_to_nu_f_nu(x):\n return x / wav.to_value(si.Hz, spectral())\n\n def converter_f_la_to_la_f_la(x):\n return x * wav.to_value(si.AA, spectral())\n\n def iconverter_f_la_to_la_f_la(x):\n return x / wav.to_value(si.AA, spectral())\n\n def converter_phot_f_la_to_f_la(x):\n return hc * x / wav.to_value(si.AA, spectral())\n\n def iconverter_phot_f_la_to_f_la(x):\n return x * wav.to_value(si.AA, spectral()) / hc\n\n def converter_phot_f_la_to_f_nu(x):\n return h_cgs * x * wav.to_value(si.AA, spectral())\n\n def iconverter_phot_f_la_to_f_nu(x):\n return x / (wav.to_value(si.AA, spectral()) * h_cgs)\n\n def converter_phot_f_la_phot_f_nu(x):\n return x * wav.to_value(si.AA, 
spectral()) ** 2 / c_Aps\n\n def iconverter_phot_f_la_phot_f_nu(x):\n return c_Aps * x / wav.to_value(si.AA, spectral()) ** 2\n\n converter_phot_f_nu_to_f_nu = converter_phot_f_la_to_f_la\n iconverter_phot_f_nu_to_f_nu = iconverter_phot_f_la_to_f_la\n\n def converter_phot_f_nu_to_f_la(x):\n return x * hc * c_Aps / wav.to_value(si.AA, spectral()) ** 3\n\n def iconverter_phot_f_nu_to_f_la(x):\n return x * wav.to_value(si.AA, spectral()) ** 3 / (hc * c_Aps)\n\n # for luminosity density\n converter_L_nu_to_nu_L_nu = converter_f_nu_to_nu_f_nu\n iconverter_L_nu_to_nu_L_nu = iconverter_f_nu_to_nu_f_nu\n converter_L_la_to_la_L_la = converter_f_la_to_la_f_la\n iconverter_L_la_to_la_L_la = iconverter_f_la_to_la_f_la\n\n converter_phot_L_la_to_L_la = converter_phot_f_la_to_f_la\n iconverter_phot_L_la_to_L_la = iconverter_phot_f_la_to_f_la\n converter_phot_L_la_to_L_nu = converter_phot_f_la_to_f_nu\n iconverter_phot_L_la_to_L_nu = iconverter_phot_f_la_to_f_nu\n converter_phot_L_la_phot_L_nu = converter_phot_f_la_phot_f_nu\n iconverter_phot_L_la_phot_L_nu = iconverter_phot_f_la_phot_f_nu\n converter_phot_L_nu_to_L_nu = converter_phot_f_nu_to_f_nu\n iconverter_phot_L_nu_to_L_nu = iconverter_phot_f_nu_to_f_nu\n converter_phot_L_nu_to_L_la = converter_phot_f_nu_to_f_la\n iconverter_phot_L_nu_to_L_la = iconverter_phot_f_nu_to_f_la\n\n return [\n # flux\n (f_la, f_nu, converter, iconverter),\n (f_nu, nu_f_nu, converter_f_nu_to_nu_f_nu, iconverter_f_nu_to_nu_f_nu),\n (f_la, la_f_la, converter_f_la_to_la_f_la, iconverter_f_la_to_la_f_la),\n (phot_f_la, f_la, converter_phot_f_la_to_f_la, iconverter_phot_f_la_to_f_la),\n (phot_f_la, f_nu, converter_phot_f_la_to_f_nu, iconverter_phot_f_la_to_f_nu),\n (phot_f_la, phot_f_nu, converter_phot_f_la_phot_f_nu, iconverter_phot_f_la_phot_f_nu),\n (phot_f_nu, f_nu, converter_phot_f_nu_to_f_nu, iconverter_phot_f_nu_to_f_nu),\n (phot_f_nu, f_la, converter_phot_f_nu_to_f_la, iconverter_phot_f_nu_to_f_la),\n # luminosity\n (L_la, L_nu, converter, iconverter),\n (L_nu, nu_L_nu, converter_L_nu_to_nu_L_nu, iconverter_L_nu_to_nu_L_nu),\n (L_la, la_L_la, converter_L_la_to_la_L_la, iconverter_L_la_to_la_L_la),\n (phot_L_la, L_la, converter_phot_L_la_to_L_la, iconverter_phot_L_la_to_L_la),\n (phot_L_la, L_nu, converter_phot_L_la_to_L_nu, iconverter_phot_L_la_to_L_nu),\n (phot_L_la, phot_L_nu, converter_phot_L_la_phot_L_nu, iconverter_phot_L_la_phot_L_nu),\n (phot_L_nu, L_nu, converter_phot_L_nu_to_L_nu, iconverter_phot_L_nu_to_L_nu),\n (phot_L_nu, L_la, converter_phot_L_nu_to_L_la, iconverter_phot_L_nu_to_L_la),\n ]\n\n\ndef doppler_radio(rest):\n r\"\"\"\n Return the equivalency pairs for the radio convention for velocity.\n\n The radio convention for the relation between velocity and frequency is:\n\n :math:`V = c \\frac{f_0 - f}{f_0} ; f(V) = f_0 ( 1 - V/c )`\n\n Parameters\n ----------\n rest : `~astropy.units.Quantity`\n Any quantity supported by the standard spectral equivalencies\n (wavelength, energy, frequency, wave number).\n\n References\n ----------\n `NRAO site defining the conventions <http://www.gb.nrao.edu/~fghigo/gbtdoc/doppler.html>`_\n\n Examples\n --------\n >>> import astropy.units as u\n >>> CO_restfreq = 115.27120*u.GHz # rest frequency of 12 CO 1-0 in GHz\n >>> radio_CO_equiv = u.doppler_radio(CO_restfreq)\n >>> measured_freq = 115.2832*u.GHz\n >>> radio_velocity = measured_freq.to(u.km/u.s, equivalencies=radio_CO_equiv)\n >>> radio_velocity # doctest: +FLOAT_CMP\n <Quantity -31.209092088877583 km / s>\n \"\"\"\n\n assert_is_spectral_unit(rest)\n\n ckms = 
_si.c.to_value('km/s')\n\n def to_vel_freq(x):\n restfreq = rest.to_value(si.Hz, equivalencies=spectral())\n return (restfreq-x) / (restfreq) * ckms\n\n def from_vel_freq(x):\n restfreq = rest.to_value(si.Hz, equivalencies=spectral())\n voverc = x/ckms\n return restfreq * (1-voverc)\n\n def to_vel_wav(x):\n restwav = rest.to_value(si.AA, spectral())\n return (x-restwav) / (x) * ckms\n\n def from_vel_wav(x):\n restwav = rest.to_value(si.AA, spectral())\n return restwav * ckms / (ckms-x)\n\n def to_vel_en(x):\n resten = rest.to_value(si.eV, equivalencies=spectral())\n return (resten-x) / (resten) * ckms\n\n def from_vel_en(x):\n resten = rest.to_value(si.eV, equivalencies=spectral())\n voverc = x/ckms\n return resten * (1-voverc)\n\n return [(si.Hz, si.km/si.s, to_vel_freq, from_vel_freq),\n (si.AA, si.km/si.s, to_vel_wav, from_vel_wav),\n (si.eV, si.km/si.s, to_vel_en, from_vel_en),\n ]\n\n\ndef doppler_optical(rest):\n r\"\"\"\n Return the equivalency pairs for the optical convention for velocity.\n\n The optical convention for the relation between velocity and frequency is:\n\n :math:`V = c \\frac{f_0 - f}{f } ; f(V) = f_0 ( 1 + V/c )^{-1}`\n\n Parameters\n ----------\n rest : `~astropy.units.Quantity`\n Any quantity supported by the standard spectral equivalencies\n (wavelength, energy, frequency, wave number).\n\n References\n ----------\n `NRAO site defining the conventions <http://www.gb.nrao.edu/~fghigo/gbtdoc/doppler.html>`_\n\n Examples\n --------\n >>> import astropy.units as u\n >>> CO_restfreq = 115.27120*u.GHz # rest frequency of 12 CO 1-0 in GHz\n >>> optical_CO_equiv = u.doppler_optical(CO_restfreq)\n >>> measured_freq = 115.2832*u.GHz\n >>> optical_velocity = measured_freq.to(u.km/u.s, equivalencies=optical_CO_equiv)\n >>> optical_velocity # doctest: +FLOAT_CMP\n <Quantity -31.20584348799674 km / s>\n \"\"\"\n\n assert_is_spectral_unit(rest)\n\n ckms = _si.c.to_value('km/s')\n\n def to_vel_freq(x):\n restfreq = rest.to_value(si.Hz, equivalencies=spectral())\n return ckms * (restfreq-x) / x\n\n def from_vel_freq(x):\n restfreq = rest.to_value(si.Hz, equivalencies=spectral())\n voverc = x/ckms\n return restfreq / (1+voverc)\n\n def to_vel_wav(x):\n restwav = rest.to_value(si.AA, spectral())\n return ckms * (x/restwav-1)\n\n def from_vel_wav(x):\n restwav = rest.to_value(si.AA, spectral())\n voverc = x/ckms\n return restwav * (1+voverc)\n\n def to_vel_en(x):\n resten = rest.to_value(si.eV, equivalencies=spectral())\n return ckms * (resten-x) / x\n\n def from_vel_en(x):\n resten = rest.to_value(si.eV, equivalencies=spectral())\n voverc = x/ckms\n return resten / (1+voverc)\n\n return [(si.Hz, si.km/si.s, to_vel_freq, from_vel_freq),\n (si.AA, si.km/si.s, to_vel_wav, from_vel_wav),\n (si.eV, si.km/si.s, to_vel_en, from_vel_en),\n ]\n\n\ndef doppler_relativistic(rest):\n r\"\"\"\n Return the equivalency pairs for the relativistic convention for velocity.\n\n The full relativistic convention for the relation between velocity and frequency is:\n\n :math:`V = c \\frac{f_0^2 - f^2}{f_0^2 + f^2} ; f(V) = f_0 \\frac{\\left(1 - (V/c)^2\\right)^{1/2}}{(1+V/c)}`\n\n Parameters\n ----------\n rest : `~astropy.units.Quantity`\n Any quantity supported by the standard spectral equivalencies\n (wavelength, energy, frequency, wave number).\n\n References\n ----------\n `NRAO site defining the conventions <http://www.gb.nrao.edu/~fghigo/gbtdoc/doppler.html>`_\n\n Examples\n --------\n >>> import astropy.units as u\n >>> CO_restfreq = 115.27120*u.GHz # rest frequency of 12 CO 1-0 in GHz\n >>> 
relativistic_CO_equiv = u.doppler_relativistic(CO_restfreq)\n >>> measured_freq = 115.2832*u.GHz\n >>> relativistic_velocity = measured_freq.to(u.km/u.s, equivalencies=relativistic_CO_equiv)\n >>> relativistic_velocity # doctest: +FLOAT_CMP\n <Quantity -31.207467619351537 km / s>\n >>> measured_velocity = 1250 * u.km/u.s\n >>> relativistic_frequency = measured_velocity.to(u.GHz, equivalencies=relativistic_CO_equiv)\n >>> relativistic_frequency # doctest: +FLOAT_CMP\n <Quantity 114.79156866993588 GHz>\n >>> relativistic_wavelength = measured_velocity.to(u.mm, equivalencies=relativistic_CO_equiv)\n >>> relativistic_wavelength # doctest: +FLOAT_CMP\n <Quantity 2.6116243681798923 mm>\n \"\"\"\n\n assert_is_spectral_unit(rest)\n\n ckms = _si.c.to_value('km/s')\n\n def to_vel_freq(x):\n restfreq = rest.to_value(si.Hz, equivalencies=spectral())\n return (restfreq**2-x**2) / (restfreq**2+x**2) * ckms\n\n def from_vel_freq(x):\n restfreq = rest.to_value(si.Hz, equivalencies=spectral())\n voverc = x/ckms\n return restfreq * ((1-voverc) / (1+(voverc)))**0.5\n\n def to_vel_wav(x):\n restwav = rest.to_value(si.AA, spectral())\n return (x**2-restwav**2) / (restwav**2+x**2) * ckms\n\n def from_vel_wav(x):\n restwav = rest.to_value(si.AA, spectral())\n voverc = x/ckms\n return restwav * ((1+voverc) / (1-voverc))**0.5\n\n def to_vel_en(x):\n resten = rest.to_value(si.eV, spectral())\n return (resten**2-x**2) / (resten**2+x**2) * ckms\n\n def from_vel_en(x):\n resten = rest.to_value(si.eV, spectral())\n voverc = x/ckms\n return resten * ((1-voverc) / (1+(voverc)))**0.5\n\n return [(si.Hz, si.km/si.s, to_vel_freq, from_vel_freq),\n (si.AA, si.km/si.s, to_vel_wav, from_vel_wav),\n (si.eV, si.km/si.s, to_vel_en, from_vel_en),\n ]\n\n\ndef molar_mass_amu():\n \"\"\"\n Returns the equivalence between amu and molar mass.\n \"\"\"\n return [\n (si.g/si.mol, astrophys.u)\n ]\n\n\ndef mass_energy():\n \"\"\"\n Returns a list of equivalence pairs that handle the conversion\n between mass and energy.\n \"\"\"\n\n return [(si.kg, si.J, lambda x: x * _si.c.value ** 2,\n lambda x: x / _si.c.value ** 2),\n (si.kg / si.m ** 2, si.J / si.m ** 2,\n lambda x: x * _si.c.value ** 2,\n lambda x: x / _si.c.value ** 2),\n (si.kg / si.m ** 3, si.J / si.m ** 3,\n lambda x: x * _si.c.value ** 2,\n lambda x: x / _si.c.value ** 2),\n (si.kg / si.s, si.J / si.s, lambda x: x * _si.c.value ** 2,\n lambda x: x / _si.c.value ** 2),\n ]\n\n\ndef brightness_temperature(frequency, beam_area=None):\n r\"\"\"\n Defines the conversion between Jy/sr and \"brightness temperature\",\n :math:`T_B`, in Kelvins. The brightness temperature is a unit very\n commonly used in radio astronomy. See, e.g., \"Tools of Radio Astronomy\"\n (Wilson 2009) eqn 8.16 and eqn 8.19 (these pages are available on `google\n books\n <http://books.google.com/books?id=9KHw6R8rQEMC&pg=PA179&source=gbs_toc_r&cad=4#v=onepage&q&f=false>`__).\n\n :math:`T_B \\equiv S_\\nu / \\left(2 k \\nu^2 / c^2 \\right)`\n\n If the input is in Jy/beam or Jy (assuming it came from a single beam), the\n beam area is essential for this computation: the brightness temperature is\n inversely proportional to the beam area.\n\n Parameters\n ----------\n frequency : `~astropy.units.Quantity` with spectral units\n The observed ``spectral`` equivalent `~astropy.units.Unit` (e.g.,\n frequency or wavelength). 
The variable is named 'frequency' because it\n is more commonly used in radio astronomy.\n BACKWARD COMPATIBILITY NOTE: previous versions of the brightness\n temperature equivalency used the keyword ``disp``, which is no longer\n supported.\n beam_area : angular area equivalent\n Beam area in angular units, i.e. steradian equivalent\n\n Examples\n --------\n Arecibo C-band beam::\n\n >>> import numpy as np\n >>> from astropy import units as u\n >>> beam_sigma = 50*u.arcsec\n >>> beam_area = 2*np.pi*(beam_sigma)**2\n >>> freq = 5*u.GHz\n >>> equiv = u.brightness_temperature(freq)\n >>> (1*u.Jy/beam_area).to(u.K, equivalencies=equiv) # doctest: +FLOAT_CMP\n <Quantity 3.526295144567176 K>\n\n VLA synthetic beam::\n\n >>> bmaj = 15*u.arcsec\n >>> bmin = 15*u.arcsec\n >>> fwhm_to_sigma = 1./(8*np.log(2))**0.5\n >>> beam_area = 2.*np.pi*(bmaj*bmin*fwhm_to_sigma**2)\n >>> freq = 5*u.GHz\n >>> equiv = u.brightness_temperature(freq)\n >>> (u.Jy/beam_area).to(u.K, equivalencies=equiv) # doctest: +FLOAT_CMP\n <Quantity 217.2658703625732 K>\n\n Any generic surface brightness:\n\n >>> surf_brightness = 1e6*u.MJy/u.sr\n >>> surf_brightness.to(u.K, equivalencies=u.brightness_temperature(500*u.GHz)) # doctest: +FLOAT_CMP\n <Quantity 130.1931904778803 K>\n \"\"\"\n if frequency.unit.is_equivalent(si.sr):\n if not beam_area.unit.is_equivalent(si.Hz):\n raise ValueError(\"The inputs to `brightness_temperature` are \"\n \"frequency and angular area.\")\n warnings.warn(\"The inputs to `brightness_temperature` have changed. \"\n \"Frequency is now the first input, and angular area \"\n \"is the second, optional input.\",\n DeprecationWarning)\n frequency, beam_area = beam_area, frequency\n\n nu = frequency.to(si.GHz, spectral())\n\n if beam_area is not None:\n beam = beam_area.to_value(si.sr)\n def convert_Jy_to_K(x_jybm):\n factor = (2 * _si.k_B * si.K * nu**2 / _si.c**2).to(astrophys.Jy).value\n return (x_jybm / beam / factor)\n\n def convert_K_to_Jy(x_K):\n factor = (astrophys.Jy / (2 * _si.k_B * nu**2 / _si.c**2)).to(si.K).value\n return (x_K * beam / factor)\n\n return [(astrophys.Jy, si.K, convert_Jy_to_K, convert_K_to_Jy),\n (astrophys.Jy/astrophys.beam, si.K, convert_Jy_to_K, convert_K_to_Jy)\n ]\n else:\n def convert_JySr_to_K(x_jysr):\n factor = (2 * _si.k_B * si.K * nu**2 / _si.c**2).to(astrophys.Jy).value\n return (x_jysr / factor)\n\n def convert_K_to_JySr(x_K):\n factor = (astrophys.Jy / (2 * _si.k_B * nu**2 / _si.c**2)).to(si.K).value\n return (x_K / factor) # multiplied by 1x for 1 steradian\n\n return [(astrophys.Jy/si.sr, si.K, convert_JySr_to_K, convert_K_to_JySr)]\n\n\ndef beam_angular_area(beam_area):\n \"\"\"\n Convert between the ``beam`` unit, which is commonly used to express the area\n of a radio telescope resolution element, and an area on the sky.\n This equivalency also supports direct conversion between ``Jy/beam`` and\n ``Jy/steradian`` units, since that is a common operation.\n\n Parameters\n ----------\n beam_area : angular area equivalent\n The area of the beam in angular area units (e.g., steradians)\n \"\"\"\n return [(astrophys.beam, Unit(beam_area)),\n (astrophys.beam**-1, Unit(beam_area)**-1),\n (astrophys.Jy/astrophys.beam, astrophys.Jy/Unit(beam_area)),\n ]\n\n\ndef thermodynamic_temperature(frequency, T_cmb=None):\n r\"\"\"Defines the conversion between Jy/beam and \"thermodynamic temperature\",\n :math:`T_{CMB}`, in Kelvins. The thermodynamic temperature is a unit very\n commonly used in cosmology. 
See eqn 8 in [1]\n\n :math:`K_{CMB} \\equiv I_\\nu / \\left(2 k \\nu^2 / c^2 f(\\nu) \\right)`\n\n with :math:`f(\\nu) = \\frac{ x^2 e^x}{(e^x - 1 )^2}`\n where :math:`x = h \\nu / k T`\n\n Parameters\n ----------\n frequency : `~astropy.units.Quantity` with spectral units\n The observed `spectral` equivalent `~astropy.units.Unit` (e.g.,\n frequency or wavelength)\n T_cmb : `~astropy.units.Quantity` with temperature units (default Planck15 value)\n The CMB temperature at z=0\n\n Notes\n -----\n For broad band receivers, this conversion do not hold\n as it highly depends on the frequency\n\n References\n ----------\n .. [1] Planck 2013 results. IX. HFI spectral response\n https://arxiv.org/abs/1303.5070\n\n Examples\n --------\n Planck HFI 143 GHz::\n\n >>> from astropy import units as u\n >>> freq = 143 * u.GHz\n >>> equiv = u.thermodynamic_temperature(freq)\n >>> (1. * u.mK).to(u.MJy / u.sr, equivalencies=equiv) # doctest: +FLOAT_CMP\n <Quantity 0.37993172 MJy / sr>\n\n \"\"\"\n nu = frequency.to(si.GHz, spectral())\n\n if T_cmb is None:\n from ..cosmology import Planck15\n T_cmb = Planck15.Tcmb0\n\n def f(nu, T_cmb=T_cmb):\n x = _si.h * nu / _si.k_B / T_cmb\n return x**2 * np.exp(x) / np.expm1(x)**2\n\n def convert_Jy_to_K(x_jybm):\n factor = (f(nu) * 2 * _si.k_B * si.K * nu**2 / _si.c**2).to_value(astrophys.Jy)\n return x_jybm / factor\n\n def convert_K_to_Jy(x_K):\n factor = (astrophys.Jy / (f(nu) * 2 * _si.k_B * nu**2 / _si.c**2)).to_value(si.K)\n return x_K / factor\n\n return [(astrophys.Jy/si.sr, si.K, convert_Jy_to_K, convert_K_to_Jy)]\n\n\ndef temperature():\n \"\"\"Convert between Kelvin, Celsius, and Fahrenheit here because\n Unit and CompositeUnit cannot do addition or subtraction properly.\n \"\"\"\n from .imperial import deg_F\n return [\n (si.K, si.deg_C, lambda x: x - 273.15, lambda x: x + 273.15),\n (si.deg_C, deg_F, lambda x: x * 1.8 + 32.0, lambda x: (x - 32.0) / 1.8),\n (si.K, deg_F, lambda x: (x - 273.15) * 1.8 + 32.0,\n lambda x: ((x - 32.0) / 1.8) + 273.15)]\n\n\ndef temperature_energy():\n \"\"\"Convert between Kelvin and keV(eV) to an equivalent amount.\"\"\"\n return [\n (si.K, si.eV, lambda x: x / (_si.e.value / _si.k_B.value),\n lambda x: x * (_si.e.value / _si.k_B.value))]\n\n\ndef assert_is_spectral_unit(value):\n try:\n value.to(si.Hz, spectral())\n except (AttributeError, UnitsError) as ex:\n raise UnitsError(\"The 'rest' value must be a spectral equivalent \"\n \"(frequency, wavelength, or energy).\")\n\n\ndef pixel_scale(pixscale):\n \"\"\"\n Convert between pixel distances (in units of ``pix``) and angular units,\n given a particular ``pixscale``.\n\n Parameters\n ----------\n pixscale : `~astropy.units.Quantity`\n The pixel scale either in units of angle/pixel or pixel/angle.\n \"\"\"\n if pixscale.unit.is_equivalent(si.arcsec/astrophys.pix):\n pixscale_val = pixscale.to_value(si.radian/astrophys.pix)\n elif pixscale.unit.is_equivalent(astrophys.pix/si.arcsec):\n pixscale_val = (1/pixscale).to_value(si.radian/astrophys.pix)\n else:\n raise UnitsError(\"The pixel scale must be in angle/pixel or \"\n \"pixel/angle\")\n\n return [(astrophys.pix, si.radian, lambda px: px*pixscale_val, lambda rad: rad/pixscale_val)]\n\n\ndef plate_scale(platescale):\n \"\"\"\n Convert between lengths (to be interpreted as lengths in the focal plane)\n and angular units with a specified ``platescale``.\n\n Parameters\n ----------\n platescale : `~astropy.units.Quantity`\n The pixel scale either in units of distance/pixel or distance/angle.\n \"\"\"\n if 
platescale.unit.is_equivalent(si.arcsec/si.m):\n platescale_val = platescale.to_value(si.radian/si.m)\n elif platescale.unit.is_equivalent(si.m/si.arcsec):\n platescale_val = (1/platescale).to_value(si.radian/si.m)\n else:\n raise UnitsError(\"The pixel scale must be in angle/distance or \"\n \"distance/angle\")\n\n return [(si.m, si.radian, lambda d: d*platescale_val, lambda rad: rad/platescale_val)]\n", "docs/units/equivalencies.rst": ".. |quantity| replace:: :class:`~astropy.units.Quantity`\n\n.. _unit_equivalencies:\n\nEquivalencies\n*************\n\nThe unit module has machinery for supporting equivalences between\ndifferent units in certain contexts, namely when equations can\nuniquely relate a value in one unit to a different unit. A good\nexample is the equivalence between wavelength, frequency and energy\nfor specifying a wavelength of radiation. Normally these units are not\nconvertible, but when understood as representing light, they are\nconvertible in certain contexts. Here we describe how to use the\nequivalencies included in `astropy.units` and how to\ndefine new equivalencies.\n\nEquivalencies are used by passing a list of equivalency pairs to the\n``equivalencies`` keyword argument of :meth:`Quantity.to\n<astropy.units.quantity.Quantity.to>` or :meth:`Unit.to\n<astropy.units.core.UnitBase.to>` methods. Alternatively, if a larger\npiece of code needs the same equivalencies, one can set them for a\n:ref:`given context <equivalency-context>`.\n\nBuilt-in equivalencies\n======================\n\nParallax Units\n--------------\n\n:func:`~astropy.units.equivalencies.parallax` is a function that returns an\nequivalency list to handle conversions between angles and length.\n\nLength and angles are not normally convertible, so\n:meth:`~astropy.units.core.UnitBase.to` raises an exception::\n\n >>> from astropy import units as u\n >>> (8.0 * u.arcsec).to(u.parsec) # doctest: +IGNORE_EXCEPTION_DETAIL\n Traceback (most recent call last):\n ...\n UnitConversionError: 'arcsec' (angle) and 'pc' (length) are not convertible\n\nHowever, when passing the result of\n:func:`~astropy.units.equivalencies.parallax` as the third argument to the\n:meth:`~astropy.units.core.UnitBase.to` method, angles can be converted\ninto units of length (and vice versa).\n\n >>> (8.0 * u.arcsec).to(u.parsec, equivalencies=u.parallax())\n <Quantity 0.125 pc>\n >>> u.AU.to(u.arcminute, equivalencies=u.parallax())\n 3437.7467707580054\n\nAngles as Dimensionless Units\n-----------------------------\n\nAngles are treated as a physically distinct type, which usually helps\nto avoid mistakes. However, this is not very handy when working with\nunits related to rotational energy or the small angle approximation.\n(Indeed, this double-sidedness underlies why radian went from\n`supplementary to derived unit <http://www.bipm.org/en/CGPM/db/20/8/>`__.)\nThe function :func:`~astropy.units.equivalencies.dimensionless_angles`\nprovides the required equivalency list that helps convert between\nangles and dimensionless units. It is somewhat\ndifferent from all others in that it allows an arbitrary change in the\nnumber of powers to which radian is raised (i.e., including zero and thus\ndimensionless). 
For instance, normally the following raise exceptions::\n\n >>> from astropy import units as u\n >>> u.degree.to('') # doctest: +IGNORE_EXCEPTION_DETAIL\n Traceback (most recent call last):\n ...\n UnitConversionError: 'deg' (angle) and '' (dimensionless) are not convertible\n >>> (u.kg * u.m**2 * (u.cycle / u.s)**2).to(u.J) # doctest: +IGNORE_EXCEPTION_DETAIL\n Traceback (most recent call last):\n ...\n UnitConversionError: 'cycle2 kg m2 / s2' and 'J' (energy) are not convertible\n\nBut when passing the proper conversion function,\n:func:`~astropy.units.equivalencies.dimensionless_angles`, it works.\n\n >>> u.deg.to('', equivalencies=u.dimensionless_angles()) # doctest: +FLOAT_CMP\n 0.017453292519943295\n >>> (0.5e38 * u.kg * u.m**2 * (u.cycle / u.s)**2).to(u.J,\n ... equivalencies=u.dimensionless_angles()) # doctest: +FLOAT_CMP\n <Quantity 1.9739208802178715e+39 J>\n >>> import numpy as np\n >>> np.exp((1j*0.125*u.cycle).to('', equivalencies=u.dimensionless_angles())) # doctest: +SKIP\n <Quantity 0.70710678+0.70710678j>\n\nThe example with complex numbers is also one may well be doing a fair\nnumber of similar calculations. For such situations, there is the\noption to :ref:`set default equivalencies <equivalency-context>`.\n\nIn some situations, this equivalency may behave differently than\nanticipated. For instance, it might at first seem reasonable to use it\nfor converting from an angular velocity :math:`\\omega` in radians per\nsecond to the corresponding frequency :math:`f` in hertz (i.e., to\nimplement :math:`f=\\omega/2\\pi`). However, attempting this yields:\n\n >>> (1*u.rad/u.s).to(u.Hz, equivalencies=u.dimensionless_angles()) # doctest: +FLOAT_CMP\n <Quantity 1. Hz>\n >>> (1*u.cycle/u.s).to(u.Hz, equivalencies=u.dimensionless_angles()) # doctest: +FLOAT_CMP\n <Quantity 6.283185307179586 Hz>\n\nHere, we might have expected ~0.159 Hz in the first example and 1 Hz in\nthe second. However, :func:`~astropy.units.equivalencies.dimensionless_angles`\nconverts to radians per second and then drops radians as a unit. The\nimplicit mistake made in these examples is that the unit Hz is taken to be\nequivalent to cycles per second, which it is not (it is just \"per second\").\nThis realization also leads to the solution: to use an explicit equivalency\nbetween cycles per second and hertz:\n\n >>> (1*u.rad/u.s).to(u.Hz, equivalencies=[(u.cy/u.s, u.Hz)]) # doctest: +FLOAT_CMP\n <Quantity 0.15915494309189535 Hz>\n >>> (1*u.cy/u.s).to(u.Hz, equivalencies=[(u.cy/u.s, u.Hz)]) # doctest: +FLOAT_CMP\n <Quantity 1. 
Hz>\n\nSpectral Units\n--------------\n\n:func:`~astropy.units.equivalencies.spectral` is a function that returns\nan equivalency list to handle conversions between wavelength,\nfrequency, energy, and wave number.\n\nAs mentioned above with parallax units, we simply pass a list of\nequivalencies (in this case, the result of\n:func:`~astropy.units.equivalencies.spectral`) as the third argument to the\n:meth:`~astropy.units.core.UnitBase.to` method and wavelength, frequency and\nenergy can be converted.\n\n >>> ([1000, 2000] * u.nm).to(u.Hz, equivalencies=u.spectral()) # doctest: +FLOAT_CMP\n <Quantity [2.99792458e+14, 1.49896229e+14] Hz>\n >>> ([1000, 2000] * u.nm).to(u.eV, equivalencies=u.spectral()) # doctest: +FLOAT_CMP\n <Quantity [1.23984193, 0.61992096] eV>\n\nThese equivalencies even work with non-base units::\n\n >>> # Inches to calories\n >>> from astropy.units import imperial\n >>> imperial.inch.to(imperial.Cal, equivalencies=u.spectral()) # doctest: +FLOAT_CMP\n 1.869180759162485e-27\n\nSpectral (Doppler) equivalencies\n--------------------------------\n\nSpectral equivalencies allow you to convert between wavelength,\nfrequency, energy, and wave number but not to velocity, which is\nfrequently the quantity of interest.\n\nIt is fairly straightforward to define the equivalency, but note that there are\ndifferent `conventions <http://www.gb.nrao.edu/~fghigo/gbtdoc/doppler.html>`__.\nIn these conventions :math:`f_0` is the rest frequency, :math:`f` is the observed frequency,\n:math:`V` is the velocity, and :math:`c` is the speed of light:\n\n * Radio :math:`V = c \\frac{f_0 - f}{f_0} ; f(V) = f_0 ( 1 - V/c )`\n * Optical :math:`V = c \\frac{f_0 - f}{f } ; f(V) = f_0 ( 1 + V/c )^{-1}`\n * Relativistic :math:`V = c \\frac{f_0^2 - f^2}{f_0^2 + f^2} ; f(V) = f_0 \\frac{\\left(1 - (V/c)^2\\right)^{1/2}}{(1+V/c)}`\n\nThese three conventions are implemented in\n:mod:`astropy.units.equivalencies` as\n:func:`~astropy.units.equivalencies.doppler_optical`,\n:func:`~astropy.units.equivalencies.doppler_radio`, and\n:func:`~astropy.units.equivalencies.doppler_relativistic`. Example use::\n\n >>> restfreq = 115.27120 * u.GHz # rest frequency of 12 CO 1-0 in GHz\n >>> freq_to_vel = u.doppler_radio(restfreq)\n >>> (116e9 * u.Hz).to(u.km / u.s, equivalencies=freq_to_vel) # doctest: +FLOAT_CMP\n <Quantity -1895.4321928669085 km / s>\n\nSpectral Flux / Luminosity Density Units\n----------------------------------------\n\nThere is also support for spectral flux and luminosity density units. Their use\nis more complex, since it is necessary to also supply the location in the\nspectrum for which the conversions will be done, and the units of those spectral\nlocations. The function that handles these unit conversions is\n:func:`~astropy.units.equivalencies.spectral_density`. This function takes as\nits arguments the |quantity| for the spectral location. For example::\n\n >>> (1.5 * u.Jy).to(u.photon / u.cm**2 / u.s / u.Hz,\n ... equivalencies=u.spectral_density(3500 * u.AA)) # doctest: +FLOAT_CMP\n <Quantity 2.6429114293019694e-12 ph / (cm2 Hz s)>\n >>> (1.5 * u.Jy).to(u.photon / u.cm**2 / u.s / u.micron,\n ... equivalencies=u.spectral_density(3500 * u.AA)) # doctest: +FLOAT_CMP\n <Quantity 6467.9584789120845 ph / (cm2 micron s)>\n >>> a = 1. * u.photon / u.s / u.angstrom\n >>> a.to(u.erg / u.s / u.Hz,\n ... 
equivalencies=u.spectral_density(5500 * u.AA)) # doctest: +FLOAT_CMP\n <Quantity 3.6443382634999996e-23 erg / (Hz s)>\n\nBrightness Temperature / Surface Brightness Equivalency\n-------------------------------------------------------\n\nThere is an equivalency between surface brightness (flux density per area) and\nbrightness temperature. This equivalency is often referred to as \"Antenna\nGain\" since, at a given frequency, telescope brightness sensitivity is\nunrelated to aperture size, but flux density sensitivity is, so this\nequivalency is only dependent on the aperture size. See `Tools of Radio\nAstronomy\n<http://books.google.com/books?id=9KHw6R8rQEMC&pg=PA179&source=gbs_toc_r&cad=4#v=onepage&q&f=false>`__\nfor details.\n\n.. note:: The brightness temperature mentioned here is the Rayleigh-Jeans\n equivalent temperature, which results in a linear relation between\n flux and temperature. This is the convention that is most often used\n in relation to observations, but if you are interested in computing\n the *exact* temperature of a planck function that would produce a\n given flux, you should not use this equivalency.\n\nThe `~astropy.units.equivalencies.brightness_temperature` equivalency requires\nthe beam area and frequency as arguments. Recalling that the area of a 2D\ngaussian is :math:`2 \\pi \\sigma^2` (see `wikipedia\n<https://en.wikipedia.org/wiki/Gaussian_function#Two-dimensional_Gaussian_function>`_),\nhere is an example::\n\n >>> import numpy as np\n >>> beam_sigma = 50*u.arcsec\n >>> omega_B = 2 * np.pi * beam_sigma**2\n >>> freq = 5 * u.GHz\n >>> (1*u.Jy/omega_B).to(u.K, equivalencies=u.brightness_temperature(freq)) # doctest: +FLOAT_CMP\n <Quantity 3.526295144567176 K>\n\nIf you have beam full-width half-maxima (FWHM), which are often quoted and are\nthe values stored in the FITS header keywords BMAJ and BMIN, a more appropriate\nexample converts the FWHM to sigma::\n\n >>> import numpy as np\n >>> beam_fwhm = 50*u.arcsec\n >>> fwhm_to_sigma = 1. / (8 * np.log(2))**0.5\n >>> beam_sigma = beam_fwhm * fwhm_to_sigma\n >>> omega_B = 2 * np.pi * beam_sigma**2\n >>> freq = 5 * u.GHz\n >>> (1*u.Jy/omega_B).to(u.K, equivalencies=u.brightness_temperature(freq)) # doctest: +FLOAT_CMP\n <Quantity 19.553932298231704 K>\n\nYou can also convert between ``Jy/beam`` and ``K`` by specifying the beam area::\n\n >>> import numpy as np\n >>> beam_fwhm = 50*u.arcsec\n >>> fwhm_to_sigma = 1. / (8 * np.log(2))**0.5\n >>> beam_sigma = beam_fwhm * fwhm_to_sigma\n >>> omega_B = 2 * np.pi * beam_sigma**2\n >>> freq = 5 * u.GHz\n >>> (1*u.Jy/u.beam).to(u.K, u.brightness_temperature(freq, beam_area=omega_B)) # doctest: +FLOAT_CMP\n <Quantity 19.553932298231704 K>\n\nFinally, there is an equivalency that allows you to convert from Jansky to Kelvin.\nIn this case, the Jansky unit is *implicitly* Jansky/beam. Because of the implicit\nassumed per beam unit, this approach is deprecated.::\n\n >>> import numpy as np\n >>> beam_fwhm = 50*u.arcsec\n >>> fwhm_to_sigma = 1. 
/ (8 * np.log(2))**0.5\n >>> beam_sigma = beam_fwhm * fwhm_to_sigma\n >>> omega_B = 2 * np.pi * beam_sigma**2\n >>> freq = 5 * u.GHz\n >>> # DEPRECATED\n >>> (1*u.Jy).to(u.K, u.brightness_temperature(freq, beam_area=omega_B)) # doctest: +FLOAT_CMP\n <Quantity 19.553932298231704 K>\n\n\nBeam Equivalency\n----------------\n\nRadio data, especially from interferometers, is often produced in units of ``Jy/beam``.\nConverting this number to a beam-independent value, e.g., ``Jy/sr``, can be done\nwith the `~astropy.units.equivalencies.beam_angular_area` equivalency::\n\n >>> import numpy as np\n >>> beam_fwhm = 50*u.arcsec\n >>> fwhm_to_sigma = 1. / (8 * np.log(2))**0.5\n >>> beam_sigma = beam_fwhm * fwhm_to_sigma\n >>> omega_B = 2 * np.pi * beam_sigma**2\n >>> (1*u.Jy/u.beam).to(u.MJy/u.sr, equivalencies=u.beam_angular_area(omega_B)) # doctest: +FLOAT_CMP\n <Quantity 15.019166691021288 MJy / sr>\n\n\nNote that the `radio_beam <https://github.com/radio-astro-tools/radio-beam>`_ package\ndeals with beam input/output and various operations more directly.\n\nTemperature Energy Equivalency\n------------------------------\n\nThis equivalency allows conversion between temperature and its equivalent\nin energy (i.e., the temperature multiplied by the Boltzmann constant),\nusually expressed in electronvolts. This is used frequently for\nobservations at high-energy, be it for solar or X-ray astronomy. Example::\n\n >>> import astropy.units as u\n >>> t_k = 1e6 * u.K\n >>> t_k.to(u.eV, equivalencies=u.temperature_energy()) # doctest: +FLOAT_CMP\n <Quantity 86.17332384960955 eV>\n\nThermodynamic Temperature Equivalency\n-------------------------------------\n\nThis :func:`~astropy.units.equivalencies.thermodynamic_temperature`\nequivalency allows conversion between Jy/beam and \"thermodynamic\ntemperature\", :math:`T_{CMB}`, in Kelvins. Example::\n\n >>> import astropy.units as u\n >>> nu = 143 * u.GHz\n >>> t_k = 0.0026320518775281975 * u.K\n >>> t_k.to(u.MJy / u.sr, equivalencies=u.thermodynamic_temperature(nu)) # doctest: +FLOAT_CMP\n <Quantity 1. MJy / sr>\n\n\nMolar Mass AMU Equivalency\n--------------------------\n\nThis equivalency allows conversion\nbetween the atomic mass unit and the equivalent g/mol.\nFor reference to why this was added,\nrefer to `NIST Mole Reference <https://physics.nist.gov/cuu/Units/mole.html>`_\nThe following is an example of it's usage:\n\n >>> import astropy.units as u\n >>> import astropy.constants as const\n >>> x = 1 * (u.g / u.mol)\n >>> y = 1 * u.u\n >>> x.to(u.u, equivalencies=u.molar_mass_amu()) # doctest: +FLOAT_CMP\n <Quantity 1.0 u>\n >>> y.to(u.g/u.mol, equivalencies=u.molar_mass_amu()) # doctest: +FLOAT_CMP\n <Quantity 1.0 g / mol>\n\nPixel and plate scale Equivalencies\n-----------------------------------\n\nThese equivalencies are for converting between angular scales and either linear\nscales in the focal plane or distances in units of the number of pixels. 
For\nexample, suppose you are working with cutouts from the Sloan Digital Sky Survey,\nwhich defaults to a pixel scale of 0.4 arcseconds per pixel, and want to know\nthe true size of something that you measure to be 240 pixels across in the\ncutout image::\n\n >>> import astropy.units as u\n >>> sdss_pixelscale = u.pixel_scale(0.4*u.arcsec/u.pixel)\n >>> (240*u.pixel).to(u.arcmin, sdss_pixelscale) # doctest: +FLOAT_CMP\n <Quantity 1.6 arcmin>\n\nOr maybe you are designing an instrument for a telescope that someone told you\nhas a (inverse) plate scale of 7.8 meters per radian (for your desired focus),\nand you want to know how big your pixels need to be to cover half an arcsecond::\n\n >>> import astropy.units as u\n >>> tel_platescale = u.plate_scale(7.8*u.m/u.radian)\n >>> (0.5*u.arcsec).to(u.micron, tel_platescale) # doctest: +FLOAT_CMP\n <Quantity 18.9077335632719 micron>\n\nPhotometric Zero Point Equivalencies\n------------------------------------\n\nThis equivalency provides an easy way to move between photometric systems (i.e.,\nthose defined relative to a particular zero-point flux) and absolute fluxes.\nThis is most useful in conjuction with support for :ref:`logarithmic_units`.\nFor example, suppose you are observing a target with a filter with a reported\nstandard zero point of 3631.1 Jy::\n\n >>> target_flux = 1.2 * u.nanomaggy\n >>> zero_point_star_equiv = u.zero_point_flux(3631.1 * u.Jy)\n >>> u.Magnitude(target_flux.to(u.AB, zero_point_star_equiv)) # doctest: +FLOAT_CMP\n <Magnitude 22.30195136 mag(AB)>\n\nWriting new equivalencies\n=========================\n\nAn equivalence list is just a list of tuples, where each tuple has 4\nelements::\n\n (from_unit, to_unit, forward, backward)\n\n``from_unit`` and ``to_unit`` are the equivalent units. ``forward`` and\n``backward`` are functions that convert values between those units. ``forward``\nand ``backward`` are optional, and if omitted such an equivalency simply\ndeclares that the two units should be taken as equivalent.\n\nFor example, until 1964 the metric liter was defined as the volume of\n1kg of water at 4°C at 760mm mercury pressure. Volumes and masses are\nnot normally directly convertible, but if we hold the constants in the\n1964 definition of the liter as true, we could build an equivalency\nfor them::\n\n >>> liters_water = [\n ... (u.l, u.g, lambda x: 1000.0 * x, lambda x: x / 1000.0)\n ... ]\n >>> u.l.to(u.kg, 1, equivalencies=liters_water)\n 1.0\n\nNote that the equivalency can be used with any other compatible units::\n\n >>> from astropy.units import imperial\n >>> imperial.gallon.to(imperial.pound, 1, equivalencies=liters_water) # doctest: +FLOAT_CMP\n 8.345404463333525\n\nAnd it also works in the other direction::\n\n >>> imperial.lb.to(imperial.pint, 1, equivalencies=liters_water) # doctest: +FLOAT_CMP\n 0.9586114172355459\n\nA slightly more complicated example: Spectral Doppler Equivalencies\n-------------------------------------------------------------------\n\nWe show how to define an equivalency using the radio convention for CO 1-0.\nThis function is already defined in\n:func:`~astropy.units.equivalencies.doppler_radio`,\nbut this example is illustrative::\n\n >>> from astropy.constants import si\n >>> restfreq = 115.27120 # rest frequency of 12 CO 1-0 in GHz\n >>> freq_to_vel = [(u.GHz, u.km/u.s,\n ... lambda x: (restfreq-x) / restfreq * si.c.to_value('km/s'),\n ... 
lambda x: (1-x/si.c.to_value('km/s')) * restfreq )]\n >>> u.Hz.to(u.km / u.s, 116e9, equivalencies=freq_to_vel) # doctest: +FLOAT_CMP\n -1895.4321928669262\n >>> (116e9 * u.Hz).to(u.km / u.s, equivalencies=freq_to_vel) # doctest: +FLOAT_CMP\n <Quantity -1895.4321928669262 km / s>\n\nNote that once this is defined for GHz and km/s, it will work for all other\nunits of frequency and velocity. ``x`` is converted from the input frequency\nunit (e.g., Hz) to GHz before being passed to ``lambda x:``. Similarly, the\nreturn value is assumed to be in units of ``km/s``, which is why the ``.value``\nof ``c`` is used instead of the constant.\n\nDisplaying available equivalencies\n==================================\n\nThe :meth:`~astropy.units.core.UnitBase.find_equivalent_units` method also\nunderstands equivalencies. For example, without passing equivalencies,\nthere are three compatible units for ``Hz`` in the standard set::\n\n >>> u.Hz.find_equivalent_units()\n Primary name | Unit definition | Aliases\n [\n Bq | 1 / s | becquerel ,\n Ci | 3.7e+10 / s | curie ,\n Hz | 1 / s | Hertz, hertz ,\n ]\n\nHowever, when passing the spectral equivalency, you can see there are\nall kinds of things that ``Hz`` can be converted to::\n\n >>> u.Hz.find_equivalent_units(equivalencies=u.spectral())\n Primary name | Unit definition | Aliases\n [\n AU | 1.49598e+11 m | au, astronomical_unit ,\n Angstrom | 1e-10 m | AA, angstrom ,\n Bq | 1 / s | becquerel ,\n Ci | 3.7e+10 / s | curie ,\n Hz | 1 / s | Hertz, hertz ,\n J | kg m2 / s2 | Joule, joule ,\n Ry | 2.17987e-18 kg m2 / s2 | rydberg ,\n cm | 0.01 m | centimeter ,\n eV | 1.60218e-19 kg m2 / s2 | electronvolt ,\n earthRad | 6.3781e+06 m | R_earth, Rearth ,\n erg | 1e-07 kg m2 / s2 | ,\n jupiterRad | 7.1492e+07 m | R_jup, Rjup, R_jupiter, Rjupiter ,\n k | 100 / m | Kayser, kayser ,\n lyr | 9.46073e+15 m | lightyear ,\n m | irreducible | meter ,\n micron | 1e-06 m | ,\n pc | 3.08568e+16 m | parsec ,\n solRad | 6.957e+08 m | R_sun, Rsun ,\n ]\n\n.. _equivalency-context:\n\nUsing equivalencies in larger pieces of code\n============================================\nSometimes one has an involved calculation where one is regularly\nswitching back between equivalent units. For these cases, one can set\nequivalencies that will by default be used, in a way similar to which\none can :ref:`enable other units <enabling-other-units>`.\n\nFor instance, to enable radian to be treated as a dimensionless unit,\nsimply do:\n\n.. doctest-skip::\n\n >>> import astropy.units as u\n >>> u.set_enabled_equivalencies(u.dimensionless_angles())\n <astropy.units.core._UnitContext object at ...>\n >>> u.deg.to('') # doctest: +FLOAT_CMP\n 0.017453292519943295\n\nHere, any list of equivalencies could be used, or one could add, e.g.,\n:func:`~astropy.units.equivalencies.spectral` and\n:func:`~astropy.units.equivalencies.spectral_density` (since these return\nlists, they should indeed be combined by adding them together).\n\nThe disadvantage of the above approach is that you may forget to turn\nthe default off (done by giving an empty argument). To automate this,\na context manager is provided:\n\n.. doctest-skip::\n\n >>> import astropy.units as u\n >>> with u.set_enabled_equivalencies(u.dimensionless_angles()):\n ... phase = 0.5 * u.cycle\n ... c = np.exp(1j*phase)\n >>> c # doctest: +FLOAT_CMP\n <Quantity (-1+1.2246063538223773e-16j) >\n"}
|
diff --git a/CHANGES.rst b/CHANGES.rst
index bbd4ad7014dc..ca55537547af 100644
--- a/CHANGES.rst
+++ b/CHANGES.rst
@@ -224,6 +224,8 @@ astropy.units
- ``AB`` and ``ST`` are now enabled by default, and have alternate names
``ABflux`` and ``STflux`` [#7891]
+- Added ``littleh`` unit and associated ``with_H0`` equivalency. [#7970]
+
astropy.utils
^^^^^^^^^^^^^
diff --git a/docs/units/equivalencies.rst b/docs/units/equivalencies.rst
index 86206968e1e9..7236ad0f8a66 100644
--- a/docs/units/equivalencies.rst
+++ b/docs/units/equivalencies.rst
@@ -347,8 +347,8 @@ and you want to know how big your pixels need to be to cover half an arcsecond::
>>> (0.5*u.arcsec).to(u.micron, tel_platescale) # doctest: +FLOAT_CMP
<Quantity 18.9077335632719 micron>
-Photometric Zero Point Equivalencies
-------------------------------------
+Photometric Zero Point Equivalency
+----------------------------------
This equivalency provides an easy way to move between photometric systems (i.e.,
those defined relative to a particular zero-point flux) and absolute fluxes.
@@ -361,6 +361,47 @@ standard zero point of 3631.1 Jy::
>>> u.Magnitude(target_flux.to(u.AB, zero_point_star_equiv)) # doctest: +FLOAT_CMP
<Magnitude 22.30195136 mag(AB)>
+Reduced Hubble constant/"little-h" Equivalency
+----------------------------------------------
+
+The dimensionless version of the Hubble constant - often known as "little h" -
+is a frequently-used quantity in extragalactic astrophysics. It is also widely
+known as the bane of beginners' existence in such fields (see, e.g., the title of
+`this paper <https://doi.org/10.1017/pasa.2013.31>`__, which also provides
+valuable advice on the use of little h). Astropy provides an equivalency that
+helps keep this straight in at least some of these cases by providing a way to
+convert between physical and "little h" units. Two example conversions:
+
+ >>> import astropy.units as u
+ >>> H0_70 = 70 * u.km/u.s / u.Mpc
+ >>> distance = 100 * (u.Mpc/u.littleh)
+ >>> distance.to(u.Mpc, u.with_H0(H0_70)) # doctest: +FLOAT_CMP
+ <Quantity 70.0 Mpc>
+ >>> luminosity = 1 * u.Lsun * u.littleh**-2
+ >>> luminosity.to(u.Lsun, u.with_H0(H0_70)) # doctest: +FLOAT_CMP
+ <Quantity 0.49 solLum>
+
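+The equivalency works in the reverse direction as well, converting a physical
+quantity into "little h" scaled units. A minimal sketch (the value simply
+inverts the distance conversion shown above):
+
+    >>> (70 * u.Mpc).to(u.Mpc/u.littleh, u.with_H0(H0_70))  # doctest: +FLOAT_CMP
+    <Quantity 100.0 Mpc / littleh>
+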
+Note the unit name ``littleh``: while this unit is usually expressed in the
+literature as just ``h``, it is called ``littleh`` here to avoid confusion with
+"hours".
+
+If no argument is given (or the argument is `None`), this equivalency assumes
+the ``H0`` from the current default cosmology:
+
+ >>> distance = 100 * (u.Mpc/u.littleh)
+ >>> distance.to(u.Mpc, u.with_H0()) # doctest: +FLOAT_CMP
+ <Quantity 69.32 Mpc>
+
+This equivalency also allows the common magnitude formulation of little h
+scaling:
+
+ >>> mag_quantity = 12 * (u.mag + u.MagUnit(u.littleh**2))
+ >>> mag_quantity # doctest: +FLOAT_CMP
+ <Magnitude 12. mag(littleh2)>
+ >>> mag_quantity.to(u.mag, u.with_H0(H0_70)) # doctest: +FLOAT_CMP
+ <Quantity 11.2254902 mag>
+
+
Writing new equivalencies
=========================
|
{"astropy/units/equivalencies.py": [{"type": "function", "name": "with_H0", "lines": [692, 715], "signature": "def with_H0(H0=None):", "doc": "Convert between quantities with little-h and the equivalent physical units.\n\nParameters\n----------\nH0 : `None` or `~astropy.units.Quantity`\n The value of the Hubble constant to assume. If a `~astropy.units.Quantity`,\n will assume the quantity *is* ``H0``. If `None` (default), use the\n ``H0`` attribute from the default `astropy.cosmology` cosmology.\n\nReferences\n----------\nFor an illuminating discussion on why you may or may not want to use\nlittle-h at all, see https://arxiv.org/pdf/1308.4150.pdf"}]}
|
3.0
|
["astropy/units/tests/test_equivalencies.py::test_littleh"]
|
["astropy/units/tests/test_equivalencies.py::test_dimensionless_angles", "astropy/units/tests/test_equivalencies.py::test_logarithmic[log_unit0]", "astropy/units/tests/test_equivalencies.py::test_logarithmic[log_unit1]", "astropy/units/tests/test_equivalencies.py::test_logarithmic[log_unit2]", "astropy/units/tests/test_equivalencies.py::test_doppler_frequency_0[doppler_optical]", "astropy/units/tests/test_equivalencies.py::test_doppler_frequency_0[doppler_radio]", "astropy/units/tests/test_equivalencies.py::test_doppler_frequency_0[doppler_relativistic]", "astropy/units/tests/test_equivalencies.py::test_doppler_wavelength_0[doppler_optical]", "astropy/units/tests/test_equivalencies.py::test_doppler_wavelength_0[doppler_radio]", "astropy/units/tests/test_equivalencies.py::test_doppler_wavelength_0[doppler_relativistic]", "astropy/units/tests/test_equivalencies.py::test_doppler_energy_0[doppler_optical]", "astropy/units/tests/test_equivalencies.py::test_doppler_energy_0[doppler_radio]", "astropy/units/tests/test_equivalencies.py::test_doppler_energy_0[doppler_relativistic]", "astropy/units/tests/test_equivalencies.py::test_doppler_frequency_circle[doppler_optical]", "astropy/units/tests/test_equivalencies.py::test_doppler_frequency_circle[doppler_radio]", "astropy/units/tests/test_equivalencies.py::test_doppler_frequency_circle[doppler_relativistic]", "astropy/units/tests/test_equivalencies.py::test_doppler_wavelength_circle[doppler_optical]", "astropy/units/tests/test_equivalencies.py::test_doppler_wavelength_circle[doppler_radio]", "astropy/units/tests/test_equivalencies.py::test_doppler_wavelength_circle[doppler_relativistic]", "astropy/units/tests/test_equivalencies.py::test_doppler_energy_circle[doppler_optical]", "astropy/units/tests/test_equivalencies.py::test_doppler_energy_circle[doppler_radio]", "astropy/units/tests/test_equivalencies.py::test_doppler_energy_circle[doppler_relativistic]", "astropy/units/tests/test_equivalencies.py::test_30kms[doppler_optical-999.899940784289]", "astropy/units/tests/test_equivalencies.py::test_30kms[doppler_radio-999.8999307714406]", "astropy/units/tests/test_equivalencies.py::test_30kms[doppler_relativistic-999.8999357778647]", "astropy/units/tests/test_equivalencies.py::test_bad_restfreqs[doppler_optical-5]", "astropy/units/tests/test_equivalencies.py::test_bad_restfreqs[doppler_radio-value1]", "astropy/units/tests/test_equivalencies.py::test_bad_restfreqs[doppler_relativistic-None]", "astropy/units/tests/test_equivalencies.py::test_massenergy", "astropy/units/tests/test_equivalencies.py::test_is_equivalent", "astropy/units/tests/test_equivalencies.py::test_parallax", "astropy/units/tests/test_equivalencies.py::test_parallax2", "astropy/units/tests/test_equivalencies.py::test_spectral", "astropy/units/tests/test_equivalencies.py::test_spectral2", "astropy/units/tests/test_equivalencies.py::test_spectral3", "astropy/units/tests/test_equivalencies.py::test_spectral4[in_val0-in_unit0]", "astropy/units/tests/test_equivalencies.py::test_spectral4[in_val1-in_unit1]", "astropy/units/tests/test_equivalencies.py::test_spectral4[in_val2-in_unit2]", "astropy/units/tests/test_equivalencies.py::test_spectral4[in_val3-in_unit3]", "astropy/units/tests/test_equivalencies.py::test_spectraldensity2", "astropy/units/tests/test_equivalencies.py::test_spectraldensity3", "astropy/units/tests/test_equivalencies.py::test_spectraldensity4", "astropy/units/tests/test_equivalencies.py::test_spectraldensity5", 
"astropy/units/tests/test_equivalencies.py::test_equivalent_units", "astropy/units/tests/test_equivalencies.py::test_equivalent_units2", "astropy/units/tests/test_equivalencies.py::test_trivial_equivalency", "astropy/units/tests/test_equivalencies.py::test_invalid_equivalency", "astropy/units/tests/test_equivalencies.py::test_irrelevant_equivalency", "astropy/units/tests/test_equivalencies.py::test_brightness_temperature", "astropy/units/tests/test_equivalencies.py::test_swapped_args_brightness_temperature", "astropy/units/tests/test_equivalencies.py::test_surfacebrightness", "astropy/units/tests/test_equivalencies.py::test_beam", "astropy/units/tests/test_equivalencies.py::test_thermodynamic_temperature", "astropy/units/tests/test_equivalencies.py::test_thermodynamic_temperature_w_tcmb", "astropy/units/tests/test_equivalencies.py::test_equivalency_context", "astropy/units/tests/test_equivalencies.py::test_equivalency_context_manager", "astropy/units/tests/test_equivalencies.py::test_temperature", "astropy/units/tests/test_equivalencies.py::test_temperature_energy", "astropy/units/tests/test_equivalencies.py::test_molar_mass_amu", "astropy/units/tests/test_equivalencies.py::test_compose_equivalencies", "astropy/units/tests/test_equivalencies.py::test_pixel_scale", "astropy/units/tests/test_equivalencies.py::test_plate_scale"]
|
7de4f351b0a2b9b177b5ada1143caca467367cfe
|
{"first_commit_time": 1540484750.0, "pr_title": "Little h equivalency", "pr_body": "This adds a new `littleh` equivalency. Based on my experience as a grad student, this PR will serve to rescue many students from confusion who are starting with extragalactic or cosmology work!\r\n\r\nThis has been on my to-do list for years, but it turns out it was a nearly-trivial implementation due to what I learned from @mhvk in #7891, so hopefully it can squeak in for 3.1!", "pr_timeline": [{"time": 1540643604.0, "comment": "Hi there @eteq :wave: - thanks for the pull request! I'm just a friendly :robot: that checks for issues related to the changelog and making sure that this pull request is milestoned and labeled correctly. This is mainly intended for the maintainers, so if you are not a maintainer you can ignore this, and a maintainer will let you know if any action is required on your part :smiley:.\n\nEverything looks good from my point of view! :+1:\n\n*If there are any issues with this message, please report them [here](https://github.com/astropy/astropy-bot/issues).*"}, {"time": 1540490375.0, "comment": "Do we need a what's new?"}, {"time": 1540490693.0, "comment": "Also, failure seems related:\r\n\r\n```\r\nOverflowError: (34, 'Numerical result out of range')\r\n```"}, {"time": 1540491659.0, "comment": "The test failure ended up symptomatic of a bigger problem: units cannot have numbers in them! The string parser views the unit `'h100'` as \"hours^100\". I'm not sure if this is intentional, but as a workaround I just changed h100 to \"littleh\". Not quite as pretty, but I think it still gets the right idea across. I then ended up renaming the equivalency `littleh_as`, but I think that's better based on one of @pllim's comments, anyway."}, {"time": 1540499829.0, "comment": "@astrofrog - I did `littleh_as` instead of `littleh` because the latter is the actual unit, and both should be available as `u.<whatever>`. Originally I wanted to do `h100` as the unit and the equivalency be `littleh`... But apparently the units machinery is not compatible with having numbers in units."}, {"time": 1540501492.0, "comment": "(Oh, also, @pllim: apparently you can't \"react\" to the main review comment, otherwise I would have reacted to your \"cutting it a bit close\" with \ud83d\ude1d)"}, {"time": 1540501568.0, "comment": "I guess the only downside of ``littleh_as`` is it looks incomplete when run with the default parameter ``littleh_as()``. But I'm easy going :)"}, {"time": 1540501671.0, "comment": "Yeah, I see your point there @astrofrog - I'm definitely open to other ideas for the name, I just couldn't come up with anything that seemed better!"}, {"time": 1540502102.0, "comment": "Oh and aside from the `littleh_as` vs alternative names, I think I've addressed the review items."}, {"time": 1540502645.0, "comment": "What about calling the equivalency ``u.cosmology()``? with an optional argument of the cosmology (not the H100 value). That could read nicely:\r\n\r\n```\r\n>>> (3 * u.littleh).to(stuff, u.cosmology())\r\n```\r\n\r\nThat is - ``u.littleh`` can be interpreted in a cosmological context."}, {"time": 1540502866.0, "comment": "Hmm, I\u2019m ok with the \u201cradically different names\u201d idea, *but* I don\u2019t want this to actually be \u201ccosmology\u201d - that has too large of a scope - I am *intentionally* limiting the scope to just the little-h conversion. 
I'm also -0 to not allowing H0 values, because one of the points of the little-h convention in the first place is sort of to \"quickly and easily\" let people fiddle with the value of H0 without having to think too hard about the whole cosmology."}, {"time": 1540504296.0, "comment": "I think this approach to littleh makes sense. Doing things this way leaves the knowledge of the sense of the conversion up to the user, which is definitely the way to go (for now) since the conversion for some units such as Msun depends on context that only the user can know (e.g., halo mass is Msun/h whereas stellar mass is Msun/h/h). I know this is late in the dev cycle for this release, but from my point of view this seems like a modest change and lays some important groundwork for enabling more full-featured functionality in the future. Nice job, @eteq! \r\n\r\n"}, {"time": 1540504881.0, "comment": "A few other ideas for the equivalency name from a combination of @astrofrog and @aphearin :\r\n\r\n* `hubble`\r\n* `hubble_constant` \r\n* `assume_littleh`\r\n* `littleh_assumed`\r\n\r\nI think I'm mildly for the last one, and mildly against the former... but could accept any. Any other opinions?"}, {"time": 1540506378.0, "comment": "I guess the issue with ``hubble_constant`` is that people might think ``u.hubble_constant`` is a unit..."}, {"time": 1540506803.0, "comment": "Just another suggestion: ``u.evaluate_littleh`` - works nicely with and without arguments"}, {"time": 1540510763.0, "comment": "I'm a bit ambivalent about this use of units for what is really a scaling, but can see the possible simplicity of being able to use the equivalency.\r\n\r\nI do think we need a better name, and also that this belongs in cosmology rather than in units (cc @aconley). One advantage of that is that one can simply call it `h` rather than `littleh`. As for the equivalency, maybe `with_H0(..)`? (I'd just force users to pass in `WMAP9.H0`).\r\n\r\nSeparately, I think the tests and examples should include some more complicated cases."}, {"time": 1540574001.0, "comment": "@mhvk - my argument for units is that it's this is how people who do this work conceptualize it. That is \"in little-h units\" is a phrase people use. I would argue it is in fact *not* at all cosmology - most of the confusion in it stems from thinking too hard about it as \"cosmology\" rather than \"a scaling that acts like a unit\". The sort of things @aphearin was thinking about above may well be better done in `cosmology`, but I'd argue this is not quite the same.\r\n\r\nAlso I'm against calling it `h` regardless of where it lives because I think that is too ambiguous.\r\n\r\nI can see the argument for `with_H0`, though - most of the convenience is from the `None` case - it's really not that much more work to add the ``.H0`` now that you say it. Does that sound OK to you, @astrofrog?"}, {"time": 1540574047.0, "comment": "Sounds good to me!"}, {"time": 1540584079.0, "comment": "Also, as suggested by @mhvk, I added an example of Lsun (which is a h^-2 scaling) and the magnitude formulation in both the tests and the docs."}, {"time": 1540650264.0, "comment": "p.s. If #8002 and #8003 make it into the same release as this, maybe they can all be in one section. "}, {"time": 1540650812.0, "comment": "p.p.s. I have the urge to pronounce this as \"lit-tleh\" instead of \"lit-tle-ache\"."}], "issues": {}}
|
aws-powertools/powertools-lambda-python
| 5,588
|
https://github.com/aws-powertools/powertools-lambda-python/pull/5588
|
aws-powertools__powertools-lambda-python-5588
|
[]
|
3ff5132bdf894fb49da11cfaa7cbff7412d5675c
|
diff --git a/aws_lambda_powertools/event_handler/appsync.py b/aws_lambda_powertools/event_handler/appsync.py
index 6f1cb72d067..3dbbf207859 100644
--- a/aws_lambda_powertools/event_handler/appsync.py
+++ b/aws_lambda_powertools/event_handler/appsync.py
@@ -53,6 +53,7 @@ def __init__(self):
"""
super().__init__()
self.context = {} # early init as customers might add context before event resolution
+ self._exception_handlers: dict[type, Callable] = {}
def __call__(
self,
@@ -142,12 +143,18 @@ def lambda_handler(event, context):
self.lambda_context = context
Router.lambda_context = context
- if isinstance(event, list):
- Router.current_batch_event = [data_model(e) for e in event]
- response = self._call_batch_resolver(event=event, data_model=data_model)
- else:
- Router.current_event = data_model(event)
- response = self._call_single_resolver(event=event, data_model=data_model)
+ try:
+ if isinstance(event, list):
+ Router.current_batch_event = [data_model(e) for e in event]
+ response = self._call_batch_resolver(event=event, data_model=data_model)
+ else:
+ Router.current_event = data_model(event)
+ response = self._call_single_resolver(event=event, data_model=data_model)
+ except Exception as exp:
+ response_builder = self._lookup_exception_handler(type(exp))
+ if response_builder:
+ return response_builder(exp)
+ raise
# We don't clear the context for coroutines because we don't have control over the event loop.
# If we clean the context immediately, it might not be available when the coroutine is actually executed.
@@ -470,3 +477,47 @@ def async_batch_resolver(
raise_on_error=raise_on_error,
aggregate=aggregate,
)
+
+ def exception_handler(self, exc_class: type[Exception] | list[type[Exception]]):
+ """
+ A decorator function that registers a handler for one or more exception types.
+
+ Parameters
+ ----------
+ exc_class (type[Exception] | list[type[Exception]])
+ A single exception type or a list of exception types.
+
+ Returns
+ -------
+ Callable:
+ A decorator function that registers the exception handler.
+ """
+
+ def register_exception_handler(func: Callable):
+ if isinstance(exc_class, list): # pragma: no cover
+ for exp in exc_class:
+ self._exception_handlers[exp] = func
+ else:
+ self._exception_handlers[exc_class] = func
+ return func
+
+ return register_exception_handler
+
+ def _lookup_exception_handler(self, exp_type: type) -> Callable | None:
+ """
+ Looks up the registered exception handler for the given exception type or its base classes.
+
+ Parameters
+ ----------
+ exp_type (type):
+ The exception type to look up the handler for.
+
+ Returns
+ -------
+ Callable | None:
+ The registered exception handler function if found, otherwise None.
+ """
+ for cls in exp_type.__mro__:
+ if cls in self._exception_handlers:
+ return self._exception_handlers[cls]
+ return None
diff --git a/aws_lambda_powertools/utilities/data_classes/code_pipeline_job_event.py b/aws_lambda_powertools/utilities/data_classes/code_pipeline_job_event.py
index a52e5fbc7a2..3497227ed70 100644
--- a/aws_lambda_powertools/utilities/data_classes/code_pipeline_job_event.py
+++ b/aws_lambda_powertools/utilities/data_classes/code_pipeline_job_event.py
@@ -311,7 +311,7 @@ def put_artifact(self, artifact_name: str, body: Any, content_type: str) -> None
key = artifact.location.s3_location.key
# boto3 doesn't support None to omit the parameter when using ServerSideEncryption and SSEKMSKeyId
- # So we are using if/else instead.
+ # So we are using if/else instead.
if self.data.encryption_key:
diff --git a/docs/core/event_handler/appsync.md b/docs/core/event_handler/appsync.md
index a2f29e5dba5..0c556dedfbf 100644
--- a/docs/core/event_handler/appsync.md
+++ b/docs/core/event_handler/appsync.md
@@ -288,6 +288,19 @@ You can use `append_context` when you want to share data between your App and Ro
--8<-- "examples/event_handler_graphql/src/split_operation_append_context_module.py"
```
+### Exception handling
+
+You can use the **`exception_handler`** decorator with any Python exception. This allows you to handle common exceptions outside your resolver, for example validation errors.
+
+The `exception_handler` function also supports passing a list of exception types you wish to handle with one handler.
+
+```python hl_lines="5-7 11" title="Exception handling"
+--8<-- "examples/event_handler_graphql/src/exception_handling_graphql.py"
+```
+
+???+ warning
+ This is not supported when using async single resolvers.
+
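+Because handler lookup walks the raised exception's method resolution order,
+a handler registered for a base class also catches its subclasses. A minimal
+sketch of the list form mentioned above (the handler name and the chosen
+exception types are only illustrative):
+
+```python
[email protected]_handler([ValueError, TypeError])
+def handle_invalid_input(ex: Exception):
+    # Whatever is returned here becomes the response sent back to AppSync
+    return {"message": "error"}
+```
+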
### Batch processing
```mermaid
diff --git a/examples/event_handler_graphql/src/exception_handling_graphql.py b/examples/event_handler_graphql/src/exception_handling_graphql.py
new file mode 100644
index 00000000000..b135f75112b
--- /dev/null
+++ b/examples/event_handler_graphql/src/exception_handling_graphql.py
@@ -0,0 +1,17 @@
+from aws_lambda_powertools.event_handler import AppSyncResolver
+
+app = AppSyncResolver()
+
+
[email protected]_handler(ValueError)
+def handle_value_error(ex: ValueError):
+ return {"message": "error"}
+
+
[email protected](field_name="createSomething")
+def create_something():
+ raise ValueError("Raising an exception")
+
+
+def lambda_handler(event, context):
+ return app.resolve(event, context)
|
diff --git a/tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py b/tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py
index c594be54a5b..59c5ec08a15 100644
--- a/tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py
+++ b/tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py
@@ -981,3 +981,125 @@ async def get_user(event: List) -> List:
# THEN the resolver must be able to return a field in the batch_current_event
assert app.context == {}
assert ret[0] == "powertools"
+
+
+def test_exception_handler_with_batch_resolver_and_raise_exception():
+
+ # GIVEN an AppSyncResolver instance
+ app = AppSyncResolver()
+
+ event = [
+ {
+ "typeName": "Query",
+ "info": {
+ "fieldName": "listLocations",
+ "parentTypeName": "Post",
+ },
+ "fieldName": "listLocations",
+ "arguments": {},
+ "source": {
+ "id": "1",
+ },
+ },
+ {
+ "typeName": "Query",
+ "info": {
+ "fieldName": "listLocations",
+ "parentTypeName": "Post",
+ },
+ "fieldName": "listLocations",
+ "arguments": {},
+ "source": {
+ "id": "2",
+ },
+ },
+ {
+ "typeName": "Query",
+ "info": {
+ "fieldName": "listLocations",
+ "parentTypeName": "Post",
+ },
+ "fieldName": "listLocations",
+ "arguments": {},
+ "source": {
+ "id": [3, 4],
+ },
+ },
+ ]
+
+ # WHEN we configure exception handler for ValueError
+ @app.exception_handler(ValueError)
+ def handle_value_error(ex: ValueError):
+ return {"message": "error"}
+
+ # WHEN the sync batch resolver for the 'listLocations' field is defined with raise_on_error=True
+ @app.batch_resolver(field_name="listLocations", raise_on_error=True, aggregate=False)
+ def create_something(event: AppSyncResolverEvent) -> Optional[list]: # noqa AA03 VNE003
+ raise ValueError
+
+ # Call the implicit handler
+ result = app(event, {})
+
+ # THEN the return must be the Exception Handler error message
+ assert result["message"] == "error"
+
+
+def test_exception_handler_with_batch_resolver_and_no_raise_exception():
+
+ # GIVEN an AppSyncResolver instance
+ app = AppSyncResolver()
+
+ event = [
+ {
+ "typeName": "Query",
+ "info": {
+ "fieldName": "listLocations",
+ "parentTypeName": "Post",
+ },
+ "fieldName": "listLocations",
+ "arguments": {},
+ "source": {
+ "id": "1",
+ },
+ },
+ {
+ "typeName": "Query",
+ "info": {
+ "fieldName": "listLocations",
+ "parentTypeName": "Post",
+ },
+ "fieldName": "listLocations",
+ "arguments": {},
+ "source": {
+ "id": "2",
+ },
+ },
+ {
+ "typeName": "Query",
+ "info": {
+ "fieldName": "listLocations",
+ "parentTypeName": "Post",
+ },
+ "fieldName": "listLocations",
+ "arguments": {},
+ "source": {
+ "id": [3, 4],
+ },
+ },
+ ]
+
+ # WHEN we configure exception handler for ValueError
+ @app.exception_handler(ValueError)
+ def handle_value_error(ex: ValueError):
+ return {"message": "error"}
+
+ # WHEN the sync batch resolver for the 'listLocations' field is defined with raise_on_error=False
+ @app.batch_resolver(field_name="listLocations", raise_on_error=False, aggregate=False)
+ def create_something(event: AppSyncResolverEvent) -> Optional[list]: # noqa AA03 VNE003
+ raise ValueError
+
+ # Call the implicit handler
+ result = app(event, {})
+
+ # THEN the return must not trigger the Exception Handler, but instead return from the resolver
+ assert result == [None, None, None]
diff --git a/tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py b/tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py
index df44793f33b..d58c966e67b 100644
--- a/tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py
+++ b/tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py
@@ -329,3 +329,25 @@ async def get_async():
# THEN
assert asyncio.run(result) == "value"
assert app.context == {}
+
+
+def test_exception_handler_with_single_resolver():
+ # GIVEN an AppSyncResolver instance
+ mock_event = load_event("appSyncDirectResolver.json")
+
+ app = AppSyncResolver()
+
+ # WHEN we configure exception handler for ValueError
+ @app.exception_handler(ValueError)
+ def handle_value_error(ex: ValueError):
+ return {"message": "error"}
+
+ @app.resolver(field_name="createSomething")
+ def create_something(id: str): # noqa AA03 VNE003
+ raise ValueError("Error")
+
+ # Call the implicit handler
+ result = app(mock_event, {})
+
+ # THEN the return must be the Exception Handler error message
+ assert result["message"] == "error"
| 2024-11-19T12:49:39
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
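For orientation, here is a hypothetical snippet for inspecting one of the dataset variants built by the `feabench.get_dataset` command in the README above. The on-disk layout (Hugging Face `save_to_disk`), the split name, and the field names are assumptions for illustration, not documented behavior of the FEA-Bench tooling:

```python
# Hypothetical inspection of a locally built FEA-Bench dataset (names/layout assumed).
from datasets import DatasetDict, load_from_disk

ds = load_from_disk("feabench-data/FEA-Bench-v1.0-Oracle")  # path from the build step above
if isinstance(ds, DatasetDict):  # split layout is an assumption
    ds = ds["test"]

print(f"{len(ds)} task instances loaded")
print(ds[0].get("instance_id"))  # SWE-bench-style field name, assumed
```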
{"aws_lambda_powertools/event_handler/appsync.py": "from __future__ import annotations\n\nimport asyncio\nimport logging\nimport warnings\nfrom typing import TYPE_CHECKING, Any, Callable\n\nfrom aws_lambda_powertools.event_handler.graphql_appsync.exceptions import InvalidBatchResponse, ResolverNotFoundError\nfrom aws_lambda_powertools.event_handler.graphql_appsync.router import Router\nfrom aws_lambda_powertools.utilities.data_classes import AppSyncResolverEvent\n\nif TYPE_CHECKING:\n from aws_lambda_powertools.utilities.typing import LambdaContext\n\nfrom aws_lambda_powertools.warnings import PowertoolsUserWarning\n\nlogger = logging.getLogger(__name__)\n\n\nclass AppSyncResolver(Router):\n \"\"\"\n AppSync GraphQL API Resolver\n\n Example\n -------\n ```python\n from aws_lambda_powertools.event_handler import AppSyncResolver\n\n app = AppSyncResolver()\n\n @app.resolver(type_name=\"Query\", field_name=\"listLocations\")\n def list_locations(page: int = 0, size: int = 10) -> list:\n # Your logic to fetch locations with arguments passed in\n return [{\"id\": 100, \"name\": \"Smooth Grooves\"}]\n\n @app.resolver(type_name=\"Merchant\", field_name=\"extraInfo\")\n def get_extra_info() -> dict:\n # Can use \"app.current_event.source\" to filter within the parent context\n account_type = app.current_event.source[\"accountType\"]\n method = \"BTC\" if account_type == \"NEW\" else \"USD\"\n return {\"preferredPaymentMethod\": method}\n\n @app.resolver(field_name=\"commonField\")\n def common_field() -> str:\n # Would match all fieldNames matching 'commonField'\n return str(uuid.uuid4())\n ```\n \"\"\"\n\n def __init__(self):\n \"\"\"\n Initialize a new instance of the AppSyncResolver.\n \"\"\"\n super().__init__()\n self.context = {} # early init as customers might add context before event resolution\n\n def __call__(\n self,\n event: dict,\n context: LambdaContext,\n data_model: type[AppSyncResolverEvent] = AppSyncResolverEvent,\n ) -> Any:\n \"\"\"Implicit lambda handler which internally calls `resolve`\"\"\"\n return self.resolve(event, context, data_model)\n\n def resolve(\n self,\n event: dict | list[dict],\n context: LambdaContext,\n data_model: type[AppSyncResolverEvent] = AppSyncResolverEvent,\n ) -> Any:\n \"\"\"Resolves the response based on the provide event and decorator routes\n\n Parameters\n ----------\n event : dict | list[Dict]\n Lambda event either coming from batch processing endpoint or from standard processing endpoint\n context : LambdaContext\n Lambda context\n data_model:\n Your data data_model to decode AppSync event, by default AppSyncResolverEvent\n\n Example\n -------\n\n ```python\n from aws_lambda_powertools.event_handler import AppSyncResolver\n from aws_lambda_powertools.utilities.typing import LambdaContext\n\n @app.resolver(field_name=\"createSomething\")\n def create_something(id: str): # noqa AA03 VNE003\n return id\n\n def handler(event, context: LambdaContext):\n return app.resolve(event, context)\n ```\n\n **Bringing custom models**\n\n ```python\n from aws_lambda_powertools import Logger, Tracer\n\n from aws_lambda_powertools.logging import correlation_paths\n from aws_lambda_powertools.event_handler import AppSyncResolver\n\n tracer = Tracer(service=\"sample_resolver\")\n logger = Logger(service=\"sample_resolver\")\n app = AppSyncResolver()\n\n\n class MyCustomModel(AppSyncResolverEvent):\n @property\n def country_viewer(self) -> str:\n return self.request_headers.get(\"cloudfront-viewer-country\", \"\")\n\n\n 
@app.resolver(field_name=\"listLocations\")\n @app.resolver(field_name=\"locations\")\n def get_locations(name: str, description: str = \"\"):\n if app.current_event.country_viewer == \"US\":\n ...\n return name + description\n\n\n @logger.inject_lambda_context(correlation_id_path=correlation_paths.APPSYNC_RESOLVER)\n @tracer.capture_lambda_handler\n def lambda_handler(event, context):\n return app.resolve(event, context, data_model=MyCustomModel)\n ```\n\n Returns\n -------\n Any\n Returns the result of the resolver\n\n Raises\n -------\n ValueError\n If we could not find a field resolver\n \"\"\"\n\n self.lambda_context = context\n Router.lambda_context = context\n\n if isinstance(event, list):\n Router.current_batch_event = [data_model(e) for e in event]\n response = self._call_batch_resolver(event=event, data_model=data_model)\n else:\n Router.current_event = data_model(event)\n response = self._call_single_resolver(event=event, data_model=data_model)\n\n # We don't clear the context for coroutines because we don't have control over the event loop.\n # If we clean the context immediately, it might not be available when the coroutine is actually executed.\n # For single async operations, the context should be cleaned up manually after the coroutine completes.\n # See: https://github.com/aws-powertools/powertools-lambda-python/issues/5290\n # REVIEW: Review this support in Powertools V4\n if not asyncio.iscoroutine(response):\n self.clear_context()\n\n return response\n\n def _call_single_resolver(self, event: dict, data_model: type[AppSyncResolverEvent]) -> Any:\n \"\"\"Call single event resolver\n\n Parameters\n ----------\n event : dict\n Event\n data_model : type[AppSyncResolverEvent]\n Data_model to decode AppSync event, by default it is of AppSyncResolverEvent type or subclass of it\n \"\"\"\n\n logger.debug(\"Processing direct resolver event\")\n\n self.current_event = data_model(event)\n resolver = self._resolver_registry.find_resolver(self.current_event.type_name, self.current_event.field_name)\n if not resolver:\n raise ValueError(f\"No resolver found for '{self.current_event.type_name}.{self.current_event.field_name}'\")\n return resolver[\"func\"](**self.current_event.arguments)\n\n def _call_sync_batch_resolver(\n self,\n resolver: Callable,\n raise_on_error: bool = False,\n aggregate: bool = True,\n ) -> list[Any]:\n \"\"\"\n Calls a synchronous batch resolver function for each event in the current batch.\n\n Parameters\n ----------\n resolver: Callable\n The callable function to resolve events.\n raise_on_error: bool\n A flag indicating whether to raise an error when processing batches\n with failed items. 
Defaults to False, which means errors are handled without raising exceptions.\n aggregate: bool\n A flag indicating whether the batch items should be processed at once or individually.\n If True (default), the batch resolver will process all items in the batch as a single event.\n If False, the batch resolver will process each item in the batch individually.\n\n Returns\n -------\n list[Any]\n A list of results corresponding to the resolved events.\n \"\"\"\n\n logger.debug(f\"Graceful error handling flag {raise_on_error=}\")\n\n # Checks whether the entire batch should be processed at once\n if aggregate:\n # Process the entire batch\n response = resolver(event=self.current_batch_event)\n\n if not isinstance(response, list):\n raise InvalidBatchResponse(\"The response must be a List when using batch resolvers\")\n\n return response\n\n # Non aggregated events, so we call this event list x times\n # Stop on first exception we encounter\n if raise_on_error:\n return [\n resolver(event=appconfig_event, **appconfig_event.arguments)\n for appconfig_event in self.current_batch_event\n ]\n\n # By default, we gracefully append `None` for any records that failed processing\n results = []\n for idx, event in enumerate(self.current_batch_event):\n try:\n results.append(resolver(event=event, **event.arguments))\n except Exception:\n logger.debug(f\"Failed to process event number {idx} from field '{event.info.field_name}'\")\n results.append(None)\n\n return results\n\n async def _call_async_batch_resolver(\n self,\n resolver: Callable,\n raise_on_error: bool = False,\n aggregate: bool = True,\n ) -> list[Any]:\n \"\"\"\n Asynchronously call a batch resolver for each event in the current batch.\n\n Parameters\n ----------\n resolver: Callable\n The asynchronous resolver function.\n raise_on_error: bool\n A flag indicating whether to raise an error when processing batches\n with failed items. 
Defaults to False, which means errors are handled without raising exceptions.\n aggregate: bool\n A flag indicating whether the batch items should be processed at once or individually.\n If True (default), the batch resolver will process all items in the batch as a single event.\n If False, the batch resolver will process each item in the batch individually.\n\n Returns\n -------\n list[Any]\n A list of results corresponding to the resolved events.\n \"\"\"\n\n logger.debug(f\"Graceful error handling flag {raise_on_error=}\")\n\n # Checks whether the entire batch should be processed at once\n if aggregate:\n # Process the entire batch\n ret = await resolver(event=self.current_batch_event)\n if not isinstance(ret, list):\n raise InvalidBatchResponse(\"The response must be a List when using batch resolvers\")\n\n return ret\n\n response: list = []\n\n # Prime coroutines\n tasks = [resolver(event=e, **e.arguments) for e in self.current_batch_event]\n\n # Aggregate results or raise at first error\n if raise_on_error:\n response.extend(await asyncio.gather(*tasks))\n return response\n\n # Aggregate results and exceptions, then filter them out\n # Use `None` upon exception for graceful error handling at GraphQL engine level\n #\n # NOTE: asyncio.gather(return_exceptions=True) catches and includes exceptions in the results\n # this will become useful when we support exception handling in AppSync resolver\n results = await asyncio.gather(*tasks, return_exceptions=True)\n response.extend(None if isinstance(ret, Exception) else ret for ret in results)\n\n return response\n\n def _call_batch_resolver(self, event: list[dict], data_model: type[AppSyncResolverEvent]) -> list[Any]:\n \"\"\"Call batch event resolver for sync and async methods\n\n Parameters\n ----------\n event : list[dict]\n Batch event\n data_model : type[AppSyncResolverEvent]\n Data_model to decode AppSync event, by default AppSyncResolverEvent or a subclass\n\n Returns\n -------\n list[Any]\n Results of the resolver execution.\n\n Raises\n ------\n InconsistentPayloadError:\n When all events in the batch do not have the same fieldName.\n\n ResolverNotFoundError:\n When no resolver is found for the specified type and field.\n \"\"\"\n logger.debug(\"Processing batch resolver event\")\n\n self.current_batch_event = [data_model(e) for e in event]\n type_name, field_name = self.current_batch_event[0].type_name, self.current_batch_event[0].field_name\n\n resolver = self._batch_resolver_registry.find_resolver(type_name, field_name)\n async_resolver = self._async_batch_resolver_registry.find_resolver(type_name, field_name)\n\n if resolver and async_resolver:\n warnings.warn(\n f\"Both synchronous and asynchronous resolvers found for the same event and field.\"\n f\"The synchronous resolver takes precedence. Executing: {resolver['func'].__name__}\",\n stacklevel=2,\n category=PowertoolsUserWarning,\n )\n\n if resolver:\n logger.debug(f\"Found sync resolver. {resolver=}, {field_name=}\")\n return self._call_sync_batch_resolver(\n resolver=resolver[\"func\"],\n raise_on_error=resolver[\"raise_on_error\"],\n aggregate=resolver[\"aggregate\"],\n )\n\n if async_resolver:\n logger.debug(f\"Found async resolver. 
{resolver=}, {field_name=}\")\n return asyncio.run(\n self._call_async_batch_resolver(\n resolver=async_resolver[\"func\"],\n raise_on_error=async_resolver[\"raise_on_error\"],\n aggregate=async_resolver[\"aggregate\"],\n ),\n )\n\n raise ResolverNotFoundError(f\"No resolver found for '{type_name}.{field_name}'\")\n\n def include_router(self, router: Router) -> None:\n \"\"\"Adds all resolvers defined in a router\n\n Parameters\n ----------\n router : Router\n A router containing a dict of field resolvers\n \"\"\"\n\n # Merge app and router context\n logger.debug(\"Merging router and app context\")\n self.context.update(**router.context)\n\n # use pointer to allow context clearance after event is processed e.g., resolve(evt, ctx)\n router.context = self.context\n\n logger.debug(\"Merging router resolver registries\")\n self._resolver_registry.merge(router._resolver_registry)\n self._batch_resolver_registry.merge(router._batch_resolver_registry)\n self._async_batch_resolver_registry.merge(router._async_batch_resolver_registry)\n\n def resolver(self, type_name: str = \"*\", field_name: str | None = None) -> Callable:\n \"\"\"Registers direct resolver function for GraphQL type and field name.\n\n Parameters\n ----------\n type_name : str, optional\n GraphQL type e.g., Query, Mutation, by default \"*\" meaning any\n field_name : Optional[str], optional\n GraphQL field e.g., getTodo, createTodo, by default None\n\n Returns\n -------\n Callable\n Registered resolver\n\n Example\n -------\n\n ```python\n from aws_lambda_powertools.event_handler import AppSyncResolver\n\n from typing import TypedDict\n\n app = AppSyncResolver()\n\n class Todo(TypedDict, total=False):\n id: str\n userId: str\n title: str\n completed: bool\n\n # resolve any GraphQL `getTodo` queries\n # arguments are injected as function arguments as-is\n @app.resolver(type_name=\"Query\", field_name=\"getTodo\")\n def get_todo(id: str = \"\", status: str = \"open\") -> Todo:\n todos: Response = requests.get(f\"https://jsonplaceholder.typicode.com/todos/{id}\")\n todos.raise_for_status()\n\n return todos.json()\n\n def lambda_handler(event, context):\n return app.resolve(event, context)\n ```\n \"\"\"\n return self._resolver_registry.register(field_name=field_name, type_name=type_name)\n\n def batch_resolver(\n self,\n type_name: str = \"*\",\n field_name: str | None = None,\n raise_on_error: bool = False,\n aggregate: bool = True,\n ) -> Callable:\n \"\"\"Registers batch resolver function for GraphQL type and field name.\n\n By default, we handle errors gracefully by returning `None`. 
If you want\n to short-circuit and fail the entire batch use `raise_on_error=True`.\n\n Parameters\n ----------\n type_name : str, optional\n GraphQL type e.g., Query, Mutation, by default \"*\" meaning any\n field_name : Optional[str], optional\n GraphQL field e.g., getTodo, createTodo, by default None\n raise_on_error : bool, optional\n Whether to fail entire batch upon error, or handle errors gracefully (None), by default False\n aggregate: bool\n A flag indicating whether the batch items should be processed at once or individually.\n If True (default), the batch resolver will process all items in the batch as a single event.\n If False, the batch resolver will process each item in the batch individually.\n\n Returns\n -------\n Callable\n Registered resolver\n \"\"\"\n return self._batch_resolver_registry.register(\n field_name=field_name,\n type_name=type_name,\n raise_on_error=raise_on_error,\n aggregate=aggregate,\n )\n\n def async_batch_resolver(\n self,\n type_name: str = \"*\",\n field_name: str | None = None,\n raise_on_error: bool = False,\n aggregate: bool = True,\n ) -> Callable:\n return self._async_batch_resolver_registry.register(\n field_name=field_name,\n type_name=type_name,\n raise_on_error=raise_on_error,\n aggregate=aggregate,\n )\n", "aws_lambda_powertools/utilities/data_classes/code_pipeline_job_event.py": "from __future__ import annotations\n\nimport tempfile\nimport zipfile\nfrom functools import cached_property\nfrom typing import Any\nfrom urllib.parse import unquote_plus\n\nfrom aws_lambda_powertools.utilities.data_classes.common import DictWrapper\n\n\nclass CodePipelineConfiguration(DictWrapper):\n @property\n def function_name(self) -> str:\n \"\"\"Function name\"\"\"\n return self[\"FunctionName\"]\n\n @property\n def user_parameters(self) -> str | None:\n \"\"\"User parameters\"\"\"\n return self.get(\"UserParameters\", None)\n\n @cached_property\n def decoded_user_parameters(self) -> dict[str, Any]:\n \"\"\"Json Decoded user parameters\"\"\"\n if self.user_parameters is not None:\n return self._json_deserializer(self.user_parameters)\n return {}\n\n\nclass CodePipelineActionConfiguration(DictWrapper):\n \"\"\"CodePipeline Action Configuration\"\"\"\n\n @property\n def configuration(self) -> CodePipelineConfiguration:\n return CodePipelineConfiguration(self[\"configuration\"])\n\n\nclass CodePipelineS3Location(DictWrapper):\n @property\n def bucket_name(self) -> str:\n return self[\"bucketName\"]\n\n @property\n def key(self) -> str:\n \"\"\"Raw S3 object key\"\"\"\n return self[\"objectKey\"]\n\n @property\n def object_key(self) -> str:\n \"\"\"Unquote plus of the S3 object key\"\"\"\n return unquote_plus(self[\"objectKey\"])\n\n\nclass CodePipelineLocation(DictWrapper):\n @property\n def get_type(self) -> str:\n \"\"\"Location type eg: S3\"\"\"\n return self[\"type\"]\n\n @property\n def s3_location(self) -> CodePipelineS3Location:\n \"\"\"S3 location\"\"\"\n return CodePipelineS3Location(self[\"s3Location\"])\n\n\nclass CodePipelineArtifact(DictWrapper):\n @property\n def name(self) -> str:\n \"\"\"Name\"\"\"\n return self[\"name\"]\n\n @property\n def revision(self) -> str | None:\n return self.get(\"revision\")\n\n @property\n def location(self) -> CodePipelineLocation:\n return CodePipelineLocation(self[\"location\"])\n\n\nclass CodePipelineArtifactCredentials(DictWrapper):\n _sensitive_properties = [\"secret_access_key\", \"session_token\"]\n\n @property\n def access_key_id(self) -> str:\n return self[\"accessKeyId\"]\n\n @property\n def 
secret_access_key(self) -> str:\n return self[\"secretAccessKey\"]\n\n @property\n def session_token(self) -> str:\n return self[\"sessionToken\"]\n\n @property\n def expiration_time(self) -> int | None:\n return self.get(\"expirationTime\")\n\n\nclass CodePipelineEncryptionKey(DictWrapper):\n @property\n def get_id(self) -> str:\n return self[\"id\"]\n\n @property\n def get_type(self) -> str:\n return self[\"type\"]\n\n\nclass CodePipelineData(DictWrapper):\n \"\"\"CodePipeline Job Data\"\"\"\n\n @property\n def action_configuration(self) -> CodePipelineActionConfiguration:\n \"\"\"CodePipeline action configuration\"\"\"\n return CodePipelineActionConfiguration(self[\"actionConfiguration\"])\n\n @property\n def input_artifacts(self) -> list[CodePipelineArtifact]:\n \"\"\"Represents a CodePipeline input artifact\"\"\"\n return [CodePipelineArtifact(item) for item in self[\"inputArtifacts\"]]\n\n @property\n def output_artifacts(self) -> list[CodePipelineArtifact]:\n \"\"\"Represents a CodePipeline output artifact\"\"\"\n return [CodePipelineArtifact(item) for item in self[\"outputArtifacts\"]]\n\n @property\n def artifact_credentials(self) -> CodePipelineArtifactCredentials:\n \"\"\"Represents a CodePipeline artifact credentials\"\"\"\n return CodePipelineArtifactCredentials(self[\"artifactCredentials\"])\n\n @property\n def continuation_token(self) -> str | None:\n \"\"\"A continuation token if continuing job\"\"\"\n return self.get(\"continuationToken\")\n\n @property\n def encryption_key(self) -> CodePipelineEncryptionKey | None:\n \"\"\"Represents a CodePipeline encryption key\"\"\"\n key_data = self.get(\"encryptionKey\")\n return CodePipelineEncryptionKey(key_data) if key_data is not None else None\n\n\nclass CodePipelineJobEvent(DictWrapper):\n \"\"\"AWS CodePipeline Job Event\n\n Documentation:\n -------------\n - https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html\n - https://docs.aws.amazon.com/lambda/latest/dg/services-codepipeline.html\n \"\"\"\n\n def __init__(self, data: dict[str, Any]):\n super().__init__(data)\n self._job = self[\"CodePipeline.job\"]\n\n @property\n def get_id(self) -> str:\n \"\"\"Job id\"\"\"\n return self._job[\"id\"]\n\n @property\n def account_id(self) -> str:\n \"\"\"Account id\"\"\"\n return self._job[\"accountId\"]\n\n @property\n def data(self) -> CodePipelineData:\n \"\"\"Code pipeline jab data\"\"\"\n return CodePipelineData(self._job[\"data\"])\n\n @property\n def user_parameters(self) -> str | None:\n \"\"\"Action configuration user parameters\"\"\"\n return self.data.action_configuration.configuration.user_parameters\n\n @property\n def decoded_user_parameters(self) -> dict[str, Any]:\n \"\"\"Json Decoded action configuration user parameters\"\"\"\n return self.data.action_configuration.configuration.decoded_user_parameters\n\n @property\n def input_bucket_name(self) -> str:\n \"\"\"Get the first input artifact bucket name\"\"\"\n return self.data.input_artifacts[0].location.s3_location.bucket_name\n\n @property\n def input_object_key(self) -> str:\n \"\"\"Get the first input artifact order key unquote plus\"\"\"\n return self.data.input_artifacts[0].location.s3_location.object_key\n\n def setup_s3_client(self):\n \"\"\"Creates an S3 client\n\n Uses the credentials passed in the event by CodePipeline. 
These\n credentials can be used to access the artifact bucket.\n\n Returns\n -------\n BaseClient\n An S3 client with the appropriate credentials\n \"\"\"\n # IMPORTING boto3 within the FUNCTION and not at the top level to get\n # it only when we explicitly want it for better performance.\n import boto3\n\n from aws_lambda_powertools.shared import user_agent\n\n s3 = boto3.client(\n \"s3\",\n aws_access_key_id=self.data.artifact_credentials.access_key_id,\n aws_secret_access_key=self.data.artifact_credentials.secret_access_key,\n aws_session_token=self.data.artifact_credentials.session_token,\n )\n user_agent.register_feature_to_client(client=s3, feature=\"data_classes\")\n return s3\n\n def find_input_artifact(self, artifact_name: str) -> CodePipelineArtifact | None:\n \"\"\"Find an input artifact by artifact name\n\n Parameters\n ----------\n artifact_name : str\n The name of the input artifact to look for\n\n Returns\n -------\n CodePipelineArtifact, None\n Matching CodePipelineArtifact if found\n \"\"\"\n for artifact in self.data.input_artifacts:\n if artifact.name == artifact_name:\n return artifact\n return None\n\n def find_output_artifact(self, artifact_name: str) -> CodePipelineArtifact | None:\n \"\"\"Find an output artifact by artifact name\n\n Parameters\n ----------\n artifact_name : str\n The name of the output artifact to look for\n\n Returns\n -------\n CodePipelineArtifact, None\n Matching CodePipelineArtifact if found\n \"\"\"\n for artifact in self.data.output_artifacts:\n if artifact.name == artifact_name:\n return artifact\n return None\n\n def get_artifact(self, artifact_name: str, filename: str | None = None) -> str | None:\n \"\"\"Get a file within an artifact zip on s3\n\n Parameters\n ----------\n artifact_name : str\n Name of the S3 artifact to download\n filename : str\n The file name within the artifact zip to extract as a string\n If None, this will return the raw object body.\n\n Returns\n -------\n str, None\n Returns the contents file contents as a string\n \"\"\"\n artifact = self.find_input_artifact(artifact_name)\n if artifact is None:\n return None\n\n s3 = self.setup_s3_client()\n bucket = artifact.location.s3_location.bucket_name\n key = artifact.location.s3_location.key\n\n if filename:\n with tempfile.NamedTemporaryFile() as tmp_file:\n s3.download_file(bucket, key, tmp_file.name)\n with zipfile.ZipFile(tmp_file.name, \"r\") as zip_file:\n return zip_file.read(filename).decode(\"UTF-8\")\n\n return s3.get_object(Bucket=bucket, Key=key)[\"Body\"].read()\n\n def put_artifact(self, artifact_name: str, body: Any, content_type: str) -> None:\n \"\"\"Writes an object to an s3 output artifact.\n\n Parameters\n ----------\n artifact_name : str\n Name of the S3 artifact to upload\n body: Any\n The data to be written. Binary files should use io.BytesIO.\n content_type: str\n The content type of the data.\n\n Returns\n -------\n None\n \"\"\"\n artifact = self.find_output_artifact(artifact_name)\n if artifact is None:\n raise ValueError(f\"Artifact not found: {artifact_name}.\")\n\n s3 = self.setup_s3_client()\n bucket = artifact.location.s3_location.bucket_name\n key = artifact.location.s3_location.key\n\n # boto3 doesn't support None to omit the parameter when using ServerSideEncryption and SSEKMSKeyId\n # So we are using if/else instead. 
\n\n if self.data.encryption_key:\n\n encryption_key_id = self.data.encryption_key.get_id\n encryption_key_type = self.data.encryption_key.get_type\n if encryption_key_type == \"KMS\":\n encryption_key_type = \"aws:kms\"\n\n s3.put_object(\n Bucket=bucket,\n Key=key,\n ContentType=content_type,\n Body=body,\n ServerSideEncryption=encryption_key_type,\n SSEKMSKeyId=encryption_key_id,\n BucketKeyEnabled=True,\n )\n\n else:\n s3.put_object(\n Bucket=bucket,\n Key=key,\n ContentType=content_type,\n Body=body,\n BucketKeyEnabled=True,\n )\n", "docs/core/event_handler/appsync.md": "---\ntitle: GraphQL API\ndescription: Core utility\n---\n\nEvent Handler for AWS AppSync and Amplify GraphQL Transformer.\n\n```mermaid\nstateDiagram-v2\n direction LR\n EventSource: AWS Lambda Event Sources\n EventHandlerResolvers: AWS AppSync Direct invocation<br/><br/> AWS AppSync Batch invocation\n LambdaInit: Lambda invocation\n EventHandler: Event Handler\n EventHandlerResolver: Route event based on GraphQL type/field keys\n YourLogic: Run your registered resolver function\n EventHandlerResolverBuilder: Adapts response to Event Source contract\n LambdaResponse: Lambda response\n\n state EventSource {\n EventHandlerResolvers\n }\n\n EventHandlerResolvers --> LambdaInit\n\n LambdaInit --> EventHandler\n EventHandler --> EventHandlerResolver\n\n state EventHandler {\n [*] --> EventHandlerResolver: app.resolve(event, context)\n EventHandlerResolver --> YourLogic\n YourLogic --> EventHandlerResolverBuilder\n }\n\n EventHandler --> LambdaResponse\n```\n\n## Key Features\n\n* Choose between strictly match a GraphQL field name or all of them to a function\n* Automatically parse API arguments to function arguments\n* Integrates with [Event Source Data classes utilities](../../utilities/data_classes.md){target=\"_blank\"} to access resolver and identity information\n* Support async Python 3.8+ functions and generators\n\n## Terminology\n\n**[Direct Lambda Resolver](https://docs.aws.amazon.com/appsync/latest/devguide/direct-lambda-reference.html){target=\"_blank\"}**. A custom AppSync Resolver to bypass the use of Apache Velocity Template (VTL) and automatically map your function's response to a GraphQL field.\n\n**[Amplify GraphQL Transformer](https://docs.amplify.aws/cli/graphql-transformer/function){target=\"_blank\"}**. Custom GraphQL directives to define your application's data model using Schema Definition Language _(SDL)_, _e.g., `@function`_. Amplify CLI uses these directives to convert GraphQL SDL into full descriptive AWS CloudFormation templates.\n\n## Getting started\n\n???+ tip \"Tip: Designing GraphQL Schemas for the first time?\"\n Visit [AWS AppSync schema documentation](https://docs.aws.amazon.com/appsync/latest/devguide/designing-your-schema.html){target=\"_blank\"} to understand how to define types, nesting, and pagination.\n\n### Required resources\n\nYou must have an existing AppSync GraphQL API and IAM permissions to invoke your Lambda function. 
That said, there is no additional permissions to use Event Handler as routing requires no dependency (_standard library_).\n\nThis is the sample infrastructure we are using for the initial examples with a AppSync Direct Lambda Resolver.\n\n=== \"getting_started_schema.graphql\"\n\n ```typescript\n --8<-- \"examples/event_handler_graphql/src/getting_started_schema.graphql\"\n ```\n\n=== \"template.yaml\"\n\n ```yaml hl_lines=\"59-60 71-72 94-95 104-105 112-113\"\n --8<-- \"examples/event_handler_graphql/sam/template.yaml\"\n ```\n\n### Resolver decorator\n\nYou can define your functions to match GraphQL types and fields with the `app.resolver()` decorator.\n\n???+ question \"What is a type and field?\"\n A type would be a top-level **GraphQL Type** like `Query`, `Mutation`, `Todo`. A **GraphQL Field** would be `listTodos` under `Query`, `createTodo` under `Mutation`, etc.\n\nHere's an example with two separate functions to resolve `getTodo` and `listTodos` fields within the `Query` type. For completion, we use [Scalar type utilities](#scalar-functions) to generate the right output based on our schema definition.\n\n???+ important\n GraphQL arguments are passed as function keyword arguments.\n\n **Example**\n\n The GraphQL Query `getTodo(id: \"todo_id_value\")` will\n call `get_todo` as `get_todo(id=\"todo_id_value\")`.\n\n=== \"getting_started_graphql_api_resolver.py\"\n\n ```python hl_lines=\"7 14 24 26 27 36 38 46 48 59\"\n --8<-- \"examples/event_handler_graphql/src/getting_started_graphql_api_resolver.py\"\n ```\n\n=== \"getting_started_schema.graphql\"\n\n ```typescript hl_lines=\"7-9 13\"\n --8<-- \"examples/event_handler_graphql/src/getting_started_schema.graphql\"\n ```\n\n=== \"sample events\"\n\n === \"getting_started_get_todo.json\"\n\n ```json hl_lines=\"2-3 42\"\n --8<-- \"examples/event_handler_graphql/src/getting_started_get_todo.json\"\n ```\n\n === \"getting_started_list_todos.json\"\n\n ```json hl_lines=\"2 40\"\n --8<-- \"examples/event_handler_graphql/src/getting_started_list_todos.json\"\n ```\n\n === \"getting_started_create_todo.json\"\n\n ```json hl_lines=\"2 48 49\"\n --8<-- \"examples/event_handler_graphql/src/getting_started_create_todo.json\"\n ```\n\n### Scalar functions\n\nWhen working with [AWS AppSync Scalar types](https://docs.aws.amazon.com/appsync/latest/devguide/scalars.html){target=\"_blank\"}, you might want to generate the same values for data validation purposes.\n\nFor convenience, the most commonly used values are available as functions within `scalar_types_utils` module.\n\n```python hl_lines=\"1-6\" title=\"Creating key scalar values with scalar_types_utils\"\n--8<-- \"examples/event_handler_graphql/src/scalar_functions.py\"\n```\n\nHere's a table with their related scalar as a quick reference:\n\n| Scalar type | Scalar function | Sample value |\n| ---------------- | ---------------------------------- | -------------------------------------- |\n| **ID** | `scalar_types_utils.make_id` | `e916c84d-48b6-484c-bef3-cee3e4d86ebf` |\n| **AWSDate** | `scalar_types_utils.aws_date` | `2022-07-08Z` |\n| **AWSTime** | `scalar_types_utils.aws_time` | `15:11:00.189Z` |\n| **AWSDateTime** | `scalar_types_utils.aws_datetime` | `2022-07-08T15:11:00.189Z` |\n| **AWSTimestamp** | `scalar_types_utils.aws_timestamp` | `1657293060` |\n\n## Advanced\n\n### Nested mappings\n\n???+ note\n\n The following examples use a more advanced schema. 
These schemas differ from [initial sample infrastructure we used earlier](#required-resources).\n\nYou can nest `app.resolver()` decorator multiple times when resolving fields with the same return value.\n\n=== \"nested_mappings.py\"\n\n ```python hl_lines=\"4 10 20 21 23 30\"\n --8<-- \"examples/event_handler_graphql/src/nested_mappings.py\"\n ```\n\n=== \"nested_mappings_schema.graphql\"\n\n ```typescript hl_lines=\"6 20\"\n --8<-- \"examples/event_handler_graphql/src/nested_mappings_schema.graphql\"\n ```\n\n### Async functions\n\nFor Lambda Python3.8+ runtime, this utility supports async functions when you use in conjunction with `asyncio.run`.\n\n```python hl_lines=\"7 14 24 25 34 36\" title=\"Resolving GraphQL resolvers async\"\n--8<-- \"examples/event_handler_graphql/src/async_resolvers.py\"\n```\n\n### Amplify GraphQL Transformer\n\nAssuming you have [Amplify CLI installed](https://docs.amplify.aws/cli/start/install){target=\"_blank\"}, create a new API using `amplify add api` and use the following GraphQL Schema.\n\n<!-- AppSync resolver decorator is a concise way to create lambda functions to handle AppSync resolvers for multiple `typeName` and `fieldName` declarations. -->\n\n```typescript hl_lines=\"7 15 20 22\" title=\"Example GraphQL Schema\"\n--8<-- \"examples/event_handler_graphql/src/amplify_graphql_transformer_schema.graphql\"\n```\n\n[Create two new basic Python functions](https://docs.amplify.aws/cli/function#set-up-a-function){target=\"_blank\"} via `amplify add function`.\n\n???+ note\n Amplify CLI generated functions use `Pipenv` as a dependency manager. Your function source code is located at **`amplify/backend/function/your-function-name`**.\n\nWithin your function's folder, add Powertools for AWS Lambda (Python) as a dependency with `pipenv install aws-lambda-powertools`.\n\nUse the following code for `merchantInfo` and `searchMerchant` functions respectively.\n\n=== \"graphql_transformer_merchant_info.py\"\n\n ```python hl_lines=\"4 6 23 24 29 30 36\"\n --8<-- \"examples/event_handler_graphql/src/graphql_transformer_merchant_info.py\"\n ```\n\n=== \"graphql_transformer_search_merchant.py\"\n\n ```python hl_lines=\"4 6 21 22 36 42\"\n --8<-- \"examples/event_handler_graphql/src/graphql_transformer_search_merchant.py\"\n ```\n\n=== \"graphql_transformer_list_locations.json\"\n\n ```json hl_lines=\"2-7\"\n --8<-- \"examples/event_handler_graphql/src/graphql_transformer_list_locations.json\"\n ```\n\n=== \"graphql_transformer_common_field.json\"\n\n ```json hl_lines=\"2 3\"\n --8<-- \"examples/event_handler_graphql/src/graphql_transformer_common_field.json\"\n ```\n\n=== \"graphql_transformer_find_merchant.json\"\n\n ```json hl_lines=\"2-6\"\n --8<-- \"examples/event_handler_graphql/src/graphql_transformer_find_merchant.json\"\n ```\n\n### Custom data models\n\nYou can subclass [AppSyncResolverEvent](../../utilities/data_classes.md#appsync-resolver){target=\"_blank\"} to bring your own set of methods to handle incoming events, by using `data_model` param in the `resolve` method.\n\n=== \"custom_models.py.py\"\n\n ```python hl_lines=\"4 7-9 25-27 31 32 39 45\"\n --8<-- \"examples/event_handler_graphql/src/custom_models.py\"\n ```\n\n=== \"nested_mappings_schema.graphql\"\n\n ```typescript hl_lines=\"6 20\"\n --8<-- \"examples/event_handler_graphql/src/nested_mappings_schema.graphql\"\n ```\n\n=== \"graphql_transformer_list_locations.json\"\n\n ```json hl_lines=\"18-19\"\n --8<-- \"examples/event_handler_graphql/src/graphql_transformer_list_locations.json\"\n 
```\n\n### Split operations with Router\n\n???+ tip\n Read the **[considerations section for trade-offs between monolithic and micro functions](./api_gateway.md#considerations){target=\"_blank\"}**, as it's also applicable here.\n\nAs you grow the number of related GraphQL operations a given Lambda function should handle, it is natural to split them into separate files to ease maintenance - That's when the `Router` feature comes handy.\n\nLet's assume you have `split_operation.py` as your Lambda function entrypoint and routes in `split_operation_module.py`. This is how you'd use the `Router` feature.\n\n=== \"split_operation_module.py\"\n\n We import **Router** instead of **AppSyncResolver**; syntax wise is exactly the same.\n\n \t```python hl_lines=\"4 8 18 19\"\n --8<-- \"examples/event_handler_graphql/src/split_operation_module.py\"\n \t```\n\n=== \"split_operation.py\"\n\n \tWe use `include_router` method and include all `location` operations registered in the `router` global object.\n\n ```python hl_lines=\"1 11\"\n --8<-- \"examples/event_handler_graphql/src/split_operation.py\"\n ```\n\n#### Sharing contextual data\n\nYou can use `append_context` when you want to share data between your App and Router instances. Any data you share will be available via the `context` dictionary available in your App or Router context.\n\n???+ warning\n For safety, we clear the context after each invocation, except for async single resolvers. For these, use `app.context.clear()` before returning the function.\n\n???+ tip\n This can also be useful for middlewares injecting contextual information before a request is processed.\n\n=== \"split_route_append_context.py\"\n\n\t```python hl_lines=\"17\"\n --8<-- \"examples/event_handler_graphql/src/split_operation_append_context.py\"\n\t```\n\n=== \"split_route_append_context_module.py\"\n\n\t```python hl_lines=\"22\"\n --8<-- \"examples/event_handler_graphql/src/split_operation_append_context_module.py\"\n\t```\n\n### Batch processing\n\n```mermaid\nstateDiagram-v2\n direction LR\n LambdaInit: Lambda invocation\n EventHandler: Event Handler\n EventHandlerResolver: Route event based on GraphQL type/field keys\n Client: Client query (listPosts)\n YourLogic: Run your registered resolver function\n EventHandlerResolverBuilder: Verifies response is a list\n AppSyncBatchPostsResolution: query listPosts\n AppSyncBatchPostsItems: get all posts data <em>(id, title, relatedPosts)</em>\n AppSyncBatchRelatedPosts: get related posts <em>(id, title, relatedPosts)</em>\n AppSyncBatchAggregate: aggregate batch resolver event\n AppSyncBatchLimit: reached batch size limit\n LambdaResponse: Lambda response\n\n Client --> AppSyncBatchResolverMode\n state AppSyncBatchResolverMode {\n [*] --> AppSyncBatchPostsResolution\n AppSyncBatchPostsResolution --> AppSyncBatchPostsItems\n AppSyncBatchPostsItems --> AppSyncBatchRelatedPosts: <strong>N additional queries</strong>\n AppSyncBatchRelatedPosts --> AppSyncBatchRelatedPosts\n AppSyncBatchRelatedPosts --> AppSyncBatchAggregate\n AppSyncBatchRelatedPosts --> AppSyncBatchAggregate\n AppSyncBatchRelatedPosts --> AppSyncBatchAggregate\n AppSyncBatchAggregate --> AppSyncBatchLimit\n }\n\n AppSyncBatchResolverMode --> LambdaInit: 1x Invoke with N events\n LambdaInit --> EventHandler\n\n state EventHandler {\n [*] --> EventHandlerResolver: app.resolve(event, context)\n EventHandlerResolver --> YourLogic\n YourLogic --> EventHandlerResolverBuilder\n EventHandlerResolverBuilder --> LambdaResponse\n }\n```\n\n<em><center>Batch resolvers 
mechanics: visualizing N+1 in `relatedPosts` field.</center></em>\n\n#### Understanding N+1 problem\n\nWhen AWS AppSync has [batching enabled for Lambda Resolvers](https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-lambda-resolvers.html#advanced-use-case-batching){target=\"_blank\"}, it will group as many requests as possible before invoking your Lambda invocation. Effectively solving the [N+1 problem in GraphQL](https://aws.amazon.com/blogs/mobile/introducing-configurable-batching-size-for-aws-appsync-lambda-resolvers/){target=\"_blank\"}.\n\nFor example, say you have a query named `listPosts`. For each post, you also want `relatedPosts`. **Without batching**, AppSync will:\n\n1. Invoke your Lambda function to get the first post\n2. Invoke your Lambda function for each related post\n3. Repeat 1 until done\n\n```mermaid\nsequenceDiagram\n participant Client\n participant AppSync\n participant Lambda\n participant Database\n\n Client->>AppSync: GraphQL Query\n Note over Client,AppSync: query listPosts { <br/>id <br/>title <br/>relatedPosts { id title } <br/> }\n\n AppSync->>Lambda: Fetch N posts (listPosts)\n Lambda->>Database: Query\n Database->>Lambda: Posts\n Lambda-->>AppSync: Return posts (id, title)\n loop Fetch N related posts (relatedPosts)\n AppSync->>Lambda: Invoke function (N times)\n Lambda->>Database: Query\n Database-->>Lambda: Return related posts\n Lambda-->>AppSync: Return related posts\n end\n AppSync-->>Client: Return posts and their related posts\n```\n\n#### Batch resolvers\n\nYou can use `@batch_resolver` or `@async_batch_resolver` decorators to receive the entire batch of requests.\n\nIn this mode, you must return results in the same order of your batch items, so AppSync can associate the results back to the client.\n\n=== \"advanced_batch_resolver.py\"\n \t```python hl_lines=\"5 9 23\"\n --8<-- \"examples/event_handler_graphql/src/advanced_batch_resolver.py\"\n \t```\n\n 1. The entire batch is sent to the resolver. You need to iterate through it to process all records.\n 2. 
We use `post_id` as our unique identifier of the GraphQL request.\n\n=== \"advanced_batch_resolver_payload.json\"\n \t```json hl_lines=\"6 16 25 35 44 54\"\n --8<-- \"examples/event_handler_graphql/src/advanced_batch_resolver_payload.json\"\n \t```\n\n=== \"advanced_batch_query.graphql\"\n \t```typescript hl_lines=\"3 6\"\n --8<-- \"examples/event_handler_graphql/src/advanced_batch_query.graphql\"\n \t```\n\n##### Processing items individually\n\n```mermaid\nstateDiagram-v2\n direction LR\n LambdaInit: Lambda invocation\n EventHandler: Event Handler\n EventHandlerResolver: Route event based on GraphQL type/field keys\n Client: Client query (listPosts)\n YourLogic: Call your registered resolver function <strong>N times</strong>\n EventHandlerResolverErrorHandling: Gracefully <strong>handle errors</strong> with null response\n EventHandlerResolverBuilder: Aggregate responses to match batch size\n AppSyncBatchPostsResolution: query listPosts\n AppSyncBatchPostsItems: get all posts data <em>(id, title, relatedPosts)</em>\n AppSyncBatchRelatedPosts: get related posts <em>(id, title, relatedPosts)</em>\n AppSyncBatchAggregate: aggregate batch resolver event\n AppSyncBatchLimit: reached batch size limit\n LambdaResponse: Lambda response\n\n Client --> AppSyncBatchResolverMode\n state AppSyncBatchResolverMode {\n [*] --> AppSyncBatchPostsResolution\n AppSyncBatchPostsResolution --> AppSyncBatchPostsItems\n AppSyncBatchPostsItems --> AppSyncBatchRelatedPosts: <strong>N additional queries</strong>\n AppSyncBatchRelatedPosts --> AppSyncBatchRelatedPosts\n AppSyncBatchRelatedPosts --> AppSyncBatchAggregate\n AppSyncBatchRelatedPosts --> AppSyncBatchAggregate\n AppSyncBatchRelatedPosts --> AppSyncBatchAggregate\n AppSyncBatchAggregate --> AppSyncBatchLimit\n }\n\n AppSyncBatchResolverMode --> LambdaInit: 1x Invoke with N events\n LambdaInit --> EventHandler\n\n state EventHandler {\n [*] --> EventHandlerResolver: app.resolve(event, context)\n EventHandlerResolver --> YourLogic\n YourLogic --> EventHandlerResolverErrorHandling\n EventHandlerResolverErrorHandling --> EventHandlerResolverBuilder\n EventHandlerResolverBuilder --> LambdaResponse\n }\n```\n\n<em><center>Batch resolvers: reducing Lambda invokes but fetching data N times (similar to single resolver).</center></em>\n\nIn rare scenarios, you might want to process each item individually, trading ease of use for increased latency as you handle one batch item at a time.\n\nYou can toggle `aggregate` parameter in `@batch_resolver` decorator for your resolver function to be called N times.\n\n!!! note \"This does not resolve the N+1 problem, but shifts it to the Lambda runtime.\"\n\nIn this mode, we will:\n\n1. Aggregate each response we receive from your function in the exact order it receives\n2. Gracefully handle errors by adding `None` in the final response for each batch item that failed processing\n * You can customize `nul` or error responses back to the client in the [AppSync resolver mapping templates](https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-lambda-resolvers.html#returning-individual-errors){target=\"_blank\"}\n\n=== \"advanced_batch_resolver_individual.py\"\n \t```python hl_lines=\"5 9 19\"\n --8<-- \"examples/event_handler_graphql/src/advanced_batch_resolver_individual.py\"\n \t```\n\n 1. 
You need to disable the aggregated event by using `aggregate` flag.\n The resolver receives and processes each record one at a time.\n\n=== \"advanced_batch_resolver_payload.json\"\n \t```json hl_lines=\"6 16 25 35 44 54\"\n --8<-- \"examples/event_handler_graphql/src/advanced_batch_resolver_payload.json\"\n \t```\n\n=== \"advanced_batch_query.graphql\"\n \t```typescript hl_lines=\"3 6\"\n --8<-- \"examples/event_handler_graphql/src/advanced_batch_query.graphql\"\n \t```\n\n##### Raise on error\n\n```mermaid\nstateDiagram-v2\n direction LR\n LambdaInit: Lambda invocation\n EventHandler: Event Handler\n EventHandlerResolver: Route event based on GraphQL type/field keys\n Client: Client query (listPosts)\n YourLogic: Call your registered resolver function <strong>N times</strong>\n EventHandlerResolverErrorHandling: <strong>Error?</strong>\n EventHandlerResolverHappyPath: <strong>No error?</strong>\n EventHandlerResolverUnhappyPath: Propagate any exception\n EventHandlerResolverBuilder: Aggregate responses to match batch size\n AppSyncBatchPostsResolution: query listPosts\n AppSyncBatchPostsItems: get all posts data <em>(id, title, relatedPosts)</em>\n AppSyncBatchRelatedPosts: get related posts <em>(id, title, relatedPosts)</em>\n AppSyncBatchAggregate: aggregate batch resolver event\n AppSyncBatchLimit: reached batch size limit\n LambdaResponse: <strong>Lambda response</strong>\n LambdaErrorResponse: <strong>Lambda error</strong>\n\n Client --> AppSyncBatchResolverMode\n state AppSyncBatchResolverMode {\n [*] --> AppSyncBatchPostsResolution\n AppSyncBatchPostsResolution --> AppSyncBatchPostsItems\n AppSyncBatchPostsItems --> AppSyncBatchRelatedPosts: <strong>N additional queries</strong>\n AppSyncBatchRelatedPosts --> AppSyncBatchRelatedPosts\n AppSyncBatchRelatedPosts --> AppSyncBatchAggregate\n AppSyncBatchRelatedPosts --> AppSyncBatchAggregate\n AppSyncBatchRelatedPosts --> AppSyncBatchAggregate\n AppSyncBatchAggregate --> AppSyncBatchLimit\n }\n\n AppSyncBatchResolverMode --> LambdaInit: 1x Invoke with N events\n LambdaInit --> EventHandler\n\n state EventHandler {\n [*] --> EventHandlerResolver: app.resolve(event, context)\n EventHandlerResolver --> YourLogic\n YourLogic --> EventHandlerResolverHappyPath\n YourLogic --> EventHandlerResolverErrorHandling\n EventHandlerResolverHappyPath --> EventHandlerResolverBuilder\n EventHandlerResolverErrorHandling --> EventHandlerResolverUnhappyPath\n EventHandlerResolverUnhappyPath --> LambdaErrorResponse\n\n EventHandlerResolverBuilder --> LambdaResponse\n }\n```\n\n<em><center>Batch resolvers: reducing Lambda invokes but fetching data N times (similar to single resolver).</center></em>\n\nYou can toggle `raise_on_error` parameter in `@batch_resolver` to propagate any exception instead of gracefully returning `None` for a given batch item.\n\nThis is useful when you want to stop processing immediately in the event of an unhandled or unrecoverable exception.\n\n=== \"advanced_batch_resolver_handling_error.py\"\n \t```python hl_lines=\"5 9 19\"\n --8<-- \"examples/event_handler_graphql/src/advanced_batch_resolver_handling_error.py\"\n \t```\n\n 1. 
You can enable enable the error handling by using `raise_on_error` flag.\n\n=== \"advanced_batch_resolver_payload.json\"\n \t```json hl_lines=\"6 16 25 35 44 54\"\n --8<-- \"examples/event_handler_graphql/src/advanced_batch_resolver_payload.json\"\n \t```\n\n=== \"advanced_batch_query.graphql\"\n \t```typescript hl_lines=\"3 6\"\n --8<-- \"examples/event_handler_graphql/src/advanced_batch_query.graphql\"\n \t```\n\n#### Async batch resolver\n\nSimilar to `@batch_resolver` explained in [batch resolvers](#batch-resolvers), you can use `async_batch_resolver` to handle async functions.\n\n=== \"advanced_batch_async_resolver.py\"\n \t```python hl_lines=\"5 9 23\"\n --8<-- \"examples/event_handler_graphql/src/advanced_batch_async_resolver.py\"\n \t```\n\n 1. `async_batch_resolver` takes care of running and waiting for coroutine completion.\n\n=== \"advanced_batch_resolver_payload.json\"\n \t```json hl_lines=\"6 16 25 35 44 54\"\n --8<-- \"examples/event_handler_graphql/src/advanced_batch_resolver_payload.json\"\n \t```\n\n=== \"advanced_batch_query.graphql\"\n \t```typescript hl_lines=\"3 6\"\n --8<-- \"examples/event_handler_graphql/src/advanced_batch_query.graphql\"\n \t```\n\n## Testing your code\n\nYou can test your resolvers by passing a mocked or actual AppSync Lambda event that you're expecting.\n\nYou can use either `app.resolve(event, context)` or simply `app(event, context)`.\n\nHere's an example of how you can test your synchronous resolvers:\n\n=== \"assert_graphql_response.py\"\n\n ```python hl_lines=\"8 28 31\"\n --8<-- \"examples/event_handler_graphql/src/assert_graphql_response.py\"\n ```\n\n=== \"assert_graphql_response_module.py\"\n\n ```python hl_lines=\"11\"\n --8<-- \"examples/event_handler_graphql/src/assert_graphql_response_module.py\"\n ```\n\n=== \"assert_graphql_response.json\"\n\n ```json hl_lines=\"5\"\n --8<-- \"examples/event_handler_graphql/src/assert_graphql_response.json\"\n ```\n\nAnd an example for testing asynchronous resolvers. Note that this requires the `pytest-asyncio` package. This tests a specific async GraphQL operation.\n\n???+ note\n Alternatively, you can continue call `lambda_handler` function synchronously as it'd run `asyncio.run` to await for the coroutine to complete.\n\n=== \"assert_async_graphql_response.py\"\n\n ```python hl_lines=\"31\"\n --8<-- \"examples/event_handler_graphql/src/assert_async_graphql_response.py\"\n ```\n\n=== \"assert_async_graphql_response_module.py\"\n\n ```python hl_lines=\"14\"\n --8<-- \"examples/event_handler_graphql/src/assert_async_graphql_response_module.py\"\n ```\n\n=== \"assert_async_graphql_response.json\"\n\n ```json hl_lines=\"3 4\"\n --8<-- \"examples/event_handler_graphql/src/assert_async_graphql_response.json\"\n ```\n", "examples/event_handler_graphql/src/exception_handling_graphql.py": null}
|
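The AppSync event handler documentation embedded in the `files` field above pulls its code samples from external example files that are not inlined in this record. Purely as a self-contained sketch of the aggregated batch resolver behavior it describes (the `Query.relatedPosts` field and the `post_id` argument follow the documentation's own wording; the data and handler names are illustrative), it might look like:

```python
# Minimal sketch of an aggregated batch resolver; see the hedges in the lead-in above.
from aws_lambda_powertools.event_handler import AppSyncResolver

app = AppSyncResolver()

# Toy lookup table standing in for a real data source.
RELATED_POSTS = {"1": [{"post_id": "2"}], "2": [{"post_id": "1"}]}


@app.batch_resolver(type_name="Query", field_name="relatedPosts")
def related_posts(event) -> list:
    # With aggregate=True (the default), the whole batch arrives as one list of
    # AppSyncResolverEvent objects, and results must be returned in the same order
    # so AppSync can map them back to the originating requests.
    return [RELATED_POSTS.get(e.arguments.get("post_id")) for e in event]


def lambda_handler(event, context):
    return app.resolve(event, context)
```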
diff --git a/docs/core/event_handler/appsync.md b/docs/core/event_handler/appsync.md
index a2f29e5dba5..0c556dedfbf 100644
--- a/docs/core/event_handler/appsync.md
+++ b/docs/core/event_handler/appsync.md
@@ -288,6 +288,19 @@ You can use `append_context` when you want to share data between your App and Ro
--8<-- "examples/event_handler_graphql/src/split_operation_append_context_module.py"
```
+### Exception handling
+
+You can use the **`exception_handler`** decorator with any Python exception. This allows you to handle common exceptions, such as validation errors, outside your resolver.
+
+The `exception_handler` function also supports passing a list of exception types you wish to handle with one handler.
+
+```python hl_lines="5-7 11" title="Exception handling"
+--8<-- "examples/event_handler_graphql/src/exception_handling_graphql.py"
+```
+
+???+ warning
+ This is not supported when using async single resolvers.
+
### Batch processing
```mermaid
|
{"aws_lambda_powertools/event_handler/appsync.py": [{"type": "function", "name": "AppSyncResolver.exception_handler", "lines": [481, 504], "signature": "def exception_handler(self, exc_class: type[Exception] | list[type[Exception]]):", "doc": "A decorator function that registers a handler for one or more exception types.\n\nParameters\n----------\nexc_class (type[Exception] | list[type[Exception]])\n A single exception type or a list of exception types.\n\nReturns\n-------\nCallable:\n A decorator function that registers the exception handler."}, {"type": "function", "name": "AppSyncResolver.exception_handler.register_exception_handler", "lines": [496, 502], "signature": "def register_exception_handler(func: Callable):", "doc": ""}, {"type": "function", "name": "AppSyncResolver._lookup_exception_handler", "lines": [506, 523], "signature": "def _lookup_exception_handler(self, exp_type: type) -> Callable | None:", "doc": "Looks up the registered exception handler for the given exception type or its base classes.\n\nParameters\n----------\nexp_type (type):\n The exception type to look up the handler for.\n\nReturns\n-------\nCallable | None:\n The registered exception handler function if found, otherwise None."}], "examples/event_handler_graphql/src/exception_handling_graphql.py": [{"type": "function", "name": "handle_value_error", "lines": [7, 8], "signature": "def handle_value_error(ex: ValueError):", "doc": ""}, {"type": "function", "name": "create_something", "lines": [12, 13], "signature": "def create_something():", "doc": ""}, {"type": "function", "name": "lambda_handler", "lines": [16, 17], "signature": "def lambda_handler(event, context):", "doc": ""}]}
| null |
["tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_exception_handler_with_batch_resolver_and_raise_exception", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_exception_handler_with_batch_resolver_and_no_raise_exception", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_exception_handler_with_single_resolver"]
|
["tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_batch_processing_with_related_events_one_at_time", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_batch_processing_with_simple_queries_one_at_time", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_batch_processing_with_raise_on_exception_one_at_time", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_async_resolve_batch_processing_with_raise_on_exception_one_at_time", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_batch_processing_without_exception_one_at_time", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_async_batch_processing_without_exception_one_at_time", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolver_batch_with_resolver_not_found_one_at_time", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolver_batch_with_sync_and_async_resolver_at_same_time", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_batch_resolver_with_router", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_async_batch_processing", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_async_batch_and_sync_singular_processing", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_async_resolver_include_batch_resolver", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_batch_processing_with_simple_queries_with_aggregate", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_async_batch_processing_with_simple_queries_with_aggregate", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_batch_processing_with_aggregate_and_returning_a_non_list", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_async_batch_processing_with_aggregate_and_returning_a_non_list", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_sync_batch_processing_with_aggregate_and_without_return", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_async_batch_processing_with_aggregate_and_without_return", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_include_router_access_batch_current_event", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_app_access_batch_current_event", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_context_is_accessible_in_sync_batch_resolver", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_context_is_accessible_in_async_batch_resolver", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_direct_resolver", 
"tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_amplify_resolver", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_resolver_no_params", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_resolver_value_error", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_resolver_yield", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_resolver_multiple_mappings", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_resolver_async", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_resolve_custom_data_model", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_resolver_include_resolver", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_append_context", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_router_append_context", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_route_context_is_cleared_after_resolve", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_router_has_access_to_app_context", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_include_router_merges_context", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_include_router_access_current_event", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_app_access_current_event", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_route_context_is_not_cleared_after_resolve_async", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_route_context_is_manually_cleared_after_resolve_async"]
|
d1a58cdd12dfcac1e9ce022fe8a29f69ea6007b4
|
{"first_commit_time": 1732016649.0, "pr_title": "feat(event_handler): add exception handling mechanism for AppSyncResolver", "pr_body": "<!-- markdownlint-disable MD041 MD043 -->\r\n**Issue number:** #2184\r\n\r\n## Summary\r\n\r\n### Changes\r\n\r\nThis PR introduces a new feature to handle exceptions in `AppSyncResolver` using a standard error handling mechanism, similar to the one used for HTTP Resolvers. Currently, there is no built-in support for exception handling in `AppSyncResolver`, and this PR aims to address that gap.\r\n\r\n**Changes**\r\n\r\n1. **Added exception catching function**: A new decorator `@app.exception_handler` has been implemented to catch and handle exceptions raised during the execution of AppSync resolvers. This decorator allows developers to define custom error handling logic based on the type of exception raised.\r\n\r\n2. **Added tests**: Test cases have been added to ensure the proper functioning of the exception handling mechanism and to maintain code quality.\r\n\r\n3. **Added documentation**: The usage and implementation details of the `@app.exception_handler` decorator have been documented to provide guidance for developers who wish to utilize this new feature.\r\n\r\n**Note**\r\n\r\nIt's important to note that this exception handling mechanism is not supported when using single async resolvers.\r\n\r\n\r\n### User experience\r\n\r\n```python\r\nfrom aws_lambda_powertools.event_handler import AppSyncResolver\r\n\r\napp = AppSyncResolver()\r\n\r\[email protected]_handler(ValueError)\r\ndef handle_value_error(ex: ValueError):\r\n return {\"message\": \"error\"}\r\n\r\[email protected](field_name=\"createSomething\")\r\ndef create_something(id: str):\r\n raise ValueError(\"Error\")\r\n\r\ndef lambda_handler(event, context):\r\n return app.resolve(event, context)\r\n```\r\n\r\n## Checklist\r\n\r\nIf your change doesn't seem to apply, please leave them unchecked.\r\n\r\n* [x] [Meet tenets criteria](https://docs.powertools.aws.dev/lambda/python/#tenets)\r\n* [x] I have performed a self-review of this change\r\n* [x] Changes have been tested\r\n* [x] Changes are documented\r\n* [x] PR title follows [conventional commit semantics](https://github.com/aws-powertools/powertools-lambda-python/blob/develop/.github/semantic.yml)\r\n\r\n<details>\r\n<summary>Is this a breaking change?</summary>\r\n\r\n**RFC issue number**:\r\n\r\nChecklist:\r\n\r\n* [ ] Migration process documented\r\n* [ ] Implement warnings (if it can live side by side)\r\n\r\n</details>\r\n\r\n## Acknowledgment\r\n\r\nBy submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.\r\n\r\n**Disclaimer**: We value your time and bandwidth. 
As such, any pull requests created on non-triaged issues might not be successful.\r\n", "pr_timeline": [{"time": 1732025370.0, "comment": "## [](https://sonarcloud.io/dashboard?id=aws-powertools_powertools-lambda-python&pullRequest=5588) **Quality Gate passed** \nIssues \n [0 New issues](https://sonarcloud.io/project/issues?id=aws-powertools_powertools-lambda-python&pullRequest=5588&issueStatuses=OPEN,CONFIRMED&sinceLeakPeriod=true) \n [0 Accepted issues](https://sonarcloud.io/project/issues?id=aws-powertools_powertools-lambda-python&pullRequest=5588&issueStatuses=ACCEPTED)\n\nMeasures \n [0 Security Hotspots](https://sonarcloud.io/project/security_hotspots?id=aws-powertools_powertools-lambda-python&pullRequest=5588&issueStatuses=OPEN,CONFIRMED&sinceLeakPeriod=true) \n [0.0% Coverage on New Code](https://sonarcloud.io/component_measures?id=aws-powertools_powertools-lambda-python&pullRequest=5588&metric=new_coverage&view=list) \n [0.0% Duplication on New Code](https://sonarcloud.io/component_measures?id=aws-powertools_powertools-lambda-python&pullRequest=5588&metric=new_duplicated_lines_density&view=list) \n \n[See analysis details on SonarQube Cloud](https://sonarcloud.io/dashboard?id=aws-powertools_powertools-lambda-python&pullRequest=5588)\n\n"}, {"time": 1732028594.0, "comment": "## [Codecov](https://app.codecov.io/gh/aws-powertools/powertools-lambda-python/pull/5588?dropdown=coverage&src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws-powertools) Report\nAll modified and coverable lines are covered by tests :white_check_mark:\n> Project coverage is 96.16%. Comparing base [(`150623b`)](https://app.codecov.io/gh/aws-powertools/powertools-lambda-python/commit/150623b5884070e9ec3a32a91e72486a1818c095?dropdown=coverage&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws-powertools) to head [(`d4950b2`)](https://app.codecov.io/gh/aws-powertools/powertools-lambda-python/commit/d4950b2f9f8ef04f34dd6a06ca2cb6b199f45a72?dropdown=coverage&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws-powertools).\n> Report is 3 commits behind head on develop.\n\n<details><summary>Additional details and impacted files</summary>\n\n\n```diff\n@@ Coverage Diff @@\n## develop #5588 +/- ##\n========================================\n Coverage 96.16% 96.16% \n========================================\n Files 229 229 \n Lines 10836 10853 +17 \n Branches 2015 2018 +3 \n========================================\n+ Hits 10420 10437 +17 \n Misses 327 327 \n Partials 89 89 \n```\n\n</details>\n\n[:umbrella: View full report in Codecov by Sentry](https://app.codecov.io/gh/aws-powertools/powertools-lambda-python/pull/5588?dropdown=coverage&src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws-powertools). \n:loudspeaker: Have feedback on the report? [Share it here](https://about.codecov.io/codecov-pr-comment-feedback/?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws-powertools).\n\n----\n\ud83d\udea8 Try these New Features:\n\n- [Flaky Tests Detection](https://docs.codecov.com/docs/test-result-ingestion-beta) - Detect and resolve failed and flaky tests"}], "issues": {}}
|
aws/sagemaker-python-sdk
| 3,423
|
https://github.com/aws/sagemaker-python-sdk/pull/3423
|
aws__sagemaker-python-sdk-3423
|
[]
|
8dc17fb89fafd3609e14cb1539ae2911be510f2e
|
diff --git a/doc/frameworks/pytorch/using_pytorch.rst b/doc/frameworks/pytorch/using_pytorch.rst
index 725f34aa5a..f56085f756 100644
--- a/doc/frameworks/pytorch/using_pytorch.rst
+++ b/doc/frameworks/pytorch/using_pytorch.rst
@@ -293,6 +293,121 @@ using two ``ml.p4d.24xlarge`` instances:
pt_estimator.fit("s3://bucket/path/to/training/data")
+.. _distributed-pytorch-training-on-trainium:
+
+Distributed Training with PyTorch Neuron on Trn1 instances
+==========================================================
+
+SageMaker Training supports Amazon EC2 Trn1 instances powered by
+`AWS Trainium <https://aws.amazon.com/machine-learning/trainium/>`_,
+the second-generation purpose-built machine learning accelerator from AWS.
+Each Trn1 instance consists of up to 16 Trainium devices, and each
+Trainium device consists of two NeuronCores. For details, see `Trainium Architecture
+<https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/arch/neuron-hardware/trn1-arch.html#trainium-architecture>`_
+in the *AWS Neuron Documentation*.
+
+You can run distributed training jobs on Trn1 instances.
+SageMaker supports the ``xla`` package through ``torchrun``.
+With this, you do not need to manually pass ``RANK``,
+``WORLD_SIZE``, ``MASTER_ADDR``, and ``MASTER_PORT``.
+You can launch the training job using the
+:class:`sagemaker.pytorch.estimator.PyTorch` estimator class
+with the ``torch_distributed`` option as the distribution strategy.
+
+.. note::
+
+ This ``torch_distributed`` support is available
+ in the AWS Deep Learning Containers for PyTorch Neuron starting v1.11.0.
+ To find a complete list of supported versions of PyTorch Neuron, see
+ `Neuron Containers <https://github.com/aws/deep-learning-containers/blob/master/available_images.md#neuron-containers>`_
+ in the *AWS Deep Learning Containers GitHub repository*.
+
+.. note::
+
+ SageMaker Debugger is currently not supported with Trn1 instances.
+
+Adapt Your Training Script to Initialize with the XLA backend
+-------------------------------------------------------------
+
+To initialize distributed training in your script, call
+`torch.distributed.init_process_group
+<https://pytorch.org/docs/master/distributed.html#torch.distributed.init_process_group>`_
+with the ``xla`` backend as shown below.
+
+.. code:: python
+
+ import torch.distributed as dist
+
+ dist.init_process_group('xla')
+
+SageMaker takes care of the ``'MASTER_ADDR'`` and ``'MASTER_PORT'`` environment variables for you via ``torchrun``.
+
+For detailed documentation about modifying your training script for Trainium, see `Multi-worker data-parallel MLP training using torchrun <https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/tutorials/training/mlp.html?highlight=torchrun#multi-worker-data-parallel-mlp-training-using-torchrun>`_ in the *AWS Neuron Documentation*.
+
+**Currently supported backends:**
+
+- ``xla`` for Trainium (Trn1) instances
+
+For up-to-date information on supported backends for Trn1 instances, see `AWS Neuron Documentation <https://awsdocs-neuron.readthedocs-hosted.com/en/latest/index.html>`_.
+
+Launching a Distributed Training Job on Trainium
+------------------------------------------------
+
+You can run multi-node distributed PyTorch training jobs on Trn1 instances using the
+:class:`sagemaker.pytorch.estimator.PyTorch` estimator class.
+With ``instance_count=1``, the estimator submits a
+single-node training job to SageMaker; with ``instance_count`` greater
+than one, a multi-node training job is launched.
+
+With the ``torch_distributed`` option, the SageMaker PyTorch estimator runs a SageMaker
+training container for PyTorch Neuron, sets up the environment, and launches
+the training job using the ``torchrun`` command on each worker with the given information.
+
+**Examples**
+
+The following examples show how to run a PyTorch training job using ``torch_distributed`` in SageMaker
+on one ``ml.trn1.2xlarge`` instance and on two ``ml.trn1.32xlarge`` instances:
+
+.. code:: python
+
+ from sagemaker.pytorch import PyTorch
+
+ pt_estimator = PyTorch(
+ entry_point="train_torch_distributed.py",
+ role="SageMakerRole",
+ framework_version="1.11.0",
+ py_version="py38",
+ instance_count=1,
+ instance_type="ml.trn1.2xlarge",
+ distribution={
+ "torch_distributed": {
+ "enabled": True
+ }
+ }
+ )
+
+ pt_estimator.fit("s3://bucket/path/to/training/data")
+
+.. code:: python
+
+ from sagemaker.pytorch import PyTorch
+
+ pt_estimator = PyTorch(
+ entry_point="train_torch_distributed.py",
+ role="SageMakerRole",
+ framework_version="1.11.0",
+ py_version="py38",
+ instance_count=2,
+ instance_type="ml.trn1.32xlarge",
+ distribution={
+ "torch_distributed": {
+ "enabled": True
+ }
+ }
+ )
+
+ pt_estimator.fit("s3://bucket/path/to/training/data")
+
*********************
Deploy PyTorch Models
*********************
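The documentation added above only shows the ``dist.init_process_group('xla')`` call in isolation. As a rough, non-authoritative sketch of where that call sits in a complete ``train_torch_distributed.py`` entry point, assuming the ``torch_xla`` package shipped in the PyTorch Neuron training container; the model, data, and training loop below are illustrative placeholders, not part of this PR:

import torch
import torch.distributed as dist
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_backend  # noqa: F401  registers the "xla" backend


def main():
    # torchrun (launched via the torch_distributed option) sets RANK, WORLD_SIZE,
    # MASTER_ADDR and MASTER_PORT, so nothing needs to be wired up by hand here.
    dist.init_process_group("xla")
    device = xm.xla_device()
    rank, world_size = dist.get_rank(), dist.get_world_size()

    # Placeholder model and data; a real script would read inputs from the SM_CHANNEL_* paths.
    model = torch.nn.Linear(16, 2).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(10):
        inputs = torch.randn(32, 16, device=device)
        labels = torch.randint(0, 2, (32,), device=device)
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), labels)
        loss.backward()
        xm.optimizer_step(optimizer)  # all-reduces gradients across workers, then steps
        xm.mark_step()                # executes the pending XLA graph

    if rank == 0:
        print(f"finished on {world_size} workers, last loss {loss.item():.4f}")


if __name__ == "__main__":
    main()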
diff --git a/src/sagemaker/image_uri_config/pytorch-neuron.json b/src/sagemaker/image_uri_config/pytorch-neuron.json
new file mode 100644
index 0000000000..b116a8a36b
--- /dev/null
+++ b/src/sagemaker/image_uri_config/pytorch-neuron.json
@@ -0,0 +1,41 @@
+{
+ "training": {
+ "processors": ["trn"],
+ "version_aliases": {"1.11": "1.11.0"},
+ "versions": {
+ "1.11.0": {
+ "py_versions": ["py38"],
+ "repository": "pytorch-training-neuron",
+ "registries": {
+ "af-south-1": "626614931356",
+ "ap-east-1": "871362719292",
+ "ap-northeast-1": "763104351884",
+ "ap-northeast-2": "763104351884",
+ "ap-northeast-3": "364406365360",
+ "ap-south-1": "763104351884",
+ "ap-southeast-1": "763104351884",
+ "ap-southeast-2": "763104351884",
+ "ca-central-1": "763104351884",
+ "cn-north-1": "727897471807",
+ "cn-northwest-1": "727897471807",
+ "eu-central-1": "763104351884",
+ "eu-north-1": "763104351884",
+ "eu-west-1": "763104351884",
+ "eu-west-2": "763104351884",
+ "eu-west-3": "763104351884",
+ "eu-south-1": "692866216735",
+ "me-south-1": "217643126080",
+ "sa-east-1": "763104351884",
+ "us-east-1": "763104351884",
+ "us-east-2": "763104351884",
+ "us-gov-west-1": "442386744353",
+ "us-iso-east-1": "886529160074",
+ "us-west-1": "763104351884",
+ "us-west-2": "763104351884"
+ },
+ "container_version": {"trn": "ubuntu20.04"},
+ "sdk_versions": ["sdk2.4.0"]
+ }
+ }
+ }
+ }
\ No newline at end of file
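As a hedged illustration of what this registry enables, the sketch below shows ``image_uris.retrieve`` resolving a PyTorch Neuron training image for a Trn1 instance type. The region, version, and the exact URI in the trailing comment are assumptions derived from this config and the accompanying unit tests, not captured output:

from sagemaker import image_uris

uri = image_uris.retrieve(
    framework="pytorch",
    region="us-west-2",
    version="1.11.0",
    py_version="py38",
    instance_type="ml.trn1.2xlarge",
    image_scope="training",
)
print(uri)
# Expected to resolve to something like:
# 763104351884.dkr.ecr.us-west-2.amazonaws.com/pytorch-training-neuron:1.11.0-neuron-py38-sdk2.4.0-ubuntu20.04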
diff --git a/src/sagemaker/image_uris.py b/src/sagemaker/image_uris.py
index fa00fe873c..b961ee0c4e 100644
--- a/src/sagemaker/image_uris.py
+++ b/src/sagemaker/image_uris.py
@@ -33,6 +33,7 @@
HUGGING_FACE_FRAMEWORK = "huggingface"
XGBOOST_FRAMEWORK = "xgboost"
SKLEARN_FRAMEWORK = "sklearn"
+TRAINIUM_ALLOWED_FRAMEWORKS = "pytorch"
@override_pipeline_parameter_var
@@ -150,11 +151,12 @@ def retrieve(
)
else:
_framework = framework
- if framework == HUGGING_FACE_FRAMEWORK:
+ if framework == HUGGING_FACE_FRAMEWORK or framework in TRAINIUM_ALLOWED_FRAMEWORKS:
inference_tool = _get_inference_tool(inference_tool, instance_type)
if inference_tool == "neuron":
_framework = f"{framework}-{inference_tool}"
final_image_scope = _get_final_image_scope(framework, instance_type, image_scope)
+    _validate_for_supported_frameworks_and_instance_type(framework, instance_type)
config = _config_for_framework_and_scope(_framework, final_image_scope, accelerator_type)
original_version = version
@@ -186,6 +188,12 @@ def retrieve(
if version_config.get("container_version"):
container_version = version_config["container_version"][processor]
+ # Append sdk version in case of trainium instances
+ if repo in ["pytorch-training-neuron"]:
+ if not sdk_version:
+ sdk_version = _get_latest_versions(version_config["sdk_versions"])
+ container_version = sdk_version + "-" + container_version
+
if framework == HUGGING_FACE_FRAMEWORK:
pt_or_tf_version = (
re.compile("^(pytorch|tensorflow)(.*)$").match(base_framework_version).group(2)
@@ -344,6 +352,16 @@ def _config_for_framework_and_scope(framework, image_scope, accelerator_type=Non
return config if "scope" in config else config[image_scope]
+def _validate_for_supported_frameworks_and_instance_type(framework, instance_type):
+    """Validate that the framework is supported for the given instance type."""
+    if (
+        instance_type is not None
+        and "trn" in instance_type
+        and framework not in TRAINIUM_ALLOWED_FRAMEWORKS
+    ):
+        _validate_framework(framework, TRAINIUM_ALLOWED_FRAMEWORKS, "framework")
+
+
def config_for_framework(framework):
"""Loads the JSON config for the given framework."""
fname = os.path.join(os.path.dirname(__file__), "image_uri_config", "{}.json".format(framework))
@@ -371,7 +389,7 @@ def _get_inference_tool(inference_tool, instance_type):
"""Extract the inference tool name from instance type."""
if not inference_tool:
instance_type_family = _get_instance_type_family(instance_type)
- if instance_type_family.startswith("inf"):
+ if instance_type_family.startswith("inf") or instance_type_family.startswith("trn"):
return "neuron"
return inference_tool
@@ -460,6 +478,8 @@ def _processor(instance_type, available_processors, serverless_inference_config=
processor = family
elif family.startswith("inf"):
processor = "inf"
+ elif family.startswith("trn"):
+ processor = "trn"
elif family[0] in ("g", "p"):
processor = "gpu"
else:
@@ -523,6 +543,15 @@ def _validate_arg(arg, available_options, arg_name):
)
+def _validate_framework(framework, allowed_frameworks, arg_name):
+ """Checks if the framework is in the allowed frameworks, and raises a ``ValueError`` if not."""
+ if framework not in allowed_frameworks:
+ raise ValueError(
+ f"Unsupported {arg_name}: {framework}. "
+ f"Supported {arg_name}(s) for trainium instances: {allowed_frameworks}."
+ )
+
+
def _format_tag(tag_prefix, processor, py_version, container_version, inference_tool=None):
"""Creates a tag for the image URI."""
if inference_tool:
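With the new ``_validate_for_supported_frameworks_and_instance_type`` / ``_validate_framework`` pair above, requesting a non-PyTorch image for a ``trn`` instance type should now fail fast. A small sketch of the expected behaviour, with illustrative framework and version values:

from sagemaker import image_uris

try:
    image_uris.retrieve(
        framework="tensorflow",
        region="us-west-2",
        version="2.9",
        instance_type="ml.trn1.2xlarge",
        image_scope="training",
    )
except ValueError as err:
    # Expected to be along the lines of:
    # "Unsupported framework: tensorflow. Supported framework(s) for trainium instances: pytorch."
    print(err)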
|
diff --git a/tests/conftest.py b/tests/conftest.py
index 249f25cfcb..e92d98112b 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -358,6 +358,11 @@ def huggingface_neuron_latest_inference_py_version():
return "py37"
[email protected](scope="module")
+def pytorch_neuron_version():
+ return "1.11"
+
+
@pytest.fixture(scope="module")
def pytorch_eia_py_version():
return "py3"
diff --git a/tests/unit/sagemaker/image_uris/expected_uris.py b/tests/unit/sagemaker/image_uris/expected_uris.py
index a49ff57936..0574536754 100644
--- a/tests/unit/sagemaker/image_uris/expected_uris.py
+++ b/tests/unit/sagemaker/image_uris/expected_uris.py
@@ -30,6 +30,24 @@ def framework_uri(repo, fw_version, account, py_version=None, processor="cpu", r
return IMAGE_URI_FORMAT.format(account, region, domain, repo, tag)
+def neuron_framework_uri(
+ repo,
+ fw_version,
+ account,
+ py_version=None,
+ inference_tool="neuron",
+ region=REGION,
+ sdk_version="sdk2.4.0",
+ container_version="ubuntu20.04",
+):
+ domain = ALTERNATE_DOMAINS.get(region, DOMAIN)
+ tag = "-".join(
+ x for x in (fw_version, inference_tool, py_version, sdk_version, container_version) if x
+ )
+
+ return IMAGE_URI_FORMAT.format(account, region, domain, repo, tag)
+
+
def algo_uri(algo, account, region, version=1):
domain = ALTERNATE_DOMAINS.get(region, DOMAIN)
return IMAGE_URI_FORMAT.format(account, region, domain, algo, version)
diff --git a/tests/unit/sagemaker/image_uris/test_trainium.py b/tests/unit/sagemaker/image_uris/test_trainium.py
new file mode 100644
index 0000000000..d2b7d4a949
--- /dev/null
+++ b/tests/unit/sagemaker/image_uris/test_trainium.py
@@ -0,0 +1,74 @@
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"). You
+# may not use this file except in compliance with the License. A copy of
+# the License is located at
+#
+# http://aws.amazon.com/apache2.0/
+#
+# or in the "license" file accompanying this file. This file is
+# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
+# ANY KIND, either express or implied. See the License for the specific
+# language governing permissions and limitations under the License.
+from __future__ import absolute_import
+
+from sagemaker import image_uris
+from tests.unit.sagemaker.image_uris import expected_uris
+
+ACCOUNTS = {
+ "af-south-1": "626614931356",
+ "ap-east-1": "871362719292",
+ "ap-northeast-1": "763104351884",
+ "ap-northeast-2": "763104351884",
+ "ap-northeast-3": "364406365360",
+ "ap-south-1": "763104351884",
+ "ap-southeast-1": "763104351884",
+ "ap-southeast-2": "763104351884",
+ "ca-central-1": "763104351884",
+ "cn-north-1": "727897471807",
+ "cn-northwest-1": "727897471807",
+ "eu-central-1": "763104351884",
+ "eu-north-1": "763104351884",
+ "eu-west-1": "763104351884",
+ "eu-west-2": "763104351884",
+ "eu-west-3": "763104351884",
+ "eu-south-1": "692866216735",
+ "me-south-1": "217643126080",
+ "sa-east-1": "763104351884",
+ "us-east-1": "763104351884",
+ "us-east-2": "763104351884",
+ "us-gov-west-1": "442386744353",
+ "us-iso-east-1": "886529160074",
+ "us-west-1": "763104351884",
+ "us-west-2": "763104351884",
+}
+
+TRAINIUM_REGIONS = ACCOUNTS.keys()
+
+
+def _expected_trainium_framework_uri(
+ framework, version, region="us-west-2", inference_tool="neuron"
+):
+ return expected_uris.neuron_framework_uri(
+ "{}-neuron".format(framework),
+ fw_version=version,
+ py_version="py38",
+ account=ACCOUNTS[region],
+ region=region,
+ inference_tool=inference_tool,
+ )
+
+
+def _test_trainium_framework_uris(framework, version):
+ for region in TRAINIUM_REGIONS:
+ uri = image_uris.retrieve(
+ framework, region, instance_type="ml.trn1.xlarge", version=version
+ )
+ expected = _expected_trainium_framework_uri(
+ "{}-training".format(framework), version, region=region, inference_tool="neuron"
+ )
+ assert expected == uri
+
+
+def test_trainium_pytorch(pytorch_neuron_version):
+ _test_trainium_framework_uris("pytorch", pytorch_neuron_version)
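For reference, the ``neuron_framework_uri`` helper in the test patch above builds the expected image tag by joining the non-empty components with hyphens. A quick sketch of that composition, with illustrative values:

# Components: framework version, inference tool, Python version, SDK version, container version.
parts = ("1.11.0", "neuron", "py38", "sdk2.4.0", "ubuntu20.04")
tag = "-".join(part for part in parts if part)
print(tag)  # 1.11.0-neuron-py38-sdk2.4.0-ubuntu20.04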
| 2022-10-18T21:48:25
|
{}
|
{"doc/frameworks/pytorch/using_pytorch.rst": "#########################################\nUse PyTorch with the SageMaker Python SDK\n#########################################\n\nWith PyTorch Estimators and Models, you can train and host PyTorch models on Amazon SageMaker.\n\nFor information about supported versions of PyTorch, see the `AWS documentation <https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/deep-learning-containers-images.html>`__.\n\nWe recommend that you use the latest supported version because that's where we focus our development efforts.\n\nYou can visit the PyTorch repository at https://github.com/pytorch/pytorch.\n\n.. contents::\n\n**************************\nTrain a Model with PyTorch\n**************************\n\nTo train a PyTorch model by using the SageMaker Python SDK:\n\n.. |create pytorch estimator| replace:: Create a ``sagemaker.pytorch.PyTorch`` Estimator\n.. _create pytorch estimator: #create-an-estimator\n\n.. |call fit| replace:: Call the estimator's ``fit`` method\n.. _call fit: #call-the-fit-method\n\n1. `Prepare a training script <#prepare-a-pytorch-training-script>`_\n2. |create pytorch estimator|_\n3. |call fit|_\n\nPrepare a PyTorch Training Script\n=================================\n\nYour PyTorch training script must be a Python 3.6 compatible source file.\n\nPrepare your script in a separate source file than the notebook, terminal session, or source file you're\nusing to submit the script to SageMaker via a ``PyTorch`` Estimator. This will be discussed in further detail below.\n\nThe training script is very similar to a training script you might run outside of SageMaker, but you\ncan access useful properties about the training environment through various environment variables.\nFor example:\n\n* ``SM_NUM_GPUS``: An integer representing the number of GPUs available to the host.\n* ``SM_MODEL_DIR``: A string representing the path to the directory to write model artifacts to.\n These artifacts are uploaded to S3 for model hosting.\n* ``SM_OUTPUT_DATA_DIR``: A string representing the filesystem path to write output artifacts to. Output artifacts may\n include checkpoints, graphs, and other files to save, not including model artifacts. These artifacts are compressed\n and uploaded to S3 to the same S3 prefix as the model artifacts.\n* ``SM_CHANNEL_XXXX``: A string that represents the path to the directory that contains the input data for the specified channel.\n For example, if you specify two input channels in the PyTorch estimator's ``fit`` call, named 'train' and 'test',\n the environment variables ``SM_CHANNEL_TRAIN`` and ``SM_CHANNEL_TEST`` are set.\n\nA typical training script loads data from the input channels, configures training with hyperparameters, trains a model,\nand saves a model to ``model_dir`` so that it can be hosted later. Hyperparameters are passed to your script as arguments\nand can be retrieved with an argparse.ArgumentParser instance. For example, a training script might start\nwith the following:\n\n.. 
code:: python\n\n import argparse\n import os\n\n if __name__ =='__main__':\n\n parser = argparse.ArgumentParser()\n\n # hyperparameters sent by the client are passed as command-line arguments to the script.\n parser.add_argument('--epochs', type=int, default=50)\n parser.add_argument('--batch-size', type=int, default=64)\n parser.add_argument('--learning-rate', type=float, default=0.05)\n parser.add_argument('--use-cuda', type=bool, default=False)\n\n # Data, model, and output directories\n parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])\n parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])\n parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])\n parser.add_argument('--test', type=str, default=os.environ['SM_CHANNEL_TEST'])\n\n args, _ = parser.parse_known_args()\n\n # ... load from args.train and args.test, train a model, write model to args.model_dir.\n\nBecause SageMaker imports your training script, you should put your training code in a main guard\n(``if __name__=='__main__':``) if you are using the same script to host your model, so that SageMaker does not\ninadvertently run your training code at the wrong point in execution.\n\nNote that SageMaker doesn't support argparse actions. If you want to use, for example, boolean hyperparameters,\nyou need to specify `type` as `bool` in your script and provide an explicit `True` or `False` value for this hyperparameter\nwhen instantiating PyTorch Estimator.\n\nFor more on training environment variables, see the `SageMaker Training Toolkit <https://github.com/aws/sagemaker-training-toolkit/blob/master/ENVIRONMENT_VARIABLES.md>`_.\n\nSave the Model\n--------------\n\nIn order to save your trained PyTorch model for deployment on SageMaker, your training script should save your model\nto a certain filesystem path called ``model_dir``. This value is accessible through the environment variable\n``SM_MODEL_DIR``. The following code demonstrates how to save a trained PyTorch model named ``model`` as\n``model.pth`` at the :\n\n.. code:: python\n\n import argparse\n import os\n import torch\n\n if __name__=='__main__':\n # default to the value in environment variable `SM_MODEL_DIR`. Using args makes the script more portable.\n parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])\n args, _ = parser.parse_known_args()\n\n # ... train `model`, then save it to `model_dir`\n with open(os.path.join(args.model_dir, 'model.pth'), 'wb') as f:\n torch.save(model.state_dict(), f)\n\nAfter your training job is complete, SageMaker compresses and uploads the serialized model to S3, and your model data\nwill be available in the S3 ``output_path`` you specified when you created the PyTorch Estimator.\n\nIf you are using Elastic Inference, you must convert your models to the TorchScript format and use ``torch.jit.save`` to save the model.\nFor example:\n\n.. code:: python\n\n import os\n import torch\n\n # ... 
train `model`, then save it to `model_dir`\n model_dir = os.path.join(model_dir, \"model.pt\")\n torch.jit.save(model, model_dir)\n\nUsing third-party libraries\n---------------------------\n\nWhen running your training script on SageMaker, it will have access to some pre-installed third-party libraries including ``torch``, ``torchvision``, and ``numpy``.\nFor more information on the runtime environment, including specific package versions, see `SageMaker PyTorch Docker containers <https://github.com/aws/deep-learning-containers/tree/master/pytorch>`_.\n\nIf there are other packages you want to use with your script, you can include a ``requirements.txt`` file in the same directory as your training script to install other dependencies at runtime. Both ``requirements.txt`` and your training script should be put in the same folder. You must specify this folder in ``source_dir`` argument when creating PyTorch estimator.\n\nThe function of installing packages using ``requirements.txt`` is supported for all PyTorch versions during training. When serving a PyTorch model, support for this function varies with PyTorch versions. For PyTorch 1.3.1 or newer, ``requirements.txt`` must be under folder ``code``. The SageMaker PyTorch Estimator will automatically save ``code`` in ``model.tar.gz`` after training (assuming you set up your script and ``requirements.txt`` correctly as stipulated in the previous paragraph). In the case of bringing your own trained model for deployment, you must save ``requirements.txt`` under folder ``code`` in ``model.tar.gz`` yourself or specify it through ``dependencies``. For PyTorch 1.2.0, ``requirements.txt`` is not supported for inference. For PyTorch 0.4.0 to 1.1.0, ``requirements.txt`` must be in ``source_dir``.\n\nA ``requirements.txt`` file is a text file that contains a list of items that are installed by using ``pip install``. You can also specify the version of an item to install. For information about the format of a ``requirements.txt`` file, see `Requirements Files <https://pip.pypa.io/en/stable/user_guide/#requirements-files>`__ in the pip documentation.\n\nCreate an Estimator\n===================\n\nYou run PyTorch training scripts on SageMaker by creating ``PyTorch`` Estimators.\nSageMaker training of your script is invoked when you call ``fit`` on a ``PyTorch`` Estimator.\nThe following code sample shows how you train a custom PyTorch script \"pytorch-train.py\", passing\nin three hyperparameters ('epochs', 'batch-size', and 'learning-rate'), and using two input channel\ndirectories ('train' and 'test').\n\n.. code:: python\n\n pytorch_estimator = PyTorch('pytorch-train.py',\n instance_type='ml.p3.2xlarge',\n instance_count=1,\n framework_version='1.8.0',\n py_version='py3',\n hyperparameters = {'epochs': 20, 'batch-size': 64, 'learning-rate': 0.1})\n pytorch_estimator.fit({'train': 's3://my-data-bucket/path/to/my/training/data',\n 'test': 's3://my-data-bucket/path/to/my/test/data'})\n\n\n\n\nCall the fit Method\n===================\n\nYou start your training script by calling ``fit`` on a ``PyTorch`` Estimator. ``fit`` takes both required and optional\narguments.\n\nfit Required Arguments\n----------------------\n\n- ``inputs``: This can take one of the following forms: A string\n S3 URI, for example ``s3://my-bucket/my-training-data``. In this\n case, the S3 objects rooted at the ``my-training-data`` prefix will\n be available in the default ``train`` channel. A dict from\n string channel names to S3 URIs. 
In this case, the objects rooted at\n each S3 prefix will be available as files in each channel directory.\n\nFor example:\n\n.. code:: python\n\n {'train':'s3://my-bucket/my-training-data',\n 'eval':'s3://my-bucket/my-evaluation-data'}\n\n.. optional-arguments-1:\n\nfit Optional Arguments\n----------------------\n\n- ``wait``: Defaults to True, whether to block and wait for the\n training script to complete before returning.\n- ``logs``: Defaults to True, whether to show logs produced by training\n job in the Python session. Only meaningful when wait is True.\n\n\nDistributed PyTorch Training\n============================\n\nSageMaker supports the `PyTorch DistributedDataParallel (DDP)\n<https://pytorch.org/docs/master/generated/torch.nn.parallel.DistributedDataParallel.html>`_\npackage. You simply need to check the variables in your training script,\nsuch as the world size and the rank of the current host, when initializing\nprocess groups for distributed training.\nAnd then, launch the training job using the\n:class:`sagemaker.pytorch.estimator.PyTorch` estimator class\nwith the ``pytorchddp`` option as the distribution strategy.\n\n.. note::\n\n This PyTorch DDP support is available\n in the SageMaker PyTorch Deep Learning Containers v1.12 and later.\n\nAdapt Your Training Script\n--------------------------\n\nTo initialize distributed training in your script, call\n`torch.distributed.init_process_group\n<https://pytorch.org/docs/master/distributed.html#torch.distributed.init_process_group>`_\nwith the desired backend and the rank of the current host.\n\n.. code:: python\n\n import torch.distributed as dist\n\n if args.distributed:\n # Initialize the distributed environment.\n world_size = len(args.hosts)\n os.environ['WORLD_SIZE'] = str(world_size)\n host_rank = args.hosts.index(args.current_host)\n dist.init_process_group(backend=args.backend, rank=host_rank)\n\nSageMaker sets ``'MASTER_ADDR'`` and ``'MASTER_PORT'`` environment variables for you,\nbut you can also overwrite them.\n\n**Supported backends:**\n\n- ``gloo`` and ``tcp`` for CPU instances\n- ``gloo`` and ``nccl`` for GPU instances\n\nLaunching a Distributed Training Job\n------------------------------------\n\nYou can run multi-node distributed PyTorch training jobs using the\n:class:`sagemaker.pytorch.estimator.PyTorch` estimator class.\nWith ``instance_count=1``, the estimator submits a\nsingle-node training job to SageMaker; with ``instance_count`` greater\nthan one, a multi-node training job is launched.\n\nTo run a distributed training script that adopts\nthe `PyTorch DistributedDataParallel (DDP) package\n<https://pytorch.org/docs/master/generated/torch.nn.parallel.DistributedDataParallel.html>`_,\nchoose the ``pytorchddp`` as the distributed training option in the ``PyTorch`` estimator.\n\nWith the ``pytorchddp`` option, the SageMaker PyTorch estimator runs a SageMaker\ntraining container for PyTorch, sets up the environment for MPI, and launches\nthe training job using the ``mpirun`` command on each worker with the given information\nduring the PyTorch DDP initialization.\n\n.. 
note::\n\n The SageMaker PyTorch estimator operates ``mpirun`` in the backend.\n It doesn’t use ``torchrun`` for distributed training.\n\nFor more information about setting up PyTorch DDP in your training script,\nsee `Getting Started with Distributed Data Parallel\n<https://pytorch.org/tutorials/intermediate/ddp_tutorial.html>`_ in the\nPyTorch documentation.\n\nThe following example shows how to run a PyTorch DDP training in SageMaker\nusing two ``ml.p4d.24xlarge`` instances:\n\n.. code:: python\n\n from sagemaker.pytorch import PyTorch\n\n pt_estimator = PyTorch(\n entry_point=\"train_ptddp.py\",\n role=\"SageMakerRole\",\n framework_version=\"1.12.0\",\n py_version=\"py38\",\n instance_count=2,\n instance_type=\"ml.p4d.24xlarge\",\n distribution={\n \"pytorchddp\": {\n \"enabled\": True\n }\n }\n )\n\n pt_estimator.fit(\"s3://bucket/path/to/training/data\")\n\n*********************\nDeploy PyTorch Models\n*********************\n\nAfter a PyTorch Estimator has been fit, you can host the newly created model in SageMaker.\n\nAfter calling ``fit``, you can call ``deploy`` on a ``PyTorch`` Estimator to create a SageMaker Endpoint.\nThe Endpoint runs a SageMaker-provided PyTorch model server and hosts the model produced by your training script,\nwhich was run when you called ``fit``. This was the model you saved to ``model_dir``.\n\n``deploy`` returns a ``Predictor`` object, which you can use to do inference on the Endpoint hosting your PyTorch model.\nEach ``Predictor`` provides a ``predict`` method which can do inference with numpy arrays or Python lists.\nInference arrays or lists are serialized and sent to the PyTorch model server by an ``InvokeEndpoint`` SageMaker\noperation.\n\n``predict`` returns the result of inference against your model. By default, the inference result a NumPy array.\n\n.. code:: python\n\n # Train my estimator\n pytorch_estimator = PyTorch(entry_point='train_and_deploy.py',\n instance_type='ml.p3.2xlarge',\n instance_count=1,\n framework_version='1.8.0',\n py_version='py3')\n pytorch_estimator.fit('s3://my_bucket/my_training_data/')\n\n # Deploy my estimator to a SageMaker Endpoint and get a Predictor\n predictor = pytorch_estimator.deploy(instance_type='ml.m4.xlarge',\n initial_instance_count=1)\n\n # `data` is a NumPy array or a Python list.\n # `response` is a NumPy array.\n response = predictor.predict(data)\n\nYou use the SageMaker PyTorch model server to host your PyTorch model when you call ``deploy`` on an ``PyTorch``\nEstimator. The model server runs inside a SageMaker Endpoint, which your call to ``deploy`` creates.\nYou can access the name of the Endpoint by the ``name`` property on the returned ``Predictor``.\n\nElastic Inference\n=================\n\nPyTorch on Amazon SageMaker has support for `Elastic Inference <https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html>`_, which allows for inference acceleration to a hosted endpoint for a fraction of the cost of using a full GPU instance.\nIn order to attach an Elastic Inference accelerator to your endpoint provide the accelerator type to ``accelerator_type`` to your ``deploy`` call.\n\n.. 
code:: python\n\n predictor = pytorch_estimator.deploy(instance_type='ml.m4.xlarge',\n initial_instance_count=1,\n accelerator_type='ml.eia2.medium')\n\nModel Directory Structure\n=========================\n\nIn general, if you use the same version of PyTorch for both training and inference with the SageMaker Python SDK,\nthe SDK should take care of ensuring that the contents of your ``model.tar.gz`` file are organized correctly.\n\nFor versions 1.2 and higher\n---------------------------\n\nFor PyTorch versions 1.2 and higher, the contents of ``model.tar.gz`` should be organized as follows:\n\n- Model files in the top-level directory\n- Inference script (and any other source files) in a directory named ``code/`` (for more about the inference script, see `The SageMaker PyTorch Model Server <#the-sagemaker-pytorch-model-server>`_)\n- Optional requirements file located at ``code/requirements.txt`` (for more about requirements files, see `Using third-party libraries <#using-third-party-libraries>`_)\n\nFor example:\n\n.. code::\n\n model.tar.gz/\n |- model.pth\n |- code/\n |- inference.py\n |- requirements.txt # only for versions 1.3.1 and higher\n\nIn this example, ``model.pth`` is the model file saved from training, ``inference.py`` is the inference script, and ``requirements.txt`` is a requirements file.\n\nThe ``PyTorch`` and ``PyTorchModel`` classes repack ``model.tar.gz`` to include the inference script (and related files),\nas long as the ``framework_version`` is set to 1.2 or higher.\n\nFor versions 1.1 and lower\n--------------------------\n\nFor PyTorch versions 1.1 and lower, ``model.tar.gz`` should contain only the model files,\nwhile your inference script and optional requirements file are packed in a separate tarball, named ``sourcedir.tar.gz`` by default.\n\nFor example:\n\n.. code::\n\n model.tar.gz/\n |- model.pth\n\n sourcedir.tar.gz/\n |- script.py\n |- requirements.txt\n\nIn this example, ``model.pth`` is the model file saved from training, ``script.py`` is the inference script, and ``requirements.txt`` is a requirements file.\n\nThe SageMaker PyTorch Model Server\n==================================\n\nThe PyTorch Endpoint you create with ``deploy`` runs a SageMaker PyTorch model server.\nThe model server loads the model that was saved by your training script and performs inference on the model in response\nto SageMaker InvokeEndpoint API calls.\n\nYou can configure two components of the SageMaker PyTorch model server: Model loading and model serving.\nModel loading is the process of deserializing your saved model back into a PyTorch model.\nServing is the process of translating InvokeEndpoint requests to inference calls on the loaded model.\n\nYou configure the PyTorch model server by defining functions in the Python source file you passed to the PyTorch constructor.\n\nLoad a Model\n------------\n\nBefore a model can be served, it must be loaded. The SageMaker PyTorch model server loads your model by invoking a\n``model_fn`` function that you must provide in your script when you are not using Elastic Inference. The ``model_fn`` should have the following signature:\n\n.. 
code:: python\n\n def model_fn(model_dir, context)\n\n``context`` is an optional argument that contains additional serving information, such as the GPU ID and batch size.\nIf specified in the function declaration, the context will be created and passed to the function by SageMaker.\nFor more information about ``context``, see the `Serving Context class <https://github.com/pytorch/serve/blob/master/ts/context.py>`_.\n\nSageMaker will inject the directory where your model files and sub-directories, saved by ``save``, have been mounted.\nYour model function should return a model object that can be used for model serving.\n\nThe following code-snippet shows an example ``model_fn`` implementation.\nIt loads the model parameters from a ``model.pth`` file in the SageMaker model directory ``model_dir``. As explained in the preceding example,\n``context`` is an optional argument that passes additional information.\n\n.. code:: python\n\n import torch\n import os\n\n def model_fn(model_dir, context):\n model = Your_Model()\n with open(os.path.join(model_dir, 'model.pth'), 'rb') as f:\n model.load_state_dict(torch.load(f))\n return model\n\nHowever, if you are using PyTorch Elastic Inference 1.3.1, you do not have to provide a ``model_fn`` since the PyTorch serving\ncontainer has a default one for you. But please note that if you are utilizing the default ``model_fn``, please save\nyour ScriptModule as ``model.pt``. If you are implementing your own ``model_fn``, please use TorchScript and ``torch.jit.save``\nto save your ScriptModule, then load it in your ``model_fn`` with ``torch.jit.load(..., map_location=torch.device('cpu'))``.\n\nIf you are using PyTorch Elastic Inference 1.5.1, you should provide ``model_fn`` like below in your script to use new api ``attach_eia``. Reference can be find in `Elastic Inference documentation <https://docs.aws.amazon.com/elastic-inference/latest/developerguide/ei-pytorch-using.html>`_.\n\n\n.. code:: python\n\n import torch\n\n\n def model_fn(model_dir):\n model = torch.jit.load('model.pth', map_location=torch.device('cpu'))\n if torch.__version__ == '1.5.1':\n import torcheia\n model = model.eval()\n # attach_eia() is introduced in PyTorch Elastic Inference 1.5.1,\n model = torcheia.jit.attach_eia(model, 0)\n return model\n\n\nThe client-side Elastic Inference framework is CPU-only, even though inference still happens in a CUDA context on the server. Thus, the default ``model_fn`` for Elastic Inference loads the model to CPU. Tracing models may lead to tensor creation on a specific device, which may cause device-related errors when loading a model onto a different device. 
Providing an explicit ``map_location=torch.device('cpu')`` argument forces all tensors to CPU.\n\nFor more information on the default inference handler functions, please refer to:\n`SageMaker PyTorch Default Inference Handler <https://github.com/aws/sagemaker-pytorch-inference-toolkit/blob/master/src/sagemaker_pytorch_serving_container/default_pytorch_inference_handler.py>`_.\n\nServe a PyTorch Model\n---------------------\n\nAfter the SageMaker model server has loaded your model by calling ``model_fn``, SageMaker will serve your model.\nModel serving is the process of responding to inference requests, received by SageMaker InvokeEndpoint API calls.\nThe SageMaker PyTorch model server breaks request handling into three steps:\n\n\n- input processing,\n- prediction, and\n- output processing.\n\nIn a similar way to model loading, you configure these steps by defining functions in your Python source file.\n\nEach step involves invoking a python function, with information about the request and the return value from the previous\nfunction in the chain. Inside the SageMaker PyTorch model server, the process looks like:\n\n.. code:: python\n\n # Deserialize the Invoke request body into an object we can perform prediction on\n input_object = input_fn(request_body, request_content_type, context)\n\n # Perform prediction on the deserialized object, with the loaded model\n prediction = predict_fn(input_object, model, context)\n\n # Serialize the prediction result into the desired response content type\n output = output_fn(prediction, response_content_type, context)\n\nThe above code sample shows the three function definitions:\n\n- ``input_fn``: Takes request data and deserializes the data into an\n object for prediction.\n- ``predict_fn``: Takes the deserialized request object and performs\n inference against the loaded model.\n- ``output_fn``: Takes the result of prediction and serializes this\n according to the response content type.\n\nThe SageMaker PyTorch model server provides default implementations of these functions.\nYou can provide your own implementations for these functions in your hosting script.\nIf you omit any definition then the SageMaker PyTorch model server will use its default implementation for that\nfunction.\nIf you use PyTorch Elastic Inference 1.5.1, remember to implement ``predict_fn`` yourself.\n\nThe ``Predictor`` used by PyTorch in the SageMaker Python SDK serializes NumPy arrays to the `NPY <https://docs.scipy.org/doc/numpy/neps/npy-format.html>`_ format\nby default, with Content-Type ``application/x-npy``. The SageMaker PyTorch model server can deserialize NPY-formatted\ndata (along with JSON and CSV data).\n\nIf you rely solely on the SageMaker PyTorch model server defaults, you get the following functionality:\n\n- Prediction on models that implement the ``__call__`` method\n- Serialization and deserialization of torch.Tensor.\n\nThe default ``input_fn`` and ``output_fn`` are meant to make it easy to predict on torch.Tensors. 
If your model expects\na torch.Tensor and returns a torch.Tensor, then these functions do not have to be overridden when sending NPY-formatted\ndata.\n\nIn the following sections we describe the default implementations of input_fn, predict_fn, and output_fn.\nWe describe the input arguments and expected return types of each, so you can define your own implementations.\n\nProcess Model Input\n^^^^^^^^^^^^^^^^^^^\n\nWhen an InvokeEndpoint operation is made against an Endpoint running a SageMaker PyTorch model server,\nthe model server receives two pieces of information:\n\n- The request Content-Type, for example \"application/x-npy\"\n- The request data body, a byte array\n\nThe SageMaker PyTorch model server will invoke an ``input_fn`` function in your hosting script,\npassing in this information. If you define an ``input_fn`` function definition,\nit should return an object that can be passed to ``predict_fn`` and have the following signature:\n\n.. code:: python\n\n def input_fn(request_body, request_content_type, context)\n\nWhere ``request_body`` is a byte buffer and ``request_content_type`` is a Python string.\n\n``context`` is an optional argument that contains additional serving information, such as the GPU ID and batch size.\nIf specified in the function declaration, the context will be created and passed to the function by SageMaker.\nFor more information about ``context``, see the `Serving Context class <https://github.com/pytorch/serve/blob/master/ts/context.py>`_.\n\nThe SageMaker PyTorch model server provides a default implementation of ``input_fn``.\nThis function deserializes JSON, CSV, or NPY encoded data into a torch.Tensor.\n\nDefault NPY deserialization requires ``request_body`` to follow the `NPY <https://docs.scipy.org/doc/numpy/neps/npy-format.html>`_ format. For PyTorch, the Python SDK\ndefaults to sending prediction requests with this format.\n\nDefault JSON deserialization requires ``request_body`` contain a single json list.\nSending multiple JSON objects within the same ``request_body`` is not supported.\nThe list must have a dimensionality compatible with the model loaded in ``model_fn``.\nThe list's shape must be identical to the model's input shape, for all dimensions after the first (which first\ndimension is the batch size).\n\nDefault csv deserialization requires ``request_body`` contain one or more lines of CSV numerical data.\nThe data is loaded into a two-dimensional array, where each line break defines the boundaries of the first dimension.\n\nThe example below shows a custom ``input_fn`` for preparing pickled torch.Tensor.\n\n.. code:: python\n\n import numpy as np\n import torch\n from six import BytesIO\n\n def input_fn(request_body, request_content_type):\n \"\"\"An input_fn that loads a pickled tensor\"\"\"\n if request_content_type == 'application/python-pickle':\n return torch.load(BytesIO(request_body))\n else:\n # Handle other content-types here or raise an Exception\n # if the content type is not supported.\n pass\n\n\n\nGet Predictions from a PyTorch Model\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nAfter the inference request has been deserialized by ``input_fn``, the SageMaker PyTorch model server invokes\n``predict_fn`` on the return value of ``input_fn``.\n\nAs with ``input_fn``, you can define your own ``predict_fn`` or use the SageMaker PyTorch model server default.\n\nThe ``predict_fn`` function has the following signature:\n\n.. 
code:: python\n\n def predict_fn(input_object, model, context)\n\nWhere ``input_object`` is the object returned from ``input_fn`` and\n``model`` is the model loaded by ``model_fn``.\nIf you are using multiple GPUs, then specify the ``context`` argument, which contains information such as the GPU ID for a dynamically-selected GPU and the batch size.\nOne of the examples below demonstrates how to configure ``predict_fn`` with the ``context`` argument to handle multiple GPUs. For more information about ``context``, see the `Serving Context class <https://github.com/pytorch/serve/blob/master/ts/context.py>`_.\nIf you are using CPUs or a single GPU, then you do not need to specify the ``context`` argument.\n\nThe default implementation of ``predict_fn`` invokes the loaded model's ``__call__`` function on ``input_object``,\nand returns the resulting value. The return-type should be a torch.Tensor to be compatible with the default\n``output_fn``.\n\nThe following example shows an overridden ``predict_fn``:\n\n.. code:: python\n\n import torch\n import numpy as np\n\n def predict_fn(input_data, model):\n device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n model.to(device)\n model.eval()\n with torch.no_grad():\n return model(input_data.to(device))\n\nThe following example is for use cases with multiple GPUs and shows an overridden ``predict_fn`` that uses the ``context`` argument to dynamically select a GPU device for making predictions:\n\n.. code:: python\n\n import torch\n import numpy as np\n\n def predict_fn(input_data, model):\n device = torch.device(\"cuda:\" + str(context.system_properties.get(\"gpu_id\")) if torch.cuda.is_available() else \"cpu\")\n model.to(device)\n model.eval()\n with torch.no_grad():\n return model(input_data.to(device))\n\nIf you implement your own prediction function, you should take care to ensure that:\n\n- The first argument is expected to be the return value from input_fn.\n If you use the default input_fn, this will be a torch.Tensor.\n- The second argument is the loaded model.\n- The return value should be of the correct type to be passed as the\n first argument to ``output_fn``. If you use the default\n ``output_fn``, this should be a torch.Tensor.\n\nThe default Elastic Inference ``predict_fn`` is similar but runs the TorchScript model using ``torch.jit.optimized_execution``.\nIf you are implementing your own ``predict_fn``, please also use the ``torch.jit.optimized_execution``\nblock, for example:\n\n.. code:: python\n\n import torch\n import numpy as np\n\n def predict_fn(input_data, model):\n device = torch.device(\"cpu\")\n model = model.to(device)\n input_data = data.to(device)\n model.eval()\n with torch.jit.optimized_execution(True, {\"target_device\": \"eia:0\"}):\n output = model(input_data)\n\nIf you use PyTorch Elastic Inference 1.5.1, please implement your own ``predict_fn`` like below.\n\n.. 
code:: python\n\n import numpy as np\n import torch\n\n\n def predict_fn(input_data, model):\n device = torch.device(\"cpu\")\n input_data = data.to(device)\n # make sure torcheia is imported so that Elastic Inference api call will be invoked\n import torcheia\n # we need to set the profiling executor for EIA\n torch._C._jit_set_profiling_executor(False)\n with torch.jit.optimized_execution(True):\n output = model.forward(input_data)\n\n\nProcess Model Output\n^^^^^^^^^^^^^^^^^^^^\n\nAfter invoking ``predict_fn``, the model server invokes ``output_fn``, passing in the return value from ``predict_fn``\nand the content type for the response, as specified by the InvokeEndpoint request.\n\nThe ``output_fn`` has the following signature:\n\n.. code:: python\n\n def output_fn(prediction, content_type, context)\n\nWhere ``prediction`` is the result of invoking ``predict_fn`` and\nthe content type for the response, as specified by the InvokeEndpoint request. The function should return a byte array of data serialized to ``content_type``.\n\n``context`` is an optional argument that contains additional serving information, such as the GPU ID and batch size.\nIf specified in the function declaration, the context will be created and passed to the function by SageMaker.\nFor more information about ``context``, see the `Serving Context class <https://github.com/pytorch/serve/blob/master/ts/context.py>`_.\n\nThe default implementation expects ``prediction`` to be a torch.Tensor and can serialize the result to JSON, CSV, or NPY.\nIt accepts response content types of \"application/json\", \"text/csv\", and \"application/x-npy\".\n\n\nBring your own model\n====================\n\nYou can deploy a PyTorch model that you trained outside of SageMaker by using the ``PyTorchModel`` class.\nTypically, you save a PyTorch model as a file with extension ``.pt`` or ``.pth``.\nTo do this, you need to:\n\n* Write an inference script.\n* Create the directory structure for your model files.\n* Create the ``PyTorchModel`` object.\n\nWrite an inference script\n-------------------------\n\nYou must create an inference script that implements (at least) the ``model_fn`` function that calls the loaded model to get a prediction.\n\n**Note**: If you use elastic inference with PyTorch, you can use the default ``model_fn`` implementation provided in the serving container.\n\nOptionally, you can also implement ``input_fn`` and ``output_fn`` to process input and output,\nand ``predict_fn`` to customize how the model server gets predictions from the loaded model.\nFor information about how to write an inference script, see `Serve a PyTorch Model <#serve-a-pytorch-model>`_.\nSave the inference script in the same folder where you saved your PyTorch model.\nPass the filename of the inference script as the ``entry_point`` parameter when you create the ``PyTorchModel`` object.\n\nCreate the directory structure for your model files\n---------------------------------------------------\n\nYou have to create a directory structure and place your model files in the correct location.\nThe ``PyTorchModel`` constructor packs the files into a ``tar.gz`` file and uploads it to S3.\n\nThe directory structure where you saved your PyTorch model should look something like the following:\n\n**Note:** This directory struture is for PyTorch versions 1.2 and higher.\nFor the directory structure for versions 1.1 and lower,\nsee `For versions 1.1 and lower <#for-versions-1.1-and-lower>`_.\n\n::\n\n | my_model\n | |--model.pth\n |\n | code\n | |--inference.py\n 
| |--requirements.txt\n\nWhere ``requirments.txt`` is an optional file that specifies dependencies on third-party libraries.\n\nCreate a ``PyTorchModel`` object\n--------------------------------\n\nNow call the :class:`sagemaker.pytorch.model.PyTorchModel` constructor to create a model object, and then call its ``deploy()`` method to deploy your model for inference.\n\n.. code:: python\n\n from sagemaker import get_execution_role\n role = get_execution_role()\n\n pytorch_model = PyTorchModel(model_data='s3://my-bucket/my-path/model.tar.gz', role=role,\n entry_point='inference.py')\n\n predictor = pytorch_model.deploy(instance_type='ml.c4.xlarge', initial_instance_count=1)\n\n\nNow you can call the ``predict()`` method to get predictions from your deployed model.\n\n***********************************************\nAttach an estimator to an existing training job\n***********************************************\n\nYou can attach a PyTorch Estimator to an existing training job using the\n``attach`` method.\n\n.. code:: python\n\n my_training_job_name = 'MyAwesomePyTorchTrainingJob'\n pytorch_estimator = PyTorch.attach(my_training_job_name)\n\nAfter attaching, if the training job has finished with job status \"Completed\", it can be\n``deploy``\\ ed to create a SageMaker Endpoint and return a\n``Predictor``. If the training job is in progress,\nattach will block and display log messages from the training job, until the training job completes.\n\nThe ``attach`` method accepts the following arguments:\n\n- ``training_job_name:`` The name of the training job to attach\n to.\n- ``sagemaker_session:`` The Session used\n to interact with SageMaker\n\n*************************\nPyTorch Training Examples\n*************************\n\nAmazon provides several example Jupyter notebooks that demonstrate end-to-end training on Amazon SageMaker using PyTorch.\nPlease refer to:\n\nhttps://github.com/awslabs/amazon-sagemaker-examples/tree/master/sagemaker-python-sdk\n\nThese are also available in SageMaker Notebook Instance hosted Jupyter notebooks under the sample notebooks folder.\n\n*************************\nSageMaker PyTorch Classes\n*************************\n\nFor information about the different PyTorch-related classes in the SageMaker Python SDK, see https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/sagemaker.pytorch.html.\n\n***********************************\nSageMaker PyTorch Docker Containers\n***********************************\n\nFor information about the SageMaker PyTorch containers, see:\n\n- `SageMaker PyTorch training toolkit <https://github.com/aws/sagemaker-pytorch-container>`_\n- `SageMaker PyTorch serving toolkit <https://github.com/aws/sagemaker-pytorch-serving-container>`_\n- `Deep Learning Container (DLC) Dockerfiles for PyTorch <https://github.com/aws/deep-learning-containers/tree/master/pytorch>`_\n- `Deep Learning Container (DLC) Images <https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/deep-learning-containers-images.html>`_ and `release notes <https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/dlc-release-notes.html>`_\n", "src/sagemaker/image_uri_config/pytorch-neuron.json": null, "src/sagemaker/image_uris.py": "# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. 
A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\"\"\"Functions for generating ECR image URIs for pre-built SageMaker Docker images.\"\"\"\nfrom __future__ import absolute_import\n\nimport json\nimport logging\nimport os\nimport re\nfrom typing import Optional\n\nfrom sagemaker import utils\nfrom sagemaker.jumpstart.utils import is_jumpstart_model_input\nfrom sagemaker.spark import defaults\nfrom sagemaker.jumpstart import artifacts\nfrom sagemaker.workflow import is_pipeline_variable\nfrom sagemaker.workflow.utilities import override_pipeline_parameter_var\nfrom sagemaker.fw_utils import GRAVITON_ALLOWED_TARGET_INSTANCE_FAMILY, GRAVITON_ALLOWED_FRAMEWORKS\n\nlogger = logging.getLogger(__name__)\n\nECR_URI_TEMPLATE = \"{registry}.dkr.{hostname}/{repository}\"\nHUGGING_FACE_FRAMEWORK = \"huggingface\"\nXGBOOST_FRAMEWORK = \"xgboost\"\nSKLEARN_FRAMEWORK = \"sklearn\"\n\n\n@override_pipeline_parameter_var\ndef retrieve(\n framework,\n region,\n version=None,\n py_version=None,\n instance_type=None,\n accelerator_type=None,\n image_scope=None,\n container_version=None,\n distribution=None,\n base_framework_version=None,\n training_compiler_config=None,\n model_id=None,\n model_version=None,\n tolerate_vulnerable_model=False,\n tolerate_deprecated_model=False,\n sdk_version=None,\n inference_tool=None,\n serverless_inference_config=None,\n) -> str:\n \"\"\"Retrieves the ECR URI for the Docker image matching the given arguments.\n\n Ideally this function should not be called directly, rather it should be called from the\n fit() function inside framework estimator.\n\n Args:\n framework (str): The name of the framework or algorithm.\n region (str): The AWS region.\n version (str): The framework or algorithm version. This is required if there is\n more than one supported version for the given framework or algorithm.\n py_version (str): The Python version. This is required if there is\n more than one supported Python version for the given framework version.\n instance_type (str): The SageMaker instance type. For supported types, see\n https://aws.amazon.com/sagemaker/pricing. This is required if\n there are different images for different processor types.\n accelerator_type (str): Elastic Inference accelerator type. For more, see\n https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html.\n image_scope (str): The image type, i.e. what it is used for.\n Valid values: \"training\", \"inference\", \"eia\". 
If ``accelerator_type`` is set,\n ``image_scope`` is ignored.\n container_version (str): the version of docker image.\n Ideally the value of parameter should be created inside the framework.\n For custom use, see the list of supported container versions:\n https://github.com/aws/deep-learning-containers/blob/master/available_images.md\n (default: None).\n distribution (dict): A dictionary with information on how to run distributed training\n training_compiler_config (:class:`~sagemaker.training_compiler.TrainingCompilerConfig`):\n A configuration class for the SageMaker Training Compiler\n (default: None).\n model_id (str): The JumpStart model ID for which to retrieve the image URI\n (default: None).\n model_version (str): The version of the JumpStart model for which to retrieve the\n image URI (default: None).\n tolerate_vulnerable_model (bool): ``True`` if vulnerable versions of model specifications\n should be tolerated without an exception raised. If ``False``, raises an exception if\n the script used by this version of the model has dependencies with known security\n vulnerabilities. (Default: False).\n tolerate_deprecated_model (bool): True if deprecated versions of model specifications\n should be tolerated without an exception raised. If False, raises an exception\n if the version of the model is deprecated. (Default: False).\n sdk_version (str): the version of python-sdk that will be used in the image retrieval.\n (default: None).\n inference_tool (str): the tool that will be used to aid in the inference.\n Valid values: \"neuron, None\"\n (default: None).\n serverless_inference_config (sagemaker.serverless.ServerlessInferenceConfig):\n Specifies configuration related to serverless endpoint. Instance type is\n not provided in serverless inference. 
So this is used to determine processor type.\n\n Returns:\n str: The ECR URI for the corresponding SageMaker Docker image.\n\n Raises:\n NotImplementedError: If the scope is not supported.\n ValueError: If the combination of arguments specified is not supported or\n any PipelineVariable object is passed in.\n VulnerableJumpStartModelError: If any of the dependencies required by the script have\n known security vulnerabilities.\n DeprecatedJumpStartModelError: If the version of the model is deprecated.\n \"\"\"\n args = dict(locals())\n for name, val in args.items():\n if is_pipeline_variable(val):\n raise ValueError(\n \"When retrieving the image_uri, the argument %s should not be a pipeline variable \"\n \"(%s) since pipeline variables are only interpreted in the pipeline execution time.\"\n % (name, type(val))\n )\n\n if is_jumpstart_model_input(model_id, model_version):\n return artifacts._retrieve_image_uri(\n model_id,\n model_version,\n image_scope,\n framework,\n region,\n version,\n py_version,\n instance_type,\n accelerator_type,\n container_version,\n distribution,\n base_framework_version,\n training_compiler_config,\n tolerate_vulnerable_model,\n tolerate_deprecated_model,\n )\n\n if training_compiler_config and (framework == HUGGING_FACE_FRAMEWORK):\n config = _config_for_framework_and_scope(\n framework + \"-training-compiler\", image_scope, accelerator_type\n )\n else:\n _framework = framework\n if framework == HUGGING_FACE_FRAMEWORK:\n inference_tool = _get_inference_tool(inference_tool, instance_type)\n if inference_tool == \"neuron\":\n _framework = f\"{framework}-{inference_tool}\"\n final_image_scope = _get_final_image_scope(framework, instance_type, image_scope)\n config = _config_for_framework_and_scope(_framework, final_image_scope, accelerator_type)\n\n original_version = version\n version = _validate_version_and_set_if_needed(version, config, framework)\n version_config = config[\"versions\"][_version_for_config(version, config)]\n\n if framework == HUGGING_FACE_FRAMEWORK:\n if version_config.get(\"version_aliases\"):\n full_base_framework_version = version_config[\"version_aliases\"].get(\n base_framework_version, base_framework_version\n )\n _validate_arg(full_base_framework_version, list(version_config.keys()), \"base framework\")\n version_config = version_config.get(full_base_framework_version)\n\n py_version = _validate_py_version_and_set_if_needed(py_version, version_config, framework)\n version_config = version_config.get(py_version) or version_config\n registry = _registry_from_region(region, version_config[\"registries\"])\n hostname = utils._botocore_resolver().construct_endpoint(\"ecr\", region)[\"hostname\"]\n\n repo = version_config[\"repository\"]\n\n processor = _processor(\n instance_type,\n config.get(\"processors\") or version_config.get(\"processors\"),\n serverless_inference_config,\n )\n\n # if container version is available in .json file, utilize that\n if version_config.get(\"container_version\"):\n container_version = version_config[\"container_version\"][processor]\n\n if framework == HUGGING_FACE_FRAMEWORK:\n pt_or_tf_version = (\n re.compile(\"^(pytorch|tensorflow)(.*)$\").match(base_framework_version).group(2)\n )\n _version = original_version\n\n if repo in [\n \"huggingface-pytorch-trcomp-training\",\n \"huggingface-tensorflow-trcomp-training\",\n ]:\n _version = version\n if repo in [\"huggingface-pytorch-inference-neuron\"]:\n if not sdk_version:\n sdk_version = _get_latest_versions(version_config[\"sdk_versions\"])\n 
container_version = sdk_version + \"-\" + container_version\n if config.get(\"version_aliases\").get(original_version):\n _version = config.get(\"version_aliases\")[original_version]\n if (\n config.get(\"versions\", {})\n .get(_version, {})\n .get(\"version_aliases\", {})\n .get(base_framework_version, {})\n ):\n _base_framework_version = config.get(\"versions\")[_version][\"version_aliases\"][\n base_framework_version\n ]\n pt_or_tf_version = (\n re.compile(\"^(pytorch|tensorflow)(.*)$\").match(_base_framework_version).group(2)\n )\n\n tag_prefix = f\"{pt_or_tf_version}-transformers{_version}\"\n else:\n tag_prefix = version_config.get(\"tag_prefix\", version)\n\n if repo == f\"{framework}-inference-graviton\":\n container_version = f\"{container_version}-sagemaker\"\n\n tag = _get_image_tag(\n container_version,\n distribution,\n framework,\n inference_tool,\n instance_type,\n processor,\n py_version,\n tag_prefix,\n version,\n )\n\n if tag:\n repo += \":{}\".format(tag)\n\n return ECR_URI_TEMPLATE.format(registry=registry, hostname=hostname, repository=repo)\n\n\ndef _get_instance_type_family(instance_type):\n \"\"\"Return the family of the instance type.\n\n Regex matches either \"ml.<family>.<size>\" or \"ml_<family>. If input is None\n or there is no match, return an empty string.\n \"\"\"\n instance_type_family = \"\"\n if isinstance(instance_type, str):\n match = re.match(r\"^ml[\\._]([a-z\\d]+)\\.?\\w*$\", instance_type)\n if match is not None:\n instance_type_family = match[1]\n return instance_type_family\n\n\ndef _get_image_tag(\n container_version,\n distribution,\n framework,\n inference_tool,\n instance_type,\n processor,\n py_version,\n tag_prefix,\n version,\n):\n \"\"\"Return image tag based on framework, container, and compute configuration(s).\"\"\"\n instance_type_family = _get_instance_type_family(instance_type)\n if (\n framework in (XGBOOST_FRAMEWORK, SKLEARN_FRAMEWORK)\n and instance_type_family in GRAVITON_ALLOWED_TARGET_INSTANCE_FAMILY\n ):\n version_to_arm64_tag_mapping = {\n \"xgboost\": {\n \"1.5-1\": \"1.5-1-arm64\",\n \"1.3-1\": \"1.3-1-arm64\",\n },\n \"sklearn\": {\n \"1.0-1\": \"1.0-1-arm64-cpu-py3\",\n },\n }\n tag = version_to_arm64_tag_mapping[framework][version]\n else:\n tag = _format_tag(tag_prefix, processor, py_version, container_version, inference_tool)\n\n if instance_type is not None and _should_auto_select_container_version(\n instance_type, distribution\n ):\n container_versions = {\n \"tensorflow-2.3-gpu-py37\": \"cu110-ubuntu18.04-v3\",\n \"tensorflow-2.3.1-gpu-py37\": \"cu110-ubuntu18.04\",\n \"tensorflow-2.3.2-gpu-py37\": \"cu110-ubuntu18.04\",\n \"tensorflow-1.15-gpu-py37\": \"cu110-ubuntu18.04-v8\",\n \"tensorflow-1.15.4-gpu-py37\": \"cu110-ubuntu18.04\",\n \"tensorflow-1.15.5-gpu-py37\": \"cu110-ubuntu18.04\",\n \"mxnet-1.8-gpu-py37\": \"cu110-ubuntu16.04-v1\",\n \"mxnet-1.8.0-gpu-py37\": \"cu110-ubuntu16.04\",\n \"pytorch-1.6-gpu-py36\": \"cu110-ubuntu18.04-v3\",\n \"pytorch-1.6.0-gpu-py36\": \"cu110-ubuntu18.04\",\n \"pytorch-1.6-gpu-py3\": \"cu110-ubuntu18.04-v3\",\n \"pytorch-1.6.0-gpu-py3\": \"cu110-ubuntu18.04\",\n }\n key = \"-\".join([framework, tag])\n if key in container_versions:\n tag = \"-\".join([tag, container_versions[key]])\n\n return tag\n\n\ndef _config_for_framework_and_scope(framework, image_scope, accelerator_type=None):\n \"\"\"Loads the JSON config for the given framework and image scope.\"\"\"\n config = config_for_framework(framework)\n\n if accelerator_type:\n _validate_accelerator_type(accelerator_type)\n\n 
if image_scope not in (\"eia\", \"inference\"):\n logger.warning(\n \"Elastic inference is for inference only. Ignoring image scope: %s.\", image_scope\n )\n image_scope = \"eia\"\n\n available_scopes = config.get(\"scope\", list(config.keys()))\n\n if len(available_scopes) == 1:\n if image_scope and image_scope != available_scopes[0]:\n logger.warning(\n \"Defaulting to only supported image scope: %s. Ignoring image scope: %s.\",\n available_scopes[0],\n image_scope,\n )\n image_scope = available_scopes[0]\n\n if not image_scope and \"scope\" in config and set(available_scopes) == {\"training\", \"inference\"}:\n logger.info(\n \"Same images used for training and inference. Defaulting to image scope: %s.\",\n available_scopes[0],\n )\n image_scope = available_scopes[0]\n\n _validate_arg(image_scope, available_scopes, \"image scope\")\n return config if \"scope\" in config else config[image_scope]\n\n\ndef config_for_framework(framework):\n \"\"\"Loads the JSON config for the given framework.\"\"\"\n fname = os.path.join(os.path.dirname(__file__), \"image_uri_config\", \"{}.json\".format(framework))\n with open(fname) as f:\n return json.load(f)\n\n\ndef _get_final_image_scope(framework, instance_type, image_scope):\n \"\"\"Return final image scope based on provided framework and instance type.\"\"\"\n if (\n framework in GRAVITON_ALLOWED_FRAMEWORKS\n and _get_instance_type_family(instance_type) in GRAVITON_ALLOWED_TARGET_INSTANCE_FAMILY\n ):\n return \"inference_graviton\"\n if image_scope is None and framework in (XGBOOST_FRAMEWORK, SKLEARN_FRAMEWORK):\n # Preserves backwards compatibility with XGB/SKLearn configs which no\n # longer define top-level \"scope\" keys after introducing support for\n # Graviton inference. Training and inference configs for XGB/SKLearn are\n # identical, so default to training.\n return \"training\"\n return image_scope\n\n\ndef _get_inference_tool(inference_tool, instance_type):\n \"\"\"Extract the inference tool name from instance type.\"\"\"\n if not inference_tool:\n instance_type_family = _get_instance_type_family(instance_type)\n if instance_type_family.startswith(\"inf\"):\n return \"neuron\"\n return inference_tool\n\n\ndef _get_latest_versions(list_of_versions):\n \"\"\"Extract the latest version from the input list of available versions.\"\"\"\n return sorted(list_of_versions, reverse=True)[0]\n\n\ndef _validate_accelerator_type(accelerator_type):\n \"\"\"Raises a ``ValueError`` if ``accelerator_type`` is invalid.\"\"\"\n if not accelerator_type.startswith(\"ml.eia\") and accelerator_type != \"local_sagemaker_notebook\":\n raise ValueError(\n \"Invalid SageMaker Elastic Inference accelerator type: {}. 
\"\n \"See https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html\".format(accelerator_type)\n )\n\n\ndef _validate_version_and_set_if_needed(version, config, framework):\n \"\"\"Checks if the framework/algorithm version is one of the supported versions.\"\"\"\n available_versions = list(config[\"versions\"].keys())\n aliased_versions = list(config.get(\"version_aliases\", {}).keys())\n\n if len(available_versions) == 1 and version not in aliased_versions:\n log_message = \"Defaulting to the only supported framework/algorithm version: {}.\".format(\n available_versions[0]\n )\n if version and version != available_versions[0]:\n logger.warning(\"%s Ignoring framework/algorithm version: %s.\", log_message, version)\n elif not version:\n logger.info(log_message)\n\n return available_versions[0]\n\n _validate_arg(version, available_versions + aliased_versions, \"{} version\".format(framework))\n return version\n\n\ndef _version_for_config(version, config):\n \"\"\"Returns the version string for retrieving a framework version's specific config.\"\"\"\n if \"version_aliases\" in config:\n if version in config[\"version_aliases\"].keys():\n return config[\"version_aliases\"][version]\n\n return version\n\n\ndef _registry_from_region(region, registry_dict):\n \"\"\"Returns the ECR registry (AWS account number) for the given region.\"\"\"\n _validate_arg(region, registry_dict.keys(), \"region\")\n return registry_dict[region]\n\n\ndef _processor(instance_type, available_processors, serverless_inference_config=None):\n \"\"\"Returns the processor type for the given instance type.\"\"\"\n if not available_processors:\n logger.info(\"Ignoring unnecessary instance type: %s.\", instance_type)\n return None\n\n if len(available_processors) == 1 and not instance_type:\n logger.info(\"Defaulting to only supported image scope: %s.\", available_processors[0])\n return available_processors[0]\n\n if serverless_inference_config is not None:\n logger.info(\"Defaulting to CPU type when using serverless inference\")\n return \"cpu\"\n\n if not instance_type:\n raise ValueError(\n \"Empty SageMaker instance type. For options, see: \"\n \"https://aws.amazon.com/sagemaker/pricing/instance-types\"\n )\n\n if instance_type.startswith(\"local\"):\n processor = \"cpu\" if instance_type == \"local\" else \"gpu\"\n elif instance_type.startswith(\"neuron\"):\n processor = \"neuron\"\n else:\n # looks for either \"ml.<family>.<size>\" or \"ml_<family>\"\n family = _get_instance_type_family(instance_type)\n if family:\n # For some frameworks, we have optimized images for specific families, e.g c5 or p3.\n # In those cases, we use the family name in the image tag. In other cases, we use\n # 'cpu' or 'gpu'.\n if family in available_processors:\n processor = family\n elif family.startswith(\"inf\"):\n processor = \"inf\"\n elif family[0] in (\"g\", \"p\"):\n processor = \"gpu\"\n else:\n processor = \"cpu\"\n else:\n raise ValueError(\n \"Invalid SageMaker instance type: {}. 
For options, see: \"\n \"https://aws.amazon.com/sagemaker/pricing/instance-types\".format(instance_type)\n )\n\n _validate_arg(processor, available_processors, \"processor\")\n return processor\n\n\ndef _should_auto_select_container_version(instance_type, distribution):\n \"\"\"Returns a boolean that indicates whether to use an auto-selected container version.\"\"\"\n p4d = False\n if instance_type:\n # looks for either \"ml.<family>.<size>\" or \"ml_<family>\"\n family = _get_instance_type_family(instance_type)\n if family:\n p4d = family == \"p4d\"\n\n smdistributed = False\n if distribution:\n smdistributed = \"smdistributed\" in distribution\n\n return p4d or smdistributed\n\n\ndef _validate_py_version_and_set_if_needed(py_version, version_config, framework):\n \"\"\"Checks if the Python version is one of the supported versions.\"\"\"\n if \"repository\" in version_config:\n available_versions = version_config.get(\"py_versions\")\n else:\n available_versions = list(version_config.keys())\n\n if not available_versions:\n if py_version:\n logger.info(\"Ignoring unnecessary Python version: %s.\", py_version)\n return None\n\n if py_version is None and defaults.SPARK_NAME == framework:\n return None\n\n if py_version is None and len(available_versions) == 1:\n logger.info(\"Defaulting to only available Python version: %s\", available_versions[0])\n return available_versions[0]\n\n _validate_arg(py_version, available_versions, \"Python version\")\n return py_version\n\n\ndef _validate_arg(arg, available_options, arg_name):\n \"\"\"Checks if the arg is in the available options, and raises a ``ValueError`` if not.\"\"\"\n if arg not in available_options:\n raise ValueError(\n \"Unsupported {arg_name}: {arg}. You may need to upgrade your SDK version \"\n \"(pip install -U sagemaker) for newer {arg_name}s. Supported {arg_name}(s): \"\n \"{options}.\".format(arg_name=arg_name, arg=arg, options=\", \".join(available_options))\n )\n\n\ndef _format_tag(tag_prefix, processor, py_version, container_version, inference_tool=None):\n \"\"\"Creates a tag for the image URI.\"\"\"\n if inference_tool:\n return \"-\".join(x for x in (tag_prefix, inference_tool, py_version, container_version) if x)\n return \"-\".join(x for x in (tag_prefix, processor, py_version, container_version) if x)\n\n\ndef get_training_image_uri(\n region,\n framework,\n framework_version=None,\n py_version=None,\n image_uri=None,\n distribution=None,\n compiler_config=None,\n tensorflow_version=None,\n pytorch_version=None,\n instance_type=None,\n) -> str:\n \"\"\"Retrieves the image URI for training.\n\n Args:\n region (str): The AWS region to use for image URI.\n framework (str): The framework for which to retrieve an image URI.\n framework_version (str): The framework version for which to retrieve an\n image URI (default: None).\n py_version (str): The python version to use for the image (default: None).\n image_uri (str): If an image URI is supplied, it is returned (default: None).\n distribution (dict): A dictionary with information on how to run distributed\n training (default: None).\n compiler_config (:class:`~sagemaker.training_compiler.TrainingCompilerConfig`):\n A configuration class for the SageMaker Training Compiler\n (default: None).\n tensorflow_version (str): The version of TensorFlow to use. (default: None)\n pytorch_version (str): The version of PyTorch to use. (default: None)\n instance_type (str): The instance type to use. 
(default: None)\n\n Returns:\n str: The image URI string.\n \"\"\"\n\n if image_uri:\n return image_uri\n\n logger.info(\n \"image_uri is not presented, retrieving image_uri based on instance_type, framework etc.\"\n )\n base_framework_version: Optional[str] = None\n\n if tensorflow_version is not None or pytorch_version is not None:\n processor = _processor(instance_type, [\"cpu\", \"gpu\"])\n is_native_huggingface_gpu = processor == \"gpu\" and not compiler_config\n container_version = \"cu110-ubuntu18.04\" if is_native_huggingface_gpu else None\n if tensorflow_version is not None:\n base_framework_version = f\"tensorflow{tensorflow_version}\"\n else:\n base_framework_version = f\"pytorch{pytorch_version}\"\n else:\n container_version = None\n base_framework_version = None\n\n return retrieve(\n framework,\n region,\n instance_type=instance_type,\n version=framework_version,\n py_version=py_version,\n image_scope=\"training\",\n distribution=distribution,\n base_framework_version=base_framework_version,\n container_version=container_version,\n training_compiler_config=compiler_config,\n )\n"}
|
diff --git a/doc/frameworks/pytorch/using_pytorch.rst b/doc/frameworks/pytorch/using_pytorch.rst
index 725f34aa5a..f56085f756 100644
--- a/doc/frameworks/pytorch/using_pytorch.rst
+++ b/doc/frameworks/pytorch/using_pytorch.rst
@@ -293,6 +293,121 @@ using two ``ml.p4d.24xlarge`` instances:
pt_estimator.fit("s3://bucket/path/to/training/data")
+.. _distributed-pytorch-training-on-trainium:
+
+Distributed Training with PyTorch Neuron on Trn1 instances
+==========================================================
+
+SageMaker Training supports Amazon EC2 Trn1 instances powered by the
+`AWS Trainium <https://aws.amazon.com/machine-learning/trainium/>`_ device,
+the second-generation purpose-built machine learning accelerator from AWS.
+Each Trn1 instance consists of up to 16 Trainium devices, and each
+Trainium device consists of two `NeuronCores
+<https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/arch/neuron-hardware/trn1-arch.html#trainium-architecture>`_
+(see the Trainium architecture page in the *AWS Neuron Documentation*).
+
+You can run distributed training jobs on Trn1 instances.
+SageMaker supports the ``xla`` package through ``torchrun``.
+With this, you do not need to manually pass ``RANK``,
+``WORLD_SIZE``, ``MASTER_ADDR``, and ``MASTER_PORT``.
+You can launch the training job using the
+:class:`sagemaker.pytorch.estimator.PyTorch` estimator class
+with the ``torch_distributed`` option as the distribution strategy.
+
+.. note::
+
+ This ``torch_distributed`` support is available
+ in the AWS Deep Learning Containers for PyTorch Neuron starting v1.11.0.
+ To find a complete list of supported versions of PyTorch Neuron, see
+ `Neuron Containers <https://github.com/aws/deep-learning-containers/blob/master/available_images.md#neuron-containers>`_
+ in the *AWS Deep Learning Containers GitHub repository*.
+
+.. note::
+
+ SageMaker Debugger is currently not supported with Trn1 instances.
+
+Adapt Your Training Script to Initialize with the XLA backend
+-------------------------------------------------------------
+
+To initialize distributed training in your script, call
+`torch.distributed.init_process_group
+<https://pytorch.org/docs/master/distributed.html#torch.distributed.init_process_group>`_
+with the ``xla`` backend as shown below.
+
+.. code:: python
+
+ import torch.distributed as dist
+
+ dist.init_process_group('xla')
+
+SageMaker takes care of ``'MASTER_ADDR'`` and ``'MASTER_PORT'`` for you via ``torchrun``.
+
+For detailed documentation about modifying your training script for Trainium, see `Multi-worker data-parallel MLP training using torchrun <https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/tutorials/training/mlp.html?highlight=torchrun#multi-worker-data-parallel-mlp-training-using-torchrun>`_ in the *AWS Neuron Documentation*.
+
+**Currently Supported backends:**
+
+- ``xla`` for Trainium (Trn1) instances
+
+For up-to-date information on supported backends for Trn1 instances, see `AWS Neuron Documentation <https://awsdocs-neuron.readthedocs-hosted.com/en/latest/index.html>`_.
+
+Launching a Distributed Training Job on Trainium
+------------------------------------------------
+
+You can run multi-node distributed PyTorch training jobs on Trn1 instances using the
+:class:`sagemaker.pytorch.estimator.PyTorch` estimator class.
+With ``instance_count=1``, the estimator submits a
+single-node training job to SageMaker; with ``instance_count`` greater
+than one, a multi-node training job is launched.
+
+With the ``torch_distributed`` option, the SageMaker PyTorch estimator runs a SageMaker
+training container for PyTorch Neuron, sets up the environment, and launches
+the training job using the ``torchrun`` command on each worker with the given information.
+
+**Examples**
+
+The following examples show how to run a PyTorch training using ``torch_distributed`` in SageMaker
+on one ``ml.trn1.2xlarge`` instance and two ``ml.trn1.32xlarge`` instances:
+
+.. code:: python
+
+ from sagemaker.pytorch import PyTorch
+
+ pt_estimator = PyTorch(
+ entry_point="train_torch_distributed.py",
+ role="SageMakerRole",
+ framework_version="1.11.0",
+ py_version="py38",
+ instance_count=1,
+ instance_type="ml.trn1.2xlarge",
+ distribution={
+ "torch_distributed": {
+ "enabled": True
+ }
+ }
+ )
+
+ pt_estimator.fit("s3://bucket/path/to/training/data")
+
+.. code:: python
+
+ from sagemaker.pytorch import PyTorch
+
+ pt_estimator = PyTorch(
+ entry_point="train_torch_distributed.py",
+ role="SageMakerRole",
+ framework_version="1.11.0",
+ py_version="py38",
+ instance_count=2,
+ instance_type="ml.trn1.32xlarge",
+ distribution={
+ "torch_distributed": {
+ "enabled": True
+ }
+ }
+ )
+
+ pt_estimator.fit("s3://bucket/path/to/training/data")
+
*********************
Deploy PyTorch Models
*********************
diff --git a/src/sagemaker/image_uri_config/pytorch-neuron.json b/src/sagemaker/image_uri_config/pytorch-neuron.json
new file mode 100644
index 0000000000..b116a8a36b
--- /dev/null
+++ b/src/sagemaker/image_uri_config/pytorch-neuron.json
@@ -0,0 +1,41 @@
+{
+ "training": {
+ "processors": ["trn"],
+ "version_aliases": {"1.11": "1.11.0"},
+ "versions": {
+ "1.11.0": {
+ "py_versions": ["py38"],
+ "repository": "pytorch-training-neuron",
+ "registries": {
+ "af-south-1": "626614931356",
+ "ap-east-1": "871362719292",
+ "ap-northeast-1": "763104351884",
+ "ap-northeast-2": "763104351884",
+ "ap-northeast-3": "364406365360",
+ "ap-south-1": "763104351884",
+ "ap-southeast-1": "763104351884",
+ "ap-southeast-2": "763104351884",
+ "ca-central-1": "763104351884",
+ "cn-north-1": "727897471807",
+ "cn-northwest-1": "727897471807",
+ "eu-central-1": "763104351884",
+ "eu-north-1": "763104351884",
+ "eu-west-1": "763104351884",
+ "eu-west-2": "763104351884",
+ "eu-west-3": "763104351884",
+ "eu-south-1": "692866216735",
+ "me-south-1": "217643126080",
+ "sa-east-1": "763104351884",
+ "us-east-1": "763104351884",
+ "us-east-2": "763104351884",
+ "us-gov-west-1": "442386744353",
+ "us-iso-east-1": "886529160074",
+ "us-west-1": "763104351884",
+ "us-west-2": "763104351884"
+ },
+ "container_version": {"trn": "ubuntu20.04"},
+ "sdk_versions": ["sdk2.4.0"]
+ }
+ }
+ }
+ }
\ No newline at end of file
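The PR description exercises this new ``pytorch-neuron`` image config through ``sagemaker.image_uris.retrieve``. The snippet below is a minimal sketch based on that example, using the ``ml.trn1.2xlarge`` instance type from the documentation examples above; the exact image tag returned depends on the region and on the SDK/container versions shipped in the config (the config above lists ``sdk2.4.0`` while the PR description shows an ``sdk2.3.0`` tag), so treat the printed URI as illustrative only.

.. code:: python

    import boto3
    import sagemaker

    # Look up the Trainium (Trn1) training image for PyTorch; a trn1 instance
    # type routes the lookup to the pytorch-neuron config added in this change.
    uri = sagemaker.image_uris.retrieve(
        framework="pytorch",
        region=boto3.Session().region_name,
        instance_type="ml.trn1.2xlarge",
    )
    print(uri)
    # -> something like (per the PR description, for us-west-2):
    # 763104351884.dkr.ecr.us-west-2.amazonaws.com/pytorch-training-neuron:1.11.0-neuron-py38-sdk2.3.0-ubuntu20.04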
|
{"src/sagemaker/image_uris.py": [{"type": "function", "name": "_validate_for_suppported_frameworks_and_instance_type", "lines": [355, 362], "signature": "def _validate_for_suppported_frameworks_and_instance_type(framework, instace_type):", "doc": "Validate if framework is supported for the instance_type"}, {"type": "function", "name": "_validate_framework", "lines": [546, 551], "signature": "def _validate_framework(framework, allowed_frameworks, arg_name):", "doc": "Checks if the framework is in the allowed frameworks, and raises a ``ValueError`` if not."}]}
| null |
["tests/unit/sagemaker/image_uris/test_trainium.py::test_trainium_pytorch"]
|
[]
|
03738861995e0c3fda73958d251e83465aba3c04
|
{"first_commit_time": 1661472718.0, "pr_title": "feature: Trainium Neuron support for PyTorch", "pr_body": "*Issue #, if available:*\r\n\r\n*Description of changes:*\r\nAdded the necessary changes to incorporate the `v1.0-pt-1.11.0-tr-neuron-sdk2.3.0-py38` image into the `PyTorch` framework.\r\n\r\n*Testing done:*\r\n\r\nLocally tested the feature to extract image string with the following code, where the output matched with the [released DLC image](https://github.com/aws/deep-learning-containers/releases/tag/v1.0-pt-1.11.0-tr-neuron-sdk2.3.0-py38)\r\n\r\n```bash\r\nimport sagemaker, boto3\r\n\r\nsagemaker.image_uris.retrieve(framework=\"pytorch\",\r\n region=boto3.Session().region_name,\r\n instance_type=\"ml.trn1.xlarge\")\r\n\r\n'763104351884.dkr.ecr.us-west-2.amazonaws.com/pytorch-training-neuron:1.11.0-neuron-py38-sdk2.3.0-ubuntu20.04'\r\n```\r\n\r\n## Merge Checklist\r\n\r\n_Put an `x` in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your pull request._\r\n\r\n#### General\r\n\r\n- [x] I have read the [CONTRIBUTING](https://github.com/aws/sagemaker-python-sdk/blob/master/CONTRIBUTING.md) doc\r\n- [x] I certify that the changes I am introducing will be backward compatible, and I have discussed concerns about this, if any, with the Python SDK team\r\n- [x] I used the commit message format described in [CONTRIBUTING](https://github.com/aws/sagemaker-python-sdk/blob/master/CONTRIBUTING.md#committing-your-change)\r\n- [ ] I have passed the region in to all S3 and STS clients that I've initialized as part of this change.\r\n- [ ] I have updated any necessary documentation, including [READMEs](https://github.com/aws/sagemaker-python-sdk/blob/master/README.rst) and [API docs](https://github.com/aws/sagemaker-python-sdk/tree/master/doc) (if appropriate)\r\n\r\n#### Tests\r\n\r\n- [x] I have added tests that prove my fix is effective or that my feature works (if appropriate)\r\n- [x] I have added unit and/or integration tests as appropriate to ensure backward compatibility of the changes\r\n- [x] I have checked that my tests are not configured for a specific region or account (if appropriate)\r\n- [ ] I have used [`unique_name_from_base`](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/utils.py#L77) to create resource names in integ tests (if appropriate)\r\n\r\nBy submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.\r\n", "pr_timeline": [{"time": 1666130053.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-slow-tests\n* Commit ID: 464cb7299866c389082e7f5b880ef0fa17ad46fe\n* Result: FAILED\n* [Build Logs](https://we1smikb89.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=d22842e7-c370-4608-ac83-f5cf9e3b1e0b%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666130227.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-unit-tests\n* Commit ID: 464cb7299866c389082e7f5b880ef0fa17ad46fe\n* Result: FAILED\n* [Build 
Logs](https://prwmnp3ir9.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=aa042d37-1982-47f1-aaa3-b71ce09cc7f6%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666132395.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-local-mode-tests\n* Commit ID: 464cb7299866c389082e7f5b880ef0fa17ad46fe\n* Result: SUCCEEDED\n* [Build Logs](https://0lfhakzi32.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=d61e55e8-0a56-4087-a75a-d58b4a8c2791%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666133625.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-notebook-tests\n* Commit ID: 464cb7299866c389082e7f5b880ef0fa17ad46fe\n* Result: SUCCEEDED\n* [Build Logs](https://jr6hkjvk5d.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=76325497-cc91-4548-b7e6-53acbe9bdeb9%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666134226.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-unit-tests\n* Commit ID: ec0212c8c41cff11b8646111a99f73ab191df6c1\n* Result: FAILED\n* [Build Logs](https://prwmnp3ir9.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=09f4f636-c3d7-4aaf-b254-88715dedaafa%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666136409.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-local-mode-tests\n* Commit ID: ec0212c8c41cff11b8646111a99f73ab191df6c1\n* Result: SUCCEEDED\n* [Build Logs](https://0lfhakzi32.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=e038497e-e650-4c1d-a4b5-d2f617e4930e%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666137050.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-pr\n* Commit ID: ec0212c8c41cff11b8646111a99f73ab191df6c1\n* Result: FAILED\n* [Build 
Logs](https://sxsf0yqucb.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=c71765f0-10a2-409f-beb1-195e86947475%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666137539.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-notebook-tests\n* Commit ID: ec0212c8c41cff11b8646111a99f73ab191df6c1\n* Result: FAILED\n* [Build Logs](https://jr6hkjvk5d.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=21c55b97-70a2-4b03-912a-849f52e2b4ef%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666144037.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-slow-tests\n* Commit ID: ec0212c8c41cff11b8646111a99f73ab191df6c1\n* Result: FAILED\n* [Build Logs](https://we1smikb89.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=8ecc1eb7-046a-4434-a0e1-f1d3905ab765%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666198397.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-slow-tests\n* Commit ID: ec0212c8c41cff11b8646111a99f73ab191df6c1\n* Result: FAILED\n* [Build Logs](https://we1smikb89.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=b75fb12d-e93b-4ebb-b46a-039b9be473b7%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666200726.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-unit-tests\n* Commit ID: c05b46b24c8969588c38349c669204a99a144415\n* Result: FAILED\n* [Build Logs](https://prwmnp3ir9.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=7cb773ca-fe0a-47d3-99fb-37938490c001%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666911836.0, "comment": "# [Codecov](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3423?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws) Report\n> Merging 
[#3423](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3423?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws) (6371ba1) into [master](https://codecov.io/gh/aws/sagemaker-python-sdk/commit/1fa23772efa2cff2bb8e422b4e69325a8f5ea618?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws) (1fa2377) will **decrease** coverage by `0.35%`.\n> The diff coverage is `83.59%`.\n\n```diff\n@@ Coverage Diff @@\n## master #3423 +/- ##\n==========================================\n- Coverage 89.17% 88.82% -0.36% \n==========================================\n Files 204 205 +1 \n Lines 18979 19458 +479 \n==========================================\n+ Hits 16924 17283 +359 \n- Misses 2055 2175 +120 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3423?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws) | Coverage \u0394 | |\n|---|---|---|\n| [src/sagemaker/sklearn/model.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3423/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci9za2xlYXJuL21vZGVsLnB5) | `92.98% <\u00f8> (-1.76%)` | :arrow_down: |\n| [...sagemaker/workflow/monitor\\_batch\\_transform\\_step.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3423/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci93b3JrZmxvdy9tb25pdG9yX2JhdGNoX3RyYW5zZm9ybV9zdGVwLnB5) | `0.00% <0.00%> (\u00f8)` | |\n| [src/sagemaker/xgboost/model.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3423/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci94Z2Jvb3N0L21vZGVsLnB5) | `96.36% <\u00f8> (-1.82%)` | :arrow_down: |\n| [...agemaker/model\\_monitor/clarify\\_model\\_monitoring.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3423/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci9tb2RlbF9tb25pdG9yL2NsYXJpZnlfbW9kZWxfbW9uaXRvcmluZy5weQ==) | `89.84% <27.58%> (-5.09%)` | :arrow_down: |\n| [src/sagemaker/session.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3423/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci9zZXNzaW9uLnB5) | `71.46% <50.00%> (-0.21%)` | :arrow_down: |\n| [src/sagemaker/training\\_compiler/config.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3423/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci90cmFpbmluZ19jb21waWxlci9jb25maWcucHk=) | `80.43% <60.00%> (-2.90%)` | :arrow_down: |\n| [src/sagemaker/pytorch/estimator.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3423/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci9weXRvcmNoL2VzdGltYXRvci5weQ==) | `94.73% <63.63%> (-5.27%)` | :arrow_down: |\n| 
[src/sagemaker/sklearn/estimator.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3423/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci9za2xlYXJuL2VzdGltYXRvci5weQ==) | `93.93% <63.63%> (-6.07%)` | :arrow_down: |\n| [src/sagemaker/model\\_monitor/dataset\\_format.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3423/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci9tb2RlbF9tb25pdG9yL2RhdGFzZXRfZm9ybWF0LnB5) | `72.72% <70.00%> (-2.28%)` | :arrow_down: |\n| [src/sagemaker/estimator.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3423/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci9lc3RpbWF0b3IucHk=) | `89.46% <75.00%> (-0.05%)` | :arrow_down: |\n| ... and [35 more](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3423/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws) | |\n\nHelp us with your feedback. Take ten seconds to tell us [how you rate us](https://about.codecov.io/nps?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws). Have a feature suggestion? [Share it here.](https://app.codecov.io/gh/feedback/?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws)\n"}, {"time": 1666201592.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-local-mode-tests\n* Commit ID: c05b46b24c8969588c38349c669204a99a144415\n* Result: SUCCEEDED\n* [Build Logs](https://0lfhakzi32.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=1769eecc-c786-4666-8e30-7dc0aa0b191f%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666202578.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-notebook-tests\n* Commit ID: c05b46b24c8969588c38349c669204a99a144415\n* Result: FAILED\n* [Build Logs](https://jr6hkjvk5d.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=fc170053-b2bd-4afc-a5be-d5f2e085a751%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666203706.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-slow-tests\n* Commit ID: c05b46b24c8969588c38349c669204a99a144415\n* Result: FAILED\n* [Build Logs](https://we1smikb89.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=8d56aa56-96ba-4726-a931-5ca20264256c%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application 
Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666209998.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-pr\n* Commit ID: c05b46b24c8969588c38349c669204a99a144415\n* Result: FAILED\n* [Build Logs](https://sxsf0yqucb.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=ea9ba926-4a29-42eb-a072-46224a4a27a0%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666217012.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-unit-tests\n* Commit ID: 5e6372c6a024ddb37566e0d61bb6e2b8f04e48ea\n* Result: FAILED\n* [Build Logs](https://prwmnp3ir9.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=615a7714-aa17-49ce-b5c4-93074690352a%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666219695.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-local-mode-tests\n* Commit ID: 5e6372c6a024ddb37566e0d61bb6e2b8f04e48ea\n* Result: SUCCEEDED\n* [Build Logs](https://0lfhakzi32.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=5a86a45b-f55d-4ba4-a8de-aa2f3451c817%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666220687.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-notebook-tests\n* Commit ID: 5e6372c6a024ddb37566e0d61bb6e2b8f04e48ea\n* Result: SUCCEEDED\n* [Build Logs](https://jr6hkjvk5d.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=4a964c37-880c-4e87-8f35-4b3f5c97d902%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666220839.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-unit-tests\n* Commit ID: 5e6372c6a024ddb37566e0d61bb6e2b8f04e48ea\n* Result: SUCCEEDED\n* [Build Logs](https://prwmnp3ir9.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=13252a88-5466-45b9-9fd9-44491fcf4fe8%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application 
Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666221588.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-slow-tests\n* Commit ID: 5e6372c6a024ddb37566e0d61bb6e2b8f04e48ea\n* Result: SUCCEEDED\n* [Build Logs](https://we1smikb89.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=cab22da4-2ac1-4587-84ee-7f9be9ef5550%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666232248.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-pr\n* Commit ID: 5e6372c6a024ddb37566e0d61bb6e2b8f04e48ea\n* Result: SUCCEEDED\n* [Build Logs](https://sxsf0yqucb.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=92e214c9-9bf5-4d22-9c46-666a91a24d88%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666654692.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-unit-tests\n* Commit ID: bc5f8e1c1aa367341696f4664d76f450a93da388\n* Result: FAILED\n* [Build Logs](https://prwmnp3ir9.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=6aacef13-4256-41e3-8a9a-9616a946f735%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666657151.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-local-mode-tests\n* Commit ID: bc5f8e1c1aa367341696f4664d76f450a93da388\n* Result: SUCCEEDED\n* [Build Logs](https://0lfhakzi32.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=778e4754-6c22-4dd4-a207-c1b355b4612d%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666657490.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-slow-tests\n* Commit ID: bc5f8e1c1aa367341696f4664d76f450a93da388\n* Result: FAILED\n* [Build Logs](https://we1smikb89.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=0e852c44-7524-4a50-b540-268b87223d39%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application 
Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666657810.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-pr\n* Commit ID: bc5f8e1c1aa367341696f4664d76f450a93da388\n* Result: FAILED\n* [Build Logs](https://sxsf0yqucb.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=e3733688-b03f-4cd2-9a49-fa7fab480d0f%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666658414.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-notebook-tests\n* Commit ID: bc5f8e1c1aa367341696f4664d76f450a93da388\n* Result: SUCCEEDED\n* [Build Logs](https://jr6hkjvk5d.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=687c7229-942e-426d-a78f-57966a58d16c%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666735983.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-unit-tests\n* Commit ID: df7d1fd754c353cbce09e8cac268e5d5e76cfc36\n* Result: FAILED\n* [Build Logs](https://prwmnp3ir9.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=26563041-4ce7-42ac-afc7-93db9f1a8ee8%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666736087.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-slow-tests\n* Commit ID: df7d1fd754c353cbce09e8cac268e5d5e76cfc36\n* Result: FAILED\n* [Build Logs](https://we1smikb89.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=a162d70b-40e4-47c1-b1cf-255c09564d18%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666739786.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-unit-tests\n* Commit ID: 62b0cb80093c84fc6d4cee95895f9c96f92c5dfa\n* Result: SUCCEEDED\n* [Build Logs](https://prwmnp3ir9.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=a095b6c6-e357-4bb6-9637-6acaa630541d%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application 
Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666739858.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-slow-tests\n* Commit ID: 62b0cb80093c84fc6d4cee95895f9c96f92c5dfa\n* Result: FAILED\n* [Build Logs](https://we1smikb89.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=7e42b450-1079-467c-aab5-43dbee92f553%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666742209.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-unit-tests\n* Commit ID: fafc82cafa4a8609127897fe5a63e59833b95881\n* Result: SUCCEEDED\n* [Build Logs](https://prwmnp3ir9.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=c2e05f42-2244-42ce-88e0-abb8cfc1aa1f%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666742759.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-local-mode-tests\n* Commit ID: fafc82cafa4a8609127897fe5a63e59833b95881\n* Result: SUCCEEDED\n* [Build Logs](https://0lfhakzi32.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=32aded8e-1e8e-4fb2-b0ab-6606fb2d8ecb%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666743945.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-notebook-tests\n* Commit ID: fafc82cafa4a8609127897fe5a63e59833b95881\n* Result: SUCCEEDED\n* [Build Logs](https://jr6hkjvk5d.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=94223e31-223d-4ac3-a118-9dda3dbd0299%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666744508.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-slow-tests\n* Commit ID: fafc82cafa4a8609127897fe5a63e59833b95881\n* Result: FAILED\n* [Build Logs](https://we1smikb89.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=86b24504-2ee1-414b-854c-56f92593a002%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application 
Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666745693.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-pr\n* Commit ID: fafc82cafa4a8609127897fe5a63e59833b95881\n* Result: FAILED\n* [Build Logs](https://sxsf0yqucb.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=052ef480-4a47-4a5c-9620-897a6ad964d5%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666805532.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-unit-tests\n* Commit ID: 4625d132f468cbd0f440c71bfbf8ffcd6b1906e6\n* Result: SUCCEEDED\n* [Build Logs](https://prwmnp3ir9.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=64ea19af-f294-40cf-acfe-8e27d62c8684%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666806207.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-local-mode-tests\n* Commit ID: 4625d132f468cbd0f440c71bfbf8ffcd6b1906e6\n* Result: SUCCEEDED\n* [Build Logs](https://0lfhakzi32.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=fea53a2e-ed48-4287-aa57-dc971dd254bb%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666806644.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-pr\n* Commit ID: 4625d132f468cbd0f440c71bfbf8ffcd6b1906e6\n* Result: SUCCEEDED\n* [Build Logs](https://sxsf0yqucb.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=0a561b9b-74ed-4c5c-b334-4759f83c004f%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666807186.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-notebook-tests\n* Commit ID: 4625d132f468cbd0f440c71bfbf8ffcd6b1906e6\n* Result: SUCCEEDED\n* [Build Logs](https://jr6hkjvk5d.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=c28fe2c0-fbd0-458b-974f-8acec86946c9%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application 
Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666813715.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-slow-tests\n* Commit ID: 4625d132f468cbd0f440c71bfbf8ffcd6b1906e6\n* Result: FAILED\n* [Build Logs](https://we1smikb89.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=b0cf03dc-09e8-494b-aa81-7a28d1f03086%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666902589.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-unit-tests\n* Commit ID: 6371ba196fae85953de3b7985d9f11a088dd8f95\n* Result: SUCCEEDED\n* [Build Logs](https://prwmnp3ir9.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=94afb8b5-99b0-478a-adef-082edd292aa8%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666903128.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-local-mode-tests\n* Commit ID: 6371ba196fae85953de3b7985d9f11a088dd8f95\n* Result: SUCCEEDED\n* [Build Logs](https://0lfhakzi32.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=8f4bf34f-1010-49ca-af0b-232d239ab01c%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666904430.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-pr\n* Commit ID: 6371ba196fae85953de3b7985d9f11a088dd8f95\n* Result: SUCCEEDED\n* [Build Logs](https://sxsf0yqucb.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=9247faaf-6ea7-45bb-941a-984819219056%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666904498.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-notebook-tests\n* Commit ID: 6371ba196fae85953de3b7985d9f11a088dd8f95\n* Result: SUCCEEDED\n* [Build Logs](https://jr6hkjvk5d.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=ff7a6d32-4a82-40c9-9228-a978873d6432%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application 
Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666910776.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-slow-tests\n* Commit ID: 6371ba196fae85953de3b7985d9f11a088dd8f95\n* Result: FAILED\n* [Build Logs](https://we1smikb89.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=5bfd8d5e-c5ed-461d-910d-85c2f18fa3c7%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666911822.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-slow-tests\n* Commit ID: 6371ba196fae85953de3b7985d9f11a088dd8f95\n* Result: FAILED\n* [Build Logs](https://we1smikb89.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=ff298231-9e6d-43e1-9af3-8a86a0007924%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666916606.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-slow-tests\n* Commit ID: 6371ba196fae85953de3b7985d9f11a088dd8f95\n* Result: SUCCEEDED\n* [Build Logs](https://we1smikb89.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=c4709243-afc7-4c39-9ccf-6c2d78de50b1%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}], "issues": {}}
|
aws/sagemaker-python-sdk
| 3,432
|
https://github.com/aws/sagemaker-python-sdk/pull/3432
|
aws__sagemaker-python-sdk-3432
|
[]
|
1a39422883c7c7b04865e58fda201bcb93075669
|
diff --git a/src/sagemaker/fw_utils.py b/src/sagemaker/fw_utils.py
index dacd0a229c..af48a111bc 100644
--- a/src/sagemaker/fw_utils.py
+++ b/src/sagemaker/fw_utils.py
@@ -134,10 +134,13 @@
"1.12.0",
]
+
TORCH_DISTRIBUTED_SUPPORTED_FRAMEWORK_VERSIONS = ["1.11", "1.11.0"]
+
TRAINIUM_SUPPORTED_DISTRIBUTION_STRATEGIES = ["torch_distributed"]
+
SMDISTRIBUTED_SUPPORTED_STRATEGIES = ["dataparallel", "modelparallel"]
@@ -160,6 +163,12 @@ def validate_source_dir(script, directory):
return True
+GRAVITON_ALLOWED_TARGET_INSTANCE_FAMILY = ["c6g", "t4g", "r6g", "m6g"]
+
+
+GRAVITON_ALLOWED_FRAMEWORKS = set(["tensorflow", "pytorch"])
+
+
def validate_source_code_input_against_pipeline_variables(
entry_point: Optional[Union[str, PipelineVariable]] = None,
source_dir: Optional[Union[str, PipelineVariable]] = None,
diff --git a/src/sagemaker/image_uri_config/pytorch.json b/src/sagemaker/image_uri_config/pytorch.json
index a88f7f1c50..74127a1fda 100644
--- a/src/sagemaker/image_uri_config/pytorch.json
+++ b/src/sagemaker/image_uri_config/pytorch.json
@@ -654,6 +654,51 @@
}
}
},
+ "inference_graviton": {
+ "processors": [
+ "cpu"
+ ],
+ "version_aliases": {
+ "1.12": "1.12.1"
+ },
+ "versions": {
+ "1.12.1": {
+ "py_versions": [
+ "py38"
+ ],
+ "registries": {
+ "af-south-1": "626614931356",
+ "ap-east-1": "871362719292",
+ "ap-northeast-1": "763104351884",
+ "ap-northeast-2": "763104351884",
+ "ap-northeast-3": "364406365360",
+ "ap-south-1": "763104351884",
+ "ap-southeast-1": "763104351884",
+ "ap-southeast-2": "763104351884",
+ "ap-southeast-3": "907027046896",
+ "ca-central-1": "763104351884",
+ "cn-north-1": "727897471807",
+ "cn-northwest-1": "727897471807",
+ "eu-central-1": "763104351884",
+ "eu-north-1": "763104351884",
+ "eu-west-1": "763104351884",
+ "eu-west-2": "763104351884",
+ "eu-west-3": "763104351884",
+ "eu-south-1": "692866216735",
+ "me-south-1": "217643126080",
+ "sa-east-1": "763104351884",
+ "us-east-1": "763104351884",
+ "us-east-2": "763104351884",
+ "us-gov-west-1": "442386744353",
+ "us-iso-east-1": "886529160074",
+ "us-west-1": "763104351884",
+ "us-west-2": "763104351884"
+ },
+ "repository": "pytorch-inference-graviton",
+ "container_version": {"cpu": "ubuntu20.04"}
+ }
+ }
+ },
"training": {
"processors": [
"cpu",
diff --git a/src/sagemaker/image_uri_config/tensorflow.json b/src/sagemaker/image_uri_config/tensorflow.json
index 6a2318ddbe..0f5a390c8d 100644
--- a/src/sagemaker/image_uri_config/tensorflow.json
+++ b/src/sagemaker/image_uri_config/tensorflow.json
@@ -1471,6 +1471,51 @@
}
}
},
+ "inference_graviton": {
+ "processors": [
+ "cpu"
+ ],
+ "version_aliases": {
+ "2.9": "2.9.1"
+ },
+ "versions": {
+ "2.9.1": {
+ "py_versions": [
+ "py38"
+ ],
+ "registries": {
+ "af-south-1": "626614931356",
+ "ap-east-1": "871362719292",
+ "ap-northeast-1": "763104351884",
+ "ap-northeast-2": "763104351884",
+ "ap-northeast-3": "364406365360",
+ "ap-south-1": "763104351884",
+ "ap-southeast-1": "763104351884",
+ "ap-southeast-2": "763104351884",
+ "ap-southeast-3": "907027046896",
+ "ca-central-1": "763104351884",
+ "cn-north-1": "727897471807",
+ "cn-northwest-1": "727897471807",
+ "eu-central-1": "763104351884",
+ "eu-north-1": "763104351884",
+ "eu-west-1": "763104351884",
+ "eu-west-2": "763104351884",
+ "eu-west-3": "763104351884",
+ "eu-south-1": "692866216735",
+ "me-south-1": "217643126080",
+ "sa-east-1": "763104351884",
+ "us-east-1": "763104351884",
+ "us-east-2": "763104351884",
+ "us-gov-west-1": "442386744353",
+ "us-iso-east-1": "886529160074",
+ "us-west-1": "763104351884",
+ "us-west-2": "763104351884"
+ },
+ "repository": "tensorflow-inference-graviton",
+ "container_version": {"cpu": "ubuntu20.04"}
+ }
+ }
+ },
"training": {
"processors": [
"cpu",
diff --git a/src/sagemaker/image_uris.py b/src/sagemaker/image_uris.py
index 01ed5f1d99..f1c0d69af2 100644
--- a/src/sagemaker/image_uris.py
+++ b/src/sagemaker/image_uris.py
@@ -25,6 +25,7 @@
from sagemaker.jumpstart import artifacts
from sagemaker.workflow import is_pipeline_variable
from sagemaker.workflow.utilities import override_pipeline_parameter_var
+from sagemaker.fw_utils import GRAVITON_ALLOWED_TARGET_INSTANCE_FAMILY, GRAVITON_ALLOWED_FRAMEWORKS
logger = logging.getLogger(__name__)
@@ -151,6 +152,7 @@ def retrieve(
inference_tool = _get_inference_tool(inference_tool, instance_type)
if inference_tool == "neuron":
_framework = f"{framework}-{inference_tool}"
+ image_scope = _get_image_scope_for_instance_type(_framework, instance_type, image_scope)
config = _config_for_framework_and_scope(_framework, image_scope, accelerator_type)
original_version = version
@@ -216,6 +218,9 @@ def retrieve(
else:
tag_prefix = version_config.get("tag_prefix", version)
+ if repo == f"{framework}-inference-graviton":
+ container_version = f"{container_version}-sagemaker"
+
tag = _format_tag(tag_prefix, processor, py_version, container_version, inference_tool)
if instance_type is not None and _should_auto_select_container_version(
@@ -287,6 +292,15 @@ def config_for_framework(framework):
return json.load(f)
+def _get_image_scope_for_instance_type(framework, instance_type, image_scope):
+ """Extract the image scope from instance type."""
+ if framework in GRAVITON_ALLOWED_FRAMEWORKS and isinstance(instance_type, str):
+ match = re.match(r"^ml[\._]([a-z\d]+)\.?\w*$", instance_type)
+ if match and match[1] in GRAVITON_ALLOWED_TARGET_INSTANCE_FAMILY:
+ return "inference_graviton"
+ return image_scope
+
+
def _get_inference_tool(inference_tool, instance_type):
"""Extract the inference tool name from instance type."""
if not inference_tool and instance_type:
|
diff --git a/tests/conftest.py b/tests/conftest.py
index 17a5c1db9c..a54cbfea15 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -308,6 +308,16 @@ def huggingface_pytorch_latest_inference_py_version(huggingface_inference_pytorc
)
[email protected](scope="module")
+def graviton_tensorflow_version():
+ return "2.9.1"
+
+
[email protected](scope="module")
+def graviton_pytorch_version():
+ return "1.12.1"
+
+
@pytest.fixture(scope="module")
def huggingface_tensorflow_latest_training_py_version():
return "py38"
diff --git a/tests/unit/sagemaker/image_uris/expected_uris.py b/tests/unit/sagemaker/image_uris/expected_uris.py
index 9bcdbc6e81..a49ff57936 100644
--- a/tests/unit/sagemaker/image_uris/expected_uris.py
+++ b/tests/unit/sagemaker/image_uris/expected_uris.py
@@ -38,3 +38,18 @@ def algo_uri(algo, account, region, version=1):
def monitor_uri(account, region=REGION):
domain = ALTERNATE_DOMAINS.get(region, DOMAIN)
return MONITOR_URI_FORMAT.format(account, region, domain)
+
+
+def graviton_framework_uri(
+ repo,
+ fw_version,
+ account,
+ py_version="py38",
+ processor="cpu",
+ region=REGION,
+ container_version="ubuntu20.04-sagemaker",
+):
+ domain = ALTERNATE_DOMAINS.get(region, DOMAIN)
+ tag = "-".join(x for x in (fw_version, processor, py_version, container_version) if x)
+
+ return IMAGE_URI_FORMAT.format(account, region, domain, repo, tag)
diff --git a/tests/unit/sagemaker/image_uris/test_graviton.py b/tests/unit/sagemaker/image_uris/test_graviton.py
new file mode 100644
index 0000000000..450073e558
--- /dev/null
+++ b/tests/unit/sagemaker/image_uris/test_graviton.py
@@ -0,0 +1,83 @@
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"). You
+# may not use this file except in compliance with the License. A copy of
+# the License is located at
+#
+# http://aws.amazon.com/apache2.0/
+#
+# or in the "license" file accompanying this file. This file is
+# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
+# ANY KIND, either express or implied. See the License for the specific
+# language governing permissions and limitations under the License.
+from __future__ import absolute_import
+
+from sagemaker import image_uris
+from tests.unit.sagemaker.image_uris import expected_uris
+
+GRAVITON_ALGOS = ("tensorflow", "pytorch")
+GRAVITON_INSTANCE_TYPES = [
+ "ml.c6g.4xlarge",
+ "ml.t4g.2xlarge",
+ "ml.r6g.2xlarge",
+ "ml.m6g.4xlarge",
+]
+
+ACCOUNTS = {
+ "af-south-1": "626614931356",
+ "ap-east-1": "871362719292",
+ "ap-northeast-1": "763104351884",
+ "ap-northeast-2": "763104351884",
+ "ap-northeast-3": "364406365360",
+ "ap-south-1": "763104351884",
+ "ap-southeast-1": "763104351884",
+ "ap-southeast-2": "763104351884",
+ "ap-southeast-3": "907027046896",
+ "ca-central-1": "763104351884",
+ "cn-north-1": "727897471807",
+ "cn-northwest-1": "727897471807",
+ "eu-central-1": "763104351884",
+ "eu-north-1": "763104351884",
+ "eu-west-1": "763104351884",
+ "eu-west-2": "763104351884",
+ "eu-west-3": "763104351884",
+ "eu-south-1": "692866216735",
+ "me-south-1": "217643126080",
+ "sa-east-1": "763104351884",
+ "us-east-1": "763104351884",
+ "us-east-2": "763104351884",
+ "us-gov-west-1": "442386744353",
+ "us-iso-east-1": "886529160074",
+ "us-west-1": "763104351884",
+ "us-west-2": "763104351884",
+}
+
+GRAVITON_REGIONS = ACCOUNTS.keys()
+
+
+def _test_graviton_framework_uris(framework, version):
+ for region in GRAVITON_REGIONS:
+ for instance_type in GRAVITON_INSTANCE_TYPES:
+ uri = image_uris.retrieve(
+ framework, region, instance_type=instance_type, version=version
+ )
+ expected = _expected_graviton_framework_uri(framework, version, region=region)
+ assert expected == uri
+
+
+def test_graviton_tensorflow(graviton_tensorflow_version):
+ _test_graviton_framework_uris("tensorflow", graviton_tensorflow_version)
+
+
+def test_graviton_pytorch(graviton_pytorch_version):
+ _test_graviton_framework_uris("pytorch", graviton_pytorch_version)
+
+
+def _expected_graviton_framework_uri(framework, version, region):
+ return expected_uris.graviton_framework_uri(
+ "{}-inference-graviton".format(framework),
+ fw_version=version,
+ py_version="py38",
+ account=ACCOUNTS[region],
+ region=region,
+ )
| 2022-10-21T15:46:55
|
{}
|
{"src/sagemaker/fw_utils.py": "# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\"\"\"Utility methods used by framework classes\"\"\"\nfrom __future__ import absolute_import\n\nimport logging\nimport os\nimport re\nimport time\nimport shutil\nimport tempfile\nfrom collections import namedtuple\nfrom typing import Optional, Union, Dict\n\nimport sagemaker.image_uris\nfrom sagemaker.session_settings import SessionSettings\nimport sagemaker.utils\nfrom sagemaker.workflow import is_pipeline_variable\n\nfrom sagemaker.deprecations import renamed_warning, renamed_kwargs\nfrom sagemaker.workflow.entities import PipelineVariable\n\nlogger = logging.getLogger(__name__)\n\n_TAR_SOURCE_FILENAME = \"source.tar.gz\"\n\nUploadedCode = namedtuple(\"UserCode\", [\"s3_prefix\", \"script_name\"])\n\"\"\"sagemaker.fw_utils.UserCode: An object containing the S3 prefix and script name.\nThis is for the source code used for the entry point with an ``Estimator``. It can be\ninstantiated with positional or keyword arguments.\n\"\"\"\n\nPYTHON_2_DEPRECATION_WARNING = (\n \"{latest_supported_version} is the latest version of {framework} that supports \"\n \"Python 2. Newer versions of {framework} will only be available for Python 3.\"\n \"Please set the argument \\\"py_version='py3'\\\" to use the Python 3 {framework} image.\"\n)\nPARAMETER_SERVER_MULTI_GPU_WARNING = (\n \"If you have selected a multi-GPU training instance type \"\n \"and also enabled parameter server for distributed training, \"\n \"distributed training with the default parameter server configuration will not \"\n \"fully leverage all GPU cores; the parameter server will be configured to run \"\n \"only one worker per host regardless of the number of GPUs.\"\n)\n\nDEBUGGER_UNSUPPORTED_REGIONS = (\n \"us-iso-east-1\",\n \"ap-southeast-3\",\n \"ap-southeast-4\",\n \"eu-south-2\",\n \"me-central-1\",\n \"ap-south-2\",\n \"eu-central-2\",\n \"us-gov-east-1\",\n)\nPROFILER_UNSUPPORTED_REGIONS = (\n \"us-iso-east-1\",\n \"ap-southeast-3\",\n \"ap-southeast-4\",\n \"eu-south-2\",\n \"me-central-1\",\n \"ap-south-2\",\n \"eu-central-2\",\n \"us-gov-east-1\",\n)\n\nSINGLE_GPU_INSTANCE_TYPES = (\"ml.p2.xlarge\", \"ml.p3.2xlarge\")\nSM_DATAPARALLEL_SUPPORTED_INSTANCE_TYPES = (\n \"ml.p3.16xlarge\",\n \"ml.p3dn.24xlarge\",\n \"ml.p4d.24xlarge\",\n \"local_gpu\",\n)\nSM_DATAPARALLEL_SUPPORTED_FRAMEWORK_VERSIONS = {\n \"tensorflow\": [\n \"2.3\",\n \"2.3.1\",\n \"2.3.2\",\n \"2.4\",\n \"2.4.1\",\n \"2.4.3\",\n \"2.5\",\n \"2.5.0\",\n \"2.5.1\",\n \"2.6\",\n \"2.6.0\",\n \"2.6.2\",\n \"2.6.3\",\n \"2.7\",\n \"2.7.1\",\n \"2.8\",\n \"2.8.0\",\n \"2.9\",\n \"2.9.1\",\n ],\n \"pytorch\": [\n \"1.6\",\n \"1.6.0\",\n \"1.7\",\n \"1.7.1\",\n \"1.8\",\n \"1.8.0\",\n \"1.8.1\",\n \"1.9\",\n \"1.9.0\",\n \"1.9.1\",\n \"1.10\",\n \"1.10.0\",\n \"1.10.2\",\n \"1.11\",\n \"1.11.0\",\n \"1.12\",\n \"1.12.0\",\n ],\n}\n\nPYTORCHDDP_SUPPORTED_FRAMEWORK_VERSIONS = [\n \"1.10\",\n \"1.10.0\",\n \"1.10.2\",\n \"1.11\",\n \"1.11.0\",\n \"1.12\",\n 
\"1.12.0\",\n]\n\nTORCH_DISTRIBUTED_SUPPORTED_FRAMEWORK_VERSIONS = [\"1.11\", \"1.11.0\"]\n\nTRAINIUM_SUPPORTED_DISTRIBUTION_STRATEGIES = [\"torch_distributed\"]\n\nSMDISTRIBUTED_SUPPORTED_STRATEGIES = [\"dataparallel\", \"modelparallel\"]\n\n\ndef validate_source_dir(script, directory):\n \"\"\"Validate that the source directory exists and it contains the user script.\n\n Args:\n script (str): Script filename.\n directory (str): Directory containing the source file.\n Raises:\n ValueError: If ``directory`` does not exist, is not a directory, or does\n not contain ``script``.\n \"\"\"\n if directory:\n if not os.path.isfile(os.path.join(directory, script)):\n raise ValueError(\n 'No file named \"{}\" was found in directory \"{}\".'.format(script, directory)\n )\n\n return True\n\n\ndef validate_source_code_input_against_pipeline_variables(\n entry_point: Optional[Union[str, PipelineVariable]] = None,\n source_dir: Optional[Union[str, PipelineVariable]] = None,\n git_config: Optional[Dict[str, str]] = None,\n enable_network_isolation: Union[bool, PipelineVariable] = False,\n):\n \"\"\"Validate source code input against pipeline variables\n\n Args:\n entry_point (str or PipelineVariable): The path to the local Python source file that\n should be executed as the entry point to training (default: None).\n source_dir (str or PipelineVariable): The Path to a directory with any other\n training source code dependencies aside from the entry point file (default: None).\n git_config (Dict[str, str]): Git configurations used for cloning files (default: None).\n enable_network_isolation (bool or PipelineVariable): Specifies whether container will run\n in network isolation mode (default: False).\n \"\"\"\n if is_pipeline_variable(enable_network_isolation) or enable_network_isolation is True:\n if is_pipeline_variable(entry_point) or is_pipeline_variable(source_dir):\n raise TypeError(\n \"entry_point, source_dir should not be pipeline variables \"\n \"when enable_network_isolation is a pipeline variable or it is set to True.\"\n )\n if git_config:\n if is_pipeline_variable(entry_point) or is_pipeline_variable(source_dir):\n raise TypeError(\n \"entry_point, source_dir should not be pipeline variables when git_config is given.\"\n )\n if is_pipeline_variable(entry_point):\n if not source_dir:\n raise TypeError(\n \"The entry_point should not be a pipeline variable when source_dir is missing.\"\n )\n if not is_pipeline_variable(source_dir) and not source_dir.lower().startswith(\"s3://\"):\n raise TypeError(\n \"The entry_point should not be a pipeline variable when source_dir is a local path.\"\n )\n logger.warning(\n \"The entry_point is a pipeline variable: %s. During pipeline execution, \"\n \"the interpreted value of entry_point has to be a local path in the container \"\n \"pointing to a Python source file which is located at the root of source_dir.\",\n type(entry_point),\n )\n if is_pipeline_variable(source_dir):\n logger.warning(\n \"The source_dir is a pipeline variable: %s. 
During pipeline execution, \"\n \"the interpreted value of source_dir has to be an S3 URI and \"\n \"must point to a tar.gz file\",\n type(source_dir),\n )\n\n\ndef get_mp_parameters(distribution):\n \"\"\"Get the model parallelism parameters provided by the user.\n\n Args:\n distribution: distribution dictionary defined by the user.\n\n Returns:\n params: dictionary containing model parallelism parameters\n used for training.\n \"\"\"\n try:\n mp_dict = distribution[\"smdistributed\"][\"modelparallel\"]\n except KeyError:\n mp_dict = {}\n if mp_dict.get(\"enabled\", False) is True:\n params = mp_dict.get(\"parameters\", {})\n validate_mp_config(params)\n return params\n return None\n\n\ndef validate_mp_config(config):\n \"\"\"Validate the configuration dictionary for model parallelism.\n\n Args:\n config (dict): Dictionary holding configuration keys and values.\n\n Raises:\n ValueError: If any of the keys have incorrect values.\n \"\"\"\n\n if \"partitions\" not in config:\n raise ValueError(\"'partitions' is a required parameter.\")\n\n def validate_positive(key):\n try:\n if not isinstance(config[key], int) or config[key] < 1:\n raise ValueError(f\"The number of {key} must be a positive integer.\")\n except KeyError:\n pass\n\n def validate_in(key, vals):\n try:\n if config[key] not in vals:\n raise ValueError(f\"{key} must be a value in: {vals}.\")\n except KeyError:\n pass\n\n def validate_bool(keys):\n validate_in(keys, [True, False])\n\n validate_in(\"pipeline\", [\"simple\", \"interleaved\", \"_only_forward\"])\n validate_in(\"placement_strategy\", [\"spread\", \"cluster\"])\n validate_in(\"optimize\", [\"speed\", \"memory\"])\n\n for key in [\"microbatches\", \"partitions\", \"active_microbatches\"]:\n validate_positive(key)\n\n for key in [\n \"auto_partition\",\n \"contiguous\",\n \"load_partition\",\n \"horovod\",\n \"ddp\",\n \"deterministic_server\",\n ]:\n validate_bool(key)\n\n if \"partition_file\" in config and not isinstance(config.get(\"partition_file\"), str):\n raise ValueError(\"'partition_file' must be a str.\")\n\n if config.get(\"auto_partition\") is False and \"default_partition\" not in config:\n raise ValueError(\"default_partition must be supplied if auto_partition is set to False!\")\n\n if \"default_partition\" in config and config[\"default_partition\"] >= config[\"partitions\"]:\n raise ValueError(\"default_partition must be less than the number of partitions!\")\n\n if \"memory_weight\" in config and (\n config[\"memory_weight\"] > 1.0 or config[\"memory_weight\"] < 0.0\n ):\n raise ValueError(\"memory_weight must be between 0.0 and 1.0!\")\n\n if \"ddp_port\" in config and \"ddp\" not in config:\n raise ValueError(\"`ddp_port` needs `ddp` to be set as well\")\n\n if \"ddp_dist_backend\" in config and \"ddp\" not in config:\n raise ValueError(\"`ddp_dist_backend` needs `ddp` to be set as well\")\n\n if \"ddp_port\" in config:\n if not isinstance(config[\"ddp_port\"], int) or config[\"ddp_port\"] < 0:\n value = config[\"ddp_port\"]\n raise ValueError(f\"Invalid port number {value}.\")\n\n if config.get(\"horovod\", False) and config.get(\"ddp\", False):\n raise ValueError(\"'ddp' and 'horovod' cannot be simultaneously enabled.\")\n\n\ndef tar_and_upload_dir(\n session,\n bucket,\n s3_key_prefix,\n script,\n directory=None,\n dependencies=None,\n kms_key=None,\n s3_resource=None,\n settings: Optional[SessionSettings] = None,\n):\n \"\"\"Package source files and upload a compress tar file to S3.\n\n The S3 location will be 
``s3://<bucket>/s3_key_prefix/sourcedir.tar.gz``.\n If directory is an S3 URI, an UploadedCode object will be returned, but\n nothing will be uploaded to S3 (this allow reuse of code already in S3).\n If directory is None, the script will be added to the archive at\n ``./<basename of script>``. If directory is not None, the (recursive) contents\n of the directory will be added to the archive. directory is treated as the base\n path of the archive, and the script name is assumed to be a filename or relative path\n inside the directory.\n\n Args:\n session (boto3.Session): Boto session used to access S3.\n bucket (str): S3 bucket to which the compressed file is uploaded.\n s3_key_prefix (str): Prefix for the S3 key.\n script (str): Script filename or path.\n directory (str): Optional. Directory containing the source file. If it\n starts with \"s3://\", no action is taken.\n dependencies (List[str]): Optional. A list of paths to directories\n (absolute or relative) containing additional libraries that will be\n copied into /opt/ml/lib\n kms_key (str): Optional. KMS key ID used to upload objects to the bucket\n (default: None).\n s3_resource (boto3.resource(\"s3\")): Optional. Pre-instantiated Boto3 Resource\n for S3 connections, can be used to customize the configuration,\n e.g. set the endpoint URL (default: None).\n settings (sagemaker.session_settings.SessionSettings): Optional. The settings\n of the SageMaker ``Session``, can be used to override the default encryption\n behavior (default: None).\n Returns:\n sagemaker.fw_utils.UserCode: An object with the S3 bucket and key (S3 prefix) and\n script name.\n \"\"\"\n if directory and (is_pipeline_variable(directory) or directory.lower().startswith(\"s3://\")):\n return UploadedCode(s3_prefix=directory, script_name=script)\n\n script_name = script if directory else os.path.basename(script)\n dependencies = dependencies or []\n key = \"%s/sourcedir.tar.gz\" % s3_key_prefix\n tmp = tempfile.mkdtemp()\n encrypt_artifact = True if settings is None else settings.encrypt_repacked_artifacts\n\n try:\n source_files = _list_files_to_compress(script, directory) + dependencies\n tar_file = sagemaker.utils.create_tar_file(\n source_files, os.path.join(tmp, _TAR_SOURCE_FILENAME)\n )\n\n if kms_key:\n extra_args = {\"ServerSideEncryption\": \"aws:kms\", \"SSEKMSKeyId\": kms_key}\n elif encrypt_artifact:\n # encrypt the tarball at rest in S3 with the default AWS managed KMS key for S3\n # see https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#API_PutObject_RequestSyntax\n extra_args = {\"ServerSideEncryption\": \"aws:kms\"}\n else:\n extra_args = None\n\n if s3_resource is None:\n s3_resource = session.resource(\"s3\", region_name=session.region_name)\n else:\n print(\"Using provided s3_resource\")\n\n s3_resource.Object(bucket, key).upload_file(tar_file, ExtraArgs=extra_args)\n finally:\n shutil.rmtree(tmp)\n\n return UploadedCode(s3_prefix=\"s3://%s/%s\" % (bucket, key), script_name=script_name)\n\n\ndef _list_files_to_compress(script, directory):\n \"\"\"Placeholder docstring\"\"\"\n if directory is None:\n return [script]\n\n basedir = directory if directory else os.path.dirname(script)\n return [os.path.join(basedir, name) for name in os.listdir(basedir)]\n\n\ndef framework_name_from_image(image_uri):\n # noinspection LongLine\n \"\"\"Extract the framework and Python version from the image name.\n\n Args:\n image_uri (str): Image URI, which should be one of the following forms:\n legacy:\n 
'<account>.dkr.ecr.<region>.amazonaws.com/sagemaker-<fw>-<py_ver>-<device>:<container_version>'\n legacy:\n '<account>.dkr.ecr.<region>.amazonaws.com/sagemaker-<fw>-<py_ver>-<device>:<fw_version>-<device>-<py_ver>'\n current:\n '<account>.dkr.ecr.<region>.amazonaws.com/sagemaker-<fw>:<fw_version>-<device>-<py_ver>'\n current:\n '<account>.dkr.ecr.<region>.amazonaws.com/sagemaker-rl-<fw>:<rl_toolkit><rl_version>-<device>-<py_ver>'\n current:\n '<account>.dkr.ecr.<region>.amazonaws.com/<fw>-<image_scope>:<fw_version>-<device>-<py_ver>'\n current:\n '<account>.dkr.ecr.<region>.amazonaws.com/sagemaker-xgboost:<fw_version>-<container_version>'\n\n Returns:\n tuple: A tuple containing:\n\n - str: The framework name\n - str: The Python version\n - str: The image tag\n - str: If the TensorFlow image is script mode\n \"\"\"\n sagemaker_pattern = re.compile(sagemaker.utils.ECR_URI_PATTERN)\n sagemaker_match = sagemaker_pattern.match(image_uri)\n if sagemaker_match is None:\n return None, None, None, None\n\n # extract framework, python version and image tag\n # We must support both the legacy and current image name format.\n name_pattern = re.compile(\n r\"\"\"^(?:sagemaker(?:-rl)?-)?\n (tensorflow|mxnet|chainer|pytorch|scikit-learn|xgboost\n |huggingface-tensorflow|huggingface-pytorch\n |huggingface-tensorflow-trcomp|huggingface-pytorch-trcomp)(?:-)?\n (scriptmode|training)?\n :(.*)-(.*?)-(py2|py3\\d*)(?:.*)$\"\"\",\n re.VERBOSE,\n )\n name_match = name_pattern.match(sagemaker_match.group(9))\n if name_match is not None:\n fw, scriptmode, ver, device, py = (\n name_match.group(1),\n name_match.group(2),\n name_match.group(3),\n name_match.group(4),\n name_match.group(5),\n )\n return fw, py, \"{}-{}-{}\".format(ver, device, py), scriptmode\n\n legacy_name_pattern = re.compile(r\"^sagemaker-(tensorflow|mxnet)-(py2|py3)-(cpu|gpu):(.*)$\")\n legacy_match = legacy_name_pattern.match(sagemaker_match.group(9))\n if legacy_match is not None:\n return (legacy_match.group(1), legacy_match.group(2), legacy_match.group(4), None)\n\n # sagemaker-xgboost images are tagged with two aliases, e.g.:\n # 1. Long tag: \"315553699071.dkr.ecr.us-west-2.amazonaws.com/sagemaker-xgboost:1.5-1-cpu-py3\"\n # 2. 
Short tag: \"315553699071.dkr.ecr.us-west-2.amazonaws.com/sagemaker-xgboost:1.5-1\"\n # Note 1: Both tags point to the same image\n # Note 2: Both tags have full GPU capabilities, despite \"cpu\" delineation in the long tag\n short_xgboost_tag_pattern = re.compile(r\"^sagemaker-(xgboost):(.*)$\")\n short_xgboost_tag_match = short_xgboost_tag_pattern.match(sagemaker_match.group(9))\n if short_xgboost_tag_match is not None:\n return (short_xgboost_tag_match.group(1), \"py3\", short_xgboost_tag_match.group(2), None)\n return None, None, None, None\n\n\ndef framework_version_from_tag(image_tag):\n \"\"\"Extract the framework version from the image tag.\n\n Args:\n image_tag (str): Image tag, which should take the form\n '<framework_version>-<device>-<py_version>'\n '<xgboost_version>-<container_version>'\n\n Returns:\n str: The framework version.\n \"\"\"\n tag_pattern = re.compile(r\"^(.*)-(cpu|gpu)-(py2|py3\\d*)$\")\n tag_match = tag_pattern.match(image_tag)\n if tag_match is None:\n short_xgboost_tag_pattern = re.compile(r\"^(\\d\\.\\d+\\-\\d)$\")\n tag_match = short_xgboost_tag_pattern.match(image_tag)\n return None if tag_match is None else tag_match.group(1)\n\n\ndef model_code_key_prefix(code_location_key_prefix, model_name, image):\n \"\"\"Returns the s3 key prefix for uploading code during model deployment.\n\n The location returned is a potential concatenation of 2 parts\n 1. code_location_key_prefix if it exists\n 2. model_name or a name derived from the image\n\n Args:\n code_location_key_prefix (str): the s3 key prefix from code_location\n model_name (str): the name of the model\n image (str): the image from which a default name can be extracted\n\n Returns:\n str: the key prefix to be used in uploading code\n \"\"\"\n name_from_image = f\"/model_code/{int(time.time())}\"\n if not is_pipeline_variable(image):\n name_from_image = sagemaker.utils.name_from_image(image)\n return \"/\".join(filter(None, [code_location_key_prefix, model_name or name_from_image]))\n\n\ndef warn_if_parameter_server_with_multi_gpu(training_instance_type, distribution):\n \"\"\"Warn the user about training when it doesn't leverage all the GPU cores.\n\n Warn the user that training will not fully leverage all the GPU\n cores if parameter server is enabled and a multi-GPU instance is selected.\n Distributed training with the default parameter server setup doesn't\n support multi-GPU instances.\n\n Args:\n training_instance_type (str): A string representing the type of training instance selected.\n distribution (dict): A dictionary with information to enable distributed training.\n (Defaults to None if distributed training is not enabled.) For example:\n\n .. 
code:: python\n\n {\n \"parameter_server\": {\n \"enabled\": True\n }\n }\n\n\n \"\"\"\n if training_instance_type == \"local\" or distribution is None:\n return\n if is_pipeline_variable(training_instance_type):\n # The training_instance_type is not available in compile time.\n # Rather, it's given in Pipeline execution time\n return\n\n is_multi_gpu_instance = (\n training_instance_type == \"local_gpu\"\n or training_instance_type.split(\".\")[1].startswith(\"p\")\n ) and training_instance_type not in SINGLE_GPU_INSTANCE_TYPES\n\n ps_enabled = \"parameter_server\" in distribution and distribution[\"parameter_server\"].get(\n \"enabled\", False\n )\n\n if is_multi_gpu_instance and ps_enabled:\n logger.warning(PARAMETER_SERVER_MULTI_GPU_WARNING)\n\n\ndef validate_smdistributed(\n instance_type, framework_name, framework_version, py_version, distribution, image_uri=None\n):\n \"\"\"Check if smdistributed strategy is correctly invoked by the user.\n\n Currently, two strategies are supported: `dataparallel` or `modelparallel`.\n Validate if the user requested strategy is supported.\n\n Currently, only one strategy can be specified at a time. Validate if the user has requested\n more than one strategy simultaneously.\n\n Validate if the smdistributed dict arg is syntactically correct.\n\n Additionally, perform strategy-specific validations.\n\n Args:\n instance_type (str): A string representing the type of training instance selected.\n framework_name (str): A string representing the name of framework selected.\n framework_version (str): A string representing the framework version selected.\n py_version (str): A string representing the python version selected.\n distribution (dict): A dictionary with information to enable distributed training.\n (Defaults to None if distributed training is not enabled.) For example:\n\n .. code:: python\n\n {\n \"smdistributed\": {\n \"dataparallel\": {\n \"enabled\": True\n }\n }\n }\n image_uri (str): A string representing a Docker image URI.\n\n Raises:\n ValueError: if distribution dictionary isn't correctly formatted or\n multiple strategies are requested simultaneously or\n an unsupported strategy is requested or\n strategy-specific inputs are incorrect/unsupported\n \"\"\"\n if \"smdistributed\" not in distribution:\n # Distribution strategy other than smdistributed is selected\n return\n if is_pipeline_variable(instance_type) or is_pipeline_variable(image_uri):\n # The instance_type is not available in compile time.\n # Rather, it's given in Pipeline execution time\n return\n\n # distribution contains smdistributed\n smdistributed = distribution[\"smdistributed\"]\n if not isinstance(smdistributed, dict):\n raise ValueError(\"smdistributed strategy requires a dictionary\")\n\n if len(smdistributed) > 1:\n # more than 1 smdistributed strategy requested by the user\n err_msg = (\n \"Cannot use more than 1 smdistributed strategy. 
\\n\"\n \"Choose one of the following supported strategies:\"\n f\"{SMDISTRIBUTED_SUPPORTED_STRATEGIES}\"\n )\n raise ValueError(err_msg)\n\n # validate if smdistributed strategy is supported\n # currently this for loop essentially checks for only 1 key\n for strategy in smdistributed:\n if strategy not in SMDISTRIBUTED_SUPPORTED_STRATEGIES:\n err_msg = (\n f\"Invalid smdistributed strategy provided: {strategy} \\n\"\n f\"Supported strategies: {SMDISTRIBUTED_SUPPORTED_STRATEGIES}\"\n )\n raise ValueError(err_msg)\n\n # smdataparallel-specific input validation\n if \"dataparallel\" in smdistributed:\n _validate_smdataparallel_args(\n instance_type, framework_name, framework_version, py_version, distribution, image_uri\n )\n\n\ndef _validate_smdataparallel_args(\n instance_type, framework_name, framework_version, py_version, distribution, image_uri=None\n):\n \"\"\"Check if request is using unsupported arguments.\n\n Validate if user specifies a supported instance type, framework version, and python\n version.\n\n Args:\n instance_type (str): A string representing the type of training instance selected. Ex: `ml.p3.16xlarge`\n framework_name (str): A string representing the name of framework selected. Ex: `tensorflow`\n framework_version (str): A string representing the framework version selected. Ex: `2.3.1`\n py_version (str): A string representing the python version selected. Ex: `py3`\n distribution (dict): A dictionary with information to enable distributed training.\n (Defaults to None if distributed training is not enabled.) Ex:\n\n .. code:: python\n\n {\n \"smdistributed\": {\n \"dataparallel\": {\n \"enabled\": True\n }\n }\n }\n image_uri (str): A string representing a Docker image URI.\n\n Raises:\n ValueError: if\n (`instance_type` is not in SM_DATAPARALLEL_SUPPORTED_INSTANCE_TYPES or\n `py_version` is not python3 or\n `framework_version` is not in SM_DATAPARALLEL_SUPPORTED_FRAMEWORK_VERSION\n \"\"\"\n smdataparallel_enabled = (\n distribution.get(\"smdistributed\").get(\"dataparallel\").get(\"enabled\", False)\n )\n\n if not smdataparallel_enabled:\n return\n\n is_instance_type_supported = instance_type in SM_DATAPARALLEL_SUPPORTED_INSTANCE_TYPES\n\n err_msg = \"\"\n\n if not is_instance_type_supported:\n # instance_type is required\n err_msg += (\n f\"Provided instance_type {instance_type} is not supported by smdataparallel.\\n\"\n \"Please specify one of the supported instance types:\"\n f\"{SM_DATAPARALLEL_SUPPORTED_INSTANCE_TYPES}\\n\"\n )\n\n if not image_uri:\n # ignore framework_version & py_version if image_uri is set\n # in case image_uri is not set, then both are mandatory\n supported = SM_DATAPARALLEL_SUPPORTED_FRAMEWORK_VERSIONS[framework_name]\n if framework_version not in supported:\n err_msg += (\n f\"Provided framework_version {framework_version} is not supported by\"\n \" smdataparallel.\\n\"\n f\"Please specify one of the supported framework versions: {supported} \\n\"\n )\n\n if \"py3\" not in py_version:\n err_msg += (\n f\"Provided py_version {py_version} is not supported by smdataparallel.\\n\"\n \"Please specify py_version>=py3\"\n )\n\n if err_msg:\n raise ValueError(err_msg)\n\n\ndef validate_distribution(\n distribution,\n instance_groups,\n framework_name,\n framework_version,\n py_version,\n image_uri,\n kwargs,\n):\n \"\"\"Check if distribution strategy is correctly invoked by the user.\n\n Currently, check for `dataparallel`, `modelparallel` and heterogeneous cluster set up.\n Validate if the user requested strategy is supported.\n\n Args:\n 
distribution (dict): A dictionary with information to enable distributed training.\n (Defaults to None if distributed training is not enabled.) For example:\n\n .. code:: python\n\n {\n \"smdistributed\": {\n \"dataparallel\": {\n \"enabled\": True\n }\n }\n }\n instance_groups ([InstanceGroup]): A list contains instance groups used for training.\n framework_name (str): A string representing the name of framework selected.\n framework_version (str): A string representing the framework version selected.\n py_version (str): A string representing the python version selected.\n image_uri (str): A string representing a Docker image URI.\n kwargs(dict): Additional kwargs passed to this function\n\n Returns:\n distribution(dict): updated dictionary with validated information\n to enable distributed training.\n\n Raises:\n ValueError: if distribution dictionary isn't correctly formatted or\n multiple strategies are requested simultaneously or\n an unsupported strategy is requested or\n strategy-specific inputs are incorrect/unsupported or\n heterogeneous cluster set up is incorrect\n \"\"\"\n train_instance_groups = distribution.get(\"instance_groups\", [])\n if instance_groups is None:\n if len(train_instance_groups) >= 1:\n # if estimator's instance_groups is not defined but\n # train_instance_groups are specified in distribution\n raise ValueError(\"Instance groups not specified in the estimator !\")\n else:\n if len(train_instance_groups) > len(instance_groups):\n # if train_instance_groups in distribution are more than estimator's instance_groups\n raise ValueError(\"Train instance groups oversubscribed !\")\n if len(instance_groups) == 1 and len(train_instance_groups) == 0:\n # if just one instance_group but it is not specified in distribution, we set it for user\n train_instance_groups = instance_groups\n elif len(instance_groups) > 1 and len(train_instance_groups) != 1:\n # currently we just support one train instance group\n raise ValueError(\"Distribution should only contain one instance group name !\")\n\n if len(train_instance_groups) != 0:\n # in this case, we are handling a heterogeneous cluster training job\n instance_group_names = []\n for train_instance_group in train_instance_groups:\n # in future version we will support multiple train_instance_groups, so use loop here\n if train_instance_group not in instance_groups:\n # check if train instance groups belongs to what user defined in estimator set up\n raise ValueError(\n f\"Invalid training instance group {train_instance_group.instance_group_name} !\"\n )\n instance_type = train_instance_group.instance_type\n validate_distribution_for_instance_type(\n instance_type=instance_type,\n distribution=distribution,\n )\n validate_smdistributed(\n instance_type=instance_type,\n framework_name=framework_name,\n framework_version=framework_version,\n py_version=py_version,\n distribution=distribution,\n image_uri=image_uri,\n )\n if framework_name and framework_name == \"pytorch\":\n # We need to validate only for PyTorch framework\n validate_pytorch_distribution(\n distribution=distribution,\n framework_name=framework_name,\n framework_version=framework_version,\n py_version=py_version,\n image_uri=image_uri,\n )\n validate_torch_distributed_distribution(\n instance_type=instance_type,\n distribution=distribution,\n framework_version=framework_version,\n py_version=py_version,\n image_uri=image_uri,\n entry_point=kwargs[\"entry_point\"],\n )\n warn_if_parameter_server_with_multi_gpu(\n training_instance_type=instance_type, 
distribution=distribution\n )\n # get instance group names\n instance_group_names.append(train_instance_group.instance_group_name)\n distribution[\"instance_groups\"] = instance_group_names\n else:\n # in this case, we are handling a normal training job (without heterogeneous cluster)\n instance_type = renamed_kwargs(\n \"train_instance_type\", \"instance_type\", kwargs.get(\"instance_type\"), kwargs\n )\n validate_distribution_for_instance_type(\n instance_type=instance_type,\n distribution=distribution,\n )\n validate_smdistributed(\n instance_type=instance_type,\n framework_name=framework_name,\n framework_version=framework_version,\n py_version=py_version,\n distribution=distribution,\n image_uri=image_uri,\n )\n if framework_name and framework_name == \"pytorch\":\n # We need to validate only for PyTorch framework\n validate_pytorch_distribution(\n distribution=distribution,\n framework_name=framework_name,\n framework_version=framework_version,\n py_version=py_version,\n image_uri=image_uri,\n )\n validate_torch_distributed_distribution(\n instance_type=instance_type,\n distribution=distribution,\n framework_version=framework_version,\n py_version=py_version,\n image_uri=image_uri,\n entry_point=kwargs[\"entry_point\"],\n )\n warn_if_parameter_server_with_multi_gpu(\n training_instance_type=instance_type, distribution=distribution\n )\n return distribution\n\n\ndef validate_distribution_for_instance_type(instance_type, distribution):\n \"\"\"Check if the provided distribution strategy is supported for the instance_type\n\n Args:\n instance_type (str): A string representing the type of training instance selected.\n distribution (dict): A dictionary with information to enable distributed training.\n \"\"\"\n err_msg = \"\"\n if isinstance(instance_type, str):\n match = re.match(r\"^ml[\\._]([a-z\\d]+)\\.?\\w*$\", instance_type)\n if match and match[1].startswith(\"trn\"):\n keys = list(distribution.keys())\n if len(keys) == 0:\n return\n if len(keys) == 1:\n distribution_strategy = keys[0]\n if distribution_strategy != \"torch_distributed\":\n err_msg += (\n f\"Provided distribution strategy {distribution_strategy} is not supported\"\n \" for Trainium instances.\\n\"\n \"Please specify one of the following supported distribution strategies:\"\n f\" {TRAINIUM_SUPPORTED_DISTRIBUTION_STRATEGIES} \\n\"\n )\n elif len(keys) > 1:\n err_msg += (\n \"Multiple distribution strategies are not supported for Trainium instances.\\n\"\n \"Please specify one of the following supported distribution strategies:\"\n f\" {TRAINIUM_SUPPORTED_DISTRIBUTION_STRATEGIES} \"\n )\n\n if err_msg:\n raise ValueError(err_msg)\n\n\ndef validate_pytorch_distribution(\n distribution, framework_name, framework_version, py_version, image_uri\n):\n \"\"\"Check if pytorch distribution strategy is correctly invoked by the user.\n\n Args:\n distribution (dict): A dictionary with information to enable distributed training.\n (Defaults to None if distributed training is not enabled.) For example:\n\n .. 
code:: python\n\n {\n \"pytorchddp\": {\n \"enabled\": True\n }\n }\n framework_name (str): A string representing the name of framework selected.\n framework_version (str): A string representing the framework version selected.\n py_version (str): A string representing the python version selected.\n image_uri (str): A string representing a Docker image URI.\n\n Raises:\n ValueError: if\n `py_version` is not python3 or\n `framework_version` is not in PYTORCHDDP_SUPPORTED_FRAMEWORK_VERSIONS\n \"\"\"\n if framework_name and framework_name != \"pytorch\":\n # We need to validate only for PyTorch framework\n return\n\n pytorch_ddp_enabled = False\n if \"pytorchddp\" in distribution:\n pytorch_ddp_enabled = distribution.get(\"pytorchddp\").get(\"enabled\", False)\n if not pytorch_ddp_enabled:\n # Distribution strategy other than pytorchddp is selected\n return\n\n err_msg = \"\"\n if not image_uri:\n # ignore framework_version and py_version if image_uri is set\n # in case image_uri is not set, then both are mandatory\n if framework_version not in PYTORCHDDP_SUPPORTED_FRAMEWORK_VERSIONS:\n err_msg += (\n f\"Provided framework_version {framework_version} is not supported by\"\n \" pytorchddp.\\n\"\n \"Please specify one of the supported framework versions:\"\n f\" {PYTORCHDDP_SUPPORTED_FRAMEWORK_VERSIONS} \\n\"\n )\n if \"py3\" not in py_version:\n err_msg += (\n f\"Provided py_version {py_version} is not supported by pytorchddp.\\n\"\n \"Please specify py_version>=py3\"\n )\n if err_msg:\n raise ValueError(err_msg)\n\n\ndef validate_torch_distributed_distribution(\n instance_type,\n distribution,\n framework_version,\n py_version,\n image_uri,\n entry_point,\n):\n \"\"\"Check if torch_distributed distribution strategy is correctly invoked by the user.\n\n Args:\n instance_type (str): A string representing the type of training instance selected.\n distribution (dict): A dictionary with information to enable distributed training.\n (Defaults to None if distributed training is not enabled.) For example:\n\n .. 
code:: python\n\n {\n \"torch_distributed\": {\n \"enabled\": True\n }\n }\n framework_version (str): A string representing the framework version selected.\n py_version (str): A string representing the python version selected.\n image_uri (str): A string representing a Docker image URI.\n entry_point (str or PipelineVariable): The absolute or relative path to the local Python\n source file that should be executed as the entry point to\n training.\n\n Raises:\n ValueError: if\n `py_version` is not python3 or\n `framework_version` is not in TORCH_DISTRIBUTED_SUPPORTED_FRAMEWORK_VERSIONS\n \"\"\"\n\n torch_distributed_enabled = False\n if \"torch_distributed\" in distribution:\n torch_distributed_enabled = distribution.get(\"torch_distributed\").get(\"enabled\", False)\n if not torch_distributed_enabled:\n # Distribution strategy other than torch_distributed is selected\n return\n\n err_msg = \"\"\n if not image_uri:\n # ignore framework_version and py_version if image_uri is set\n # in case image_uri is not set, then both are mandatory\n if framework_version not in TORCH_DISTRIBUTED_SUPPORTED_FRAMEWORK_VERSIONS:\n err_msg += (\n f\"Provided framework_version {framework_version} is not supported by\"\n \" torch_distributed.\\n\"\n \"Please specify one of the supported framework versions:\"\n f\" {TORCH_DISTRIBUTED_SUPPORTED_FRAMEWORK_VERSIONS} \\n\"\n )\n if \"py3\" not in py_version:\n err_msg += (\n f\"Provided py_version {py_version} is not supported by torch_distributed.\\n\"\n \"Please specify py_version>=py3\"\n )\n\n # Check instance compatibility\n match = re.match(r\"^ml[\\._]([a-z\\d]+)\\.?\\w*$\", instance_type)\n if match:\n if not match[1].startswith(\"trn\"):\n err_msg += (\n \"torch_distributed is currently supported only for trainium instances.\\n\"\n \" Please refer https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html#distributed-pytorch-training \\n\" # noqa E501 # pylint: disable=c0301\n \"for information regarding distributed training on non-trainium instances\"\n )\n\n # Check entry point type\n if not entry_point.endswith(\".py\"):\n err_msg += (\n \"Unsupported entry point type for the distribution torch_distributed.\\n\"\n \"Only python programs (*.py) are supported.\"\n )\n\n if err_msg:\n raise ValueError(err_msg)\n\n\ndef python_deprecation_warning(framework, latest_supported_version):\n \"\"\"Placeholder docstring\"\"\"\n return PYTHON_2_DEPRECATION_WARNING.format(\n framework=framework, latest_supported_version=latest_supported_version\n )\n\n\ndef _region_supports_debugger(region_name):\n \"\"\"Returns boolean indicating whether the region supports Amazon SageMaker Debugger.\n\n Args:\n region_name (str): Name of the region to check against.\n\n Returns:\n bool: Whether or not the region supports Amazon SageMaker Debugger.\n\n \"\"\"\n return region_name.lower() not in DEBUGGER_UNSUPPORTED_REGIONS\n\n\ndef _region_supports_profiler(region_name):\n \"\"\"Returns bool indicating whether region supports Amazon SageMaker Debugger profiling feature.\n\n Args:\n region_name (str): Name of the region to check against.\n\n Returns:\n bool: Whether or not the region supports Amazon SageMaker Debugger profiling feature.\n\n \"\"\"\n return region_name.lower() not in PROFILER_UNSUPPORTED_REGIONS\n\n\ndef validate_version_or_image_args(framework_version, py_version, image_uri):\n \"\"\"Checks if version or image arguments are specified.\n\n Validates framework and model arguments to enforce version or image specification.\n\n Args:\n 
framework_version (str): The version of the framework.\n py_version (str): The version of Python.\n image_uri (str): The URI of the image.\n\n Raises:\n ValueError: if `image_uri` is None and either `framework_version` or `py_version` is\n None.\n \"\"\"\n if (framework_version is None or py_version is None) and image_uri is None:\n raise ValueError(\n \"framework_version or py_version was None, yet image_uri was also None. \"\n \"Either specify both framework_version and py_version, or specify image_uri.\"\n )\n\n\ndef create_image_uri(\n region,\n framework,\n instance_type,\n framework_version,\n py_version=None,\n account=None, # pylint: disable=W0613\n accelerator_type=None,\n optimized_families=None, # pylint: disable=W0613\n):\n \"\"\"Deprecated method. Please use sagemaker.image_uris.retrieve().\n\n Args:\n region (str): AWS region where the image is uploaded.\n framework (str): framework used by the image.\n instance_type (str): SageMaker instance type. Used to determine device\n type (cpu/gpu/family-specific optimized).\n framework_version (str): The version of the framework.\n py_version (str): Optional. Python version. If specified, should be one\n of 'py2' or 'py3'. If not specified, image uri will not include a\n python component.\n account (str): AWS account that contains the image. (default:\n '520713654638')\n accelerator_type (str): SageMaker Elastic Inference accelerator type.\n optimized_families (str): Deprecated. A no-op argument.\n\n Returns:\n the image uri\n \"\"\"\n renamed_warning(\"The method create_image_uri\")\n return sagemaker.image_uris.retrieve(\n framework=framework,\n region=region,\n version=framework_version,\n py_version=py_version,\n instance_type=instance_type,\n accelerator_type=accelerator_type,\n )\n", "src/sagemaker/image_uri_config/pytorch.json": "{\n \"eia\": {\n \"processors\": [\n \"cpu\"\n ],\n \"version_aliases\": {\n \"1.3\": \"1.3.1\",\n \"1.5\": \"1.5.1\"\n },\n \"versions\": {\n \"1.3.1\": {\n \"py_versions\": [\n \"py3\"\n ],\n \"registries\": {\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-inference-eia\"\n },\n \"1.5.1\": {\n \"py_versions\": [\n \"py3\"\n ],\n \"registries\": {\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"eu-west-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-inference-eia\"\n }\n }\n },\n \"inference\": {\n \"processors\": [\n \"cpu\",\n \"gpu\"\n ],\n \"version_aliases\": {\n \"0.4\": \"0.4.0\",\n \"1.0\": \"1.0.0\",\n \"1.1\": \"1.1.0\",\n \"1.2\": \"1.2.0\",\n \"1.3\": \"1.3.1\",\n \"1.4\": \"1.4.0\",\n \"1.5\": \"1.5.0\",\n \"1.6\": \"1.6.0\",\n \"1.7\": \"1.7.1\",\n \"1.8\": \"1.8.1\",\n \"1.9\": \"1.9.1\",\n \"1.10\": \"1.10.2\",\n \"1.11\": \"1.11.0\",\n \"1.12\": \"1.12.0\"\n },\n \"versions\": {\n \"0.4.0\": {\n \"py_versions\": [\n 
\"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-pytorch\"\n },\n \"1.0.0\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-pytorch\"\n },\n \"1.1.0\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-pytorch\"\n },\n \"1.2.0\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n 
\"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-inference\"\n },\n \"1.3.1\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-inference\"\n },\n \"1.4.0\": {\n \"py_versions\": [\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-inference\"\n },\n \"1.5.0\": {\n \"py_versions\": [\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n 
\"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-inference\"\n },\n \"1.6.0\": {\n \"py_versions\": [\n \"py3\",\n \"py36\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-inference\"\n },\n \"1.7.1\": {\n \"py_versions\": [\n \"py3\",\n \"py36\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-inference\"\n },\n \"1.8.0\": {\n \"py_versions\": [\n \"py3\",\n \"py36\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-inference\"\n 
},\n \"1.8.1\": {\n \"py_versions\": [\n \"py3\",\n \"py36\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-inference\"\n },\n \"1.9.0\": {\n \"py_versions\": [\n \"py38\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-inference\"\n },\n \"1.9.1\": {\n \"py_versions\": [\n \"py38\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-inference\"\n },\n \"1.10.0\": {\n \"py_versions\": [\n \"py38\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": 
\"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-inference\"\n },\n \"1.10.2\": {\n \"py_versions\": [\n \"py38\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-inference\"\n },\n \"1.11.0\": {\n \"py_versions\": [\n \"py38\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-inference\"\n },\n \"1.12.0\": {\n \"py_versions\": [\n \"py38\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n 
\"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-inference\"\n }\n }\n },\n \"training\": {\n \"processors\": [\n \"cpu\",\n \"gpu\"\n ],\n \"version_aliases\": {\n \"0.4\": \"0.4.0\",\n \"1.0\": \"1.0.0\",\n \"1.1\": \"1.1.0\",\n \"1.2\": \"1.2.0\",\n \"1.3\": \"1.3.1\",\n \"1.4\": \"1.4.0\",\n \"1.5\": \"1.5.0\",\n \"1.6\": \"1.6.0\",\n \"1.7\": \"1.7.1\",\n \"1.8\": \"1.8.1\",\n \"1.9\": \"1.9.1\",\n \"1.10\": \"1.10.2\",\n \"1.11\": \"1.11.0\",\n \"1.12\": \"1.12.0\"\n },\n \"versions\": {\n \"0.4.0\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-pytorch\"\n },\n \"1.0.0\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-pytorch\"\n },\n \"1.1.0\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n 
\"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-pytorch\"\n },\n \"1.2.0\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-training\"\n },\n \"1.3.1\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-training\"\n },\n \"1.4.0\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": 
\"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-training\"\n },\n \"1.5.0\": {\n \"py_versions\": [\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-training\"\n },\n \"1.6.0\": {\n \"py_versions\": [\n \"py3\",\n \"py36\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-training\"\n },\n \"1.7.1\": {\n \"py_versions\": [\n \"py3\",\n \"py36\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-training\"\n },\n \"1.8.0\": {\n \"py_versions\": [\n \"py3\",\n \"py36\"\n ],\n \"registries\": {\n \"af-south-1\": 
\"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-training\"\n },\n \"1.8.1\": {\n \"py_versions\": [\n \"py3\",\n \"py36\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-training\"\n },\n \"1.9.0\": {\n \"py_versions\": [\n \"py38\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-training\"\n },\n \"1.9.1\": {\n \"py_versions\": [\n \"py38\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": 
\"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-training\"\n },\n \"1.10.0\": {\n \"py_versions\": [\n \"py38\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-training\"\n },\n \"1.10.2\": {\n \"py_versions\": [\n \"py38\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-training\"\n },\n \"1.11.0\": {\n \"py_versions\": [\n \"py38\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n 
\"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-training\"\n },\n \"1.12.0\": {\n \"py_versions\": [\n \"py38\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"pytorch-training\"\n }\n }\n }\n}\n", "src/sagemaker/image_uri_config/tensorflow.json": "{\n \"eia\": {\n \"processors\": [\n \"cpu\"\n ],\n \"version_aliases\": {\n \"1.10\": \"1.10.0\",\n \"1.11\": \"1.11.0\",\n \"1.12\": \"1.12.0\",\n \"1.13\": \"1.13.0\",\n \"1.14\": \"1.14.0\",\n \"1.15\": \"1.15.0\",\n \"2.0\": \"2.0.0\",\n \"2.3\": \"2.3.0\"\n },\n \"versions\": {\n \"1.10.0\": {\n \"py_versions\": [\n \"py2\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow-eia\"\n },\n \"1.11.0\": {\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": 
\"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow-serving-eia\"\n },\n \"1.12.0\": {\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow-serving-eia\"\n },\n \"1.13.0\": {\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow-serving-eia\"\n },\n \"1.14.0\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference-eia\"\n },\n \"1.15.0\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n 
\"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference-eia\"\n },\n \"2.0.0\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference-eia\"\n },\n \"2.3.0\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference-eia\"\n }\n }\n },\n \"inference\": {\n \"processors\": [\n \"cpu\",\n \"gpu\"\n ],\n \"version_aliases\": {\n \"1.10\": \"1.10.0\",\n \"1.11\": \"1.11.0\",\n \"1.12\": \"1.12.0\",\n \"1.13\": \"1.13.0\",\n \"1.14\": \"1.14.0\",\n \"1.15\": \"1.15.5\",\n \"1.4\": \"1.4.1\",\n \"1.5\": \"1.5.0\",\n \"1.6\": \"1.6.0\",\n \"1.7\": \"1.7.0\",\n \"1.8\": \"1.8.0\",\n \"1.9\": \"1.9.0\",\n \"2.0\": \"2.0.4\",\n \"2.1\": \"2.1.3\",\n \"2.2\": \"2.2.2\",\n \"2.3\": \"2.3.2\",\n \"2.4\": \"2.4.3\",\n \"2.5\": \"2.5.1\",\n \"2.6\": \"2.6.3\",\n \"2.7\": \"2.7.0\",\n \"2.8\": \"2.8.0\"\n 
},\n \"versions\": {\n \"1.10.0\": {\n \"py_versions\": [\n \"py2\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow\"\n },\n \"1.11.0\": {\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow-serving\"\n },\n \"1.12.0\": {\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow-serving\"\n },\n \"1.13.0\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": 
\"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"1.14.0\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"1.15.0\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"1.15.2\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": 
\"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"1.15.3\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"1.15.4\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"1.15.5\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"1.4.1\": {\n \"py_versions\": [\n \"py2\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": 
\"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow\"\n },\n \"1.5.0\": {\n \"py_versions\": [\n \"py2\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow\"\n },\n \"1.6.0\": {\n \"py_versions\": [\n \"py2\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow\"\n },\n \"1.7.0\": {\n \"py_versions\": [\n \"py2\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n 
\"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow\"\n },\n \"1.8.0\": {\n \"py_versions\": [\n \"py2\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow\"\n },\n \"1.9.0\": {\n \"py_versions\": [\n \"py2\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow\"\n },\n \"2.0.0\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.0.1\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": 
\"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.0.2\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.0.3\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.0.4\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n 
\"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.1.0\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.1.1\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.1.2\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.1.3\": {\n 
\"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.2.0\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.2.1\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.2.2\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": 
\"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.3.0\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.3.1\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.3.2\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": 
\"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.4.1\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.4.3\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.5.1\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.6.0\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": 
\"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.6.3\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.7.0\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n },\n \"2.8.0\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": 
\"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-inference\"\n }\n }\n },\n \"training\": {\n \"processors\": [\n \"cpu\",\n \"gpu\"\n ],\n \"version_aliases\": {\n \"1.10\": \"1.10.0\",\n \"1.11\": \"1.11.0\",\n \"1.12\": \"1.12.0\",\n \"1.13\": \"1.13.1\",\n \"1.14\": \"1.14.0\",\n \"1.15\": \"1.15.5\",\n \"1.4\": \"1.4.1\",\n \"1.5\": \"1.5.0\",\n \"1.6\": \"1.6.0\",\n \"1.7\": \"1.7.0\",\n \"1.8\": \"1.8.0\",\n \"1.9\": \"1.9.0\",\n \"2.0\": \"2.0.4\",\n \"2.1\": \"2.1.3\",\n \"2.2\": \"2.2.2\",\n \"2.3\": \"2.3.2\",\n \"2.4\": \"2.4.3\",\n \"2.5\": \"2.5.1\",\n \"2.6\": \"2.6.3\",\n \"2.7\": \"2.7.1\",\n \"2.8\": \"2.8.0\",\n \"2.9\": \"2.9.1\"\n },\n \"versions\": {\n \"1.10.0\": {\n \"py_versions\": [\n \"py2\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow\"\n },\n \"1.11.0\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow-scriptmode\"\n },\n \"1.12.0\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n 
\"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow-scriptmode\"\n },\n \"1.13.1\": {\n \"py2\": {\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow-scriptmode\"\n },\n \"py3\": {\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n }\n },\n \"1.14.0\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": 
\"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"1.15.0\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"1.15.2\": {\n \"py_versions\": [\n \"py2\",\n \"py3\",\n \"py37\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"1.15.3\": {\n \"py_versions\": [\n \"py2\",\n \"py3\",\n \"py37\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": 
\"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"1.15.4\": {\n \"py_versions\": [\n \"py3\",\n \"py36\",\n \"py37\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"1.15.5\": {\n \"py_versions\": [\n \"py3\",\n \"py36\",\n \"py37\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"1.4.1\": {\n \"py_versions\": [\n \"py2\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow\"\n },\n \"1.5.0\": {\n \"py_versions\": [\n \"py2\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": 
\"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow\"\n },\n \"1.6.0\": {\n \"py_versions\": [\n \"py2\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow\"\n },\n \"1.7.0\": {\n \"py_versions\": [\n \"py2\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow\"\n },\n \"1.8.0\": {\n \"py_versions\": [\n \"py2\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": 
\"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow\"\n },\n \"1.9.0\": {\n \"py_versions\": [\n \"py2\"\n ],\n \"registries\": {\n \"af-south-1\": \"313743910680\",\n \"ap-east-1\": \"057415533634\",\n \"ap-northeast-1\": \"520713654638\",\n \"ap-northeast-2\": \"520713654638\",\n \"ap-south-1\": \"520713654638\",\n \"ap-southeast-1\": \"520713654638\",\n \"ap-southeast-2\": \"520713654638\",\n \"ca-central-1\": \"520713654638\",\n \"cn-north-1\": \"422961961927\",\n \"cn-northwest-1\": \"423003514399\",\n \"eu-central-1\": \"520713654638\",\n \"eu-north-1\": \"520713654638\",\n \"eu-south-1\": \"048378556238\",\n \"eu-west-1\": \"520713654638\",\n \"eu-west-2\": \"520713654638\",\n \"eu-west-3\": \"520713654638\",\n \"me-south-1\": \"724002660598\",\n \"sa-east-1\": \"520713654638\",\n \"us-east-1\": \"520713654638\",\n \"us-east-2\": \"520713654638\",\n \"us-gov-west-1\": \"246785580436\",\n \"us-iso-east-1\": \"744548109606\",\n \"us-west-1\": \"520713654638\",\n \"us-west-2\": \"520713654638\"\n },\n \"repository\": \"sagemaker-tensorflow\"\n },\n \"2.0.0\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.0.1\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.0.2\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n 
\"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.0.3\": {\n \"py_versions\": [\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.0.4\": {\n \"py_versions\": [\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.1.0\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n 
\"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.1.1\": {\n \"py_versions\": [\n \"py2\",\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.1.2\": {\n \"py_versions\": [\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.1.3\": {\n \"py_versions\": [\n \"py3\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n 
\"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.2.0\": {\n \"py_versions\": [\n \"py37\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.2.1\": {\n \"py_versions\": [\n \"py37\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.2.2\": {\n \"py_versions\": [\n \"py37\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n 
\"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.3.0\": {\n \"py_versions\": [\n \"py37\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.3.1\": {\n \"py_versions\": [\n \"py37\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.3.2\": {\n \"py_versions\": [\n \"py37\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.4.1\": {\n \"py_versions\": [\n \"py37\"\n ],\n \"registries\": {\n \"af-south-1\": 
\"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.4.3\": {\n \"py_versions\": [\n \"py37\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.5.0\": {\n \"py_versions\": [\n \"py37\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.5.1\": {\n \"py_versions\": [\n \"py37\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n 
\"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.6.0\": {\n \"py_versions\": [\n \"py38\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.6.2\": {\n \"py_versions\": [\n \"py38\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.6.3\": {\n \"py_versions\": [\n \"py38\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": 
\"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.7.1\": {\n \"py_versions\": [\n \"py38\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.8.0\": {\n \"py_versions\": [\n \"py39\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": \"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n },\n \"2.9.1\": {\n \"py_versions\": [\n \"py39\"\n ],\n \"registries\": {\n \"af-south-1\": \"626614931356\",\n \"ap-east-1\": \"871362719292\",\n \"ap-northeast-1\": \"763104351884\",\n \"ap-northeast-2\": \"763104351884\",\n \"ap-northeast-3\": \"364406365360\",\n \"ap-south-1\": \"763104351884\",\n \"ap-southeast-1\": \"763104351884\",\n \"ap-southeast-2\": \"763104351884\",\n \"ap-southeast-3\": \"907027046896\",\n \"ca-central-1\": \"763104351884\",\n \"cn-north-1\": \"727897471807\",\n \"cn-northwest-1\": \"727897471807\",\n \"eu-central-1\": \"763104351884\",\n \"eu-north-1\": \"763104351884\",\n \"eu-south-1\": \"692866216735\",\n \"eu-west-1\": \"763104351884\",\n \"eu-west-2\": \"763104351884\",\n \"eu-west-3\": \"763104351884\",\n \"me-south-1\": \"217643126080\",\n \"sa-east-1\": \"763104351884\",\n \"us-east-1\": \"763104351884\",\n \"us-east-2\": \"763104351884\",\n \"us-gov-west-1\": \"442386744353\",\n \"us-iso-east-1\": 
\"886529160074\",\n \"us-west-1\": \"763104351884\",\n \"us-west-2\": \"763104351884\"\n },\n \"repository\": \"tensorflow-training\"\n }\n }\n }\n}\n", "src/sagemaker/image_uris.py": "# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\"\"\"Functions for generating ECR image URIs for pre-built SageMaker Docker images.\"\"\"\nfrom __future__ import absolute_import\n\nimport json\nimport logging\nimport os\nimport re\nfrom typing import Optional\n\nfrom sagemaker import utils\nfrom sagemaker.jumpstart.utils import is_jumpstart_model_input\nfrom sagemaker.spark import defaults\nfrom sagemaker.jumpstart import artifacts\nfrom sagemaker.workflow import is_pipeline_variable\nfrom sagemaker.workflow.utilities import override_pipeline_parameter_var\n\nlogger = logging.getLogger(__name__)\n\nECR_URI_TEMPLATE = \"{registry}.dkr.{hostname}/{repository}\"\nHUGGING_FACE_FRAMEWORK = \"huggingface\"\n\n\n@override_pipeline_parameter_var\ndef retrieve(\n framework,\n region,\n version=None,\n py_version=None,\n instance_type=None,\n accelerator_type=None,\n image_scope=None,\n container_version=None,\n distribution=None,\n base_framework_version=None,\n training_compiler_config=None,\n model_id=None,\n model_version=None,\n tolerate_vulnerable_model=False,\n tolerate_deprecated_model=False,\n sdk_version=None,\n inference_tool=None,\n serverless_inference_config=None,\n) -> str:\n \"\"\"Retrieves the ECR URI for the Docker image matching the given arguments.\n\n Ideally this function should not be called directly, rather it should be called from the\n fit() function inside framework estimator.\n\n Args:\n framework (str): The name of the framework or algorithm.\n region (str): The AWS region.\n version (str): The framework or algorithm version. This is required if there is\n more than one supported version for the given framework or algorithm.\n py_version (str): The Python version. This is required if there is\n more than one supported Python version for the given framework version.\n instance_type (str): The SageMaker instance type. For supported types, see\n https://aws.amazon.com/sagemaker/pricing. This is required if\n there are different images for different processor types.\n accelerator_type (str): Elastic Inference accelerator type. For more, see\n https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html.\n image_scope (str): The image type, i.e. what it is used for.\n Valid values: \"training\", \"inference\", \"eia\". 
If ``accelerator_type`` is set,\n ``image_scope`` is ignored.\n container_version (str): the version of docker image.\n Ideally the value of parameter should be created inside the framework.\n For custom use, see the list of supported container versions:\n https://github.com/aws/deep-learning-containers/blob/master/available_images.md\n (default: None).\n distribution (dict): A dictionary with information on how to run distributed training\n training_compiler_config (:class:`~sagemaker.training_compiler.TrainingCompilerConfig`):\n A configuration class for the SageMaker Training Compiler\n (default: None).\n model_id (str): The JumpStart model ID for which to retrieve the image URI\n (default: None).\n model_version (str): The version of the JumpStart model for which to retrieve the\n image URI (default: None).\n tolerate_vulnerable_model (bool): ``True`` if vulnerable versions of model specifications\n should be tolerated without an exception raised. If ``False``, raises an exception if\n the script used by this version of the model has dependencies with known security\n vulnerabilities. (Default: False).\n tolerate_deprecated_model (bool): True if deprecated versions of model specifications\n should be tolerated without an exception raised. If False, raises an exception\n if the version of the model is deprecated. (Default: False).\n sdk_version (str): the version of python-sdk that will be used in the image retrieval.\n (default: None).\n inference_tool (str): the tool that will be used to aid in the inference.\n Valid values: \"neuron, None\"\n (default: None).\n serverless_inference_config (sagemaker.serverless.ServerlessInferenceConfig):\n Specifies configuration related to serverless endpoint. Instance type is\n not provided in serverless inference. 
So this is used to determine processor type.\n\n Returns:\n str: The ECR URI for the corresponding SageMaker Docker image.\n\n Raises:\n NotImplementedError: If the scope is not supported.\n ValueError: If the combination of arguments specified is not supported or\n any PipelineVariable object is passed in.\n VulnerableJumpStartModelError: If any of the dependencies required by the script have\n known security vulnerabilities.\n DeprecatedJumpStartModelError: If the version of the model is deprecated.\n \"\"\"\n args = dict(locals())\n for name, val in args.items():\n if is_pipeline_variable(val):\n raise ValueError(\n \"When retrieving the image_uri, the argument %s should not be a pipeline variable \"\n \"(%s) since pipeline variables are only interpreted in the pipeline execution time.\"\n % (name, type(val))\n )\n\n if is_jumpstart_model_input(model_id, model_version):\n return artifacts._retrieve_image_uri(\n model_id,\n model_version,\n image_scope,\n framework,\n region,\n version,\n py_version,\n instance_type,\n accelerator_type,\n container_version,\n distribution,\n base_framework_version,\n training_compiler_config,\n tolerate_vulnerable_model,\n tolerate_deprecated_model,\n )\n\n if training_compiler_config and (framework == HUGGING_FACE_FRAMEWORK):\n config = _config_for_framework_and_scope(\n framework + \"-training-compiler\", image_scope, accelerator_type\n )\n else:\n _framework = framework\n if framework == HUGGING_FACE_FRAMEWORK:\n inference_tool = _get_inference_tool(inference_tool, instance_type)\n if inference_tool == \"neuron\":\n _framework = f\"{framework}-{inference_tool}\"\n config = _config_for_framework_and_scope(_framework, image_scope, accelerator_type)\n\n original_version = version\n version = _validate_version_and_set_if_needed(version, config, framework)\n version_config = config[\"versions\"][_version_for_config(version, config)]\n\n if framework == HUGGING_FACE_FRAMEWORK:\n if version_config.get(\"version_aliases\"):\n full_base_framework_version = version_config[\"version_aliases\"].get(\n base_framework_version, base_framework_version\n )\n _validate_arg(full_base_framework_version, list(version_config.keys()), \"base framework\")\n version_config = version_config.get(full_base_framework_version)\n\n py_version = _validate_py_version_and_set_if_needed(py_version, version_config, framework)\n version_config = version_config.get(py_version) or version_config\n registry = _registry_from_region(region, version_config[\"registries\"])\n hostname = utils._botocore_resolver().construct_endpoint(\"ecr\", region)[\"hostname\"]\n\n repo = version_config[\"repository\"]\n\n processor = _processor(\n instance_type,\n config.get(\"processors\") or version_config.get(\"processors\"),\n serverless_inference_config,\n )\n\n # if container version is available in .json file, utilize that\n if version_config.get(\"container_version\"):\n container_version = version_config[\"container_version\"][processor]\n\n if framework == HUGGING_FACE_FRAMEWORK:\n pt_or_tf_version = (\n re.compile(\"^(pytorch|tensorflow)(.*)$\").match(base_framework_version).group(2)\n )\n _version = original_version\n\n if repo in [\n \"huggingface-pytorch-trcomp-training\",\n \"huggingface-tensorflow-trcomp-training\",\n ]:\n _version = version\n if repo in [\"huggingface-pytorch-inference-neuron\"]:\n if not sdk_version:\n sdk_version = _get_latest_versions(version_config[\"sdk_versions\"])\n container_version = sdk_version + \"-\" + container_version\n if 
config.get(\"version_aliases\").get(original_version):\n _version = config.get(\"version_aliases\")[original_version]\n if (\n config.get(\"versions\", {})\n .get(_version, {})\n .get(\"version_aliases\", {})\n .get(base_framework_version, {})\n ):\n _base_framework_version = config.get(\"versions\")[_version][\"version_aliases\"][\n base_framework_version\n ]\n pt_or_tf_version = (\n re.compile(\"^(pytorch|tensorflow)(.*)$\").match(_base_framework_version).group(2)\n )\n\n tag_prefix = f\"{pt_or_tf_version}-transformers{_version}\"\n else:\n tag_prefix = version_config.get(\"tag_prefix\", version)\n\n tag = _format_tag(tag_prefix, processor, py_version, container_version, inference_tool)\n\n if instance_type is not None and _should_auto_select_container_version(\n instance_type, distribution\n ):\n container_versions = {\n \"tensorflow-2.3-gpu-py37\": \"cu110-ubuntu18.04-v3\",\n \"tensorflow-2.3.1-gpu-py37\": \"cu110-ubuntu18.04\",\n \"tensorflow-2.3.2-gpu-py37\": \"cu110-ubuntu18.04\",\n \"tensorflow-1.15-gpu-py37\": \"cu110-ubuntu18.04-v8\",\n \"tensorflow-1.15.4-gpu-py37\": \"cu110-ubuntu18.04\",\n \"tensorflow-1.15.5-gpu-py37\": \"cu110-ubuntu18.04\",\n \"mxnet-1.8-gpu-py37\": \"cu110-ubuntu16.04-v1\",\n \"mxnet-1.8.0-gpu-py37\": \"cu110-ubuntu16.04\",\n \"pytorch-1.6-gpu-py36\": \"cu110-ubuntu18.04-v3\",\n \"pytorch-1.6.0-gpu-py36\": \"cu110-ubuntu18.04\",\n \"pytorch-1.6-gpu-py3\": \"cu110-ubuntu18.04-v3\",\n \"pytorch-1.6.0-gpu-py3\": \"cu110-ubuntu18.04\",\n }\n key = \"-\".join([framework, tag])\n if key in container_versions:\n tag = \"-\".join([tag, container_versions[key]])\n\n if tag:\n repo += \":{}\".format(tag)\n\n return ECR_URI_TEMPLATE.format(registry=registry, hostname=hostname, repository=repo)\n\n\ndef _config_for_framework_and_scope(framework, image_scope, accelerator_type=None):\n \"\"\"Loads the JSON config for the given framework and image scope.\"\"\"\n config = config_for_framework(framework)\n\n if accelerator_type:\n _validate_accelerator_type(accelerator_type)\n\n if image_scope not in (\"eia\", \"inference\"):\n logger.warning(\n \"Elastic inference is for inference only. Ignoring image scope: %s.\", image_scope\n )\n image_scope = \"eia\"\n\n available_scopes = config.get(\"scope\", config.keys())\n\n if len(available_scopes) == 1:\n if image_scope and image_scope != list(available_scopes)[0]:\n logger.warning(\n \"Defaulting to only supported image scope: %s. Ignoring image scope: %s.\",\n available_scopes[0],\n image_scope,\n )\n image_scope = list(available_scopes)[0]\n\n if not image_scope and \"scope\" in config and set(available_scopes) == {\"training\", \"inference\"}:\n logger.info(\n \"Same images used for training and inference. 
Defaulting to image scope: %s.\",\n available_scopes[0],\n )\n image_scope = available_scopes[0]\n\n _validate_arg(image_scope, available_scopes, \"image scope\")\n return config if \"scope\" in config else config[image_scope]\n\n\ndef config_for_framework(framework):\n \"\"\"Loads the JSON config for the given framework.\"\"\"\n fname = os.path.join(os.path.dirname(__file__), \"image_uri_config\", \"{}.json\".format(framework))\n with open(fname) as f:\n return json.load(f)\n\n\ndef _get_inference_tool(inference_tool, instance_type):\n \"\"\"Extract the inference tool name from instance type.\"\"\"\n if not inference_tool and instance_type:\n match = re.match(r\"^ml[\\._]([a-z\\d]+)\\.?\\w*$\", instance_type)\n if match and match[1].startswith(\"inf\"):\n return \"neuron\"\n return inference_tool\n\n\ndef _get_latest_versions(list_of_versions):\n \"\"\"Extract the latest version from the input list of available versions.\"\"\"\n return sorted(list_of_versions, reverse=True)[0]\n\n\ndef _validate_accelerator_type(accelerator_type):\n \"\"\"Raises a ``ValueError`` if ``accelerator_type`` is invalid.\"\"\"\n if not accelerator_type.startswith(\"ml.eia\") and accelerator_type != \"local_sagemaker_notebook\":\n raise ValueError(\n \"Invalid SageMaker Elastic Inference accelerator type: {}. \"\n \"See https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html\".format(accelerator_type)\n )\n\n\ndef _validate_version_and_set_if_needed(version, config, framework):\n \"\"\"Checks if the framework/algorithm version is one of the supported versions.\"\"\"\n available_versions = list(config[\"versions\"].keys())\n aliased_versions = list(config.get(\"version_aliases\", {}).keys())\n\n if len(available_versions) == 1 and version not in aliased_versions:\n log_message = \"Defaulting to the only supported framework/algorithm version: {}.\".format(\n available_versions[0]\n )\n if version and version != available_versions[0]:\n logger.warning(\"%s Ignoring framework/algorithm version: %s.\", log_message, version)\n elif not version:\n logger.info(log_message)\n\n return available_versions[0]\n\n _validate_arg(version, available_versions + aliased_versions, \"{} version\".format(framework))\n return version\n\n\ndef _version_for_config(version, config):\n \"\"\"Returns the version string for retrieving a framework version's specific config.\"\"\"\n if \"version_aliases\" in config:\n if version in config[\"version_aliases\"].keys():\n return config[\"version_aliases\"][version]\n\n return version\n\n\ndef _registry_from_region(region, registry_dict):\n \"\"\"Returns the ECR registry (AWS account number) for the given region.\"\"\"\n _validate_arg(region, registry_dict.keys(), \"region\")\n return registry_dict[region]\n\n\ndef _processor(instance_type, available_processors, serverless_inference_config=None):\n \"\"\"Returns the processor type for the given instance type.\"\"\"\n if not available_processors:\n logger.info(\"Ignoring unnecessary instance type: %s.\", instance_type)\n return None\n\n if len(available_processors) == 1 and not instance_type:\n logger.info(\"Defaulting to only supported image scope: %s.\", available_processors[0])\n return available_processors[0]\n\n if serverless_inference_config is not None:\n logger.info(\"Defaulting to CPU type when using serverless inference\")\n return \"cpu\"\n\n if not instance_type:\n raise ValueError(\n \"Empty SageMaker instance type. 
For options, see: \"\n \"https://aws.amazon.com/sagemaker/pricing/instance-types\"\n )\n\n if instance_type.startswith(\"local\"):\n processor = \"cpu\" if instance_type == \"local\" else \"gpu\"\n elif instance_type.startswith(\"neuron\"):\n processor = \"neuron\"\n else:\n # looks for either \"ml.<family>.<size>\" or \"ml_<family>\"\n match = re.match(r\"^ml[\\._]([a-z\\d]+)\\.?\\w*$\", instance_type)\n if match:\n family = match[1]\n\n # For some frameworks, we have optimized images for specific families, e.g c5 or p3.\n # In those cases, we use the family name in the image tag. In other cases, we use\n # 'cpu' or 'gpu'.\n if family in available_processors:\n processor = family\n elif family.startswith(\"inf\"):\n processor = \"inf\"\n elif family[0] in (\"g\", \"p\"):\n processor = \"gpu\"\n else:\n processor = \"cpu\"\n else:\n raise ValueError(\n \"Invalid SageMaker instance type: {}. For options, see: \"\n \"https://aws.amazon.com/sagemaker/pricing/instance-types\".format(instance_type)\n )\n\n _validate_arg(processor, available_processors, \"processor\")\n return processor\n\n\ndef _should_auto_select_container_version(instance_type, distribution):\n \"\"\"Returns a boolean that indicates whether to use an auto-selected container version.\"\"\"\n p4d = False\n if instance_type:\n # looks for either \"ml.<family>.<size>\" or \"ml_<family>\"\n match = re.match(r\"^ml[\\._]([a-z\\d]+)\\.?\\w*$\", instance_type)\n if match:\n family = match[1]\n p4d = family == \"p4d\"\n\n smdistributed = False\n if distribution:\n smdistributed = \"smdistributed\" in distribution\n\n return p4d or smdistributed\n\n\ndef _validate_py_version_and_set_if_needed(py_version, version_config, framework):\n \"\"\"Checks if the Python version is one of the supported versions.\"\"\"\n if \"repository\" in version_config:\n available_versions = version_config.get(\"py_versions\")\n else:\n available_versions = list(version_config.keys())\n\n if not available_versions:\n if py_version:\n logger.info(\"Ignoring unnecessary Python version: %s.\", py_version)\n return None\n\n if py_version is None and defaults.SPARK_NAME == framework:\n return None\n\n if py_version is None and len(available_versions) == 1:\n logger.info(\"Defaulting to only available Python version: %s\", available_versions[0])\n return available_versions[0]\n\n _validate_arg(py_version, available_versions, \"Python version\")\n return py_version\n\n\ndef _validate_arg(arg, available_options, arg_name):\n \"\"\"Checks if the arg is in the available options, and raises a ``ValueError`` if not.\"\"\"\n if arg not in available_options:\n raise ValueError(\n \"Unsupported {arg_name}: {arg}. You may need to upgrade your SDK version \"\n \"(pip install -U sagemaker) for newer {arg_name}s. 
Supported {arg_name}(s): \"\n \"{options}.\".format(arg_name=arg_name, arg=arg, options=\", \".join(available_options))\n )\n\n\ndef _format_tag(tag_prefix, processor, py_version, container_version, inference_tool=None):\n \"\"\"Creates a tag for the image URI.\"\"\"\n if inference_tool:\n return \"-\".join(x for x in (tag_prefix, inference_tool, py_version, container_version) if x)\n return \"-\".join(x for x in (tag_prefix, processor, py_version, container_version) if x)\n\n\ndef get_training_image_uri(\n region,\n framework,\n framework_version=None,\n py_version=None,\n image_uri=None,\n distribution=None,\n compiler_config=None,\n tensorflow_version=None,\n pytorch_version=None,\n instance_type=None,\n) -> str:\n \"\"\"Retrieves the image URI for training.\n\n Args:\n region (str): The AWS region to use for image URI.\n framework (str): The framework for which to retrieve an image URI.\n framework_version (str): The framework version for which to retrieve an\n image URI (default: None).\n py_version (str): The python version to use for the image (default: None).\n image_uri (str): If an image URI is supplied, it is returned (default: None).\n distribution (dict): A dictionary with information on how to run distributed\n training (default: None).\n compiler_config (:class:`~sagemaker.training_compiler.TrainingCompilerConfig`):\n A configuration class for the SageMaker Training Compiler\n (default: None).\n tensorflow_version (str): The version of TensorFlow to use. (default: None)\n pytorch_version (str): The version of PyTorch to use. (default: None)\n instance_type (str): The instance type to use. (default: None)\n\n Returns:\n str: The image URI string.\n \"\"\"\n\n if image_uri:\n return image_uri\n\n logger.info(\n \"image_uri is not presented, retrieving image_uri based on instance_type, framework etc.\"\n )\n base_framework_version: Optional[str] = None\n\n if tensorflow_version is not None or pytorch_version is not None:\n processor = _processor(instance_type, [\"cpu\", \"gpu\"])\n is_native_huggingface_gpu = processor == \"gpu\" and not compiler_config\n container_version = \"cu110-ubuntu18.04\" if is_native_huggingface_gpu else None\n if tensorflow_version is not None:\n base_framework_version = f\"tensorflow{tensorflow_version}\"\n else:\n base_framework_version = f\"pytorch{pytorch_version}\"\n else:\n container_version = None\n base_framework_version = None\n\n return retrieve(\n framework,\n region,\n instance_type=instance_type,\n version=framework_version,\n py_version=py_version,\n image_scope=\"training\",\n distribution=distribution,\n base_framework_version=base_framework_version,\n container_version=container_version,\n training_compiler_config=compiler_config,\n )\n"}
|
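The retrieve() logic in image_uris.py above keys most of its decisions off the instance-type family captured by the regex ^ml[\._]([a-z\d]+)\.?\w*$ used in _processor and _get_inference_tool. A small standalone sketch of how that extraction behaves (the family_of helper is illustrative and not part of the SDK):

import re

def family_of(instance_type):
    # Same pattern as _processor/_get_inference_tool: matches
    # "ml.<family>.<size>" or "ml_<family>" and captures the family.
    match = re.match(r"^ml[\._]([a-z\d]+)\.?\w*$", instance_type)
    return match[1] if match else None

assert family_of("ml.p3.2xlarge") == "p3"     # "p"/"g" families resolve to the "gpu" processor
assert family_of("ml.c5.xlarge") == "c5"      # other families fall back to "cpu" unless listed explicitly
assert family_of("ml.inf1.xlarge") == "inf1"  # "inf*" families select the "inf" processor / "neuron" tool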
diff --git a/src/sagemaker/image_uri_config/pytorch.json b/src/sagemaker/image_uri_config/pytorch.json
index a88f7f1c50..74127a1fda 100644
--- a/src/sagemaker/image_uri_config/pytorch.json
+++ b/src/sagemaker/image_uri_config/pytorch.json
@@ -654,6 +654,51 @@
}
}
},
+ "inference_graviton": {
+ "processors": [
+ "cpu"
+ ],
+ "version_aliases": {
+ "1.12": "1.12.1"
+ },
+ "versions": {
+ "1.12.1": {
+ "py_versions": [
+ "py38"
+ ],
+ "registries": {
+ "af-south-1": "626614931356",
+ "ap-east-1": "871362719292",
+ "ap-northeast-1": "763104351884",
+ "ap-northeast-2": "763104351884",
+ "ap-northeast-3": "364406365360",
+ "ap-south-1": "763104351884",
+ "ap-southeast-1": "763104351884",
+ "ap-southeast-2": "763104351884",
+ "ap-southeast-3": "907027046896",
+ "ca-central-1": "763104351884",
+ "cn-north-1": "727897471807",
+ "cn-northwest-1": "727897471807",
+ "eu-central-1": "763104351884",
+ "eu-north-1": "763104351884",
+ "eu-west-1": "763104351884",
+ "eu-west-2": "763104351884",
+ "eu-west-3": "763104351884",
+ "eu-south-1": "692866216735",
+ "me-south-1": "217643126080",
+ "sa-east-1": "763104351884",
+ "us-east-1": "763104351884",
+ "us-east-2": "763104351884",
+ "us-gov-west-1": "442386744353",
+ "us-iso-east-1": "886529160074",
+ "us-west-1": "763104351884",
+ "us-west-2": "763104351884"
+ },
+ "repository": "pytorch-inference-graviton",
+ "container_version": {"cpu": "ubuntu20.04"}
+ }
+ }
+ },
"training": {
"processors": [
"cpu",
diff --git a/src/sagemaker/image_uri_config/tensorflow.json b/src/sagemaker/image_uri_config/tensorflow.json
index 6a2318ddbe..0f5a390c8d 100644
--- a/src/sagemaker/image_uri_config/tensorflow.json
+++ b/src/sagemaker/image_uri_config/tensorflow.json
@@ -1471,6 +1471,51 @@
}
}
},
+ "inference_graviton": {
+ "processors": [
+ "cpu"
+ ],
+ "version_aliases": {
+ "2.9": "2.9.1"
+ },
+ "versions": {
+ "2.9.1": {
+ "py_versions": [
+ "py38"
+ ],
+ "registries": {
+ "af-south-1": "626614931356",
+ "ap-east-1": "871362719292",
+ "ap-northeast-1": "763104351884",
+ "ap-northeast-2": "763104351884",
+ "ap-northeast-3": "364406365360",
+ "ap-south-1": "763104351884",
+ "ap-southeast-1": "763104351884",
+ "ap-southeast-2": "763104351884",
+ "ap-southeast-3": "907027046896",
+ "ca-central-1": "763104351884",
+ "cn-north-1": "727897471807",
+ "cn-northwest-1": "727897471807",
+ "eu-central-1": "763104351884",
+ "eu-north-1": "763104351884",
+ "eu-west-1": "763104351884",
+ "eu-west-2": "763104351884",
+ "eu-west-3": "763104351884",
+ "eu-south-1": "692866216735",
+ "me-south-1": "217643126080",
+ "sa-east-1": "763104351884",
+ "us-east-1": "763104351884",
+ "us-east-2": "763104351884",
+ "us-gov-west-1": "442386744353",
+ "us-iso-east-1": "886529160074",
+ "us-west-1": "763104351884",
+ "us-west-2": "763104351884"
+ },
+ "repository": "tensorflow-inference-graviton",
+ "container_version": {"cpu": "ubuntu20.04"}
+ }
+ }
+ },
"training": {
"processors": [
"cpu",
|
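With the "inference_graviton" blocks above added to the PyTorch and TensorFlow configs, the PR description reports that image_uris.retrieve() resolves Graviton instance types to the new repositories. A usage sketch based on the testing notes in that description (region, instance types, and expected URIs are taken from there, not re-verified here):

from sagemaker import image_uris

pt_uri = image_uris.retrieve("pytorch", "us-west-2", instance_type="ml.c6g.4xlarge")
# expected: 763104351884.dkr.ecr.us-west-2.amazonaws.com/pytorch-inference-graviton:1.12.1-cpu-py38-ubuntu20.04-sagemaker

tf_uri = image_uris.retrieve("tensorflow", "us-west-2", instance_type="ml.t4g.2xlarge")
# expected: 763104351884.dkr.ecr.us-west-2.amazonaws.com/tensorflow-inference-graviton:2.9.1-cpu-py38-ubuntu20.04-sagemaker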
{"src/sagemaker/image_uris.py": [{"type": "function", "name": "_get_image_scope_for_instance_type", "lines": [295, 301], "signature": "def _get_image_scope_for_instance_type(framework, instance_type, image_scope):", "doc": "Extract the image scope from instance type."}]}
| null |
["tests/unit/sagemaker/image_uris/test_graviton.py::test_graviton_tensorflow", "tests/unit/sagemaker/image_uris/test_graviton.py::test_graviton_pytorch"]
|
[]
|
03738861995e0c3fda73958d251e83465aba3c04
|
{"first_commit_time": 1665021205.0, "pr_title": "feature: Graviton support for PyTorch and Tensorflow frameworks", "pr_body": "*Issue #, if available:*\r\n\r\n*Description of changes:*\r\nAdded the necessary changes to incorporate the [Graviton instances](https://github.com/aws/deep-learning-containers/releases/tag/v1.0-pt-graviton-sagemaker-1.12.1-inf-cpu-py38) into the `PyTorch` and `Tensorflow` frameworks.\r\n\r\n*Testing done:*\r\nLocally tested the feature to extract image string with the following code, where the output should match with expected DLC image URI's for PyTorch\r\n(`763104351884.dkr.ecr.us-west-2.amazonaws.com/pytorch-inference-graviton:1.12.1-cpu-py38-ubuntu20.04-sagemaker`) \r\nand Tensorflow\r\n(`763104351884.dkr.ecr.us-west-2.amazonaws.com/tensorflow-inference-graviton:2.9.1-cpu-py38-ubuntu20.04-sagemaker`).\r\n\r\n```bash\r\n>>> sagemaker.image_uris.retrieve(\"pytorch\",\"us-west-2\", instance_type=\"ml.c6g.4xlarge\")\r\n'763104351884.dkr.ecr.us-west-2.amazonaws.com/pytorch-inference-graviton:1.12.1-cpu-py38-ubuntu20.04-sagemaker'\r\n\r\n>>> sagemaker.image_uris.retrieve(\"tensorflow\",\"us-west-2\", instance_type=\"ml.t4g.2xlarge\")\r\n'763104351884.dkr.ecr.us-west-2.amazonaws.com/tensorflow-inference-graviton:2.9.1-cpu-py38-ubuntu20.04-sagemaker'\r\n\r\n```\r\n\r\n## Merge Checklist\r\n\r\n_Put an `x` in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your pull request._\r\n\r\n#### General\r\n\r\n- [x] I have read the [CONTRIBUTING](https://github.com/aws/sagemaker-python-sdk/blob/master/CONTRIBUTING.md) doc\r\n- [x] I certify that the changes I am introducing will be backward compatible, and I have discussed concerns about this, if any, with the Python SDK team\r\n- [x] I used the commit message format described in [CONTRIBUTING](https://github.com/aws/sagemaker-python-sdk/blob/master/CONTRIBUTING.md#committing-your-change)\r\n- [ ] I have passed the region in to all S3 and STS clients that I've initialized as part of this change.\r\n- [ ] I have updated any necessary documentation, including [READMEs](https://github.com/aws/sagemaker-python-sdk/blob/master/README.rst) and [API docs](https://github.com/aws/sagemaker-python-sdk/tree/master/doc) (if appropriate)\r\n\r\n#### Tests\r\n\r\n- [x] I have added tests that prove my fix is effective or that my feature works (if appropriate)\r\n- [x] I have added unit and/or integration tests as appropriate to ensure backward compatibility of the changes\r\n- [ ] I have checked that my tests are not configured for a specific region or account (if appropriate)\r\n- [ ] I have used [`unique_name_from_base`](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/utils.py#L77) to create resource names in integ tests (if appropriate)\r\n\r\nBy submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.\r\n", "pr_timeline": [{"time": 1666369021.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-unit-tests\n* Commit ID: 347ffa38f450d7f9f8dde7b8ed1623a86975bbd6\n* Result: FAILED\n* [Build Logs](https://prwmnp3ir9.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=d8673ccb-b44f-4fbc-961c-9c13cec2ff28%2Fbuild.log) (available for 30 days)\n\n*Powered by 
[github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666370107.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-local-mode-tests\n* Commit ID: 347ffa38f450d7f9f8dde7b8ed1623a86975bbd6\n* Result: SUCCEEDED\n* [Build Logs](https://0lfhakzi32.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=9b3c0cef-92c2-4e27-9d52-4d559752561f%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666416214.0, "comment": "# [Codecov](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3432?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws) Report\n> Merging [#3432](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3432?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws) (d79d151) into [master](https://codecov.io/gh/aws/sagemaker-python-sdk/commit/1fa23772efa2cff2bb8e422b4e69325a8f5ea618?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws) (1fa2377) will **decrease** coverage by `0.22%`.\n> The diff coverage is `83.86%`.\n\n```diff\n@@ Coverage Diff @@\n## master #3432 +/- ##\n==========================================\n- Coverage 89.17% 88.94% -0.23% \n==========================================\n Files 204 205 +1 \n Lines 18979 19136 +157 \n==========================================\n+ Hits 16924 17021 +97 \n- Misses 2055 2115 +60 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3432?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws) | Coverage \u0394 | |\n|---|---|---|\n| [src/sagemaker/sklearn/model.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3432/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci9za2xlYXJuL21vZGVsLnB5) | `92.98% <\u00f8> (-1.76%)` | :arrow_down: |\n| [...sagemaker/workflow/monitor\\_batch\\_transform\\_step.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3432/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci93b3JrZmxvdy9tb25pdG9yX2JhdGNoX3RyYW5zZm9ybV9zdGVwLnB5) | `0.00% <0.00%> (\u00f8)` | |\n| [src/sagemaker/workflow/utilities.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3432/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci93b3JrZmxvdy91dGlsaXRpZXMucHk=) | `94.31% <\u00f8> (\u00f8)` | |\n| [src/sagemaker/xgboost/model.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3432/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci94Z2Jvb3N0L21vZGVsLnB5) | `96.36% <\u00f8> (-1.82%)` | :arrow_down: |\n| 
[src/sagemaker/session.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3432/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci9zZXNzaW9uLnB5) | `71.41% <28.57%> (-0.25%)` | :arrow_down: |\n| [src/sagemaker/training\\_compiler/config.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3432/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci90cmFpbmluZ19jb21waWxlci9jb25maWcucHk=) | `80.43% <60.00%> (-2.90%)` | :arrow_down: |\n| [src/sagemaker/pytorch/estimator.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3432/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci9weXRvcmNoL2VzdGltYXRvci5weQ==) | `94.73% <63.63%> (-5.27%)` | :arrow_down: |\n| [src/sagemaker/sklearn/estimator.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3432/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci9za2xlYXJuL2VzdGltYXRvci5weQ==) | `93.93% <63.63%> (-6.07%)` | :arrow_down: |\n| [src/sagemaker/estimator.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3432/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci9lc3RpbWF0b3IucHk=) | `89.43% <71.42%> (-0.08%)` | :arrow_down: |\n| [src/sagemaker/amazon/amazon\\_estimator.py](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3432/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws#diff-c3JjL3NhZ2VtYWtlci9hbWF6b24vYW1hem9uX2VzdGltYXRvci5weQ==) | `87.82% <81.25%> (-0.62%)` | :arrow_down: |\n| ... and [22 more](https://codecov.io/gh/aws/sagemaker-python-sdk/pull/3432/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws) | |\n\nHelp us with your feedback. Take ten seconds to tell us [how you rate us](https://about.codecov.io/nps?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws). Have a feature suggestion? 
[Share it here.](https://app.codecov.io/gh/feedback/?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=aws)\n"}, {"time": 1666370500.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-pr\n* Commit ID: 347ffa38f450d7f9f8dde7b8ed1623a86975bbd6\n* Result: FAILED\n* [Build Logs](https://sxsf0yqucb.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=a117689d-d7fd-4547-9a72-37aa685d7a58%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666371167.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-notebook-tests\n* Commit ID: 347ffa38f450d7f9f8dde7b8ed1623a86975bbd6\n* Result: SUCCEEDED\n* [Build Logs](https://jr6hkjvk5d.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=90610b7e-3636-467a-b325-ca2171c9adcc%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666376286.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-unit-tests\n* Commit ID: d79d151db88d5c23b5ea615779ee4d4ff6ef56fd\n* Result: SUCCEEDED\n* [Build Logs](https://prwmnp3ir9.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=d8e1be62-7201-410d-985a-c4b71340c89e%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666377159.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-local-mode-tests\n* Commit ID: d79d151db88d5c23b5ea615779ee4d4ff6ef56fd\n* Result: SUCCEEDED\n* [Build Logs](https://0lfhakzi32.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=5f2c4335-a1d9-4ef3-867d-ffc2a73148ca%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666377965.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-pr\n* Commit ID: d79d151db88d5c23b5ea615779ee4d4ff6ef56fd\n* Result: SUCCEEDED\n* [Build Logs](https://sxsf0yqucb.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=bc121512-b4a5-4740-9fbd-993bd60b9737%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application 
Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666378500.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-notebook-tests\n* Commit ID: d79d151db88d5c23b5ea615779ee4d4ff6ef56fd\n* Result: SUCCEEDED\n* [Build Logs](https://jr6hkjvk5d.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=55581e9b-cea2-41f0-a191-7a8ec8bf52ce%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666384784.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-slow-tests\n* Commit ID: d79d151db88d5c23b5ea615779ee4d4ff6ef56fd\n* Result: FAILED\n* [Build Logs](https://we1smikb89.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=bf3eebf0-13e5-4aa1-999a-8672d9fc0b6a%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666390893.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-slow-tests\n* Commit ID: d79d151db88d5c23b5ea615779ee4d4ff6ef56fd\n* Result: FAILED\n* [Build Logs](https://we1smikb89.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=8c634381-e798-450c-8c9e-9cfe2a455884%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666402825.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-slow-tests\n* Commit ID: d79d151db88d5c23b5ea615779ee4d4ff6ef56fd\n* Result: FAILED\n* [Build Logs](https://we1smikb89.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=4681f3bb-4468-4847-ad5c-27006c21d509%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666404886.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-slow-tests\n* Commit ID: d79d151db88d5c23b5ea615779ee4d4ff6ef56fd\n* Result: FAILED\n* [Build Logs](https://we1smikb89.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=f11e9ae5-a8a1-4a2b-bb59-8dca9049f8b1%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application 
Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}, {"time": 1666416162.0, "comment": "\n### AWS CodeBuild CI Report\n\n* CodeBuild project: sagemaker-python-sdk-slow-tests\n* Commit ID: d79d151db88d5c23b5ea615779ee4d4ff6ef56fd\n* Result: SUCCEEDED\n* [Build Logs](https://we1smikb89.execute-api.us-west-2.amazonaws.com/Prod/buildlogs?key=ef44b518-c11e-4270-97f7-96200d3b69c3%2Fbuild.log) (available for 30 days)\n\n*Powered by [github-codebuild-logs](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:277187709615:applications~github-codebuild-logs), available on the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/)*\n\n\n<!--\nCREATED BY GITHUB-CODEBUILD-LOGS\n-->\n\n"}], "issues": {}}
|
boto/boto3
| 605
|
https://github.com/boto/boto3/pull/605
|
boto__boto3-605
|
[]
|
c0b2d38ccc9f09cbd4af5e05875b620ce2e581f7
|
diff --git a/.changes/next-release/feature-DynamoDB.json b/.changes/next-release/feature-DynamoDB.json
new file mode 100644
index 0000000000..e7c6841edc
--- /dev/null
+++ b/.changes/next-release/feature-DynamoDB.json
@@ -0,0 +1,5 @@
+{
+ "category": "DynamoDB",
+ "type": "feature",
+  "description": "Add request auto de-duplication based on specified primary keys for batch_writer. (``#605 <https://github.com/boto/boto3/issues/605>`__)"
+}
\ No newline at end of file
diff --git a/boto3/dynamodb/table.py b/boto3/dynamodb/table.py
index a4b7f17cd2..6e32e190b9 100644
--- a/boto3/dynamodb/table.py
+++ b/boto3/dynamodb/table.py
@@ -29,7 +29,7 @@ class TableResource(object):
def __init__(self, *args, **kwargs):
super(TableResource, self).__init__(*args, **kwargs)
- def batch_writer(self):
+ def batch_writer(self, overwrite_by_pkeys=None):
"""Create a batch writer object.
This method creates a context manager for writing
@@ -39,7 +39,9 @@ def batch_writer(self):
in batches. In addition, the batch writer will also automatically
handle any unprocessed items and resend them as needed. All you need
to do is call ``put_item`` for any items you want to add, and
- ``delete_item`` for any items you want to delete.
+ ``delete_item`` for any items you want to delete. In addition, you can
+ specify ``auto_dedup`` if the batch might contain duplicated requests
+        specify ``overwrite_by_pkeys`` if the batch might contain duplicated requests
+        and you want this writer to de-duplicate them for you.
@@ -50,13 +52,18 @@ def batch_writer(self):
# You can also delete_items in a batch.
batch.delete_item(Key={'HashKey': 'SomeHashKey'})
+ :type overwrite_by_pkeys: list(string)
+ :param overwrite_by_pkeys: De-duplicate request items in buffer
+ if match new request item on specified primary keys. i.e
+ ``["partition_key1", "sort_key2", "sort_key3"]``
+
"""
- return BatchWriter(self.name, self.meta.client)
+ return BatchWriter(self.name, self.meta.client, overwrite_by_pkeys=overwrite_by_pkeys)
class BatchWriter(object):
"""Automatically handle batch writes to DynamoDB for a single table."""
- def __init__(self, table_name, client, flush_amount=25):
+ def __init__(self, table_name, client, flush_amount=25, overwrite_by_pkeys=None):
"""
:type table_name: str
@@ -78,21 +85,44 @@ def __init__(self, table_name, client, flush_amount=25):
a local buffer before sending a batch_write_item
request to DynamoDB.
+ :type overwrite_by_pkeys: list(string)
+ :param overwrite_by_pkeys: De-duplicate request items in buffer
+ if match new request item on specified primary keys. i.e
+ ``["partition_key1", "sort_key2", "sort_key3"]``
"""
self._table_name = table_name
self._client = client
self._items_buffer = []
self._flush_amount = flush_amount
+ self._overwrite_by_pkeys = overwrite_by_pkeys
def put_item(self, Item):
- self._items_buffer.append({'PutRequest': {'Item': Item}})
- self._flush_if_needed()
+ self._add_request_and_process({'PutRequest': {'Item': Item}})
def delete_item(self, Key):
- self._items_buffer.append({'DeleteRequest': {'Key': Key}})
+ self._add_request_and_process({'DeleteRequest': {'Key': Key}})
+
+ def _add_request_and_process(self, request):
+ if self._overwrite_by_pkeys:
+ self._remove_dup_pkeys_request_if_any(request)
+            logger.debug("With overwrite_by_pkeys enabled, de-duplicating request: %s", request)
+ self._items_buffer.append(request)
self._flush_if_needed()
+ def _remove_dup_pkeys_request_if_any(self, request):
+ pkey_values_new = self._extract_pkey_values(request)
+ for item in self._items_buffer:
+ if self._extract_pkey_values(item) == pkey_values_new:
+ self._items_buffer.remove(item)
+
+ def _extract_pkey_values(self, request):
+ if request.get('PutRequest'):
+ return [ request['PutRequest']['Item'][key] for key in self._overwrite_by_pkeys ]
+ elif request.get('DeleteRequest'):
+ return [ request['DeleteRequest']['Key'][key] for key in self._overwrite_by_pkeys ]
+ return None
+
def _flush_if_needed(self):
if len(self._items_buffer) >= self._flush_amount:
self._flush()
diff --git a/docs/source/guide/dynamodb.rst b/docs/source/guide/dynamodb.rst
index bf58c506c7..ade7d636a2 100644
--- a/docs/source/guide/dynamodb.rst
+++ b/docs/source/guide/dynamodb.rst
@@ -274,6 +274,64 @@ table.
}
)
+The batch writer can help to de-duplicate requests by specifying ``overwrite_by_pkeys=['partition_key', 'sort_key']``
+if you want to bypass the no-duplicate limitation of a single batch write request, which otherwise fails with
+``botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the BatchWriteItem operation: Provided list of item keys contains duplicates``.
+
+It will drop request items already in the buffer whose primary key (composite) values match
+those of a newly added request, which is eventually consistent with a stream of individual
+put/delete operations on the same item.
+
+::
+
+ with table.batch_writer(overwrite_by_pkeys=['partition_key', 'sort_key']) as batch:
+ batch.put_item(
+ Item={
+ 'partition_key': 'p1',
+ 'sort_key': 's1',
+ 'other': '111',
+ }
+ )
+ batch.put_item(
+ Item={
+ 'partition_key': 'p1',
+ 'sort_key': 's1',
+ 'other': '222',
+ }
+ )
+ batch.delete_item(
+ Key={
+ 'partition_key': 'p1',
+ 'sort_key': 's2'
+ }
+ )
+ batch.put_item(
+ Item={
+ 'partition_key': 'p1',
+ 'sort_key': 's2',
+ 'other': '444',
+ }
+ )
+
+After de-duplication, the buffered requests are equivalent to:
+
+::
+
+ batch.put_item(
+ Item={
+ 'partition_key': 'p1',
+ 'sort_key': 's1',
+ 'other': '222',
+ }
+ )
+ batch.put_item(
+ Item={
+ 'partition_key': 'p1',
+ 'sort_key': 's2',
+ 'other': '444',
+ }
+ )
+
Querying and Scanning
---------------------
|
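The patch above keeps only the most recent buffered request per primary-key combination. A minimal standalone sketch of that rule (dedup_append is illustrative; the library performs the same buffer scan inside BatchWriter):

def dedup_append(buffer, request, pkeys):
    """Drop buffered requests sharing the new request's primary-key values, then append it."""
    def key_values(req):
        # Works for both PutRequest and DeleteRequest shapes.
        body = req.get('PutRequest', {}).get('Item') or req.get('DeleteRequest', {}).get('Key')
        return [body[k] for k in pkeys]

    new_values = key_values(request)
    buffer[:] = [item for item in buffer if key_values(item) != new_values]
    buffer.append(request)

buffer = []
dedup_append(buffer, {'PutRequest': {'Item': {'pk': 'p1', 'sk': 's1', 'other': '111'}}}, ['pk', 'sk'])
dedup_append(buffer, {'PutRequest': {'Item': {'pk': 'p1', 'sk': 's1', 'other': '222'}}}, ['pk', 'sk'])
assert buffer == [{'PutRequest': {'Item': {'pk': 'p1', 'sk': 's1', 'other': '222'}}}]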
diff --git a/tests/functional/docs/test_dynamodb.py b/tests/functional/docs/test_dynamodb.py
index 6bc22ceafe..b751e1fe4e 100644
--- a/tests/functional/docs/test_dynamodb.py
+++ b/tests/functional/docs/test_dynamodb.py
@@ -27,7 +27,7 @@ def test_batch_writer_is_documented(self):
self.assert_contains_lines_in_order([
'.. py:class:: DynamoDB.Table(name)',
' * :py:meth:`batch_writer()`',
- ' .. py:method:: batch_writer()'],
+ ' .. py:method:: batch_writer(overwrite_by_pkeys=None)'],
self.generated_contents
)
diff --git a/tests/unit/dynamodb/test_table.py b/tests/unit/dynamodb/test_table.py
index 127dce270d..66b8f335fb 100644
--- a/tests/unit/dynamodb/test_table.py
+++ b/tests/unit/dynamodb/test_table.py
@@ -285,3 +285,104 @@ def test_repeated_flushing_on_exit(self):
}
self.assert_batch_write_calls_are([first_batch, second_batch,
third_batch])
+
+ def test_auto_dedup_for_dup_requests(self):
+ with BatchWriter(self.table_name, self.client,
+ flush_amount=5, overwrite_by_pkeys=["pkey", "skey"]) as b:
+ # dup 1
+ b.put_item(Item={
+ 'pkey': 'foo1',
+ 'skey': 'bar1',
+ 'other': 'other1'
+ })
+ b.put_item(Item={
+ 'pkey': 'foo1',
+ 'skey': 'bar1',
+ 'other': 'other2'
+ })
+ # dup 2
+ b.delete_item(Key={
+ 'pkey': 'foo1',
+ 'skey': 'bar2',
+ })
+ b.put_item(Item={
+ 'pkey': 'foo1',
+ 'skey': 'bar2',
+ 'other': 'other3'
+ })
+ # dup 3
+ b.put_item(Item={
+ 'pkey': 'foo2',
+ 'skey': 'bar2',
+ 'other': 'other3'
+ })
+ b.delete_item(Key={
+ 'pkey': 'foo2',
+ 'skey': 'bar2',
+ })
+ # dup 4
+ b.delete_item(Key={
+ 'pkey': 'foo2',
+ 'skey': 'bar3',
+ })
+ b.delete_item(Key={
+ 'pkey': 'foo2',
+ 'skey': 'bar3',
+ })
+ # 5
+ b.delete_item(Key={
+ 'pkey': 'foo3',
+ 'skey': 'bar3',
+ })
+ # 2nd batch
+ b.put_item(Item={
+ 'pkey': 'foo1',
+ 'skey': 'bar1',
+ 'other': 'other1'
+ })
+ b.put_item(Item={
+ 'pkey': 'foo1',
+ 'skey': 'bar1',
+ 'other': 'other2'
+ })
+
+ first_batch = {
+ 'RequestItems': {
+ self.table_name: [
+ {'PutRequest': { 'Item': {
+ 'pkey': 'foo1',
+ 'skey': 'bar1',
+ 'other': 'other2'
+ }}},
+ {'PutRequest': { 'Item': {
+ 'pkey': 'foo1',
+ 'skey': 'bar2',
+ 'other': 'other3'
+ }}},
+ {'DeleteRequest': {'Key': {
+ 'pkey': 'foo2',
+ 'skey': 'bar2',
+ }}},
+ {'DeleteRequest': {'Key': {
+ 'pkey': 'foo2',
+ 'skey': 'bar3',
+ }}},
+ {'DeleteRequest': {'Key': {
+ 'pkey': 'foo3',
+ 'skey': 'bar3',
+ }}},
+ ]
+ }
+ }
+ second_batch = {
+ 'RequestItems': {
+ self.table_name: [
+ {'PutRequest': { 'Item': {
+ 'pkey': 'foo1',
+ 'skey': 'bar1',
+ 'other': 'other2'
+ }}},
+ ]
+ }
+ }
+ self.assert_batch_write_calls_are([first_batch, second_batch])
| 2016-04-25T07:38:12
|
{}
|
{".changes/next-release/feature-DynamoDB.json": null, "boto3/dynamodb/table.py": "# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\nimport logging\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef register_table_methods(base_classes, **kwargs):\n base_classes.insert(0, TableResource)\n\n\n# This class can be used to add any additional methods we want\n# onto a table resource. Ideally to avoid creating a new\n# base class for every method we can just update this\n# class instead. Just be sure to move the bulk of the\n# actual method implementation to another class.\nclass TableResource(object):\n def __init__(self, *args, **kwargs):\n super(TableResource, self).__init__(*args, **kwargs)\n\n def batch_writer(self):\n \"\"\"Create a batch writer object.\n\n This method creates a context manager for writing\n objects to Amazon DynamoDB in batch.\n\n The batch writer will automatically handle buffering and sending items\n in batches. In addition, the batch writer will also automatically\n handle any unprocessed items and resend them as needed. All you need\n to do is call ``put_item`` for any items you want to add, and\n ``delete_item`` for any items you want to delete.\n\n Example usage::\n\n with table.batch_writer() as batch:\n for _ in xrange(1000000):\n batch.put_item(Item={'HashKey': '...',\n 'Otherstuff': '...'})\n # You can also delete_items in a batch.\n batch.delete_item(Key={'HashKey': 'SomeHashKey'})\n\n \"\"\"\n return BatchWriter(self.name, self.meta.client)\n\n\nclass BatchWriter(object):\n \"\"\"Automatically handle batch writes to DynamoDB for a single table.\"\"\"\n def __init__(self, table_name, client, flush_amount=25):\n \"\"\"\n\n :type table_name: str\n :param table_name: The name of the table. The class handles\n batch writes to a single table.\n\n :type client: ``botocore.client.Client``\n :param client: A botocore client. Note this client\n **must** have the dynamodb customizations applied\n to it for transforming AttributeValues into the\n wire protocol. 
What this means in practice is that\n you need to use a client that comes from a DynamoDB\n resource if you're going to instantiate this class\n directly, i.e\n ``boto3.resource('dynamodb').Table('foo').meta.client``.\n\n :type flush_amount: int\n :param flush_amount: The number of items to keep in\n a local buffer before sending a batch_write_item\n request to DynamoDB.\n\n\n \"\"\"\n self._table_name = table_name\n self._client = client\n self._items_buffer = []\n self._flush_amount = flush_amount\n\n def put_item(self, Item):\n self._items_buffer.append({'PutRequest': {'Item': Item}})\n self._flush_if_needed()\n\n def delete_item(self, Key):\n self._items_buffer.append({'DeleteRequest': {'Key': Key}})\n self._flush_if_needed()\n\n def _flush_if_needed(self):\n if len(self._items_buffer) >= self._flush_amount:\n self._flush()\n\n def _flush(self):\n items_to_send = self._items_buffer[:self._flush_amount]\n self._items_buffer = self._items_buffer[self._flush_amount:]\n response = self._client.batch_write_item(\n RequestItems={self._table_name: items_to_send})\n unprocessed_items = response['UnprocessedItems']\n\n if unprocessed_items and unprocessed_items[self._table_name]:\n # Any unprocessed_items are immediately added to the\n # next batch we send.\n self._items_buffer.extend(unprocessed_items[self._table_name])\n else:\n self._items_buffer = []\n logger.debug(\"Batch write sent %s, unprocessed: %s\",\n len(items_to_send), len(self._items_buffer))\n\n def __enter__(self):\n return self\n\n def __exit__(self, exc_type, exc_value, tb):\n # When we exit, we need to keep flushing whatever's left\n # until there's nothing left in our items buffer.\n while self._items_buffer:\n self._flush()\n", "docs/source/guide/dynamodb.rst": ".. _dynamodb_guide:\n\nDynamoDB\n========\nBy following this guide, you will learn how to use the\n:py:class:`DynamoDB.ServiceResource` and :py:class:`DynamoDB.Table`\nresources in order to create tables, write items to tables, modify existing\nitems, retrieve items, and query/filter the items in the table.\n\n\nCreating a New Table\n--------------------\n\nIn order to create a new table, use the\n:py:meth:`DynamoDB.ServiceResource.create_table` method::\n\n import boto3\n \n # Get the service resource.\n dynamodb = boto3.resource('dynamodb')\n\n # Create the DynamoDB table.\n table = dynamodb.create_table(\n TableName='users',\n KeySchema=[\n {\n 'AttributeName': 'username',\n 'KeyType': 'HASH'\n },\n {\n 'AttributeName': 'last_name',\n 'KeyType': 'RANGE'\n }\n ],\n AttributeDefinitions=[\n {\n 'AttributeName': 'username',\n 'AttributeType': 'S'\n },\n {\n 'AttributeName': 'last_name',\n 'AttributeType': 'S'\n },\n\n ],\n ProvisionedThroughput={\n 'ReadCapacityUnits': 5,\n 'WriteCapacityUnits': 5\n }\n )\n\n # Wait until the table exists.\n table.meta.client.get_waiter('table_exists').wait(TableName='users')\n\n # Print out some data about the table.\n print(table.item_count)\n\nExpected Output::\n\n 0\n\nThis creates a table named ``users`` that respectively has the hash and\nrange primary keys ``username`` and ``last_name``.\nThis method will return a :py:class:`DynamoDB.Table` resource to call\nadditional methods on the created table.\n\n\nUsing an Existing Table\n-----------------------\nIt is also possible to create a :py:class:`DynamoDB.Table` resource from\nan existing table::\n\n import boto3\n\n # Get the service resource.\n dynamodb = boto3.resource('dynamodb')\n\n # Instantiate a table resource object without actually\n # creating a DynamoDB table. 
Note that the attributes of this table\n # are lazy-loaded: a request is not made nor are the attribute\n # values populated until the attributes\n # on the table resource are accessed or its load() method is called.\n table = dynamodb.Table('users')\n\n # Print out some data about the table.\n # This will cause a request to be made to DynamoDB and its attribute\n # values will be set based on the response.\n print(table.creation_date_time)\n\nExpected Output (Pleas note that probably the actual times will not match up)::\n\n 2015-06-26 12:42:45.149000-07:00\n\n\nCreating a New Item\n-------------------\n\nOnce you have a :py:class:`DynamoDB.Table` resource you can add new items\nto the table using :py:meth:`DynamoDB.Table.put_item`::\n\n table.put_item(\n Item={\n 'username': 'janedoe',\n 'first_name': 'Jane',\n 'last_name': 'Doe',\n 'age': 25,\n 'account_type': 'standard_user',\n }\n )\n\nFor all of the valid types that can be used for an item, refer to\n:ref:`ref_valid_dynamodb_types`.\n\n\nGetting an Item\n---------------\nYou can then retrieve the object using :py:meth:`DynamoDB.Table.get_item`::\n\n response = table.get_item(\n Key={\n 'username': 'janedoe',\n 'last_name': 'Doe'\n }\n )\n item = response['Item']\n print(item)\n\n\nExpected Output::\n\n {u'username': u'janedoe',\n u'first_name': u'Jane',\n u'last_name': u'Doe',\n u'account_type': u'standard_user',\n u'age': Decimal('25')}\n\n\nUpdating Item\n-------------\n\nYou can then update attributes of the item in the table::\n\n table.update_item(\n Key={\n 'username': 'janedoe',\n 'last_name': 'Doe'\n },\n UpdateExpression='SET age = :val1',\n ExpressionAttributeValues={\n ':val1': 26\n }\n )\n\nThen if you retrieve the item again, it will be updated appropriately::\n\n response = table.get_item(\n Key={\n 'username': 'janedoe',\n 'last_name': 'Doe'\n }\n )\n item = response['Item']\n print(item)\n\n\nExpected Output::\n\n {u'username': u'janedoe',\n u'first_name': u'Jane',\n u'last_name': u'Doe',\n u'account_type': u'standard_user',\n u'age': Decimal('26')}\n\n\nDeleting Item\n-------------\nYou can also delete the item using :py:meth:`DynamoDB.Table.delete_item`::\n \n table.delete_item(\n Key={\n 'username': 'janedoe',\n 'last_name': 'Doe'\n }\n )\n\n\nBatch Writing\n-------------\nIf you are loading a lot of data at a time, you can make use of\n:py:meth:`DyanmoDB.Table.batch_writer` so you can both speed up the process and\nreduce the number of write requests made to the service.\n\nThis method returns a handle to a batch writer object that will automatically\nhandle buffering and sending items in batches. In addition, the\nbatch writer will also automatically handle any unprocessed items and\nresend them as needed. 
All you need to do is call ``put_item`` for any\nitems you want to add, and ``delete_item`` for any items you want to delete::\n\n with table.batch_writer() as batch:\n batch.put_item(\n Item={\n 'account_type': 'standard_user',\n 'username': 'johndoe',\n 'first_name': 'John',\n 'last_name': 'Doe',\n 'age': 25,\n 'address': {\n 'road': '1 Jefferson Street',\n 'city': 'Los Angeles',\n 'state': 'CA',\n 'zipcode': 90001\n }\n }\n )\n batch.put_item(\n Item={\n 'account_type': 'super_user',\n 'username': 'janedoering',\n 'first_name': 'Jane',\n 'last_name': 'Doering',\n 'age': 40,\n 'address': {\n 'road': '2 Washington Avenue',\n 'city': 'Seattle',\n 'state': 'WA',\n 'zipcode': 98109\n }\n }\n )\n batch.put_item(\n Item={\n 'account_type': 'standard_user',\n 'username': 'bobsmith',\n 'first_name': 'Bob',\n 'last_name': 'Smith',\n 'age': 18,\n 'address': {\n 'road': '3 Madison Lane',\n 'city': 'Louisville',\n 'state': 'KY',\n 'zipcode': 40213\n }\n }\n )\n batch.put_item(\n Item={\n 'account_type': 'super_user',\n 'username': 'alicedoe',\n 'first_name': 'Alice',\n 'last_name': 'Doe',\n 'age': 27,\n 'address': {\n 'road': '1 Jefferson Street',\n 'city': 'Los Angeles',\n 'state': 'CA',\n 'zipcode': 90001\n }\n }\n )\n\nThe batch writer is even able to handle a very large amount of writes to the\ntable.\n\n::\n\n with table.batch_writer() as batch:\n for i in range(50):\n batch.put_item(\n Item={\n 'account_type': 'anonymous',\n 'username': 'user' + str(i),\n 'first_name': 'unknown',\n 'last_name': 'unknown'\n }\n )\n\n\nQuerying and Scanning\n---------------------\n\nWith the table full of items, you can then query or scan the items in the table\nusing the :py:meth:`DynamoDB.Table.query` or :py:meth:`DynamoDB.Table.scan`\nmethods respectively. To add conditions to scanning and querying the table,\nyou will need to import the :py:class:`boto3.dynamodb.conditions.Key` and\n:py:class:`boto3.dynamodb.conditions.Attr` classes. The\n:py:class:`boto3.dynamodb.conditions.Key` should be used when the\ncondition is related to the key of the item.\nThe :py:class:`boto3.dynamodb.conditions.Attr` should be used when the\ncondition is related to an attribute of the item::\n\n from boto3.dynamodb.conditions import Key, Attr\n \n\nThis queries for all of the users whose ``username`` key equals ``johndoe``::\n\n response = table.query(\n KeyConditionExpression=Key('username').eq('johndoe')\n )\n items = response['Items']\n print(items)\n\n\nExpected Output::\n\n [{u'username': u'johndoe',\n u'first_name': u'John',\n u'last_name': u'Doe',\n u'account_type': u'standard_user',\n u'age': Decimal('25'),\n u'address': {u'city': u'Los Angeles',\n u'state': u'CA',\n u'zipcode': Decimal('90001'),\n u'road': u'1 Jefferson Street'}}]\n\n\nSimiliarly you can scan the table based on attributes of the items. 
For\nexample, this scans for all the users whose ``age`` is less than ``27``::\n\n response = table.scan(\n FilterExpression=Attr('age').lt(27)\n )\n items = response['Items']\n print(items)\n\n\nExpected Output::\n\n [{u'username': u'johndoe',\n u'first_name': u'John',\n u'last_name': u'Doe',\n u'account_type': u'standard_user',\n u'age': Decimal('25'),\n u'address': {u'city': u'Los Angeles',\n u'state': u'CA',\n u'zipcode': Decimal('90001'),\n u'road': u'1 Jefferson Street'}},\n {u'username': u'bobsmith',\n u'first_name': u'Bob',\n u'last_name': u'Smith',\n u'account_type': u'standard_user',\n u'age': Decimal('18'),\n u'address': {u'city': u'Louisville',\n u'state': u'KY',\n u'zipcode': Decimal('40213'),\n u'road': u'3 Madison Lane'}}]\n\n\nYou are also able to chain conditions together using the logical operators:\n``&`` (and), ``|`` (or), and ``~`` (not). For example, this scans for all\nusers whose ``first_name`` starts with ``J`` and whose ``account_type`` is\n``super_user``::\n \n response = table.scan(\n FilterExpression=Attr('first_name').begins_with('J') & Attr('account_type').eq('super_user')\n )\n items = response['Items']\n print(items)\n\n\nExpected Output::\n\n [{u'username': u'janedoering',\n u'first_name': u'Jane',\n u'last_name': u'Doering',\n u'account_type': u'super_user',\n u'age': Decimal('40'),\n u'address': {u'city': u'Seattle',\n u'state': u'WA',\n u'zipcode': Decimal('98109'),\n u'road': u'2 Washington Avenue'}}]\n\n\nYou can even scan based on conditions of a nested attribute. For example this\nscans for all users whose ``state`` in their ``address`` is ``CA``::\n\n response = table.scan(\n FilterExpression=Attr('address.state').eq('CA')\n )\n items = response['Items']\n print(items)\n\n\nExpected Output::\n\n [{u'username': u'johndoe',\n u'first_name': u'John',\n u'last_name': u'Doe',\n u'account_type': u'standard_user',\n u'age': Decimal('25'),\n u'address': {u'city': u'Los Angeles',\n u'state': u'CA',\n u'zipcode': Decimal('90001'),\n u'road': u'1 Jefferson Street'}},\n {u'username': u'alicedoe',\n u'first_name': u'Alice',\n u'last_name': u'Doe',\n u'account_type': u'super_user',\n u'age': Decimal('27'),\n u'address': {u'city': u'Los Angeles',\n u'state': u'CA',\n u'zipcode': Decimal('90001'),\n u'road': u'1 Jefferson Street'}}]\n\n\nFor more information on the various conditions you can use for queries and\nscans, refer to :ref:`ref_dynamodb_conditions`.\n\n\nDeleting a Table\n----------------\nFinally, if you want to delete your table call\n:py:meth:`DynamoDB.Table.delete`::\n\n table.delete()\n"}
|
diff --git a/.changes/next-release/feature-DynamoDB.json b/.changes/next-release/feature-DynamoDB.json
new file mode 100644
index 0000000000..e7c6841edc
--- /dev/null
+++ b/.changes/next-release/feature-DynamoDB.json
@@ -0,0 +1,5 @@
+{
+ "category": "DynamoDB",
+ "type": "feature",
+ "description": "Add request auto de-duplication based on specified primary keys for batch_writer. (``#605 <https://github.com/boto/boto3/issues/605>`__)"
+}
\ No newline at end of file
diff --git a/docs/source/guide/dynamodb.rst b/docs/source/guide/dynamodb.rst
index bf58c506c7..ade7d636a2 100644
--- a/docs/source/guide/dynamodb.rst
+++ b/docs/source/guide/dynamodb.rst
@@ -274,6 +274,64 @@ table.
}
)
+The batch writer can de-duplicate requests for you when you specify ``overwrite_by_pkeys=['partition_key', 'sort_key']``.
+This works around the no-duplicates limitation of a single batch write request, which otherwise fails with
+``botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the BatchWriteItem operation: Provided list of item keys contains duplicates``.
+
+When enabled, it drops any request already in the buffer whose (composite) primary key
+values match those of a newly added request, so the end result is the same as applying
+the individual put/delete operations for that item one at a time, in order.
+
+::
+
+ with table.batch_writer(overwrite_by_pkeys=['partition_key', 'sort_key']) as batch:
+ batch.put_item(
+ Item={
+ 'partition_key': 'p1',
+ 'sort_key': 's1',
+ 'other': '111',
+ }
+ )
+ batch.put_item(
+ Item={
+ 'partition_key': 'p1',
+ 'sort_key': 's1',
+ 'other': '222',
+ }
+ )
+ batch.delete_item(
+ Key={
+ 'partition_key': 'p1',
+ 'sort_key': 's2'
+ }
+ )
+ batch.put_item(
+ Item={
+ 'partition_key': 'p1',
+ 'sort_key': 's2',
+ 'other': '444',
+ }
+ )
+
+After de-duplication, the buffer contains:
+
+::
+
+ batch.put_item(
+ Item={
+ 'partition_key': 'p1',
+ 'sort_key': 's1',
+ 'other': '222',
+ }
+ )
+ batch.put_item(
+ Item={
+ 'partition_key': 'p1',
+ 'sort_key': 's2',
+ 'other': '444',
+ }
+ )
+
Querying and Scanning
---------------------
|
{"boto3/dynamodb/table.py": [{"type": "function", "name": "BatchWriter._add_request_and_process", "lines": [106, 111], "signature": "def _add_request_and_process(self, request):", "doc": ""}, {"type": "function", "name": "BatchWriter._remove_dup_pkeys_request_if_any", "lines": [113, 117], "signature": "def _remove_dup_pkeys_request_if_any(self, request):", "doc": ""}, {"type": "function", "name": "BatchWriter._extract_pkey_values", "lines": [119, 124], "signature": "def _extract_pkey_values(self, request):", "doc": ""}]}
| null |
["tests/functional/docs/test_dynamodb.py::TestDynamoDBCustomizations::test_batch_writer_is_documented", "tests/unit/dynamodb/test_table.py::BaseTransformationTest::test_auto_dedup_for_dup_requests"]
|
["tests/functional/docs/test_dynamodb.py::TestDynamoDBCustomizations::test_conditions_is_documented", "tests/functional/docs/test_dynamodb.py::TestDynamoDBCustomizations::test_document_interface_is_documented", "tests/unit/dynamodb/test_table.py::BaseTransformationTest::test_all_items_flushed_on_exit", "tests/unit/dynamodb/test_table.py::BaseTransformationTest::test_batch_write_does_not_immediately_write", "tests/unit/dynamodb/test_table.py::BaseTransformationTest::test_batch_write_flushes_at_flush_amount", "tests/unit/dynamodb/test_table.py::BaseTransformationTest::test_can_handle_puts_and_deletes", "tests/unit/dynamodb/test_table.py::BaseTransformationTest::test_multiple_batch_calls_with_mixed_deletes", "tests/unit/dynamodb/test_table.py::BaseTransformationTest::test_multiple_flushes_reset_items_to_put", "tests/unit/dynamodb/test_table.py::BaseTransformationTest::test_never_send_more_than_max_batch_size", "tests/unit/dynamodb/test_table.py::BaseTransformationTest::test_repeated_flushing_on_exit", "tests/unit/dynamodb/test_table.py::BaseTransformationTest::test_unprocessed_items_added_to_next_batch"]
|
196a2da7490a1a661a0103b8770bd31e34e147f2
|
{"first_commit_time": 1461569146.0, "pr_title": "dynamodb: add request auto de-duplication for batch_writer", "pr_body": "# Motivation\n\nFor scenarios like parsing some values from several sources like server log, user upload data which might contain value duplication, and write them to dynamoDB as unique values.\n\nYou want to bypass no duplication limitation of single batch write request within `boto3` rather than adding another layer to deal with value duplication.\n\nThe no duplication error looks like\n`botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the BatchWriteItem operation: Provided list of item keys contains duplicates`.\n# Solution\n\nTo de-duplicate request by specifying `auto_dedup=True`\nIt will just write out a single request since all requests are the same in below example.\n\n``` python\n with table.batch_writer(auto_dedup=True) as batch:\n for _ in range(50):\n batch.put_item(\n Item={\n 'account_type': 'anonymous',\n 'username': 'user',\n 'first_name': 'unknown',\n 'last_name': 'unknown'\n }\n )\n```\n", "pr_timeline": [{"time": 1461700380.0, "comment": "I think this is an interesting idea.\n\nWhat about the case where the hash/range key are the same but the other keys are different? We won't detect that case and you'll still get the same error message:\n\n```\n with table.batch_writer(auto_dedup=True) as batch:\n for i in range(50):\n batch.put_item(\n Item={\n 'PRIMARY_KEY': 'same_key',\n 'changing': i,\n }\n )\n```\n\nHowever, if we then only guarantee uniqueness based on a provided hash/range key, we could be silently ignoring items that have the same hash/range key but different values for the other keys.\n\nWhat are your thoughts on that?\n"}, {"time": 1461803808.0, "comment": "@jamesls ,\nThanks for the nice catch :)\nMy thoughts as below but not yet implemented completely,\n\n**drop request item in buffer if its primary key(or composite) values are the same as newly added one.**\n\ninterface might look like\n\n``` python\nwith table.batch_writer(overwrite_by_pkeys=['primary_key', 'sort_key']) as batch:\n batch.put_item(\n Item={\n 'primary_key': 'p1',\n 'sort_key': 's1',\n 'other': '111',\n }\n )\n batch.put_item(\n Item={\n 'primary_key': 'p1',\n 'sort_key': 's1',\n 'other': '222',\n }\n )\n batch.delete_item(\n Key={\n 'primary_key': 'p1',\n 'sort_key': 's2'\n }\n )\n batch.put_item(\n Item={\n 'primary_key': 'p1',\n 'sort_key': 's2',\n 'other': '444',\n }\n )\n```\n\nrequest items after dedup:\n\n``` python\nbatch.put_item(\n Item={\n 'primary_key': 'p1',\n 'sort_key': 's1',\n 'other': '222',\n }\n)\nbatch.put_item(\n Item={\n 'primary_key': 'p1',\n 'sort_key': 's2',\n 'other': '444',\n }\n)\n```\n\nthe reason behind is:\n1. treat batch write as stream of individual puts/deletes, new operations on the same primary key item always overwrite old ones.\n2. so if we drop older request with the same primary key in buffer but keep the latest one, it is acting like eventually consistent with streams of individual puts/deletes without actual operations on old requests. \n\nThen it almost meet user's expectation if they want the same behavior of streams of individual puts/deletes. Thanks for your further advise :)\n"}, {"time": 1461874111.0, "comment": "Ah good idea. I think that would work.\n\nAny concerns @kyleknap @JordonPhillips?\n"}, {"time": 1461956202.0, "comment": "No concerns. Looks good. 
:ship: One more thing that would be nice is could you run the `boto3/scripts/new-change` script to add a changelog entry for this?\n"}, {"time": 1461972176.0, "comment": "> No concerns. Looks good. :ship: One more thing that would be nice is could you run the boto3/scripts/new-change script to add a changelog entry for this?\n\n@kyleknap done !\n"}, {"time": 1462416398.0, "comment": "@jamesls , @kyleknap , @JordonPhillips ,\nThanks for reviewing, please kindly let me know if anything further should be done :)\n"}, {"time": 1462468283.0, "comment": "Thanks! Sorry about not noticing the update before. Merging.\n"}], "issues": {}}
|
boto/boto3
| 74
|
https://github.com/boto/boto3/pull/74
|
boto__boto3-74
|
[]
|
ebc0f95261025aa02c474ec8ffa3e0a0604cb3c6
|
diff --git a/boto3/docs.py b/boto3/docs.py
index 9aace7848a..f5535f5e2d 100644
--- a/boto3/docs.py
+++ b/boto3/docs.py
@@ -52,6 +52,7 @@ def py_type_name(type_name):
:rtype: string
"""
return {
+ 'blob': 'bytes',
'character': 'string',
'double': 'float',
'long': 'integer',
@@ -88,7 +89,7 @@ def py_default(type_name):
}.get(type_name, '...')
-def html_to_rst(html, indent=0, indentFirst=False):
+def html_to_rst(html, indent=0, indent_first=False):
"""
Use bcdoc to convert html to rst.
@@ -96,8 +97,8 @@ def html_to_rst(html, indent=0, indentFirst=False):
:param html: Input HTML to be converted
:type indent: int
:param indent: Number of spaces to indent each line
- :type indentFirst: boolean
- :param indentFirst: Whether to indent the first line
+ :type indent_first: boolean
+ :param indent_first: Whether to indent the first line
:rtype: string
"""
doc = ReSTDocument()
@@ -113,7 +114,7 @@ def html_to_rst(html, indent=0, indentFirst=False):
if indent:
rst = '\n'.join([(' ' * indent) + line for line in rst.splitlines()])
- if not indentFirst:
+ if not indent_first:
rst = rst.strip()
return rst
@@ -563,7 +564,7 @@ def document_operation(operation_model, service_name, operation_name=None,
if description is None:
description = html_to_rst(
operation_model._operation_model.get('documentation', ''),
- indent=6, indentFirst=True)
+ indent=6, indent_first=True)
docs = ' .. py:method:: {0}({1})\n\n{2}\n\n'.format(
operation_name, param_desc, description)
@@ -591,11 +592,97 @@ def document_operation(operation_model, service_name, operation_name=None,
if key in ignore_params:
continue
param_type = py_type_name(value.type_name)
+
+ # Convert the description from HTML to RST (to later be converted
+ # into HTML... don't ask). If the parameter is a nested structure
+ # then we also describe its members.
+ param_desc = html_to_rst(
+ value.documentation, indent=9, indent_first=True)
+ if param_type in ['list', 'dict']:
+ param_desc = ('\n Structure description::\n\n' +
+ ' ' + key + ' = ' +
+ document_structure(
+ key, value, indent=12, indent_first=False) +
+ '\n' + param_desc)
required = key in required_params and 'Required' or 'Optional'
docs += (' :param {0} {1}: *{2}* - {3}\n'.format(
- param_type, key, required,
- html_to_rst(value.documentation, indent=9)))
+ param_type, key, required, param_desc))
if rtype is not None:
- docs += '\n\n :rtype: {0}\n\n'.format(rtype)
+ docs += ' :rtype: {0}\n\n'.format(rtype)
+
+ # Only document the return structure if it isn't a resource. Usually
+ # this means either a list or structure.
+ output_shape = operation_model.output_shape
+ if rtype in ['list', 'dict'] and output_shape is not None:
+ docs += (' :return:\n Structure description::\n\n' +
+ document_structure(None, output_shape, indent=12) + '\n')
+
+ return docs
+
+
+def document_structure(name, shape, indent=0, indent_first=True,
+ parent_type=None, eol='\n'):
+ """
+ Document a nested structure (list or dict) parameter or return value as
+ a snippet of Python code with dummy placeholders. For example:
+
+ {
+ 'Param1': [
+ STRING,
+ ...
+ ],
+ 'Param2': BOOLEAN,
+ 'Param3': {
+ 'Param4': FLOAT,
+ 'Param5': INTEGER
+ }
+ }
+
+ """
+ docs = ''
+
+ # Add spaces if the first line is indented.
+ if indent_first:
+ docs += (' ' * indent)
+
+ if shape.type_name == 'structure':
+ # Only include the name if the parent is also a structure.
+ if parent_type == 'structure':
+ docs += "'" + name + '\': {\n'
+ else:
+ docs += '{\n'
+
+ # Go through each member and recursively process them.
+ for i, member_name in enumerate(shape.members):
+ member_eol = '\n'
+ if i < len(shape.members) - 1:
+ member_eol = ',\n'
+ docs += document_structure(
+ member_name, shape.members[member_name],
+ indent=indent + 2, parent_type=shape.type_name,
+ eol=member_eol)
+ docs += (' ' * indent) + '}' + eol
+ elif shape.type_name == 'list':
+ # Only include the name if the parent is a structure.
+ if parent_type == 'structure':
+ docs += "'" + name + '\': [\n'
+ else:
+ docs += '[\n'
+
+ # Lists have only a single member. Here we document it, plus add
+ # an ellipsis to signify that more of the same member type can be
+ # added in a list.
+ docs += document_structure(
+ None, shape.member, indent=indent + 2, eol=',\n')
+ docs += (' ' * indent) + ' ...\n'
+ docs += (' ' * indent) + ']' + eol
+ else:
+ # It's not a structure or list, so document the type. Here we
+ # try to use the equivalent Python type name for clarity.
+ if name is not None:
+ docs += ("'" + name + '\': ' +
+ py_type_name(shape.type_name).upper() + eol)
+ else:
+ docs += py_type_name(shape.type_name).upper() + eol
return docs
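
As a rough illustration of the recursive rendering idea behind ``document_structure`` above, the following standalone sketch produces the same kind of dummy-placeholder snippet from plain nested dicts (``{'type': ..., 'members': ...}``) rather than real botocore shape objects; the ``render`` name and the simplified shape format are assumptions made for this example only::

    def render(shape, indent=0):
        # Render a dummy-placeholder snippet for a simplified shape dict.
        pad = ' ' * indent
        kind = shape['type']
        if kind == 'structure':
            rows = []
            for name, member in shape['members'].items():
                rows.append(pad + '  ' + "'" + name + "': "
                            + render(member, indent + 2).lstrip())
            return pad + '{\n' + ',\n'.join(rows) + '\n' + pad + '}'
        if kind == 'list':
            # A list documents its single member type followed by an ellipsis.
            return (pad + '[\n' + render(shape['member'], indent + 2) + ',\n'
                    + pad + '  ...\n' + pad + ']')
        return pad + kind.upper()

    print(render({
        'type': 'structure',
        'members': {
            'Param1': {'type': 'list', 'member': {'type': 'string'}},
            'Param2': {'type': 'boolean'},
        }
    }))

Unlike the real helper, this sketch uppercases the raw type name instead of mapping it through ``py_type_name`` first, and it ignores map shapes; it is only meant to show the recursion and indentation pattern.
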
diff --git a/boto3/resources/factory.py b/boto3/resources/factory.py
index 7794fb21f4..aaf2015fc1 100644
--- a/boto3/resources/factory.py
+++ b/boto3/resources/factory.py
@@ -246,16 +246,27 @@ def _create_reference(factory_self, name, reference, service_name,
# References are essentially an action with no request
# or response, so we can re-use the response handlers to
# build up resources from identifiers and data members.
- handler = ResourceHandler('', factory_self, resource_defs,
- service_model, reference.resource)
+ handler = ResourceHandler(reference.resource.path, factory_self,
+ resource_defs, service_model,
+ reference.resource)
+
+ # Are there any identifiers that need access to data members?
+ # This is important when building the resource below since
+ # it requires the data to be loaded.
+ needs_data = any(i.source == 'data' for i in
+ reference.resource.identifiers)
def get_reference(self):
# We need to lazy-evaluate the reference to handle circular
# references between resources. We do this by loading the class
# when first accessed.
- # First, though, we need to see if we have the required
- # identifiers to instantiate the resource reference.
- return handler(self, {}, {})
+ # This is using a *response handler* so we need to make sure
+ # our data is loaded (if possible) and pass that data into
+ # the handler as if it were a response. This allows references
+ # to have their data loaded properly.
+ if needs_data and self.meta.data is None and hasattr(self, 'load'):
+ self.load()
+ return handler(self, {}, self.meta.data)
get_reference.__name__ = str(reference.name)
get_reference.__doc__ = 'TODO'
|
diff --git a/tests/unit/resources/test_factory.py b/tests/unit/resources/test_factory.py
index 17b38b11cd..e2aec290f2 100644
--- a/tests/unit/resources/test_factory.py
+++ b/tests/unit/resources/test_factory.py
@@ -553,7 +553,7 @@ def test_resource_loads_waiters(self):
}
}
}
-
+
defs = {
'Bucket': {}
}
@@ -686,6 +686,67 @@ def test_dangling_resource_inequality(self):
self.assertNotEqual(q1, q2)
self.assertNotEqual(q1, m)
+ def test_dangling_resource_loads_data(self):
+ # Given a loadable resource instance that contains a reference
+ # to another resource which has a resource data path, the
+ # referenced resource should be loaded with all of the data
+ # contained at that path. This allows loading references
+ # which would otherwise not be loadable (missing load method)
+ # and prevents extra load calls for others when we already
+ # have the data available.
+ self.defs = {
+ 'Instance': {
+ 'identifiers': [{'name': 'Id'}],
+ 'has': {
+ 'NetworkInterface': {
+ 'resource': {
+ 'type': 'NetworkInterface',
+ 'identifiers': [
+ {'target': 'Id', 'source': 'data',
+ 'path': 'NetworkInterface.Id'}
+ ],
+ 'path': 'NetworkInterface'
+ }
+ }
+ }
+ },
+ 'NetworkInterface': {
+ 'identifiers': [{'name': 'Id'}],
+ 'shape': 'NetworkInterfaceShape'
+ }
+ }
+ self.model = self.defs['Instance']
+ shape = DenormalizedStructureBuilder().with_members({
+ 'Id': {
+ 'type': 'string',
+ },
+ 'PublicIp': {
+ 'type': 'string'
+ }
+ }).build_model()
+ service_model = mock.Mock()
+ service_model.shape_for.return_value = shape
+
+ cls = self.load('test', 'Instance', self.model, self.defs,
+ service_model)
+ instance = cls('instance-id')
+
+ # Set some data as if we had completed a load action.
+ def set_meta_data():
+ instance.meta.data = {
+ 'NetworkInterface': {
+ 'Id': 'network-interface-id',
+ 'PublicIp': '127.0.0.1'
+ }
+ }
+ instance.load = mock.Mock(side_effect=set_meta_data)
+
+ # Now, get the reference and make sure it has its data
+ # set as expected.
+ interface = instance.network_interface
+ self.assertIsNotNone(interface.meta.data)
+ self.assertEqual(interface.public_ip, '127.0.0.1')
+
class TestServiceResourceSubresources(BaseTestResourceFactory):
def setUp(self):
diff --git a/tests/unit/test_docs.py b/tests/unit/test_docs.py
new file mode 100644
index 0000000000..f5093321a9
--- /dev/null
+++ b/tests/unit/test_docs.py
@@ -0,0 +1,86 @@
+# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"). You
+# may not use this file except in compliance with the License. A copy of
+# the License is located at
+#
+# http://aws.amazon.com/apache2.0/
+#
+# or in the "license" file accompanying this file. This file is
+# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
+# ANY KIND, either express or implied. See the License for the specific
+# language governing permissions and limitations under the License.
+
+from botocore.model import DenormalizedStructureBuilder, ServiceModel
+
+from boto3.docs import py_type_name, document_structure
+from tests import mock, BaseTestCase
+
+
+class TestPythonTypeName(BaseTestCase):
+ def test_structure(self):
+ self.assertEqual('dict', py_type_name('structure'))
+
+ def test_list(self):
+ self.assertEqual('list', py_type_name('list'))
+
+ def test_map(self):
+ self.assertEqual('dict', py_type_name('map'))
+
+ def test_string(self):
+ self.assertEqual('string', py_type_name('string'))
+
+ def test_character(self):
+ self.assertEqual('string', py_type_name('character'))
+
+ def test_blob(self):
+ self.assertEqual('bytes', py_type_name('blob'))
+
+ def test_timestamp(self):
+ self.assertEqual('datetime', py_type_name('timestamp'))
+
+ def test_integer(self):
+ self.assertEqual('integer', py_type_name('integer'))
+
+ def test_long(self):
+ self.assertEqual('integer', py_type_name('long'))
+
+ def test_float(self):
+ self.assertEqual('float', py_type_name('float'))
+
+ def test_double(self):
+ self.assertEqual('float', py_type_name('double'))
+
+
+class TestDocumentStructure(BaseTestCase):
+ def test_nested_structure(self):
+ # Internally this doesn't use an OrderedDict so we can't
+ # test the full output, but we can test whether the
+ # parameters are all included as expected with the correct
+ # types.
+ shape = DenormalizedStructureBuilder().with_members({
+ 'Param1': {
+ 'type': 'list',
+ 'member': {
+ 'type': 'structure',
+ 'members': {
+ 'Param2': {
+ 'type': 'string'
+ },
+ 'Param3': {
+ 'type': 'float'
+ }
+ }
+ }
+ },
+ 'Param4': {
+ 'type': 'blob'
+ }
+ }).build_model()
+
+ doc = document_structure('Test', shape)
+
+ self.assertIn("'Param1': [", doc)
+ self.assertIn("'Param2': STRING", doc)
+ self.assertIn("'Param3': FLOAT", doc)
+ self.assertIn("'Param4': BYTES", doc)
| 2015-03-20T20:39:55
|
{}
|
{"boto3/docs.py": "# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\n\"\"\"\nThis module is used to generate both high and low-level reference\ndocumentation for services. Currently, it does this by inspecting\nthe service and resource models, as well as instantiating dummy\nclients to introspect some values. It is likely to change\nsignificantly in the future!\n\nCurrently this is not used for docstrings, just for the Sphinx\ndocumentation. RST is generated which Sphinx turns into HTML.\n\nThe generated output can be found here:\n\n http://boto3.readthedocs.org/en/latest/\n\n\"\"\"\n\nimport json\nimport os\n\nimport botocore.session\n\nfrom botocore import xform_name\nfrom bcdoc.restdoc import ReSTDocument\n\nimport boto3\n\nfrom .resources.model import ResourceModel\n\n\ndef py_type_name(type_name):\n \"\"\"\n Get the Python type name for a given model type.\n\n >>> py_type_name('list')\n 'list'\n >>> py_type_name('structure')\n 'dict'\n\n :rtype: string\n \"\"\"\n return {\n 'character': 'string',\n 'double': 'float',\n 'long': 'integer',\n 'map': 'dict',\n 'structure': 'dict',\n 'timestamp': 'datetime',\n }.get(type_name, type_name)\n\n\ndef py_default(type_name):\n \"\"\"\n Get the Python default value for a given model type, useful\n for generated examples.\n\n >>> py_default('string')\n '\\'string\\''\n >>> py_default('list')\n '[...]'\n >>> py_default('unknown')\n '...'\n\n :rtype: string\n \"\"\"\n return {\n 'double': '123.0',\n 'long': '123',\n 'integer': '123',\n 'string': \"'string'\",\n 'blob': \"b'bytes'\",\n 'list': '[...]',\n 'map': '{...}',\n 'structure': '{...}',\n 'timestamp': 'datetime(2015, 1, 1)',\n }.get(type_name, '...')\n\n\ndef html_to_rst(html, indent=0, indentFirst=False):\n \"\"\"\n Use bcdoc to convert html to rst.\n\n :type html: string\n :param html: Input HTML to be converted\n :type indent: int\n :param indent: Number of spaces to indent each line\n :type indentFirst: boolean\n :param indentFirst: Whether to indent the first line\n :rtype: string\n \"\"\"\n doc = ReSTDocument()\n\n # TODO: Remove me, temp workaround to fix doc building\n # because of smart quotes that aren't currently supported.\n html = html.replace(u'\\u2019', \"'\")\n html = html.replace(u'\\u2014', '-')\n\n doc.include_doc_string(html)\n rst = doc.getvalue().decode('utf-8')\n\n if indent:\n rst = '\\n'.join([(' ' * indent) + line for line in rst.splitlines()])\n\n if not indentFirst:\n rst = rst.strip()\n\n return rst\n\n\ndef docs_for(service_name):\n \"\"\"\n Generate reference documentation (low and high level) for a service\n by name. 
This generates docs for the latest available version.\n\n :type service_name: string\n :param service_name: The service short-name, like 'ec2'\n :rtype: string\n \"\"\"\n session = botocore.session.get_session()\n service_model = session.get_service_model(service_name)\n\n print('Processing {0}-{1}'.format(service_name, service_model.api_version))\n\n # The following creates an official name of 'Amazon Simple Storage\n # Service (S3)' our of 'Amazon Simple Storage Service' and 'Amazon S3'.\n # It tries to be smart, so for `Amazon DynamoDB' and 'DynamoDB' we would\n # get an official name of 'Amazon DynamoDB'.\n official_name = service_model.metadata.get('serviceFullName')\n short_name = service_model.metadata.get('serviceAbbreviation', '')\n if short_name.startswith('Amazon'):\n short_name = short_name[7:]\n if short_name.startswith('AWS'):\n short_name = short_name[4:]\n if short_name and short_name.lower() not in official_name.lower():\n official_name += ' ({0})'.format(short_name)\n\n docs = '{0}\\n{1}\\n\\n'.format(official_name, '=' * len(official_name))\n\n docs += '.. contents:: Table of Contents\\n :depth: 2\\n\\n'\n\n docs += document_client(service_name, official_name, service_model)\n docs += document_client_waiter(session, official_name, service_name,\n service_model)\n\n filename = (os.path.dirname(__file__) + '/data/resources/'\n '{0}-{1}.resources.json').format(service_name,\n service_model.api_version)\n # We can't use a set here because dicts aren't hashable!\n models = {}\n if os.path.exists(filename):\n data = json.load(open(filename))\n model = ResourceModel(service_name, data['service'], data['resources'])\n\n for collection_model in model.collections:\n collection_model.parent_name = model.name\n models[collection_model.name] = {\n 'type': 'collection',\n 'model': collection_model\n }\n\n docs += document_resource(service_name, official_name, model,\n service_model, session)\n\n # First, collect all the models...\n for name, model in sorted(data['resources'].items(),\n key=lambda i:i[0]):\n resource_model = ResourceModel(name, model, data['resources'])\n\n shape = None\n if resource_model.shape:\n shape = service_model.shape_for(resource_model.shape)\n resource_model.load_rename_map(shape)\n\n if name not in models:\n models[name] = {'type': 'resource', 'model': resource_model}\n\n for collection_model in resource_model.collections:\n collection_model.parent_name = xform_name(resource_model.name)\n\n cname = collection_model.name + 'CollectionManager'\n if cname not in models:\n models[cname] = {'type': 'collection',\n 'model': collection_model}\n\n # Then render them out in alphabetical order.\n for name, item in sorted(models.items()):\n model = item['model']\n if item['type'] == 'resource':\n docs += document_resource(service_name, official_name,\n model, service_model, session)\n elif item['type'] == 'collection':\n docs += document_collection(\n service_name, official_name, model,\n model.resource.model, service_model)\n\n return docs\n\ndef document_client(service_name, official_name, service_model):\n \"\"\"\n Generate low-level client documentation for a service. This generates\n documentation for all available operations.\n \"\"\"\n docs = 'Client\\n------\\n\\n'\n docs += '.. 
py:class:: {0}.Client\\n\\n'.format(service_name)\n docs += ' A low-level client representing {0}::\\n\\n'.format(\n official_name)\n docs += ' import boto3\\n\\n'\n docs += ' {service} = boto3.client(\\'{service}\\')\\n\\n'.format(\n service=service_name)\n\n # TODO: Get this information from the model somehow in the future.\n # For now creating and introspecing a client is a quick and\n # dirty way to access waiters/paginators.\n client = boto3.client(service_name, aws_access_key_id='dummy',\n aws_secret_access_key='dummy',\n region_name='us-east-1')\n\n wdoc = ''\n if client.waiter_names:\n # This gets included in alphabetical order below!\n wdoc += ' .. py:method:: get_waiter(name)\\n\\n'\n wdoc += ' Get a waiter by name. Available waiters:\\n\\n'\n for waiter in client.waiter_names:\n wdoc += ' * `{0}`_\\n'.format(waiter)\n wdoc += '\\n'\n\n waiter_included = False\n for operation_name in service_model.operation_names:\n if not waiter_included and xform_name(operation_name) > 'get_waiter':\n docs += wdoc\n waiter_included = True\n\n operation = service_model.operation_model(operation_name)\n docs += document_operation(\n operation, service_name,\n paginated=client.can_paginate(xform_name(operation_name)),\n example_instance='client', example_response='response')\n\n return docs\n\ndef document_client_waiter(session, official_name, service_name,\n service_model):\n client = boto3.client(service_name, aws_access_key_id='dummy',\n aws_secret_access_key='dummy',\n region_name='us-east-1')\n waiter_spec_doc = ''\n if client.waiter_names:\n waiter_spec_doc = 'Waiter\\n------\\n\\n'\n service_waiter_model = session.get_waiter_model(service_name)\n for waiter in service_waiter_model.waiter_names:\n snake_cased = xform_name(waiter)\n waiter_spec_doc += '{0}\\n{1}\\n\\n'.format(snake_cased,\n '~' * len(snake_cased))\n waiter_model = service_waiter_model.get_waiter(waiter)\n operation_model = service_model.operation_model(\n waiter_model.operation)\n description = (\n ' This polls :py:meth:`{0}.Client.{1}` every {2} '\n 'seconds until a successful state is reached. An error is '\n 'returned after {3} failed checks.'.format(\n service_name, xform_name(waiter_model.operation),\n waiter_model.delay, waiter_model.max_attempts)\n )\n waiter_spec_doc += document_operation(\n operation_model=operation_model, service_name=service_name,\n operation_name='wait', rtype=None, description=description,\n example_instance='client.get_waiter(\\'{0}\\')'.format(\n snake_cased))\n waiter_spec_doc += '\\n'\n\n return waiter_spec_doc\n\ndef document_resource(service_name, official_name, resource_model,\n service_model, session):\n \"\"\"\n Generate reference documentation from a resource model.\n \"\"\"\n model_name = resource_model.name\n title = model_name\n is_service_resource = False\n if model_name == service_name:\n model_name = 'Service'\n title = 'Service Resource'\n is_service_resource = True\n docs = '{0}\\n{1}\\n\\n'.format(title, '-' * len(title))\n docs += '.. 
py:class:: {0}.{1}({2})\\n\\n'.format(\n service_name, model_name, ', '.join(\n [xform_name(i.name) for i in resource_model.identifiers]))\n\n if is_service_resource:\n docs += (' A resource representing {0}::\\n\\n').format(official_name)\n docs += ' import boto3\\n\\n'\n docs += ' {service} = boto3.resource(\\'{service}\\')\\n\\n'.format(\n service=service_name)\n else:\n identifiers = ', '.join(\n [\"'{0}'\".format(xform_name(i.name)) for i in\n resource_model.identifiers])\n docs += (' A resource representing an {0} {1}::\\n\\n').format(\n official_name, model_name)\n docs += ' import boto3\\n\\n'\n docs += (' {service} = boto3.resource(\\'{service}\\')\\n'\n ' {var} = {service}.{model}({identifiers})\\n\\n').format(\n var=xform_name(model_name), service=service_name,\n model=model_name, identifiers=identifiers)\n\n if not is_service_resource:\n docs += (' .. rst-class:: admonition-title\\n\\n Attributes &'\n 'Identifiers\\n\\n Attributes & identifiers provide access'\n ' to the properties of a resource. Attributes are lazy-'\n 'loaded the first time one is accessed via the'\n ' :py:meth:`load` method, if it exists.\\n\\n'\n ' Identifiers:\\n\\n')\n\n for identifier in sorted(resource_model.identifiers,\n key=lambda i:i.name):\n docs += (' .. py:attribute:: {0}\\n\\n (``string``,'\n ' **identifier**) The {1}\\'s {2} identifier. This'\n ' attribute **must** be set for the actions below to'\n ' work.\\n\\n'.format(\n xform_name(identifier.name), resource_model.name,\n identifier.name))\n\n docs += '\\n\\n'\n\n if resource_model.shape:\n docs += ' Attributes:\\n\\n'\n shape = service_model.shape_for(resource_model.shape)\n\n attributes = resource_model.get_attributes(shape)\n for name, (orig_name, member) in sorted(attributes.items()):\n docs += (' .. py:attribute:: {0}\\n\\n (``{1}``)'\n ' {2}\\n\\n').format(\n xform_name(name), py_type_name(member.type_name),\n html_to_rst(member.documentation, indent=6))\n\n docs += (' .. rst-class:: admonition-title\\n\\n Actions\\n\\n Actions'\n ' call operations on resources, automatically handling the'\n ' passing in of arguments set from identifiers and some'\n ' attributes.\\n\\n')\n for action in sorted(resource_model.actions, key=lambda i:i.name):\n docs += document_action(action, service_name, resource_model,\n service_model)\n\n if resource_model.subresources:\n docs += (' .. rst-class:: admonition-title\\n\\n Sub-resources\\n\\n'\n ' Sub-resources are methods that create a new instance of a'\n ' child resource. This resource\\'s identifiers get passed'\n ' along to the child.\\n\\n')\n\n preset = len(resource_model.identifiers)\n\n if resource_model.subresources:\n for subresource in sorted(resource_model.subresources,\n key=lambda i: i.name):\n identifiers = [\n xform_name(i.target) for i in \\\n subresource.resource.identifiers if i.source == 'input']\n docs += ' .. py:method:: {0}({1})\\n\\n'.format(\n subresource.name,\n ', '.join(identifiers))\n docs += (' Create a :py:class:`{0}.{1}`'\n ' instance.\\n\\n').format(service_name,\n subresource.resource.type)\n\n docs += '\\n\\n'\n\n if resource_model.references:\n docs += (' .. rst-class:: admonition-title\\n\\n References\\n\\n'\n ' References are related resource instances that have'\n ' a belongs-to relationship.\\n\\n')\n for ref in sorted(resource_model.references, key=lambda i: i.name):\n docs += (' .. 
py:attribute:: {0}\\n\\n '\n '(:py:class:`{1}.{2}`) The related {3} if set,'\n ' otherwise ``None``.\\n\\n').format(\n xform_name(ref.name), service_name,\n ref.resource.type, ref.resource.type)\n\n if resource_model.collections:\n docs += (' .. rst-class:: admonition-title\\n\\n Collections\\n\\n'\n ' Collections provide an interface to iterate and'\n ' manipulate groups of resources.\\n\\n')\n for collection in sorted(resource_model.collections,\n key=lambda i: i.name):\n docs += (' .. py:attribute:: {0}\\n\\n '\n '(:py:class:`{1}.{2}CollectionManager`)'\n ' A collection of :py:class:`{3}.{4}` instances. This'\n ' collection uses the :py:meth:`{5}.Client.{6}` operation'\n ' to get items.\\n\\n').format(\n xform_name(collection.name), service_name,\n collection.name, service_name,\n collection.resource.type, service_name,\n xform_name(collection.request.operation))\n\n if resource_model.waiters:\n docs += (' .. rst-class:: admonition-title\\n\\n Waiters\\n\\n'\n ' Waiters provide an interface to wait for a resource'\n ' to reach a specific state.\\n\\n')\n service_waiter_model = session.get_waiter_model(service_name)\n for waiter in sorted(resource_model.waiters,\n key=lambda i: i.name):\n docs += document_waiter(waiter, service_name, resource_model,\n service_model, service_waiter_model)\n\n return docs\n\ndef document_collection(service_name, official_name, collection_model,\n resource_model, service_model):\n \"\"\"\n Generate reference documentation about a collection and any\n batch actions it might have.\n \"\"\"\n title = collection_model.name + 'Collection'\n docs = '{0}\\n{1}\\n\\n'.format(title, '-' * len(title))\n docs += '.. py:class:: {0}.{1}CollectionManager()\\n\\n'.format(\n service_name, collection_model.name)\n docs += (' A collection of :py:class:`{0}.{1}` resources for {2}. See'\n ' the'\n ' :py:class:`~boto3.resources.collection.CollectionManager`'\n ' base class for additional methods.\\n\\n'\n ' This collection uses the :py:meth:`{3}.Client.{4}`'\n ' operation to get items, and its parameters can be'\n ' used as filters::\\n\\n').format(\n service_name, resource_model.name, official_name,\n service_name, xform_name(collection_model.request.operation))\n docs += (' for {0} in {1}.{2}.all():\\n'\n ' print({0})\\n\\n').format(\n xform_name(collection_model.resource.type),\n collection_model.parent_name,\n xform_name(collection_model.name),\n xform_name(collection_model.resource.type))\n\n if collection_model.batch_actions:\n docs += (' .. 
rst-class:: admonition-title\\n\\n Batch Actions\\n\\n'\n ' Batch actions provide a way to manipulate groups of'\n ' resources in a single service operation call.\\n\\n')\n for action in sorted(collection_model.batch_actions, key=lambda i:i.name):\n docs += document_action(action, service_name, resource_model,\n service_model)\n\n return docs\n\ndef document_waiter(waiter, service_name, resource_model, service_model,\n service_waiter_model):\n \"\"\"\n Document a resource waiter, including the low-level client waiter\n and parameters.\n \"\"\"\n try:\n waiter_model = service_waiter_model.get_waiter(waiter.waiter_name)\n except:\n print('Cannot get waiter ' + waiter.waiter_name)\n return ''\n\n try:\n operation_model = service_model.operation_model(waiter_model.operation)\n except:\n print('Cannot get operation ' + action.request.operation +\n ' for waiter ' + waiter.waiter_name)\n return ''\n description = (' Waits until this {0} is {1}.\\n'\n ' This method calls ``wait()`` on'\n ' :py:meth:`{2}.Client.get_waiter` using `{3}`_ .').format(\n resource_model.name,\n ' '.join(waiter.name.split('_')[2:]),\n service_name,\n xform_name(waiter.waiter_name))\n\n # Here we split because we only care about top-level parameter names\n ignore_params = [p.target.split('.')[0].strip('[]') for p in waiter.params]\n\n return document_operation(\n operation_model=operation_model, service_name=service_name,\n operation_name=xform_name(waiter.name),\n description=description,\n example_instance = xform_name(resource_model.name),\n ignore_params=ignore_params, rtype=None)\n\ndef document_action(action, service_name, resource_model, service_model,\n action_type='action'):\n \"\"\"\n Document a resource action, including the low-level client operation\n and parameters.\n \"\"\"\n try:\n operation_model = service_model.operation_model(\n action.request.operation)\n except:\n print('Cannot get operation ' + action.request.operation)\n return ''\n\n # Here we split because we only care about top-level parameter names\n ignore_params = [p.target.split('.')[0].strip('[]')\n for p in action.request.params]\n\n rtype = 'dict'\n if action_type == 'action':\n description = (' This method calls'\n ' :py:meth:`{0}.Client.{1}`.').format(\n service_name,\n xform_name(action.request.operation))\n example_response = 'response'\n if action.resource:\n rtype = ':py:class:`{0}.{1}`'.format(\n service_name, action.resource.type)\n example_response = xform_name(action.resource.type)\n\n # Is the response plural? If so we are returning a list!\n if action.path and '[]' in action.path:\n rtype = 'list({0})'.format(rtype)\n\n return document_operation(\n operation_model, service_name, operation_name=xform_name(action.name),\n description=description, ignore_params=ignore_params, rtype=rtype,\n example_instance=xform_name(resource_model.name),\n example_response=example_response)\n\ndef document_operation(operation_model, service_name, operation_name=None,\n description=None, ignore_params=None, rtype='dict',\n paginated=False, example_instance=None,\n example_response=None):\n \"\"\"\n Document an operation. 
The description can be overridden and certain\n params hidden to support documenting resource actions.\n \"\"\"\n params = {}\n if operation_model.input_shape:\n try:\n params = operation_model.input_shape.members\n except AttributeError:\n print('Cannot find input shape for ' + operation_model.name)\n\n if ignore_params is None:\n ignore_params = []\n\n required = []\n if operation_model.input_shape:\n required = operation_model.input_shape.required_members\n required_params = [k for k in params.keys() if k in required and \\\n k not in ignore_params]\n optional_params = [k for k in params.keys() if k not in required and \\\n k not in ignore_params]\n param_desc = ', '.join([\n ', '.join(['{0}=None'.format(k) for k in required_params]),\n ', '.join(['{0}=None'.format(k) for k in optional_params])\n ])\n\n if operation_name is None:\n operation_name = xform_name(operation_model.name)\n\n if description is None:\n description = html_to_rst(\n operation_model._operation_model.get('documentation', ''),\n indent=6, indentFirst=True)\n\n docs = ' .. py:method:: {0}({1})\\n\\n{2}\\n\\n'.format(\n operation_name, param_desc, description)\n\n if paginated:\n docs += ' This operation can be paginated.\\n\\n'\n\n if example_instance:\n dummy_params = []\n for key, value in params.items():\n if key in ignore_params:\n continue\n if key in required_params:\n default = py_default(value.type_name)\n dummy_params.append('{0}={1}'.format(\n key, default))\n docs += ' Example::\\n\\n '\n if example_response is not None:\n docs += '{0} = '.format(example_response)\n docs += '{0}.{1}({2})\\n\\n'.format(example_instance, operation_name,\n ', '.join(dummy_params))\n\n for key, value in params.items():\n # Skip identifiers as these are automatically set!\n if key in ignore_params:\n continue\n param_type = py_type_name(value.type_name)\n required = key in required_params and 'Required' or 'Optional'\n docs += (' :param {0} {1}: *{2}* - {3}\\n'.format(\n param_type, key, required,\n html_to_rst(value.documentation, indent=9)))\n if rtype is not None:\n docs += '\\n\\n :rtype: {0}\\n\\n'.format(rtype)\n\n return docs\n", "boto3/resources/factory.py": "# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\nimport logging\nfrom functools import partial\n\nfrom .action import ServiceAction\nfrom .action import WaiterAction\nfrom .base import ResourceMeta, ServiceResource\nfrom .collection import CollectionFactory\nfrom .model import ResourceModel\nfrom .response import build_identifiers, ResourceHandler\nfrom ..exceptions import ResourceLoadException\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass ResourceFactory(object):\n \"\"\"\n A factory to create new :py:class:`~boto3.resources.base.ServiceResource`\n classes from a :py:class:`~boto3.resources.model.ResourceModel`. There are\n two types of lookups that can be done: one on the service itself (e.g. an\n SQS resource) and another on models contained within the service (e.g. 
an\n SQS Queue resource).\n \"\"\"\n def __init__(self):\n self._collection_factory = CollectionFactory()\n\n def load_from_definition(self, service_name, resource_name, model,\n resource_defs, service_model):\n \"\"\"\n Loads a resource from a model, creating a new\n :py:class:`~boto3.resources.base.ServiceResource` subclass\n with the correct properties and methods, named based on the service\n and resource name, e.g. EC2.Instance.\n\n :type service_name: string\n :param service_name: Name of the service to look up\n :type resource_name: string\n :param resource_name: Name of the resource to look up. For services,\n this should match the ``service_name``.\n :type model: dict\n :param model: The service or resource definition.\n :type resource_defs: dict\n :param resource_defs: The service's resource definitions, used to load\n subresources (e.g. ``sqs.Queue``).\n :type service_model: ``botocore.model.ServiceModel``\n :param service_model: The Botocore service model, required only if the\n resource shape contains members. This is used to\n expose lazy-loaded attributes on the resource.\n :rtype: Subclass of :py:class:`~boto3.resources.base.ServiceResource`\n :return: The service or resource class.\n \"\"\"\n # Set some basic info\n meta = ResourceMeta(service_name)\n attrs = {\n 'meta': meta,\n }\n\n logger.debug('Loading %s:%s', service_name, resource_name)\n\n resource_model = ResourceModel(resource_name, model, resource_defs)\n\n shape = None\n if resource_model.shape:\n shape = service_model.shape_for(resource_model.shape)\n resource_model.load_rename_map(shape)\n\n self._load_identifiers(attrs, meta, resource_model)\n self._load_actions(attrs, resource_model, resource_defs,\n service_model)\n self._load_attributes(attrs, meta, resource_model, service_model)\n self._load_collections(attrs, resource_model, resource_defs,\n service_model)\n self._load_has_relations(attrs, service_name, resource_name,\n resource_model, resource_defs, service_model)\n self._load_waiters(attrs, resource_model)\n\n # Create the name based on the requested service and resource\n cls_name = resource_name\n if service_name != resource_name:\n cls_name = service_name + '.' + cls_name\n\n return type(str(cls_name), (ServiceResource,), attrs)\n\n def _load_identifiers(self, attrs, meta, model):\n \"\"\"\n Populate required identifiers. These are arguments without which\n the resource cannot be used. Identifiers become arguments for\n operations on the resource.\n \"\"\"\n for identifier in model.identifiers:\n meta.identifiers.append(identifier.name)\n attrs[identifier.name] = None\n\n def _load_actions(self, attrs, model, resource_defs, service_model):\n \"\"\"\n Actions on the resource become methods, with the ``load`` method\n being a special case which sets internal data for attributes, and\n ``reload`` is an alias for ``load``.\n \"\"\"\n if model.load:\n attrs['load'] = self._create_action(\n model.load, resource_defs, service_model, is_load=True)\n attrs['reload'] = attrs['load']\n\n for action in model.actions:\n attrs[action.name] = self._create_action(action, resource_defs,\n service_model)\n\n def _load_attributes(self, attrs, meta, model, service_model):\n \"\"\"\n Load resource attributes based on the resource shape. 
The shape\n name is referenced in the resource JSON, but the shape itself\n is defined in the Botocore service JSON, hence the need for\n access to the ``service_model``.\n \"\"\"\n if model.shape:\n shape = service_model.shape_for(model.shape)\n\n attributes = model.get_attributes(shape)\n for name, (orig_name, member) in attributes.items():\n attrs[name] = self._create_autoload_property(orig_name, name)\n\n def _load_collections(self, attrs, model, resource_defs, service_model):\n \"\"\"\n Load resource collections from the model. Each collection becomes\n a :py:class:`~boto3.resources.collection.CollectionManager` instance\n on the resource instance, which allows you to iterate and filter\n through the collection's items.\n \"\"\"\n for collection_model in model.collections:\n attrs[collection_model.name] = self._create_collection(\n attrs['meta'].service_name, model.name,\n collection_model, resource_defs, service_model)\n\n def _load_has_relations(self, attrs, service_name, resource_name,\n model, resource_defs, service_model):\n \"\"\"\n Load related resources, which are defined via a ``has``\n relationship but conceptually come in two forms:\n\n 1. A reference, which is a related resource instance and can be\n ``None``, such as an EC2 instance's ``vpc``.\n 2. A subresource, which is a resource constructor that will always\n return a resource instance which shares identifiers/data with\n this resource, such as ``s3.Bucket('name').Object('key')``.\n \"\"\"\n for reference in model.references:\n # This is a dangling reference, i.e. we have all\n # the data we need to create the resource, so\n # this instance becomes an attribute on the class.\n attrs[reference.name] = self._create_reference(\n reference.resource.type, reference,\n service_name, resource_name, model, resource_defs,\n service_model)\n\n for subresource in model.subresources:\n # This is a sub-resource class you can create\n # by passing in an identifier, e.g. s3.Bucket(name).\n name = subresource.resource.type\n attrs[subresource.name] = self._create_class_partial(\n name, subresource, service_name, resource_name, model,\n resource_defs, service_model)\n\n\n def _load_waiters(self, attrs, model):\n \"\"\"\n Load resource waiters from the model. Each waiter allows you to\n wait until a resource reaches a specific state by polling the state\n of the resource.\n \"\"\"\n for waiter in model.waiters:\n attrs[waiter.name] = self._create_waiter(waiter)\n\n def _create_autoload_property(factory_self, name, snake_cased):\n \"\"\"\n Creates a new property on the resource to lazy-load its value\n via the resource's ``load`` method (if it exists).\n \"\"\"\n # The property loader will check to see if this resource has already\n # been loaded and return the cached value if possible. 
If not, then\n # it first checks to see if it CAN be loaded (raise if not), then\n # calls the load before returning the value.\n def property_loader(self):\n if self.meta.data is None:\n if hasattr(self, 'load'):\n self.load()\n else:\n raise ResourceLoadException(\n '{0} has no load method'.format(self.__class__.__name__))\n\n return self.meta.data.get(name)\n\n property_loader.__name__ = str(snake_cased)\n property_loader.__doc__ = 'TODO'\n return property(property_loader)\n\n def _create_waiter(factory_self, waiter_model):\n \"\"\"\n Creates a new wait method for each resource where both a waiter and\n resource model is defined.\n \"\"\"\n waiter = WaiterAction(waiter_model,\n waiter_resource_name=waiter_model.name)\n def do_waiter(self, *args, **kwargs):\n waiter(self, *args, **kwargs)\n\n do_waiter.__name__ = str(waiter_model.name)\n do_waiter.__doc__ = 'TODO'\n return do_waiter\n\n def _create_collection(factory_self, service_name, resource_name,\n collection_model, resource_defs, service_model):\n \"\"\"\n Creates a new property on the resource to lazy-load a collection.\n \"\"\"\n cls = factory_self._collection_factory.load_from_definition(\n service_name, resource_name, collection_model.name,\n collection_model, resource_defs)\n\n def get_collection(self):\n return cls(collection_model, self, factory_self,\n resource_defs, service_model)\n\n get_collection.__name__ = str(collection_model.name)\n get_collection.__doc__ = 'TODO'\n return property(get_collection)\n\n def _create_reference(factory_self, name, reference, service_name,\n resource_name, model, resource_defs, service_model):\n \"\"\"\n Creates a new property on the resource to lazy-load a reference.\n \"\"\"\n # References are essentially an action with no request\n # or response, so we can re-use the response handlers to\n # build up resources from identifiers and data members.\n handler = ResourceHandler('', factory_self, resource_defs,\n service_model, reference.resource)\n\n def get_reference(self):\n # We need to lazy-evaluate the reference to handle circular\n # references between resources. We do this by loading the class\n # when first accessed.\n # First, though, we need to see if we have the required\n # identifiers to instantiate the resource reference.\n return handler(self, {}, {})\n\n get_reference.__name__ = str(reference.name)\n get_reference.__doc__ = 'TODO'\n return property(get_reference)\n\n def _create_class_partial(factory_self, name, subresource,\n service_name, resource_name, model,\n resource_defs, service_model):\n \"\"\"\n Creates a new method which acts as a functools.partial, passing\n along the instance's low-level `client` to the new resource\n class' constructor.\n \"\"\"\n # We need a new method here because we want access to the\n # instance's client.\n def create_resource(self, *args, **kwargs):\n positional_args = []\n\n # We lazy-load the class to handle circular references.\n resource_cls = factory_self.load_from_definition(\n service_name, name, resource_defs.get(name, {}),\n resource_defs, service_model)\n\n # Assumes that identifiers are in order, which lets you do\n # e.g. ``sqs.Queue('foo').Message('bar')`` to create a new message\n # linked with the ``foo`` queue and which has a ``bar`` receipt\n # handle. 
If we did kwargs here then future positional arguments\n # would lead to failure.\n identifiers = subresource.resource.identifiers\n if identifiers is not None:\n for identifier, value in build_identifiers(identifiers, self):\n positional_args.append(value)\n\n return partial(resource_cls, *positional_args,\n client=self.meta.client)(*args, **kwargs)\n\n create_resource.__name__ = str(name)\n create_resource.__doc__ = 'TODO'\n return create_resource\n\n def _create_action(factory_self, action_model, resource_defs,\n service_model, is_load=False):\n \"\"\"\n Creates a new method which makes a request to the underlying\n AWS service.\n \"\"\"\n # Create the action in in this closure but before the ``do_action``\n # method below is invoked, which allows instances of the resource\n # to share the ServiceAction instance.\n action = ServiceAction(action_model, factory=factory_self,\n resource_defs=resource_defs, service_model=service_model)\n\n # A resource's ``load`` method is special because it sets\n # values on the resource instead of returning the response.\n if is_load:\n # We need a new method here because we want access to the\n # instance via ``self``.\n def do_action(self, *args, **kwargs):\n response = action(self, *args, **kwargs)\n self.meta.data = response\n else:\n # We need a new method here because we want access to the\n # instance via ``self``.\n def do_action(self, *args, **kwargs):\n response = action(self, *args, **kwargs)\n\n if hasattr(self, 'load'):\n # Clear cached data. It will be reloaded the next\n # time that an attribute is accessed.\n # TODO: Make this configurable in the future?\n self.meta.data = None\n\n return response\n\n do_action.__name__ = str(action_model.name)\n do_action.__doc__ = 'TODO'\n return do_action\n"}
|
{"boto3/docs.py": [{"type": "function", "name": "document_structure", "lines": [623, 688], "signature": "def document_structure(name, shape, indent=0, indent_first=True, parent_type=None, eol='\\n'):", "doc": "Document a nested structure (list or dict) parameter or return value as\na snippet of Python code with dummy placeholders. For example:\n\n {\n 'Param1': [\n STRING,\n ...\n ],\n 'Param2': BOOLEAN,\n 'Param3': {\n 'Param4': FLOAT,\n 'Param5': INTEGER\n }\n }"}]}
| null |
["tests/unit/resources/test_factory.py::TestResourceFactory::test_can_instantiate_service_resource", "tests/unit/resources/test_factory.py::TestResourceFactory::test_factory_creates_dangling_resources", "tests/unit/resources/test_factory.py::TestResourceFactory::test_factory_creates_properties", "tests/unit/resources/test_factory.py::TestResourceFactory::test_factory_fails_on_clobber_action", "tests/unit/resources/test_factory.py::TestResourceFactory::test_factory_renames_on_clobber_identifier", "tests/unit/resources/test_factory.py::TestResourceFactory::test_factory_sets_identifiers", "tests/unit/resources/test_factory.py::TestResourceFactory::test_factory_sets_service_name", "tests/unit/resources/test_factory.py::TestResourceFactory::test_get_resource_returns_resource_class", "tests/unit/resources/test_factory.py::TestResourceFactory::test_get_service_returns_resource_class", "tests/unit/resources/test_factory.py::TestResourceFactory::test_identifiers_in_repr", "tests/unit/resources/test_factory.py::TestResourceFactory::test_non_service_resource_missing_defs", "tests/unit/resources/test_factory.py::TestResourceFactory::test_resource_action_clears_data", "tests/unit/resources/test_factory.py::TestResourceFactory::test_resource_action_leaves_data", "tests/unit/resources/test_factory.py::TestResourceFactory::test_resource_calls_action", "tests/unit/resources/test_factory.py::TestResourceFactory::test_resource_lazy_loads_properties", "tests/unit/resources/test_factory.py::TestResourceFactory::test_resource_lazy_properties_missing_load", "tests/unit/resources/test_factory.py::TestResourceFactory::test_resource_loads_collections", "tests/unit/resources/test_factory.py::TestResourceFactory::test_resource_loads_references", "tests/unit/resources/test_factory.py::TestResourceFactory::test_resource_loads_waiters", "tests/unit/resources/test_factory.py::TestResourceFactory::test_resource_meta_repr", "tests/unit/resources/test_factory.py::TestResourceFactory::test_resource_meta_unique", "tests/unit/resources/test_factory.py::TestResourceFactory::test_resource_waiter_calls_waiter_method", "tests/unit/resources/test_factory.py::TestResourceFactory::test_subresource_requires_only_identifier", "tests/unit/resources/test_factory.py::TestResourceFactoryDanglingResource::test_dangling_resource_create_with_kwarg", "tests/unit/resources/test_factory.py::TestResourceFactoryDanglingResource::test_dangling_resource_equality", "tests/unit/resources/test_factory.py::TestResourceFactoryDanglingResource::test_dangling_resource_inequality", "tests/unit/resources/test_factory.py::TestResourceFactoryDanglingResource::test_dangling_resource_loads_data", "tests/unit/resources/test_factory.py::TestResourceFactoryDanglingResource::test_dangling_resource_raises_for_unknown_arg", "tests/unit/resources/test_factory.py::TestResourceFactoryDanglingResource::test_dangling_resource_requires_identifier", "tests/unit/resources/test_factory.py::TestResourceFactoryDanglingResource::test_dangling_resource_shares_client", "tests/unit/resources/test_factory.py::TestResourceFactoryDanglingResource::test_dangling_resources_create_resource_instance", "tests/unit/resources/test_factory.py::TestServiceResourceSubresources::test_contains_all_subresources", "tests/unit/resources/test_factory.py::TestServiceResourceSubresources::test_subresource_custom_name", "tests/unit/resources/test_factory.py::TestServiceResourceSubresources::test_subresource_missing_all_subresources", "tests/unit/test_docs.py::TestPythonTypeName::test_blob", 
"tests/unit/test_docs.py::TestPythonTypeName::test_character", "tests/unit/test_docs.py::TestPythonTypeName::test_double", "tests/unit/test_docs.py::TestPythonTypeName::test_float", "tests/unit/test_docs.py::TestPythonTypeName::test_integer", "tests/unit/test_docs.py::TestPythonTypeName::test_list", "tests/unit/test_docs.py::TestPythonTypeName::test_long", "tests/unit/test_docs.py::TestPythonTypeName::test_map", "tests/unit/test_docs.py::TestPythonTypeName::test_string", "tests/unit/test_docs.py::TestPythonTypeName::test_structure", "tests/unit/test_docs.py::TestPythonTypeName::test_timestamp", "tests/unit/test_docs.py::TestDocumentStructure::test_nested_structure"]
|
[]
|
196a2da7490a1a661a0103b8770bd31e34e147f2
|
{"first_commit_time": 1426883624.0, "pr_title": "Load reference data if a resource path is defined", "pr_body": "This change allows references with a JMESPath query set on the reference\nresource path attribute to be loaded with data at instantiation time if\nthat data is present in the parent (via `meta.data`). If the data has\nnot yet been loaded and the parent is loadable, then a `load` operation\nis incurred.\n\nBefore:\n\n``` python\n>>> ni = ec2.NetworkInterface('abc123')\n>>> ni.association.public_ip\nResourceLoadException: ec2.NetworkInterfaceAssociation has no load method\n```\n\nAfter:\n\n``` python\n>>> ni = ec2.NetworkInterface('abc123')\n>>> ni.association.public_ip\n'127.0.0.1'\n```\n\nAdded a test to ensure this works as expected.\n\n@kyleknap @jamesls please have a look.\n", "pr_timeline": [{"time": 1427304951.0, "comment": "[](https://coveralls.io/builds/2191968)\n\nCoverage increased (+0.01%) to 98.7% when pulling **7dfd42f915c6bc06a068a7bee4111e22b055b119 on ref-load** into **ebc0f95261025aa02c474ec8ffa3e0a0604cb3c6 on develop**.\n"}, {"time": 1427304994.0, "comment": "@jamesls mind taking another look? Thanks! :+1:\n"}, {"time": 1427315270.0, "comment": ":shipit: Looks good, some small suggestions but otherwise looks good.\n"}], "issues": {}}
|
|
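For the boto3 row above ("Load reference data if a resource path is defined"), a minimal illustrative sketch of the behaviour change described in its PR body; the network interface id is the placeholder used there, and it assumes boto3 with configured AWS credentials:

```python
import boto3

ec2 = boto3.resource("ec2")
# "abc123" is the placeholder id from the PR body, not a real interface.
ni = ec2.NetworkInterface("abc123")

# Before the change: ni.association.public_ip raised
#   ResourceLoadException: ec2.NetworkInterfaceAssociation has no load method
# After the change: the reference is built from the parent's cached data
# (loading the parent first if needed), so the attribute resolves.
print(ni.association.public_ip)
```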
conan-io/conan
| 10,154
|
https://github.com/conan-io/conan/pull/10154
|
conan-io__conan-10154
|
[]
|
51a077458b697dcd39e5eb64f1d33a6c52584081
|
diff --git a/conan/tools/files/__init__.py b/conan/tools/files/__init__.py
index d6c2bf10208..7a84ecda977 100644
--- a/conan/tools/files/__init__.py
+++ b/conan/tools/files/__init__.py
@@ -3,3 +3,4 @@
from conan.tools.files.patches import patch, apply_conandata_patches
from conan.tools.files.cpp_package import CppPackage
from conan.tools.files.packager import AutoPackager
+from conan.tools.files.symlinks import symlinks
diff --git a/conan/tools/files/symlinks/__init__.py b/conan/tools/files/symlinks/__init__.py
new file mode 100644
index 00000000000..d826e74d6f8
--- /dev/null
+++ b/conan/tools/files/symlinks/__init__.py
@@ -0,0 +1,2 @@
+from conan.tools.files.symlinks.symlinks import absolute_to_relative_symlinks, \
+ remove_external_symlinks, remove_broken_symlinks, get_symlinks
diff --git a/conan/tools/files/symlinks/symlinks.py b/conan/tools/files/symlinks/symlinks.py
new file mode 100644
index 00000000000..98c4b2040a1
--- /dev/null
+++ b/conan/tools/files/symlinks/symlinks.py
@@ -0,0 +1,52 @@
+import os
+
+
+def get_symlinks(base_folder):
+ """Return the absolute path to the symlink files in base_folder"""
+ for (root, dirnames, filenames) in os.walk(base_folder):
+ for el in filenames + dirnames:
+ fullpath = os.path.join(root, el)
+ if os.path.islink(fullpath):
+ yield fullpath
+
+
+def _path_inside(base, folder):
+ base = os.path.abspath(base)
+ folder = os.path.abspath(folder)
+ return os.path.commonprefix([base, folder]) == base
+
+
+def absolute_to_relative_symlinks(conanfile, base_folder):
+ """Convert the symlinks with absolute paths to relative if they are pointing to a file or
+ directory inside the 'base_folder'. Any absolute symlink pointing outside the 'base_folder'
+ will be ignored"""
+ for fullpath in get_symlinks(base_folder):
+ link_target = os.readlink(fullpath)
+ if not os.path.isabs(link_target):
+ continue
+ folder_of_symlink = os.path.dirname(fullpath)
+ if _path_inside(base_folder, link_target):
+ os.unlink(fullpath)
+ new_link = os.path.relpath(link_target, folder_of_symlink)
+ os.symlink(new_link, fullpath)
+
+
+def remove_external_symlinks(conanfile, base_folder=None):
+ """Remove the symlinks to files that point outside the 'base_folder', no matter if relative or
+ absolute"""
+ for fullpath in get_symlinks(base_folder):
+ link_target = os.readlink(fullpath)
+ if not os.path.isabs(link_target):
+ link_target = os.path.join(base_folder, link_target)
+ if not _path_inside(base_folder, link_target):
+ os.unlink(fullpath)
+
+
+def remove_broken_symlinks(conanfile, base_folder=None):
+ """Remove the broken symlinks, no matter if relative or absolute"""
+ for fullpath in get_symlinks(base_folder):
+ link_target = os.readlink(fullpath)
+ if not os.path.isabs(link_target):
+ link_target = os.path.join(base_folder, link_target)
+ if not os.path.exists(link_target):
+ os.unlink(fullpath)
|
diff --git a/conans/test/unittests/tools/files/test_symlinks.py b/conans/test/unittests/tools/files/test_symlinks.py
new file mode 100644
index 00000000000..7b7398a85b3
--- /dev/null
+++ b/conans/test/unittests/tools/files/test_symlinks.py
@@ -0,0 +1,122 @@
+import os
+
+import pytest
+
+from conan import tools
+from conan.tools.files import mkdir
+from conans.test.utils.test_files import temp_folder
+
+
[email protected]
+def folders():
+ tmp = temp_folder()
+ files = ["foo/var/file.txt"]
+ outside_folder = temp_folder()
+ symlinks = [
+ (os.path.join(tmp, "foo/var/file.txt"), "foo/var/other/absolute.txt"), # Absolute link
+ (os.path.join(tmp, "foo/var"), "foo/var/other/other/myfolder"), # Absolute link folder
+ (os.path.join(tmp, "foo/var/file.txt"), "foo/absolute.txt"), # Absolute link
+ ("foo/var/file.txt", "foo/var/other/relative.txt"), # Relative link
+ ("missing.txt", "foo/var/other/broken.txt"), # Broken link
+ (outside_folder, "foo/var/other/absolute_outside"), # Absolute folder outside the folder
+ ("../../../../../outside", "foo/absolute_outside"), # Relative folder outside the folder
+ ]
+ # Create the files and symlinks
+ for path in files:
+ mkdir(None, os.path.dirname(os.path.join(tmp, path)))
+ with open(os.path.join(tmp, path), "w") as fl:
+ fl.write("foo")
+
+ for link_dst, linked_file in symlinks:
+ mkdir(None, os.path.dirname(os.path.join(tmp, linked_file)))
+ os.symlink(link_dst, os.path.join(tmp, linked_file))
+ return tmp, outside_folder
+
+
+def test_absolute_to_relative_symlinks(folders):
+ """If a symlink is absolute but relative to a file or folder that is contained in
+ the base folder, we can make it relative"""
+
+ folder, outside_folder = folders
+ # Transform the absolute symlinks to relative
+ tools.files.symlinks.absolute_to_relative_symlinks(None, folder)
+
+ # Check the results
+ linked_to = os.readlink(os.path.join(folder, "foo/var/other/absolute.txt")).replace("\\", "/")
+ assert linked_to == "../file.txt"
+
+ linked_to = os.readlink(os.path.join(folder, "foo/var/other/other/myfolder")).replace("\\", "/")
+ assert linked_to == "../.."
+
+ linked_to = os.readlink(os.path.join(folder, "foo/absolute.txt")).replace("\\", "/")
+ assert linked_to == "var/file.txt"
+
+ linked_to = os.readlink(os.path.join(folder, "foo/var/other/relative.txt")).replace("\\", "/")
+ assert linked_to == "foo/var/file.txt"
+
+ linked_to = os.readlink(os.path.join(folder, "foo/var/other/broken.txt"))
+ assert linked_to == "missing.txt"
+
+ linked_to = os.readlink(os.path.join(folder, "foo/var/other/absolute_outside"))
+ assert linked_to == outside_folder
+
+
+def test_remove_external_symlinks(folders):
+
+ folder, outside_folder = folders
+ # Remove the external symlinks
+ tools.files.symlinks.remove_external_symlinks(None, folder)
+
+ # Check the results, these are kept the same
+ linked_to = os.readlink(os.path.join(folder, "foo/var/other/absolute.txt"))
+ assert linked_to == os.path.join(folder, "foo/var/file.txt")
+
+ linked_to = os.readlink(os.path.join(folder, "foo/var/other/other/myfolder"))
+ assert linked_to == os.path.join(folder, "foo/var")
+
+ linked_to = os.readlink(os.path.join(folder, "foo/absolute.txt"))
+ assert linked_to == os.path.join(folder, "foo/var/file.txt")
+
+ linked_to = os.readlink(os.path.join(folder, "foo/var/other/relative.txt"))
+ assert linked_to == "foo/var/file.txt"
+
+ linked_to = os.readlink(os.path.join(folder, "foo/var/other/broken.txt"))
+ assert linked_to == "missing.txt"
+
+ # This one is removed
+ assert not os.path.islink(os.path.join(folder, "foo/var/other/absolute_outside"))
+ assert not os.path.exists(os.path.join(folder, "foo/var/other/absolute_outside"))
+
+ # This one is removed
+ assert not os.path.islink(os.path.join(folder, "foo/absolute_outside"))
+ assert not os.path.exists(os.path.join(folder, "foo/absolute_outside"))
+
+
+def test_remove_broken_symlinks(folders):
+ folder, outside_folder = folders
+ # Remove the external symlinks
+ tools.files.symlinks.remove_broken_symlinks(None, folder)
+
+ # Check the results, these are kept the same
+ linked_to = os.readlink(os.path.join(folder, "foo/var/other/absolute.txt"))
+ assert linked_to == os.path.join(folder, "foo/var/file.txt")
+
+ linked_to = os.readlink(os.path.join(folder, "foo/var/other/other/myfolder"))
+ assert linked_to == os.path.join(folder, "foo/var")
+
+ linked_to = os.readlink(os.path.join(folder, "foo/absolute.txt"))
+ assert linked_to == os.path.join(folder, "foo/var/file.txt")
+
+ linked_to = os.readlink(os.path.join(folder, "foo/var/other/relative.txt"))
+ assert linked_to == "foo/var/file.txt"
+
+ # This one is removed
+ assert not os.path.islink(os.path.join(folder, "foo/var/other/broken.txt"))
+ assert not os.path.exists(os.path.join(folder, "foo/var/other/broken.txt"))
+
+ linked_to = os.readlink(os.path.join(folder, "foo/var/other/absolute_outside"))
+ assert linked_to == outside_folder
+
+ # This is broken also so it is also removed
+ assert not os.path.islink(os.path.join(folder, "foo/absolute_outside"))
+ assert not os.path.exists(os.path.join(folder, "foo/absolute_outside"))
| 2021-12-10T13:11:56
|
{}
|
{"conan/tools/files/__init__.py": "from conan.tools.files.files import load, save, mkdir, ftp_download, download, get, rename, \\\n load_toolchain_args, save_toolchain_args, chdir\nfrom conan.tools.files.patches import patch, apply_conandata_patches\nfrom conan.tools.files.cpp_package import CppPackage\nfrom conan.tools.files.packager import AutoPackager\n", "conan/tools/files/symlinks/__init__.py": null, "conan/tools/files/symlinks/symlinks.py": null}
|
{"conan/tools/files/symlinks/symlinks.py": [{"type": "function", "name": "get_symlinks", "lines": [4, 10], "signature": "def get_symlinks(base_folder):", "doc": "Return the absolute path to the symlink files in base_folder"}, {"type": "function", "name": "_path_inside", "lines": [13, 16], "signature": "def _path_inside(base, folder):", "doc": ""}, {"type": "function", "name": "absolute_to_relative_symlinks", "lines": [19, 31], "signature": "def absolute_to_relative_symlinks(conanfile, base_folder):", "doc": "Convert the symlinks with absolute paths to relative if they are pointing to a file or\ndirectory inside the 'base_folder'. Any absolute symlink pointing outside the 'base_folder'\nwill be ignored"}, {"type": "function", "name": "remove_external_symlinks", "lines": [34, 42], "signature": "def remove_external_symlinks(conanfile, base_folder=None):", "doc": "Remove the symlinks to files that point outside the 'base_folder', no matter if relative or\nabsolute"}, {"type": "function", "name": "remove_broken_symlinks", "lines": [45, 52], "signature": "def remove_broken_symlinks(conanfile, base_folder=None):", "doc": "Remove the broken symlinks, no matter if relative or absolute"}]}
| null |
["conans/test/unittests/tools/files/test_symlinks.py::test_absolute_to_relative_symlinks", "conans/test/unittests/tools/files/test_symlinks.py::test_remove_external_symlinks", "conans/test/unittests/tools/files/test_symlinks.py::test_remove_broken_symlinks"]
|
[]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1639141602.0, "pr_title": "Ported symlinks tools from #10125", "pr_body": "Changelog: Feature: Provided several `conan.tools.files` functions to manage symlinks: Transform absolute to relative symlinks, remove broken symlinks, remove external symlinks and get the symlinks in a folder. These tools will help migrate to Conan 2.0 where the package files won't be automatically cleaned from broken absolute symlinks or external symlinks.\r\nDocs: https://github.com/conan-io/docs/pull/2343\r\n\r\nPending merge of https://github.com/conan-io/conan/pull/10125\r\n\r\nPENDING DOCUMENTATION:\r\n\r\n- Document tools\r\n- Document the migration guide", "pr_timeline": [], "issues": {}}
|
|
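A minimal, hypothetical usage sketch of the symlink helpers added by the conan-io/conan#10154 patch above; the recipe itself is made up, only the import path, function names, and signatures come from the diff, and it assumes a Conan 1.x recipe:

```python
from conans import ConanFile
from conan.tools.files.symlinks import (absolute_to_relative_symlinks,
                                        remove_broken_symlinks,
                                        remove_external_symlinks)


class PkgConan(ConanFile):
    # Hypothetical recipe, shown only to illustrate where the helpers fit.
    name = "pkg"
    version = "0.1"

    def package(self):
        # Make absolute links that stay inside the package relocatable, then
        # drop links that escape the package folder or point to missing files.
        absolute_to_relative_symlinks(self, self.package_folder)
        remove_external_symlinks(self, self.package_folder)
        remove_broken_symlinks(self, self.package_folder)
```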
conan-io/conan
| 10,437
|
https://github.com/conan-io/conan/pull/10437
|
conan-io__conan-10437
|
[]
|
9c34ea7568ec8b7eb36a3fbc0fd2ec8a30716830
|
diff --git a/conan/tools/microsoft/__init__.py b/conan/tools/microsoft/__init__.py
index eb506393f75..e86a8b19dd3 100644
--- a/conan/tools/microsoft/__init__.py
+++ b/conan/tools/microsoft/__init__.py
@@ -1,6 +1,6 @@
from conan.tools.microsoft.toolchain import MSBuildToolchain
from conan.tools.microsoft.msbuild import MSBuild
from conan.tools.microsoft.msbuilddeps import MSBuildDeps
-from conan.tools.microsoft.visual import msvc_runtime_flag, VCVars, is_msvc
+from conan.tools.microsoft.visual import msvc_runtime_flag, VCVars, is_msvc, is_msvc_static_runtime
from conan.tools.microsoft.subsystems import subsystem_path
from conan.tools.microsoft.layout import vs_layout
diff --git a/conan/tools/microsoft/visual.py b/conan/tools/microsoft/visual.py
index 9dbdc5986fa..fefb5df6590 100644
--- a/conan/tools/microsoft/visual.py
+++ b/conan/tools/microsoft/visual.py
@@ -181,3 +181,11 @@ def is_msvc(conanfile):
"""
settings = conanfile.settings
return settings.get_safe("compiler") in ["Visual Studio", "msvc"]
+
+
+def is_msvc_static_runtime(conanfile):
+ """ Validate when building with Visual Studio or msvc and MT on runtime
+ :param conanfile: ConanFile instance
+ :return: True, if msvc + runtime MT. Otherwise, False
+ """
+ return is_msvc(conanfile) and "MT" in msvc_runtime_flag(conanfile)
|
diff --git a/conans/test/unittests/tools/microsoft/test_msbuild.py b/conans/test/unittests/tools/microsoft/test_msbuild.py
index f6aab1c5dff..14bfbb02a47 100644
--- a/conans/test/unittests/tools/microsoft/test_msbuild.py
+++ b/conans/test/unittests/tools/microsoft/test_msbuild.py
@@ -4,10 +4,10 @@
import pytest
from mock import Mock
-from conan.tools.microsoft import MSBuild, MSBuildToolchain, is_msvc
+from conan.tools.microsoft import MSBuild, MSBuildToolchain, is_msvc, is_msvc_static_runtime
from conans.model.conf import ConfDefinition, Conf
from conans.model.env_info import EnvValues
-from conans.test.utils.mocks import ConanFileMock, MockSettings
+from conans.test.utils.mocks import ConanFileMock, MockSettings, MockOptions, MockConanfile
from conans.test.utils.test_files import temp_folder
from conans.tools import load
from conans import ConanFile, Settings
@@ -189,3 +189,23 @@ def test_is_msvc(compiler, expected):
conanfile.initialize(settings, EnvValues())
conanfile.settings.compiler = compiler
assert is_msvc(conanfile) == expected
+
+
[email protected]("compiler,shared,runtime,build_type,expected", [
+ ("Visual Studio", True, "MT", "Release", True),
+ ("msvc", True, "static", "Release", True),
+ ("Visual Studio", False, "MT", "Release", True),
+ ("Visual Studio", True, "MD", "Release", False),
+ ("msvc", True, "static", "Debug", True),
+ ("clang", True, None, "Debug", False),
+])
+def test_is_msvc_static_runtime(compiler, shared, runtime, build_type, expected):
+ options = MockOptions({"shared": shared})
+ settings = MockSettings({"build_type": build_type,
+ "arch": "x86_64",
+ "compiler": compiler,
+ "compiler.runtime": runtime,
+ "compiler.version": "17",
+ "cppstd": "17"})
+ conanfile = MockConanfile(settings, options)
+ assert is_msvc_static_runtime(conanfile) == expected
| 2022-01-26T17:39:46
|
{}
|
{"conan/tools/microsoft/__init__.py": "from conan.tools.microsoft.toolchain import MSBuildToolchain\nfrom conan.tools.microsoft.msbuild import MSBuild\nfrom conan.tools.microsoft.msbuilddeps import MSBuildDeps\nfrom conan.tools.microsoft.visual import msvc_runtime_flag, VCVars, is_msvc\nfrom conan.tools.microsoft.subsystems import subsystem_path\nfrom conan.tools.microsoft.layout import vs_layout\n", "conan/tools/microsoft/visual.py": "import os\nimport textwrap\n\nfrom conans.client.tools import vs_installation_path\nfrom conans.errors import ConanException\n\nCONAN_VCVARS_FILE = \"conanvcvars.bat\"\n\n\ndef msvc_version_to_vs_ide_version(version):\n _visuals = {'190': '14',\n '191': '15',\n '192': '16',\n '193': '17'}\n return _visuals[str(version)]\n\n\nclass VCVars:\n def __init__(self, conanfile):\n self._conanfile = conanfile\n\n def generate(self, scope=\"build\"):\n \"\"\"\n write a conanvcvars.bat file with the good args from settings\n \"\"\"\n conanfile = self._conanfile\n os_ = conanfile.settings.get_safe(\"os\")\n if os_ != \"Windows\":\n return\n\n compiler = conanfile.settings.get_safe(\"compiler\")\n if compiler != \"Visual Studio\" and compiler != \"msvc\":\n return\n\n vs_version = vs_ide_version(conanfile)\n vcvarsarch = vcvars_arch(conanfile)\n vcvars_ver = _vcvars_vers(conanfile, compiler, vs_version)\n\n vs_install_path = conanfile.conf[\"tools.microsoft.msbuild:installation_path\"]\n # The vs_install_path is like\n # C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\n # C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\n # C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\n vcvars = vcvars_command(vs_version, architecture=vcvarsarch, platform_type=None,\n winsdk_version=None, vcvars_ver=vcvars_ver,\n vs_install_path=vs_install_path)\n\n content = textwrap.dedent(\"\"\"\\\n @echo off\n {}\n \"\"\".format(vcvars))\n from conan.tools.env.environment import create_env_script\n create_env_script(conanfile, content, CONAN_VCVARS_FILE, scope)\n\n\ndef vs_ide_version(conanfile):\n compiler = conanfile.settings.get_safe(\"compiler\")\n compiler_version = (conanfile.settings.get_safe(\"compiler.base.version\") or\n conanfile.settings.get_safe(\"compiler.version\"))\n if compiler == \"msvc\":\n toolset_override = conanfile.conf[\"tools.microsoft.msbuild:vs_version\"]\n if toolset_override:\n visual_version = toolset_override\n else:\n visual_version = msvc_version_to_vs_ide_version(compiler_version)\n else:\n visual_version = compiler_version\n return visual_version\n\n\ndef msvc_runtime_flag(conanfile):\n settings = conanfile.settings\n compiler = settings.get_safe(\"compiler\")\n runtime = settings.get_safe(\"compiler.runtime\")\n if compiler == \"Visual Studio\":\n return runtime\n if compiler == \"msvc\" or compiler == \"intel-cc\":\n runtime_type = settings.get_safe(\"compiler.runtime_type\")\n runtime = \"MT\" if runtime == \"static\" else \"MD\"\n if runtime_type == \"Debug\":\n runtime = \"{}d\".format(runtime)\n return runtime\n\n\ndef vcvars_command(version, architecture=None, platform_type=None, winsdk_version=None,\n vcvars_ver=None, start_dir_cd=True, vs_install_path=None):\n \"\"\" conan-agnostic construction of vcvars command\n https://docs.microsoft.com/en-us/cpp/build/building-on-the-command-line\n \"\"\"\n # TODO: This comes from conans/client/tools/win.py vcvars_command()\n cmd = []\n if start_dir_cd:\n cmd.append('set \"VSCMD_START_DIR=%CD%\" &&')\n\n # The \"call\" is useful in case it is called from another .bat script\n 
cmd.append('call \"%s\" ' % _vcvars_path(version, vs_install_path))\n if architecture:\n cmd.append(architecture)\n if platform_type:\n cmd.append(platform_type)\n if winsdk_version:\n cmd.append(winsdk_version)\n if vcvars_ver:\n cmd.append(\"-vcvars_ver=%s\" % vcvars_ver)\n return \" \".join(cmd)\n\n\ndef _vcvars_path(version, vs_install_path):\n # TODO: This comes from conans/client/tools/win.py vcvars_command()\n vs_path = vs_install_path or vs_installation_path(version)\n if not vs_path or not os.path.isdir(vs_path):\n raise ConanException(\"VS non-existing installation: Visual Studio %s\" % version)\n\n if int(version) > 14:\n vcpath = os.path.join(vs_path, \"VC/Auxiliary/Build/vcvarsall.bat\")\n else:\n vcpath = os.path.join(vs_path, \"VC/vcvarsall.bat\")\n return vcpath\n\n\ndef vcvars_arch(conanfile):\n \"\"\"\n computes the vcvars command line architecture based on conanfile settings (host) and\n settings_build\n :param conanfile:\n :return:\n \"\"\"\n # TODO: This comes from conans/client/tools/win.py vcvars_command()\n settings_host = conanfile.settings\n try:\n settings_build = conanfile.settings_build\n except AttributeError:\n settings_build = settings_host\n\n arch_host = str(settings_host.arch)\n arch_build = str(settings_build.arch)\n\n arch = None\n if arch_build == 'x86_64':\n arch = {'x86': \"amd64_x86\",\n 'x86_64': 'amd64',\n 'armv7': 'amd64_arm',\n 'armv8': 'amd64_arm64'}.get(arch_host)\n elif arch_build == 'x86':\n arch = {'x86': 'x86',\n 'x86_64': 'x86_amd64',\n 'armv7': 'x86_arm',\n 'armv8': 'x86_arm64'}.get(arch_host)\n\n if not arch:\n raise ConanException('vcvars unsupported architectures %s-%s' % (arch_build, arch_host))\n\n return arch\n\n\ndef _vcvars_vers(conanfile, compiler, vs_version):\n if int(vs_version) <= 14:\n return None\n\n vcvars_ver = None\n if compiler == \"Visual Studio\":\n toolset = conanfile.settings.get_safe(\"compiler.toolset\")\n if toolset is not None:\n vcvars_ver = {\"v140\": \"14.0\",\n \"v141\": \"14.1\",\n \"v142\": \"14.2\",\n \"v143\": \"14.3\"}.get(toolset)\n else:\n assert compiler == \"msvc\"\n # Code similar to CMakeToolchain toolset one\n compiler_version = str(conanfile.settings.compiler.version)\n # The equivalent of compiler 192 is toolset 14.2\n vcvars_ver = \"14.{}\".format(compiler_version[-1])\n return vcvars_ver\n\n\ndef is_msvc(conanfile):\n \"\"\" Validate if current compiler in setttings is 'Visual Studio' or 'msvc'\n :param conanfile: ConanFile instance\n :return: True, if the host compiler is related to Visual Studio, otherwise, False.\n \"\"\"\n settings = conanfile.settings\n return settings.get_safe(\"compiler\") in [\"Visual Studio\", \"msvc\"]\n"}
|
{"conan/tools/microsoft/visual.py": [{"type": "function", "name": "is_msvc_static_runtime", "lines": [186, 191], "signature": "def is_msvc_static_runtime(conanfile):", "doc": "Validate when building with Visual Studio or msvc and MT on runtime\n:param conanfile: ConanFile instance\n:return: True, if msvc + runtime MT. Otherwise, False"}]}
| null |
["conans/test/unittests/tools/microsoft/test_msbuild.py::test_msbuild_cpu_count", "conans/test/unittests/tools/microsoft/test_msbuild.py::test_msbuild_toolset", "conans/test/unittests/tools/microsoft/test_msbuild.py::test_msbuild_toolset_for_intel_cc[icx-Intel", "conans/test/unittests/tools/microsoft/test_msbuild.py::test_msbuild_toolset_for_intel_cc[dpcpp-Intel(R)", "conans/test/unittests/tools/microsoft/test_msbuild.py::test_msbuild_toolset_for_intel_cc[classic-Intel", "conans/test/unittests/tools/microsoft/test_msbuild.py::test_msbuild_standard", "conans/test/unittests/tools/microsoft/test_msbuild.py::test_resource_compile", "conans/test/unittests/tools/microsoft/test_msbuild.py::test_msbuild_and_intel_cc_props[icx-Intel", "conans/test/unittests/tools/microsoft/test_msbuild.py::test_msbuild_and_intel_cc_props[dpcpp-Intel(R)", "conans/test/unittests/tools/microsoft/test_msbuild.py::test_msbuild_and_intel_cc_props[classic-Intel", "conans/test/unittests/tools/microsoft/test_msbuild.py::test_is_msvc[Visual", "conans/test/unittests/tools/microsoft/test_msbuild.py::test_is_msvc[msvc-True]", "conans/test/unittests/tools/microsoft/test_msbuild.py::test_is_msvc[clang-False]", "conans/test/unittests/tools/microsoft/test_msbuild.py::test_is_msvc_static_runtime[Visual", "conans/test/unittests/tools/microsoft/test_msbuild.py::test_is_msvc_static_runtime[msvc-True-static-Release-True]", "conans/test/unittests/tools/microsoft/test_msbuild.py::test_is_msvc_static_runtime[msvc-True-static-Debug-True]", "conans/test/unittests/tools/microsoft/test_msbuild.py::test_is_msvc_static_runtime[clang-True-None-Debug-False]"]
|
[]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1643218103.0, "pr_title": "Identify when using static runtime", "pr_body": "Changelog: Feature: Add `is_msvc_static_runtime` method to `conan.tools.microsoft.visual` to identify when using `msvc` with static runtime.\r\nDocs: https://github.com/conan-io/docs/pull/2372\r\n\r\nRelated to discussion https://github.com/conan-io/conan/pull/10424#issuecomment-1021567713\r\n\r\nThe idea here helping static runtime identification and simplifying conditions in recipes, where can result in a prone error. Example:\r\n\r\n```python\r\ndef validate(self):\r\n if is_msvc_static_runtime(self) and self.options.shared:\r\n raise ConanInvalidConfiguration(\"Shared option is not well supported with static runtime on Windows\")\r\n```\r\n\r\n\r\n\r\n- [ ] Refer to the issue that supports this Pull Request.\r\n- [x] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.\r\n- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).\r\n- [x] I've followed the PEP8 style guides for Python code.\r\n- [x] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one. \r\n\r\n<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>\r\n", "pr_timeline": [], "issues": {}}
|
|
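The PR body above already shows the intended `validate()` usage of `is_msvc_static_runtime`; as a small supplementary sketch (plain Python, not Conan code), here is why the single `"MT" in msvc_runtime_flag(...)` check in the patch covers both compiler models, following `msvc_runtime_flag()` as shown in `visual.py` earlier in this row:

```python
# Simplified restatement (not Conan code) of msvc_runtime_flag() from visual.py
# above, to show when "MT" appears in the returned flag.
def runtime_flag(compiler, runtime, runtime_type=None):
    if compiler == "Visual Studio":
        return runtime                       # already "MT", "MTd", "MD" or "MDd"
    if compiler == "msvc":
        flag = "MT" if runtime == "static" else "MD"
        return flag + "d" if runtime_type == "Debug" else flag
    return None

assert "MT" in runtime_flag("Visual Studio", "MT")
assert "MT" in runtime_flag("msvc", "static", "Debug")   # maps to "MTd"
assert "MT" not in runtime_flag("Visual Studio", "MD")
```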
conan-io/conan
| 10,874
|
https://github.com/conan-io/conan/pull/10874
|
conan-io__conan-10874
|
["10711"]
|
acf5d9a460adfa16bf95b6aadfc248f7d9ace643
|
diff --git a/conan/tools/files/__init__.py b/conan/tools/files/__init__.py
index fcffac80e2e..01f5d8c67ae 100644
--- a/conan/tools/files/__init__.py
+++ b/conan/tools/files/__init__.py
@@ -1,4 +1,4 @@
-from conan.tools.files.files import load, save, mkdir, ftp_download, download, get, rename, \
+from conan.tools.files.files import load, save, mkdir, rmdir, ftp_download, download, get, rename, \
chdir, unzip, replace_in_file, collect_libs, check_md5, check_sha1, check_sha256
from conan.tools.files.patches import patch, apply_conandata_patches
diff --git a/conan/tools/files/files.py b/conan/tools/files/files.py
index 07b1a1e9333..2e331a76aea 100644
--- a/conan/tools/files/files.py
+++ b/conan/tools/files/files.py
@@ -15,7 +15,7 @@
from conan.tools import CONAN_TOOLCHAIN_ARGS_FILE, CONAN_TOOLCHAIN_ARGS_SECTION
from conans.client.downloaders.download import run_downloader
from conans.errors import ConanException
-from conans.util.files import rmdir
+from conans.util.files import rmdir as _internal_rmdir
if six.PY3: # Remove this IF in develop2
from shutil import which
@@ -61,6 +61,10 @@ def mkdir(conanfile, path):
os.makedirs(path)
+def rmdir(conanfile, path):
+ _internal_rmdir(path)
+
+
def get(conanfile, url, md5='', sha1='', sha256='', destination=".", filename="",
keep_permissions=False, pattern=None, verify=True, retry=None, retry_wait=None,
auth=None, headers=None, strip_root=False):
@@ -507,7 +511,7 @@ def swap_child_folder(parent_folder, child_folder):
if os.path.isfile(path):
os.remove(path)
else:
- rmdir(path)
+ _internal_rmdir(path)
child = os.path.join(parent_folder, child_folder)
for f in os.listdir(child):
shutil.move(os.path.join(child, f), os.path.join(parent_folder, f))
|
diff --git a/conans/test/integration/tools/file_tools_test.py b/conans/test/integration/tools/file_tools_test.py
new file mode 100644
index 00000000000..b7d3b80b846
--- /dev/null
+++ b/conans/test/integration/tools/file_tools_test.py
@@ -0,0 +1,30 @@
+import os
+import textwrap
+
+from conans.test.utils.tools import TestClient
+
+
+def test_file_tools():
+
+ conanfile = textwrap.dedent("""
+
+ from conan import ConanFile
+ from conan.tools.files import rmdir, mkdir
+
+ class pkg(ConanFile):
+
+ def layout(self):
+ self.folders.generators = "gen"
+
+ def generate(self):
+ mkdir(self, "folder1")
+ mkdir(self, "folder2")
+ rmdir(self, "folder2")
+
+ """)
+
+ client = TestClient()
+ client.save({"conanfile.py": conanfile})
+ client.run("install . ")
+ assert os.path.exists(os.path.join(client.current_folder, "gen", "folder1"))
+ assert not os.path.exists(os.path.join(client.current_folder, "gen", "folder2"))
| 2022-03-24T11:52:39
|
{}
|
{"conan/tools/files/__init__.py": "from conan.tools.files.files import load, save, mkdir, ftp_download, download, get, rename, \\\n chdir, unzip, replace_in_file, collect_libs, check_md5, check_sha1, check_sha256\n\nfrom conan.tools.files.patches import patch, apply_conandata_patches\nfrom conan.tools.files.cpp_package import CppPackage\nfrom conan.tools.files.packager import AutoPackager\nfrom conan.tools.files.symlinks import symlinks\nfrom conan.tools.files.copy_pattern import copy\nfrom conan.tools.files.conandata import update_conandata\n", "conan/tools/files/files.py": "import configparser\nimport errno\nimport gzip\nimport hashlib\nimport os\nimport platform\nimport shutil\nimport subprocess\nimport sys\nfrom contextlib import contextmanager\nfrom fnmatch import fnmatch\n\nimport six\n\nfrom conan.tools import CONAN_TOOLCHAIN_ARGS_FILE, CONAN_TOOLCHAIN_ARGS_SECTION\nfrom conans.client.downloaders.download import run_downloader\nfrom conans.errors import ConanException\nfrom conans.util.files import rmdir\n\nif six.PY3: # Remove this IF in develop2\n from shutil import which\n\n\ndef load(conanfile, path, encoding=\"utf-8\"):\n \"\"\" Loads a file content \"\"\"\n with open(path, 'rb') as handle:\n tmp = handle.read()\n return tmp.decode(encoding)\n\n\ndef save(conanfile, path, content, append=False, encoding=\"utf-8\"):\n if append:\n mode = \"ab\"\n try:\n os.makedirs(os.path.dirname(path))\n except Exception:\n pass\n else:\n mode = \"wb\"\n dir_path = os.path.dirname(path)\n if not os.path.isdir(dir_path):\n try:\n os.makedirs(dir_path)\n except OSError as error:\n if error.errno not in (errno.EEXIST, errno.ENOENT):\n raise OSError(\"The folder {} does not exist and could not be created ({}).\"\n .format(dir_path, error.strerror))\n except Exception:\n raise\n\n with open(path, mode) as handle:\n if not isinstance(content, bytes):\n content = bytes(content, encoding=encoding)\n handle.write(content)\n\n\ndef mkdir(conanfile, path):\n \"\"\"Recursive mkdir, doesnt fail if already existing\"\"\"\n if os.path.exists(path):\n return\n os.makedirs(path)\n\n\ndef get(conanfile, url, md5='', sha1='', sha256='', destination=\".\", filename=\"\",\n keep_permissions=False, pattern=None, verify=True, retry=None, retry_wait=None,\n auth=None, headers=None, strip_root=False):\n \"\"\" high level downloader + unzipper + (optional hash checker) + delete temporary zip\n \"\"\"\n\n if not filename: # deduce filename from the URL\n url_base = url[0] if isinstance(url, (list, tuple)) else url\n if \"?\" in url_base or \"=\" in url_base:\n raise ConanException(\"Cannot deduce file name from the url: '{}'. 
Use 'filename' \"\n \"parameter.\".format(url_base))\n filename = os.path.basename(url_base)\n\n download(conanfile, url, filename, verify=verify,\n retry=retry, retry_wait=retry_wait, auth=auth, headers=headers,\n md5=md5, sha1=sha1, sha256=sha256)\n unzip(conanfile, filename, destination=destination, keep_permissions=keep_permissions,\n pattern=pattern, strip_root=strip_root)\n os.unlink(filename)\n\n\ndef ftp_download(conanfile, ip, filename, login='', password=''):\n # TODO: Check if we want to join this method with download() one, based on ftp:// protocol\n # this has been requested by some users, but the signature is a bit divergent\n import ftplib\n ftp = None\n try:\n ftp = ftplib.FTP(ip)\n ftp.login(login, password)\n filepath, filename = os.path.split(filename)\n if filepath:\n ftp.cwd(filepath)\n with open(filename, 'wb') as f:\n ftp.retrbinary('RETR ' + filename, f.write)\n except Exception as e:\n try:\n os.unlink(filename)\n except OSError:\n pass\n raise ConanException(\"Error in FTP download from %s\\n%s\" % (ip, str(e)))\n finally:\n if ftp:\n ftp.quit()\n\n\ndef download(conanfile, url, filename, verify=True, retry=None, retry_wait=None,\n auth=None, headers=None, md5='', sha1='', sha256=''):\n \"\"\"Retrieves a file from a given URL into a file with a given filename.\n It uses certificates from a list of known verifiers for https downloads,\n but this can be optionally disabled.\n\n :param conanfile:\n :param url: URL to download. It can be a list, which only the first one will be downloaded, and\n the follow URLs will be used as mirror in case of download error.\n :param filename: Name of the file to be created in the local storage\n :param verify: When False, disables https certificate validation\n :param retry: Number of retries in case of failure. Default is overriden by general.retry in the\n conan.conf file or an env variable CONAN_RETRY\n :param retry_wait: Seconds to wait between download attempts. 
Default is overriden by\n general.retry_wait in the conan.conf file or an env variable CONAN_RETRY_WAIT\n :param auth: A tuple of user and password to use HTTPBasic authentication\n :param headers: A dictionary with additional headers\n :param md5: MD5 hash code to check the downloaded file\n :param sha1: SHA-1 hash code to check the downloaded file\n :param sha256: SHA-256 hash code to check the downloaded file\n :return: None\n \"\"\"\n # TODO: Add all parameters to the new conf\n out = conanfile.output\n requester = conanfile._conan_requester\n config = conanfile.conf\n overwrite = True\n\n if config[\"tools.files.download:retry\"]:\n retry = int(config[\"tools.files.download:retry\"])\n elif retry is None:\n retry = 1\n\n if config[\"tools.files.download:retry_wait\"]:\n retry_wait = int(config[\"tools.files.download:retry_wait\"])\n elif retry_wait is None:\n retry_wait = 5\n\n checksum = sha256 or sha1 or md5\n download_cache = config[\"tools.files.download:download_cache\"] if checksum else None\n\n def _download_file(file_url):\n # The download cache is only used if a checksum is provided, otherwise, a normal download\n run_downloader(requester=requester, output=out, verify=verify, download_cache=download_cache,\n user_download=True, url=file_url,\n file_path=filename, retry=retry, retry_wait=retry_wait, overwrite=overwrite,\n auth=auth, headers=headers, md5=md5, sha1=sha1, sha256=sha256)\n out.writeln(\"\")\n\n if not isinstance(url, (list, tuple)):\n _download_file(url)\n else: # We were provided several URLs to try\n for url_it in url:\n try:\n _download_file(url_it)\n break\n except Exception as error:\n message = \"Could not download from the URL {}: {}.\".format(url_it, str(error))\n out.warn(message + \" Trying another mirror.\")\n else:\n raise ConanException(\"All downloads from ({}) URLs have failed.\".format(len(url)))\n\n\ndef rename(conanfile, src, dst):\n \"\"\"\n rename a file or folder to avoid \"Access is denied\" error on Windows\n :param conanfile: conanfile object\n :param src: Source file or folder\n :param dst: Destination file or folder\n :return: None\n \"\"\"\n # FIXME: This function has been copied from legacy. Needs to fix: which() call and wrap subprocess call.\n if os.path.exists(dst):\n raise ConanException(\"rename {} to {} failed, dst exists.\".format(src, dst))\n\n if platform.system() == \"Windows\" and which(\"robocopy\") and os.path.isdir(src):\n # /move Moves files and directories, and deletes them from the source after they are copied.\n # /e Copies subdirectories. 
Note that this option includes empty directories.\n # /ndl Specifies that directory names are not to be logged.\n # /nfl Specifies that file names are not to be logged.\n process = subprocess.Popen([\"robocopy\", \"/move\", \"/e\", \"/ndl\", \"/nfl\", src, dst],\n stdout=subprocess.PIPE)\n process.communicate()\n if process.returncode > 7: # https://ss64.com/nt/robocopy-exit.html\n raise ConanException(\"rename {} to {} failed.\".format(src, dst))\n else:\n try:\n os.rename(src, dst)\n except Exception as err:\n raise ConanException(\"rename {} to {} failed: {}\".format(src, dst, err))\n\n\ndef load_toolchain_args(generators_folder=None, namespace=None):\n \"\"\"\n Helper function to load the content of any CONAN_TOOLCHAIN_ARGS_FILE\n\n :param generators_folder: `str` folder where is located the CONAN_TOOLCHAIN_ARGS_FILE.\n :param namespace: `str` namespace to be prepended to the filename.\n :return: <class 'configparser.SectionProxy'>\n \"\"\"\n namespace_name = \"{}_{}\".format(namespace, CONAN_TOOLCHAIN_ARGS_FILE) if namespace \\\n else CONAN_TOOLCHAIN_ARGS_FILE\n args_file = os.path.join(generators_folder, namespace_name) if generators_folder \\\n else namespace_name\n toolchain_config = configparser.ConfigParser()\n toolchain_file = toolchain_config.read(args_file)\n if not toolchain_file:\n raise ConanException(\"The file %s does not exist. Please, make sure that it was not\"\n \" generated in another folder.\" % args_file)\n try:\n return toolchain_config[CONAN_TOOLCHAIN_ARGS_SECTION]\n except KeyError:\n raise ConanException(\"The primary section [%s] does not exist in the file %s. Please, add it\"\n \" as the default one of all your configuration variables.\" %\n (CONAN_TOOLCHAIN_ARGS_SECTION, args_file))\n\n\ndef save_toolchain_args(content, generators_folder=None, namespace=None):\n \"\"\"\n Helper function to save the content into the CONAN_TOOLCHAIN_ARGS_FILE\n\n :param content: `dict` all the information to be saved into the toolchain file.\n :param namespace: `str` namespace to be prepended to the filename.\n :param generators_folder: `str` folder where is located the CONAN_TOOLCHAIN_ARGS_FILE\n \"\"\"\n # Let's prune None values\n content_ = {k: v for k, v in content.items() if v is not None}\n namespace_name = \"{}_{}\".format(namespace, CONAN_TOOLCHAIN_ARGS_FILE) if namespace \\\n else CONAN_TOOLCHAIN_ARGS_FILE\n args_file = os.path.join(generators_folder, namespace_name) if generators_folder \\\n else namespace_name\n toolchain_config = configparser.ConfigParser()\n toolchain_config[CONAN_TOOLCHAIN_ARGS_SECTION] = content_\n with open(args_file, \"w\") as f:\n toolchain_config.write(f)\n\n\n@contextmanager\ndef chdir(conanfile, newdir):\n old_path = os.getcwd()\n os.chdir(newdir)\n try:\n yield\n finally:\n os.chdir(old_path)\n\n\ndef unzip(conanfile, filename, destination=\".\", keep_permissions=False, pattern=None,\n strip_root=False):\n \"\"\"\n Unzip a zipped file\n :param filename: Path to the zip file\n :param destination: Destination folder (or file for .gz files)\n :param keep_permissions: Keep the zip permissions. WARNING: Can be\n dangerous if the zip was not created in a NIX system, the bits could\n produce undefined permission schema. Use this option only if you are sure\n that the zip was created correctly.\n :param pattern: Extract only paths matching the pattern. 
This should be a\n Unix shell-style wildcard, see fnmatch documentation for more details.\n :param flat: If all the contents are in a single dir, flat that directory.\n :return:\n \"\"\"\n\n output = conanfile.output\n if (filename.endswith(\".tar.gz\") or filename.endswith(\".tgz\") or\n filename.endswith(\".tbz2\") or filename.endswith(\".tar.bz2\") or\n filename.endswith(\".tar\")):\n return untargz(filename, destination, pattern, strip_root)\n if filename.endswith(\".gz\"):\n with gzip.open(filename, 'rb') as f:\n file_content = f.read()\n target_name = filename[:-3] if destination == \".\" else destination\n save(conanfile, target_name, file_content)\n return\n if filename.endswith(\".tar.xz\") or filename.endswith(\".txz\"):\n return untargz(filename, destination, pattern, strip_root)\n\n import zipfile\n full_path = os.path.normpath(os.path.join(os.getcwd(), destination))\n\n if hasattr(sys.stdout, \"isatty\") and sys.stdout.isatty():\n def print_progress(the_size, uncomp_size):\n the_size = (the_size * 100.0 / uncomp_size) if uncomp_size != 0 else 0\n txt_msg = \"Unzipping %d %%\"\n if the_size > print_progress.last_size + 1:\n output.rewrite_line(txt_msg % the_size)\n print_progress.last_size = the_size\n if int(the_size) == 99:\n output.rewrite_line(txt_msg % 100)\n else:\n def print_progress(_, __):\n pass\n\n with zipfile.ZipFile(filename, \"r\") as z:\n zip_info = z.infolist()\n if pattern:\n zip_info = [zi for zi in zip_info if fnmatch(zi.filename, pattern)]\n if strip_root:\n names = [n.replace(\"\\\\\", \"/\") for n in z.namelist()]\n common_folder = os.path.commonprefix(names).split(\"/\", 1)[0]\n if not common_folder and len(names) > 1:\n raise ConanException(\"The zip file contains more than 1 folder in the root\")\n if len(names) == 1 and len(names[0].split(\"/\", 1)) == 1:\n raise ConanException(\"The zip file contains a file in the root\")\n # Remove the directory entry if present\n # Note: The \"zip\" format contains the \"/\" at the end if it is a directory\n zip_info = [m for m in zip_info if m.filename != (common_folder + \"/\")]\n for member in zip_info:\n name = member.filename.replace(\"\\\\\", \"/\")\n member.filename = name.split(\"/\", 1)[1]\n\n uncompress_size = sum((file_.file_size for file_ in zip_info))\n if uncompress_size > 100000:\n output.info(\"Unzipping %s, this can take a while\" % _human_size(uncompress_size))\n else:\n output.info(\"Unzipping %s\" % _human_size(uncompress_size))\n extracted_size = 0\n\n print_progress.last_size = -1\n if platform.system() == \"Windows\":\n for file_ in zip_info:\n extracted_size += file_.file_size\n print_progress(extracted_size, uncompress_size)\n try:\n z.extract(file_, full_path)\n except Exception as e:\n output.error(\"Error extract %s\\n%s\" % (file_.filename, str(e)))\n else: # duplicated for, to avoid a platform check for each zipped file\n for file_ in zip_info:\n extracted_size += file_.file_size\n print_progress(extracted_size, uncompress_size)\n try:\n z.extract(file_, full_path)\n if keep_permissions:\n # Could be dangerous if the ZIP has been created in a non nix system\n # https://bugs.python.org/issue15795\n perm = file_.external_attr >> 16 & 0xFFF\n os.chmod(os.path.join(full_path, file_.filename), perm)\n except Exception as e:\n output.error(\"Error extract %s\\n%s\" % (file_.filename, str(e)))\n output.writeln(\"\")\n\n\ndef untargz(filename, destination=\".\", pattern=None, strip_root=False):\n # NOT EXPOSED at `conan.tools.files` but used in tests\n import tarfile\n with 
tarfile.TarFile.open(filename, 'r:*') as tarredgzippedFile:\n if not pattern and not strip_root:\n tarredgzippedFile.extractall(destination)\n else:\n members = tarredgzippedFile.getmembers()\n\n if strip_root:\n names = [n.replace(\"\\\\\", \"/\") for n in tarredgzippedFile.getnames()]\n common_folder = os.path.commonprefix(names).split(\"/\", 1)[0]\n if not common_folder and len(names) > 1:\n raise ConanException(\"The tgz file contains more than 1 folder in the root\")\n if len(names) == 1 and len(names[0].split(\"/\", 1)) == 1:\n raise ConanException(\"The tgz file contains a file in the root\")\n # Remove the directory entry if present\n members = [m for m in members if m.name != common_folder]\n for member in members:\n name = member.name.replace(\"\\\\\", \"/\")\n member.name = name.split(\"/\", 1)[1]\n member.path = member.name\n if pattern:\n members = list(filter(lambda m: fnmatch(m.name, pattern),\n tarredgzippedFile.getmembers()))\n tarredgzippedFile.extractall(destination, members=members)\n\n\ndef _human_size(size_bytes):\n \"\"\"\n format a size in bytes into a 'human' file size, e.g. B, KB, MB, GB, TB, PB\n Note that bytes will be reported in whole numbers but KB and above will have\n greater precision. e.g. 43 B, 443 KB, 4.3 MB, 4.43 GB, etc\n \"\"\"\n UNIT_SIZE = 1000.0\n\n suffixes_table = [('B', 0), ('KB', 1), ('MB', 1), ('GB', 2), ('TB', 2), ('PB', 2)]\n\n num = float(size_bytes)\n the_precision = None\n the_suffix = None\n for suffix, precision in suffixes_table:\n the_precision = precision\n the_suffix = suffix\n if num < UNIT_SIZE:\n break\n num /= UNIT_SIZE\n\n if the_precision == 0:\n formatted_size = \"%d\" % num\n else:\n formatted_size = str(round(num, ndigits=the_precision))\n\n return \"%s%s\" % (formatted_size, the_suffix)\n\n\ndef check_sha1(conanfile, file_path, signature):\n _check_with_algorithm_sum(\"sha1\", file_path, signature)\n\n\ndef check_md5(conanfile, file_path, signature):\n _check_with_algorithm_sum(\"md5\", file_path, signature)\n\n\ndef check_sha256(conanfile, file_path, signature):\n _check_with_algorithm_sum(\"sha256\", file_path, signature)\n\n\ndef _check_with_algorithm_sum(algorithm_name, file_path, signature):\n real_signature = _generic_algorithm_sum(file_path, algorithm_name)\n if real_signature != signature.lower():\n raise ConanException(\"%s signature failed for '%s' file. 
\\n\"\n \" Provided signature: %s \\n\"\n \" Computed signature: %s\" % (algorithm_name,\n os.path.basename(file_path),\n signature,\n real_signature))\n\n\ndef _generic_algorithm_sum(file_path, algorithm_name):\n\n with open(file_path, 'rb') as fh:\n try:\n m = hashlib.new(algorithm_name)\n except ValueError: # FIPS error https://github.com/conan-io/conan/issues/7800\n m = hashlib.new(algorithm_name, usedforsecurity=False)\n while True:\n data = fh.read(8192)\n if not data:\n break\n m.update(data)\n return m.hexdigest()\n\n\ndef replace_in_file(conanfile, file_path, search, replace, strict=True, encoding=\"utf-8\"):\n \"\"\"\n :param conanfile: Conanfile instance\n :param file_path: Path to the file\n :param search: Pattern to search\n :param replace: string to replace the matches\n :param strict: Raise in case \"search\" is not found in the file contents\n :return:\n \"\"\"\n output = conanfile.output\n content = load(conanfile, file_path, encoding=encoding)\n if -1 == content.find(search):\n message = \"replace_in_file didn't find pattern '%s' in '%s' file.\" % (search, file_path)\n if strict:\n raise ConanException(message)\n else:\n output.warn(message)\n return False\n content = content.replace(search, replace)\n save(conanfile, file_path, content, encoding=encoding)\n\n\ndef collect_libs(conanfile, folder=None):\n if not conanfile.package_folder:\n return []\n if folder:\n lib_folders = [os.path.join(conanfile.package_folder, folder)]\n else:\n lib_folders = [os.path.join(conanfile.package_folder, folder)\n for folder in conanfile.cpp_info.libdirs]\n result = []\n for lib_folder in lib_folders:\n if not os.path.exists(lib_folder):\n conanfile.output.warn(\"Lib folder doesn't exist, can't collect libraries: \"\n \"{0}\".format(lib_folder))\n continue\n files = os.listdir(lib_folder)\n for f in files:\n name, ext = os.path.splitext(f)\n if ext in (\".so\", \".lib\", \".a\", \".dylib\", \".bc\"):\n if ext != \".lib\" and name.startswith(\"lib\"):\n name = name[3:]\n if name in result:\n conanfile.output.warn(\"Library '%s' was either already found in a previous \"\n \"'conanfile.cpp_info.libdirs' folder or appears several \"\n \"times with a different file extension\" % name)\n else:\n result.append(name)\n result.sort()\n return result\n\n\n# TODO: Do NOT document this yet. It is unclear the interface, maybe should be split\ndef swap_child_folder(parent_folder, child_folder):\n \"\"\" replaces the current folder contents with the contents of one child folder. This\n is used in the SCM monorepo flow, when it is necessary to use one subproject subfolder\n to replace the whole cloned git repo\n \"\"\"\n for f in os.listdir(parent_folder):\n if f != child_folder:\n path = os.path.join(parent_folder, f)\n if os.path.isfile(path):\n os.remove(path)\n else:\n rmdir(path)\n child = os.path.join(parent_folder, child_folder)\n for f in os.listdir(child):\n shutil.move(os.path.join(child, f), os.path.join(parent_folder, f))\n"}
|
{"conan/tools/files/files.py": [{"type": "function", "name": "rmdir", "lines": [64, 65], "signature": "def rmdir(conanfile, path):", "doc": ""}]}
| null |
["conans/test/integration/tools/file_tools_test.py::test_file_tools"]
|
[]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1648122630.0, "pr_title": "basic conan.tools.files.rmdir", "pr_body": "Changelog: Feature: Added basic `rmdir` tool at `conan.tools.files`.\r\nDocs: https://github.com/conan-io/docs/pull/2470\r\n\r\nClose #10711", "pr_timeline": [], "issues": {"10711": {"issue_title": "[feature] Move [rmdir] to conan.tools.files", "issue_body": "Currently, it is missing in the new namespace.\r\n", "issue_timeline": []}}}
|
|
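The record above adds a thin `rmdir` wrapper to `conan.tools.files` (signature `def rmdir(conanfile, path)` per its new_components field). As a minimal illustrative sketch, not taken from the record itself (the recipe name and the "share" folder are invented), a recipe could call it like this:

import os

from conan import ConanFile
from conan.tools.files import rmdir


class PkgConan(ConanFile):
    name = "pkg"
    version = "1.0"
    settings = "os", "compiler", "build_type", "arch"

    def package(self):
        # Drop a subfolder that should not ship in the final package; rmdir()
        # recursively deletes the directory and follows the conanfile-first
        # calling convention of the new conan.tools.files helpers.
        rmdir(self, os.path.join(self.package_folder, "share"))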
conan-io/conan
| 11,284
|
https://github.com/conan-io/conan/pull/11284
|
conan-io__conan-11284
|
[]
|
db343feac208a000bc6546556104a8511a584f1e
|
diff --git a/conan/tools/gnu/autotools.py b/conan/tools/gnu/autotools.py
index dc93aca2e51..37520577c9e 100644
--- a/conan/tools/gnu/autotools.py
+++ b/conan/tools/gnu/autotools.py
@@ -11,57 +11,47 @@
class Autotools(object):
- def __init__(self, conanfile, namespace=None, build_script_folder=None):
+ def __init__(self, conanfile, namespace=None):
self._conanfile = conanfile
toolchain_file_content = load_toolchain_args(self._conanfile.generators_folder,
namespace=namespace)
+
self._configure_args = toolchain_file_content.get("configure_args")
self._make_args = toolchain_file_content.get("make_args")
- self.default_configure_install_args = True
- self.build_script_folder = os.path.join(self._conanfile.source_folder, build_script_folder) \
- if build_script_folder else self._conanfile.source_folder
+ self._autoreconf_args = toolchain_file_content.get("autoreconf_args")
- def configure(self):
+ def configure(self, build_script_folder=None, args=None):
"""
http://jingfenghanmax.blogspot.com.es/2010/09/configure-with-host-target-and-build.html
https://gcc.gnu.org/onlinedocs/gccint/Configure-Terms.html
"""
+ script_folder = os.path.join(self._conanfile.source_folder, build_script_folder) \
+ if build_script_folder else self._conanfile.source_folder
+
configure_args = []
- if self.default_configure_install_args and self._conanfile.package_folder:
- def _get_argument(argument_name, cppinfo_name):
- elements = getattr(self._conanfile.cpp.package, cppinfo_name)
- return "--{}=${{prefix}}/{}".format(argument_name, elements[0]) if elements else ""
-
- # If someone want arguments but not the defaults can pass them in args manually
- configure_args.extend(["--prefix=%s" % self._conanfile.package_folder.replace("\\", "/"),
- _get_argument("bindir", "bindirs"),
- _get_argument("sbindir", "bindirs"),
- _get_argument("libdir", "libdirs"),
- _get_argument("includedir", "includedirs"),
- _get_argument("oldincludedir", "includedirs"),
- _get_argument("datarootdir", "resdirs")])
-
- self._configure_args = "{} {}".format(self._configure_args, args_to_string(configure_args)) \
- if configure_args else self._configure_args
-
- configure_cmd = "{}/configure".format(self.build_script_folder)
+ configure_args.extend(args or [])
+
+ self._configure_args = "{} {}".format(self._configure_args, args_to_string(configure_args))
+
+ configure_cmd = "{}/configure".format(script_folder)
subsystem = deduce_subsystem(self._conanfile, scope="build")
configure_cmd = subsystem_path(subsystem, configure_cmd)
cmd = '"{}" {}'.format(configure_cmd, self._configure_args)
self._conanfile.output.info("Calling:\n > %s" % cmd)
self._conanfile.run(cmd)
- def make(self, target=None):
+ def make(self, target=None, args=None):
make_program = self._conanfile.conf.get("tools.gnu:make_program",
default="mingw32-make" if self._use_win_mingw() else "make")
str_args = self._make_args
+ str_extra_args = " ".join(args) if args is not None else ""
jobs = ""
if "-j" not in str_args and "nmake" not in make_program.lower():
njobs = build_jobs(self._conanfile)
if njobs:
jobs = "-j{}".format(njobs)
- command = join_arguments([make_program, target, str_args, jobs])
+ command = join_arguments([make_program, target, str_args, str_extra_args, jobs])
self._conanfile.run(command)
def _fix_osx_shared_install_name(self):
@@ -75,36 +65,33 @@ def _fix_install_name(lib_name, lib_folder):
lib_name))
self._conanfile.run(command)
- def _is_modified_install_name(lib_name, lib_folder):
+ def _is_modified_install_name(lib_name, full_folder, libdir):
"""
Check that the user did not change the default install_name using the install_name
linker flag in that case we do not touch this field
"""
- command = "otool -D {}".format(os.path.join(lib_folder, lib_name))
- out = check_output_runner(command).strip().split(":")[1]
- return False if str(os.path.join(lib_folder, shared_lib)) in out else True
+ command = "otool -D {}".format(os.path.join(full_folder, lib_name))
+ install_path = check_output_runner(command).strip().split(":")[1].strip()
+ default_path = str(os.path.join("/", libdir, shared_lib))
+ return False if default_path == install_path else True
libdirs = getattr(self._conanfile.cpp.package, "libdirs")
- for folder in libdirs:
- full_folder = os.path.join(self._conanfile.package_folder, folder)
+ for libdir in libdirs:
+ full_folder = os.path.join(self._conanfile.package_folder, libdir)
shared_libs = _osx_collect_dylibs(full_folder)
for shared_lib in shared_libs:
- if not _is_modified_install_name(shared_lib, full_folder):
+ if not _is_modified_install_name(shared_lib, full_folder, libdir):
_fix_install_name(shared_lib, full_folder)
- def install(self):
- # FIXME: we have to run configure twice because the local flow won't work otherwise
- # because there's no package_folder until the package step
- self.configure()
- self.make(target="install")
+ def install(self, args=None):
+ args = args if args is not None else ["DESTDIR={}".format(self._conanfile.package_folder)]
+ self.make(target="install", args=args)
if self._conanfile.settings.get_safe("os") == "Macos" and self._conanfile.options.get_safe("shared", False):
self._fix_osx_shared_install_name()
def autoreconf(self, args=None):
- command = ["autoreconf"]
- args = args or ["--force", "--install"]
- command.extend(args)
- command = join_arguments(command)
+ args = args or []
+ command = join_arguments(["autoreconf", self._autoreconf_args, args_to_string(args)])
with chdir(self, self._conanfile.source_folder):
self._conanfile.run(command)
diff --git a/conan/tools/gnu/autotoolstoolchain.py b/conan/tools/gnu/autotoolstoolchain.py
index 0f5c7b89679..1a428d8978b 100644
--- a/conan/tools/gnu/autotoolstoolchain.py
+++ b/conan/tools/gnu/autotoolstoolchain.py
@@ -17,7 +17,8 @@ def __init__(self, conanfile, namespace=None):
self._conanfile = conanfile
self._namespace = namespace
- self.configure_args = []
+ self.configure_args = self._default_configure_shared_flags() + self._default_configure_install_flags()
+ self.autoreconf_args = self._default_autoreconf_flags()
self.make_args = []
# Flags
@@ -167,19 +168,42 @@ def generate(self, env=None, scope="build"):
self.generate_args()
VCVars(self._conanfile).generate(scope=scope)
- def _shared_static_args(self):
+ def _default_configure_shared_flags(self):
args = []
- if self._conanfile.options.get_safe("shared", False):
- args.extend(["--enable-shared", "--disable-static"])
- else:
- args.extend(["--disable-shared", "--enable-static", "--with-pic"
- if self._conanfile.options.get_safe("fPIC", True)
- else "--without-pic"])
+ # Just add these flags if there's a shared option defined (never add to exe's)
+ # FIXME: For Conan 2.0 use the package_type to decide if adding these flags or not
+ try:
+ if self._conanfile.options.shared:
+ args.extend(["--enable-shared", "--disable-static"])
+ else:
+ args.extend(["--disable-shared", "--enable-static"])
+ except ConanException:
+ pass
+
return args
+ def _default_configure_install_flags(self):
+ configure_install_flags = []
+
+ def _get_argument(argument_name, cppinfo_name):
+ elements = getattr(self._conanfile.cpp.package, cppinfo_name)
+ return "--{}=${{prefix}}/{}".format(argument_name, elements[0]) if elements else ""
+
+ # If someone want arguments but not the defaults can pass them in args manually
+ configure_install_flags.extend(["--prefix=/",
+ _get_argument("bindir", "bindirs"),
+ _get_argument("sbindir", "bindirs"),
+ _get_argument("libdir", "libdirs"),
+ _get_argument("includedir", "includedirs"),
+ _get_argument("oldincludedir", "includedirs"),
+ _get_argument("datarootdir", "resdirs")])
+ return configure_install_flags
+
+ def _default_autoreconf_flags(self):
+ return ["--force", "--install"]
+
def generate_args(self):
configure_args = []
- configure_args.extend(self._shared_static_args())
configure_args.extend(self.configure_args)
user_args_str = args_to_string(self.configure_args)
for flag, var in (("host", self._host), ("build", self._build), ("target", self._target)):
@@ -187,6 +211,7 @@ def generate_args(self):
configure_args.append('--{}={}'.format(flag, var))
args = {"configure_args": args_to_string(configure_args),
- "make_args": args_to_string(self.make_args)}
+ "make_args": args_to_string(self.make_args),
+ "autoreconf_args": args_to_string(self.autoreconf_args)}
save_toolchain_args(args, namespace=self._namespace)
diff --git a/conans/assets/templates/new_v2_autotools.py b/conans/assets/templates/new_v2_autotools.py
index 4134889763c..0039d7b7beb 100644
--- a/conans/assets/templates/new_v2_autotools.py
+++ b/conans/assets/templates/new_v2_autotools.py
@@ -8,7 +8,6 @@
from conan import ConanFile
from conan.tools.gnu import AutotoolsToolchain, Autotools
from conan.tools.layout import basic_layout
- from conan.tools.files import chdir
class {package_name}Conan(ConanFile):
@@ -79,10 +78,9 @@ def package_info(self):
import os
from conan import ConanFile
- from conan.tools.gnu import AutotoolsToolchain, Autotools, AutotoolsDeps
+ from conan.tools.gnu import Autotools
from conan.tools.layout import basic_layout
from conan.tools.build import cross_building
- from conan.tools.files import chdir
class {package_name}TestConan(ConanFile):
|
diff --git a/conans/test/functional/toolchains/gnu/autotools/test_basic.py b/conans/test/functional/toolchains/gnu/autotools/test_basic.py
index ec0933fbfdf..3ef85610ae6 100644
--- a/conans/test/functional/toolchains/gnu/autotools/test_basic.py
+++ b/conans/test/functional/toolchains/gnu/autotools/test_basic.py
@@ -7,7 +7,7 @@
import pytest
from conan.tools.env.environment import environment_wrap_command
-from conans.model.ref import ConanFileReference
+from conans.model.ref import ConanFileReference, PackageReference
from conans.test.assets.autotools import gen_makefile_am, gen_configure_ac, gen_makefile
from conans.test.assets.sources import gen_function_cpp
from conans.test.functional.utils import check_exe_run
@@ -245,3 +245,150 @@ def test_autotools_with_pkgconfigdeps():
assert re.search("I.*hello.*1.0.*include", str(client.out))
assert "-lhello" in client.out
assert re.search("L.*hello.*1.0.*package", str(client.out))
+
+
[email protected](platform.system() not in ["Linux", "Darwin"], reason="Requires Autotools")
[email protected]_autotools()
+def test_autotools_option_checking():
+ # https://github.com/conan-io/conan/issues/11265
+ client = TestClient(path_with_spaces=False)
+ client.run("new mylib/1.0@ -m autotools_lib")
+ conanfile = textwrap.dedent("""
+ import os
+
+ from conan import ConanFile
+ from conan.tools.gnu import AutotoolsToolchain, Autotools
+ from conan.tools.layout import basic_layout
+ from conan.tools.build import cross_building
+ from conan.tools.files import chdir
+
+
+ class MylibTestConan(ConanFile):
+ settings = "os", "compiler", "build_type", "arch"
+ # VirtualBuildEnv and VirtualRunEnv can be avoided if "tools.env.virtualenv:auto_use" is defined
+ # (it will be defined in Conan 2.0)
+ generators = "AutotoolsDeps", "VirtualBuildEnv", "VirtualRunEnv"
+ apply_env = False
+ test_type = "explicit"
+
+ def requirements(self):
+ self.requires(self.tested_reference_str)
+
+ def generate(self):
+ at_toolchain = AutotoolsToolchain(self)
+ # we override the default shared/static flags here
+ at_toolchain.configure_args = ['--enable-option-checking=fatal']
+ at_toolchain.generate()
+
+ def build(self):
+ autotools = Autotools(self)
+ autotools.autoreconf()
+ autotools.configure()
+ autotools.make()
+
+ def layout(self):
+ basic_layout(self)
+
+ def test(self):
+ if not cross_building(self):
+ cmd = os.path.join(self.cpp.build.bindirs[0], "main")
+ self.run(cmd, env="conanrun")
+ """)
+
+ client.save({"test_package/conanfile.py": conanfile})
+ client.run("create . -tf=None")
+
+ # check that the shared flags are not added to the exe's configure, making it fail
+ client.run("test test_package mylib/1.0@")
+ assert "configure: error: unrecognized options: --disable-shared, --enable-static, --with-pic" \
+ not in client.out
+
+
[email protected](platform.system() not in ["Linux", "Darwin"], reason="Requires Autotools")
[email protected]_autotools()
+def test_autotools_arguments_override():
+ client = TestClient(path_with_spaces=False)
+ client.run("new mylib/1.0@ -m autotools_lib")
+ conanfile = textwrap.dedent("""
+ import os
+
+ from conan import ConanFile
+ from conan.tools.gnu import AutotoolsToolchain, Autotools
+ from conan.tools.layout import basic_layout
+
+
+ class MyLibConan(ConanFile):
+ name = "mylib"
+ version = "1.0"
+
+ # Binary configuration
+ settings = "os", "compiler", "build_type", "arch"
+
+ exports_sources = "configure.ac", "Makefile.am", "src/*"
+
+ def config_options(self):
+ if self.settings.os == "Windows":
+ del self.options.fPIC
+
+ def layout(self):
+ basic_layout(self)
+
+ def generate(self):
+ at_toolchain = AutotoolsToolchain(self)
+ at_toolchain.configure_args = ['--disable-shared']
+ at_toolchain.make_args = ['--warn-undefined-variables']
+ at_toolchain.autoreconf_args = ['--verbose']
+ at_toolchain.generate()
+
+ def build(self):
+ autotools = Autotools(self)
+ autotools.autoreconf(args=['--install'])
+ autotools.configure(args=['--prefix=/', '--libdir=${prefix}/customlibfolder',
+ '--includedir=${prefix}/customincludefolder',
+ '--pdfdir=${prefix}/res'])
+ autotools.make(args=['--keep-going'])
+
+ def package(self):
+ autotools = Autotools(self)
+ autotools.install(args=['DESTDIR={}/somefolder'.format(self.package_folder)])
+
+ def package_info(self):
+ self.cpp_info.libs = ["mylib"]
+ self.cpp_info.libdirs = ["somefolder/customlibfolder"]
+ self.cpp_info.includedirs = ["somefolder/customincludefolder"]
+ """)
+ client.run("config set log.print_run_commands=1")
+ client.save({"conanfile.py": conanfile})
+ client.run("create . -tf=None")
+
+ # autoreconf args --force that is default should not be there
+ assert "--force" not in client.out
+ assert "--install" in client.out
+
+ package_id = re.search(r"mylib\/1.0: Package (\S+)", str(client.out)).group(1).replace("'", "")
+ pref = PackageReference(ConanFileReference.loads("mylib/1.0"), package_id)
+ package_folder = client.cache.package_layout(pref.ref).package(pref)
+
+ # we override the default DESTDIR in the install
+ assert 'DESTDIR={} '.format(package_folder) not in client.out
+ assert 'DESTDIR={}/somefolder '.format(package_folder) in client.out
+
+ # we did override the default install args
+ for arg in ['--bindir=${prefix}/bin', '--sbindir=${prefix}/bin',
+ '--libdir=${prefix}/lib', '--includedir=${prefix}/include',
+ '--oldincludedir=${prefix}/include', '--datarootdir=${prefix}/res']:
+ assert arg not in client.out
+
+ # and use our custom arguments
+ for arg in ['--prefix=/', '--libdir=${prefix}/customlibfolder',
+ '--includedir=${prefix}/customincludefolder', '--pdfdir=${prefix}/res']:
+ assert arg in client.out
+
+ # check the other arguments we set are there
+ assert "--disable-shared" in client.out
+ assert "--warn-undefined-variables" in client.out
+ assert "--verbose" in client.out
+ assert "--keep-going" in client.out
+
+ client.run("test test_package mylib/1.0@")
+ assert "mylib/1.0: Hello World Release!" in client.out
diff --git a/conans/test/functional/toolchains/gnu/autotools/test_ios.py b/conans/test/functional/toolchains/gnu/autotools/test_ios.py
index 272d9571c9a..d0a6c45c2de 100644
--- a/conans/test/functional/toolchains/gnu/autotools/test_ios.py
+++ b/conans/test/functional/toolchains/gnu/autotools/test_ios.py
@@ -54,10 +54,8 @@ class TestConan(ConanFile):
generators = "AutotoolsToolchain", "AutotoolsDeps"
def build(self):
- self.run("aclocal")
- self.run("autoconf")
- self.run("automake --add-missing --foreign")
autotools = Autotools(self)
+ autotools.autoreconf()
autotools.configure()
autotools.make()
""")
@@ -76,5 +74,11 @@ def build(self):
conanbuild = load_toolchain_args(client.current_folder)
configure_args = conanbuild["configure_args"]
- assert configure_args == "'--disable-shared' '--enable-static' '--with-pic' " \
+ make_args = conanbuild["make_args"]
+ autoreconf_args = conanbuild["autoreconf_args"]
+ assert configure_args == "'--prefix=/' '--bindir=${prefix}/bin' '--sbindir=${prefix}/bin' " \
+ "'--libdir=${prefix}/lib' '--includedir=${prefix}/include' " \
+ "'--oldincludedir=${prefix}/include' '--datarootdir=${prefix}/res' " \
"'--host=aarch64-apple-ios' '--build=x86_64-apple-darwin'"
+ assert make_args == ""
+ assert autoreconf_args == "'--force' '--install'"
diff --git a/conans/test/functional/toolchains/gnu/test_v2_autotools_template.py b/conans/test/functional/toolchains/gnu/test_v2_autotools_template.py
index 32382f7674c..c9c568e93cb 100644
--- a/conans/test/functional/toolchains/gnu/test_v2_autotools_template.py
+++ b/conans/test/functional/toolchains/gnu/test_v2_autotools_template.py
@@ -57,9 +57,14 @@ def test_autotools_exe_template():
# Create works
client.run("create .")
+ # check that for exe's we don't add any static/shared flag
+ for flag in ["--enable-static", "--disable-static", "--disable-shared", "--with-pic"]:
+ assert flag not in client.out
assert "greet/0.1: Hello World Release!" in client.out
client.run("create . -s build_type=Debug")
+ for flag in ["--enable-static", "--disable-static", "--disable-shared", "--with-pic"]:
+ assert flag not in client.out
assert "greet/0.1: Hello World Debug!" in client.out
diff --git a/conans/test/unittests/tools/gnu/autotools_test.py b/conans/test/unittests/tools/gnu/autotools_test.py
index 50a312f2b50..1493942673b 100644
--- a/conans/test/unittests/tools/gnu/autotools_test.py
+++ b/conans/test/unittests/tools/gnu/autotools_test.py
@@ -11,16 +11,17 @@ def test_source_folder_works():
os.chdir(folder)
save_toolchain_args({
"configure_args": "-foo bar",
- "make_args": ""}
+ "make_args": "",
+ "autoreconf_args": ""}
)
conanfile = ConanFileMock()
conanfile.folders.set_base_install(folder)
sources = "/path/to/sources"
conanfile.folders.set_base_source(sources)
- autotools = Autotools(conanfile, build_script_folder="subfolder")
- autotools.configure()
- assert conanfile.command.replace("\\", "/") == '"/path/to/sources/subfolder/configure" -foo bar'
+ autotools = Autotools(conanfile)
+ autotools.configure(build_script_folder="subfolder")
+ assert conanfile.command.replace("\\", "/") == '"/path/to/sources/subfolder/configure" -foo bar '
autotools = Autotools(conanfile)
autotools.configure()
- assert conanfile.command.replace("\\", "/") == '"/path/to/sources/configure" -foo bar'
+ assert conanfile.command.replace("\\", "/") == '"/path/to/sources/configure" -foo bar '
diff --git a/conans/test/utils/mocks.py b/conans/test/utils/mocks.py
index 45997592a69..4103bc4f3a9 100644
--- a/conans/test/utils/mocks.py
+++ b/conans/test/utils/mocks.py
@@ -9,9 +9,10 @@
from conans import ConanFile, Options
from conans.client.output import ConanOutput
from conans.client.userio import UserIO
+from conans.errors import ConanException
from conans.model.conf import ConfDefinition
from conans.model.env_info import DepsEnvInfo, EnvInfo, EnvValues
-from conans.model.layout import Folders
+from conans.model.layout import Folders, Infos
from conans.model.options import PackageOptions
from conans.model.user_info import DepsUserInfo
@@ -83,8 +84,14 @@ class MockSettings(object):
def __init__(self, values):
self.values = values
- def get_safe(self, value):
- return self.values.get(value, None)
+ def get_safe(self, value, default=None):
+ return self.values.get(value, default)
+
+ def __getattr__(self, name):
+ try:
+ return self.values[name]
+ except KeyError:
+ raise ConanException("'%s' value not defined" % name)
class MockCppInfo(object):
@@ -182,6 +189,7 @@ def __init__(self, shared=None, options=None, options_values=None):
self.env_scripts = {}
self.win_bash = None
self.conf = ConfDefinition().get_conanfile_conf(None)
+ self.cpp = Infos()
def run(self, command, win_bash=False, subsystem=None, env=None, ignore_errors=False):
assert win_bash is False
| 2022-05-17T16:50:25
|
{}
|
{"conan/tools/gnu/autotools.py": "import os\n\nfrom conan.tools.build import build_jobs\nfrom conan.tools.files.files import load_toolchain_args\nfrom conans.client.subsystems import subsystem_path, deduce_subsystem\nfrom conans.client.build import join_arguments\nfrom conans.tools import args_to_string\nfrom conan.tools.files import chdir\nfrom conans.util.runners import check_output_runner\n\n\nclass Autotools(object):\n\n def __init__(self, conanfile, namespace=None, build_script_folder=None):\n self._conanfile = conanfile\n\n toolchain_file_content = load_toolchain_args(self._conanfile.generators_folder,\n namespace=namespace)\n self._configure_args = toolchain_file_content.get(\"configure_args\")\n self._make_args = toolchain_file_content.get(\"make_args\")\n self.default_configure_install_args = True\n self.build_script_folder = os.path.join(self._conanfile.source_folder, build_script_folder) \\\n if build_script_folder else self._conanfile.source_folder\n\n def configure(self):\n \"\"\"\n http://jingfenghanmax.blogspot.com.es/2010/09/configure-with-host-target-and-build.html\n https://gcc.gnu.org/onlinedocs/gccint/Configure-Terms.html\n \"\"\"\n configure_args = []\n if self.default_configure_install_args and self._conanfile.package_folder:\n def _get_argument(argument_name, cppinfo_name):\n elements = getattr(self._conanfile.cpp.package, cppinfo_name)\n return \"--{}=${{prefix}}/{}\".format(argument_name, elements[0]) if elements else \"\"\n\n # If someone want arguments but not the defaults can pass them in args manually\n configure_args.extend([\"--prefix=%s\" % self._conanfile.package_folder.replace(\"\\\\\", \"/\"),\n _get_argument(\"bindir\", \"bindirs\"),\n _get_argument(\"sbindir\", \"bindirs\"),\n _get_argument(\"libdir\", \"libdirs\"),\n _get_argument(\"includedir\", \"includedirs\"),\n _get_argument(\"oldincludedir\", \"includedirs\"),\n _get_argument(\"datarootdir\", \"resdirs\")])\n\n self._configure_args = \"{} {}\".format(self._configure_args, args_to_string(configure_args)) \\\n if configure_args else self._configure_args\n\n configure_cmd = \"{}/configure\".format(self.build_script_folder)\n subsystem = deduce_subsystem(self._conanfile, scope=\"build\")\n configure_cmd = subsystem_path(subsystem, configure_cmd)\n cmd = '\"{}\" {}'.format(configure_cmd, self._configure_args)\n self._conanfile.output.info(\"Calling:\\n > %s\" % cmd)\n self._conanfile.run(cmd)\n\n def make(self, target=None):\n make_program = self._conanfile.conf.get(\"tools.gnu:make_program\",\n default=\"mingw32-make\" if self._use_win_mingw() else \"make\")\n str_args = self._make_args\n jobs = \"\"\n if \"-j\" not in str_args and \"nmake\" not in make_program.lower():\n njobs = build_jobs(self._conanfile)\n if njobs:\n jobs = \"-j{}\".format(njobs)\n command = join_arguments([make_program, target, str_args, jobs])\n self._conanfile.run(command)\n\n def _fix_osx_shared_install_name(self):\n\n def _osx_collect_dylibs(lib_folder):\n return [f for f in os.listdir(lib_folder) if f.endswith(\".dylib\")\n and not os.path.islink(os.path.join(lib_folder, f))]\n\n def _fix_install_name(lib_name, lib_folder):\n command = \"install_name_tool -id @rpath/{} {}\".format(lib_name, os.path.join(lib_folder,\n lib_name))\n self._conanfile.run(command)\n\n def _is_modified_install_name(lib_name, lib_folder):\n \"\"\"\n Check that the user did not change the default install_name using the install_name\n linker flag in that case we do not touch this field\n \"\"\"\n command = \"otool -D 
{}\".format(os.path.join(lib_folder, lib_name))\n out = check_output_runner(command).strip().split(\":\")[1]\n return False if str(os.path.join(lib_folder, shared_lib)) in out else True\n\n libdirs = getattr(self._conanfile.cpp.package, \"libdirs\")\n for folder in libdirs:\n full_folder = os.path.join(self._conanfile.package_folder, folder)\n shared_libs = _osx_collect_dylibs(full_folder)\n for shared_lib in shared_libs:\n if not _is_modified_install_name(shared_lib, full_folder):\n _fix_install_name(shared_lib, full_folder)\n\n def install(self):\n # FIXME: we have to run configure twice because the local flow won't work otherwise\n # because there's no package_folder until the package step\n self.configure()\n self.make(target=\"install\")\n if self._conanfile.settings.get_safe(\"os\") == \"Macos\" and self._conanfile.options.get_safe(\"shared\", False):\n self._fix_osx_shared_install_name()\n\n def autoreconf(self, args=None):\n command = [\"autoreconf\"]\n args = args or [\"--force\", \"--install\"]\n command.extend(args)\n command = join_arguments(command)\n with chdir(self, self._conanfile.source_folder):\n self._conanfile.run(command)\n\n def _use_win_mingw(self):\n if hasattr(self._conanfile, 'settings_build'):\n os_build = self._conanfile.settings_build.get_safe('os')\n else:\n os_build = self._conanfile.settings.get_safe(\"os\")\n\n if os_build == \"Windows\":\n compiler = self._conanfile.settings.get_safe(\"compiler\")\n sub = self._conanfile.settings.get_safe(\"os.subsystem\")\n if sub in (\"cygwin\", \"msys2\", \"msys\") or compiler == \"qcc\":\n return False\n else:\n if self._conanfile.win_bash:\n return False\n return True\n return False\n", "conan/tools/gnu/autotoolstoolchain.py": "from conan.tools._check_build_profile import check_using_build_profile\nfrom conan.tools._compilers import architecture_flag, build_type_flags, cppstd_flag, \\\n build_type_link_flags\nfrom conan.tools.apple.apple import apple_min_version_flag, to_apple_arch, \\\n apple_sdk_path\nfrom conan.tools.build.cross_building import cross_building, get_cross_building_settings\nfrom conan.tools.env import Environment\nfrom conan.tools.files.files import save_toolchain_args\nfrom conan.tools.gnu.get_gnu_triplet import _get_gnu_triplet\nfrom conan.tools.microsoft import VCVars, is_msvc, msvc_runtime_flag\nfrom conans.errors import ConanException\nfrom conans.tools import args_to_string\n\n\nclass AutotoolsToolchain:\n def __init__(self, conanfile, namespace=None):\n self._conanfile = conanfile\n self._namespace = namespace\n\n self.configure_args = []\n self.make_args = []\n\n # Flags\n self.cxxflags = []\n self.cflags = []\n self.ldflags = []\n self.defines = []\n\n # Defines\n self.gcc_cxx11_abi = self._get_cxx11_abi_define()\n self.ndebug = None\n build_type = self._conanfile.settings.get_safe(\"build_type\")\n if build_type in ['Release', 'RelWithDebInfo', 'MinSizeRel']:\n self.ndebug = \"NDEBUG\"\n\n # TODO: This is also covering compilers like Visual Studio, necessary to test it (&remove?)\n self.build_type_flags = build_type_flags(self._conanfile.settings)\n self.build_type_link_flags = build_type_link_flags(self._conanfile.settings)\n\n self.cppstd = cppstd_flag(self._conanfile.settings)\n self.arch_flag = architecture_flag(self._conanfile.settings)\n self.libcxx = self._get_libcxx_flag()\n self.fpic = self._conanfile.options.get_safe(\"fPIC\")\n self.msvc_runtime_flag = self._get_msvc_runtime_flag()\n\n # Cross build\n self._host = None\n self._build = None\n self._target = None\n\n 
self.apple_arch_flag = self.apple_isysroot_flag = None\n self.apple_min_version_flag = apple_min_version_flag(self._conanfile)\n\n if cross_building(self._conanfile):\n os_build, arch_build, os_host, arch_host = get_cross_building_settings(self._conanfile)\n compiler = self._conanfile.settings.get_safe(\"compiler\")\n self._host = _get_gnu_triplet(os_host, arch_host, compiler=compiler)\n self._build = _get_gnu_triplet(os_build, arch_build, compiler=compiler)\n\n # Apple Stuff\n if os_build == \"Macos\":\n sdk_path = apple_sdk_path(conanfile)\n apple_arch = to_apple_arch(self._conanfile.settings.get_safe(\"arch\"))\n # https://man.archlinux.org/man/clang.1.en#Target_Selection_Options\n self.apple_arch_flag = \"-arch {}\".format(apple_arch) if apple_arch else None\n # -isysroot makes all includes for your library relative to the build directory\n self.apple_isysroot_flag = \"-isysroot {}\".format(sdk_path) if sdk_path else None\n\n check_using_build_profile(self._conanfile)\n\n def _get_cxx11_abi_define(self):\n # https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html\n # The default is libstdc++11, only specify the contrary '_GLIBCXX_USE_CXX11_ABI=0'\n settings = self._conanfile.settings\n libcxx = settings.get_safe(\"compiler.libcxx\")\n if not libcxx:\n return\n\n compiler = settings.get_safe(\"compiler.base\") or settings.get_safe(\"compiler\")\n if compiler in ['clang', 'apple-clang', 'gcc']:\n if libcxx == 'libstdc++':\n return '_GLIBCXX_USE_CXX11_ABI=0'\n elif libcxx == \"libstdc++11\" and self._conanfile.conf.get(\"tools.gnu:define_libcxx11_abi\",\n check_type=bool):\n return '_GLIBCXX_USE_CXX11_ABI=1'\n\n def _get_msvc_runtime_flag(self):\n flag = msvc_runtime_flag(self._conanfile)\n if flag:\n flag = \"-{}\".format(flag)\n return flag\n\n def _get_libcxx_flag(self):\n settings = self._conanfile.settings\n libcxx = settings.get_safe(\"compiler.libcxx\")\n if not libcxx:\n return\n\n compiler = settings.get_safe(\"compiler.base\") or settings.get_safe(\"compiler\")\n\n if compiler in ['clang', 'apple-clang']:\n if libcxx in ['libstdc++', 'libstdc++11']:\n return '-stdlib=libstdc++'\n elif libcxx == 'libc++':\n return '-stdlib=libc++'\n elif compiler == 'sun-cc':\n return ({\"libCstd\": \"-library=Cstd\",\n \"libstdcxx\": \"-library=stdcxx4\",\n \"libstlport\": \"-library=stlport4\",\n \"libstdc++\": \"-library=stdcpp\"}.get(libcxx))\n elif compiler == \"qcc\":\n return \"-Y _%s\" % str(libcxx)\n\n @staticmethod\n def _filter_list_empty_fields(v):\n return list(filter(bool, v))\n\n def _get_extra_flags(self):\n # Now, it's time to get all the flags defined by the user\n cxxflags = self._conanfile.conf.get(\"tools.build:cxxflags\", default=[], check_type=list)\n cflags = self._conanfile.conf.get(\"tools.build:cflags\", default=[], check_type=list)\n sharedlinkflags = self._conanfile.conf.get(\"tools.build:sharedlinkflags\", default=[], check_type=list)\n exelinkflags = self._conanfile.conf.get(\"tools.build:exelinkflags\", default=[], check_type=list)\n defines = self._conanfile.conf.get(\"tools.build:defines\", default=[], check_type=list)\n return {\n \"cxxflags\": cxxflags,\n \"cflags\": cflags,\n \"defines\": defines,\n \"ldflags\": sharedlinkflags + exelinkflags\n }\n\n def environment(self):\n env = Environment()\n\n apple_flags = [self.apple_isysroot_flag, self.apple_arch_flag, self.apple_min_version_flag]\n fpic = \"-fPIC\" if self.fpic else None\n extra_flags = self._get_extra_flags()\n\n self.cxxflags.extend([self.libcxx, self.cppstd,\n self.arch_flag, fpic, 
self.msvc_runtime_flag]\n + self.build_type_flags + apple_flags + extra_flags[\"cxxflags\"])\n self.cflags.extend([self.arch_flag, fpic, self.msvc_runtime_flag]\n + self.build_type_flags + apple_flags + extra_flags[\"cflags\"])\n self.ldflags.extend([self.arch_flag] + self.build_type_link_flags\n + apple_flags + extra_flags[\"ldflags\"])\n self.defines.extend([self.ndebug, self.gcc_cxx11_abi] + extra_flags[\"defines\"])\n\n if is_msvc(self._conanfile):\n env.define(\"CXX\", \"cl\")\n env.define(\"CC\", \"cl\")\n\n env.append(\"CPPFLAGS\", [\"-D{}\".format(d) for d in self._filter_list_empty_fields(self.defines)])\n env.append(\"CXXFLAGS\", self._filter_list_empty_fields(self.cxxflags))\n env.append(\"CFLAGS\", self._filter_list_empty_fields(self.cflags))\n env.append(\"LDFLAGS\", self._filter_list_empty_fields(self.ldflags))\n env.prepend_path(\"PKG_CONFIG_PATH\", self._conanfile.generators_folder)\n\n return env\n\n def vars(self):\n return self.environment().vars(self._conanfile, scope=\"build\")\n\n def generate(self, env=None, scope=\"build\"):\n env = env or self.environment()\n env = env.vars(self._conanfile, scope=scope)\n env.save_script(\"conanautotoolstoolchain\")\n self.generate_args()\n VCVars(self._conanfile).generate(scope=scope)\n\n def _shared_static_args(self):\n args = []\n if self._conanfile.options.get_safe(\"shared\", False):\n args.extend([\"--enable-shared\", \"--disable-static\"])\n else:\n args.extend([\"--disable-shared\", \"--enable-static\", \"--with-pic\"\n if self._conanfile.options.get_safe(\"fPIC\", True)\n else \"--without-pic\"])\n return args\n\n def generate_args(self):\n configure_args = []\n configure_args.extend(self._shared_static_args())\n configure_args.extend(self.configure_args)\n user_args_str = args_to_string(self.configure_args)\n for flag, var in ((\"host\", self._host), (\"build\", self._build), (\"target\", self._target)):\n if var and flag not in user_args_str:\n configure_args.append('--{}={}'.format(flag, var))\n\n args = {\"configure_args\": args_to_string(configure_args),\n \"make_args\": args_to_string(self.make_args)}\n\n save_toolchain_args(args, namespace=self._namespace)\n", "conans/assets/templates/new_v2_autotools.py": "import textwrap\n\nfrom conans.assets.templates.new_v2_cmake import source_cpp, source_h, test_main\n\nconanfile_lib = textwrap.dedent(\"\"\"\n import os\n\n from conan import ConanFile\n from conan.tools.gnu import AutotoolsToolchain, Autotools\n from conan.tools.layout import basic_layout\n from conan.tools.files import chdir\n\n\n class {package_name}Conan(ConanFile):\n name = \"{name}\"\n version = \"{version}\"\n\n # Optional metadata\n license = \"<Put the package license here>\"\n author = \"<Put your name here> <And your email here>\"\n url = \"<Package recipe repository url here, for issues about the package>\"\n description = \"<Description of {package_name} here>\"\n topics = (\"<Put some tag here>\", \"<here>\", \"<and here>\")\n\n # Binary configuration\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n options = {{\"shared\": [True, False], \"fPIC\": [True, False]}}\n default_options = {{\"shared\": False, \"fPIC\": True}}\n\n exports_sources = \"configure.ac\", \"Makefile.am\", \"src/*\"\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def layout(self):\n basic_layout(self)\n\n def generate(self):\n at_toolchain = AutotoolsToolchain(self)\n at_toolchain.generate()\n\n def build(self):\n autotools = Autotools(self)\n autotools.autoreconf()\n 
autotools.configure()\n autotools.make()\n\n def package(self):\n autotools = Autotools(self)\n autotools.install()\n\n def package_info(self):\n self.cpp_info.libs = [\"{name}\"]\n \"\"\")\n\nconfigure_ac = textwrap.dedent(\"\"\"\n AC_INIT([{name}], [{version}], [])\n AM_INIT_AUTOMAKE([-Wall -Werror foreign])\n AC_PROG_CXX\n AM_PROG_AR\n LT_INIT\n AC_CONFIG_FILES([Makefile src/Makefile])\n AC_OUTPUT\n \"\"\")\n\nmakefile_am = textwrap.dedent(\"\"\"\n SUBDIRS = src\n \"\"\")\n\nmakefile_am_lib = textwrap.dedent(\"\"\"\n lib_LTLIBRARIES = lib{name}.la\n lib{name}_la_SOURCES = {name}.cpp {name}.h\n lib{name}_la_HEADERS = {name}.h\n lib{name}_ladir = $(includedir)\n \"\"\")\n\ntest_conanfile = textwrap.dedent(\"\"\"\n import os\n\n from conan import ConanFile\n from conan.tools.gnu import AutotoolsToolchain, Autotools, AutotoolsDeps\n from conan.tools.layout import basic_layout\n from conan.tools.build import cross_building\n from conan.tools.files import chdir\n\n\n class {package_name}TestConan(ConanFile):\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n # VirtualBuildEnv and VirtualRunEnv can be avoided if \"tools.env.virtualenv:auto_use\" is defined\n # (it will be defined in Conan 2.0)\n generators = \"AutotoolsDeps\", \"AutotoolsToolchain\", \"VirtualBuildEnv\", \"VirtualRunEnv\"\n apply_env = False\n test_type = \"explicit\"\n\n def requirements(self):\n self.requires(self.tested_reference_str)\n\n def build(self):\n autotools = Autotools(self)\n autotools.autoreconf()\n autotools.configure()\n autotools.make()\n\n def layout(self):\n basic_layout(self)\n\n def test(self):\n if not cross_building(self):\n cmd = os.path.join(self.cpp.build.bindirs[0], \"main\")\n self.run(cmd, env=\"conanrun\")\n \"\"\")\n\ntest_configure_ac = textwrap.dedent(\"\"\"\n AC_INIT([main], [1.0], [])\n AM_INIT_AUTOMAKE([-Wall -Werror foreign])\n AC_PROG_CXX\n AC_PROG_RANLIB\n AM_PROG_AR\n AC_CONFIG_FILES([Makefile])\n AC_OUTPUT\n \"\"\")\n\ntest_makefile_am = textwrap.dedent(\"\"\"\n bin_PROGRAMS = main\n main_SOURCES = main.cpp\n \"\"\")\n\n\ndef get_autotools_lib_files(name, version, package_name=\"Pkg\"):\n files = {\"conanfile.py\": conanfile_lib.format(name=name, version=version,\n package_name=package_name),\n \"src/Makefile.am\": makefile_am_lib.format(name=name, version=version),\n \"src/{}.cpp\".format(name): source_cpp.format(name=name, version=version),\n \"src/{}.h\".format(name): source_h.format(name=name, version=version),\n \"configure.ac\": configure_ac.format(name=name, version=version),\n \"Makefile.am\": makefile_am.format(name=name, version=version),\n \"test_package/conanfile.py\": test_conanfile.format(name=name, version=version,\n package_name=package_name),\n \"test_package/main.cpp\": test_main.format(name=name),\n \"test_package/configure.ac\": test_configure_ac.format(name=name, version=version),\n \"test_package/Makefile.am\": test_makefile_am.format(name=name, version=version)}\n return files\n\n\nconanfile_exe = textwrap.dedent(\"\"\"\n import os\n\n from conan import ConanFile\n from conan.tools.gnu import AutotoolsToolchain, Autotools\n from conan.tools.layout import basic_layout\n from conan.tools.files import chdir\n\n\n class {package_name}Conan(ConanFile):\n name = \"{name}\"\n version = \"{version}\"\n\n # Optional metadata\n license = \"<Put the package license here>\"\n author = \"<Put your name here> <And your email here>\"\n url = \"<Package recipe repository url here, for issues about the package>\"\n description = \"<Description of {package_name} 
here>\"\n topics = (\"<Put some tag here>\", \"<here>\", \"<and here>\")\n\n # Binary configuration\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n\n # Sources are located in the same place as this recipe, copy them to the recipe\n exports_sources = \"configure.ac\", \"Makefile.am\", \"src/*\"\n\n def layout(self):\n basic_layout(self)\n\n def generate(self):\n at_toolchain = AutotoolsToolchain(self)\n at_toolchain.generate()\n\n def build(self):\n autotools = Autotools(self)\n autotools.autoreconf()\n autotools.configure()\n autotools.make()\n\n def package(self):\n autotools = Autotools(self)\n autotools.install()\n \"\"\")\n\ntest_conanfile_exe = textwrap.dedent(\"\"\"\n import os\n from conan import ConanFile\n from conan.tools.build import cross_building\n from conan.tools.layout import basic_layout\n\n\n class {package_name}TestConan(ConanFile):\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n # VirtualRunEnv can be avoided if \"tools.env.virtualenv:auto_use\" is defined\n # (it will be defined in Conan 2.0)\n generators = \"VirtualRunEnv\"\n apply_env = False\n test_type = \"explicit\"\n\n def requirements(self):\n self.requires(self.tested_reference_str)\n\n def layout(self):\n basic_layout(self)\n\n def test(self):\n if not cross_building(self):\n self.run(\"{name}\", env=\"conanrun\")\n \"\"\")\n\nmakefile_am_exe = textwrap.dedent(\"\"\"\n bin_PROGRAMS = {name}\n {name}_SOURCES = main.cpp {name}.cpp {name}.h\n \"\"\")\n\n\ndef get_autotools_exe_files(name, version, package_name=\"Pkg\"):\n files = {\"conanfile.py\": conanfile_exe.format(name=name, version=version,\n package_name=package_name),\n \"src/Makefile.am\": makefile_am_exe.format(name=name, version=version),\n \"src/main.cpp\": test_main.format(name=name),\n \"src/{}.cpp\".format(name): source_cpp.format(name=name, version=version),\n \"src/{}.h\".format(name): source_h.format(name=name, version=version),\n \"configure.ac\": configure_ac.format(name=name, version=version),\n \"Makefile.am\": makefile_am.format(name=name, version=version),\n \"test_package/conanfile.py\": test_conanfile_exe.format(name=name, version=version,\n package_name=package_name)}\n return files\n"}
|
{"conan/tools/gnu/autotoolstoolchain.py": [{"type": "function", "name": "AutotoolsToolchain._default_configure_shared_flags", "lines": [171, 183], "signature": "def _default_configure_shared_flags(self):", "doc": ""}, {"type": "function", "name": "AutotoolsToolchain._default_configure_install_flags", "lines": [185, 200], "signature": "def _default_configure_install_flags(self):", "doc": ""}, {"type": "function", "name": "AutotoolsToolchain._default_configure_install_flags._get_argument", "lines": [188, 190], "signature": "def _get_argument(argument_name, cppinfo_name):", "doc": ""}, {"type": "function", "name": "AutotoolsToolchain._default_autoreconf_flags", "lines": [202, 203], "signature": "def _default_autoreconf_flags(self):", "doc": ""}]}
| null |
["conans/test/unittests/tools/gnu/autotools_test.py::test_source_folder_works"]
|
[]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1652804521.0, "pr_title": "More flexibility in Autotools tools to override arguments and avoid all default arguments", "pr_body": "Changelog: Fix: Use `DESTDIR` argument in `make install` step instead of using the `--prefix` in configure.\r\nChangelog: Feature: More flexibility in Autotools tools to override arguments and avoid all default arguments for `make`, `autoreconf` and `configure`.\r\nDocs: https://github.com/conan-io/docs/pull/2562\r\n\r\nSome notes:\r\n- As suggested in [this issue](https://github.com/conan-io/conan/issues/11264), we are using now DESTDIR argument for make install.\r\n- As a side effect, we can remove the second configure in install and put the `build_script_folder` argument back in the configure method instead of having it in the constructor.\r\n- Also, now that we don't have to use the package_folder for the --prefix argument, I have removed the `default_configure_install_args` attribute and moved the default install arguments to the toolchain.\r\n\r\n\r\nCloses: https://github.com/conan-io/conan/issues/11265\r\nCloses: https://github.com/conan-io/conan/issues/11264\r\n\r\n", "pr_timeline": [{"time": 1653474365.0, "comment": "@mattyclarkson this is looking better for me now, but it would be great if you could have a look to the changes and see if they align with your expectations."}, {"time": 1653654305.0, "comment": "Merged, because we need to start branching 1.49, but please @mattyclarkson if you have any feedback, it would be great to have it. Thanks!"}, {"time": 1654264847.0, "comment": "> Merged, because we need to start branching 1.49, but please @mattyclarkson if you have any feedback, it would be great to have it. Thanks!\r\n\r\nYo, sorry, I was AFK for a while taking a break. This looks great :tada: Thanks for the changes. We will roll to the newer versions and try it out. If we have any extra feedback, I'll make sure to create an issue but this looks in great shape for 2.0."}], "issues": {}}
|
|
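Taken together, the patch in the record above moves `build_script_folder` from the `Autotools` constructor to `configure()`, lets `configure()`, `make()`, `autoreconf()` and `install()` take extra `args`, makes `install()` default to `make install DESTDIR=<package_folder>`, and moves the default `--prefix=/`, shared/static and autoreconf flags into `AutotoolsToolchain`. A minimal recipe sketch of that flow, with the package name, sources and the extra `V=1` make argument invented for illustration:

from conan import ConanFile
from conan.tools.gnu import Autotools, AutotoolsToolchain
from conan.tools.layout import basic_layout


class HelloConan(ConanFile):
    name = "hello"
    version = "1.0"
    settings = "os", "compiler", "build_type", "arch"
    options = {"shared": [True, False], "fPIC": [True, False]}
    default_options = {"shared": False, "fPIC": True}
    exports_sources = "configure.ac", "Makefile.am", "src/*"

    def layout(self):
        basic_layout(self)

    def generate(self):
        tc = AutotoolsToolchain(self)
        # configure_args/make_args/autoreconf_args now carry the defaults
        # (--prefix=/, the shared/static flags, --force --install) and can be
        # replaced wholesale when a project needs different arguments.
        tc.generate()

    def build(self):
        autotools = Autotools(self)
        autotools.autoreconf()        # uses the autoreconf_args saved by the toolchain
        autotools.configure()         # build_script_folder= is now a configure() argument
        autotools.make(args=["V=1"])  # extra, per-call make arguments

    def package(self):
        autotools = Autotools(self)
        # install() no longer re-runs configure; it calls
        # `make install DESTDIR=<package_folder>` unless args= overrides it.
        autotools.install()

    def package_info(self):
        self.cpp_info.libs = ["hello"]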
conan-io/conan
| 11,569
|
https://github.com/conan-io/conan/pull/11569
|
conan-io__conan-11569
|
[]
|
345be91a038e1bda707e07a19889953412d358dc
|
diff --git a/conan/tools/files/files.py b/conan/tools/files/files.py
index 7eca6af893f..3f69573b979 100644
--- a/conan/tools/files/files.py
+++ b/conan/tools/files/files.py
@@ -11,6 +11,8 @@
from fnmatch import fnmatch
import six
+from urllib.parse import urlparse
+from urllib.request import url2pathname
from conan.tools import CONAN_TOOLCHAIN_ARGS_FILE, CONAN_TOOLCHAIN_ARGS_SECTION
from conan.tools.apple.apple import is_apple_os
@@ -165,10 +167,13 @@ def download(conanfile, url, filename, verify=True, retry=None, retry_wait=None,
def _download_file(file_url):
# The download cache is only used if a checksum is provided, otherwise, a normal download
- run_downloader(requester=requester, output=out, verify=verify, download_cache=download_cache,
- user_download=True, url=file_url,
- file_path=filename, retry=retry, retry_wait=retry_wait, overwrite=overwrite,
- auth=auth, headers=headers, md5=md5, sha1=sha1, sha256=sha256)
+ if file_url.startswith("file:"):
+ _copy_local_file_from_uri(conanfile, url=file_url, file_path=filename, md5=md5, sha1=sha1, sha256=sha256)
+ else:
+ run_downloader(requester=requester, output=out, verify=verify, download_cache=download_cache,
+ user_download=True, url=file_url,
+ file_path=filename, retry=retry, retry_wait=retry_wait, overwrite=overwrite,
+ auth=auth, headers=headers, md5=md5, sha1=sha1, sha256=sha256)
out.writeln("")
if not isinstance(url, (list, tuple)):
@@ -184,6 +189,21 @@ def _download_file(file_url):
else:
raise ConanException("All downloads from ({}) URLs have failed.".format(len(url)))
+def _copy_local_file_from_uri(conanfile, url, file_path, md5=None, sha1=None, sha256=None):
+ file_origin = _path_from_file_uri(url)
+ shutil.copyfile(file_origin, file_path)
+
+ if md5:
+ check_md5(conanfile, file_path, md5)
+ if sha1:
+ check_sha1(conanfile, file_path, sha1)
+ if sha256:
+ check_sha256(conanfile, file_path, sha256)
+
+def _path_from_file_uri(uri):
+ path = urlparse(uri).path
+ return url2pathname(path)
+
def rename(conanfile, src, dst):
"""
|
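A hedged usage sketch of what the patch above enables, placed before the record's test_patch: `download()` now short-circuits URLs with the `file:` scheme into a local copy plus the usual checksum checks. The mirror path and the checksum placeholder are invented; only the `download(conanfile, url, filename, ...)` and `unzip(...)` signatures come from the record's files field.

import os

from conan import ConanFile
from conan.tools.files import download, unzip


class LocalTarballConan(ConanFile):
    name = "localsrc"
    version = "1.0"

    def source(self):
        # With the file: scheme, the patched download() copies the file from disk
        # (urlparse + url2pathname) instead of going through run_downloader(),
        # and still verifies md5/sha1/sha256 when they are provided.
        download(self, "file:///opt/mirror/localsrc-1.0.tar.gz",
                 "localsrc-1.0.tar.gz",
                 sha256="0" * 64)  # placeholder checksum for the sketch
        unzip(self, "localsrc-1.0.tar.gz", strip_root=True)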
diff --git a/conans/test/unittests/tools/files/test_downloads.py b/conans/test/unittests/tools/files/test_downloads.py
index 30675c976e2..1ec8077ca78 100644
--- a/conans/test/unittests/tools/files/test_downloads.py
+++ b/conans/test/unittests/tools/files/test_downloads.py
@@ -1,4 +1,5 @@
import os
+import platform
import pytest
import requests
@@ -157,6 +158,32 @@ def test_download_no_retries_errors(self, bottle_server):
assert "Waiting" not in str(conanfile.output)
assert "retry" not in str(conanfile.output)
+ def test_download_localfile(self):
+ conanfile = ConanFileMock()
+ conanfile._conan_requester = requests
+
+ file_location = os.path.join(temp_folder(), "file.txt")
+ save(file_location, "this is some content")
+
+ file_url = f"file:///{file_location}"
+ file_md5 = "736db904ad222bf88ee6b8d103fceb8e"
+
+ dest = os.path.join(temp_folder(), "downloaded_file.txt")
+ download(conanfile, file_url, dest, md5=file_md5)
+ content = load(dest)
+ assert "this is some content" == content
+
+ def test_download_localfile_notfound(self):
+ conanfile = ConanFileMock()
+ conanfile._conan_requester = requests
+
+ file_url = "file:///path/to/missing/file.txt"
+ dest = os.path.join(temp_folder(), "file.txt")
+
+ with pytest.raises(FileNotFoundError) as exc:
+ download(conanfile, file_url, dest)
+
+ assert "No such file" in str(exc.value)
@pytest.fixture()
def bottle_server_zip():
| 2022-07-04T14:24:18
|
{}
|
{"conan/tools/files/files.py": "import configparser\nimport errno\nimport gzip\nimport hashlib\nimport os\nimport platform\nimport shutil\nimport subprocess\nimport sys\nfrom contextlib import contextmanager\nfrom fnmatch import fnmatch\n\nimport six\n\nfrom conan.tools import CONAN_TOOLCHAIN_ARGS_FILE, CONAN_TOOLCHAIN_ARGS_SECTION\nfrom conan.tools.apple.apple import is_apple_os\nfrom conans.client.downloaders.download import run_downloader\nfrom conans.errors import ConanException\nfrom conans.util.files import rmdir as _internal_rmdir\nfrom conans.util.runners import check_output_runner\n\nif six.PY3: # Remove this IF in develop2\n from shutil import which\n\n\ndef load(conanfile, path, encoding=\"utf-8\"):\n \"\"\" Loads a file content \"\"\"\n with open(path, 'rb') as handle:\n tmp = handle.read()\n return tmp.decode(encoding)\n\n\ndef save(conanfile, path, content, append=False, encoding=\"utf-8\"):\n if append:\n mode = \"ab\"\n try:\n os.makedirs(os.path.dirname(path))\n except Exception:\n pass\n else:\n mode = \"wb\"\n dir_path = os.path.dirname(path)\n if not os.path.isdir(dir_path):\n try:\n os.makedirs(dir_path)\n except OSError as error:\n if error.errno not in (errno.EEXIST, errno.ENOENT):\n raise OSError(\"The folder {} does not exist and could not be created ({}).\"\n .format(dir_path, error.strerror))\n except Exception:\n raise\n\n with open(path, mode) as handle:\n if not isinstance(content, bytes):\n content = bytes(content, encoding=encoding)\n handle.write(content)\n\n\ndef mkdir(conanfile, path):\n \"\"\"Recursive mkdir, doesnt fail if already existing\"\"\"\n if os.path.exists(path):\n return\n os.makedirs(path)\n\n\ndef rmdir(conanfile, path):\n _internal_rmdir(path)\n\n\ndef rm(conanfile, pattern, folder, recursive=False):\n for root, _, filenames in os.walk(folder):\n for filename in filenames:\n if fnmatch(filename, pattern):\n fullname = os.path.join(root, filename)\n os.unlink(fullname)\n if not recursive:\n break\n\n\ndef get(conanfile, url, md5='', sha1='', sha256='', destination=\".\", filename=\"\",\n keep_permissions=False, pattern=None, verify=True, retry=None, retry_wait=None,\n auth=None, headers=None, strip_root=False):\n \"\"\" high level downloader + unzipper + (optional hash checker) + delete temporary zip\n \"\"\"\n\n if not filename: # deduce filename from the URL\n url_base = url[0] if isinstance(url, (list, tuple)) else url\n if \"?\" in url_base or \"=\" in url_base:\n raise ConanException(\"Cannot deduce file name from the url: '{}'. 
Use 'filename' \"\n \"parameter.\".format(url_base))\n filename = os.path.basename(url_base)\n\n download(conanfile, url, filename, verify=verify,\n retry=retry, retry_wait=retry_wait, auth=auth, headers=headers,\n md5=md5, sha1=sha1, sha256=sha256)\n unzip(conanfile, filename, destination=destination, keep_permissions=keep_permissions,\n pattern=pattern, strip_root=strip_root)\n os.unlink(filename)\n\n\ndef ftp_download(conanfile, ip, filename, login='', password=''):\n # TODO: Check if we want to join this method with download() one, based on ftp:// protocol\n # this has been requested by some users, but the signature is a bit divergent\n import ftplib\n ftp = None\n try:\n ftp = ftplib.FTP(ip)\n ftp.login(login, password)\n filepath, filename = os.path.split(filename)\n if filepath:\n ftp.cwd(filepath)\n with open(filename, 'wb') as f:\n ftp.retrbinary('RETR ' + filename, f.write)\n except Exception as e:\n try:\n os.unlink(filename)\n except OSError:\n pass\n raise ConanException(\"Error in FTP download from %s\\n%s\" % (ip, str(e)))\n finally:\n if ftp:\n ftp.quit()\n\n\ndef download(conanfile, url, filename, verify=True, retry=None, retry_wait=None,\n auth=None, headers=None, md5='', sha1='', sha256=''):\n \"\"\"Retrieves a file from a given URL into a file with a given filename.\n It uses certificates from a list of known verifiers for https downloads,\n but this can be optionally disabled.\n\n :param conanfile:\n :param url: URL to download. It can be a list, which only the first one will be downloaded, and\n the follow URLs will be used as mirror in case of download error.\n :param filename: Name of the file to be created in the local storage\n :param verify: When False, disables https certificate validation\n :param retry: Number of retries in case of failure. Default is overriden by general.retry in the\n conan.conf file or an env variable CONAN_RETRY\n :param retry_wait: Seconds to wait between download attempts. 
Default is overriden by\n general.retry_wait in the conan.conf file or an env variable CONAN_RETRY_WAIT\n :param auth: A tuple of user and password to use HTTPBasic authentication\n :param headers: A dictionary with additional headers\n :param md5: MD5 hash code to check the downloaded file\n :param sha1: SHA-1 hash code to check the downloaded file\n :param sha256: SHA-256 hash code to check the downloaded file\n :return: None\n \"\"\"\n # TODO: Add all parameters to the new conf\n out = conanfile.output\n requester = conanfile._conan_requester\n config = conanfile.conf\n overwrite = True\n\n if config[\"tools.files.download:retry\"]:\n retry = int(config[\"tools.files.download:retry\"])\n elif retry is None:\n retry = 1\n\n if config[\"tools.files.download:retry_wait\"]:\n retry_wait = int(config[\"tools.files.download:retry_wait\"])\n elif retry_wait is None:\n retry_wait = 5\n\n checksum = sha256 or sha1 or md5\n download_cache = config[\"tools.files.download:download_cache\"] if checksum else None\n\n def _download_file(file_url):\n # The download cache is only used if a checksum is provided, otherwise, a normal download\n run_downloader(requester=requester, output=out, verify=verify, download_cache=download_cache,\n user_download=True, url=file_url,\n file_path=filename, retry=retry, retry_wait=retry_wait, overwrite=overwrite,\n auth=auth, headers=headers, md5=md5, sha1=sha1, sha256=sha256)\n out.writeln(\"\")\n\n if not isinstance(url, (list, tuple)):\n _download_file(url)\n else: # We were provided several URLs to try\n for url_it in url:\n try:\n _download_file(url_it)\n break\n except Exception as error:\n message = \"Could not download from the URL {}: {}.\".format(url_it, str(error))\n out.warn(message + \" Trying another mirror.\")\n else:\n raise ConanException(\"All downloads from ({}) URLs have failed.\".format(len(url)))\n\n\ndef rename(conanfile, src, dst):\n \"\"\"\n rename a file or folder to avoid \"Access is denied\" error on Windows\n :param conanfile: conanfile object\n :param src: Source file or folder\n :param dst: Destination file or folder\n :return: None\n \"\"\"\n # FIXME: This function has been copied from legacy. Needs to fix: which() call and wrap subprocess call.\n if os.path.exists(dst):\n raise ConanException(\"rename {} to {} failed, dst exists.\".format(src, dst))\n\n if platform.system() == \"Windows\" and which(\"robocopy\") and os.path.isdir(src):\n # /move Moves files and directories, and deletes them from the source after they are copied.\n # /e Copies subdirectories. 
Note that this option includes empty directories.\n # /ndl Specifies that directory names are not to be logged.\n # /nfl Specifies that file names are not to be logged.\n process = subprocess.Popen([\"robocopy\", \"/move\", \"/e\", \"/ndl\", \"/nfl\", src, dst],\n stdout=subprocess.PIPE)\n process.communicate()\n if process.returncode > 7: # https://ss64.com/nt/robocopy-exit.html\n raise ConanException(\"rename {} to {} failed.\".format(src, dst))\n else:\n try:\n os.rename(src, dst)\n except Exception as err:\n raise ConanException(\"rename {} to {} failed: {}\".format(src, dst, err))\n\n\ndef load_toolchain_args(generators_folder=None, namespace=None):\n \"\"\"\n Helper function to load the content of any CONAN_TOOLCHAIN_ARGS_FILE\n\n :param generators_folder: `str` folder where is located the CONAN_TOOLCHAIN_ARGS_FILE.\n :param namespace: `str` namespace to be prepended to the filename.\n :return: <class 'configparser.SectionProxy'>\n \"\"\"\n namespace_name = \"{}_{}\".format(namespace, CONAN_TOOLCHAIN_ARGS_FILE) if namespace \\\n else CONAN_TOOLCHAIN_ARGS_FILE\n args_file = os.path.join(generators_folder, namespace_name) if generators_folder \\\n else namespace_name\n toolchain_config = configparser.ConfigParser()\n toolchain_file = toolchain_config.read(args_file)\n if not toolchain_file:\n raise ConanException(\"The file %s does not exist. Please, make sure that it was not\"\n \" generated in another folder.\" % args_file)\n try:\n return toolchain_config[CONAN_TOOLCHAIN_ARGS_SECTION]\n except KeyError:\n raise ConanException(\"The primary section [%s] does not exist in the file %s. Please, add it\"\n \" as the default one of all your configuration variables.\" %\n (CONAN_TOOLCHAIN_ARGS_SECTION, args_file))\n\n\ndef save_toolchain_args(content, generators_folder=None, namespace=None):\n \"\"\"\n Helper function to save the content into the CONAN_TOOLCHAIN_ARGS_FILE\n\n :param content: `dict` all the information to be saved into the toolchain file.\n :param namespace: `str` namespace to be prepended to the filename.\n :param generators_folder: `str` folder where is located the CONAN_TOOLCHAIN_ARGS_FILE\n \"\"\"\n # Let's prune None values\n content_ = {k: v for k, v in content.items() if v is not None}\n namespace_name = \"{}_{}\".format(namespace, CONAN_TOOLCHAIN_ARGS_FILE) if namespace \\\n else CONAN_TOOLCHAIN_ARGS_FILE\n args_file = os.path.join(generators_folder, namespace_name) if generators_folder \\\n else namespace_name\n toolchain_config = configparser.ConfigParser()\n toolchain_config[CONAN_TOOLCHAIN_ARGS_SECTION] = content_\n with open(args_file, \"w\") as f:\n toolchain_config.write(f)\n\n\n@contextmanager\ndef chdir(conanfile, newdir):\n old_path = os.getcwd()\n os.chdir(newdir)\n try:\n yield\n finally:\n os.chdir(old_path)\n\n\ndef unzip(conanfile, filename, destination=\".\", keep_permissions=False, pattern=None,\n strip_root=False):\n \"\"\"\n Unzip a zipped file\n :param filename: Path to the zip file\n :param destination: Destination folder (or file for .gz files)\n :param keep_permissions: Keep the zip permissions. WARNING: Can be\n dangerous if the zip was not created in a NIX system, the bits could\n produce undefined permission schema. Use this option only if you are sure\n that the zip was created correctly.\n :param pattern: Extract only paths matching the pattern. 
This should be a\n Unix shell-style wildcard, see fnmatch documentation for more details.\n :param flat: If all the contents are in a single dir, flat that directory.\n :return:\n \"\"\"\n\n output = conanfile.output\n if (filename.endswith(\".tar.gz\") or filename.endswith(\".tgz\") or\n filename.endswith(\".tbz2\") or filename.endswith(\".tar.bz2\") or\n filename.endswith(\".tar\")):\n return untargz(filename, destination, pattern, strip_root)\n if filename.endswith(\".gz\"):\n with gzip.open(filename, 'rb') as f:\n file_content = f.read()\n target_name = filename[:-3] if destination == \".\" else destination\n save(conanfile, target_name, file_content)\n return\n if filename.endswith(\".tar.xz\") or filename.endswith(\".txz\"):\n return untargz(filename, destination, pattern, strip_root)\n\n import zipfile\n full_path = os.path.normpath(os.path.join(os.getcwd(), destination))\n\n if hasattr(sys.stdout, \"isatty\") and sys.stdout.isatty():\n def print_progress(the_size, uncomp_size):\n the_size = (the_size * 100.0 / uncomp_size) if uncomp_size != 0 else 0\n txt_msg = \"Unzipping %d %%\"\n if the_size > print_progress.last_size + 1:\n output.rewrite_line(txt_msg % the_size)\n print_progress.last_size = the_size\n if int(the_size) == 99:\n output.rewrite_line(txt_msg % 100)\n else:\n def print_progress(_, __):\n pass\n\n with zipfile.ZipFile(filename, \"r\") as z:\n zip_info = z.infolist()\n if pattern:\n zip_info = [zi for zi in zip_info if fnmatch(zi.filename, pattern)]\n if strip_root:\n names = [n.replace(\"\\\\\", \"/\") for n in z.namelist()]\n common_folder = os.path.commonprefix(names).split(\"/\", 1)[0]\n if not common_folder and len(names) > 1:\n raise ConanException(\"The zip file contains more than 1 folder in the root\")\n if len(names) == 1 and len(names[0].split(\"/\", 1)) == 1:\n raise ConanException(\"The zip file contains a file in the root\")\n # Remove the directory entry if present\n # Note: The \"zip\" format contains the \"/\" at the end if it is a directory\n zip_info = [m for m in zip_info if m.filename != (common_folder + \"/\")]\n for member in zip_info:\n name = member.filename.replace(\"\\\\\", \"/\")\n member.filename = name.split(\"/\", 1)[1]\n\n uncompress_size = sum((file_.file_size for file_ in zip_info))\n if uncompress_size > 100000:\n output.info(\"Unzipping %s, this can take a while\" % _human_size(uncompress_size))\n else:\n output.info(\"Unzipping %s\" % _human_size(uncompress_size))\n extracted_size = 0\n\n print_progress.last_size = -1\n if platform.system() == \"Windows\":\n for file_ in zip_info:\n extracted_size += file_.file_size\n print_progress(extracted_size, uncompress_size)\n try:\n z.extract(file_, full_path)\n except Exception as e:\n output.error(\"Error extract %s\\n%s\" % (file_.filename, str(e)))\n else: # duplicated for, to avoid a platform check for each zipped file\n for file_ in zip_info:\n extracted_size += file_.file_size\n print_progress(extracted_size, uncompress_size)\n try:\n z.extract(file_, full_path)\n if keep_permissions:\n # Could be dangerous if the ZIP has been created in a non nix system\n # https://bugs.python.org/issue15795\n perm = file_.external_attr >> 16 & 0xFFF\n os.chmod(os.path.join(full_path, file_.filename), perm)\n except Exception as e:\n output.error(\"Error extract %s\\n%s\" % (file_.filename, str(e)))\n output.writeln(\"\")\n\n\ndef untargz(filename, destination=\".\", pattern=None, strip_root=False):\n # NOT EXPOSED at `conan.tools.files` but used in tests\n import tarfile\n with 
tarfile.TarFile.open(filename, 'r:*') as tarredgzippedFile:\n if not pattern and not strip_root:\n tarredgzippedFile.extractall(destination)\n else:\n members = tarredgzippedFile.getmembers()\n\n if strip_root:\n names = [n.replace(\"\\\\\", \"/\") for n in tarredgzippedFile.getnames()]\n common_folder = os.path.commonprefix(names).split(\"/\", 1)[0]\n if not common_folder and len(names) > 1:\n raise ConanException(\"The tgz file contains more than 1 folder in the root\")\n if len(names) == 1 and len(names[0].split(\"/\", 1)) == 1:\n raise ConanException(\"The tgz file contains a file in the root\")\n # Remove the directory entry if present\n members = [m for m in members if m.name != common_folder]\n for member in members:\n name = member.name.replace(\"\\\\\", \"/\")\n member.name = name.split(\"/\", 1)[1]\n member.path = member.name\n if member.linkpath.startswith(common_folder):\n # https://github.com/conan-io/conan/issues/11065\n linkpath = member.linkpath.replace(\"\\\\\", \"/\")\n member.linkpath = linkpath.split(\"/\", 1)[1]\n member.linkname = member.linkpath\n if pattern:\n members = list(filter(lambda m: fnmatch(m.name, pattern),\n tarredgzippedFile.getmembers()))\n tarredgzippedFile.extractall(destination, members=members)\n\n\ndef _human_size(size_bytes):\n \"\"\"\n format a size in bytes into a 'human' file size, e.g. B, KB, MB, GB, TB, PB\n Note that bytes will be reported in whole numbers but KB and above will have\n greater precision. e.g. 43 B, 443 KB, 4.3 MB, 4.43 GB, etc\n \"\"\"\n UNIT_SIZE = 1000.0\n\n suffixes_table = [('B', 0), ('KB', 1), ('MB', 1), ('GB', 2), ('TB', 2), ('PB', 2)]\n\n num = float(size_bytes)\n the_precision = None\n the_suffix = None\n for suffix, precision in suffixes_table:\n the_precision = precision\n the_suffix = suffix\n if num < UNIT_SIZE:\n break\n num /= UNIT_SIZE\n\n if the_precision == 0:\n formatted_size = \"%d\" % num\n else:\n formatted_size = str(round(num, ndigits=the_precision))\n\n return \"%s%s\" % (formatted_size, the_suffix)\n\n\ndef check_sha1(conanfile, file_path, signature):\n _check_with_algorithm_sum(\"sha1\", file_path, signature)\n\n\ndef check_md5(conanfile, file_path, signature):\n _check_with_algorithm_sum(\"md5\", file_path, signature)\n\n\ndef check_sha256(conanfile, file_path, signature):\n _check_with_algorithm_sum(\"sha256\", file_path, signature)\n\n\ndef _check_with_algorithm_sum(algorithm_name, file_path, signature):\n real_signature = _generic_algorithm_sum(file_path, algorithm_name)\n if real_signature != signature.lower():\n raise ConanException(\"%s signature failed for '%s' file. 
\\n\"\n \" Provided signature: %s \\n\"\n \" Computed signature: %s\" % (algorithm_name,\n os.path.basename(file_path),\n signature,\n real_signature))\n\n\ndef _generic_algorithm_sum(file_path, algorithm_name):\n\n with open(file_path, 'rb') as fh:\n try:\n m = hashlib.new(algorithm_name)\n except ValueError: # FIPS error https://github.com/conan-io/conan/issues/7800\n m = hashlib.new(algorithm_name, usedforsecurity=False)\n while True:\n data = fh.read(8192)\n if not data:\n break\n m.update(data)\n return m.hexdigest()\n\n\ndef replace_in_file(conanfile, file_path, search, replace, strict=True, encoding=\"utf-8\"):\n \"\"\"\n :param conanfile: Conanfile instance\n :param file_path: Path to the file\n :param search: Pattern to search\n :param replace: string to replace the matches\n :param strict: Raise in case \"search\" is not found in the file contents\n :return:\n \"\"\"\n output = conanfile.output\n content = load(conanfile, file_path, encoding=encoding)\n if -1 == content.find(search):\n message = \"replace_in_file didn't find pattern '%s' in '%s' file.\" % (search, file_path)\n if strict:\n raise ConanException(message)\n else:\n output.warn(message)\n return False\n content = content.replace(search, replace)\n save(conanfile, file_path, content, encoding=encoding)\n\n\ndef collect_libs(conanfile, folder=None):\n if not conanfile.package_folder:\n return []\n if folder:\n lib_folders = [os.path.join(conanfile.package_folder, folder)]\n else:\n lib_folders = [os.path.join(conanfile.package_folder, folder)\n for folder in conanfile.cpp_info.libdirs]\n result = []\n for lib_folder in lib_folders:\n if not os.path.exists(lib_folder):\n conanfile.output.warn(\"Lib folder doesn't exist, can't collect libraries: \"\n \"{0}\".format(lib_folder))\n continue\n files = os.listdir(lib_folder)\n for f in files:\n name, ext = os.path.splitext(f)\n if ext in (\".so\", \".lib\", \".a\", \".dylib\", \".bc\"):\n if ext != \".lib\" and name.startswith(\"lib\"):\n name = name[3:]\n if name in result:\n conanfile.output.warn(\"Library '%s' was either already found in a previous \"\n \"'conanfile.cpp_info.libdirs' folder or appears several \"\n \"times with a different file extension\" % name)\n else:\n result.append(name)\n result.sort()\n return result\n\n\n# TODO: Do NOT document this yet. It is unclear the interface, maybe should be split\ndef swap_child_folder(parent_folder, child_folder):\n \"\"\" replaces the current folder contents with the contents of one child folder. This\n is used in the SCM monorepo flow, when it is necessary to use one subproject subfolder\n to replace the whole cloned git repo\n \"\"\"\n for f in os.listdir(parent_folder):\n if f != child_folder:\n path = os.path.join(parent_folder, f)\n if os.path.isfile(path):\n os.remove(path)\n else:\n _internal_rmdir(path)\n child = os.path.join(parent_folder, child_folder)\n for f in os.listdir(child):\n shutil.move(os.path.join(child, f), os.path.join(parent_folder, f))\n"}
|
{"conan/tools/files/files.py": [{"type": "function", "name": "_copy_local_file_from_uri", "lines": [192, 201], "signature": "def _copy_local_file_from_uri(conanfile, url, file_path, md5=None, sha1=None, sha256=None):", "doc": ""}, {"type": "function", "name": "_path_from_file_uri", "lines": [203, 205], "signature": "def _path_from_file_uri(uri):", "doc": ""}]}
| null |
["conans/test/unittests/tools/files/test_downloads.py::TestDownload::test_download_localfile", "conans/test/unittests/tools/files/test_downloads.py::TestDownload::test_download_localfile_notfound"]
|
["conans/test/unittests/tools/files/test_downloads.py::TestDownload::test_download", "conans/test/unittests/tools/files/test_downloads.py::TestDownload::test_download_iterate_url", "conans/test/unittests/tools/files/test_downloads.py::TestDownload::test_download_forbidden", "conans/test/unittests/tools/files/test_downloads.py::TestDownload::test_download_authorized", "conans/test/unittests/tools/files/test_downloads.py::TestDownload::test_download_retries_errors", "conans/test/unittests/tools/files/test_downloads.py::TestDownload::test_download_retries_500_errors", "conans/test/unittests/tools/files/test_downloads.py::TestDownload::test_download_no_retries_errors", "conans/test/unittests/tools/files/test_downloads.py::TestGet::test_get_tgz", "conans/test/unittests/tools/files/test_downloads.py::TestGet::test_get_tgz_strip_root", "conans/test/unittests/tools/files/test_downloads.py::TestGet::test_get_gunzip", "conans/test/unittests/tools/files/test_downloads.py::TestGet::test_get_gunzip_destination", "conans/test/unittests/tools/files/test_downloads.py::TestGet::test_get_gunzip_destination_subfolder", "conans/test/unittests/tools/files/test_downloads.py::TestGet::test_get_filename_error"]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1656943741.0, "pr_title": "Add ability to reference files in the local filesystem with file:// s\u2026", "pr_body": "Changelog: Feature: Add ability to download files in the local filesystem using `file://` URIs.\r\nDocs: https://github.com/conan-io/docs/pull/2635\r\nClose https://github.com/conan-io/conan/issues/8846\r\n\r\n## Description\r\nIt has been requested be able to download local files with `tools.download()` and `tools.get()` (https://github.com/conan-io/conan/issues/8846 and https://github.com/conan-io/conan-center-index/issues/10623).\r\n\r\nThis draft PR gives `conan.tools.files.download()` (which is used by `.get()`) the ability to resolve URLs that start with `file://` to files in the local filesystem. In that scenario, the URL provided will be converted to a filesystem path, and `shutil.copy()` will be used to copy the file into the destination.\r\n\r\nThe underling implementation uses `urllib` functionality to perform this conversion, rather than any internal logic. It should be noted that while the `file://` URI scheme is specified by [RFC 8089](https://datatracker.ietf.org/doc/html/rfc8089), different platforms, applications and implementations may behave slightly differently (see discussion [here](https://discuss.python.org/t/file-uris-in-python/15600)).\r\n\r\nNote that it is possible for a `file://` uri to point to resources on a remote system - this implementation makes an assumption that the path returned by the `urllib` library can be accessed without any additional handling. \r\nOn windows this may not cover all cases, see: https://docs.microsoft.com/en-us/dotnet/api/system.uri.isunc?view=net-6.0 \r\n\r\nNote that with this proposed implementation the downloads cache is skipped - under the assumption that the files to retrieve are already in the local filesystem. This may not always be true, since it's possible to locally mount network resources - happy to try and look into making this interact with the downloads cache if necessary!\r\n\r\n\r\n`pyfakefs` allows mocking a filesystem for python functions", "pr_timeline": [], "issues": {}}
|
|
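The record above concerns adding file:// URI support to conan.tools.files.download(): the URI is resolved to a local filesystem path and the file is copied instead of being fetched over HTTP, with the download cache skipped. The following is a rough, hedged sketch only — the helper names _path_from_file_uri and _copy_local_file_from_uri are taken from the record's new_components field, but the signatures and bodies below are assumptions for illustration, not Conan's actual implementation.

import hashlib
import shutil
from urllib.parse import unquote, urlparse
from urllib.request import url2pathname

def _path_from_file_uri(uri):
    # url2pathname handles platform quirks such as Windows drive letters;
    # unquote restores percent-encoded characters (e.g. spaces).
    return url2pathname(unquote(urlparse(uri).path))

def _copy_local_file_from_uri(url, file_path, sha256=None):
    # Plain filesystem copy; the download cache is deliberately not used,
    # matching the behaviour described in the PR body of this record.
    shutil.copy2(_path_from_file_uri(url), file_path)
    if sha256 is not None:
        with open(file_path, "rb") as fh:
            digest = hashlib.sha256(fh.read()).hexdigest()
        if digest != sha256.lower():
            raise ValueError("sha256 mismatch for {}".format(file_path))

# Hypothetical usage:
# _copy_local_file_from_uri("file:///tmp/sources.tgz", "sources.tgz")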
conan-io/conan
| 11585
|
https://github.com/conan-io/conan/pull/11585
|
conan-io__conan-11585
|
[]
|
b20b72beb77c063d8c27c55db37425349e232a26
|
diff --git a/conans/client/loader.py b/conans/client/loader.py
index 4fafe819775..d1d89130dd2 100644
--- a/conans/client/loader.py
+++ b/conans/client/loader.py
@@ -8,6 +8,8 @@
import yaml
+from pathlib import Path
+
from conan.tools.cmake import cmake_layout
from conan.tools.google import bazel_layout
from conan.tools.microsoft import vs_layout
@@ -73,6 +75,7 @@ def load_basic_module(self, conanfile_path, lock_python_requires=None, user=None
self._pyreq_loader.load_py_requires(conanfile, lock_python_requires, self)
conanfile.recipe_folder = os.path.dirname(conanfile_path)
+ conanfile.recipe_path = Path(conanfile.recipe_folder)
# If the scm is inherited, create my own instance
if hasattr(conanfile, "scm") and "scm" not in conanfile.__class__.__dict__:
diff --git a/conans/model/conan_file.py b/conans/model/conan_file.py
index ce9a2ba2a8e..69fe57e1a4d 100644
--- a/conans/model/conan_file.py
+++ b/conans/model/conan_file.py
@@ -1,6 +1,7 @@
import os
import platform
from contextlib import contextmanager
+from pathlib import Path
import six
from six import string_types
@@ -258,6 +259,11 @@ def new_cpp_info(self):
def source_folder(self):
return self.folders.source_folder
+ @property
+ def source_path(self) -> Path:
+ assert self.source_folder is not None, "`source_folder` is `None`"
+ return Path(self.source_folder)
+
@property
def export_sources_folder(self):
"""points to the base source folder when calling source() and to the cache export sources
@@ -265,18 +271,38 @@ def export_sources_folder(self):
'no_copy_export_sources' and point to the right location always."""
return self.folders.base_export_sources
+ @property
+ def export_sources_path(self) -> Path:
+ assert self.export_sources_folder is not None, "`export_sources_folder` is `None`"
+ return Path(self.export_sources_folder)
+
@property
def export_folder(self):
return self.folders.base_export
+ @property
+ def export_path(self) -> Path:
+ assert self.export_folder is not None, "`export_folder` is `None`"
+ return Path(self.export_folder)
+
@property
def build_folder(self):
return self.folders.build_folder
+ @property
+ def build_path(self) -> Path:
+ assert self.build_folder is not None, "`build_folder` is `None`"
+ return Path(self.build_folder)
+
@property
def package_folder(self):
return self.folders.base_package
+ @property
+ def package_path(self) -> Path:
+ assert self.package_folder is not None, "`package_folder` is `None`"
+ return Path(self.package_folder)
+
@property
def install_folder(self):
# FIXME: Remove in 2.0, no self.install_folder
@@ -287,6 +313,11 @@ def generators_folder(self):
# FIXME: Remove in 2.0, no self.install_folder
return self.folders.generators_folder if self.folders.generators else self.install_folder
+ @property
+ def generators_path(self) -> Path:
+ assert self.generators_folder is not None, "`generators_folder` is `None`"
+ return Path(self.generators_folder)
+
@property
def imports_folder(self):
return self.folders.imports_folder
diff --git a/conans/model/conanfile_interface.py b/conans/model/conanfile_interface.py
index 22617a72d28..fd9c8b5d127 100644
--- a/conans/model/conanfile_interface.py
+++ b/conans/model/conanfile_interface.py
@@ -1,3 +1,4 @@
+from pathlib import Path
from conans.client.graph.graph import CONTEXT_BUILD
@@ -29,6 +30,11 @@ def __ne__(self, other):
def package_folder(self):
return self._conanfile.package_folder
+ @property
+ def package_path(self) -> Path:
+ assert self.package_folder is not None, "`package_folder` is `None`"
+ return Path(self.package_folder)
+
@property
def ref(self):
return self._conanfile.ref
|
diff --git a/conans/test/integration/conanfile/folders_access_test.py b/conans/test/integration/conanfile/folders_access_test.py
index 3a6a635f81c..741643aa6ab 100644
--- a/conans/test/integration/conanfile/folders_access_test.py
+++ b/conans/test/integration/conanfile/folders_access_test.py
@@ -22,6 +22,7 @@ def package_info(self):
conanfile = """
import os
+from pathlib import Path
from conans import ConanFile
class AConan(ConanFile):
@@ -45,6 +46,8 @@ def assert_in_local_cache(self):
def source(self):
assert(self.source_folder == os.getcwd())
+ assert(isinstance(self.source_path, Path))
+ assert(str(self.source_path) == self.source_folder)
self.assert_in_local_cache()
# Prevented to use them, it's dangerous, because the source is run only for the first
@@ -65,6 +68,8 @@ def assert_deps_infos(self):
def build(self):
assert(self.build_folder == os.getcwd())
+ assert(isinstance(self.build_path, Path))
+ assert(str(self.build_path) == self.build_folder)
self.assert_in_local_cache()
self.assert_deps_infos()
@@ -77,11 +82,15 @@ def build(self):
self.install_folder
assert(self.package_folder is not None)
+ assert(isinstance(self.package_path, Path))
+ assert(str(self.package_path) == self.package_folder)
self.copy_build_folder = self.build_folder
def package(self):
assert(self.install_folder is not None)
assert(self.build_folder == os.getcwd())
+ assert(isinstance(self.build_path, Path))
+ assert(str(self.build_path) == self.build_folder)
self.assert_in_local_cache()
self.assert_deps_infos()
@@ -97,6 +106,8 @@ def package(self):
def package_info(self):
assert(self.package_folder == os.getcwd())
+ assert(isinstance(self.package_path, Path))
+ assert(str(self.package_path) == self.package_folder)
assert(self.in_local_cache == True)
assert(self.source_folder is None)
| 2022-07-07T14:41:53
|
{}
|
{"conans/client/loader.py": "import fnmatch\nimport imp\nimport inspect\nimport os\nimport sys\nimport types\nimport uuid\n\nimport yaml\n\nfrom conan.tools.cmake import cmake_layout\nfrom conan.tools.google import bazel_layout\nfrom conan.tools.microsoft import vs_layout\nfrom conans.client.conf.required_version import validate_conan_version\nfrom conans.client.loader_txt import ConanFileTextLoader\nfrom conans.client.tools.files import chdir\nfrom conans.errors import ConanException, NotFoundException, ConanInvalidConfiguration, \\\n conanfile_exception_formatter\nfrom conans.model.conan_file import ConanFile\nfrom conans.model.conan_generator import Generator\nfrom conans.model.options import OptionsValues\nfrom conans.model.ref import ConanFileReference\nfrom conans.model.settings import Settings\nfrom conans.paths import DATA_YML\nfrom conans.util.files import load\n\n\nclass ConanFileLoader(object):\n\n def __init__(self, runner, output, python_requires, generator_manager=None, pyreq_loader=None,\n requester=None):\n self._runner = runner\n self._generator_manager = generator_manager\n self._output = output\n self._pyreq_loader = pyreq_loader\n self._python_requires = python_requires\n sys.modules[\"conans\"].python_requires = python_requires\n self._cached_conanfile_classes = {}\n self._requester = requester\n\n def load_basic(self, conanfile_path, lock_python_requires=None, user=None, channel=None,\n display=\"\"):\n \"\"\" loads a conanfile basic object without evaluating anything\n \"\"\"\n return self.load_basic_module(conanfile_path, lock_python_requires, user, channel,\n display)[0]\n\n def load_basic_module(self, conanfile_path, lock_python_requires=None, user=None, channel=None,\n display=\"\"):\n \"\"\" loads a conanfile basic object without evaluating anything, returns the module too\n \"\"\"\n cached = self._cached_conanfile_classes.get(conanfile_path)\n if cached and cached[1] == lock_python_requires:\n conanfile = cached[0](self._output, self._runner, display, user, channel)\n conanfile._conan_requester = self._requester\n if hasattr(conanfile, \"init\") and callable(conanfile.init):\n with conanfile_exception_formatter(str(conanfile), \"init\"):\n conanfile.init()\n return conanfile, cached[2]\n\n if lock_python_requires is not None:\n self._python_requires.locked_versions = {r.name: r for r in lock_python_requires}\n try:\n self._python_requires.valid = True\n module, conanfile = parse_conanfile(conanfile_path, self._python_requires,\n self._generator_manager)\n self._python_requires.valid = False\n\n self._python_requires.locked_versions = None\n\n # This is the new py_requires feature, to supersede the old python_requires\n if self._pyreq_loader:\n self._pyreq_loader.load_py_requires(conanfile, lock_python_requires, self)\n\n conanfile.recipe_folder = os.path.dirname(conanfile_path)\n\n # If the scm is inherited, create my own instance\n if hasattr(conanfile, \"scm\") and \"scm\" not in conanfile.__class__.__dict__:\n if isinstance(conanfile.scm, dict):\n conanfile.scm = conanfile.scm.copy()\n\n # Load and populate dynamic fields from the data file\n conan_data = self._load_data(conanfile_path)\n conanfile.conan_data = conan_data\n if conan_data and '.conan' in conan_data:\n scm_data = conan_data['.conan'].get('scm')\n if scm_data:\n conanfile.scm.update(scm_data)\n\n self._cached_conanfile_classes[conanfile_path] = (conanfile, lock_python_requires,\n module)\n result = conanfile(self._output, self._runner, display, user, channel)\n result._conan_requester = 
self._requester\n if hasattr(result, \"init\") and callable(result.init):\n with conanfile_exception_formatter(str(result), \"init\"):\n result.init()\n return result, module\n except ConanException as e:\n raise ConanException(\"Error loading conanfile at '{}': {}\".format(conanfile_path, e))\n\n def load_generators(self, conanfile_path):\n \"\"\" Load generator classes from a module. Any non-generator classes\n will be ignored. python_requires is not processed.\n \"\"\"\n \"\"\" Parses a python in-memory module and adds any generators found\n to the provided generator list\n @param conanfile_module: the module to be processed\n \"\"\"\n conanfile_module, module_id = _parse_conanfile(conanfile_path)\n for name, attr in conanfile_module.__dict__.items():\n if (name.startswith(\"_\") or not inspect.isclass(attr) or\n attr.__dict__.get(\"__module__\") != module_id):\n continue\n if issubclass(attr, Generator) and attr != Generator:\n self._generator_manager.add(attr.__name__, attr, custom=True)\n\n @staticmethod\n def _load_data(conanfile_path):\n data_path = os.path.join(os.path.dirname(conanfile_path), DATA_YML)\n if not os.path.exists(data_path):\n return None\n\n try:\n data = yaml.safe_load(load(data_path))\n except Exception as e:\n raise ConanException(\"Invalid yml format at {}: {}\".format(DATA_YML, e))\n\n return data or {}\n\n def load_named(self, conanfile_path, name, version, user, channel, lock_python_requires=None):\n \"\"\" loads the basic conanfile object and evaluates its name and version\n \"\"\"\n conanfile, _ = self.load_basic_module(conanfile_path, lock_python_requires, user, channel)\n\n # Export does a check on existing name & version\n if name:\n if conanfile.name and name != conanfile.name:\n raise ConanException(\"Package recipe with name %s!=%s\" % (name, conanfile.name))\n conanfile.name = name\n\n if version:\n if conanfile.version and version != conanfile.version:\n raise ConanException(\"Package recipe with version %s!=%s\"\n % (version, conanfile.version))\n conanfile.version = version\n\n if hasattr(conanfile, \"set_name\"):\n with conanfile_exception_formatter(\"conanfile.py\", \"set_name\"):\n conanfile.set_name()\n if name and name != conanfile.name:\n raise ConanException(\"Package recipe with name %s!=%s\" % (name, conanfile.name))\n if hasattr(conanfile, \"set_version\"):\n with conanfile_exception_formatter(\"conanfile.py\", \"set_version\"):\n conanfile.set_version()\n if version and version != conanfile.version:\n raise ConanException(\"Package recipe with version %s!=%s\"\n % (version, conanfile.version))\n\n return conanfile\n\n def load_export(self, conanfile_path, name, version, user, channel, lock_python_requires=None):\n \"\"\" loads the conanfile and evaluates its name, version, and enforce its existence\n \"\"\"\n conanfile = self.load_named(conanfile_path, name, version, user, channel,\n lock_python_requires)\n if not conanfile.name:\n raise ConanException(\"conanfile didn't specify name\")\n if not conanfile.version:\n raise ConanException(\"conanfile didn't specify version\")\n\n # FIXME Conan 2.0, conanfile.version should be a string, not a version object\n\n ref = ConanFileReference(conanfile.name, conanfile.version, user, channel)\n conanfile.display_name = str(ref)\n conanfile.output.scope = conanfile.display_name\n return conanfile\n\n @staticmethod\n def _initialize_conanfile(conanfile, profile):\n # Prepare the settings for the loaded conanfile\n # Mixing the global settings with the specified for that name if exist\n tmp_settings 
= profile.processed_settings.copy()\n package_settings_values = profile.package_settings_values\n if conanfile._conan_user is not None:\n ref_str = \"%s/%s@%s/%s\" % (conanfile.name, conanfile.version,\n conanfile._conan_user, conanfile._conan_channel)\n else:\n ref_str = \"%s/%s\" % (conanfile.name, conanfile.version)\n if package_settings_values:\n # First, try to get a match directly by name (without needing *)\n # TODO: Conan 2.0: We probably want to remove this, and leave a pure fnmatch\n pkg_settings = package_settings_values.get(conanfile.name)\n\n if conanfile.develop and \"&\" in package_settings_values:\n # \"&\" overrides the \"name\" scoped settings.\n pkg_settings = package_settings_values.get(\"&\")\n\n if pkg_settings is None: # If there is not exact match by package name, do fnmatch\n for pattern, settings in package_settings_values.items():\n if fnmatch.fnmatchcase(ref_str, pattern):\n pkg_settings = settings\n break\n if pkg_settings:\n tmp_settings.update_values(pkg_settings)\n\n conanfile.initialize(tmp_settings, profile.env_values, profile.buildenv)\n conanfile.conf = profile.conf.get_conanfile_conf(ref_str)\n\n def load_consumer(self, conanfile_path, profile_host, name=None, version=None, user=None,\n channel=None, lock_python_requires=None, require_overrides=None):\n \"\"\" loads a conanfile.py in user space. Might have name/version or not\n \"\"\"\n conanfile = self.load_named(conanfile_path, name, version, user, channel,\n lock_python_requires)\n\n ref = ConanFileReference(conanfile.name, conanfile.version, user, channel, validate=False)\n if str(ref):\n conanfile.display_name = \"%s (%s)\" % (os.path.basename(conanfile_path), str(ref))\n else:\n conanfile.display_name = os.path.basename(conanfile_path)\n conanfile.output.scope = conanfile.display_name\n conanfile.in_local_cache = False\n try:\n conanfile.develop = True\n self._initialize_conanfile(conanfile, profile_host)\n\n # The consumer specific\n profile_host.user_options.descope_options(conanfile.name)\n conanfile.options.initialize_upstream(profile_host.user_options,\n name=conanfile.name)\n profile_host.user_options.clear_unscoped_options()\n\n if require_overrides is not None:\n for req_override in require_overrides:\n req_override = ConanFileReference.loads(req_override)\n conanfile.requires.override(req_override)\n\n return conanfile\n except ConanInvalidConfiguration:\n raise\n except Exception as e: # re-raise with file name\n raise ConanException(\"%s: %s\" % (conanfile_path, str(e)))\n\n def load_conanfile(self, conanfile_path, profile, ref, lock_python_requires=None):\n \"\"\" load a conanfile with a full reference, name, version, user and channel are obtained\n from the reference, not evaluated. 
Main way to load from the cache\n \"\"\"\n try:\n conanfile, _ = self.load_basic_module(conanfile_path, lock_python_requires,\n ref.user, ref.channel, str(ref))\n except Exception as e:\n raise ConanException(\"%s: Cannot load recipe.\\n%s\" % (str(ref), str(e)))\n\n conanfile.name = ref.name\n # FIXME Conan 2.0, version should be a string not a Version object\n conanfile.version = ref.version\n\n if profile.dev_reference and profile.dev_reference == ref:\n conanfile.develop = True\n try:\n self._initialize_conanfile(conanfile, profile)\n return conanfile\n except ConanInvalidConfiguration:\n raise\n except Exception as e: # re-raise with file name\n raise ConanException(\"%s: %s\" % (conanfile_path, str(e)))\n\n def load_conanfile_txt(self, conan_txt_path, profile_host, ref=None, require_overrides=None):\n if not os.path.exists(conan_txt_path):\n raise NotFoundException(\"Conanfile not found!\")\n\n contents = load(conan_txt_path)\n path, basename = os.path.split(conan_txt_path)\n display_name = \"%s (%s)\" % (basename, ref) if ref and ref.name else basename\n conanfile = self._parse_conan_txt(contents, path, display_name, profile_host)\n\n if require_overrides is not None:\n for req_override in require_overrides:\n req_override = ConanFileReference.loads(req_override)\n conanfile.requires.override(req_override)\n\n return conanfile\n\n def _parse_conan_txt(self, contents, path, display_name, profile):\n conanfile = ConanFile(self._output, self._runner, display_name)\n tmp_settings = profile.processed_settings.copy()\n package_settings_values = profile.package_settings_values\n if \"&\" in package_settings_values:\n pkg_settings = package_settings_values.get(\"&\")\n if pkg_settings:\n tmp_settings.update_values(pkg_settings)\n conanfile.initialize(Settings(), profile.env_values, profile.buildenv)\n conanfile.conf = profile.conf.get_conanfile_conf(None)\n # It is necessary to copy the settings, because the above is only a constraint of\n # conanfile settings, and a txt doesn't define settings. 
Necessary for generators,\n # as cmake_multi, that check build_type.\n conanfile.settings = tmp_settings.copy_values()\n\n try:\n parser = ConanFileTextLoader(contents)\n except Exception as e:\n raise ConanException(\"%s:\\n%s\" % (path, str(e)))\n for reference in parser.requirements:\n ref = ConanFileReference.loads(reference) # Raise if invalid\n conanfile.requires.add_ref(ref)\n for build_reference in parser.build_requirements:\n ConanFileReference.loads(build_reference)\n if not hasattr(conanfile, \"build_requires\"):\n conanfile.build_requires = []\n conanfile.build_requires.append(build_reference)\n if parser.layout:\n layout_method = {\"cmake_layout\": cmake_layout,\n \"vs_layout\": vs_layout,\n \"bazel_layout\": bazel_layout}.get(parser.layout)\n if not layout_method:\n raise ConanException(\"Unknown predefined layout '{}' declared in \"\n \"conanfile.txt\".format(parser.layout))\n\n def layout(self):\n layout_method(self)\n\n conanfile.layout = types.MethodType(layout, conanfile)\n\n conanfile.generators = parser.generators\n try:\n options = OptionsValues.loads(parser.options)\n except Exception:\n raise ConanException(\"Error while parsing [options] in conanfile\\n\"\n \"Options should be specified as 'pkg:option=value'\")\n conanfile.options.values = options\n conanfile.options.initialize_upstream(profile.user_options)\n\n # imports method\n conanfile.imports = parser.imports_method(conanfile)\n return conanfile\n\n def load_virtual(self, references, profile_host, scope_options=True,\n build_requires_options=None, is_build_require=False, require_overrides=None):\n # If user don't specify namespace in options, assume that it is\n # for the reference (keep compatibility)\n conanfile = ConanFile(self._output, self._runner, display_name=\"virtual\")\n conanfile.initialize(profile_host.processed_settings.copy(),\n profile_host.env_values, profile_host.buildenv)\n conanfile.conf = profile_host.conf.get_conanfile_conf(None)\n conanfile.settings = profile_host.processed_settings.copy_values()\n\n if is_build_require:\n conanfile.build_requires = [str(r) for r in references]\n else:\n for reference in references:\n conanfile.requires.add_ref(reference)\n\n if require_overrides is not None:\n for req_override in require_overrides:\n req_override = ConanFileReference.loads(req_override)\n conanfile.requires.override(req_override)\n\n # Allows options without package namespace in conan install commands:\n # conan install zlib/1.2.8@lasote/stable -o shared=True\n if scope_options:\n assert len(references) == 1\n profile_host.user_options.scope_options(references[0].name)\n if build_requires_options:\n conanfile.options.initialize_upstream(build_requires_options)\n else:\n conanfile.options.initialize_upstream(profile_host.user_options)\n\n conanfile.generators = [] # remove the default txt generator\n return conanfile\n\n\ndef _parse_module(conanfile_module, module_id, generator_manager):\n \"\"\" Parses a python in-memory module, to extract the classes, mainly the main\n class defining the Recipe, but also process possible existing generators\n @param conanfile_module: the module to be processed\n @return: the main ConanFile class from the module\n \"\"\"\n result = None\n for name, attr in conanfile_module.__dict__.items():\n if (name.startswith(\"_\") or not inspect.isclass(attr) or\n attr.__dict__.get(\"__module__\") != module_id):\n continue\n\n if issubclass(attr, ConanFile) and attr != ConanFile:\n if result is None:\n result = attr\n else:\n raise ConanException(\"More than 1 
conanfile in the file\")\n elif issubclass(attr, Generator) and attr != Generator:\n generator_manager.add(attr.__name__, attr, custom=True)\n\n if result is None:\n raise ConanException(\"No subclass of ConanFile\")\n\n return result\n\n\ndef parse_conanfile(conanfile_path, python_requires, generator_manager):\n with python_requires.capture_requires() as py_requires:\n module, filename = _parse_conanfile(conanfile_path)\n try:\n conanfile = _parse_module(module, filename, generator_manager)\n\n # Check for duplicates\n # TODO: move it into PythonRequires\n py_reqs = {}\n for it in py_requires:\n if it.ref.name in py_reqs:\n dupes = [str(it.ref), str(py_reqs[it.ref.name].ref)]\n raise ConanException(\"Same python_requires with different versions not allowed\"\n \" for a conanfile. Found '{}'\".format(\"', '\".join(dupes)))\n py_reqs[it.ref.name] = it\n\n # Make them available to the conanfile itself\n if py_reqs:\n conanfile.python_requires = py_reqs\n return module, conanfile\n except Exception as e: # re-raise with file name\n raise ConanException(\"%s: %s\" % (conanfile_path, str(e)))\n\n\ndef _parse_conanfile(conan_file_path):\n \"\"\" From a given path, obtain the in memory python import module\n \"\"\"\n\n if not os.path.exists(conan_file_path):\n raise NotFoundException(\"%s not found!\" % conan_file_path)\n\n module_id = str(uuid.uuid1())\n current_dir = os.path.dirname(conan_file_path)\n sys.path.insert(0, current_dir)\n try:\n old_modules = list(sys.modules.keys())\n with chdir(current_dir):\n old_dont_write_bytecode = sys.dont_write_bytecode\n sys.dont_write_bytecode = True\n loaded = imp.load_source(module_id, conan_file_path)\n sys.dont_write_bytecode = old_dont_write_bytecode\n\n required_conan_version = getattr(loaded, \"required_conan_version\", None)\n if required_conan_version:\n validate_conan_version(required_conan_version)\n\n # These lines are necessary, otherwise local conanfile imports with same name\n # collide, but no error, and overwrite other packages imports!!\n added_modules = set(sys.modules).difference(old_modules)\n for added in added_modules:\n module = sys.modules[added]\n if module:\n try:\n try:\n # Most modules will have __file__ != None\n folder = os.path.dirname(module.__file__)\n except (AttributeError, TypeError):\n # But __file__ might not exist or equal None\n # Like some builtins and Namespace packages py3\n folder = module.__path__._path[0]\n except AttributeError: # In case the module.__path__ doesn't exist\n pass\n else:\n if folder.startswith(current_dir):\n module = sys.modules.pop(added)\n sys.modules[\"%s.%s\" % (module_id, added)] = module\n except ConanException:\n raise\n except Exception:\n import traceback\n trace = traceback.format_exc().split('\\n')\n raise ConanException(\"Unable to load conanfile in %s\\n%s\" % (conan_file_path,\n '\\n'.join(trace[3:])))\n finally:\n sys.path.pop(0)\n\n return loaded, module_id\n", "conans/model/conan_file.py": "import os\nimport platform\nfrom contextlib import contextmanager\n\nimport six\nfrom six import string_types\n\n\nfrom conans.client import tools\nfrom conans.client.output import ScopedOutput\nfrom conans.client.subsystems import command_env_wrapper\nfrom conans.client.tools.env import environment_append, no_op, pythonpath\nfrom conans.client.tools.oss import OSInfo\nfrom conans.errors import ConanException, ConanInvalidConfiguration\nfrom conans.model.build_info import DepsCppInfo\nfrom conans.model.conf import Conf\nfrom conans.model.dependencies import ConanFileDependencies\nfrom 
conans.model.env_info import DepsEnvInfo\nfrom conans.model.layout import Folders, Infos\nfrom conans.model.new_build_info import from_old_cppinfo\nfrom conans.model.options import Options, OptionsValues, PackageOptions\nfrom conans.model.requires import Requirements\nfrom conans.model.user_info import DepsUserInfo\nfrom conans.paths import RUN_LOG_NAME\nfrom conans.util.conan_v2_mode import conan_v2_error\n\n\ndef create_options(conanfile):\n try:\n package_options = PackageOptions(getattr(conanfile, \"options\", None))\n options = Options(package_options)\n\n default_options = getattr(conanfile, \"default_options\", None)\n if default_options:\n if isinstance(default_options, dict):\n default_values = OptionsValues(default_options)\n elif isinstance(default_options, (list, tuple)):\n conan_v2_error(\"Declare 'default_options' as a dictionary\")\n default_values = OptionsValues(default_options)\n elif isinstance(default_options, six.string_types):\n conan_v2_error(\"Declare 'default_options' as a dictionary\")\n default_values = OptionsValues.loads(default_options)\n else:\n raise ConanException(\"Please define your default_options as list, \"\n \"multiline string or dictionary\")\n options.values = default_values\n return options\n except Exception as e:\n raise ConanException(\"Error while initializing options. %s\" % str(e))\n\n\ndef create_requirements(conanfile):\n try:\n # Actual requirements of this package\n if not hasattr(conanfile, \"requires\"):\n return Requirements()\n else:\n if not conanfile.requires:\n return Requirements()\n if isinstance(conanfile.requires, (tuple, list)):\n return Requirements(*conanfile.requires)\n else:\n return Requirements(conanfile.requires, )\n except Exception as e:\n raise ConanException(\"Error while initializing requirements. %s\" % str(e))\n\n\ndef create_settings(conanfile, settings):\n try:\n defined_settings = getattr(conanfile, \"settings\", None)\n if isinstance(defined_settings, str):\n defined_settings = [defined_settings]\n current = defined_settings or {}\n settings.constraint(current)\n return settings\n except Exception as e:\n raise ConanInvalidConfiguration(\"The recipe %s is constraining settings. 
%s\" % (\n conanfile.display_name, str(e)))\n\n\n@contextmanager\ndef _env_and_python(conanfile):\n with environment_append(conanfile.env):\n # FIXME Conan 2.0, Remove old ways of reusing python code\n with pythonpath(conanfile):\n yield\n\n\ndef get_env_context_manager(conanfile, without_python=False):\n if not conanfile.apply_env:\n return no_op()\n if without_python:\n return environment_append(conanfile.env)\n return _env_and_python(conanfile)\n\n\nclass ConanFile(object):\n \"\"\" The base class for all package recipes\n \"\"\"\n\n name = None\n version = None # Any str, can be \"1.1\" or whatever\n url = None # The URL where this File is located, as github, to collaborate in package\n # The license of the PACKAGE, just a shortcut, does not replace or\n # change the actual license of the source code\n license = None\n author = None # Main maintainer/responsible for the package, any format\n description = None\n topics = None\n homepage = None\n build_policy = None\n short_paths = False\n apply_env = True # Apply environment variables from requires deps_env_info and profiles\n exports = None\n exports_sources = None\n generators = [\"txt\"]\n revision_mode = \"hash\"\n\n # Vars to control the build steps (build(), package())\n should_configure = True\n should_build = True\n should_install = True\n should_test = True\n in_local_cache = True\n develop = False\n\n # Defaulting the reference fields\n default_channel = None\n default_user = None\n\n # Settings and Options\n settings = None\n options = None\n default_options = None\n\n provides = None\n deprecated = None\n\n # Folders\n folders = None\n patterns = None\n\n # Run in windows bash\n win_bash = None\n tested_reference_str = None\n\n def __init__(self, output, runner, display_name=\"\", user=None, channel=None):\n # an output stream (writeln, info, warn error)\n self.output = ScopedOutput(display_name, output)\n self.display_name = display_name\n # something that can run commands, as os.sytem\n self._conan_runner = runner\n self._conan_user = user\n self._conan_channel = channel\n\n self.compatible_packages = []\n self._conan_using_build_profile = False\n self._conan_requester = None\n from conan.tools.env import Environment\n self.buildenv_info = Environment()\n self.runenv_info = Environment()\n # At the moment only for build_requires, others will be ignored\n self.conf_info = Conf()\n self._conan_buildenv = None # The profile buildenv, will be assigned initialize()\n self._conan_node = None # access to container Node object, to access info, context, deps...\n self._conan_new_cpp_info = None # Will be calculated lazy in the getter\n self._conan_dependencies = None\n\n self.env_scripts = {} # Accumulate the env scripts generated in order\n\n # layout() method related variables:\n self.folders = Folders()\n self.cpp = Infos()\n\n self.cpp.package.includedirs = [\"include\"]\n self.cpp.package.libdirs = [\"lib\"]\n self.cpp.package.bindirs = [\"bin\"]\n self.cpp.package.resdirs = []\n self.cpp.package.builddirs = [\"\"]\n self.cpp.package.frameworkdirs = []\n\n @property\n def context(self):\n return self._conan_node.context\n\n @property\n def dependencies(self):\n # Caching it, this object is requested many times\n if self._conan_dependencies is None:\n self._conan_dependencies = ConanFileDependencies.from_node(self._conan_node)\n return self._conan_dependencies\n\n @property\n def ref(self):\n return self._conan_node.ref\n\n @property\n def pref(self):\n return self._conan_node.pref\n\n @property\n def buildenv(self):\n # Lazy 
computation of the package buildenv based on the profileone\n from conan.tools.env import Environment\n if not isinstance(self._conan_buildenv, Environment):\n # TODO: missing user/channel\n ref_str = \"{}/{}\".format(self.name, self.version)\n self._conan_buildenv = self._conan_buildenv.get_profile_env(ref_str)\n return self._conan_buildenv\n\n def initialize(self, settings, env, buildenv=None):\n self._conan_buildenv = buildenv\n if isinstance(self.generators, str):\n self.generators = [self.generators]\n # User defined options\n self.options = create_options(self)\n self.requires = create_requirements(self)\n self.settings = create_settings(self, settings)\n\n conan_v2_error(\"Setting 'cppstd' is deprecated in favor of 'compiler.cppstd',\"\n \" please update your recipe.\", 'cppstd' in self.settings.fields)\n\n # needed variables to pack the project\n self.cpp_info = None # Will be initialized at processing time\n self._conan_dep_cpp_info = None # Will be initialized at processing time\n self.deps_cpp_info = DepsCppInfo()\n\n # environment variables declared in the package_info\n self.env_info = None # Will be initialized at processing time\n self.deps_env_info = DepsEnvInfo()\n\n # user declared variables\n self.user_info = None\n # Keys are the package names (only 'host' if different contexts)\n self.deps_user_info = DepsUserInfo()\n\n # user specified env variables\n self._conan_env_values = env.copy() # user specified -e\n\n if self.description is not None and not isinstance(self.description, six.string_types):\n raise ConanException(\"Recipe 'description' must be a string.\")\n\n if not hasattr(self, \"virtualbuildenv\"): # Allow the user to override it with True or False\n self.virtualbuildenv = True\n if not hasattr(self, \"virtualrunenv\"): # Allow the user to override it with True or False\n self.virtualrunenv = True\n\n @property\n def new_cpp_info(self):\n if not self._conan_new_cpp_info:\n self._conan_new_cpp_info = from_old_cppinfo(self.cpp_info)\n # The new_cpp_info will be already absolute paths if layout() is defined\n if self.package_folder is not None: # to not crash when editable and layout()\n self._conan_new_cpp_info.set_relative_base_folder(self.package_folder)\n return self._conan_new_cpp_info\n\n @property\n def source_folder(self):\n return self.folders.source_folder\n\n @property\n def export_sources_folder(self):\n \"\"\"points to the base source folder when calling source() and to the cache export sources\n folder while calling the exports_sources() method. 
Prepared in case we want to introduce a\n 'no_copy_export_sources' and point to the right location always.\"\"\"\n return self.folders.base_export_sources\n\n @property\n def export_folder(self):\n return self.folders.base_export\n\n @property\n def build_folder(self):\n return self.folders.build_folder\n\n @property\n def package_folder(self):\n return self.folders.base_package\n\n @property\n def install_folder(self):\n # FIXME: Remove in 2.0, no self.install_folder\n return self.folders.base_install\n\n @property\n def generators_folder(self):\n # FIXME: Remove in 2.0, no self.install_folder\n return self.folders.generators_folder if self.folders.generators else self.install_folder\n\n @property\n def imports_folder(self):\n return self.folders.imports_folder\n\n @property\n def env(self):\n \"\"\"Apply the self.deps_env_info into a copy of self._conan_env_values (will prioritize the\n self._conan_env_values, user specified from profiles or -e first, then inherited)\"\"\"\n # Cannot be lazy cached, because it's called in configure node, and we still don't have\n # the deps_env_info objects available\n tmp_env_values = self._conan_env_values.copy()\n tmp_env_values.update(self.deps_env_info)\n ret, multiple = tmp_env_values.env_dicts(self.name, self.version, self._conan_user,\n self._conan_channel)\n ret.update(multiple)\n return ret\n\n @property\n def channel(self):\n if not self._conan_channel:\n _env_channel = os.getenv(\"CONAN_CHANNEL\")\n conan_v2_error(\"Environment variable 'CONAN_CHANNEL' is deprecated\", _env_channel)\n self._conan_channel = _env_channel or self.default_channel\n if not self._conan_channel:\n raise ConanException(\"channel not defined, but self.channel is used in conanfile\")\n return self._conan_channel\n\n @property\n def user(self):\n if not self._conan_user:\n _env_username = os.getenv(\"CONAN_USERNAME\")\n conan_v2_error(\"Environment variable 'CONAN_USERNAME' is deprecated\", _env_username)\n self._conan_user = _env_username or self.default_user\n if not self._conan_user:\n raise ConanException(\"user not defined, but self.user is used in conanfile\")\n return self._conan_user\n\n def collect_libs(self, folder=None):\n conan_v2_error(\"'self.collect_libs' is deprecated, use 'tools.collect_libs(self)' instead\")\n return tools.collect_libs(self, folder=folder)\n\n @property\n def build_policy_missing(self):\n return self.build_policy == \"missing\"\n\n @property\n def build_policy_always(self):\n return self.build_policy == \"always\"\n\n def source(self):\n pass\n\n def system_requirements(self):\n \"\"\" this method can be overwritten to implement logic for system package\n managers, as apt-get\n\n You can define self.global_system_requirements = True, if you want the installation\n to be for all packages (not depending on settings/options/requirements)\n \"\"\"\n\n def config_options(self):\n \"\"\" modify options, probably conditioned to some settings. This call is executed\n before config_settings. E.g.\n if self.settings.os == \"Windows\":\n del self.options.shared # shared/static not supported in win\n \"\"\"\n\n def configure(self):\n \"\"\" modify settings, probably conditioned to some options. This call is executed\n after config_options. E.g.\n if self.options.header_only:\n self.settings.clear()\n This is also the place for conditional requirements\n \"\"\"\n\n def build(self):\n \"\"\" build your project calling the desired build tools as done in the command line.\n E.g. self.run(\"cmake --build .\") Or use the provided build helpers. E.g. 
cmake.build()\n \"\"\"\n self.output.warn(\"This conanfile has no build step\")\n\n def package(self):\n \"\"\" package the needed files from source and build folders.\n E.g. self.copy(\"*.h\", src=\"src/includes\", dst=\"includes\")\n \"\"\"\n self.output.warn(\"This conanfile has no package step\")\n\n def package_info(self):\n \"\"\" define cpp_build_info, flags, etc\n \"\"\"\n\n def run(self, command, output=True, cwd=None, win_bash=False, subsystem=None, msys_mingw=True,\n ignore_errors=False, run_environment=False, with_login=True, env=\"conanbuild\"):\n # NOTE: \"self.win_bash\" is the new parameter \"win_bash\" for Conan 2.0\n\n def _run(cmd, _env):\n # FIXME: run in windows bash is not using output\n if platform.system() == \"Windows\":\n if win_bash:\n return tools.run_in_windows_bash(self, bashcmd=cmd, cwd=cwd, subsystem=subsystem,\n msys_mingw=msys_mingw, with_login=with_login)\n envfiles_folder = self.generators_folder or os.getcwd()\n _env = [_env] if _env and isinstance(_env, str) else []\n wrapped_cmd = command_env_wrapper(self, cmd, _env, envfiles_folder=envfiles_folder)\n return self._conan_runner(wrapped_cmd, output, os.path.abspath(RUN_LOG_NAME), cwd)\n\n if run_environment:\n # When using_build_profile the required environment is already applied through\n # 'conanfile.env' in the contextmanager 'get_env_context_manager'\n with tools.run_environment(self) if not self._conan_using_build_profile else no_op():\n if OSInfo().is_macos and isinstance(command, string_types):\n # Security policy on macOS clears this variable when executing /bin/sh. To\n # keep its value, set it again inside the shell when running the command.\n command = 'DYLD_LIBRARY_PATH=\"%s\" DYLD_FRAMEWORK_PATH=\"%s\" %s' % \\\n (os.environ.get('DYLD_LIBRARY_PATH', ''),\n os.environ.get(\"DYLD_FRAMEWORK_PATH\", ''),\n command)\n retcode = _run(command, env)\n else:\n retcode = _run(command, env)\n\n if not ignore_errors and retcode != 0:\n raise ConanException(\"Error %d while executing %s\" % (retcode, command))\n\n return retcode\n\n def package_id(self):\n \"\"\" modify the binary info, typically to narrow values\n e.g.: self.info.settings.compiler = \"Any\" => All compilers will generate same ID\n \"\"\"\n\n def test(self):\n \"\"\" test the generated executable.\n E.g. 
self.run(\"./example\")\n \"\"\"\n raise ConanException(\"You need to create a method 'test' in your test/conanfile.py\")\n\n def __repr__(self):\n return self.display_name\n", "conans/model/conanfile_interface.py": "from conans.client.graph.graph import CONTEXT_BUILD\n\n\nclass ConanFileInterface:\n \"\"\" this is just a protective wrapper to give consumers\n a limited view of conanfile dependencies, \"read\" only,\n and only to some attributes, not methods\n \"\"\"\n def __str__(self):\n return str(self._conanfile)\n\n def __init__(self, conanfile):\n self._conanfile = conanfile\n\n def __eq__(self, other):\n \"\"\"\n The conanfile is a different entity per node, and conanfile equality is identity\n :type other: ConanFileInterface\n \"\"\"\n return self._conanfile == other._conanfile\n\n def __hash__(self):\n return hash(self._conanfile)\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n @property\n def package_folder(self):\n return self._conanfile.package_folder\n\n @property\n def ref(self):\n return self._conanfile.ref\n\n @property\n def pref(self):\n return self._conanfile.pref\n\n @property\n def buildenv_info(self):\n return self._conanfile.buildenv_info\n\n @property\n def runenv_info(self):\n return self._conanfile.runenv_info\n\n @property\n def cpp_info(self):\n return self._conanfile.new_cpp_info\n\n @property\n def settings(self):\n return self._conanfile.settings\n\n @property\n def settings_build(self):\n return self._conanfile.settings_build\n\n @property\n def options(self):\n return self._conanfile.options\n\n @property\n def context(self):\n return self._conanfile.context\n\n @property\n def conf_info(self):\n return self._conanfile.conf_info\n\n @property\n def dependencies(self):\n return self._conanfile.dependencies\n\n @property\n def is_build_context(self):\n return self._conanfile.context == CONTEXT_BUILD\n"}
|
{"conans/model/conan_file.py": [{"type": "function", "name": "ConanFile.source_path", "lines": [263, 265], "signature": "def source_path(self) -> Path:", "doc": ""}, {"type": "function", "name": "ConanFile.export_sources_path", "lines": [275, 277], "signature": "def export_sources_path(self) -> Path:", "doc": ""}, {"type": "function", "name": "ConanFile.export_path", "lines": [284, 286], "signature": "def export_path(self) -> Path:", "doc": ""}, {"type": "function", "name": "ConanFile.build_path", "lines": [293, 295], "signature": "def build_path(self) -> Path:", "doc": ""}, {"type": "function", "name": "ConanFile.package_path", "lines": [302, 304], "signature": "def package_path(self) -> Path:", "doc": ""}, {"type": "function", "name": "ConanFile.generators_path", "lines": [317, 319], "signature": "def generators_path(self) -> Path:", "doc": ""}], "conans/model/conanfile_interface.py": [{"type": "function", "name": "ConanFileInterface.package_path", "lines": [34, 36], "signature": "def package_path(self) -> Path:", "doc": ""}]}
| null |
["conans/test/integration/conanfile/folders_access_test.py::TestFoldersAccess::test_build_local_command", "conans/test/integration/conanfile/folders_access_test.py::TestFoldersAccess::test_deploy", "conans/test/integration/conanfile/folders_access_test.py::TestFoldersAccess::test_full_install", "conans/test/integration/conanfile/folders_access_test.py::TestFoldersAccess::test_package_local_command", "conans/test/integration/conanfile/folders_access_test.py::TestFoldersAccess::test_source_local_command"]
|
["conans/test/integration/conanfile/folders_access_test.py::TestFoldersAccess::test_imports_local", "conans/test/integration/conanfile/folders_access_test.py::RecipeFolderTest::test_editable", "conans/test/integration/conanfile/folders_access_test.py::RecipeFolderTest::test_local_flow", "conans/test/integration/conanfile/folders_access_test.py::RecipeFolderTest::test_recipe_folder"]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1657204846.0, "pr_title": "Feature : provide Path accesssors", "pr_body": "closes: #11304 \r\n\r\nChangelog: Feature: Provide Path accessors in Conanfile.\r\nDocs: omit\r\n\r\n- [x] Refer to the issue that supports this Pull Request.\r\n- [x] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.\r\n- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).\r\n- [x] I've followed the PEP8 style guides for Python code.\r\n- [ ] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one. \r\n\r\n<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>\r\n", "pr_timeline": [{"time": 1657207643.0, "comment": "> Please add some basic `integration` test that checks these added methods.\r\n\r\nadded (updated an existing test with more checks)"}, {"time": 1657220712.0, "comment": "Out of curiosity: why not to change existing fields but add new ones? What are the potential issues with changing the type?"}, {"time": 1657220901.0, "comment": "> Out of curiosity: why not to change existing fields but add new ones? What are the potential issues with changing the type?\r\n\r\nit may break existing recipes. e.g. the one doing something like `isinstance(self.package_folder, str)` or similar."}, {"time": 1657222145.0, "comment": "> it may break existing recipes. e.g. the one doing something like `isinstance(self.package_folder, str)` or similar.\r\n\r\nThank you for quick reply. That part I have figured out. Was more interested if there are recipes which do this and why.\r\n\r\nIn my limited python experience calling `isinstance` is discouraged. The most common check for type I know is `str` vs `list` of strings (where `str` is converted to a single-element `list`, but the check is for `list`, because it can be something else) for caller convenience. Other examples `str` vs `bytes` - just call `encode`, and there is `is None` for optionals."}, {"time": 1657223191.0, "comment": "I don't know any recipe doing it out of my mind. still, it's possible, some recipe does `some_python_func(self.package_folder)` where `some_python_func` calls `is_instance` under the hood.\r\n\r\nnethertheless, there could be other breaking cases, e.g.:\r\n```\r\nvar = self.package_folder.replace(\"foo\", \"bar\")\r\n```\r\nwill crash if `self.package_folder` is `pathlib.Path`, but it will work if it's just a regular string. and `replace` for sure is done in many recipes.\r\nan example: https://github.com/conan-io/conan-center-index/blob/c102ed11805c88f636198d1feec892eea8c20579/recipes/bison/all/conanfile.py#L156\r\n\r\nfor sure it could be changed in 2.0, but for 1.x I would say it's too risky."}, {"time": 1657292730.0, "comment": "I am merging this because it is done, but after talking to the team, please let me clarify:\r\n\r\n- We really, really need to stop adding new things to Conan 1.X. Only add things that are strictly necessary for the migration to 2.0. This was clearly not necessary.\r\n- We will not document it yet, now things added to 1.X basically need to be documented twice, and in 2.0 is complicated because we are still discussing many things around the docs.\r\n- This will be there as experimental and not supported yet. 
If it becomes a maintenance or support burden, we will remove it, but we will not invest more time in things that could derive from this feature, until Conan 2.X\r\n\r\nThanks for understanding"}], "issues": {}}
|
|
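The record that ends above adds `pathlib.Path` accessors (`source_path`, `build_path`, `package_path`, ...) next to the existing string folder attributes, keeping the strings untouched for backwards compatibility. A minimal sketch of that pattern, using a hypothetical `_Recipe` class rather than the real Conan base class, might look like this:

```python
import os
from pathlib import Path


class _Recipe:
    """Hypothetical stand-in showing a str folder attribute plus a Path accessor."""

    def __init__(self, source_folder=None):
        # The existing attribute stays a plain str (recipes may call .replace() on it)
        self.source_folder = source_folder

    @property
    def source_path(self) -> Path:
        # The new accessor exposes the same location as a pathlib.Path
        assert self.source_folder is not None, "source_folder is not defined yet"
        return Path(self.source_folder)


if __name__ == "__main__":
    recipe = _Recipe(os.getcwd())
    print(type(recipe.source_folder).__name__, "->", recipe.source_path.resolve())
```

Keeping the old attributes as plain strings avoids breaking recipes that treat them as such, which is the compatibility concern raised in the PR discussion above.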
conan-io/conan
| 11675
|
https://github.com/conan-io/conan/pull/11675
|
conan-io__conan-11675
|
[]
|
2e7692bfd24210918d8a1e4203e6e79a0bca7878
|
diff --git a/conans/paths/__init__.py b/conans/paths/__init__.py
index 6457f1ad010..5992053360d 100644
--- a/conans/paths/__init__.py
+++ b/conans/paths/__init__.py
@@ -1,7 +1,7 @@
# coding=utf-8
-
import os
import platform
+from pathlib import Path
if platform.system() == "Windows":
from conans.util.windows import conan_expand_user
@@ -12,7 +12,34 @@
def get_conan_user_home():
- user_home = os.getenv("CONAN_HOME")
+
+ def _find_conanrc_file():
+ path = Path(os.getcwd())
+ while path.is_dir() and len(path.parts) > 1: # finish at '/'
+ conanrc_file = path / ".conanrc"
+ if conanrc_file.is_file():
+ return conanrc_file
+ else:
+ path = path.parent
+
+ def _user_home_from_conanrc_file():
+ try:
+ conanrc_path = _find_conanrc_file()
+
+ with open(conanrc_path) as conanrc_file:
+ values = {k: str(v) for k, v in
+ (line.split('=') for line in conanrc_file.read().splitlines() if
+ not line.startswith("#"))}
+
+ conan_home = values["conan_home"]
+ # check if it's a local folder
+ if conan_home[:2] in ("./", ".\\") or conan_home.startswith(".."):
+ conan_home = conanrc_path.parent.absolute() / conan_home
+ return conan_home
+ except (OSError, KeyError, TypeError):
+ return None
+
+ user_home = _user_home_from_conanrc_file() or os.getenv("CONAN_HOME")
if user_home is None:
# the default, in the user home
user_home = os.path.join(conan_expand_user("~"), DEFAULT_CONAN_HOME)
|
diff --git a/conans/test/unittests/paths/user_home_test.py b/conans/test/unittests/paths/user_home_test.py
new file mode 100644
index 00000000000..8b9912bac7a
--- /dev/null
+++ b/conans/test/unittests/paths/user_home_test.py
@@ -0,0 +1,83 @@
+import os
+import platform
+from pathlib import Path
+
+from conans.paths import get_conan_user_home, DEFAULT_CONAN_HOME
+from conans.test.utils.test_files import temp_folder
+from conans.util.files import chdir
+
+if platform.system() == "Windows":
+ from conans.util.windows import conan_expand_user
+else:
+ conan_expand_user = os.path.expanduser
+
+
+def test_conanrc_abs_path_get_conan_user_home():
+ _temp_folder = temp_folder(path_with_spaces=True)
+ folder_conan_runs = os.path.join(_temp_folder, "folder_where_conan_runs")
+ os.mkdir(folder_conan_runs)
+ with open(os.path.join(_temp_folder, ".conanrc"), 'w+') as file:
+ file.write(f'conan_home={_temp_folder}\n')
+ with chdir(folder_conan_runs):
+ conan_home = get_conan_user_home()
+ assert _temp_folder == conan_home
+
+
+def test_conanrc_local_path_get_conan_user_home():
+ _temp_folder = temp_folder(path_with_spaces=True)
+ subfolder = "subfolder inside temp"
+ with chdir(_temp_folder):
+ with open(os.path.join(_temp_folder, ".conanrc"), 'w+') as file:
+ file.write(f'conan_home=.{os.sep}{subfolder}\n')
+ conan_home = get_conan_user_home()
+ assert str(os.path.join(_temp_folder, subfolder)) == conan_home
+
+
+def test_conanrc_local_path_run_conan_subfolder_get_conan_user_home():
+ _temp_folder = temp_folder(path_with_spaces=True)
+ folder_conan_runs = os.path.join(_temp_folder, "folder_where_conan_runs")
+ os.mkdir(folder_conan_runs)
+ with open(os.path.join(_temp_folder, ".conanrc"), 'w+') as file:
+ file.write(f'conan_home=.{os.sep}\n')
+ with chdir(folder_conan_runs):
+ conan_home = get_conan_user_home()
+ assert str(os.path.join(_temp_folder)) == conan_home
+
+
+def test_conanrc_local_outside_folder_path_get_conan_user_home():
+ _temp_folder = temp_folder(path_with_spaces=True)
+ folder1 = os.path.join(_temp_folder, "folder1")
+ os.mkdir(folder1)
+ with chdir(folder1):
+ with open(os.path.join(folder1, ".conanrc"), 'w+') as file:
+ file.write(f'conan_home=..{os.sep}folder2\n')
+ conan_home = get_conan_user_home()
+ this_path = Path(_temp_folder) / "folder1" / f"..{os.sep}folder2"
+ assert str(this_path) == str(conan_home)
+
+
+def test_conanrc_comments():
+ _temp_folder = temp_folder(path_with_spaces=True)
+ with chdir(_temp_folder):
+ with open(os.path.join(_temp_folder, ".conanrc"), 'w+') as file:
+ file.write(f'#commenting something\nconan_home={_temp_folder}\n')
+ conan_home = get_conan_user_home()
+ assert _temp_folder == conan_home
+
+
+def test_conanrc_wrong_format():
+ _temp_folder = temp_folder(path_with_spaces=True)
+ with chdir(_temp_folder):
+ with open(os.path.join(_temp_folder, ".conanrc"), 'w+') as file:
+ file.write(f'ronan_jome={_temp_folder}\n')
+ conan_home = get_conan_user_home()
+
+ assert os.path.join(conan_expand_user("~"), DEFAULT_CONAN_HOME) == conan_home
+ assert _temp_folder not in conan_home
+
+
+def test_conanrc_not_existing():
+ _temp_folder = temp_folder(path_with_spaces=True)
+ with chdir(_temp_folder):
+ conan_home = get_conan_user_home()
+ assert os.path.join(conan_expand_user("~"), DEFAULT_CONAN_HOME) == conan_home
| 2022-07-20T07:03:56
|
{}
|
{"conans/paths/__init__.py": "# coding=utf-8\n\nimport os\nimport platform\n\nif platform.system() == \"Windows\":\n from conans.util.windows import conan_expand_user\nelse:\n conan_expand_user = os.path.expanduser\n\nDEFAULT_CONAN_HOME = \".conan2\"\n\n\ndef get_conan_user_home():\n user_home = os.getenv(\"CONAN_HOME\")\n if user_home is None:\n # the default, in the user home\n user_home = os.path.join(conan_expand_user(\"~\"), DEFAULT_CONAN_HOME)\n else: # Do an expansion, just in case the user is using ~/something/here\n user_home = conan_expand_user(user_home)\n if not os.path.isabs(user_home):\n raise Exception(\"Invalid CONAN_HOME value '%s', \"\n \"please specify an absolute or path starting with ~/ \"\n \"(relative to user home)\" % user_home)\n return user_home\n\n\n# Files\nCONANFILE = 'conanfile.py'\nCONANFILE_TXT = \"conanfile.txt\"\nCONAN_MANIFEST = \"conanmanifest.txt\"\nCONANINFO = \"conaninfo.txt\"\nPACKAGE_TGZ_NAME = \"conan_package.tgz\"\nEXPORT_TGZ_NAME = \"conan_export.tgz\"\nEXPORT_SOURCES_TGZ_NAME = \"conan_sources.tgz\"\nDEFAULT_PROFILE_NAME = \"default\"\nDATA_YML = \"conandata.yml\"\n"}
|
{"conans/paths/__init__.py": [{"type": "function", "name": "get_conan_user_home._find_conanrc_file", "lines": [16, 23], "signature": "def _find_conanrc_file():", "doc": ""}, {"type": "function", "name": "get_conan_user_home._user_home_from_conanrc_file", "lines": [25, 40], "signature": "def _user_home_from_conanrc_file():", "doc": ""}]}
| null |
["conans/test/unittests/paths/user_home_test.py::test_conanrc_abs_path_get_conan_user_home", "conans/test/unittests/paths/user_home_test.py::test_conanrc_local_path_get_conan_user_home", "conans/test/unittests/paths/user_home_test.py::test_conanrc_local_path_run_conan_subfolder_get_conan_user_home", "conans/test/unittests/paths/user_home_test.py::test_conanrc_local_outside_folder_path_get_conan_user_home", "conans/test/unittests/paths/user_home_test.py::test_conanrc_comments"]
|
["conans/test/unittests/paths/user_home_test.py::test_conanrc_wrong_format", "conans/test/unittests/paths/user_home_test.py::test_conanrc_not_existing"]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1658300211.0, "pr_title": "Add conan.conanrc file to setup the conan user home", "pr_body": "Closes: https://github.com/conan-io/conan/issues/11542\r\n\r\nYou can create a _.conanrc_ file in the folder where you are running conan (or any parent folder), it can have this content:\r\n\r\nSet the conan home to an absolute folder\r\n```\r\n# accepts comments\r\nconan_home=/absolute/folder\r\n```\r\n\r\nSet the conan home to a relative folder inside the current folder\r\n```\r\n# accepts comments\r\nconan_home=./relative folder/inside current folder\r\n```\r\n\r\nSet the conan home to a relative folder outside the current folder\r\n```\r\n# accepts comments\r\nconan_home=../relative folder/outside current folder\r\n```\r\n\r\nSet the conan home to a path containing the `~` that will be expanded to the system's user home\r\n```\r\n# accepts comments\r\nconan_home=~/use the user home to expand it\r\n```\r\nThe _.conanrc_ file is searched for in all parent folders so, if for example, in this structure:\r\n\r\n````\r\n.\r\n.conanrc\r\n\u251c\u2500\u2500 project1\r\n\u2514\u2500\u2500 project2\r\n````\r\n\r\nAnd you are running from folder `project1` the parent folders are traversed recursively until a _.conanrc_ is found in case it exists.\r\n\r\n", "pr_timeline": [{"time": 1658390125.0, "comment": "@lasote the storage path was already merged in https://github.com/conan-io/conan/pull/11672"}, {"time": 1659476884.0, "comment": "This made playing with Conan 2.0 so much easier! \u2764\ufe0f \r\n\r\nNow I just keep `.conanrc` in my root project folder with `conan_home=~/.conan2` and magic \ud83e\uddd9 I get\r\n\r\n```sh\r\n$ conan config home\r\nCurrent Conan home: /Users/christopherm/.conan2\r\n```\r\n\r\n\ud83d\udc4f "}, {"time": 1660601186.0, "comment": "@prince-chrismc To make things clear, ``.conan2`` is already the default name in Conan 2.0 beta to avoid colliding with Conan 1.X. But the feature should work for other names too, please check"}, {"time": 1660616559.0, "comment": "It does, `conan_home=/tmp/meetup/.conan_two`\r\n\r\n```\r\nCurrent Conan home: /Users/christopherm/.conan_two\r\n christopherm@christopherm-mac \ue0b0 ~/meetup \ue0b0 \ue0a0 conan2 \u00b1 \ue0b0 conan config home\r\nInitialized file: '/tmp/meetup/.conan_two/settings.yml'\r\nInitialized file: '/tmp/meetup/.conan_two/extensions/plugins/compatibility/compatibility.py'\r\nInitialized file: '/tmp/meetup/.conan_two/extensions/plugins/compatibility/app_compat.py'\r\nInitialized file: '/tmp/meetup/.conan_two/extensions/plugins/compatibility/cppstd_compat.py'\r\nInitialized file: '/tmp/meetup/.conan_two/extensions/plugins/profile.py'\r\nCurrent Conan home: /tmp/meetup/.conan_two\r\n```"}, {"time": 1660967127.0, "comment": "Amazing, thanks for this! This will make working in multiple repositories with different, conflicting hooks a dream. Also, monorepo \u2764\ufe0f "}], "issues": {}}
|
|
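The patch in the record above resolves the Conan home by walking up from the current directory until a `.conanrc` file is found and reading its `conan_home=` entry. A simplified, self-contained sketch of that lookup (not the exact Conan code) is:

```python
import os
from pathlib import Path


def find_conanrc(start_dir=None):
    """Walk up from start_dir until a .conanrc file is found, or return None."""
    path = Path(start_dir or os.getcwd())
    while path.is_dir() and len(path.parts) > 1:  # stop at the filesystem root
        candidate = path / ".conanrc"
        if candidate.is_file():
            return candidate
        path = path.parent
    return None


def conan_home_from_conanrc(conanrc_path):
    """Read 'conan_home=<folder>' from a .conanrc file, skipping '#' comment lines."""
    values = {}
    for line in conanrc_path.read_text().splitlines():
        if line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key] = value
    home = values.get("conan_home")
    if home and (home[:2] in ("./", ".\\") or home.startswith("..")):
        # Relative entries are resolved against the folder containing .conanrc
        home = str(conanrc_path.parent.absolute() / home)
    return home
```

As in the tests added by the patch, a missing file or a file without a `conan_home` key leaves the result empty, and the caller falls back to the default home under the user directory.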
conan-io/conan
| 11678
|
https://github.com/conan-io/conan/pull/11678
|
conan-io__conan-11678
|
[]
|
f4d5c49f0fbba650cf5162107f2135ca48336778
|
diff --git a/conan/tools/gnu/autotoolstoolchain.py b/conan/tools/gnu/autotoolstoolchain.py
index 52ee17a3939..d10f36d3b05 100644
--- a/conan/tools/gnu/autotoolstoolchain.py
+++ b/conan/tools/gnu/autotoolstoolchain.py
@@ -22,10 +22,10 @@ def __init__(self, conanfile, namespace=None):
self.make_args = []
# Flags
- self.cxxflags = []
- self.cflags = []
- self.ldflags = []
- self.defines = []
+ self.extra_cxxflags = []
+ self.extra_cflags = []
+ self.extra_ldflags = []
+ self.extra_defines = []
# Defines
self.gcc_cxx11_abi = self._get_cxx11_abi_define()
@@ -122,46 +122,52 @@ def _get_libcxx_flag(self):
def _filter_list_empty_fields(v):
return list(filter(bool, v))
- def _get_extra_flags(self):
- # Now, it's time to get all the flags defined by the user
- cxxflags = self._conanfile.conf.get("tools.build:cxxflags", default=[], check_type=list)
- cflags = self._conanfile.conf.get("tools.build:cflags", default=[], check_type=list)
- sharedlinkflags = self._conanfile.conf.get("tools.build:sharedlinkflags", default=[], check_type=list)
- exelinkflags = self._conanfile.conf.get("tools.build:exelinkflags", default=[], check_type=list)
- defines = self._conanfile.conf.get("tools.build:defines", default=[], check_type=list)
- return {
- "cxxflags": cxxflags,
- "cflags": cflags,
- "defines": defines,
- "ldflags": sharedlinkflags + exelinkflags
- }
-
- def environment(self):
- env = Environment()
-
+ @property
+ def cxxflags(self):
+ fpic = "-fPIC" if self.fpic else None
+ ret = [self.libcxx, self.cppstd, self.arch_flag, fpic, self.msvc_runtime_flag,
+ self.sysroot_flag]
apple_flags = [self.apple_isysroot_flag, self.apple_arch_flag, self.apple_min_version_flag]
+ conf_flags = self._conanfile.conf.get("tools.build:cxxflags", default=[], check_type=list)
+ ret = ret + self.build_type_flags + apple_flags + conf_flags + self.extra_cxxflags
+ return self._filter_list_empty_fields(ret)
+
+ @property
+ def cflags(self):
fpic = "-fPIC" if self.fpic else None
- extra_flags = self._get_extra_flags()
+ ret = [self.arch_flag, fpic, self.msvc_runtime_flag, self.sysroot_flag]
+ apple_flags = [self.apple_isysroot_flag, self.apple_arch_flag, self.apple_min_version_flag]
+ conf_flags = self._conanfile.conf.get("tools.build:cflags", default=[], check_type=list)
+ ret = ret + self.build_type_flags + apple_flags + conf_flags + self.extra_cflags
+ return self._filter_list_empty_fields(ret)
- self.cxxflags.extend([self.libcxx, self.cppstd,
- self.arch_flag, fpic, self.msvc_runtime_flag, self.sysroot_flag]
- + self.build_type_flags + apple_flags + extra_flags["cxxflags"])
- self.cflags.extend([self.arch_flag, fpic, self.msvc_runtime_flag, self.sysroot_flag]
- + self.build_type_flags + apple_flags + extra_flags["cflags"])
- self.ldflags.extend([self.arch_flag, self.sysroot_flag] + self.build_type_link_flags
- + apple_flags + extra_flags["ldflags"])
- self.defines.extend([self.ndebug, self.gcc_cxx11_abi] + extra_flags["defines"])
+ @property
+ def ldflags(self):
+ ret = [self.arch_flag, self.sysroot_flag]
+ apple_flags = [self.apple_isysroot_flag, self.apple_arch_flag, self.apple_min_version_flag]
+ conf_flags = self._conanfile.conf.get("tools.build:sharedlinkflags", default=[],
+ check_type=list)
+ conf_flags.extend(self._conanfile.conf.get("tools.build:exelinkflags", default=[],
+ check_type=list))
+ ret = ret + apple_flags + conf_flags + self.build_type_link_flags + self.extra_ldflags
+ return self._filter_list_empty_fields(ret)
+
+ @property
+ def defines(self):
+ conf_flags = self._conanfile.conf.get("tools.build:defines", default=[], check_type=list)
+ ret = [self.ndebug, self.gcc_cxx11_abi] + conf_flags + self.extra_defines
+ return self._filter_list_empty_fields(ret)
+ def environment(self):
+ env = Environment()
if is_msvc(self._conanfile):
env.define("CXX", "cl")
env.define("CC", "cl")
-
- env.append("CPPFLAGS", ["-D{}".format(d) for d in self._filter_list_empty_fields(self.defines)])
- env.append("CXXFLAGS", self._filter_list_empty_fields(self.cxxflags))
- env.append("CFLAGS", self._filter_list_empty_fields(self.cflags))
- env.append("LDFLAGS", self._filter_list_empty_fields(self.ldflags))
+ env.append("CPPFLAGS", ["-D{}".format(d) for d in self.defines])
+ env.append("CXXFLAGS", self.cxxflags)
+ env.append("CFLAGS", self.cflags)
+ env.append("LDFLAGS", self.ldflags)
env.prepend_path("PKG_CONFIG_PATH", self._conanfile.generators_folder)
-
return env
def vars(self):
|
diff --git a/conans/test/integration/toolchains/gnu/test_autotoolstoolchain.py b/conans/test/integration/toolchains/gnu/test_autotoolstoolchain.py
index 7a20a026036..36ede715b8f 100644
--- a/conans/test/integration/toolchains/gnu/test_autotoolstoolchain.py
+++ b/conans/test/integration/toolchains/gnu/test_autotoolstoolchain.py
@@ -42,3 +42,28 @@ def test_extra_flags_via_conf():
assert 'export CXXFLAGS="$CXXFLAGS -O3 -s --flag1 --flag2"' in toolchain
assert 'export CFLAGS="$CFLAGS -O3 -s --flag3 --flag4"' in toolchain
assert 'export LDFLAGS="$LDFLAGS --flag5 --flag6"' in toolchain
+
+
+def test_not_none_values():
+
+ conanfile = textwrap.dedent("""
+ from conan import ConanFile
+ from conan.tools.gnu import AutotoolsToolchain
+
+ class Foo(ConanFile):
+ name = "foo"
+ version = "1.0"
+
+ def generate(self):
+ tc = AutotoolsToolchain(self)
+ assert None not in tc.defines
+ assert None not in tc.cxxflags
+ assert None not in tc.cflags
+ assert None not in tc.ldflags
+
+ """)
+
+ client = TestClient()
+ client.save({"conanfile.py": conanfile})
+ client.run("install .")
+
diff --git a/conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py b/conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py
index a9232a1e622..fb9ba023148 100644
--- a/conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py
+++ b/conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py
@@ -364,7 +364,12 @@ def test_custom_defines():
"os.version": "14",
"arch": "armv8"})
be = AutotoolsToolchain(conanfile)
- be.defines = ["MyDefine1", "MyDefine2"]
+ be.extra_defines = ["MyDefine1", "MyDefine2"]
+
+ assert "MyDefine1" in be.defines
+ assert "MyDefine2" in be.defines
+ assert "NDEBUG" in be.defines
+
env = be.vars()
assert "-DMyDefine1" in env["CPPFLAGS"]
assert "-DMyDefine2" in env["CPPFLAGS"]
@@ -380,7 +385,14 @@ def test_custom_cxxflags():
"os.version": "14",
"arch": "armv8"})
be = AutotoolsToolchain(conanfile)
- be.cxxflags = ["MyFlag1", "MyFlag2"]
+ be.extra_cxxflags = ["MyFlag1", "MyFlag2"]
+
+ assert "MyFlag1" in be.cxxflags
+ assert "MyFlag2" in be.cxxflags
+ assert "-mios-version-min=14" in be.cxxflags
+ assert "MyFlag" not in be.cflags
+ assert "MyFlag" not in be.ldflags
+
env = be.vars()
assert "MyFlag1" in env["CXXFLAGS"]
assert "MyFlag2" in env["CXXFLAGS"]
@@ -399,7 +411,14 @@ def test_custom_cflags():
"os.version": "14",
"arch": "armv8"})
be = AutotoolsToolchain(conanfile)
- be.cflags = ["MyFlag1", "MyFlag2"]
+ be.extra_cflags = ["MyFlag1", "MyFlag2"]
+
+ assert "MyFlag1" in be.cflags
+ assert "MyFlag2" in be.cflags
+ assert "-mios-version-min=14" in be.cflags
+ assert "MyFlag" not in be.cxxflags
+ assert "MyFlag" not in be.ldflags
+
env = be.vars()
assert "MyFlag1" in env["CFLAGS"]
assert "MyFlag2" in env["CFLAGS"]
@@ -418,7 +437,14 @@ def test_custom_ldflags():
"os.version": "14",
"arch": "armv8"})
be = AutotoolsToolchain(conanfile)
- be.ldflags = ["MyFlag1", "MyFlag2"]
+ be.extra_ldflags = ["MyFlag1", "MyFlag2"]
+
+ assert "MyFlag1" in be.ldflags
+ assert "MyFlag2" in be.ldflags
+ assert "-mios-version-min=14" in be.ldflags
+ assert "MyFlag" not in be.cxxflags
+ assert "MyFlag" not in be.cflags
+
env = be.vars()
assert "MyFlag1" in env["LDFLAGS"]
assert "MyFlag2" in env["LDFLAGS"]
| 2022-07-20T08:20:27
|
{}
|
{"conan/tools/gnu/autotoolstoolchain.py": "from conan.tools._check_build_profile import check_using_build_profile\nfrom conan.tools._compilers import architecture_flag, build_type_flags, cppstd_flag, \\\n build_type_link_flags\nfrom conan.tools.apple.apple import apple_min_version_flag, to_apple_arch, \\\n apple_sdk_path, is_apple_os\nfrom conan.tools.build.cross_building import cross_building, get_cross_building_settings\nfrom conan.tools.env import Environment\nfrom conan.tools.files.files import save_toolchain_args\nfrom conan.tools.gnu.get_gnu_triplet import _get_gnu_triplet\nfrom conan.tools.microsoft import VCVars, is_msvc, msvc_runtime_flag\nfrom conans.errors import ConanException\nfrom conans.tools import args_to_string\n\n\nclass AutotoolsToolchain:\n def __init__(self, conanfile, namespace=None):\n self._conanfile = conanfile\n self._namespace = namespace\n\n self.configure_args = self._default_configure_shared_flags() + self._default_configure_install_flags()\n self.autoreconf_args = self._default_autoreconf_flags()\n self.make_args = []\n\n # Flags\n self.cxxflags = []\n self.cflags = []\n self.ldflags = []\n self.defines = []\n\n # Defines\n self.gcc_cxx11_abi = self._get_cxx11_abi_define()\n self.ndebug = None\n build_type = self._conanfile.settings.get_safe(\"build_type\")\n if build_type in ['Release', 'RelWithDebInfo', 'MinSizeRel']:\n self.ndebug = \"NDEBUG\"\n\n # TODO: This is also covering compilers like Visual Studio, necessary to test it (&remove?)\n self.build_type_flags = build_type_flags(self._conanfile.settings)\n self.build_type_link_flags = build_type_link_flags(self._conanfile.settings)\n\n self.cppstd = cppstd_flag(self._conanfile.settings)\n self.arch_flag = architecture_flag(self._conanfile.settings)\n self.libcxx = self._get_libcxx_flag()\n self.fpic = self._conanfile.options.get_safe(\"fPIC\")\n self.msvc_runtime_flag = self._get_msvc_runtime_flag()\n\n # Cross build\n self._host = None\n self._build = None\n self._target = None\n\n self.apple_arch_flag = self.apple_isysroot_flag = None\n self.apple_min_version_flag = apple_min_version_flag(self._conanfile)\n\n self.sysroot_flag = None\n\n if cross_building(self._conanfile):\n os_build, arch_build, os_host, arch_host = get_cross_building_settings(self._conanfile)\n compiler = self._conanfile.settings.get_safe(\"compiler\")\n self._host = _get_gnu_triplet(os_host, arch_host, compiler=compiler)\n self._build = _get_gnu_triplet(os_build, arch_build, compiler=compiler)\n\n # Apple Stuff\n if os_build == \"Macos\":\n sdk_path = apple_sdk_path(conanfile)\n apple_arch = to_apple_arch(self._conanfile.settings.get_safe(\"arch\"))\n # https://man.archlinux.org/man/clang.1.en#Target_Selection_Options\n self.apple_arch_flag = \"-arch {}\".format(apple_arch) if apple_arch else None\n # -isysroot makes all includes for your library relative to the build directory\n self.apple_isysroot_flag = \"-isysroot {}\".format(sdk_path) if sdk_path else None\n\n sysroot = self._conanfile.conf.get(\"tools.build:sysroot\")\n sysroot = sysroot.replace(\"\\\\\", \"/\") if sysroot is not None else None\n self.sysroot_flag = \"--sysroot {}\".format(sysroot) if sysroot else None\n\n check_using_build_profile(self._conanfile)\n\n def _get_cxx11_abi_define(self):\n # https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html\n # The default is libstdc++11, only specify the contrary '_GLIBCXX_USE_CXX11_ABI=0'\n settings = self._conanfile.settings\n libcxx = settings.get_safe(\"compiler.libcxx\")\n if not libcxx:\n return\n\n 
compiler = settings.get_safe(\"compiler.base\") or settings.get_safe(\"compiler\")\n if compiler in ['clang', 'apple-clang', 'gcc']:\n if libcxx == 'libstdc++':\n return '_GLIBCXX_USE_CXX11_ABI=0'\n elif libcxx == \"libstdc++11\" and self._conanfile.conf.get(\"tools.gnu:define_libcxx11_abi\",\n check_type=bool):\n return '_GLIBCXX_USE_CXX11_ABI=1'\n\n def _get_msvc_runtime_flag(self):\n flag = msvc_runtime_flag(self._conanfile)\n if flag:\n flag = \"-{}\".format(flag)\n return flag\n\n def _get_libcxx_flag(self):\n settings = self._conanfile.settings\n libcxx = settings.get_safe(\"compiler.libcxx\")\n if not libcxx:\n return\n\n compiler = settings.get_safe(\"compiler.base\") or settings.get_safe(\"compiler\")\n\n if compiler in ['clang', 'apple-clang']:\n if libcxx in ['libstdc++', 'libstdc++11']:\n return '-stdlib=libstdc++'\n elif libcxx == 'libc++':\n return '-stdlib=libc++'\n elif compiler == 'sun-cc':\n return ({\"libCstd\": \"-library=Cstd\",\n \"libstdcxx\": \"-library=stdcxx4\",\n \"libstlport\": \"-library=stlport4\",\n \"libstdc++\": \"-library=stdcpp\"}.get(libcxx))\n elif compiler == \"qcc\":\n return \"-Y _%s\" % str(libcxx)\n\n @staticmethod\n def _filter_list_empty_fields(v):\n return list(filter(bool, v))\n\n def _get_extra_flags(self):\n # Now, it's time to get all the flags defined by the user\n cxxflags = self._conanfile.conf.get(\"tools.build:cxxflags\", default=[], check_type=list)\n cflags = self._conanfile.conf.get(\"tools.build:cflags\", default=[], check_type=list)\n sharedlinkflags = self._conanfile.conf.get(\"tools.build:sharedlinkflags\", default=[], check_type=list)\n exelinkflags = self._conanfile.conf.get(\"tools.build:exelinkflags\", default=[], check_type=list)\n defines = self._conanfile.conf.get(\"tools.build:defines\", default=[], check_type=list)\n return {\n \"cxxflags\": cxxflags,\n \"cflags\": cflags,\n \"defines\": defines,\n \"ldflags\": sharedlinkflags + exelinkflags\n }\n\n def environment(self):\n env = Environment()\n\n apple_flags = [self.apple_isysroot_flag, self.apple_arch_flag, self.apple_min_version_flag]\n fpic = \"-fPIC\" if self.fpic else None\n extra_flags = self._get_extra_flags()\n\n self.cxxflags.extend([self.libcxx, self.cppstd,\n self.arch_flag, fpic, self.msvc_runtime_flag, self.sysroot_flag]\n + self.build_type_flags + apple_flags + extra_flags[\"cxxflags\"])\n self.cflags.extend([self.arch_flag, fpic, self.msvc_runtime_flag, self.sysroot_flag]\n + self.build_type_flags + apple_flags + extra_flags[\"cflags\"])\n self.ldflags.extend([self.arch_flag, self.sysroot_flag] + self.build_type_link_flags\n + apple_flags + extra_flags[\"ldflags\"])\n self.defines.extend([self.ndebug, self.gcc_cxx11_abi] + extra_flags[\"defines\"])\n\n if is_msvc(self._conanfile):\n env.define(\"CXX\", \"cl\")\n env.define(\"CC\", \"cl\")\n\n env.append(\"CPPFLAGS\", [\"-D{}\".format(d) for d in self._filter_list_empty_fields(self.defines)])\n env.append(\"CXXFLAGS\", self._filter_list_empty_fields(self.cxxflags))\n env.append(\"CFLAGS\", self._filter_list_empty_fields(self.cflags))\n env.append(\"LDFLAGS\", self._filter_list_empty_fields(self.ldflags))\n env.prepend_path(\"PKG_CONFIG_PATH\", self._conanfile.generators_folder)\n\n return env\n\n def vars(self):\n return self.environment().vars(self._conanfile, scope=\"build\")\n\n def generate(self, env=None, scope=\"build\"):\n env = env or self.environment()\n env = env.vars(self._conanfile, scope=scope)\n env.save_script(\"conanautotoolstoolchain\")\n self.generate_args()\n 
VCVars(self._conanfile).generate(scope=scope)\n\n def _default_configure_shared_flags(self):\n args = []\n # Just add these flags if there's a shared option defined (never add to exe's)\n # FIXME: For Conan 2.0 use the package_type to decide if adding these flags or not\n try:\n if self._conanfile.options.shared:\n args.extend([\"--enable-shared\", \"--disable-static\"])\n else:\n args.extend([\"--disable-shared\", \"--enable-static\"])\n except ConanException:\n pass\n\n return args\n\n def _default_configure_install_flags(self):\n configure_install_flags = []\n\n def _get_argument(argument_name, cppinfo_name):\n elements = getattr(self._conanfile.cpp.package, cppinfo_name)\n return \"--{}=${{prefix}}/{}\".format(argument_name, elements[0]) if elements else \"\"\n\n # If someone want arguments but not the defaults can pass them in args manually\n configure_install_flags.extend([\"--prefix=/\",\n _get_argument(\"bindir\", \"bindirs\"),\n _get_argument(\"sbindir\", \"bindirs\"),\n _get_argument(\"libdir\", \"libdirs\"),\n _get_argument(\"includedir\", \"includedirs\"),\n _get_argument(\"oldincludedir\", \"includedirs\"),\n _get_argument(\"datarootdir\", \"resdirs\")])\n return [el for el in configure_install_flags if el]\n\n def _default_autoreconf_flags(self):\n return [\"--force\", \"--install\"]\n\n def generate_args(self):\n configure_args = []\n configure_args.extend(self.configure_args)\n user_args_str = args_to_string(self.configure_args)\n for flag, var in ((\"host\", self._host), (\"build\", self._build), (\"target\", self._target)):\n if var and flag not in user_args_str:\n configure_args.append('--{}={}'.format(flag, var))\n\n args = {\"configure_args\": args_to_string(configure_args),\n \"make_args\": args_to_string(self.make_args),\n \"autoreconf_args\": args_to_string(self.autoreconf_args)}\n\n save_toolchain_args(args, namespace=self._namespace)\n"}
|
{"conan/tools/gnu/autotoolstoolchain.py": [{"type": "function", "name": "AutotoolsToolchain.cxxflags", "lines": [126, 133], "signature": "def cxxflags(self):", "doc": ""}, {"type": "function", "name": "AutotoolsToolchain.cflags", "lines": [136, 142], "signature": "def cflags(self):", "doc": ""}, {"type": "function", "name": "AutotoolsToolchain.ldflags", "lines": [145, 153], "signature": "def ldflags(self):", "doc": ""}, {"type": "function", "name": "AutotoolsToolchain.defines", "lines": [156, 159], "signature": "def defines(self):", "doc": ""}]}
| null |
["conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_custom_defines", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_custom_cxxflags", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_custom_cflags", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_custom_ldflags"]
|
["conans/test/integration/toolchains/gnu/test_autotoolstoolchain.py::test_extra_flags_via_conf", "conans/test/integration/toolchains/gnu/test_autotoolstoolchain.py::test_not_none_values", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_modify_environment", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_target_triple", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_invalid_target_triple", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_cppstd", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_fpic", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_ndebug", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_libcxx[config0]", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_libcxx[config1]", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_libcxx[config2]", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_libcxx[config3]", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_libcxx[config4]", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_libcxx[config5]", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_libcxx[config6]", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_libcxx[config7]", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_libcxx[config8]", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_libcxx[config9]", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_libcxx[config10]", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_libcxx[config11]", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_libcxx[config12]", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_libcxx[config13]", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_libcxx[config14]", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_cxx11_abi_define", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_architecture_flag[config0]", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_architecture_flag[config1]", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_build_type_flag[Visual", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_build_type_flag[msvc]", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_apple_arch_flag", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_apple_min_os_flag", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_apple_isysrootflag", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_sysrootflag", "conans/test/unittests/client/toolchain/autotools/autotools_toolchain_test.py::test_extra_flags_via_conf"]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1658304411.0, "pr_title": "AutotoolsToolchain: empty None values from self fields + Refactor", "pr_body": "Changelog: Bugfix: The `AutotoolsToolchain` now clears `None` values from the attributes `.cxxflags`, `.cflags`, `.ldflags` and `.defines`.\r\nChangelog: Feature: The `AutotoolsToolchain` attributes `.cxxflags`, `.cflags`, `.ldflags` and `.defines` can be read at any moment, now is not needed to call `.environment()` to get them calculated. In the other hand, if you want to add additional flags the following attributes have to be used: `.extra_cxxflags`, `.extra_cflags`, `.extra_ldflags` and `.extra_defines`\r\nDocs: https://github.com/conan-io/docs/pull/2660\r\n\r\n", "pr_timeline": [{"time": 1658392226.0, "comment": "The mechanism was already there, I suppose that some recipes (only for the recipe) might need to introduce something in some configuration that Conan is not providing. I would love to remove the 80% of the customization power but I'm afraid we can't."}], "issues": {}}
|
|
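The refactor in the record above turns `cxxflags`, `cflags`, `ldflags` and `defines` into computed properties and routes user additions through the `extra_*` lists. A hedged usage sketch (a hypothetical recipe mirroring the added tests, not authoritative documentation) could be:

```python
from conan import ConanFile
from conan.tools.gnu import AutotoolsToolchain


class PkgConan(ConanFile):
    # Hypothetical recipe used only to illustrate the new attribute split
    name = "pkg"
    version = "1.0"
    settings = "os", "compiler", "build_type", "arch"

    def generate(self):
        tc = AutotoolsToolchain(self)
        # User-provided flags now go into the extra_* lists...
        tc.extra_cxxflags = ["-Wall"]
        tc.extra_defines = ["MY_DEFINE"]
        # ...while the aggregated, None-free lists can be read directly,
        # without calling environment() first
        assert None not in tc.cxxflags and "MY_DEFINE" in tc.defines
        tc.generate()
```

This mirrors `test_not_none_values` and `test_custom_defines` from the test patch: the properties are readable at any point, and extra flags are appended after the flags derived from settings and conf.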
conan-io/conan
| 11908
|
https://github.com/conan-io/conan/pull/11908
|
conan-io__conan-11908
|
[]
|
a5525e523d2c6a3636a062134364686059ba8863
|
diff --git a/conans/client/loader.py b/conans/client/loader.py
index d1d89130dd2..804d73b50d1 100644
--- a/conans/client/loader.py
+++ b/conans/client/loader.py
@@ -2,6 +2,7 @@
import imp
import inspect
import os
+import re
import sys
import types
import uuid
@@ -436,13 +437,20 @@ def _parse_conanfile(conan_file_path):
old_modules = list(sys.modules.keys())
with chdir(current_dir):
old_dont_write_bytecode = sys.dont_write_bytecode
- sys.dont_write_bytecode = True
- loaded = imp.load_source(module_id, conan_file_path)
- sys.dont_write_bytecode = old_dont_write_bytecode
-
- required_conan_version = getattr(loaded, "required_conan_version", None)
- if required_conan_version:
- validate_conan_version(required_conan_version)
+ try:
+ sys.dont_write_bytecode = True
+ # FIXME: imp is deprecated in favour of implib
+ loaded = imp.load_source(module_id, conan_file_path)
+ sys.dont_write_bytecode = old_dont_write_bytecode
+ except ImportError:
+ version_txt = _get_required_conan_version_without_loading(conan_file_path)
+ if version_txt:
+ validate_conan_version(version_txt)
+ raise
+
+ required_conan_version = getattr(loaded, "required_conan_version", None)
+ if required_conan_version:
+ validate_conan_version(required_conan_version)
# These lines are necessary, otherwise local conanfile imports with same name
# collide, but no error, and overwrite other packages imports!!
@@ -475,3 +483,20 @@ def _parse_conanfile(conan_file_path):
sys.path.pop(0)
return loaded, module_id
+
+
+def _get_required_conan_version_without_loading(conan_file_path):
+ # First, try to detect the required_conan_version in "text" mode
+ # https://github.com/conan-io/conan/issues/11239
+ contents = load(conan_file_path)
+
+ txt_version = None
+
+ try:
+ found = re.search(r"required_conan_version\s*=\s*(.*)", contents)
+ if found:
+ txt_version = found.group(1).replace('"', "")
+ except:
+ pass
+
+ return txt_version
|
diff --git a/conans/test/integration/conanfile/required_conan_version_test.py b/conans/test/integration/conanfile/required_conan_version_test.py
index 7f0c11f25c7..f0261c11b03 100644
--- a/conans/test/integration/conanfile/required_conan_version_test.py
+++ b/conans/test/integration/conanfile/required_conan_version_test.py
@@ -35,3 +35,45 @@ class Lib(ConanFile):
client.run("install pkg/1.0@", assert_error=True)
self.assertIn("Current Conan version (%s) does not satisfy the defined one (>=100.0)"
% __version__, client.out)
+
+ def test_required_conan_version_with_loading_issues(self):
+ # https://github.com/conan-io/conan/issues/11239
+ client = TestClient()
+ conanfile = textwrap.dedent("""
+ from conan import missing_import
+
+ required_conan_version = ">=100.0"
+
+ class Lib(ConanFile):
+ pass
+ """)
+ client.save({"conanfile.py": conanfile})
+ client.run("export . pkg/1.0@", assert_error=True)
+ self.assertIn("Current Conan version (%s) does not satisfy the defined one (>=100.0)"
+ % __version__, client.out)
+
+ # Assigning required_conan_version without spaces
+ conanfile = textwrap.dedent("""
+ from conan import missing_import
+
+ required_conan_version=">=100.0"
+
+ class Lib(ConanFile):
+ pass
+ """)
+ client.save({"conanfile.py": conanfile})
+ client.run("export . pkg/1.0@", assert_error=True)
+ self.assertIn("Current Conan version (%s) does not satisfy the defined one (>=100.0)"
+ % __version__, client.out)
+
+ # If the range is correct, everything works, of course
+ conanfile = textwrap.dedent("""
+ from conan import ConanFile
+
+ required_conan_version = ">1.0.0"
+
+ class Lib(ConanFile):
+ pass
+ """)
+ client.save({"conanfile.py": conanfile})
+ client.run("export . pkg/1.0@")
| 2022-08-18T11:25:43
|
{}
|
{"conans/client/loader.py": "import fnmatch\nimport imp\nimport inspect\nimport os\nimport sys\nimport types\nimport uuid\n\nimport yaml\n\nfrom pathlib import Path\n\nfrom conan.tools.cmake import cmake_layout\nfrom conan.tools.google import bazel_layout\nfrom conan.tools.microsoft import vs_layout\nfrom conans.client.conf.required_version import validate_conan_version\nfrom conans.client.loader_txt import ConanFileTextLoader\nfrom conans.client.tools.files import chdir\nfrom conans.errors import ConanException, NotFoundException, ConanInvalidConfiguration, \\\n conanfile_exception_formatter\nfrom conans.model.conan_file import ConanFile\nfrom conans.model.conan_generator import Generator\nfrom conans.model.options import OptionsValues\nfrom conans.model.ref import ConanFileReference\nfrom conans.model.settings import Settings\nfrom conans.paths import DATA_YML\nfrom conans.util.files import load\n\n\nclass ConanFileLoader(object):\n\n def __init__(self, runner, output, python_requires, generator_manager=None, pyreq_loader=None,\n requester=None):\n self._runner = runner\n self._generator_manager = generator_manager\n self._output = output\n self._pyreq_loader = pyreq_loader\n self._python_requires = python_requires\n sys.modules[\"conans\"].python_requires = python_requires\n self._cached_conanfile_classes = {}\n self._requester = requester\n\n def load_basic(self, conanfile_path, lock_python_requires=None, user=None, channel=None,\n display=\"\"):\n \"\"\" loads a conanfile basic object without evaluating anything\n \"\"\"\n return self.load_basic_module(conanfile_path, lock_python_requires, user, channel,\n display)[0]\n\n def load_basic_module(self, conanfile_path, lock_python_requires=None, user=None, channel=None,\n display=\"\"):\n \"\"\" loads a conanfile basic object without evaluating anything, returns the module too\n \"\"\"\n cached = self._cached_conanfile_classes.get(conanfile_path)\n if cached and cached[1] == lock_python_requires:\n conanfile = cached[0](self._output, self._runner, display, user, channel)\n conanfile._conan_requester = self._requester\n if hasattr(conanfile, \"init\") and callable(conanfile.init):\n with conanfile_exception_formatter(str(conanfile), \"init\"):\n conanfile.init()\n return conanfile, cached[2]\n\n if lock_python_requires is not None:\n self._python_requires.locked_versions = {r.name: r for r in lock_python_requires}\n try:\n self._python_requires.valid = True\n module, conanfile = parse_conanfile(conanfile_path, self._python_requires,\n self._generator_manager)\n self._python_requires.valid = False\n\n self._python_requires.locked_versions = None\n\n # This is the new py_requires feature, to supersede the old python_requires\n if self._pyreq_loader:\n self._pyreq_loader.load_py_requires(conanfile, lock_python_requires, self)\n\n conanfile.recipe_folder = os.path.dirname(conanfile_path)\n conanfile.recipe_path = Path(conanfile.recipe_folder)\n\n # If the scm is inherited, create my own instance\n if hasattr(conanfile, \"scm\") and \"scm\" not in conanfile.__class__.__dict__:\n if isinstance(conanfile.scm, dict):\n conanfile.scm = conanfile.scm.copy()\n\n # Load and populate dynamic fields from the data file\n conan_data = self._load_data(conanfile_path)\n conanfile.conan_data = conan_data\n if conan_data and '.conan' in conan_data:\n scm_data = conan_data['.conan'].get('scm')\n if scm_data:\n conanfile.scm.update(scm_data)\n\n self._cached_conanfile_classes[conanfile_path] = (conanfile, lock_python_requires,\n module)\n result = 
conanfile(self._output, self._runner, display, user, channel)\n result._conan_requester = self._requester\n if hasattr(result, \"init\") and callable(result.init):\n with conanfile_exception_formatter(str(result), \"init\"):\n result.init()\n return result, module\n except ConanException as e:\n raise ConanException(\"Error loading conanfile at '{}': {}\".format(conanfile_path, e))\n\n def load_generators(self, conanfile_path):\n \"\"\" Load generator classes from a module. Any non-generator classes\n will be ignored. python_requires is not processed.\n \"\"\"\n \"\"\" Parses a python in-memory module and adds any generators found\n to the provided generator list\n @param conanfile_module: the module to be processed\n \"\"\"\n conanfile_module, module_id = _parse_conanfile(conanfile_path)\n for name, attr in conanfile_module.__dict__.items():\n if (name.startswith(\"_\") or not inspect.isclass(attr) or\n attr.__dict__.get(\"__module__\") != module_id):\n continue\n if issubclass(attr, Generator) and attr != Generator:\n self._generator_manager.add(attr.__name__, attr, custom=True)\n\n @staticmethod\n def _load_data(conanfile_path):\n data_path = os.path.join(os.path.dirname(conanfile_path), DATA_YML)\n if not os.path.exists(data_path):\n return None\n\n try:\n data = yaml.safe_load(load(data_path))\n except Exception as e:\n raise ConanException(\"Invalid yml format at {}: {}\".format(DATA_YML, e))\n\n return data or {}\n\n def load_named(self, conanfile_path, name, version, user, channel, lock_python_requires=None):\n \"\"\" loads the basic conanfile object and evaluates its name and version\n \"\"\"\n conanfile, _ = self.load_basic_module(conanfile_path, lock_python_requires, user, channel)\n\n # Export does a check on existing name & version\n if name:\n if conanfile.name and name != conanfile.name:\n raise ConanException(\"Package recipe with name %s!=%s\" % (name, conanfile.name))\n conanfile.name = name\n\n if version:\n if conanfile.version and version != conanfile.version:\n raise ConanException(\"Package recipe with version %s!=%s\"\n % (version, conanfile.version))\n conanfile.version = version\n\n if hasattr(conanfile, \"set_name\"):\n with conanfile_exception_formatter(\"conanfile.py\", \"set_name\"):\n conanfile.set_name()\n if name and name != conanfile.name:\n raise ConanException(\"Package recipe with name %s!=%s\" % (name, conanfile.name))\n if hasattr(conanfile, \"set_version\"):\n with conanfile_exception_formatter(\"conanfile.py\", \"set_version\"):\n conanfile.set_version()\n if version and version != conanfile.version:\n raise ConanException(\"Package recipe with version %s!=%s\"\n % (version, conanfile.version))\n\n return conanfile\n\n def load_export(self, conanfile_path, name, version, user, channel, lock_python_requires=None):\n \"\"\" loads the conanfile and evaluates its name, version, and enforce its existence\n \"\"\"\n conanfile = self.load_named(conanfile_path, name, version, user, channel,\n lock_python_requires)\n if not conanfile.name:\n raise ConanException(\"conanfile didn't specify name\")\n if not conanfile.version:\n raise ConanException(\"conanfile didn't specify version\")\n\n # FIXME Conan 2.0, conanfile.version should be a string, not a version object\n\n ref = ConanFileReference(conanfile.name, conanfile.version, user, channel)\n conanfile.display_name = str(ref)\n conanfile.output.scope = conanfile.display_name\n return conanfile\n\n @staticmethod\n def _initialize_conanfile(conanfile, profile):\n # Prepare the settings for the loaded 
conanfile\n # Mixing the global settings with the specified for that name if exist\n tmp_settings = profile.processed_settings.copy()\n package_settings_values = profile.package_settings_values\n if conanfile._conan_user is not None:\n ref_str = \"%s/%s@%s/%s\" % (conanfile.name, conanfile.version,\n conanfile._conan_user, conanfile._conan_channel)\n else:\n ref_str = \"%s/%s\" % (conanfile.name, conanfile.version)\n if package_settings_values:\n # First, try to get a match directly by name (without needing *)\n # TODO: Conan 2.0: We probably want to remove this, and leave a pure fnmatch\n pkg_settings = package_settings_values.get(conanfile.name)\n\n if conanfile.develop and \"&\" in package_settings_values:\n # \"&\" overrides the \"name\" scoped settings.\n pkg_settings = package_settings_values.get(\"&\")\n\n if pkg_settings is None: # If there is not exact match by package name, do fnmatch\n for pattern, settings in package_settings_values.items():\n if fnmatch.fnmatchcase(ref_str, pattern):\n pkg_settings = settings\n break\n if pkg_settings:\n tmp_settings.update_values(pkg_settings)\n\n conanfile.initialize(tmp_settings, profile.env_values, profile.buildenv)\n conanfile.conf = profile.conf.get_conanfile_conf(ref_str)\n\n def load_consumer(self, conanfile_path, profile_host, name=None, version=None, user=None,\n channel=None, lock_python_requires=None, require_overrides=None):\n \"\"\" loads a conanfile.py in user space. Might have name/version or not\n \"\"\"\n conanfile = self.load_named(conanfile_path, name, version, user, channel,\n lock_python_requires)\n\n ref = ConanFileReference(conanfile.name, conanfile.version, user, channel, validate=False)\n if str(ref):\n conanfile.display_name = \"%s (%s)\" % (os.path.basename(conanfile_path), str(ref))\n else:\n conanfile.display_name = os.path.basename(conanfile_path)\n conanfile.output.scope = conanfile.display_name\n conanfile.in_local_cache = False\n try:\n conanfile.develop = True\n self._initialize_conanfile(conanfile, profile_host)\n\n # The consumer specific\n profile_host.user_options.descope_options(conanfile.name)\n conanfile.options.initialize_upstream(profile_host.user_options,\n name=conanfile.name)\n profile_host.user_options.clear_unscoped_options()\n\n if require_overrides is not None:\n for req_override in require_overrides:\n req_override = ConanFileReference.loads(req_override)\n conanfile.requires.override(req_override)\n\n return conanfile\n except ConanInvalidConfiguration:\n raise\n except Exception as e: # re-raise with file name\n raise ConanException(\"%s: %s\" % (conanfile_path, str(e)))\n\n def load_conanfile(self, conanfile_path, profile, ref, lock_python_requires=None):\n \"\"\" load a conanfile with a full reference, name, version, user and channel are obtained\n from the reference, not evaluated. 
Main way to load from the cache\n \"\"\"\n try:\n conanfile, _ = self.load_basic_module(conanfile_path, lock_python_requires,\n ref.user, ref.channel, str(ref))\n except Exception as e:\n raise ConanException(\"%s: Cannot load recipe.\\n%s\" % (str(ref), str(e)))\n\n conanfile.name = ref.name\n # FIXME Conan 2.0, version should be a string not a Version object\n conanfile.version = ref.version\n\n if profile.dev_reference and profile.dev_reference == ref:\n conanfile.develop = True\n try:\n self._initialize_conanfile(conanfile, profile)\n return conanfile\n except ConanInvalidConfiguration:\n raise\n except Exception as e: # re-raise with file name\n raise ConanException(\"%s: %s\" % (conanfile_path, str(e)))\n\n def load_conanfile_txt(self, conan_txt_path, profile_host, ref=None, require_overrides=None):\n if not os.path.exists(conan_txt_path):\n raise NotFoundException(\"Conanfile not found!\")\n\n contents = load(conan_txt_path)\n path, basename = os.path.split(conan_txt_path)\n display_name = \"%s (%s)\" % (basename, ref) if ref and ref.name else basename\n conanfile = self._parse_conan_txt(contents, path, display_name, profile_host)\n\n if require_overrides is not None:\n for req_override in require_overrides:\n req_override = ConanFileReference.loads(req_override)\n conanfile.requires.override(req_override)\n\n return conanfile\n\n def _parse_conan_txt(self, contents, path, display_name, profile):\n conanfile = ConanFile(self._output, self._runner, display_name)\n tmp_settings = profile.processed_settings.copy()\n package_settings_values = profile.package_settings_values\n if \"&\" in package_settings_values:\n pkg_settings = package_settings_values.get(\"&\")\n if pkg_settings:\n tmp_settings.update_values(pkg_settings)\n conanfile.initialize(Settings(), profile.env_values, profile.buildenv)\n conanfile.conf = profile.conf.get_conanfile_conf(None)\n # It is necessary to copy the settings, because the above is only a constraint of\n # conanfile settings, and a txt doesn't define settings. 
Necessary for generators,\n # as cmake_multi, that check build_type.\n conanfile.settings = tmp_settings.copy_values()\n\n try:\n parser = ConanFileTextLoader(contents)\n except Exception as e:\n raise ConanException(\"%s:\\n%s\" % (path, str(e)))\n for reference in parser.requirements:\n ref = ConanFileReference.loads(reference) # Raise if invalid\n conanfile.requires.add_ref(ref)\n for build_reference in parser.build_requirements:\n ConanFileReference.loads(build_reference)\n if not hasattr(conanfile, \"build_requires\"):\n conanfile.build_requires = []\n conanfile.build_requires.append(build_reference)\n if parser.layout:\n layout_method = {\"cmake_layout\": cmake_layout,\n \"vs_layout\": vs_layout,\n \"bazel_layout\": bazel_layout}.get(parser.layout)\n if not layout_method:\n raise ConanException(\"Unknown predefined layout '{}' declared in \"\n \"conanfile.txt\".format(parser.layout))\n\n def layout(self):\n layout_method(self)\n\n conanfile.layout = types.MethodType(layout, conanfile)\n\n conanfile.generators = parser.generators\n try:\n options = OptionsValues.loads(parser.options)\n except Exception:\n raise ConanException(\"Error while parsing [options] in conanfile\\n\"\n \"Options should be specified as 'pkg:option=value'\")\n conanfile.options.values = options\n conanfile.options.initialize_upstream(profile.user_options)\n\n # imports method\n conanfile.imports = parser.imports_method(conanfile)\n return conanfile\n\n def load_virtual(self, references, profile_host, scope_options=True,\n build_requires_options=None, is_build_require=False, require_overrides=None):\n # If user don't specify namespace in options, assume that it is\n # for the reference (keep compatibility)\n conanfile = ConanFile(self._output, self._runner, display_name=\"virtual\")\n conanfile.initialize(profile_host.processed_settings.copy(),\n profile_host.env_values, profile_host.buildenv)\n conanfile.conf = profile_host.conf.get_conanfile_conf(None)\n conanfile.settings = profile_host.processed_settings.copy_values()\n\n if is_build_require:\n conanfile.build_requires = [str(r) for r in references]\n else:\n for reference in references:\n conanfile.requires.add_ref(reference)\n\n if require_overrides is not None:\n for req_override in require_overrides:\n req_override = ConanFileReference.loads(req_override)\n conanfile.requires.override(req_override)\n\n # Allows options without package namespace in conan install commands:\n # conan install zlib/1.2.8@lasote/stable -o shared=True\n if scope_options:\n assert len(references) == 1\n profile_host.user_options.scope_options(references[0].name)\n if build_requires_options:\n conanfile.options.initialize_upstream(build_requires_options)\n else:\n conanfile.options.initialize_upstream(profile_host.user_options)\n\n conanfile.generators = [] # remove the default txt generator\n return conanfile\n\n\ndef _parse_module(conanfile_module, module_id, generator_manager):\n \"\"\" Parses a python in-memory module, to extract the classes, mainly the main\n class defining the Recipe, but also process possible existing generators\n @param conanfile_module: the module to be processed\n @return: the main ConanFile class from the module\n \"\"\"\n result = None\n for name, attr in conanfile_module.__dict__.items():\n if (name.startswith(\"_\") or not inspect.isclass(attr) or\n attr.__dict__.get(\"__module__\") != module_id):\n continue\n\n if issubclass(attr, ConanFile) and attr != ConanFile:\n if result is None:\n result = attr\n else:\n raise ConanException(\"More than 1 
conanfile in the file\")\n elif issubclass(attr, Generator) and attr != Generator:\n generator_manager.add(attr.__name__, attr, custom=True)\n\n if result is None:\n raise ConanException(\"No subclass of ConanFile\")\n\n return result\n\n\ndef parse_conanfile(conanfile_path, python_requires, generator_manager):\n with python_requires.capture_requires() as py_requires:\n module, filename = _parse_conanfile(conanfile_path)\n try:\n conanfile = _parse_module(module, filename, generator_manager)\n\n # Check for duplicates\n # TODO: move it into PythonRequires\n py_reqs = {}\n for it in py_requires:\n if it.ref.name in py_reqs:\n dupes = [str(it.ref), str(py_reqs[it.ref.name].ref)]\n raise ConanException(\"Same python_requires with different versions not allowed\"\n \" for a conanfile. Found '{}'\".format(\"', '\".join(dupes)))\n py_reqs[it.ref.name] = it\n\n # Make them available to the conanfile itself\n if py_reqs:\n conanfile.python_requires = py_reqs\n return module, conanfile\n except Exception as e: # re-raise with file name\n raise ConanException(\"%s: %s\" % (conanfile_path, str(e)))\n\n\ndef _parse_conanfile(conan_file_path):\n \"\"\" From a given path, obtain the in memory python import module\n \"\"\"\n\n if not os.path.exists(conan_file_path):\n raise NotFoundException(\"%s not found!\" % conan_file_path)\n\n module_id = str(uuid.uuid1())\n current_dir = os.path.dirname(conan_file_path)\n sys.path.insert(0, current_dir)\n try:\n old_modules = list(sys.modules.keys())\n with chdir(current_dir):\n old_dont_write_bytecode = sys.dont_write_bytecode\n sys.dont_write_bytecode = True\n loaded = imp.load_source(module_id, conan_file_path)\n sys.dont_write_bytecode = old_dont_write_bytecode\n\n required_conan_version = getattr(loaded, \"required_conan_version\", None)\n if required_conan_version:\n validate_conan_version(required_conan_version)\n\n # These lines are necessary, otherwise local conanfile imports with same name\n # collide, but no error, and overwrite other packages imports!!\n added_modules = set(sys.modules).difference(old_modules)\n for added in added_modules:\n module = sys.modules[added]\n if module:\n try:\n try:\n # Most modules will have __file__ != None\n folder = os.path.dirname(module.__file__)\n except (AttributeError, TypeError):\n # But __file__ might not exist or equal None\n # Like some builtins and Namespace packages py3\n folder = module.__path__._path[0]\n except AttributeError: # In case the module.__path__ doesn't exist\n pass\n else:\n if folder.startswith(current_dir):\n module = sys.modules.pop(added)\n sys.modules[\"%s.%s\" % (module_id, added)] = module\n except ConanException:\n raise\n except Exception:\n import traceback\n trace = traceback.format_exc().split('\\n')\n raise ConanException(\"Unable to load conanfile in %s\\n%s\" % (conan_file_path,\n '\\n'.join(trace[3:])))\n finally:\n sys.path.pop(0)\n\n return loaded, module_id\n"}
|
{"conans/client/loader.py": [{"type": "function", "name": "_get_required_conan_version_without_loading", "lines": [488, 502], "signature": "def _get_required_conan_version_without_loading(conan_file_path):", "doc": ""}]}
| null |
["conans/test/integration/conanfile/required_conan_version_test.py::RequiredConanVersionTest::test_required_conan_version_with_loading_issues"]
|
["conans/test/integration/conanfile/required_conan_version_test.py::RequiredConanVersionTest::test_required_conan_version"]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1660821352.0, "pr_title": "Read required_conan_version in text mode before loading the recipe", "pr_body": "Changelog: Feature: Fail sooner and with a meaningful error if the specified required version is not satisfied.\r\nDocs: omit\r\n\r\nConan will try to read the `required_conan_version` from the recipes without loading the recipe (python interpreter) to be able to check it even if there are imports broken/missing for the user conan current installed version. So Conan can fail sooner and with a meaningful error if the specified required version is not satisfied.\r\n\r\nClose https://github.com/conan-io/conan/issues/11239", "pr_timeline": [], "issues": {}}
|
|
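For reference on the record above ("Read required_conan_version in text mode before loading the recipe"): the idea is to scan the recipe file as plain text for a `required_conan_version` assignment before importing it, so that an incompatible Conan version fails fast even when the recipe's imports would break under the currently installed client. The helper name mirrors the `_get_required_conan_version_without_loading` entry listed in the new-components field of this record; the regex, comment handling and fallback below are an illustrative sketch under those assumptions, not the PR's exact implementation.

import re


def get_required_conan_version_without_loading(conan_file_path):
    # Best-effort text scan: read the recipe as plain text and look for a
    # `required_conan_version = "<range>"` assignment, skipping lines where
    # the assignment is commented out. Returns None if nothing usable is found.
    try:
        with open(conan_file_path, encoding="utf-8") as f:
            content = f.read()
    except OSError:
        return None
    match = re.search(r"^(.*)required_conan_version\s*=\s*[\"']([^\"']+)[\"']",
                      content, flags=re.MULTILINE)
    if match is None or "#" in match.group(1):
        # Not declared, or the declaration is commented out.
        return None
    return match.group(2)

A helper along these lines can be called before executing the recipe module, feeding its result to the existing `validate_conan_version` check, which is what produces the earlier, meaningful error described in the PR body.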
conan-io/conan
| 12,094
|
https://github.com/conan-io/conan/pull/12094
|
conan-io__conan-12094
|
[]
|
91c5aaf1115fd3a9c9932c8d804a14d894076c49
|
diff --git a/conans/model/options.py b/conans/model/options.py
index 642c4afe2d1..8c270238059 100644
--- a/conans/model/options.py
+++ b/conans/model/options.py
@@ -231,6 +231,12 @@ def get_safe(self, attr):
return None
return getattr(self._package_values, attr)
+ def rm_safe(self, attr):
+ try:
+ delattr(self._package_values, attr)
+ except ConanException:
+ pass
+
def __getitem__(self, item):
return self._reqs_options.setdefault(item, PackageOptionValues())
@@ -425,6 +431,12 @@ def loads(text):
def get_safe(self, field, default=None):
return self._data.get(field, default)
+ def rm_safe(self, field):
+ try:
+ delattr(self, field)
+ except ConanException:
+ pass
+
def validate(self):
for child in self._data.values():
child.validate()
@@ -584,6 +596,9 @@ def __delattr__(self, field):
except ConanException:
pass
+ def rm_safe(self, field):
+ self._package_options.rm_safe(field)
+
@property
def values(self):
result = OptionsValues()
diff --git a/conans/model/settings.py b/conans/model/settings.py
index 8a7e697aa3b..096483ed38c 100644
--- a/conans/model/settings.py
+++ b/conans/model/settings.py
@@ -213,6 +213,19 @@ def get_safe(self, name, default=None):
return str(tmp)
return default
+ def rm_safe(self, name):
+ try:
+ tmp = self
+ attr_ = name
+ if "." in name:
+ fields = name.split(".")
+ attr_ = fields.pop()
+ for prop in fields:
+ tmp = getattr(tmp, prop)
+ delattr(tmp, attr_)
+ except ConanException:
+ pass
+
def copy(self):
""" deepcopy, recursive
"""
|
diff --git a/conans/test/integration/settings/remove_subsetting_test.py b/conans/test/integration/settings/remove_subsetting_test.py
index 5c8e6d8a275..209838e1b99 100644
--- a/conans/test/integration/settings/remove_subsetting_test.py
+++ b/conans/test/integration/settings/remove_subsetting_test.py
@@ -1,4 +1,5 @@
import os
+import textwrap
import unittest
from conans.test.utils.tools import TestClient
@@ -144,3 +145,57 @@ def build(self):
client.run("package .")
self.assertIn("ERROR: PACKAGE 'settings.compiler.libcxx' doesn't exist for 'gcc'",
client.out)
+
+
+def test_settings_and_options_rm_safe():
+ client = TestClient()
+ conanfile = textwrap.dedent("""
+ from conan import ConanFile
+ class Pkg(ConanFile):
+ settings = "os", "build_type", "compiler"
+ options = {"opt1": [True, False], "opt2": [True, False]}
+ default_options = "opt1=True", "opt2=False"
+
+ def configure(self):
+ # setting
+ self.settings.rm_safe("build_type")
+ # sub-setting
+ self.settings.rm_safe("compiler.version")
+ # wrong settings
+ self.settings.rm_safe("fake_field")
+ self.settings.rm_safe("fake_field.version")
+
+ def config_options(self):
+ # option
+ self.options.rm_safe("opt2")
+ # wrong option
+ self.options.rm_safe("opt15")
+
+ def build(self):
+ try:
+ self.settings.build_type
+ except Exception as exc:
+ self.output.warn(str(exc))
+ try:
+ self.settings.compiler.version
+ except Exception as exc:
+ self.output.warn(str(exc))
+ try:
+ self.options.opt2
+ except Exception as exc:
+ self.output.warn(str(exc))
+ assert "opt2" not in self.options
+ """)
+ client.save({"conanfile.py": conanfile})
+ build_folder = os.path.join(client.current_folder, "build")
+ mkdir(build_folder)
+ client.current_folder = build_folder
+
+ client.run("install ..")
+ client.run("build ..")
+ assert "'settings.build_type' doesn't exist" in client.out
+ assert "'settings' possible configurations are ['compiler', 'os']" in client.out
+ assert "'settings.compiler.version' doesn't exist" in client.out
+ assert "'settings.compiler' possible configurations are [" in client.out
+ assert "option 'opt2' doesn't exist" in client.out
+ assert "Possible options are ['opt1']" in client.out
| 2022-09-12T15:01:24
|
{}
|
{"conans/model/options.py": "\nimport fnmatch\n\nimport six\nimport yaml\n\nfrom conans.errors import ConanException\nfrom conans.util.sha import sha1\n\n_falsey_options = [\"false\", \"none\", \"0\", \"off\", \"\"]\n\n\ndef option_wrong_value_msg(name, value, value_range):\n \"\"\" The provided value is not among the range of values that it should\n be\n \"\"\"\n return (\"'%s' is not a valid 'options.%s' value.\\nPossible values are %s\"\n % (value, name, value_range))\n\n\ndef option_not_exist_msg(option_name, existing_options):\n \"\"\" Someone is referencing an option that is not available in the current package\n options\n \"\"\"\n result = [\"option '%s' doesn't exist\" % option_name,\n \"Possible options are %s\" % existing_options or \"none\"]\n return \"\\n\".join(result)\n\n\ndef option_undefined_msg(name):\n return \"'%s' value not defined\" % name\n\n\nclass PackageOptionValue(str):\n \"\"\" thin wrapper around a string value that allows to check for several false string\n and also promote other types to string for homegeneous comparison\n \"\"\"\n def __bool__(self):\n return self.lower() not in _falsey_options\n\n def __nonzero__(self):\n return self.__bool__()\n\n def __eq__(self, other):\n return str(other).__eq__(self)\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n\nclass PackageOptionValues(object):\n \"\"\" set of key(string)-value(PackageOptionValue) for options of a package.\n Not prefixed by package name:\n static: True\n optimized: 2\n These are non-validating, not constrained.\n Used for UserOptions, which is a dict{package_name: PackageOptionValues}\n \"\"\"\n def __init__(self):\n self._dict = {} # {option_name: PackageOptionValue}\n self._modified = {}\n self._freeze = False\n\n def __bool__(self):\n return bool(self._dict)\n\n def __contains__(self, key):\n return key in self._dict\n\n def __nonzero__(self):\n return self.__bool__()\n\n def __getattr__(self, attr):\n if attr not in self._dict:\n raise ConanException(option_not_exist_msg(attr, list(self._dict.keys())))\n return self._dict[attr]\n\n def __delattr__(self, attr):\n if attr not in self._dict:\n return\n del self._dict[attr]\n\n def clear(self):\n self._dict.clear()\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def __eq__(self, other):\n return self._dict == other._dict\n\n def __setattr__(self, attr, value):\n if attr[0] == \"_\":\n return super(PackageOptionValues, self).__setattr__(attr, value)\n self._dict[attr] = PackageOptionValue(value)\n\n def copy(self):\n result = PackageOptionValues()\n for k, v in self._dict.items():\n result._dict[k] = v\n return result\n\n @property\n def fields(self):\n return sorted(list(self._dict.keys()))\n\n def keys(self):\n return self._dict.keys()\n\n def items(self):\n return sorted(list(self._dict.items()))\n\n def add(self, option_text):\n assert isinstance(option_text, six.string_types)\n name, value = option_text.split(\"=\")\n self._dict[name.strip()] = PackageOptionValue(value.strip())\n\n def add_option(self, option_name, option_value):\n self._dict[option_name] = PackageOptionValue(option_value)\n\n def update(self, other):\n assert isinstance(other, PackageOptionValues)\n self._dict.update(other._dict)\n\n def remove(self, option_name):\n del self._dict[option_name]\n\n def freeze(self):\n self._freeze = True\n\n def propagate_upstream(self, down_package_values, down_ref, own_ref, package_name):\n if not down_package_values:\n return\n\n assert isinstance(down_package_values, PackageOptionValues)\n for (name, value) 
in down_package_values.items():\n if name in self._dict and self._dict.get(name) == value:\n continue\n\n if self._freeze:\n raise ConanException(\"%s tried to change %s option %s to %s\\n\"\n \"but it was already defined as %s\"\n % (down_ref, own_ref, name, value, self._dict.get(name)))\n\n modified = self._modified.get(name)\n if modified is not None:\n modified_value, modified_ref = modified\n raise ConanException(\"%s tried to change %s option %s:%s to %s\\n\"\n \"but it was already assigned to %s by %s\"\n % (down_ref, own_ref, package_name, name, value,\n modified_value, modified_ref))\n else:\n self._modified[name] = (value, down_ref)\n self._dict[name] = value\n\n def serialize(self):\n return self.items()\n\n @property\n def sha(self):\n result = []\n for name, value in self.items():\n # It is important to discard None values, so migrations in settings can be done\n # without breaking all existing packages SHAs, by adding a first \"None\" option\n # that doesn't change the final sha\n if value:\n result.append(\"%s=%s\" % (name, value))\n return sha1('\\n'.join(result).encode())\n\n\nclass OptionsValues(object):\n \"\"\" static= True,\n Boost.static = False,\n Poco.optimized = True\n \"\"\"\n def __init__(self, values=None):\n self._package_values = PackageOptionValues()\n self._reqs_options = {} # {name(\"Boost\": PackageOptionValues}\n if not values:\n return\n\n # convert tuple \"Pkg:option=value\", \"...\" to list of tuples(name, value)\n if isinstance(values, tuple):\n values = [item.split(\"=\", 1) for item in values]\n\n # convert dict {\"Pkg:option\": \"value\", \"..\": \"..\", ...} to list of tuples (name, value)\n if isinstance(values, dict):\n values = [(k, v) for k, v in values.items()]\n\n # handle list of tuples (name, value)\n for (k, v) in values:\n k = k.strip()\n v = v.strip() if isinstance(v, six.string_types) else v\n tokens = k.split(\":\")\n if len(tokens) == 2:\n package, option = tokens\n if package.endswith(\"/*\"):\n # Compatibility with 2.0, only allowed /*, at Conan 2.0 a version or any\n # pattern would be allowed\n package = package[:-2]\n package_values = self._reqs_options.setdefault(package.strip(),\n PackageOptionValues())\n package_values.add_option(option, v)\n else:\n self._package_values.add_option(k, v)\n\n def update(self, other):\n self._package_values.update(other._package_values)\n for package_name, package_values in other._reqs_options.items():\n pkg_values = self._reqs_options.setdefault(package_name, PackageOptionValues())\n pkg_values.update(package_values)\n\n def scope_options(self, name):\n if self._package_values:\n self._reqs_options.setdefault(name, PackageOptionValues()).update(self._package_values)\n self._package_values = PackageOptionValues()\n\n def descope_options(self, name):\n package_values = self._reqs_options.pop(name, None)\n if package_values:\n self._package_values.update(package_values)\n\n def clear_unscoped_options(self):\n self._package_values.clear()\n\n def __contains__(self, item):\n return item in self._package_values\n\n def get_safe(self, attr):\n if attr not in self._package_values:\n return None\n return getattr(self._package_values, attr)\n\n def __getitem__(self, item):\n return self._reqs_options.setdefault(item, PackageOptionValues())\n\n def __setitem__(self, item, value):\n self._reqs_options[item] = value\n\n def pop(self, item):\n return self._reqs_options.pop(item, None)\n\n def remove(self, name, package=None):\n if package:\n self._reqs_options[package].remove(name)\n else:\n 
self._package_values.remove(name)\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def __eq__(self, other):\n if not self._package_values == other._package_values:\n return False\n # It is possible that the entry in the dict is not defined\n for key, pkg_values in self._reqs_options.items():\n other_values = other[key]\n if not pkg_values == other_values:\n return False\n return True\n\n def __repr__(self):\n return self.dumps()\n\n def __getattr__(self, attr):\n return getattr(self._package_values, attr)\n\n def copy(self):\n result = OptionsValues()\n result._package_values = self._package_values.copy()\n for k, v in self._reqs_options.items():\n result._reqs_options[k] = v.copy()\n return result\n\n def __setattr__(self, attr, value):\n if attr[0] == \"_\":\n return super(OptionsValues, self).__setattr__(attr, value)\n return setattr(self._package_values, attr, value)\n\n def __delattr__(self, attr):\n delattr(self._package_values, attr)\n\n def clear_indirect(self):\n for v in self._reqs_options.values():\n v.clear()\n\n def filter_used(self, used_pkg_names):\n self._reqs_options = {k: v for k, v in self._reqs_options.items() if k in used_pkg_names}\n\n def as_list(self):\n result = []\n options_list = self._package_values.items()\n if options_list:\n result.extend(options_list)\n for package_name, package_values in sorted(self._reqs_options.items()):\n for option_name, option_value in package_values.items():\n result.append((\"%s:%s\" % (package_name, option_name), option_value))\n return result\n\n def dumps(self):\n result = []\n for key, value in self.as_list():\n result.append(\"%s=%s\" % (key, value))\n return \"\\n\".join(result)\n\n @staticmethod\n def loads(text):\n \"\"\" parses a multiline text in the form\n Package:option=value\n other_option=3\n OtherPack:opt3=12.1\n \"\"\"\n options = tuple(line.strip() for line in text.splitlines() if line.strip())\n return OptionsValues(options)\n\n @property\n def sha(self):\n result = [self._package_values.sha]\n for key in sorted(list(self._reqs_options.keys())):\n result.append(self._reqs_options[key].sha)\n return sha1('\\n'.join(result).encode())\n\n def serialize(self):\n ret = {\"options\": self._package_values.serialize(),\n \"req_options\": {}}\n for name, values in self._reqs_options.items():\n ret[\"req_options\"][name] = values.serialize()\n return ret\n\n def clear(self):\n self._package_values.clear()\n self._reqs_options.clear()\n\n\nclass PackageOption(object):\n def __init__(self, possible_values, name):\n self._name = name\n self._value = None\n if possible_values == \"ANY\" or (isinstance(possible_values, list) and\n \"ANY\" in possible_values):\n self._possible_values = \"ANY\"\n else:\n self._possible_values = sorted(str(v) for v in possible_values)\n\n def copy(self):\n result = PackageOption(self._possible_values, self._name)\n return result\n\n def __bool__(self):\n if not self._value:\n return False\n return self._value.lower() not in _falsey_options\n\n def __nonzero__(self):\n return self.__bool__()\n\n def __str__(self):\n return str(self._value)\n\n def __int__(self):\n return int(self._value)\n\n def _check_option_value(self, value):\n \"\"\" checks that the provided value is allowed by current restrictions\n \"\"\"\n if self._possible_values != \"ANY\" and value not in self._possible_values:\n raise ConanException(option_wrong_value_msg(self._name, value, self._possible_values))\n\n def __eq__(self, other):\n if other is None:\n return self._value is None\n other = str(other)\n 
self._check_option_value(other)\n return other == self.__str__()\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def remove(self, values):\n if self._possible_values == \"ANY\":\n return\n if not isinstance(values, (list, tuple, set)):\n values = [values]\n values = [str(v) for v in values]\n self._possible_values = [v for v in self._possible_values if v not in values]\n\n if self._value is not None:\n self._check_option_value(self._value)\n\n @property\n def value(self):\n return self._value\n\n @value.setter\n def value(self, v):\n v = str(v)\n self._check_option_value(v)\n self._value = v\n\n def validate(self):\n if self._value is None and \"None\" not in self._possible_values:\n raise ConanException(option_undefined_msg(self._name))\n\n\nclass PackageOptions(object):\n def __init__(self, definition):\n definition = definition or {}\n self._data = {str(k): PackageOption(v, str(k))\n for k, v in definition.items()}\n self._modified = {}\n self._freeze = False\n\n def copy(self):\n result = PackageOptions(None)\n result._data = {k: v.copy() for k, v in self._data.items()}\n return result\n\n def __contains__(self, option):\n return str(option) in self._data\n\n @staticmethod\n def loads(text):\n return PackageOptions(yaml.safe_load(text) or {})\n\n def get_safe(self, field, default=None):\n return self._data.get(field, default)\n\n def validate(self):\n for child in self._data.values():\n child.validate()\n\n @property\n def fields(self):\n return sorted(list(self._data.keys()))\n\n def remove(self, item):\n if not isinstance(item, (list, tuple, set)):\n item = [item]\n for it in item:\n it = str(it)\n self._data.pop(it, None)\n\n def clear(self):\n self._data = {}\n\n def _ensure_exists(self, field):\n if field not in self._data:\n raise ConanException(option_not_exist_msg(field, list(self._data.keys())))\n\n def __getattr__(self, field):\n assert field[0] != \"_\", \"ERROR %s\" % field\n self._ensure_exists(field)\n return self._data[field]\n\n def __delattr__(self, field):\n assert field[0] != \"_\", \"ERROR %s\" % field\n self._ensure_exists(field)\n del self._data[field]\n\n def __setattr__(self, field, value):\n if field[0] == \"_\" or field.startswith(\"values\"):\n return super(PackageOptions, self).__setattr__(field, value)\n\n self._ensure_exists(field)\n self._data[field].value = value\n\n @property\n def values(self):\n result = PackageOptionValues()\n for field, package_option in self._data.items():\n result.add_option(field, package_option.value)\n return result\n\n def _items(self):\n result = []\n for field, package_option in sorted(list(self._data.items())):\n result.append((field, package_option.value))\n return result\n\n def items(self):\n return self._items()\n\n def iteritems(self):\n return self._items()\n\n @values.setter\n def values(self, vals):\n assert isinstance(vals, PackageOptionValues)\n for (name, value) in vals.items():\n self._ensure_exists(name)\n self._data[name].value = value\n\n def initialize_patterns(self, values):\n # Need to apply only those that exists\n for option, value in values.items():\n if option in self._data:\n self._data[option].value = value\n\n def freeze(self):\n self._freeze = True\n\n def propagate_upstream(self, package_values, down_ref, own_ref, pattern_options):\n \"\"\"\n :param: package_values: PackageOptionValues({\"shared\": \"True\"}\n :param: pattern_options: Keys from the \"package_values\" e.g. 
[\"shared\"] that shouldn't raise\n if they are not existing options for the current object\n \"\"\"\n if not package_values:\n return\n\n for (name, value) in package_values.items():\n if name in self._data and self._data.get(name) == value:\n continue\n\n if self._freeze:\n raise ConanException(\"%s tried to change %s option %s to %s\\n\"\n \"but it was already defined as %s\"\n % (down_ref, own_ref, name, value, self._data.get(name)))\n modified = self._modified.get(name)\n if modified is not None:\n modified_value, modified_ref = modified\n raise ConanException(\"%s tried to change %s option %s to %s\\n\"\n \"but it was already assigned to %s by %s\"\n % (down_ref, own_ref, name, value,\n modified_value, modified_ref))\n else:\n if name in pattern_options: # If it is a pattern-matched option, should check field\n if name in self._data:\n self._data[name].value = value\n self._modified[name] = (value, down_ref)\n else:\n self._ensure_exists(name)\n self._data[name].value = value\n self._modified[name] = (value, down_ref)\n\n\nclass Options(object):\n \"\"\" All options of a package, both its own options and the upstream ones.\n Owned by ConanFile.\n \"\"\"\n def __init__(self, options):\n assert isinstance(options, PackageOptions)\n self._package_options = options\n # Addressed only by name, as only 1 configuration is allowed\n # if more than 1 is present, 1 should be \"private\" requirement and its options\n # are not public, not overridable\n self._deps_package_values = {} # {name(\"Boost\": PackageOptionValues}\n\n def copy(self):\n \"\"\" deepcopy, same as Settings\"\"\"\n result = Options(self._package_options.copy())\n result._deps_package_values = {k: v.copy() for k, v in self._deps_package_values.items()}\n return result\n\n def freeze(self):\n self._package_options.freeze()\n for v in self._deps_package_values.values():\n v.freeze()\n\n @property\n def deps_package_values(self):\n return self._deps_package_values\n\n def clear(self):\n self._package_options.clear()\n\n def __contains__(self, option):\n return option in self._package_options\n\n def __getitem__(self, item):\n return self._deps_package_values.setdefault(item, PackageOptionValues())\n\n def __getattr__(self, attr):\n return getattr(self._package_options, attr)\n\n def __setattr__(self, attr, value):\n if attr[0] == \"_\" or attr == \"values\":\n return super(Options, self).__setattr__(attr, value)\n return setattr(self._package_options, attr, value)\n\n def __delattr__(self, field):\n try:\n self._package_options.__delattr__(field)\n except ConanException:\n pass\n\n @property\n def values(self):\n result = OptionsValues()\n result._package_values = self._package_options.values\n for k, v in self._deps_package_values.items():\n result._reqs_options[k] = v.copy()\n return result\n\n @values.setter\n def values(self, v):\n assert isinstance(v, OptionsValues)\n self._package_options.values = v._package_values\n self._deps_package_values.clear()\n for k, v in v._reqs_options.items():\n self._deps_package_values[k] = v.copy()\n\n def propagate_upstream(self, down_package_values, down_ref, own_ref):\n \"\"\" used to propagate from downstream the options to the upper requirements\n :param: down_package_values => {\"*\": PackageOptionValues({\"shared\": \"True\"})}\n :param: down_ref\n :param: own_ref: Reference of the current package => ConanFileReference\n \"\"\"\n if not down_package_values:\n return\n\n assert isinstance(down_package_values, dict)\n option_values = PackageOptionValues()\n # First step is to accumulate 
all matching patterns, in sorted()=alphabetical order\n # except the exact match\n\n for package_pattern, package_option_values in sorted(down_package_values.items()):\n if own_ref.name != package_pattern and fnmatch.fnmatch(own_ref.name, package_pattern):\n option_values.update(package_option_values)\n # These are pattern options, shouldn't raise if not existing\n pattern_options = list(option_values.keys())\n # Now, update with the exact match, that has higher priority\n down_options = down_package_values.get(own_ref.name)\n if down_options is not None:\n option_values.update(down_options)\n\n self._package_options.propagate_upstream(option_values, down_ref, own_ref,\n pattern_options=pattern_options)\n\n # Upstream propagation to deps\n for name, option_values in sorted(list(down_package_values.items())):\n if name != own_ref.name:\n pkg_values = self._deps_package_values.setdefault(name, PackageOptionValues())\n pkg_values.propagate_upstream(option_values, down_ref, own_ref, name)\n\n def initialize_upstream(self, user_values, name=None):\n \"\"\" used to propagate from downstream the options to the upper requirements\n \"\"\"\n if user_values is not None:\n assert isinstance(user_values, OptionsValues)\n # This code is necessary to process patterns like *:shared=True\n # To apply to the current consumer, which might not have name\n for pattern, pkg_options in sorted(user_values._reqs_options.items()):\n # pattern = & means the consumer, irrespective of name\n if fnmatch.fnmatch(name or \"\", pattern) or pattern == \"&\":\n self._package_options.initialize_patterns(pkg_options)\n # Then, the normal assignment of values, which could override patterns\n self._package_options.values = user_values._package_values\n for package_name, package_values in user_values._reqs_options.items():\n pkg_values = self._deps_package_values.setdefault(package_name,\n PackageOptionValues())\n pkg_values.update(package_values)\n\n def validate(self):\n return self._package_options.validate()\n\n def propagate_downstream(self, ref, options):\n assert isinstance(options, OptionsValues)\n self._deps_package_values[ref.name] = options._package_values\n for k, v in options._reqs_options.items():\n self._deps_package_values[k] = v.copy()\n\n def clear_unused(self, prefs):\n \"\"\" remove all options not related to the passed references,\n that should be the upstream requirements\n \"\"\"\n existing_names = [pref.ref.name for pref in prefs]\n self._deps_package_values = {k: v for k, v in self._deps_package_values.items()\n if k in existing_names}\n", "conans/model/settings.py": "import yaml\n\nfrom conans.errors import ConanException\nfrom conans.model.values import Values\n\n\ndef bad_value_msg(name, value, value_range):\n tip = \"\"\n if \"settings\" in name:\n tip = '\\nRead \"http://docs.conan.io/en/latest/faq/troubleshooting.html' \\\n '#error-invalid-setting\"'\n\n return (\"Invalid setting '%s' is not a valid '%s' value.\\nPossible values are %s%s\"\n % (value, name, value_range, tip))\n\n\ndef undefined_field(name, field, fields=None, value=None):\n value_str = \" for '%s'\" % value if value else \"\"\n result = [\"'%s.%s' doesn't exist%s\" % (name, field, value_str),\n \"'%s' possible configurations are %s\" % (name, fields or \"none\")]\n return ConanException(\"\\n\".join(result))\n\n\ndef undefined_value(name):\n return ConanException(\"'%s' value not defined\" % name)\n\n\nclass SettingsItem(object):\n \"\"\" represents a setting value and its child info, which could be:\n - A range of valid values: 
[Debug, Release] (for settings.compiler.runtime of VS)\n - \"ANY\", as string to accept any value\n - List [\"None\", \"ANY\"] to accept None or any value\n - A dict {subsetting: definition}, e.g. {version: [], runtime: []} for VS\n \"\"\"\n def __init__(self, definition, name):\n self._name = name # settings.compiler\n self._value = None # gcc\n if isinstance(definition, dict):\n self._definition = {}\n # recursive\n for k, v in definition.items():\n k = str(k)\n self._definition[k] = Settings(v, name, k)\n elif definition == \"ANY\":\n self._definition = \"ANY\"\n else:\n # list or tuple of possible values\n self._definition = [str(v) for v in definition]\n\n def __contains__(self, value):\n return value in (self._value or \"\")\n\n def copy(self):\n \"\"\" deepcopy, recursive\n \"\"\"\n result = SettingsItem({}, name=self._name)\n result._value = self._value\n if self.is_final:\n result._definition = self._definition[:]\n else:\n result._definition = {k: v.copy() for k, v in self._definition.items()}\n return result\n\n def copy_values(self):\n if self._value is None and \"None\" not in self._definition:\n return None\n\n result = SettingsItem({}, name=self._name)\n result._value = self._value\n if self.is_final:\n result._definition = self._definition[:]\n else:\n result._definition = {k: v.copy_values() for k, v in self._definition.items()}\n return result\n\n @property\n def is_final(self):\n return not isinstance(self._definition, dict)\n\n def __bool__(self):\n if not self._value:\n return False\n return self._value.lower() not in [\"false\", \"none\", \"0\", \"off\"]\n\n def __nonzero__(self):\n return self.__bool__()\n\n def __str__(self):\n return str(self._value)\n\n def _not_any(self):\n return self._definition != \"ANY\" and \"ANY\" not in self._definition\n\n def __eq__(self, other):\n if other is None:\n return self._value is None\n other = str(other)\n if self._not_any() and other not in self.values_range:\n raise ConanException(bad_value_msg(self._name, other, self.values_range))\n return other == self.__str__()\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def __delattr__(self, item):\n \"\"\" This is necessary to remove libcxx subsetting from compiler in config()\n del self.settings.compiler.stdlib\n \"\"\"\n try:\n self._get_child(self._value).remove(item)\n except Exception:\n pass\n\n def remove(self, values):\n if not isinstance(values, (list, tuple, set)):\n values = [values]\n for v in values:\n v = str(v)\n if isinstance(self._definition, dict):\n self._definition.pop(v, None)\n elif self._definition == \"ANY\":\n if v == \"ANY\":\n self._definition = []\n elif v in self._definition:\n self._definition.remove(v)\n\n if self._value is not None and self._value not in self._definition and self._not_any():\n raise ConanException(bad_value_msg(self._name, self._value, self.values_range))\n\n def _get_child(self, item):\n if not isinstance(self._definition, dict):\n raise undefined_field(self._name, item, None, self._value)\n if self._value is None:\n raise undefined_value(self._name)\n return self._definition[self._value]\n\n def __getattr__(self, item):\n item = str(item)\n sub_config_dict = self._get_child(item)\n return getattr(sub_config_dict, item)\n\n def __setattr__(self, item, value):\n if item[0] == \"_\" or item.startswith(\"value\"):\n return super(SettingsItem, self).__setattr__(item, value)\n\n item = str(item)\n sub_config_dict = self._get_child(item)\n return setattr(sub_config_dict, item, value)\n\n def __getitem__(self, value):\n 
value = str(value)\n try:\n return self._definition[value]\n except Exception:\n raise ConanException(bad_value_msg(self._name, value, self.values_range))\n\n @property\n def value(self):\n return self._value\n\n @value.setter\n def value(self, v):\n v = str(v)\n if self._not_any() and v not in self.values_range:\n raise ConanException(bad_value_msg(self._name, v, self.values_range))\n self._value = v\n\n @property\n def values_range(self):\n try:\n return sorted(list(self._definition.keys()))\n except Exception:\n return self._definition\n\n @property\n def values_list(self):\n if self._value is None:\n return []\n result = []\n partial_name = \".\".join(self._name.split(\".\")[1:])\n result.append((partial_name, self._value))\n if isinstance(self._definition, dict):\n sub_config_dict = self._definition[self._value]\n result.extend(sub_config_dict.values_list)\n return result\n\n def validate(self):\n if self._value is None and \"None\" not in self._definition:\n raise undefined_value(self._name)\n if isinstance(self._definition, dict):\n key = \"None\" if self._value is None else self._value\n self._definition[key].validate()\n\n\nclass Settings(object):\n def __init__(self, definition=None, name=\"settings\", parent_value=None):\n if parent_value == \"None\" and definition:\n raise ConanException(\"settings.yml: None setting can't have subsettings\")\n definition = definition or {}\n self._name = name # settings, settings.compiler\n self._parent_value = parent_value # gcc, x86\n self._data = {str(k): SettingsItem(v, \"%s.%s\" % (name, k))\n for k, v in definition.items()}\n\n def get_safe(self, name, default=None):\n try:\n tmp = self\n for prop in name.split(\".\"):\n tmp = getattr(tmp, prop, None)\n except ConanException:\n return default\n if tmp is not None and tmp.value and tmp.value != \"None\": # In case of subsettings is None\n return str(tmp)\n return default\n\n def copy(self):\n \"\"\" deepcopy, recursive\n \"\"\"\n result = Settings({}, name=self._name, parent_value=self._parent_value)\n for k, v in self._data.items():\n result._data[k] = v.copy()\n return result\n\n def copy_values(self):\n \"\"\" deepcopy, recursive\n \"\"\"\n result = Settings({}, name=self._name, parent_value=self._parent_value)\n for k, v in self._data.items():\n value = v.copy_values()\n if value is not None:\n result._data[k] = value\n return result\n\n @staticmethod\n def loads(text):\n try:\n return Settings(yaml.safe_load(text) or {})\n except (yaml.YAMLError, AttributeError) as ye:\n raise ConanException(\"Invalid settings.yml format: {}\".format(ye))\n\n def validate(self):\n for field in self.fields:\n child = self._data[field]\n child.validate()\n\n @property\n def fields(self):\n return sorted(list(self._data.keys()))\n\n def remove(self, item):\n if not isinstance(item, (list, tuple, set)):\n item = [item]\n for it in item:\n it = str(it)\n self._data.pop(it, None)\n\n def clear(self):\n self._data = {}\n\n def _check_field(self, field):\n if field not in self._data:\n raise undefined_field(self._name, field, self.fields, self._parent_value)\n\n def __getattr__(self, field):\n assert field[0] != \"_\", \"ERROR %s\" % field\n self._check_field(field)\n return self._data[field]\n\n def __delattr__(self, field):\n assert field[0] != \"_\", \"ERROR %s\" % field\n self._check_field(field)\n del self._data[field]\n\n def __setattr__(self, field, value):\n if field[0] == \"_\" or field.startswith(\"values\"):\n return super(Settings, self).__setattr__(field, value)\n\n self._check_field(field)\n 
self._data[field].value = value\n\n @property\n def values(self):\n return Values.from_list(self.values_list)\n\n @property\n def values_list(self):\n result = []\n for field in self.fields:\n config_item = self._data[field]\n result.extend(config_item.values_list)\n return result\n\n def items(self):\n return self.values_list\n\n def iteritems(self):\n return self.values_list\n\n def update_values(self, vals):\n \"\"\" receives a list of tuples (compiler.version, value)\n This is more an updated than a setter\n \"\"\"\n assert isinstance(vals, list), vals\n for (name, value) in vals:\n list_settings = name.split(\".\")\n attr = self\n for setting in list_settings[:-1]:\n attr = getattr(attr, setting)\n setattr(attr, list_settings[-1], str(value))\n\n @values.setter\n def values(self, vals):\n assert isinstance(vals, Values)\n self.update_values(vals.as_list())\n\n def constraint(self, constraint_def):\n \"\"\" allows to restrict a given Settings object with the input of another Settings object\n 1. The other Settings object MUST be exclusively a subset of the former.\n No additions allowed\n 2. If the other defines {\"compiler\": None} means to keep the full specification\n \"\"\"\n if isinstance(constraint_def, (list, tuple, set)):\n constraint_def = {str(k): None for k in constraint_def or []}\n else:\n constraint_def = {str(k): v for k, v in constraint_def.items()}\n\n fields_to_remove = []\n for field, config_item in self._data.items():\n if field not in constraint_def:\n fields_to_remove.append(field)\n continue\n\n other_field_def = constraint_def[field]\n if other_field_def is None: # Means leave it as is\n continue\n if isinstance(other_field_def, str):\n other_field_def = [other_field_def]\n\n values_to_remove = []\n for value in config_item.values_range: # value = \"Visual Studio\"\n if value not in other_field_def:\n values_to_remove.append(value)\n else: # recursion\n if (not config_item.is_final and isinstance(other_field_def, dict) and\n other_field_def[value] is not None):\n config_item[value].constraint(other_field_def[value])\n\n # Sanity check of input constraint values\n for value in other_field_def:\n if value not in config_item.values_range:\n raise ConanException(bad_value_msg(field, value, config_item.values_range))\n\n config_item.remove(values_to_remove)\n\n # Sanity check for input constraint wrong fields\n for field in constraint_def:\n if field not in self._data:\n raise undefined_field(self._name, field, self.fields)\n\n # remove settings not defined in the constraint\n self.remove(fields_to_remove)\n"}
|
{"conans/model/options.py": [{"type": "function", "name": "OptionsValues.rm_safe", "lines": [234, 238], "signature": "def rm_safe(self, attr):", "doc": ""}, {"type": "function", "name": "PackageOptions.rm_safe", "lines": [434, 438], "signature": "def rm_safe(self, field):", "doc": ""}, {"type": "function", "name": "Options.rm_safe", "lines": [599, 600], "signature": "def rm_safe(self, field):", "doc": ""}], "conans/model/settings.py": [{"type": "function", "name": "Settings.rm_safe", "lines": [216, 227], "signature": "def rm_safe(self, name):", "doc": ""}]}
| null |
["conans/test/integration/settings/remove_subsetting_test.py::test_settings_and_options_rm_safe"]
|
["conans/test/integration/settings/remove_subsetting_test.py::RemoveSubsettingTest::test_remove_options", "conans/test/integration/settings/remove_subsetting_test.py::RemoveSubsettingTest::test_remove_runtime", "conans/test/integration/settings/remove_subsetting_test.py::RemoveSubsettingTest::test_remove_setting", "conans/test/integration/settings/remove_subsetting_test.py::RemoveSubsettingTest::test_remove_subsetting", "conans/test/integration/settings/remove_subsetting_test.py::RemoveSubsettingTest::test_remove_subsetting_build"]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1662544253.0, "pr_title": "[feature] `rm_safe` for settings and options", "pr_body": "Changelog: Feature: Added method `rm_safe` to `settings` and `options`.\r\nDocs: https://github.com/conan-io/docs/pull/2764\r\n\r\nThis feature is trying to avoid this common structure:\r\n\r\n```python\r\n try:\r\n del self.settings.compiler.libcxx\r\n except Exception:\r\n pass\r\n try:\r\n del self.settings.compiler.cppstd\r\n except Exception:\r\n pass\r\n```\r\nNow, it could be:\r\n\r\n```python\r\n self.settings.rm_safe(\"compiler.libcxx\")\r\n self.settings.rm_safe(\"compiler.cppstd\")\r\n```", "pr_timeline": [], "issues": {}}
|
|
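As a usage note for the `rm_safe` record above: the before/after snippets are quoted in the PR body, and the patch adds `rm_safe` to both `Settings` (including dotted sub-settings such as `compiler.libcxx`) and package options. A minimal hypothetical recipe using the new calls could look like the following; the package name and options are invented for illustration and only mirror the call shapes exercised by the PR's test.

from conan import ConanFile


class PkgConan(ConanFile):
    name = "pkg"
    version = "0.1"
    settings = "os", "compiler", "build_type", "arch"
    options = {"shared": [True, False], "fPIC": [True, False]}
    default_options = {"shared": False, "fPIC": True}

    def config_options(self):
        if self.settings.os == "Windows":
            # No-op instead of an exception if the option is not declared.
            self.options.rm_safe("fPIC")

    def configure(self):
        # Pure C package: drop the C++ sub-settings without the try/except dance.
        # rm_safe silently ignores names that do not exist for the current
        # configuration (e.g. a compiler with no `libcxx` sub-setting).
        self.settings.rm_safe("compiler.libcxx")
        self.settings.rm_safe("compiler.cppstd")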
conan-io/conan
| 12,243
|
https://github.com/conan-io/conan/pull/12243
|
conan-io__conan-12243
|
[]
|
6f8f9c5d179ce71877a89826d5275e83e9958f7a
|
diff --git a/conan/api/subapi/install.py b/conan/api/subapi/install.py
index 18b37a2b3ab..30e79be593b 100644
--- a/conan/api/subapi/install.py
+++ b/conan/api/subapi/install.py
@@ -49,7 +49,7 @@ def install_consumer(self, deps_graph, generators=None, source_folder=None, outp
if deploy:
base_folder = conanfile.folders.base_build
mkdir(base_folder)
- _do_deploys(self.conan_api, deps_graph, deploy, base_folder)
+ do_deploys(self.conan_api, deps_graph, deploy, base_folder)
conanfile.generators = list(set(conanfile.generators).union(generators or []))
app = ConanApp(self.conan_api.cache_folder)
@@ -87,17 +87,16 @@ def _load(path):
raise ConanException(f"Cannot find deployer '{d}'")
-def _do_deploys(conan_api, graph, deploy, deploy_folder):
+def do_deploys(conan_api, graph, deploy, deploy_folder):
# Handle the deploys
cache = ClientCache(conan_api.cache_folder)
for d in deploy or []:
deployer = _find_deployer(d, cache.deployers_path)
# IMPORTANT: Use always kwargs to not break if it changes in the future
- conanfile = graph.root.conanfile
- deployer(conanfile=conanfile, output_folder=deploy_folder)
+ deployer(graph=graph, output_folder=deploy_folder)
-def full_deploy(conanfile, output_folder):
+def full_deploy(graph, output_folder):
"""
Deploys to output_folder + host/dep/0.1/Release/x86_64 subfolder
"""
@@ -106,6 +105,7 @@ def full_deploy(conanfile, output_folder):
import os
import shutil
+ conanfile = graph.root.conanfile
conanfile.output.info(f"Conan built-in full deployer to {output_folder}")
for dep in conanfile.dependencies.values():
if dep.package_folder is None:
@@ -123,7 +123,7 @@ def full_deploy(conanfile, output_folder):
dep.set_deploy_folder(new_folder)
-def direct_deploy(conanfile, output_folder):
+def direct_deploy(graph, output_folder):
"""
Deploys to output_folder a single package,
"""
@@ -132,6 +132,7 @@ def direct_deploy(conanfile, output_folder):
import os
import shutil
+ conanfile = graph.root.conanfile
conanfile.output.info(f"Conan built-in pkg deployer to {output_folder}")
# If the argument is --requires, the current conanfile is a virtual one with 1 single
# dependency, the "reference" package. If the argument is a local path, then all direct
diff --git a/conan/cli/commands/graph.py b/conan/cli/commands/graph.py
index f67f03dc8d8..7356ac8e96d 100644
--- a/conan/cli/commands/graph.py
+++ b/conan/cli/commands/graph.py
@@ -2,6 +2,7 @@
import os
from conan.api.output import ConanOutput, cli_out_write
+from conan.api.subapi.install import do_deploys
from conan.cli.command import conan_command, COMMAND_GROUPS, conan_subcommand, \
Extender
from conan.cli.commands import make_abs_path
@@ -86,6 +87,8 @@ def graph_info(conan_api, parser, subparser, *args):
help="Show only the specified fields")
subparser.add_argument("--package-filter", nargs=1, action=Extender,
help='Print information only for packages that match the patterns')
+ subparser.add_argument("--deploy", action=Extender,
+ help='Deploy using the provided deployer to the output folder')
args = parser.parse_args(*args)
# parameter validation
@@ -102,6 +105,8 @@ def graph_info(conan_api, parser, subparser, *args):
print_graph_info(deps_graph, args.filter, args.package_filter)
save_lockfile_out(args, deps_graph, lockfile, os.getcwd())
+ if args.deploy:
+ base_folder = os.getcwd()
+ do_deploys(conan_api, deps_graph, args.deploy, base_folder)
return deps_graph, os.path.join(conan_api.cache_folder, "templates")
-
|
diff --git a/conans/test/functional/command/test_install_deploy.py b/conans/test/functional/command/test_install_deploy.py
index d004cc0f5de..7800b06e807 100644
--- a/conans/test/functional/command/test_install_deploy.py
+++ b/conans/test/functional/command/test_install_deploy.py
@@ -21,7 +21,8 @@ def test_install_deploy():
import os, shutil
# USE **KWARGS to be robust against changes
- def deploy(conanfile, output_folder, **kwargs):
+ def deploy(graph, output_folder, **kwargs):
+ conanfile = graph.root.conanfile
for r, d in conanfile.dependencies.items():
new_folder = os.path.join(output_folder, d.ref.name)
shutil.copytree(d.package_folder, new_folder)
@@ -51,7 +52,8 @@ def test_copy_files_deploy():
deploy = textwrap.dedent("""
import os, shutil
- def deploy(conanfile, output_folder, **kwargs):
+ def deploy(graph, output_folder, **kwargs):
+ conanfile = graph.root.conanfile
for r, d in conanfile.dependencies.items():
bindir = os.path.join(d.package_folder, "bin")
for f in os.listdir(bindir):
@@ -73,15 +75,18 @@ def test_multi_deploy():
"""
c = TestClient()
deploy1 = textwrap.dedent("""
- def deploy(conanfile, output_folder, **kwargs):
+ def deploy(graph, output_folder, **kwargs):
+ conanfile = graph.root.conanfile
conanfile.output.info("deploy1!!")
""")
deploy2 = textwrap.dedent("""
- def deploy(conanfile, output_folder, **kwargs):
+ def deploy(graph, output_folder, **kwargs):
+ conanfile = graph.root.conanfile
conanfile.output.info("sub/deploy2!!")
""")
deploy_cache = textwrap.dedent("""
- def deploy(conanfile, output_folder, **kwargs):
+ def deploy(graph, output_folder, **kwargs):
+ conanfile = graph.root.conanfile
conanfile.output.info("deploy cache!!")
""")
save(os.path.join(c.cache_folder, "extensions", "deploy", "deploy_cache.py"), deploy_cache)
diff --git a/conans/test/integration/command/info/info_test.py b/conans/test/integration/command/info/info_test.py
index 2ced3f0f00d..3238f71d094 100644
--- a/conans/test/integration/command/info/info_test.py
+++ b/conans/test/integration/command/info/info_test.py
@@ -227,3 +227,34 @@ def build_requirements(self):
client.run("graph info . " + args)
assert "AttributeError: 'HelloConan' object has no attribute 'tested_reference_str'"\
not in client.out
+
+
+class TestDeployers:
+
+ def test_custom_deploy(self):
+ c = TestClient()
+ conanfile = GenConanfile("pkg", "0.1").with_class_attribute("license = 'MIT'")
+ c.save({"conanfile.py": conanfile})
+ c.run("create .")
+ collectlicenses = textwrap.dedent(r"""
+ from conan.tools.files import save
+
+ def deploy(graph, output_folder, **kwargs):
+ contents = []
+ conanfile = graph.root.conanfile
+ for pkg in graph.nodes:
+ d = pkg.conanfile
+ contents.append("LICENSE {}: {}!".format(d.ref, d.license))
+ contents = "\n".join(contents)
+ conanfile.output.info(contents)
+ save(conanfile, "licenses.txt", contents)
+ """)
+ c.save({"conanfile.py": GenConanfile().with_requires("pkg/0.1")
+ .with_class_attribute("license='GPL'"),
+ "collectlicenses.py": collectlicenses})
+ c.run("graph info . --deploy=collectlicenses")
+ assert "conanfile.py: LICENSE : GPL!" in c.out
+ assert "LICENSE pkg/0.1: MIT!" in c.out
+ contents = c.load("licenses.txt")
+ assert "LICENSE pkg/0.1: MIT!" in contents
+ assert "LICENSE : GPL!" in contents
| 2022-10-04T21:25:40
|
{}
|
{"conan/api/subapi/install.py": "import os\n\nfrom conan.api.subapi import api_method\nfrom conan.api.conan_app import ConanApp\nfrom conans.client.cache.cache import ClientCache\nfrom conans.client.generators import write_generators\nfrom conans.client.installer import BinaryInstaller, call_system_requirements\nfrom conans.client.loader import load_python_file\nfrom conans.errors import ConanException, ConanInvalidConfiguration\nfrom conans.util.files import rmdir, mkdir\n\n\nclass InstallAPI:\n\n def __init__(self, conan_api):\n self.conan_api = conan_api\n\n @api_method\n def install_binaries(self, deps_graph, remotes=None, update=False):\n \"\"\" Install binaries for dependency graph\n :param deps_graph: Dependency graph to intall packages for\n :param remotes:\n :param update:\n \"\"\"\n app = ConanApp(self.conan_api.cache_folder)\n app.load_remotes(remotes, update=update)\n installer = BinaryInstaller(app)\n # TODO: Extract this from the GraphManager, reuse same object, check args earlier\n installer.install(deps_graph)\n\n # TODO: Look for a better name\n def install_consumer(self, deps_graph, generators=None, source_folder=None, output_folder=None,\n deploy=False):\n \"\"\" Once a dependency graph has been installed, there are things to be done, like invoking\n generators for the root consumer.\n This is necessary for example for conanfile.txt/py, or for \"conan install <ref> -g\n \"\"\"\n root_node = deps_graph.root\n conanfile = root_node.conanfile\n\n if conanfile.info is not None and conanfile.info.invalid:\n binary, reason = conanfile.info.invalid\n msg = \"{}: Invalid ID: {}: {}\".format(conanfile, binary, reason)\n raise ConanInvalidConfiguration(msg)\n\n conanfile.folders.set_base_folders(source_folder, output_folder)\n\n # The previous .set_base_folders has already decided between the source_folder and output\n if deploy:\n base_folder = conanfile.folders.base_build\n mkdir(base_folder)\n _do_deploys(self.conan_api, deps_graph, deploy, base_folder)\n\n conanfile.generators = list(set(conanfile.generators).union(generators or []))\n app = ConanApp(self.conan_api.cache_folder)\n write_generators(conanfile, app.hook_manager)\n call_system_requirements(conanfile)\n\n\n# TODO: Look for a better location for the deployers code\ndef _find_deployer(d, cache_deploy_folder):\n \"\"\" Implements the logic of finding a deployer, with priority:\n - 1) absolute paths\n - 2) relative to cwd\n - 3) in the cache/extensions/deploy folder\n - 4) built-in\n \"\"\"\n def _load(path):\n mod, _ = load_python_file(path)\n return mod.deploy\n\n if not d.endswith(\".py\"):\n d += \".py\" # Deployers must be python files\n if os.path.isabs(d):\n return _load(d)\n cwd = os.getcwd()\n local_path = os.path.normpath(os.path.join(cwd, d))\n if os.path.isfile(local_path):\n return _load(local_path)\n cache_path = os.path.join(cache_deploy_folder, d)\n if os.path.isfile(cache_path):\n return _load(cache_path)\n builtin_deploy = {\"full_deploy.py\": full_deploy,\n \"direct_deploy.py\": direct_deploy}.get(d)\n if builtin_deploy is not None:\n return builtin_deploy\n raise ConanException(f\"Cannot find deployer '{d}'\")\n\n\ndef _do_deploys(conan_api, graph, deploy, deploy_folder):\n # Handle the deploys\n cache = ClientCache(conan_api.cache_folder)\n for d in deploy or []:\n deployer = _find_deployer(d, cache.deployers_path)\n # IMPORTANT: Use always kwargs to not break if it changes in the future\n conanfile = graph.root.conanfile\n deployer(conanfile=conanfile, output_folder=deploy_folder)\n\n\ndef 
full_deploy(conanfile, output_folder):\n \"\"\"\n Deploys to output_folder + host/dep/0.1/Release/x86_64 subfolder\n \"\"\"\n # TODO: This deployer needs to be put somewhere else\n # TODO: Document that this will NOT work with editables\n import os\n import shutil\n\n conanfile.output.info(f\"Conan built-in full deployer to {output_folder}\")\n for dep in conanfile.dependencies.values():\n if dep.package_folder is None:\n continue\n folder_name = os.path.join(dep.context, dep.ref.name, str(dep.ref.version))\n build_type = dep.info.settings.get_safe(\"build_type\")\n arch = dep.info.settings.get_safe(\"arch\")\n if build_type:\n folder_name = os.path.join(folder_name, build_type)\n if arch:\n folder_name = os.path.join(folder_name, arch)\n new_folder = os.path.join(output_folder, folder_name)\n rmdir(new_folder)\n shutil.copytree(dep.package_folder, new_folder)\n dep.set_deploy_folder(new_folder)\n\n\ndef direct_deploy(conanfile, output_folder):\n \"\"\"\n Deploys to output_folder a single package,\n \"\"\"\n # TODO: This deployer needs to be put somewhere else\n # TODO: Document that this will NOT work with editables\n import os\n import shutil\n\n conanfile.output.info(f\"Conan built-in pkg deployer to {output_folder}\")\n # If the argument is --requires, the current conanfile is a virtual one with 1 single\n # dependency, the \"reference\" package. If the argument is a local path, then all direct\n # dependencies\n for dep in conanfile.dependencies.filter({\"direct\": True}).values():\n new_folder = os.path.join(output_folder, dep.ref.name)\n rmdir(new_folder)\n shutil.copytree(dep.package_folder, new_folder)\n dep.set_deploy_folder(new_folder)\n", "conan/cli/commands/graph.py": "import json\nimport os\n\nfrom conan.api.output import ConanOutput, cli_out_write\nfrom conan.cli.command import conan_command, COMMAND_GROUPS, conan_subcommand, \\\n Extender\nfrom conan.cli.commands import make_abs_path\nfrom conan.cli.commands.install import graph_compute, common_graph_args\nfrom conan.cli.common import save_lockfile_out\nfrom conan.cli.formatters.graph import format_graph_html, format_graph_json, format_graph_dot, \\\n print_graph_info\nfrom conans.client.graph.install_graph import InstallGraph\nfrom conans.errors import ConanException\n\n\n@conan_command(group=COMMAND_GROUPS['consumer'])\ndef graph(conan_api, parser, *args):\n \"\"\"\n Computes a dependency graph, without installing or building the binaries\n \"\"\"\n\n\ndef cli_build_order(build_order):\n # TODO: Very simple cli output, probably needs to be improved\n for level in build_order:\n for item in level:\n for package_level in item['packages']:\n for package in package_level:\n cli_out_write(f\"{item['ref']}:{package['package_id']} - {package['binary']}\")\n\n\ndef json_build_order(build_order):\n cli_out_write(json.dumps(build_order, indent=4))\n\n\n@conan_subcommand(formatters={\"text\": cli_build_order, \"json\": json_build_order})\ndef graph_build_order(conan_api, parser, subparser, *args):\n \"\"\"\n Computes the build order of a dependency graph\n \"\"\"\n common_graph_args(subparser)\n args = parser.parse_args(*args)\n\n # parameter validation\n if args.requires and (args.name or args.version or args.user or args.channel):\n raise ConanException(\"Can't use --name, --version, --user or --channel arguments with \"\n \"--requires\")\n\n deps_graph, lockfile = graph_compute(args, conan_api, partial=args.lockfile_partial)\n\n out = ConanOutput()\n out.title(\"Computing the build order\")\n install_graph = 
InstallGraph(deps_graph)\n install_order_serialized = install_graph.install_build_order()\n return install_order_serialized\n\n\n@conan_subcommand(formatters={\"text\": cli_build_order, \"json\": json_build_order})\ndef graph_build_order_merge(conan_api, parser, subparser, *args):\n \"\"\"\n Merges more than 1 build-order file\n \"\"\"\n subparser.add_argument(\"--file\", nargs=\"?\", action=Extender, help=\"Files to be merged\")\n args = parser.parse_args(*args)\n\n result = InstallGraph()\n for f in args.file:\n f = make_abs_path(f)\n install_graph = InstallGraph.load(f)\n result.merge(install_graph)\n\n install_order_serialized = result.install_build_order()\n return install_order_serialized\n\n\n@conan_subcommand(formatters={\"html\": format_graph_html,\n \"json\": format_graph_json,\n \"dot\": format_graph_dot})\ndef graph_info(conan_api, parser, subparser, *args):\n \"\"\"\n Computes the dependency graph and shows information about it\n \"\"\"\n common_graph_args(subparser)\n subparser.add_argument(\"--check-updates\", default=False, action=\"store_true\")\n subparser.add_argument(\"--filter\", nargs=1, action=Extender,\n help=\"Show only the specified fields\")\n subparser.add_argument(\"--package-filter\", nargs=1, action=Extender,\n help='Print information only for packages that match the patterns')\n args = parser.parse_args(*args)\n\n # parameter validation\n if args.requires and (args.name or args.version or args.user or args.channel):\n raise ConanException(\"Can't use --name, --version, --user or --channel arguments with \"\n \"--requires\")\n\n if args.format is not None and (args.filter or args.package_filter):\n raise ConanException(\"Formatted outputs cannot be filtered\")\n\n deps_graph, lockfile = graph_compute(args, conan_api, partial=args.lockfile_partial,\n allow_error=True)\n if not args.format:\n print_graph_info(deps_graph, args.filter, args.package_filter)\n\n save_lockfile_out(args, deps_graph, lockfile, os.getcwd())\n\n return deps_graph, os.path.join(conan_api.cache_folder, \"templates\")\n\n"}
|
{"conan/api/subapi/install.py": [{"type": "function", "name": "do_deploys", "lines": [90, 96], "signature": "def do_deploys(conan_api, graph, deploy, deploy_folder):", "doc": ""}]}
| null |
["conans/test/functional/command/test_install_deploy.py::test_copy_files_deploy", "conans/test/functional/command/test_install_deploy.py::test_multi_deploy", "conans/test/integration/command/info/info_test.py::TestDeployers::test_custom_deploy"]
|
["conans/test/functional/command/test_install_deploy.py::test_builtin_deploy", "conans/test/functional/command/test_install_deploy.py::test_deploy_reference", "conans/test/functional/command/test_install_deploy.py::test_deploy_overwrite", "conans/test/functional/command/test_install_deploy.py::test_deploy_editable", "conans/test/functional/command/test_install_deploy.py::test_deploy_single_package", "conans/test/integration/command/info/info_test.py::TestBasicCliOutput::test_info_settings", "conans/test/integration/command/info/info_test.py::TestConanfilePath::test_cwd", "conans/test/integration/command/info/info_test.py::TestConanfilePath::test_wrong_path_parameter", "conans/test/integration/command/info/info_test.py::TestFilters::test_filter_fields", "conans/test/integration/command/info/info_test.py::TestJsonOutput::test_json_not_filtered", "conans/test/integration/command/info/info_test.py::TestJsonOutput::test_json_info_outputs", "conans/test/integration/command/info/info_test.py::TestAdvancedCliOutput::test_python_requires", "conans/test/integration/command/info/info_test.py::TestAdvancedCliOutput::test_build_id_info", "conans/test/integration/command/info/info_test.py::TestEditables::test_info_paths", "conans/test/integration/command/info/info_test.py::TestInfoRevisions::test_info_command_showing_revision", "conans/test/integration/command/info/info_test.py::TestInfoTestPackage::test_tested_reference_str"]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1664918507.0, "pr_title": "Add --deploy to graph-info", "pr_body": "How to do some quick ``graph info``, and extract custom information from the graph, in an extensible way?\r\n\r\n- Formatters at the moment are very built-in, and seems difficult to change it\r\n- ``--deploy`` only applies to ``install``, so it is necessary to install the binaries, which can be slow\r\n- There are no more global custom generators, only as ``python_requires``\r\n- It could be possible to fork the ``graph_info`` command, but seems a bit too much to just do something equivalent to the deployers\r\n\r\nOn the other hand, I don't love the ``deploy`` name, but I don't want to create yet another name for something that is mostly identical\r\n", "pr_timeline": [], "issues": {}}
|
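As a point of comparison with the built-in full_deploy and direct_deploy functions shown in this record, a user-side deployer can follow the same iteration-and-copy pattern. The file name deploy.py, the licenses-only copy and the (conanfile, output_folder) entry point below are assumptions modeled on those built-ins, not something taken from the patch itself:

# deploy.py -- illustrative custom deployer, mirroring the dependency iteration
# and copy logic of the built-in direct_deploy shown above
import os
import shutil


def deploy(conanfile, output_folder):
    conanfile.output.info(f"Collecting licenses into {output_folder}")
    for dep in conanfile.dependencies.values():
        if dep.package_folder is None:
            continue
        # copy only the "licenses" sub-folder of every dependency (hypothetical layout)
        licenses = os.path.join(dep.package_folder, "licenses")
        if os.path.isdir(licenses):
            shutil.copytree(licenses, os.path.join(output_folder, dep.ref.name), dirs_exist_ok=True)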
|
conan-io/conan
| 12873
|
https://github.com/conan-io/conan/pull/12873
|
conan-io__conan-12873
|
[]
|
6e1f2bb60a430f2567480ddeaba19cafb20fbcb2
|
diff --git a/conan/tools/android/__init__.py b/conan/tools/android/__init__.py
new file mode 100644
index 00000000000..c969bbebd07
--- /dev/null
+++ b/conan/tools/android/__init__.py
@@ -0,0 +1,1 @@
+from conan.tools.android.utils import android_abi
diff --git a/conan/tools/android/utils.py b/conan/tools/android/utils.py
new file mode 100644
index 00000000000..12a5e7551db
--- /dev/null
+++ b/conan/tools/android/utils.py
@@ -0,0 +1,31 @@
+from conan.errors import ConanException
+
+
+def android_abi(conanfile, context="host"):
+ """
+ Returns Android-NDK ABI
+ :param conanfile: ConanFile instance
+ :param context: either "host", "build" or "target"
+ :return: Android-NDK ABI
+ """
+ if context not in ("host", "build", "target"):
+ raise ConanException(f"context argument must be either 'host', 'build' or 'target', was '{context}'")
+
+ try:
+ settings = getattr(conanfile, f"settings_{context}")
+ except AttributeError:
+ if context == "host":
+ settings = conanfile.settings
+ else:
+ raise ConanException(f"settings_{context} not declared in recipe")
+ arch = settings.get_safe("arch")
+ # https://cmake.org/cmake/help/latest/variable/CMAKE_ANDROID_ARCH_ABI.html
+ return {
+ "armv5el": "armeabi",
+ "armv5hf": "armeabi",
+ "armv5": "armeabi",
+ "armv6": "armeabi-v6",
+ "armv7": "armeabi-v7a",
+ "armv7hf": "armeabi-v7a",
+ "armv8": "arm64-v8a",
+ }.get(arch, arch)
diff --git a/conan/tools/cmake/toolchain/blocks.py b/conan/tools/cmake/toolchain/blocks.py
index 5846f009705..2e217ec3b2a 100644
--- a/conan/tools/cmake/toolchain/blocks.py
+++ b/conan/tools/cmake/toolchain/blocks.py
@@ -6,6 +6,7 @@
from jinja2 import Template
from conan.tools._compilers import architecture_flag, libcxx_flags
+from conan.tools.android.utils import android_abi
from conan.tools.apple.apple import is_apple_os, to_apple_arch
from conan.tools.build import build_jobs
from conan.tools.build.cross_building import cross_building
@@ -286,11 +287,6 @@ def context(self):
if os_ != "Android":
return
- android_abi = {"x86": "x86",
- "x86_64": "x86_64",
- "armv7": "armeabi-v7a",
- "armv8": "arm64-v8a"}.get(str(self._conanfile.settings.arch))
-
# TODO: only 'c++_shared' y 'c++_static' supported?
# https://developer.android.com/ndk/guides/cpp-support
libcxx_str = self._conanfile.settings.get_safe("compiler.libcxx")
@@ -302,7 +298,7 @@ def context(self):
ctxt_toolchain = {
'android_platform': 'android-' + str(self._conanfile.settings.os.api_level),
- 'android_abi': android_abi,
+ 'android_abi': android_abi(self._conanfile),
'android_stl': libcxx_str,
'android_ndk_path': android_ndk_path,
}
|
diff --git a/conans/test/unittests/tools/android/__init__.py b/conans/test/unittests/tools/android/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/conans/test/unittests/tools/android/test_android_tools.py b/conans/test/unittests/tools/android/test_android_tools.py
new file mode 100644
index 00000000000..038bd0b461e
--- /dev/null
+++ b/conans/test/unittests/tools/android/test_android_tools.py
@@ -0,0 +1,79 @@
+from conans.test.utils.mocks import ConanFileMock, MockSettings
+from conans.errors import ConanException
+from conan.tools.android import android_abi
+
+from pytest import raises
+
+def test_tools_android_abi():
+ settings_linux = MockSettings({"os": "Linux", "arch": "foo"})
+
+ for (arch, expected) in [
+ ("armv5el", "armeabi"),
+ ("armv5hf", "armeabi"),
+ ("armv5", "armeabi"),
+ ("armv6", "armeabi-v6"),
+ ("armv7", "armeabi-v7a"),
+ ("armv7hf", "armeabi-v7a"),
+ ("armv8", "arm64-v8a"),
+ ("x86", "x86"),
+ ("x86_64", "x86_64"),
+ ("mips", "mips"),
+ ("mips_64", "mips_64"),
+ ]:
+ conanfile = ConanFileMock()
+ settings_android = MockSettings({"os": "Android", "arch": arch})
+
+ # 1 profile (legacy native build)
+ conanfile.settings = settings_android
+ assert android_abi(conanfile) == expected
+ assert android_abi(conanfile, context="host") == expected
+
+ with raises(ConanException):
+ assert android_abi(conanfile, context="build") == expected
+
+ with raises(ConanException):
+ assert android_abi(conanfile, context="target") == expected
+
+ # 2 profiles
+ ## native build
+ conanfile.settings = settings_android
+ conanfile.settings_host = settings_android
+ conanfile.settings_build = settings_android
+ assert android_abi(conanfile) == expected
+ assert android_abi(conanfile, context="host") == expected
+ assert android_abi(conanfile, context="build") == expected
+
+ with raises(ConanException):
+ assert android_abi(conanfile, context="target") == expected
+
+ ## cross-build from Android to Linux (quite hypothetical)
+ conanfile.settings = settings_linux
+ conanfile.settings_host = settings_linux
+ conanfile.settings_build = settings_android
+ assert android_abi(conanfile) != expected
+ assert android_abi(conanfile, context="host") != expected
+ assert android_abi(conanfile, context="build") == expected
+
+ with raises(ConanException):
+ assert android_abi(conanfile, context="target")
+
+ ## cross-build a recipe from Linux to Android:
+ ### test android_abi in recipe itself
+ conanfile.settings = settings_android
+ conanfile.settings_host = settings_android
+ conanfile.settings_build = settings_linux
+ assert android_abi(conanfile) == expected
+ assert android_abi(conanfile, context="host") == expected
+ assert android_abi(conanfile, context="build") != expected
+ with raises(ConanException):
+ android_abi(conanfile, context="target")
+
+ ### test android_abi in "compiler recipe" (ie a android-ndk recipe in tool_requires of recipe being cross-build)
+ conanfile.settings = settings_linux
+ conanfile.settings_host = settings_linux
+ conanfile.settings_build = settings_linux
+ conanfile.settings_target = settings_android
+ assert android_abi(conanfile) != expected
+ assert android_abi(conanfile, context="host") != expected
+ assert android_abi(conanfile, context="build") != expected
+ assert android_abi(conanfile, context="target") == expected
| 2023-01-09T13:36:32
|
{}
|
{"conan/tools/android/__init__.py": null, "conan/tools/android/utils.py": null, "conan/tools/cmake/toolchain/blocks.py": "import os\nimport re\nimport textwrap\nfrom collections import OrderedDict\n\nfrom jinja2 import Template\n\nfrom conan.tools._compilers import architecture_flag, libcxx_flags\nfrom conan.tools.apple.apple import is_apple_os, to_apple_arch\nfrom conan.tools.build import build_jobs\nfrom conan.tools.build.cross_building import cross_building\nfrom conan.tools.cmake.toolchain import CONAN_TOOLCHAIN_FILENAME\nfrom conan.tools.intel import IntelCC\nfrom conan.tools.microsoft.visual import msvc_version_to_toolset_version\nfrom conans.client.subsystems import deduce_subsystem, WINDOWS\nfrom conans.errors import ConanException\nfrom conans.util.files import load\n\n\nclass ToolchainBlocks:\n def __init__(self, conanfile, toolchain, items=None):\n self._blocks = OrderedDict()\n self._conanfile = conanfile\n self._toolchain = toolchain\n if items:\n for name, block in items:\n self._blocks[name] = block(conanfile, toolchain)\n\n def remove(self, name):\n del self._blocks[name]\n\n def __setitem__(self, name, block_type):\n # Create a new class inheriting Block with the elements of the provided one\n block_type = type('proxyUserBlock', (Block,), dict(block_type.__dict__))\n self._blocks[name] = block_type(self._conanfile, self._toolchain)\n\n def __getitem__(self, name):\n return self._blocks[name]\n\n def process_blocks(self):\n result = []\n for b in self._blocks.values():\n content = b.get_rendered_content()\n if content:\n result.append(content)\n return result\n\n\nclass Block(object):\n def __init__(self, conanfile, toolchain):\n self._conanfile = conanfile\n self._toolchain = toolchain\n self._context_values = None\n\n @property\n def values(self):\n if self._context_values is None:\n self._context_values = self.context()\n return self._context_values\n\n @values.setter\n def values(self, context_values):\n self._context_values = context_values\n\n def get_rendered_content(self):\n context = self.values\n if context is None:\n return\n\n def cmake_value(value):\n if isinstance(value, bool):\n return \"ON\" if value else \"OFF\"\n else:\n return '\"{}\"'.format(value)\n\n template = Template(self.template, trim_blocks=True, lstrip_blocks=True)\n template.environment.filters[\"cmake_value\"] = cmake_value\n return template.render(**context)\n\n def context(self):\n return {}\n\n @property\n def template(self):\n raise NotImplementedError()\n\n\nclass VSRuntimeBlock(Block):\n template = textwrap.dedent(\"\"\"\n # Definition of VS runtime, defined from build_type, compiler.runtime, compiler.runtime_type\n {% set genexpr = namespace(str='') %}\n {% for config, value in vs_runtimes.items() %}\n {% set genexpr.str = genexpr.str +\n '$<$<CONFIG:' + config + '>:' + value|string + '>' %}\n {% endfor %}\n cmake_policy(GET CMP0091 POLICY_CMP0091)\n if(NOT \"${POLICY_CMP0091}\" STREQUAL NEW)\n message(FATAL_ERROR \"The CMake policy CMP0091 must be NEW, but is '${POLICY_CMP0091}'\")\n endif()\n set(CMAKE_MSVC_RUNTIME_LIBRARY \"{{ genexpr.str }}\")\n \"\"\")\n\n def context(self):\n # Parsing existing toolchain file to get existing configured runtimes\n settings = self._conanfile.settings\n if settings.get_safe(\"os\") != \"Windows\":\n return\n\n compiler = settings.get_safe(\"compiler\")\n if compiler not in (\"Visual Studio\", \"msvc\", \"clang\", \"intel-cc\"):\n return\n\n runtime = settings.get_safe(\"compiler.runtime\")\n if runtime is None:\n return\n\n config_dict = {}\n if 
os.path.exists(CONAN_TOOLCHAIN_FILENAME):\n existing_include = load(CONAN_TOOLCHAIN_FILENAME)\n msvc_runtime_value = re.search(r\"set\\(CMAKE_MSVC_RUNTIME_LIBRARY \\\"([^)]*)\\\"\\)\",\n existing_include)\n if msvc_runtime_value:\n capture = msvc_runtime_value.group(1)\n matches = re.findall(r\"\\$<\\$<CONFIG:([A-Za-z]*)>:([A-Za-z]*)>\", capture)\n config_dict = dict(matches)\n\n build_type = settings.get_safe(\"build_type\") # FIXME: change for configuration\n if build_type is None:\n return None\n\n if compiler == \"Visual Studio\":\n config_dict[build_type] = {\"MT\": \"MultiThreaded\",\n \"MTd\": \"MultiThreadedDebug\",\n \"MD\": \"MultiThreadedDLL\",\n \"MDd\": \"MultiThreadedDebugDLL\"}[runtime]\n elif compiler == \"msvc\" or compiler == \"intel-cc\" or compiler == \"clang\":\n runtime_type = settings.get_safe(\"compiler.runtime_type\")\n rt = \"MultiThreadedDebug\" if runtime_type == \"Debug\" else \"MultiThreaded\"\n if runtime != \"static\":\n rt += \"DLL\"\n config_dict[build_type] = rt\n\n # If clang is being used the CMake check of compiler will try to create a simple\n # test application, and will fail because the Debug runtime is not there\n if compiler == \"clang\":\n if config_dict.get(\"Debug\") is None:\n clang_rt = \"MultiThreadedDebug\" + (\"DLL\" if runtime != \"static\" else \"\")\n config_dict[\"Debug\"] = clang_rt\n\n return {\"vs_runtimes\": config_dict}\n\n\nclass FPicBlock(Block):\n template = textwrap.dedent(\"\"\"\n {% if fpic %}\n message(STATUS \"Conan toolchain: Setting CMAKE_POSITION_INDEPENDENT_CODE={{ fpic }} (options.fPIC)\")\n set(CMAKE_POSITION_INDEPENDENT_CODE {{ fpic }} CACHE BOOL \"Position independent code\")\n {% endif %}\n \"\"\")\n\n def context(self):\n fpic = self._conanfile.options.get_safe(\"fPIC\")\n if fpic is None:\n return None\n os_ = self._conanfile.settings.get_safe(\"os\")\n if os_ and \"Windows\" in os_:\n self._conanfile.output.warn(\"Toolchain: Ignoring fPIC option defined for Windows\")\n return None\n return {\"fpic\": \"ON\" if fpic else \"OFF\"}\n\n\nclass GLibCXXBlock(Block):\n template = textwrap.dedent(\"\"\"\n {% if set_libcxx %}\n string(APPEND CONAN_CXX_FLAGS \" {{ set_libcxx }}\")\n {% endif %}\n {% if glibcxx %}\n add_compile_definitions({{ glibcxx }})\n {% endif %}\n \"\"\")\n\n def context(self):\n libcxx, stdlib11 = libcxx_flags(self._conanfile)\n return {\"set_libcxx\": libcxx, \"glibcxx\": stdlib11}\n\n\nclass SkipRPath(Block):\n template = textwrap.dedent(\"\"\"\n {% if skip_rpath %}\n set(CMAKE_SKIP_RPATH 1 CACHE BOOL \"rpaths\" FORCE)\n # Policy CMP0068\n # We want the old behavior, in CMake >= 3.9 CMAKE_SKIP_RPATH won't affect install_name in OSX\n set(CMAKE_INSTALL_NAME_DIR \"\")\n {% endif %}\n \"\"\")\n\n skip_rpath = False\n\n def context(self):\n return {\"skip_rpath\": self.skip_rpath}\n\n\nclass ArchitectureBlock(Block):\n template = textwrap.dedent(\"\"\"\n string(APPEND CONAN_CXX_FLAGS \" {{ arch_flag }}\")\n string(APPEND CONAN_C_FLAGS \" {{ arch_flag }}\")\n string(APPEND CONAN_SHARED_LINKER_FLAGS \" {{ arch_flag }}\")\n string(APPEND CONAN_EXE_LINKER_FLAGS \" {{ arch_flag }}\")\n \"\"\")\n\n def context(self):\n arch_flag = architecture_flag(self._conanfile.settings)\n if not arch_flag:\n return\n return {\"arch_flag\": arch_flag}\n\n\nclass CppStdBlock(Block):\n template = textwrap.dedent(\"\"\"\n message(STATUS \"Conan toolchain: C++ Standard {{ cppstd }} with extensions {{ cppstd_extensions }}\")\n set(CMAKE_CXX_STANDARD {{ cppstd }})\n set(CMAKE_CXX_EXTENSIONS {{ cppstd_extensions }})\n 
set(CMAKE_CXX_STANDARD_REQUIRED ON)\n \"\"\")\n\n def context(self):\n compiler_cppstd = self._conanfile.settings.get_safe(\"compiler.cppstd\")\n if compiler_cppstd is None:\n return None\n\n if compiler_cppstd.startswith(\"gnu\"):\n cppstd = compiler_cppstd[3:]\n cppstd_extensions = \"ON\"\n else:\n cppstd = compiler_cppstd\n cppstd_extensions = \"OFF\"\n return {\"cppstd\": cppstd, \"cppstd_extensions\": cppstd_extensions}\n\n\nclass SharedLibBock(Block):\n template = textwrap.dedent(\"\"\"\n message(STATUS \"Conan toolchain: Setting BUILD_SHARED_LIBS = {{ shared_libs }}\")\n set(BUILD_SHARED_LIBS {{ shared_libs }} CACHE BOOL \"Build shared libraries\")\n \"\"\")\n\n def context(self):\n try:\n shared_libs = \"ON\" if self._conanfile.options.shared else \"OFF\"\n return {\"shared_libs\": shared_libs}\n except ConanException:\n return None\n\n\nclass ParallelBlock(Block):\n template = textwrap.dedent(\"\"\"\n string(APPEND CONAN_CXX_FLAGS \" /MP{{ parallel }}\")\n string(APPEND CONAN_C_FLAGS \" /MP{{ parallel }}\")\n \"\"\")\n\n def context(self):\n # TODO: Check this conf\n\n compiler = self._conanfile.settings.get_safe(\"compiler\")\n if compiler not in (\"Visual Studio\", \"msvc\") or \"Visual\" not in self._toolchain.generator:\n return\n\n jobs = build_jobs(self._conanfile)\n if jobs:\n return {\"parallel\": jobs}\n\n\nclass AndroidSystemBlock(Block):\n\n template = textwrap.dedent(\"\"\"\n # New toolchain things\n set(ANDROID_PLATFORM {{ android_platform }})\n {% if android_stl %}\n set(ANDROID_STL {{ android_stl }})\n {% endif %}\n set(ANDROID_ABI {{ android_abi }})\n include({{ android_ndk_path }}/build/cmake/android.toolchain.cmake)\n \"\"\")\n\n def context(self):\n os_ = self._conanfile.settings.get_safe(\"os\")\n if os_ != \"Android\":\n return\n\n android_abi = {\"x86\": \"x86\",\n \"x86_64\": \"x86_64\",\n \"armv7\": \"armeabi-v7a\",\n \"armv8\": \"arm64-v8a\"}.get(str(self._conanfile.settings.arch))\n\n # TODO: only 'c++_shared' y 'c++_static' supported?\n # https://developer.android.com/ndk/guides/cpp-support\n libcxx_str = self._conanfile.settings.get_safe(\"compiler.libcxx\")\n\n android_ndk_path = self._conanfile.conf.get(\"tools.android:ndk_path\")\n if not android_ndk_path:\n raise ConanException('CMakeToolchain needs tools.android:ndk_path configuration defined')\n android_ndk_path = android_ndk_path.replace(\"\\\\\", \"/\")\n\n ctxt_toolchain = {\n 'android_platform': 'android-' + str(self._conanfile.settings.os.api_level),\n 'android_abi': android_abi,\n 'android_stl': libcxx_str,\n 'android_ndk_path': android_ndk_path,\n }\n return ctxt_toolchain\n\n\nclass AppleSystemBlock(Block):\n template = textwrap.dedent(\"\"\"\n # Set the architectures for which to build.\n set(CMAKE_OSX_ARCHITECTURES {{ cmake_osx_architectures }} CACHE STRING \"\" FORCE)\n # Setting CMAKE_OSX_SYSROOT SDK, when using Xcode generator the name is enough\n # but full path is necessary for others\n set(CMAKE_OSX_SYSROOT {{ cmake_osx_sysroot }} CACHE STRING \"\" FORCE)\n {% if cmake_osx_deployment_target is defined %}\n # Setting CMAKE_OSX_DEPLOYMENT_TARGET if \"os.version\" is defined by the used conan profile\n set(CMAKE_OSX_DEPLOYMENT_TARGET \"{{ cmake_osx_deployment_target }}\" CACHE STRING \"\")\n {% endif %}\n set(BITCODE \"\")\n set(FOBJC_ARC \"\")\n set(VISIBILITY \"\")\n {% if enable_bitcode %}\n # Bitcode ON\n set(CMAKE_XCODE_ATTRIBUTE_ENABLE_BITCODE \"YES\")\n set(CMAKE_XCODE_ATTRIBUTE_BITCODE_GENERATION_MODE \"bitcode\")\n {% if enable_bitcode_marker %}\n set(BITCODE 
\"-fembed-bitcode-marker\")\n {% else %}\n set(BITCODE \"-fembed-bitcode\")\n {% endif %}\n {% elif enable_bitcode is not none %}\n # Bitcode OFF\n set(CMAKE_XCODE_ATTRIBUTE_ENABLE_BITCODE \"NO\")\n {% endif %}\n {% if enable_arc %}\n # ARC ON\n set(FOBJC_ARC \"-fobjc-arc\")\n set(CMAKE_XCODE_ATTRIBUTE_CLANG_ENABLE_OBJC_ARC \"YES\")\n {% elif enable_arc is not none %}\n # ARC OFF\n set(FOBJC_ARC \"-fno-objc-arc\")\n set(CMAKE_XCODE_ATTRIBUTE_CLANG_ENABLE_OBJC_ARC \"NO\")\n {% endif %}\n {% if enable_visibility %}\n # Visibility ON\n set(CMAKE_XCODE_ATTRIBUTE_GCC_SYMBOLS_PRIVATE_EXTERN \"NO\")\n set(VISIBILITY \"-fvisibility=default\")\n {% elif enable_visibility is not none %}\n # Visibility OFF\n set(VISIBILITY \"-fvisibility=hidden -fvisibility-inlines-hidden\")\n set(CMAKE_XCODE_ATTRIBUTE_GCC_SYMBOLS_PRIVATE_EXTERN \"YES\")\n {% endif %}\n #Check if Xcode generator is used, since that will handle these flags automagically\n if(CMAKE_GENERATOR MATCHES \"Xcode\")\n message(DEBUG \"Not setting any manual command-line buildflags, since Xcode is selected as generator.\")\n else()\n string(APPEND CONAN_C_FLAGS \" ${BITCODE} ${FOBJC_ARC}\")\n string(APPEND CONAN_CXX_FLAGS \" ${BITCODE} ${VISIBILITY} ${FOBJC_ARC}\")\n endif()\n \"\"\")\n\n def _apple_sdk_name(self):\n \"\"\"\n Returns the value for the SDKROOT with this preference:\n - 1. The full path set in the conf with tools.apple:sdk_path\n - 2. osd.sdk + os.sdk_version\n Otherwise None\n Every user should specify it because there could be several ones depending\n on the OS architecture.\n\n Note: In case of MacOS it'll be the same for all the architectures.\n \"\"\"\n os_ = self._conanfile.settings.get_safe('os')\n os_sdk = self._conanfile.settings.get_safe('os.sdk')\n os_sdk_version = self._conanfile.settings.get_safe('os.sdk_version') or \"\"\n sdk = self._conanfile.conf.get(\"tools.apple:sdk_path\")\n\n if sdk:\n return sdk\n elif os_ == \"Macos\": # if the host is Macos it can only be \"macosx\"\n return \"{}{}\".format(\"macosx\", os_sdk_version)\n elif os_sdk:\n return \"{}{}\".format(os_sdk, os_sdk_version)\n else:\n raise ConanException(\"Please, specify a suitable value for os.sdk.\")\n\n def context(self):\n os_ = self._conanfile.settings.get_safe(\"os\")\n if not is_apple_os(self._conanfile):\n return None\n\n host_architecture = to_apple_arch(self._conanfile)\n host_os_version = self._conanfile.settings.get_safe(\"os.version\")\n host_sdk_name = self._apple_sdk_name()\n is_debug = self._conanfile.settings.get_safe('build_type') == \"Debug\"\n\n # Reading some configurations to enable or disable some Xcode toolchain flags and variables\n # Issue related: https://github.com/conan-io/conan/issues/9448\n # Based on https://github.com/leetal/ios-cmake repository\n enable_bitcode = self._conanfile.conf.get(\"tools.apple:enable_bitcode\", check_type=bool)\n enable_arc = self._conanfile.conf.get(\"tools.apple:enable_arc\", check_type=bool)\n enable_visibility = self._conanfile.conf.get(\"tools.apple:enable_visibility\", check_type=bool)\n\n ctxt_toolchain = {\n \"enable_bitcode\": enable_bitcode,\n \"enable_bitcode_marker\": all([enable_bitcode, is_debug]),\n \"enable_arc\": enable_arc,\n \"enable_visibility\": enable_visibility\n }\n if host_sdk_name:\n ctxt_toolchain[\"cmake_osx_sysroot\"] = host_sdk_name\n # this is used to initialize the OSX_ARCHITECTURES property on each target as it is created\n if host_architecture:\n ctxt_toolchain[\"cmake_osx_architectures\"] = host_architecture\n\n if host_os_version:\n # 
https://cmake.org/cmake/help/latest/variable/CMAKE_OSX_DEPLOYMENT_TARGET.html\n # Despite the OSX part in the variable name(s) they apply also to other SDKs than\n # macOS like iOS, tvOS, or watchOS.\n ctxt_toolchain[\"cmake_osx_deployment_target\"] = host_os_version\n\n return ctxt_toolchain\n\n\nclass FindFiles(Block):\n template = textwrap.dedent(\"\"\"\n {% if find_package_prefer_config %}\n set(CMAKE_FIND_PACKAGE_PREFER_CONFIG {{ find_package_prefer_config }})\n {% endif %}\n\n # Definition of CMAKE_MODULE_PATH\n {% if build_build_paths %}\n # Explicitly defined \"buildirs\" of \"build\" context dependencies\n list(PREPEND CMAKE_MODULE_PATH {{ build_build_paths }})\n {% endif %}\n {% if host_build_paths_noroot %}\n # Explicitly defined \"builddirs\" of \"host\" dependencies\n list(PREPEND CMAKE_MODULE_PATH {{ host_build_paths_noroot }})\n {% endif %}\n {% if host_build_paths_root %}\n # The root (which is the default builddirs) path of dependencies in the host context\n list(PREPEND CMAKE_MODULE_PATH {{ host_build_paths_root }})\n {% endif %}\n {% if generators_folder %}\n # the generators folder (where conan generates files, like this toolchain)\n list(PREPEND CMAKE_MODULE_PATH {{ generators_folder }})\n {% endif %}\n\n # Definition of CMAKE_PREFIX_PATH, CMAKE_XXXXX_PATH\n {% if host_build_paths_noroot %}\n # The explicitly defined \"builddirs\" of \"host\" context dependencies must be in PREFIX_PATH\n list(PREPEND CMAKE_PREFIX_PATH {{ host_build_paths_noroot }})\n {% endif %}\n {% if generators_folder %}\n # The Conan local \"generators\" folder, where this toolchain is saved.\n list(PREPEND CMAKE_PREFIX_PATH {{ generators_folder }} )\n {% endif %}\n {% if cmake_program_path %}\n list(PREPEND CMAKE_PROGRAM_PATH {{ cmake_program_path }})\n {% endif %}\n {% if cmake_library_path %}\n list(PREPEND CMAKE_LIBRARY_PATH {{ cmake_library_path }})\n {% endif %}\n {% if is_apple and cmake_framework_path %}\n list(PREPEND CMAKE_FRAMEWORK_PATH {{ cmake_framework_path }})\n {% endif %}\n {% if cmake_include_path %}\n list(PREPEND CMAKE_INCLUDE_PATH {{ cmake_include_path }})\n {% endif %}\n\n {% if cross_building %}\n if(NOT DEFINED CMAKE_FIND_ROOT_PATH_MODE_PACKAGE OR CMAKE_FIND_ROOT_PATH_MODE_PACKAGE STREQUAL \"ONLY\")\n set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE \"BOTH\")\n endif()\n if(NOT DEFINED CMAKE_FIND_ROOT_PATH_MODE_PROGRAM OR CMAKE_FIND_ROOT_PATH_MODE_PROGRAM STREQUAL \"ONLY\")\n set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM \"BOTH\")\n endif()\n if(NOT DEFINED CMAKE_FIND_ROOT_PATH_MODE_LIBRARY OR CMAKE_FIND_ROOT_PATH_MODE_LIBRARY STREQUAL \"ONLY\")\n set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY \"BOTH\")\n endif()\n {% if is_apple %}\n if(NOT DEFINED CMAKE_FIND_ROOT_PATH_MODE_FRAMEWORK OR CMAKE_FIND_ROOT_PATH_MODE_FRAMEWORK STREQUAL \"ONLY\")\n set(CMAKE_FIND_ROOT_PATH_MODE_FRAMEWORK \"BOTH\")\n endif()\n {% endif %}\n if(NOT DEFINED CMAKE_FIND_ROOT_PATH_MODE_INCLUDE OR CMAKE_FIND_ROOT_PATH_MODE_INCLUDE STREQUAL \"ONLY\")\n set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE \"BOTH\")\n endif()\n {% endif %}\n \"\"\")\n\n @staticmethod\n def _join_paths(paths):\n return \" \".join(['\"{}\"'.format(p.replace('\\\\', '/')\n .replace('$', '\\\\$')\n .replace('\"', '\\\\\"')) for p in paths])\n\n def context(self):\n # To find the generated cmake_find_package finders\n # TODO: Change this for parameterized output location of CMakeDeps\n find_package_prefer_config = \"ON\" # assume ON by default if not specified in conf\n prefer_config = self._conanfile.conf.get(\"tools.cmake.cmaketoolchain:find_package_prefer_config\",\n 
check_type=bool)\n if prefer_config is False:\n find_package_prefer_config = \"OFF\"\n\n os_ = self._conanfile.settings.get_safe(\"os\")\n is_apple_ = is_apple_os(self._conanfile)\n\n # Read information from host context, also including test_requires, which are in host\n # TODO: Add here in 2.0 the \"skip\": False trait\n host_req = self._conanfile.dependencies.filter({\"build\": False}).values()\n host_build_paths_root = []\n host_build_paths_noroot = []\n host_lib_paths = []\n host_framework_paths = []\n host_include_paths = []\n for req in host_req:\n cppinfo = req.cpp_info.aggregated_components()\n # If the builddir is the package_folder, then it is the default \"root\" one\n # The package_folder can be None if editable and layout(), in that case only the\n # host_build_paths_noroot will be populated\n if req.package_folder:\n nf = os.path.normpath(req.package_folder)\n host_build_paths_root.extend(p for p in cppinfo.builddirs if os.path.normpath(p) == nf)\n host_build_paths_noroot.extend(p for p in cppinfo.builddirs if os.path.normpath(p) != nf)\n else:\n host_build_paths_root = []\n host_build_paths_noroot.extend(p for p in cppinfo.builddirs)\n host_lib_paths.extend(cppinfo.libdirs)\n if is_apple_:\n host_framework_paths.extend(cppinfo.frameworkdirs)\n host_include_paths.extend(cppinfo.includedirs)\n\n # Read information from build context\n build_req = self._conanfile.dependencies.build.values()\n build_build_paths = []\n build_bin_paths = []\n for req in build_req:\n cppinfo = req.cpp_info.aggregated_components()\n build_build_paths.extend(cppinfo.builddirs)\n build_bin_paths.extend(cppinfo.bindirs)\n\n return {\n \"find_package_prefer_config\": find_package_prefer_config,\n \"generators_folder\": \"${CMAKE_CURRENT_LIST_DIR}\",\n \"host_build_paths_root\": self._join_paths(host_build_paths_root),\n \"host_build_paths_noroot\": self._join_paths(host_build_paths_noroot),\n \"build_build_paths\": self._join_paths(build_build_paths),\n \"cmake_program_path\": self._join_paths(build_bin_paths),\n \"cmake_library_path\": self._join_paths(host_lib_paths),\n \"cmake_framework_path\": self._join_paths(host_framework_paths),\n \"cmake_include_path\": self._join_paths(host_include_paths),\n \"is_apple\": is_apple_,\n \"cross_building\": cross_building(self._conanfile),\n }\n\n\nclass PkgConfigBlock(Block):\n template = textwrap.dedent(\"\"\"\n {% if pkg_config %}\n set(PKG_CONFIG_EXECUTABLE {{ pkg_config }} CACHE FILEPATH \"pkg-config executable\")\n {% endif %}\n {% if pkg_config_path %}\n if (DEFINED ENV{PKG_CONFIG_PATH})\n set(ENV{PKG_CONFIG_PATH} \"{{ pkg_config_path }}$ENV{PKG_CONFIG_PATH}\")\n else()\n set(ENV{PKG_CONFIG_PATH} \"{{ pkg_config_path }}\")\n endif()\n {% endif %}\n \"\"\")\n\n def context(self):\n pkg_config = self._conanfile.conf.get(\"tools.gnu:pkg_config\", check_type=str)\n if pkg_config:\n pkg_config = pkg_config.replace(\"\\\\\", \"/\")\n pkg_config_path = self._conanfile.generators_folder\n if pkg_config_path:\n # hardcoding scope as \"build\"\n subsystem = deduce_subsystem(self._conanfile, \"build\")\n pathsep = \":\" if subsystem != WINDOWS else \";\"\n pkg_config_path = pkg_config_path.replace(\"\\\\\", \"/\") + pathsep\n return {\"pkg_config\": pkg_config,\n \"pkg_config_path\": pkg_config_path}\n\n\nclass UserToolchain(Block):\n template = textwrap.dedent(\"\"\"\n {% for user_toolchain in paths %}\n include(\"{{user_toolchain}}\")\n {% endfor %}\n \"\"\")\n\n def context(self):\n # This is global [conf] injection of extra toolchain files\n user_toolchain = 
self._conanfile.conf.get(\"tools.cmake.cmaketoolchain:user_toolchain\",\n default=[], check_type=list)\n return {\"paths\": [ut.replace(\"\\\\\", \"/\") for ut in user_toolchain]}\n\n\nclass ExtraFlagsBlock(Block):\n \"\"\"This block is adding flags directly from user [conf] section\"\"\"\n\n template = textwrap.dedent(\"\"\"\n # Extra c, cxx, linkflags and defines\n {% if cxxflags %}\n string(APPEND CONAN_CXX_FLAGS \"{% for cxxflag in cxxflags %} {{ cxxflag }}{% endfor %}\")\n {% endif %}\n {% if cflags %}\n string(APPEND CONAN_C_FLAGS \"{% for cflag in cflags %} {{ cflag }}{% endfor %}\")\n {% endif %}\n {% if sharedlinkflags %}\n string(APPEND CONAN_SHARED_LINKER_FLAGS \"{% for sharedlinkflag in sharedlinkflags %} {{ sharedlinkflag }}{% endfor %}\")\n {% endif %}\n {% if exelinkflags %}\n string(APPEND CONAN_EXE_LINKER_FLAGS \"{% for exelinkflag in exelinkflags %} {{ exelinkflag }}{% endfor %}\")\n {% endif %}\n {% if defines %}\n add_compile_definitions({% for define in defines %} \"{{ define }}\"{% endfor %})\n {% endif %}\n \"\"\")\n\n def context(self):\n # Now, it's time to get all the flags defined by the user\n cxxflags = self._conanfile.conf.get(\"tools.build:cxxflags\", default=[], check_type=list)\n cflags = self._conanfile.conf.get(\"tools.build:cflags\", default=[], check_type=list)\n sharedlinkflags = self._conanfile.conf.get(\"tools.build:sharedlinkflags\", default=[], check_type=list)\n exelinkflags = self._conanfile.conf.get(\"tools.build:exelinkflags\", default=[], check_type=list)\n defines = self._conanfile.conf.get(\"tools.build:defines\", default=[], check_type=list)\n return {\n \"cxxflags\": cxxflags,\n \"cflags\": cflags,\n \"sharedlinkflags\": sharedlinkflags,\n \"exelinkflags\": exelinkflags,\n \"defines\": [define.replace('\"', '\\\\\"') for define in defines]\n }\n\n\nclass CMakeFlagsInitBlock(Block):\n template = textwrap.dedent(\"\"\"\n if(DEFINED CONAN_CXX_FLAGS)\n string(APPEND CMAKE_CXX_FLAGS_INIT \" ${CONAN_CXX_FLAGS}\")\n endif()\n if(DEFINED CONAN_C_FLAGS)\n string(APPEND CMAKE_C_FLAGS_INIT \" ${CONAN_C_FLAGS}\")\n endif()\n if(DEFINED CONAN_SHARED_LINKER_FLAGS)\n string(APPEND CMAKE_SHARED_LINKER_FLAGS_INIT \" ${CONAN_SHARED_LINKER_FLAGS}\")\n endif()\n if(DEFINED CONAN_EXE_LINKER_FLAGS)\n string(APPEND CMAKE_EXE_LINKER_FLAGS_INIT \" ${CONAN_EXE_LINKER_FLAGS}\")\n endif()\n \"\"\")\n\n\nclass TryCompileBlock(Block):\n template = textwrap.dedent(\"\"\"\n get_property( _CMAKE_IN_TRY_COMPILE GLOBAL PROPERTY IN_TRY_COMPILE )\n if(_CMAKE_IN_TRY_COMPILE)\n message(STATUS \"Running toolchain IN_TRY_COMPILE\")\n return()\n endif()\n \"\"\")\n\n\nclass CompilersBlock(Block):\n template = textwrap.dedent(r\"\"\"\n {% for lang, compiler_path in compilers.items() %}\n set(CMAKE_{{ lang }}_COMPILER \"{{ compiler_path|replace('\\\\', '/') }}\")\n {% endfor %}\n \"\"\")\n\n def context(self):\n # Reading configuration from \"tools.build:compiler_executables\" -> {\"C\": \"/usr/bin/gcc\"}\n compilers_by_conf = self._conanfile.conf.get(\"tools.build:compiler_executables\", default={},\n check_type=dict)\n # Map the possible languages\n compilers = {}\n # Allowed <LANG> variables (and <LANG>_LAUNCHER)\n compilers_mapping = {\"c\": \"C\", \"cuda\": \"CUDA\", \"cpp\": \"CXX\", \"objc\": \"OBJC\",\n \"objcpp\": \"OBJCXX\", \"rc\": \"RC\", 'fortran': \"Fortran\", 'asm': \"ASM\",\n \"hip\": \"HIP\", \"ispc\": \"ISPC\"}\n for comp, lang in compilers_mapping.items():\n # To set CMAKE_<LANG>_COMPILER\n if comp in compilers_by_conf:\n compilers[lang] = 
compilers_by_conf[comp]\n return {\"compilers\": compilers}\n\n\nclass GenericSystemBlock(Block):\n template = textwrap.dedent(\"\"\"\n {% if cmake_sysroot %}\n set(CMAKE_SYSROOT {{ cmake_sysroot }})\n {% endif %}\n\n {% if cmake_system_name %}\n # Cross building\n set(CMAKE_SYSTEM_NAME {{ cmake_system_name }})\n {% endif %}\n {% if cmake_system_version %}\n set(CMAKE_SYSTEM_VERSION {{ cmake_system_version }})\n {% endif %}\n {% if cmake_system_processor %}\n set(CMAKE_SYSTEM_PROCESSOR {{ cmake_system_processor }})\n {% endif %}\n\n {% if generator_platform %}\n set(CMAKE_GENERATOR_PLATFORM \"{{ generator_platform }}\" CACHE STRING \"\" FORCE)\n {% endif %}\n {% if toolset %}\n set(CMAKE_GENERATOR_TOOLSET \"{{ toolset }}\" CACHE STRING \"\" FORCE)\n {% endif %}\n \"\"\")\n\n def _get_toolset(self, generator):\n if generator is None or (\"Visual\" not in generator and \"Xcode\" not in generator):\n return None\n settings = self._conanfile.settings\n compiler = settings.get_safe(\"compiler\")\n compiler_base = settings.get_safe(\"compiler.base\")\n toolset = None\n if compiler == \"Visual Studio\":\n toolset = settings.get_safe(\"compiler.toolset\")\n elif compiler == \"intel\" and compiler_base == \"Visual Studio\" and \"Visual\" in generator:\n # TODO: This intel toolset needs to be validated too\n compiler_version = settings.get_safe(\"compiler.version\")\n if compiler_version:\n compiler_version = compiler_version if \".\" in compiler_version else \\\n \"%s.0\" % compiler_version\n toolset = \"Intel C++ Compiler \" + compiler_version\n elif compiler == \"intel-cc\":\n return IntelCC(self._conanfile).ms_toolset\n elif compiler == \"msvc\":\n toolset = settings.get_safe(\"compiler.toolset\")\n if toolset is None:\n compiler_version = str(settings.compiler.version)\n compiler_update = str(settings.compiler.update)\n if compiler_update != \"None\": # It is full one(19.28), not generic 19.2X\n # The equivalent of compiler 19.26 is toolset 14.26\n toolset = \"version=14.{}{}\".format(compiler_version[-1], compiler_update)\n else:\n toolset = msvc_version_to_toolset_version(compiler_version)\n elif compiler == \"clang\":\n if generator and \"Visual\" in generator:\n if \"Visual Studio 16\" in generator or \"Visual Studio 17\" in generator:\n toolset = \"ClangCL\"\n else:\n raise ConanException(\"CMakeToolchain with compiler=clang and a CMake \"\n \"'Visual Studio' generator requires VS16 or VS17\")\n toolset_arch = self._conanfile.conf.get(\"tools.cmake.cmaketoolchain:toolset_arch\")\n if toolset_arch is not None:\n toolset_arch = \"host={}\".format(toolset_arch)\n toolset = toolset_arch if toolset is None else \"{},{}\".format(toolset, toolset_arch)\n return toolset\n\n def _get_generator_platform(self, generator):\n settings = self._conanfile.settings\n # Returns the generator platform to be used by CMake\n compiler = settings.get_safe(\"compiler\")\n compiler_base = settings.get_safe(\"compiler.base\")\n arch = settings.get_safe(\"arch\")\n\n if settings.get_safe(\"os\") == \"WindowsCE\":\n return settings.get_safe(\"os.platform\")\n\n if (compiler in (\"Visual Studio\", \"msvc\", \"clang\") or compiler_base == \"Visual Studio\") and \\\n generator and \"Visual\" in generator:\n return {\"x86\": \"Win32\",\n \"x86_64\": \"x64\",\n \"armv7\": \"ARM\",\n \"armv8\": \"ARM64\"}.get(arch)\n return None\n\n def _get_generic_system_name(self):\n os_host = self._conanfile.settings.get_safe(\"os\")\n os_build = self._conanfile.settings_build.get_safe(\"os\")\n arch_host = 
self._conanfile.settings.get_safe(\"arch\")\n arch_build = self._conanfile.settings_build.get_safe(\"arch\")\n cmake_system_name_map = {\"Neutrino\": \"QNX\",\n \"\": \"Generic\",\n None: \"Generic\"}\n if os_host != os_build:\n return cmake_system_name_map.get(os_host, os_host)\n elif arch_host is not None and arch_host != arch_build:\n if not ((arch_build == \"x86_64\") and (arch_host == \"x86\") or\n (arch_build == \"sparcv9\") and (arch_host == \"sparc\") or\n (arch_build == \"ppc64\") and (arch_host == \"ppc32\")):\n return cmake_system_name_map.get(os_host, os_host)\n\n def _is_apple_cross_building(self):\n os_host = self._conanfile.settings.get_safe(\"os\")\n arch_host = self._conanfile.settings.get_safe(\"arch\")\n arch_build = self._conanfile.settings_build.get_safe(\"arch\")\n return os_host in ('iOS', 'watchOS', 'tvOS') or (\n os_host == 'Macos' and arch_host != arch_build)\n\n def _get_cross_build(self):\n user_toolchain = self._conanfile.conf.get(\"tools.cmake.cmaketoolchain:user_toolchain\")\n if user_toolchain is not None:\n return None, None, None # Will be provided by user_toolchain\n\n system_name = self._conanfile.conf.get(\"tools.cmake.cmaketoolchain:system_name\")\n system_version = self._conanfile.conf.get(\"tools.cmake.cmaketoolchain:system_version\")\n system_processor = self._conanfile.conf.get(\"tools.cmake.cmaketoolchain:system_processor\")\n\n if hasattr(self._conanfile, \"settings_build\"):\n os_host = self._conanfile.settings.get_safe(\"os\")\n arch_host = self._conanfile.settings.get_safe(\"arch\")\n if system_name is None: # Try to deduce\n _system_version = None\n _system_processor = None\n if self._is_apple_cross_building():\n # cross-build in Macos also for M1\n system_name = {'Macos': 'Darwin'}.get(os_host, os_host)\n # CMAKE_SYSTEM_VERSION for Apple sets the sdk version, not the os version\n _system_version = self._conanfile.settings.get_safe(\"os.sdk_version\")\n _system_processor = to_apple_arch(self._conanfile)\n elif os_host != 'Android':\n system_name = self._get_generic_system_name()\n _system_version = self._conanfile.settings.get_safe(\"os.version\")\n _system_processor = arch_host\n\n if system_name is not None and system_version is None:\n system_version = _system_version\n if system_name is not None and system_processor is None:\n system_processor = _system_processor\n\n return system_name, system_version, system_processor\n\n def context(self):\n generator = self._toolchain.generator\n generator_platform = self._get_generator_platform(generator)\n toolset = self._get_toolset(generator)\n system_name, system_version, system_processor = self._get_cross_build()\n\n # This is handled by the tools.apple:sdk_path and CMAKE_OSX_SYSROOT in Apple\n cmake_sysroot = self._conanfile.conf.get(\"tools.build:sysroot\")\n cmake_sysroot = cmake_sysroot.replace(\"\\\\\", \"/\") if cmake_sysroot is not None else None\n\n return {\"toolset\": toolset,\n \"generator_platform\": generator_platform,\n \"cmake_system_name\": system_name,\n \"cmake_system_version\": system_version,\n \"cmake_system_processor\": system_processor,\n \"cmake_sysroot\": cmake_sysroot}\n\n\nclass OutputDirsBlock(Block):\n\n @property\n def template(self):\n if not self._conanfile.package_folder:\n return \"\"\n\n return textwrap.dedent(\"\"\"\n set(CMAKE_INSTALL_PREFIX \"{{package_folder}}\")\n {% if default_bin %}\n set(CMAKE_INSTALL_BINDIR \"{{default_bin}}\")\n set(CMAKE_INSTALL_SBINDIR \"{{default_bin}}\")\n set(CMAKE_INSTALL_LIBEXECDIR \"{{default_bin}}\")\n {% endif %}\n {% if 
default_lib %}\n set(CMAKE_INSTALL_LIBDIR \"{{default_lib}}\")\n {% endif %}\n {% if default_include %}\n set(CMAKE_INSTALL_INCLUDEDIR \"{{default_include}}\")\n set(CMAKE_INSTALL_OLDINCLUDEDIR \"{{default_include}}\")\n {% endif %}\n {% if default_res %}\n set(CMAKE_INSTALL_DATAROOTDIR \"{{default_res}}\")\n {% endif %}\n \"\"\")\n\n def _get_cpp_info_value(self, name):\n # Why not taking cpp.build? because this variables are used by the \"cmake install\"\n # that correspond to the package folder (even if the root is the build directory)\n elements = getattr(self._conanfile.cpp.package, name)\n return elements[0] if elements else None\n\n def context(self):\n if not self._conanfile.package_folder:\n return {}\n return {\"package_folder\": self._conanfile.package_folder.replace(\"\\\\\", \"/\"),\n \"default_bin\": self._get_cpp_info_value(\"bindirs\"),\n \"default_lib\": self._get_cpp_info_value(\"libdirs\"),\n \"default_include\": self._get_cpp_info_value(\"includedirs\"),\n \"default_res\": self._get_cpp_info_value(\"resdirs\")}\n"}
|
{"conan/tools/android/utils.py": [{"type": "function", "name": "android_abi", "lines": [4, 31], "signature": "def android_abi(conanfile, context=\"host\"):", "doc": "Returns Android-NDK ABI\n:param conanfile: ConanFile instance\n:param context: either \"host\", \"build\" or \"target\"\n:return: Android-NDK ABI"}]}
| null |
["conans/test/unittests/tools/android/test_android_tools.py::test_tools_android_abi"]
|
[]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1673271288.0, "pr_title": "add `conan.tools.android.android_abi()`", "pr_body": "Changelog: Feature: Add `conan.tools.android.android_abi()` function to return the Android standard ABI name based on Conan.\r\nDocs: https://github.com/conan-io/docs/pull/2975\r\n\r\ncloses https://github.com/conan-io/conan/issues/12814\r\n\r\n- [x] Refer to the issue that supports this Pull Request.\r\n- [x] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.\r\n- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).\r\n- [x] I've followed the PEP8 style guides for Python code.\r\n- [ ] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one. \r\n\r\n<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>\r\n", "pr_timeline": [{"time": 1673361017.0, "comment": "I think this would be cool to have added in the docs :)"}, {"time": 1673871147.0, "comment": "Sure, but when I know it has a chance to be merged. I don't want to work on documentation if this feature is rejected."}, {"time": 1675084390.0, "comment": "We are releasing 1.58 asap, this would be missing ``settings`` raising when not defined, moving to 1.59"}, {"time": 1676415591.0, "comment": "Thanks for the final polishing. It will be useful in android-ndk recipe (but not anymore in opencv recipe, I have a PR removing its usage)."}], "issues": {}}
|
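As a quick illustration of the android_abi() helper added in the patch above, a recipe could query the Android ABI of the host context directly; the recipe skeleton below is a hypothetical sketch, and only the import path and the android_abi(self) call come from the patch:

from conan import ConanFile
from conan.tools.android import android_abi


class PkgConan(ConanFile):
    name = "pkg"  # hypothetical recipe, not part of the PR
    settings = "os", "arch", "compiler", "build_type"

    def generate(self):
        if self.settings.os == "Android":
            # e.g. armv8 -> "arm64-v8a", armv7 -> "armeabi-v7a" (mapping shown in the patch)
            self.output.info(f"Android ABI for the host context: {android_abi(self)}")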
|
conan-io/conan
| 12886
|
https://github.com/conan-io/conan/pull/12886
|
conan-io__conan-12886
|
[]
|
0f73ed743ffd5d9cfeef54bfbc1547d08ce6436d
|
diff --git a/conan/tools/microsoft/__init__.py b/conan/tools/microsoft/__init__.py
index 79598b28e5f..9c0bb920d5f 100644
--- a/conan/tools/microsoft/__init__.py
+++ b/conan/tools/microsoft/__init__.py
@@ -1,7 +1,7 @@
from conan.tools.microsoft.layout import vs_layout
from conan.tools.microsoft.msbuild import MSBuild
from conan.tools.microsoft.msbuilddeps import MSBuildDeps
-from conan.tools.microsoft.subsystems import unix_path
+from conan.tools.microsoft.subsystems import unix_path, unix_path_package_info_legacy
from conan.tools.microsoft.toolchain import MSBuildToolchain
from conan.tools.microsoft.nmaketoolchain import NMakeToolchain
from conan.tools.microsoft.nmakedeps import NMakeDeps
diff --git a/conan/tools/microsoft/subsystems.py b/conan/tools/microsoft/subsystems.py
index 55789e8ea81..62600065c3e 100644
--- a/conan/tools/microsoft/subsystems.py
+++ b/conan/tools/microsoft/subsystems.py
@@ -4,3 +4,10 @@
def unix_path(conanfile, path, scope="build"):
subsystem = deduce_subsystem(conanfile, scope=scope)
return subsystem_path(subsystem, path)
+
+def unix_path_package_info_legacy(conanfile, path, path_flavor=None):
+ message = f"The use of 'unix_path_legacy_compat' is deprecated in Conan 2.0 and does not " \
+ f"perform path conversions. This is retained for compatibility with Conan 1.x " \
+ f"and will be removed in a future version."
+ conanfile.output.warning(message)
+ return path
|
diff --git a/conans/test/unittests/tools/microsoft/test_subsystem.py b/conans/test/unittests/tools/microsoft/test_subsystem.py
index 65e5423e8c6..3af3cf3c860 100644
--- a/conans/test/unittests/tools/microsoft/test_subsystem.py
+++ b/conans/test/unittests/tools/microsoft/test_subsystem.py
@@ -2,7 +2,7 @@
import pytest
-from conan.tools.microsoft import unix_path
+from conan.tools.microsoft import unix_path, unix_path_package_info_legacy
from conans.model.conf import ConfDefinition
from conans.test.utils.mocks import MockSettings, ConanFileMock
@@ -27,5 +27,9 @@ def test_unix_path(subsystem, expected_path):
conanfile.settings = settings
conanfile.settings_build = settings
- path = unix_path(conanfile, "c:/path/to/stuff")
+ test_path = "c:/path/to/stuff"
+ path = unix_path(conanfile, test_path)
assert expected_path == path
+
+ package_info_legacy_path = unix_path_package_info_legacy(conanfile, test_path, path_flavor=subsystem)
+ assert package_info_legacy_path == test_path
| 2023-01-11T12:19:11
|
{}
|
{"conan/tools/microsoft/__init__.py": "from conan.tools.microsoft.layout import vs_layout\nfrom conan.tools.microsoft.msbuild import MSBuild\nfrom conan.tools.microsoft.msbuilddeps import MSBuildDeps\nfrom conan.tools.microsoft.subsystems import unix_path\nfrom conan.tools.microsoft.toolchain import MSBuildToolchain\nfrom conan.tools.microsoft.nmaketoolchain import NMakeToolchain\nfrom conan.tools.microsoft.nmakedeps import NMakeDeps\nfrom conan.tools.microsoft.visual import msvc_runtime_flag, VCVars, is_msvc, \\\n is_msvc_static_runtime, check_min_vs\n", "conan/tools/microsoft/subsystems.py": "from conans.client.subsystems import deduce_subsystem, subsystem_path\n\n\ndef unix_path(conanfile, path, scope=\"build\"):\n subsystem = deduce_subsystem(conanfile, scope=scope)\n return subsystem_path(subsystem, path)\n"}
|
{"conan/tools/microsoft/subsystems.py": [{"type": "function", "name": "unix_path_package_info_legacy", "lines": [8, 13], "signature": "def unix_path_package_info_legacy(conanfile, path, path_flavor=None):", "doc": ""}]}
| null |
["conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path[msys2-/c/path/to/stuff]", "conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path[msys-/c/path/to/stuff]", "conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path[cygwin-/cygdrive/c/path/to/stuff]", "conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path[wsl-/mnt/c/path/to/stuff]", "conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path[sfu-/dev/fs/C/path/to/stuff]"]
|
[]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1673439456.0, "pr_title": "Add unix_path_package_info_legacy compatibility function", "pr_body": "Changelog: Feature: Add `unix_path_package_info_legacy` function for those cases in which it is used in `package_info` in recipes that require compatibility with Conan 1.x. In Conan 2, path conversions should not be performed in the `package_info` method.\r\n\r\nDocs: https://github.com/conan-io/docs/pull/XXXX\r\n", "pr_timeline": [], "issues": {}}
|
|
conan-io/conan
| 12887
|
https://github.com/conan-io/conan/pull/12887
|
conan-io__conan-12887
|
[]
|
f3fb6c41246669646243f0c7d83c2c444743d896
|
diff --git a/conan/tools/microsoft/__init__.py b/conan/tools/microsoft/__init__.py
index 79598b28e5f..9c0bb920d5f 100644
--- a/conan/tools/microsoft/__init__.py
+++ b/conan/tools/microsoft/__init__.py
@@ -1,7 +1,7 @@
from conan.tools.microsoft.layout import vs_layout
from conan.tools.microsoft.msbuild import MSBuild
from conan.tools.microsoft.msbuilddeps import MSBuildDeps
-from conan.tools.microsoft.subsystems import unix_path
+from conan.tools.microsoft.subsystems import unix_path, unix_path_package_info_legacy
from conan.tools.microsoft.toolchain import MSBuildToolchain
from conan.tools.microsoft.nmaketoolchain import NMakeToolchain
from conan.tools.microsoft.nmakedeps import NMakeDeps
diff --git a/conan/tools/microsoft/subsystems.py b/conan/tools/microsoft/subsystems.py
index 55789e8ea81..5d1a59e3559 100644
--- a/conan/tools/microsoft/subsystems.py
+++ b/conan/tools/microsoft/subsystems.py
@@ -1,6 +1,12 @@
from conans.client.subsystems import deduce_subsystem, subsystem_path
+from conans.client.tools.win import unix_path as unix_path_legacy_tools
def unix_path(conanfile, path, scope="build"):
subsystem = deduce_subsystem(conanfile, scope=scope)
return subsystem_path(subsystem, path)
+
+def unix_path_package_info_legacy(conanfile, path, path_flavor=None):
+ # Call legacy implementation, which has different logic
+ # to autodeduce the subsystem type for the conversion.
+ return unix_path_legacy_tools(path, path_flavor)
|
diff --git a/conans/test/unittests/tools/microsoft/test_subsystem.py b/conans/test/unittests/tools/microsoft/test_subsystem.py
index 65e5423e8c6..0e010e23afb 100644
--- a/conans/test/unittests/tools/microsoft/test_subsystem.py
+++ b/conans/test/unittests/tools/microsoft/test_subsystem.py
@@ -1,19 +1,20 @@
+import mock
import textwrap
-
import pytest
-from conan.tools.microsoft import unix_path
+from conan.tools.microsoft import unix_path, unix_path_package_info_legacy
from conans.model.conf import ConfDefinition
from conans.test.utils.mocks import MockSettings, ConanFileMock
-
[email protected]("subsystem, expected_path", [
+expected_results = [
("msys2", '/c/path/to/stuff'),
("msys", '/c/path/to/stuff'),
("cygwin", '/cygdrive/c/path/to/stuff'),
("wsl", '/mnt/c/path/to/stuff'),
("sfu", '/dev/fs/C/path/to/stuff')
-])
+]
+
[email protected]("subsystem, expected_path", expected_results)
def test_unix_path(subsystem, expected_path):
c = ConfDefinition()
c.loads(textwrap.dedent("""\
@@ -29,3 +30,19 @@ def test_unix_path(subsystem, expected_path):
path = unix_path(conanfile, "c:/path/to/stuff")
assert expected_path == path
+
[email protected]("platform.system", mock.MagicMock(return_value='Windows'))
[email protected]("subsystem, expected_path", expected_results)
+def test_unix_path_package_info_legacy_windows(subsystem, expected_path):
+ test_path = "c:/path/to/stuff"
+ conanfile = ConanFileMock()
+ package_info_legacy_path = unix_path_package_info_legacy(conanfile, test_path, path_flavor=subsystem)
+ assert expected_path == package_info_legacy_path
+
[email protected]("platform.system", mock.MagicMock(return_value='Darwin'))
[email protected]("subsystem, expected_path", expected_results)
+def test_unix_path_package_info_legacy_not_windows(subsystem, expected_path):
+ test_path = "c:/path/to/stuff"
+ conanfile = ConanFileMock()
+ package_info_legacy_path = unix_path_package_info_legacy(conanfile, test_path, path_flavor=subsystem)
+ assert test_path == package_info_legacy_path
\ No newline at end of file
| 2023-01-11T12:43:17
|
{}
|
{"conan/tools/microsoft/__init__.py": "from conan.tools.microsoft.layout import vs_layout\nfrom conan.tools.microsoft.msbuild import MSBuild\nfrom conan.tools.microsoft.msbuilddeps import MSBuildDeps\nfrom conan.tools.microsoft.subsystems import unix_path\nfrom conan.tools.microsoft.toolchain import MSBuildToolchain\nfrom conan.tools.microsoft.nmaketoolchain import NMakeToolchain\nfrom conan.tools.microsoft.nmakedeps import NMakeDeps\nfrom conan.tools.microsoft.visual import msvc_runtime_flag, VCVars, is_msvc, \\\n is_msvc_static_runtime, check_min_vs\n", "conan/tools/microsoft/subsystems.py": "from conans.client.subsystems import deduce_subsystem, subsystem_path\n\n\ndef unix_path(conanfile, path, scope=\"build\"):\n subsystem = deduce_subsystem(conanfile, scope=scope)\n return subsystem_path(subsystem, path)\n"}
|
{"conan/tools/microsoft/subsystems.py": [{"type": "function", "name": "unix_path_package_info_legacy", "lines": [9, 12], "signature": "def unix_path_package_info_legacy(conanfile, path, path_flavor=None):", "doc": ""}]}
| null |
["conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path[msys2-/c/path/to/stuff]", "conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path[msys-/c/path/to/stuff]", "conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path[cygwin-/cygdrive/c/path/to/stuff]", "conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path[wsl-/mnt/c/path/to/stuff]", "conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path[sfu-/dev/fs/C/path/to/stuff]", "conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path_package_info_legacy_windows[msys2-/c/path/to/stuff]", "conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path_package_info_legacy_windows[msys-/c/path/to/stuff]", "conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path_package_info_legacy_windows[cygwin-/cygdrive/c/path/to/stuff]", "conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path_package_info_legacy_windows[wsl-/mnt/c/path/to/stuff]", "conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path_package_info_legacy_windows[sfu-/dev/fs/C/path/to/stuff]", "conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path_package_info_legacy_not_windows[msys2-/c/path/to/stuff]", "conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path_package_info_legacy_not_windows[msys-/c/path/to/stuff]", "conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path_package_info_legacy_not_windows[cygwin-/cygdrive/c/path/to/stuff]", "conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path_package_info_legacy_not_windows[wsl-/mnt/c/path/to/stuff]", "conans/test/unittests/tools/microsoft/test_subsystem.py::test_unix_path_package_info_legacy_not_windows[sfu-/dev/fs/C/path/to/stuff]"]
|
[]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1673429511.0, "pr_title": "Add unix_path_package_info_legacy compatibility function", "pr_body": "Changelog: Feature: Add `unix_path_package_info_legacy` function for those cases in which it is used in `package_info` in recipes that require compatibility with Conan 1.x. In Conan 2, path conversions should not be performed in the `package_info` method.\r\n\r\nDocs: https://github.com/conan-io/docs/pull/2894", "pr_timeline": [], "issues": {}}
|
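To sum up the difference between the two unix_path_package_info_legacy variants in the records above, here is a small sketch using the same mock and path as the unit tests; the standalone print call is illustrative, and the expected conversions are taken verbatim from the tests' expected_results:

from conan.tools.microsoft import unix_path_package_info_legacy
from conans.test.utils.mocks import ConanFileMock

conanfile = ConanFileMock()
path = "c:/path/to/stuff"

# conan-io/conan#12886: always returns the path unchanged and only emits a
# deprecation warning.
# conan-io/conan#12887: on a Windows build machine it applies the legacy
# conversion, e.g.
#   path_flavor="msys2"  -> "/c/path/to/stuff"
#   path_flavor="cygwin" -> "/cygdrive/c/path/to/stuff"
#   path_flavor="wsl"    -> "/mnt/c/path/to/stuff"
#   path_flavor="sfu"    -> "/dev/fs/C/path/to/stuff"
# and returns the path unchanged on any other platform.
print(unix_path_package_info_legacy(conanfile, path, path_flavor="msys2"))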
|
conan-io/conan
| 12889
|
https://github.com/conan-io/conan/pull/12889
|
conan-io__conan-12889
|
[]
|
ea235dc8a4719b71e13d3734855bff4600f62b06
|
diff --git a/conan/tools/gnu/autotoolstoolchain.py b/conan/tools/gnu/autotoolstoolchain.py
index 61e159ea4fd..b870bd5b032 100644
--- a/conan/tools/gnu/autotoolstoolchain.py
+++ b/conan/tools/gnu/autotoolstoolchain.py
@@ -224,23 +224,28 @@ def update_autoreconf_args(self, updated_flags):
# FIXME: Remove all these update_xxxx whenever xxxx_args are dicts or new ones replace them
def _update_flags(self, attr_name, updated_flags):
- _new_flags = []
- self_args = getattr(self, attr_name)
- for index, flag in enumerate(self_args):
- flag_name = flag.split("=")[0]
- if flag_name in updated_flags:
- new_flag_value = updated_flags[flag_name]
- # if {"build": None} is passed, then "--build=xxxx" will be pruned
- if new_flag_value is None:
- continue
- elif not new_flag_value:
- _new_flags.append(flag_name)
+
+ def _list_to_dict(flags):
+ ret = {}
+ for flag in flags:
+ # Only splitting if "=" is there
+ option = flag.split("=", 1)
+ if len(option) == 2:
+ ret[option[0]] = option[1]
else:
- _new_flags.append(f"{flag_name}={new_flag_value}")
- else:
- _new_flags.append(flag)
+ ret[option[0]] = ""
+ return ret
+
+ def _dict_to_list(flags):
+ return [f"{k}={v}" if v else k for k, v in flags.items() if v is not None]
+
+ self_args = getattr(self, attr_name)
+ # FIXME: if xxxxx_args -> dict-type at some point, all these lines could be removed
+ options = _list_to_dict(self_args)
+ # Add/update/remove the current xxxxx_args with the new flags given
+ options.update(updated_flags)
# Update the current ones
- setattr(self, attr_name, _new_flags)
+ setattr(self, attr_name, _dict_to_list(options))
def generate_args(self):
args = {"configure_args": args_to_string(self.configure_args),
|
diff --git a/conans/test/unittests/tools/gnu/autotoolschain_test.py b/conans/test/unittests/tools/gnu/autotoolschain_test.py
index 026acc933a0..61a20034a2d 100644
--- a/conans/test/unittests/tools/gnu/autotoolschain_test.py
+++ b/conans/test/unittests/tools/gnu/autotoolschain_test.py
@@ -189,20 +189,22 @@ def test_check_configure_args_overwriting_and_deletion(save_args, cross_building
def test_update_or_prune_any_args(cross_building_conanfile):
at = AutotoolsToolchain(cross_building_conanfile)
at.configure_args.append("--enable-flag1=false")
- at.make_args.append("--complex-flag=complex-value")
# Update configure_args
at.update_configure_args({"--prefix": "/my/other/prefix",
"--build": None, # prune value
- "--enable-flag1": ""})
+ "--enable-flag1": "", # without value
+ "-NEW-FLAG": "no" # new flag
+ })
new_configure_args = args_to_string(at.configure_args)
assert "--prefix=/my/other/prefix" in new_configure_args
assert "--build=" not in new_configure_args # pruned
assert "--enable-flag1" in new_configure_args # flag without value
+ assert "-NEW-FLAG=no" in new_configure_args # new flag
# Update autoreconf_args
at.update_autoreconf_args({"--force": None})
new_autoreconf_args = args_to_string(at.autoreconf_args)
assert "'--force" not in new_autoreconf_args
- # Update make_args
- at.update_make_args({"--complex-flag": "new-value"})
+ # Add new value to make_args
+ at.update_make_args({"--new-complex-flag": "new-value"})
new_make_args = args_to_string(at.make_args)
- assert "--complex-flag=new-value" in new_make_args
+ assert "--new-complex-flag=new-value" in new_make_args
| 2023-01-11T15:05:44
|
{}
|
{"conan/tools/gnu/autotoolstoolchain.py": "from conan.tools._check_build_profile import check_using_build_profile\nfrom conan.tools._compilers import architecture_flag, build_type_flags, cppstd_flag, \\\n build_type_link_flags, libcxx_flags\nfrom conan.tools.apple.apple import apple_min_version_flag, to_apple_arch, \\\n apple_sdk_path\nfrom conan.tools.build.cross_building import cross_building, get_cross_building_settings\nfrom conan.tools.env import Environment\nfrom conan.tools.files.files import save_toolchain_args\nfrom conan.tools.gnu.get_gnu_triplet import _get_gnu_triplet\nfrom conan.tools.microsoft import VCVars, msvc_runtime_flag\nfrom conans.errors import ConanException\nfrom conans.tools import args_to_string\n\n\nclass AutotoolsToolchain:\n def __init__(self, conanfile, namespace=None, prefix=\"/\"):\n self._conanfile = conanfile\n self._namespace = namespace\n self._prefix = prefix\n\n # Flags\n self.extra_cxxflags = []\n self.extra_cflags = []\n self.extra_ldflags = []\n self.extra_defines = []\n\n # Defines\n self.ndebug = None\n build_type = self._conanfile.settings.get_safe(\"build_type\")\n if build_type in ['Release', 'RelWithDebInfo', 'MinSizeRel']:\n self.ndebug = \"NDEBUG\"\n\n # TODO: This is also covering compilers like Visual Studio, necessary to test it (&remove?)\n self.build_type_flags = build_type_flags(self._conanfile.settings)\n self.build_type_link_flags = build_type_link_flags(self._conanfile.settings)\n\n self.cppstd = cppstd_flag(self._conanfile.settings)\n self.arch_flag = architecture_flag(self._conanfile.settings)\n self.libcxx, self.gcc_cxx11_abi = libcxx_flags(self._conanfile)\n self.fpic = self._conanfile.options.get_safe(\"fPIC\")\n self.msvc_runtime_flag = self._get_msvc_runtime_flag()\n\n # Cross build triplets\n self._host = self._conanfile.conf.get(\"tools.gnu:host_triplet\")\n self._build = None\n self._target = None\n\n self.apple_arch_flag = self.apple_isysroot_flag = None\n self.apple_min_version_flag = apple_min_version_flag(self._conanfile)\n\n self.sysroot_flag = None\n\n if cross_building(self._conanfile):\n # Host triplet\n os_build, arch_build, os_host, arch_host = get_cross_building_settings(self._conanfile)\n compiler = self._conanfile.settings.get_safe(\"compiler\")\n if not self._host:\n self._host = _get_gnu_triplet(os_host, arch_host, compiler=compiler)\n # Build triplet\n self._build = _get_gnu_triplet(os_build, arch_build, compiler=compiler)\n # Apple Stuff\n if os_build == \"Macos\":\n sdk_path = apple_sdk_path(conanfile)\n apple_arch = to_apple_arch(self._conanfile)\n # https://man.archlinux.org/man/clang.1.en#Target_Selection_Options\n self.apple_arch_flag = \"-arch {}\".format(apple_arch) if apple_arch else None\n # -isysroot makes all includes for your library relative to the build directory\n self.apple_isysroot_flag = \"-isysroot {}\".format(sdk_path) if sdk_path else None\n\n sysroot = self._conanfile.conf.get(\"tools.build:sysroot\")\n sysroot = sysroot.replace(\"\\\\\", \"/\") if sysroot is not None else None\n self.sysroot_flag = \"--sysroot {}\".format(sysroot) if sysroot else None\n\n self.configure_args = self._default_configure_shared_flags() + \\\n self._default_configure_install_flags() + \\\n self._get_triplets()\n self.autoreconf_args = self._default_autoreconf_flags()\n self.make_args = []\n\n check_using_build_profile(self._conanfile)\n\n def _get_msvc_runtime_flag(self):\n flag = msvc_runtime_flag(self._conanfile)\n if flag:\n flag = \"-{}\".format(flag)\n return flag\n\n @staticmethod\n def 
_filter_list_empty_fields(v):\n return list(filter(bool, v))\n\n @property\n def cxxflags(self):\n fpic = \"-fPIC\" if self.fpic else None\n ret = [self.libcxx, self.cppstd, self.arch_flag, fpic, self.msvc_runtime_flag,\n self.sysroot_flag]\n apple_flags = [self.apple_isysroot_flag, self.apple_arch_flag, self.apple_min_version_flag]\n conf_flags = self._conanfile.conf.get(\"tools.build:cxxflags\", default=[], check_type=list)\n ret = ret + self.build_type_flags + apple_flags + conf_flags + self.extra_cxxflags\n return self._filter_list_empty_fields(ret)\n\n @property\n def cflags(self):\n fpic = \"-fPIC\" if self.fpic else None\n ret = [self.arch_flag, fpic, self.msvc_runtime_flag, self.sysroot_flag]\n apple_flags = [self.apple_isysroot_flag, self.apple_arch_flag, self.apple_min_version_flag]\n conf_flags = self._conanfile.conf.get(\"tools.build:cflags\", default=[], check_type=list)\n ret = ret + self.build_type_flags + apple_flags + conf_flags + self.extra_cflags\n return self._filter_list_empty_fields(ret)\n\n @property\n def ldflags(self):\n ret = [self.arch_flag, self.sysroot_flag]\n apple_flags = [self.apple_isysroot_flag, self.apple_arch_flag, self.apple_min_version_flag]\n conf_flags = self._conanfile.conf.get(\"tools.build:sharedlinkflags\", default=[],\n check_type=list)\n conf_flags.extend(self._conanfile.conf.get(\"tools.build:exelinkflags\", default=[],\n check_type=list))\n linker_scripts = self._conanfile.conf.get(\"tools.build:linker_scripts\", default=[], check_type=list)\n conf_flags.extend([\"-T'\" + linker_script + \"'\" for linker_script in linker_scripts])\n ret = ret + apple_flags + conf_flags + self.build_type_link_flags + self.extra_ldflags\n return self._filter_list_empty_fields(ret)\n\n @property\n def defines(self):\n conf_flags = self._conanfile.conf.get(\"tools.build:defines\", default=[], check_type=list)\n ret = [self.ndebug, self.gcc_cxx11_abi] + conf_flags + self.extra_defines\n return self._filter_list_empty_fields(ret)\n\n def environment(self):\n env = Environment()\n compilers_by_conf = self._conanfile.conf.get(\"tools.build:compiler_executables\", default={}, check_type=dict)\n if compilers_by_conf:\n compilers_mapping = {\"c\": \"CC\", \"cpp\": \"CXX\", \"cuda\": \"NVCC\", \"fortran\": \"FC\"}\n for comp, env_var in compilers_mapping.items():\n if comp in compilers_by_conf:\n env.define(env_var, compilers_by_conf[comp])\n env.append(\"CPPFLAGS\", [\"-D{}\".format(d) for d in self.defines])\n env.append(\"CXXFLAGS\", self.cxxflags)\n env.append(\"CFLAGS\", self.cflags)\n env.append(\"LDFLAGS\", self.ldflags)\n env.prepend_path(\"PKG_CONFIG_PATH\", self._conanfile.generators_folder)\n return env\n\n def vars(self):\n return self.environment().vars(self._conanfile, scope=\"build\")\n\n def generate(self, env=None, scope=\"build\"):\n env = env or self.environment()\n env = env.vars(self._conanfile, scope=scope)\n env.save_script(\"conanautotoolstoolchain\")\n self.generate_args()\n VCVars(self._conanfile).generate(scope=scope)\n\n def _default_configure_shared_flags(self):\n args = []\n # Just add these flags if there's a shared option defined (never add to exe's)\n # FIXME: For Conan 2.0 use the package_type to decide if adding these flags or not\n try:\n if self._conanfile.options.shared:\n args.extend([\"--enable-shared\", \"--disable-static\"])\n else:\n args.extend([\"--disable-shared\", \"--enable-static\"])\n except ConanException:\n pass\n\n return args\n\n def _default_configure_install_flags(self):\n configure_install_flags = []\n\n def 
_get_argument(argument_name, cppinfo_name):\n elements = getattr(self._conanfile.cpp.package, cppinfo_name)\n return \"--{}=${{prefix}}/{}\".format(argument_name, elements[0]) if elements else \"\"\n\n # If someone want arguments but not the defaults can pass them in args manually\n configure_install_flags.extend([f\"--prefix={self._prefix}\",\n _get_argument(\"bindir\", \"bindirs\"),\n _get_argument(\"sbindir\", \"bindirs\"),\n _get_argument(\"libdir\", \"libdirs\"),\n _get_argument(\"includedir\", \"includedirs\"),\n _get_argument(\"oldincludedir\", \"includedirs\"),\n _get_argument(\"datarootdir\", \"resdirs\")])\n return [el for el in configure_install_flags if el]\n\n @staticmethod\n def _default_autoreconf_flags():\n return [\"--force\", \"--install\"]\n\n def _get_triplets(self):\n triplets = []\n for flag, value in ((\"--host=\", self._host), (\"--build=\", self._build),\n (\"--target=\", self._target)):\n if value:\n triplets.append(f'{flag}{value}')\n return triplets\n\n def update_configure_args(self, updated_flags):\n \"\"\"\n Helper to update/prune flags from ``self.configure_args``.\n\n :param updated_flags: ``dict`` with arguments as keys and their argument values.\n Notice that if argument value is ``None``, this one will be pruned.\n \"\"\"\n self._update_flags(\"configure_args\", updated_flags)\n\n def update_make_args(self, updated_flags):\n \"\"\"\n Helper to update/prune arguments from ``self.make_args``.\n\n :param updated_flags: ``dict`` with arguments as keys and their argument values.\n Notice that if argument value is ``None``, this one will be pruned.\n \"\"\"\n self._update_flags(\"make_args\", updated_flags)\n\n def update_autoreconf_args(self, updated_flags):\n \"\"\"\n Helper to update/prune arguments from ``self.autoreconf_args``.\n\n :param updated_flags: ``dict`` with arguments as keys and their argument values.\n Notice that if argument value is ``None``, this one will be pruned.\n \"\"\"\n self._update_flags(\"autoreconf_args\", updated_flags)\n\n # FIXME: Remove all these update_xxxx whenever xxxx_args are dicts or new ones replace them\n def _update_flags(self, attr_name, updated_flags):\n _new_flags = []\n self_args = getattr(self, attr_name)\n for index, flag in enumerate(self_args):\n flag_name = flag.split(\"=\")[0]\n if flag_name in updated_flags:\n new_flag_value = updated_flags[flag_name]\n # if {\"build\": None} is passed, then \"--build=xxxx\" will be pruned\n if new_flag_value is None:\n continue\n elif not new_flag_value:\n _new_flags.append(flag_name)\n else:\n _new_flags.append(f\"{flag_name}={new_flag_value}\")\n else:\n _new_flags.append(flag)\n # Update the current ones\n setattr(self, attr_name, _new_flags)\n\n def generate_args(self):\n args = {\"configure_args\": args_to_string(self.configure_args),\n \"make_args\": args_to_string(self.make_args),\n \"autoreconf_args\": args_to_string(self.autoreconf_args)}\n save_toolchain_args(args, namespace=self._namespace)\n"}
|
{"conan/tools/gnu/autotoolstoolchain.py": [{"type": "function", "name": "AutotoolsToolchain._update_flags._list_to_dict", "lines": [228, 237], "signature": "def _list_to_dict(flags):", "doc": ""}, {"type": "function", "name": "AutotoolsToolchain._update_flags._dict_to_list", "lines": [239, 240], "signature": "def _dict_to_list(flags):", "doc": ""}]}
| null |
["conans/test/unittests/tools/gnu/autotoolschain_test.py::test_update_or_prune_any_args"]
|
["conans/test/unittests/tools/gnu/autotoolschain_test.py::test_get_gnu_triplet_for_cross_building", "conans/test/unittests/tools/gnu/autotoolschain_test.py::test_get_toolchain_cppstd", "conans/test/unittests/tools/gnu/autotoolschain_test.py::test_msvc_runtime[static-Debug-MTd]", "conans/test/unittests/tools/gnu/autotoolschain_test.py::test_msvc_runtime[static-Release-MT]", "conans/test/unittests/tools/gnu/autotoolschain_test.py::test_msvc_runtime[dynamic-Debug-MDd]", "conans/test/unittests/tools/gnu/autotoolschain_test.py::test_msvc_runtime[dynamic-Release-MD]", "conans/test/unittests/tools/gnu/autotoolschain_test.py::test_visual_runtime[MTd]", "conans/test/unittests/tools/gnu/autotoolschain_test.py::test_visual_runtime[MT]", "conans/test/unittests/tools/gnu/autotoolschain_test.py::test_visual_runtime[MDd]", "conans/test/unittests/tools/gnu/autotoolschain_test.py::test_visual_runtime[MD]", "conans/test/unittests/tools/gnu/autotoolschain_test.py::test_get_gnu_triplet_for_cross_building_raise_error", "conans/test/unittests/tools/gnu/autotoolschain_test.py::test_compilers_mapping", "conans/test/unittests/tools/gnu/autotoolschain_test.py::test_linker_scripts", "conans/test/unittests/tools/gnu/autotoolschain_test.py::test_check_configure_args_overwriting_and_deletion"]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1669899197.0, "pr_title": "[AutotoolsToolchain] Improve `update_xxxxx_args` behavior", "pr_body": "Changelog: Feature: AutotoolsToolchain helper functions: `update_configure_args`, `update_make_args`, and `update_autoreconf_args` can also add new values\r\nDocs: https://github.com/conan-io/docs/pull/2895\r\n\r\nSummary: before this change, it was only possible to update/remove existing values. Now, you could add new ones. Trying to avoid weird situations like:\r\n\r\n```python\r\n tc = AutotoolsToolchain(self)\r\n tc.configure_args.extend([\r\n \"--datarootdir=${prefix}/lib\", # do not use share\r\n \"--disable-layoutex\",\r\n \"--disable-layout\"])\r\n tc.update_configure_args({\"--force\": None})\r\n```\r\nAfter my change, then it could be reduced to:\r\n```python\r\n tc = AutotoolsToolchain(self)\r\n tc.update_configure_args({\r\n \"--force\": None,\r\n \"--datarootdir\": \"${prefix}/lib\", # do not use share\r\n \"--disable-layoutex\": \"\",\r\n \"--disable-layout\": \"\"})\r\n```\r\n\r\nBoth are working, but I think the latest one could be the expected behavior. WDYT?", "pr_timeline": [], "issues": {}}
|
|
conan-io/conan
| 12,980
|
https://github.com/conan-io/conan/pull/12980
|
conan-io__conan-12980
|
[]
|
7931acac1b4627f22bc9f3db5cc57d29ddfa16ac
|
diff --git a/conans/client/cache/cache.py b/conans/client/cache/cache.py
index c29f087bb17..7e6f0bb7804 100644
--- a/conans/client/cache/cache.py
+++ b/conans/client/cache/cache.py
@@ -2,6 +2,7 @@
import platform
from typing import List
+import yaml
from jinja2 import Template
@@ -10,6 +11,7 @@
from conans.client.cache.remote_registry import RemoteRegistry
from conans.client.conf import default_settings_yml
from conans.client.store.localdb import LocalDB
+from conans.errors import ConanException
from conans.model.conf import ConfDefinition
from conans.model.package_ref import PkgReference
from conans.model.recipe_ref import RecipeReference
@@ -202,8 +204,37 @@ def settings(self):
"""Returns {setting: [value, ...]} defining all the possible
settings without values"""
self.initialize_settings()
- content = load(self.settings_path)
- return Settings.loads(content)
+
+ def _load_settings(path):
+ try:
+ return yaml.safe_load(load(path)) or {}
+ except yaml.YAMLError as ye:
+ raise ConanException("Invalid settings.yml format: {}".format(ye))
+
+ settings = _load_settings(self.settings_path)
+ user_settings_file = os.path.join(self.cache_folder, "settings_user.yml")
+ if os.path.exists(user_settings_file):
+ settings_user = _load_settings(user_settings_file)
+
+ def appending_recursive_dict_update(d, u):
+ # Not the same behavior as conandata_update, because this append lists
+ for k, v in u.items():
+ if isinstance(v, list):
+ current = d.get(k) or []
+ d[k] = current + [value for value in v if value not in current]
+ elif isinstance(v, dict):
+ current = d.get(k) or {}
+ d[k] = appending_recursive_dict_update(current, v)
+ else:
+ d[k] = v
+ return d
+
+ appending_recursive_dict_update(settings, settings_user)
+
+ try:
+ return Settings(settings)
+ except AttributeError as e:
+ raise ConanException("Invalid settings.yml format: {}".format(e))
def initialize_settings(self):
# TODO: This is called by ConfigAPI.init(), maybe move everything there?
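The new `settings` property loads both yml files into plain dictionaries and merges them with a recursive helper that appends list values instead of replacing them. A standalone sketch of that merge, run on made-up settings data:

```python
# Standalone sketch of the append-style recursive merge used for settings_user.yml.
# Lists are extended with new values only, nested dicts are merged recursively,
# and scalars from the user file overwrite the defaults.

def appending_recursive_dict_update(d, u):
    for k, v in u.items():
        if isinstance(v, list):
            current = d.get(k) or []
            d[k] = current + [value for value in v if value not in current]
        elif isinstance(v, dict):
            current = d.get(k) or {}
            d[k] = appending_recursive_dict_update(current, v)
        else:
            d[k] = v
    return d


settings = {"os": {"Windows": {"subsystem": [None, "msys2", "cygwin"]}, "Macos": None}}
settings_user = {"os": {"Windows": {"subsystem": ["new_sub"]}, "new_os": None},
                 "new_global": ["42", "21"]}

merged = appending_recursive_dict_update(settings, settings_user)
# {'os': {'Windows': {'subsystem': [None, 'msys2', 'cygwin', 'new_sub']},
#         'Macos': None, 'new_os': None},
#  'new_global': ['42', '21']}
print(merged)
```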
|
diff --git a/conans/test/integration/settings/test_settings_user.py b/conans/test/integration/settings/test_settings_user.py
new file mode 100644
index 00000000000..e911b14669f
--- /dev/null
+++ b/conans/test/integration/settings/test_settings_user.py
@@ -0,0 +1,54 @@
+import os
+import textwrap
+
+from conans.test.assets.genconanfile import GenConanfile
+from conans.test.utils.tools import TestClient
+from conans.util.files import save
+
+
+def test_settings_user():
+ c = TestClient()
+ settings_user = textwrap.dedent("""\
+ os:
+ Windows:
+ subsystem: [new_sub]
+ Linux:
+ new_versions: ["a", "b", "c"]
+ new_os:
+ new_global: ["42", "21"]
+ """)
+ save(os.path.join(c.cache_folder, "settings_user.yml"), settings_user)
+ c.save({"conanfile.py": GenConanfile().with_settings("os").with_settings("new_global")})
+ # New settings are there
+ c.run("install . -s os=Windows -s os.subsystem=new_sub -s new_global=42")
+ assert "new_global=42" in c.out
+ assert "os.subsystem=new_sub" in c.out
+ # Existing values of subsystem are still there
+ c.run("install . -s os=Windows -s os.subsystem=msys2 -s new_global=42")
+ assert "new_global=42" in c.out
+ assert "os.subsystem=msys2" in c.out
+ # Completely new values, not appended, but new, are there
+ c.run("install . -s os=Linux -s os.new_versions=a -s new_global=42")
+ assert "new_global=42" in c.out
+ assert "os.new_versions=a" in c.out
+ # Existing values of OSs are also there
+ c.run("install . -s os=Macos -s new_global=42")
+ assert "os=Macos" in c.out
+ assert "new_global=42" in c.out
+
+
+def test_settings_user_subdict():
+ c = TestClient()
+ settings_user = textwrap.dedent("""\
+ other_new:
+ other1:
+ other2:
+ version: [1, 2, 3]
+ """)
+ save(os.path.join(c.cache_folder, "settings_user.yml"), settings_user)
+ c.save({"conanfile.py": GenConanfile().with_settings("other_new")})
+ c.run("install . -s other_new=other1")
+ assert "other_new=other1" in c.out
+ c.run("install . -s other_new=other2 -s other_new.version=2")
+ assert "other_new=other2" in c.out
+ assert "other_new.version=2" in c.out
| 2023-01-26T00:38:06
|
{}
|
{"conans/client/cache/cache.py": "import os\nimport platform\nfrom typing import List\n\nfrom jinja2 import Template\n\n\nfrom conan.internal.cache.cache import DataCache, RecipeLayout, PackageLayout\nfrom conans.client.cache.editable import EditablePackages\nfrom conans.client.cache.remote_registry import RemoteRegistry\nfrom conans.client.conf import default_settings_yml\nfrom conans.client.store.localdb import LocalDB\nfrom conans.model.conf import ConfDefinition\nfrom conans.model.package_ref import PkgReference\nfrom conans.model.recipe_ref import RecipeReference\nfrom conans.model.settings import Settings\nfrom conans.paths import DEFAULT_PROFILE_NAME\nfrom conans.util.files import load, save, mkdir\n\n\nCONAN_SETTINGS = \"settings.yml\"\nLOCALDB = \".conan.db\"\nREMOTES = \"remotes.json\"\nPROFILES_FOLDER = \"profiles\"\nEXTENSIONS_FOLDER = \"extensions\"\nHOOKS_EXTENSION_FOLDER = \"hooks\"\nPLUGINS_FOLDER = \"plugins\"\nDEPLOYERS_EXTENSION_FOLDER = \"deploy\"\nCUSTOM_COMMANDS_FOLDER = \"commands\"\n\n\n# TODO: Rename this to ClientHome\nclass ClientCache(object):\n \"\"\" Class to represent/store/compute all the paths involved in the execution\n of conans commands. Accesses to real disk and reads/write things. (OLD client ConanPaths)\n \"\"\"\n\n def __init__(self, cache_folder):\n self.cache_folder = cache_folder\n\n # Caching\n self._config = None\n self._new_config = None\n self.editable_packages = EditablePackages(self.cache_folder)\n # paths\n self._store_folder = self.new_config.get(\"core.cache:storage_path\") or \\\n os.path.join(self.cache_folder, \"p\")\n\n mkdir(self._store_folder)\n db_filename = os.path.join(self._store_folder, 'cache.sqlite3')\n self._data_cache = DataCache(self._store_folder, db_filename)\n # The cache is first thing instantiated, we can remove this from env now\n self._localdb_encryption_key = os.environ.pop('CONAN_LOGIN_ENCRYPTION_KEY', None)\n\n def closedb(self):\n self._data_cache.closedb()\n\n def create_export_recipe_layout(self, ref: RecipeReference):\n return self._data_cache.create_export_recipe_layout(ref)\n\n def assign_rrev(self, layout: RecipeLayout):\n return self._data_cache.assign_rrev(layout)\n\n def create_build_pkg_layout(self, ref):\n return self._data_cache.create_build_pkg_layout(ref)\n\n def assign_prev(self, layout: PackageLayout):\n return self._data_cache.assign_prev(layout)\n\n def ref_layout(self, ref: RecipeReference):\n return self._data_cache.get_reference_layout(ref)\n\n def pkg_layout(self, ref: PkgReference):\n return self._data_cache.get_package_layout(ref)\n\n def get_or_create_ref_layout(self, ref: RecipeReference):\n return self._data_cache.get_or_create_ref_layout(ref)\n\n def get_or_create_pkg_layout(self, ref: PkgReference):\n return self._data_cache.get_or_create_pkg_layout(ref)\n\n def remove_recipe_layout(self, layout):\n self._data_cache.remove_recipe(layout)\n\n def remove_package_layout(self, layout):\n self._data_cache.remove_package(layout)\n\n def get_recipe_timestamp(self, ref):\n return self._data_cache.get_recipe_timestamp(ref)\n\n def get_package_timestamp(self, ref):\n return self._data_cache.get_package_timestamp(ref)\n\n def update_recipe_timestamp(self, ref):\n \"\"\" when the recipe already exists in cache, but we get a new timestamp from a server\n that would affect its order in our cache \"\"\"\n return self._data_cache.update_recipe_timestamp(ref)\n\n def all_refs(self):\n return self._data_cache.list_references()\n\n def exists_rrev(self, ref):\n # Used just by inspect to check before 
calling get_recipe()\n return self._data_cache.exists_rrev(ref)\n\n def exists_prev(self, pref):\n # Used just by download to skip downloads if prev already exists in cache\n return self._data_cache.exists_prev(pref)\n\n def get_package_revisions_references(self, pref: PkgReference, only_latest_prev=False):\n return self._data_cache.get_package_revisions_references(pref, only_latest_prev)\n\n def get_package_references(self, ref: RecipeReference,\n only_latest_prev=True) -> List[PkgReference]:\n \"\"\"Get the latest package references\"\"\"\n return self._data_cache.get_package_references(ref, only_latest_prev)\n\n def get_matching_build_id(self, ref, build_id):\n return self._data_cache.get_matching_build_id(ref, build_id)\n\n def get_recipe_revisions_references(self, ref, only_latest_rrev=False):\n return self._data_cache.get_recipe_revisions_references(ref, only_latest_rrev)\n\n def get_latest_recipe_reference(self, ref):\n return self._data_cache.get_latest_recipe_reference(ref)\n\n def get_latest_package_reference(self, pref):\n return self._data_cache.get_latest_package_reference(pref)\n\n @property\n def store(self):\n return self._store_folder\n\n @property\n def remotes_path(self):\n return os.path.join(self.cache_folder, REMOTES)\n\n @property\n def remotes_registry(self) -> RemoteRegistry:\n return RemoteRegistry(self)\n\n @property\n def new_config_path(self):\n return os.path.join(self.cache_folder, \"global.conf\")\n\n @property\n def new_config(self):\n \"\"\" this is the new global.conf to replace the old conan.conf that contains\n configuration defined with the new syntax as in profiles, this config will be composed\n to the profile ones and passed to the conanfiles.conf, which can be passed to collaborators\n \"\"\"\n if self._new_config is None:\n self._new_config = ConfDefinition()\n if os.path.exists(self.new_config_path):\n text = load(self.new_config_path)\n distro = None\n if platform.system() in [\"Linux\", \"FreeBSD\"]:\n import distro\n content = Template(text).render({\"platform\": platform, \"os\": os, \"distro\": distro})\n self._new_config.loads(content)\n return self._new_config\n\n @property\n def localdb(self):\n localdb_filename = os.path.join(self.cache_folder, LOCALDB)\n return LocalDB.create(localdb_filename, encryption_key=self._localdb_encryption_key)\n\n @property\n def profiles_path(self):\n return os.path.join(self.cache_folder, PROFILES_FOLDER)\n\n @property\n def settings_path(self):\n return os.path.join(self.cache_folder, CONAN_SETTINGS)\n\n @property\n def custom_commands_path(self):\n return os.path.join(self.cache_folder, EXTENSIONS_FOLDER, CUSTOM_COMMANDS_FOLDER)\n\n @property\n def plugins_path(self):\n return os.path.join(self.cache_folder, EXTENSIONS_FOLDER, PLUGINS_FOLDER)\n\n @property\n def default_profile_path(self):\n # Used only in testing, and this class \"reset_default_profile\"\n return os.path.join(self.cache_folder, PROFILES_FOLDER, DEFAULT_PROFILE_NAME)\n\n @property\n def hooks_path(self):\n \"\"\"\n :return: Hooks folder in client cache\n \"\"\"\n return os.path.join(self.cache_folder, EXTENSIONS_FOLDER, HOOKS_EXTENSION_FOLDER)\n\n @property\n def deployers_path(self):\n return os.path.join(self.cache_folder, EXTENSIONS_FOLDER, DEPLOYERS_EXTENSION_FOLDER)\n\n @property\n def settings(self):\n \"\"\"Returns {setting: [value, ...]} defining all the possible\n settings without values\"\"\"\n self.initialize_settings()\n content = load(self.settings_path)\n return Settings.loads(content)\n\n def initialize_settings(self):\n # 
TODO: This is called by ConfigAPI.init(), maybe move everything there?\n if not os.path.exists(self.settings_path):\n settings_yml = default_settings_yml\n save(self.settings_path, settings_yml)\n save(self.settings_path + \".orig\", settings_yml) # stores a copy, to check migrations\n"}
|
{"conans/client/cache/cache.py": [{"type": "function", "name": "ClientCache.settings._load_settings", "lines": [208, 212], "signature": "def _load_settings(path):", "doc": ""}, {"type": "function", "name": "ClientCache.settings.appending_recursive_dict_update", "lines": [219, 230], "signature": "def appending_recursive_dict_update(d, u):", "doc": ""}]}
| null |
["conans/test/integration/settings/test_settings_user.py::test_settings_user", "conans/test/integration/settings/test_settings_user.py::test_settings_user_subdict"]
|
[]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1674692646.0, "pr_title": "[develop2] Propose new settings_user.yml", "pr_body": "Changelog: Feature: Users can define their own settings in `settings_user.yml` that will be merged with the Conan `settings.yml`.\r\n\r\nClose https://github.com/conan-io/conan/issues/8261\r\nClose https://github.com/conan-io/conan/issues/12422\r\n\r\nThe main blocker for https://github.com/conan-io/conan/issues/8261 was the idea that it is necessary to do the round-trip for the ``yml`` file. It seems that is more doable to update the in-memory dictionaries, this is a proof of concept.", "pr_timeline": [], "issues": {}}
|
|
conan-io/conan
| 13,354
|
https://github.com/conan-io/conan/pull/13354
|
conan-io__conan-13354
|
[]
|
d6058bbb690819ded29eb283cd9329a92aea3d56
|
diff --git a/conan/api/subapi/config.py b/conan/api/subapi/config.py
index 98faa962e73..f0d9c405e47 100644
--- a/conan/api/subapi/config.py
+++ b/conan/api/subapi/config.py
@@ -21,3 +21,7 @@ def install(self, path_or_url, verify_ssl, config_type=None, args=None,
def get(self, name, default=None, check_type=None):
app = ConanApp(self.conan_api.cache_folder)
return app.cache.new_config.get(name, default=default, check_type=check_type)
+
+ def show(self, pattern):
+ app = ConanApp(self.conan_api.cache_folder)
+ return app.cache.new_config.show(pattern)
diff --git a/conan/cli/commands/config.py b/conan/cli/commands/config.py
index fd7fae5a0a4..b8b58437960 100644
--- a/conan/cli/commands/config.py
+++ b/conan/cli/commands/config.py
@@ -66,3 +66,14 @@ def config_list(conan_api, parser, subparser, *args):
"""
parser.parse_args(*args)
return BUILT_IN_CONFS
+
+
+@conan_subcommand(formatters={"text": list_text_formatter, "json": default_json_formatter})
+def config_show(conan_api, parser, subparser, *args):
+ """
+ Get the value of the specified conf
+ """
+ subparser.add_argument('pattern', help='Conf item(s) pattern for which to query their value')
+ args = parser.parse_args(*args)
+
+ return conan_api.config.show(args.pattern)
diff --git a/conans/model/conf.py b/conans/model/conf.py
index cab702ddf58..c273e70da88 100644
--- a/conans/model/conf.py
+++ b/conans/model/conf.py
@@ -1,5 +1,6 @@
import re
import os
+import fnmatch
from collections import OrderedDict
@@ -306,6 +307,11 @@ def pop(self, conf_name, default=None):
self._values.pop(conf_name, None)
return value
+ def show(self, fnpattern, pattern=""):
+ return {key: self.get(key)
+ for key in self._values.keys()
+ if fnmatch.fnmatch(pattern + key, fnpattern)}
+
def copy(self):
c = Conf()
c._values = self._values.copy()
@@ -475,12 +481,30 @@ def __bool__(self):
def get(self, conf_name, default=None, check_type=None):
"""
- Get the value of the conf name requested and convert it to the [type]-like passed.
+ Get the value of the conf name requested and convert it to the [type]-like passed.
"""
pattern, name = self._split_pattern_name(conf_name)
return self._pattern_confs.get(pattern, Conf()).get(name, default=default,
check_type=check_type)
+ def show(self, fnpattern):
+ """
+ Get the value of the confs that match the requested pattern
+ """
+ result = {}
+
+ for patter_key, patter_conf in self._pattern_confs.items():
+ if patter_key is None:
+ patter_key = ""
+ else:
+ patter_key += ":"
+
+ pattern_values = patter_conf.show(fnpattern, patter_key)
+ result.update({patter_key + pattern_subkey: pattern_subvalue
+ for pattern_subkey, pattern_subvalue in pattern_values.items()})
+
+ return result
+
def pop(self, conf_name, default=None):
"""
Remove the conf name passed.
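The added `show()` methods walk every pattern-scoped `Conf`, qualify each key with its `pattern:` prefix and keep the entries whose fully qualified name matches the requested fnmatch pattern; that is what lets the new `conan config show` subcommand return global and per-package confs alike. A simplified sketch of that filtering over plain dictionaries:

```python
# Simplified sketch of the fnmatch-based filtering behind `conan config show <pattern>`.
# Keys scoped to a package pattern (e.g. "zlib/*") are matched on their fully
# qualified "pattern:name" form, global keys on the bare name.
import fnmatch


def show(confs, fnpattern):
    result = {}
    for scope, values in confs.items():
        prefix = "" if scope is None else scope + ":"
        for name, value in values.items():
            if fnmatch.fnmatch(prefix + name, fnpattern):
                result[prefix + name] = value
    return result


confs = {
    None: {"tools.files.download:retry": 7, "core.net.http:max_retries": 5},
    "zlib/*": {"user.mycategory:retry": True},
}
print(show(confs, "*retr*"))
# {'tools.files.download:retry': 7, 'core.net.http:max_retries': 5,
#  'zlib/*:user.mycategory:retry': True}
```

In the actual patch this logic is split between `Conf.show()` and `ConfDefinition.show()`, with the CLI subcommand simply forwarding its positional `pattern` argument to the config API.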
|
diff --git a/conans/test/integration/command/config_test.py b/conans/test/integration/command/config_test.py
index 1ff1e502137..b47bf91ae2c 100644
--- a/conans/test/integration/command/config_test.py
+++ b/conans/test/integration/command/config_test.py
@@ -108,3 +108,46 @@ def _assert_config_not_exists(path):
_assert_config_exists("foo")
os.listdir(tc.current_folder)
+
+
+def test_config_show():
+ globalconf = textwrap.dedent("""
+ tools.build:jobs=42
+ tools.files.download:retry_wait=10
+ tools.files.download:retry=7
+ core.net.http:timeout=30
+ core.net.http:max_retries=5
+ zlib/*:user.mycategory:retry=True
+ zlib/*:user.mycategory:foo=0
+ zlib/*:user.myothercategory:foo=0
+ """)
+ tc = TestClient()
+ tc.save_home({"global.conf": globalconf})
+ tc.run("config show tools.build:jobs")
+ assert "42" in tc.out
+
+ tc.run("config show core*")
+ assert "core.net.http:timeout" in tc.out
+ assert "30" in tc.out
+ assert "core.net.http:max_retries" in tc.out
+ assert "5" in tc.out
+
+ tc.run("config show *retr*")
+ assert "tools.files.download:retry_wait" in tc.out
+ assert "tools.files.download:retry" in tc.out
+ assert "core.net.http:max_retries" in tc.out
+ assert "zlib/*:user.mycategory:retry" in tc.out
+
+ tc.run("config show zlib*")
+ assert "zlib/*:user.mycategory:retry" in tc.out
+ assert "zlib/*:user.mycategory:foo" in tc.out
+ assert "zlib/*:user.myothercategory:foo" in tc.out
+
+ tc.run("config show zlib/*")
+ assert "zlib/*:user.mycategory:retry" in tc.out
+ assert "zlib/*:user.mycategory:foo" in tc.out
+ assert "zlib/*:user.myothercategory:foo" in tc.out
+
+ tc.run("config show zlib/*:foo")
+ assert "zlib/*:user.mycategory:foo" in tc.out
+ assert "zlib/*:user.myothercategory:foo" in tc.out
| 2023-03-07T08:38:34
|
{}
|
{"conan/api/subapi/config.py": "from conan.internal.conan_app import ConanApp\n\n\nclass ConfigAPI:\n\n def __init__(self, conan_api):\n self.conan_api = conan_api\n\n def home(self):\n return self.conan_api.cache_folder\n\n def install(self, path_or_url, verify_ssl, config_type=None, args=None,\n source_folder=None, target_folder=None):\n # TODO: We probably want to split this into git-folder-http cases?\n from conans.client.conf.config_installer import configuration_install\n app = ConanApp(self.conan_api.cache_folder)\n return configuration_install(app, path_or_url, verify_ssl,\n config_type=config_type, args=args,\n source_folder=source_folder, target_folder=target_folder)\n\n def get(self, name, default=None, check_type=None):\n app = ConanApp(self.conan_api.cache_folder)\n return app.cache.new_config.get(name, default=default, check_type=check_type)\n", "conan/cli/commands/config.py": "from conan.api.output import cli_out_write\nfrom conan.cli.command import conan_command, conan_subcommand\nfrom conan.cli.formatters import default_json_formatter\nfrom conans.model.conf import BUILT_IN_CONFS\nfrom conans.util.config_parser import get_bool_from_text\n\n\n@conan_command(group='Consumer')\ndef config(conan_api, parser, *args):\n \"\"\"\n Manage the Conan configuration in the Conan home.\n \"\"\"\n\n\n@conan_subcommand()\ndef config_install(conan_api, parser, subparser, *args):\n \"\"\"\n Install the configuration (remotes, profiles, conf), from git, http or a folder, into the\n Conan home folder.\n \"\"\"\n subparser.add_argument(\"item\",\n help=\"git repository, local file or folder or zip file (local or \"\n \"http) where the configuration is stored\")\n\n ssl_subgroup = subparser.add_mutually_exclusive_group()\n ssl_subgroup.add_argument(\"--verify-ssl\", nargs=\"?\", default=\"True\",\n help='Verify SSL connection when downloading file')\n ssl_subgroup.add_argument(\"--insecure\", action=\"store_false\", default=None,\n help=\"Allow insecure server connections when using SSL. 
\"\n \"Equivalent to --verify-ssl=False\",\n dest=\"verify_ssl\")\n subparser.add_argument(\"-t\", \"--type\", choices=[\"git\", \"dir\", \"file\", \"url\"],\n help='Type of remote config')\n subparser.add_argument(\"-a\", \"--args\",\n help='String with extra arguments for \"git clone\"')\n subparser.add_argument(\"-sf\", \"--source-folder\",\n help='Install files only from a source subfolder from the '\n 'specified origin')\n subparser.add_argument(\"-tf\", \"--target-folder\",\n help='Install to that path in the conan cache')\n args = parser.parse_args(*args)\n verify_ssl = args.verify_ssl if isinstance(args.verify_ssl, bool) else get_bool_from_text(args.verify_ssl)\n conan_api.config.install(args.item, verify_ssl, args.type, args.args,\n source_folder=args.source_folder,\n target_folder=args.target_folder)\n\n\ndef list_text_formatter(confs):\n for k, v in confs.items():\n cli_out_write(f\"{k}: {v}\")\n\n\n@conan_subcommand(formatters={\"text\": cli_out_write})\ndef config_home(conan_api, parser, subparser, *args):\n \"\"\"\n Show the Conan home folder.\n \"\"\"\n parser.parse_args(*args)\n return conan_api.config.home()\n\n\n@conan_subcommand(formatters={\"text\": list_text_formatter, \"json\": default_json_formatter})\ndef config_list(conan_api, parser, subparser, *args):\n \"\"\"\n Show all the Conan available configurations: core and tools.\n \"\"\"\n parser.parse_args(*args)\n return BUILT_IN_CONFS\n", "conans/model/conf.py": "import re\nimport os\n\nfrom collections import OrderedDict\n\n\nfrom conans.errors import ConanException\nfrom conans.model.recipe_ref import ref_matches\n\nBUILT_IN_CONFS = {\n \"core:required_conan_version\": \"Raise if current version does not match the defined range.\",\n \"core:non_interactive\": \"Disable interactive user input, raises error if input necessary\",\n \"core:default_profile\": \"Defines the default host profile ('default' by default)\",\n \"core:default_build_profile\": \"Defines the default build profile (None by default)\",\n \"core:allow_uppercase_pkg_names\": \"Temporarily (will be removed in 2.X) allow uppercase names\",\n \"core.upload:retry\": \"Number of retries in case of failure when uploading to Conan server\",\n \"core.upload:retry_wait\": \"Seconds to wait between upload attempts to Conan server\",\n \"core.download:parallel\": \"Number of concurrent threads to download packages\",\n \"core.download:retry\": \"Number of retries in case of failure when downloading from Conan server\",\n \"core.download:retry_wait\": \"Seconds to wait between download attempts from Conan server\",\n \"core.download:download_cache\": \"Define path to a file download cache\",\n \"core.cache:storage_path\": \"Absolute path where the packages and database are stored\",\n # Package ID\n \"core.package_id:default_unknown_mode\": \"By default, 'semver_mode'\",\n \"core.package_id:default_non_embed_mode\": \"By default, 'minor_mode'\",\n \"core.package_id:default_embed_mode\": \"By default, 'full_mode'\",\n \"core.package_id:default_python_mode\": \"By default, 'minor_mode'\",\n \"core.package_id:default_build_mode\": \"By default, 'None'\",\n # General HTTP(python-requests) configuration\n \"core.net.http:max_retries\": \"Maximum number of connection retries (requests library)\",\n \"core.net.http:timeout\": \"Number of seconds without response to timeout (requests library)\",\n \"core.net.http:no_proxy_match\": \"List of urls to skip from proxies configuration\",\n \"core.net.http:proxies\": \"Dictionary containing the proxy configuration\",\n 
\"core.net.http:cacert_path\": \"Path containing a custom Cacert file\",\n \"core.net.http:client_cert\": \"Path or tuple of files containing a client cert (and key)\",\n \"core.net.http:clean_system_proxy\": \"If defined, the proxies system env-vars will be discarded\",\n # Gzip compression\n \"core.gzip:compresslevel\": \"The Gzip compresion level for Conan artifacts (default=9)\",\n # Tools\n \"tools.android:ndk_path\": \"Argument for the CMAKE_ANDROID_NDK\",\n \"tools.build:skip_test\": \"Do not execute CMake.test() and Meson.test() when enabled\",\n \"tools.build:download_source\": \"Force download of sources for every package\",\n \"tools.build:jobs\": \"Default compile jobs number -jX Ninja, Make, /MP VS (default: max CPUs)\",\n \"tools.build:sysroot\": \"Pass the --sysroot=<tools.build:sysroot> flag if available. (None by default)\",\n \"tools.build.cross_building:can_run\": \"Bool value that indicates whether is possible to run a non-native \"\n \"app on the same architecture. It's used by 'can_run' tool\",\n \"tools.cmake.cmaketoolchain:generator\": \"User defined CMake generator to use instead of default\",\n \"tools.cmake.cmaketoolchain:find_package_prefer_config\": \"Argument for the CMAKE_FIND_PACKAGE_PREFER_CONFIG\",\n \"tools.cmake.cmaketoolchain:toolchain_file\": \"Use other existing file rather than conan_toolchain.cmake one\",\n \"tools.cmake.cmaketoolchain:user_toolchain\": \"Inject existing user toolchains at the beginning of conan_toolchain.cmake\",\n \"tools.cmake.cmaketoolchain:system_name\": \"Define CMAKE_SYSTEM_NAME in CMakeToolchain\",\n \"tools.cmake.cmaketoolchain:system_version\": \"Define CMAKE_SYSTEM_VERSION in CMakeToolchain\",\n \"tools.cmake.cmaketoolchain:system_processor\": \"Define CMAKE_SYSTEM_PROCESSOR in CMakeToolchain\",\n \"tools.cmake.cmaketoolchain:toolset_arch\": \"Toolset architecture to be used as part of CMAKE_GENERATOR_TOOLSET in CMakeToolchain\",\n \"tools.cmake.cmake_layout:build_folder_vars\": \"Settings and Options that will produce a different build folder and different CMake presets names\",\n \"tools.files.download:download_cache\": \"Define the cache folder to store downloads from files.download()/get()\",\n \"tools.files.download:retry\": \"Number of retries in case of failure when downloading\",\n \"tools.files.download:retry_wait\": \"Seconds to wait between download attempts\",\n \"tools.gnu:make_program\": \"Indicate path to make program\",\n \"tools.gnu:define_libcxx11_abi\": \"Force definition of GLIBCXX_USE_CXX11_ABI=1 for libstdc++11\",\n \"tools.gnu:pkg_config\": \"Path to pkg-config executable used by PkgConfig build helper\",\n \"tools.gnu:host_triplet\": \"Custom host triplet to pass to Autotools scripts\",\n \"tools.google.bazel:configs\": \"Define Bazel config file\",\n \"tools.google.bazel:bazelrc_path\": \"Defines Bazel rc-path\",\n \"tools.meson.mesontoolchain:backend\": \"Any Meson backend: ninja, vs, vs2010, vs2012, vs2013, vs2015, vs2017, vs2019, xcode\",\n \"tools.meson.mesontoolchain:extra_machine_files\": \"List of paths for any additional native/cross file references to be appended to the existing Conan ones\",\n \"tools.microsoft.msbuild:verbosity\": \"Verbosity level for MSBuild: 'Quiet', 'Minimal', 'Normal', 'Detailed', 'Diagnostic'\",\n \"tools.microsoft.msbuild:vs_version\": \"Defines the IDE version when using the new msvc compiler\",\n \"tools.microsoft.msbuild:max_cpu_count\": \"Argument for the /m when running msvc to build parallel projects\",\n \"tools.microsoft.msbuild:installation_path\": \"VS 
install path, to avoid auto-detect via vswhere, like C:/Program Files (x86)/Microsoft Visual Studio/2019/Community. Use empty string to disable\",\n \"tools.microsoft.msbuilddeps:exclude_code_analysis\": \"Suppress MSBuild code analysis for patterns\",\n \"tools.microsoft.msbuildtoolchain:compile_options\": \"Dictionary with MSBuild compiler options\",\n \"tools.microsoft.bash:subsystem\": \"The subsystem to be used when conanfile.win_bash==True. Possible values: msys2, msys, cygwin, wsl, sfu\",\n \"tools.microsoft.bash:path\": \"The path to the shell to run when conanfile.win_bash==True\",\n \"tools.microsoft.bash:active\": \"If Conan is already running inside bash terminal in Windows\",\n \"tools.intel:installation_path\": \"Defines the Intel oneAPI installation root path\",\n \"tools.intel:setvars_args\": \"Custom arguments to be passed onto the setvars.sh|bat script from Intel oneAPI\",\n \"tools.system.package_manager:tool\": \"Default package manager tool: 'apt-get', 'yum', 'dnf', 'brew', 'pacman', 'choco', 'zypper', 'pkg' or 'pkgutil'\",\n \"tools.system.package_manager:mode\": \"Mode for package_manager tools: 'check' or 'install'\",\n \"tools.system.package_manager:sudo\": \"Use 'sudo' when invoking the package manager tools in Linux (False by default)\",\n \"tools.system.package_manager:sudo_askpass\": \"Use the '-A' argument if using sudo in Linux to invoke the system package manager (False by default)\",\n \"tools.apple.xcodebuild:verbosity\": \"Verbosity level for xcodebuild: 'verbose' or 'quiet\",\n \"tools.apple:sdk_path\": \"Path to the SDK to be used\",\n \"tools.apple:enable_bitcode\": \"(boolean) Enable/Disable Bitcode Apple Clang flags\",\n \"tools.apple:enable_arc\": \"(boolean) Enable/Disable ARC Apple Clang flags\",\n \"tools.apple:enable_visibility\": \"(boolean) Enable/Disable Visibility Apple Clang flags\",\n \"tools.env.virtualenv:powershell\": \"If it is set to True it will generate powershell launchers if os=Windows\",\n # Compilers/Flags configurations\n \"tools.build:compiler_executables\": \"Defines a Python dict-like with the compilers path to be used. 
Allowed keys {'c', 'cpp', 'cuda', 'objc', 'objcxx', 'rc', 'fortran', 'asm', 'hip', 'ispc'}\",\n \"tools.build:cxxflags\": \"List of extra CXX flags used by different toolchains like CMakeToolchain, AutotoolsToolchain and MesonToolchain\",\n \"tools.build:cflags\": \"List of extra C flags used by different toolchains like CMakeToolchain, AutotoolsToolchain and MesonToolchain\",\n \"tools.build:defines\": \"List of extra definition flags used by different toolchains like CMakeToolchain and AutotoolsToolchain\",\n \"tools.build:sharedlinkflags\": \"List of extra flags used by CMakeToolchain for CMAKE_SHARED_LINKER_FLAGS_INIT variable\",\n \"tools.build:exelinkflags\": \"List of extra flags used by CMakeToolchain for CMAKE_EXE_LINKER_FLAGS_INIT variable\",\n \"tools.build:linker_scripts\": \"List of linker script files to pass to the linker used by different toolchains like CMakeToolchain, AutotoolsToolchain, and MesonToolchain\",\n # Package ID composition\n \"tools.info.package_id:confs\": \"List of existing configuration to be part of the package ID\",\n}\n\nBUILT_IN_CONFS = {key: value for key, value in sorted(BUILT_IN_CONFS.items())}\n\n\nCORE_CONF_PATTERN = re.compile(r\"^core[.:]\")\nTOOLS_CONF_PATTERN = re.compile(r\"^tools[.:]\")\nUSER_CONF_PATTERN = re.compile(r\"^user[.:]\")\n\n\ndef _is_profile_module(module_name):\n # These are the modules that are propagated to profiles and user recipes\n _profiles_modules_patterns = USER_CONF_PATTERN, TOOLS_CONF_PATTERN\n return any(pattern.match(module_name) for pattern in _profiles_modules_patterns)\n\n\n# FIXME: Refactor all the next classes because they are mostly the same as\n# conan.tools.env.environment ones\nclass _ConfVarPlaceHolder:\n pass\n\n\nclass _ConfValue(object):\n\n def __init__(self, name, value, path=False):\n if name != name.lower():\n raise ConanException(\"Conf '{}' must be lowercase\".format(name))\n self._name = name\n self._value = value\n self._value_type = type(value)\n self._path = path\n\n def __repr__(self):\n return repr(self._value)\n\n @property\n def value(self):\n if self._value_type is list and _ConfVarPlaceHolder in self._value:\n v = self._value[:]\n v.remove(_ConfVarPlaceHolder)\n return v\n return self._value\n\n def copy(self):\n return _ConfValue(self._name, self._value, self._path)\n\n def dumps(self):\n if self._value is None:\n return \"{}=!\".format(self._name) # unset\n elif self._value_type is list and _ConfVarPlaceHolder in self._value:\n v = self._value[:]\n v.remove(_ConfVarPlaceHolder)\n return \"{}={}\".format(self._name, v)\n else:\n return \"{}={}\".format(self._name, self._value)\n\n def serialize(self):\n if self._value is None:\n _value = \"!\" # unset\n elif self._value_type is list and _ConfVarPlaceHolder in self._value:\n v = self._value[:]\n v.remove(_ConfVarPlaceHolder)\n _value = v\n else:\n _value = self._value\n return {self._name: _value}\n\n def update(self, value):\n if self._value_type is dict:\n self._value.update(value)\n\n def remove(self, value):\n if self._value_type is list:\n self._value.remove(value)\n elif self._value_type is dict:\n self._value.pop(value, None)\n\n def append(self, value):\n if self._value_type is not list:\n raise ConanException(\"Only list-like values can append other values.\")\n\n if isinstance(value, list):\n self._value.extend(value)\n else:\n self._value.append(value)\n\n def prepend(self, value):\n if self._value_type is not list:\n raise ConanException(\"Only list-like values can prepend other values.\")\n\n if isinstance(value, list):\n 
self._value = value + self._value\n else:\n self._value.insert(0, value)\n\n def compose_conf_value(self, other):\n \"\"\"\n self has precedence, the \"other\" will add/append if possible and not conflicting, but\n self mandates what to do. If self has define(), without placeholder, that will remain.\n :type other: _ConfValue\n \"\"\"\n v_type = self._value_type\n o_type = other._value_type\n if v_type is list and o_type is list:\n try:\n index = self._value.index(_ConfVarPlaceHolder)\n except ValueError: # It doesn't have placeholder\n pass\n else:\n new_value = self._value[:] # do a copy\n new_value[index:index + 1] = other._value # replace the placeholder\n self._value = new_value\n elif self._value is None or other._value is None:\n # It means any of those values were an \"unset\" so doing nothing because we don't\n # really know the original value type\n pass\n elif o_type != v_type:\n raise ConanException(\"It's not possible to compose {} values \"\n \"and {} ones.\".format(v_type.__name__, o_type.__name__))\n # TODO: In case of any other object types?\n\n def set_relative_base_folder(self, folder):\n if not self._path:\n return\n if isinstance(self._value, list):\n self._value = [os.path.join(folder, v) if v != _ConfVarPlaceHolder else v\n for v in self._value]\n if isinstance(self._value, dict):\n self._value = {k: os.path.join(folder, v) for k, v in self._value.items()}\n elif isinstance(self._value, str):\n self._value = os.path.join(folder, self._value)\n\n\nclass Conf:\n\n # Putting some default expressions to check that any value could be false\n boolean_false_expressions = (\"0\", '\"0\"', \"false\", '\"false\"', \"off\")\n\n def __init__(self):\n # It being ordered allows for Windows case-insensitive composition\n self._values = OrderedDict() # {var_name: [] of values, including separators}\n\n def __bool__(self):\n return bool(self._values)\n\n def __repr__(self):\n return \"Conf: \" + repr(self._values)\n\n def __eq__(self, other):\n \"\"\"\n :type other: Conf\n \"\"\"\n return other._values == self._values\n\n def validate(self):\n for conf in self._values:\n if conf.startswith(\"tools\") or conf.startswith(\"core\"):\n if conf not in BUILT_IN_CONFS:\n raise ConanException(f\"Unknown conf '{conf}'. Use 'conan config list' to \"\n \"display existing configurations\")\n\n def items(self):\n # FIXME: Keeping backward compatibility\n for k, v in self._values.items():\n yield k, v.value\n\n def get(self, conf_name, default=None, check_type=None):\n \"\"\"\n Get all the values of the given configuration name.\n\n :param conf_name: Name of the configuration.\n :param default: Default value in case of conf does not have the conf_name key.\n :param check_type: Check the conf type(value) is the same as the given by this param.\n There are two default smart conversions for bool and str types.\n \"\"\"\n # Skipping this check only the user.* configurations\n if USER_CONF_PATTERN.match(conf_name) is None and conf_name not in BUILT_IN_CONFS:\n raise ConanException(f\"[conf] '{conf_name}' does not exist in configuration list. 
\"\n f\" Run 'conan config list' to see all the available confs.\")\n\n conf_value = self._values.get(conf_name)\n if conf_value:\n v = conf_value.value\n # Some smart conversions\n if check_type is bool and not isinstance(v, bool):\n # Perhaps, user has introduced a \"false\", \"0\" or even \"off\"\n return str(v).lower() not in Conf.boolean_false_expressions\n elif check_type is str and not isinstance(v, str):\n return str(v)\n elif v is None: # value was unset\n return default\n elif check_type is not None and not isinstance(v, check_type):\n raise ConanException(f\"[conf] {conf_name} must be a \"\n f\"{check_type.__name__}-like object. The value '{v}' \"\n f\"introduced is a {type(v).__name__} object\")\n return v\n else:\n return default\n\n def pop(self, conf_name, default=None):\n \"\"\"\n Remove the given configuration, returning its value.\n\n :param conf_name: Name of the configuration.\n :param default: Default value to return in case the configuration doesn't exist.\n :return:\n \"\"\"\n value = self.get(conf_name, default=default)\n self._values.pop(conf_name, None)\n return value\n\n def copy(self):\n c = Conf()\n c._values = self._values.copy()\n return c\n\n def dumps(self):\n \"\"\"\n Returns a string with the format ``name=conf-value``\n \"\"\"\n return \"\\n\".join([v.dumps() for v in reversed(self._values.values())])\n\n def serialize(self):\n \"\"\"\n Returns a dict-like object, e.g., ``{\"tools.xxxx\": \"value1\"}``\n \"\"\"\n ret = {}\n for v in self._values.values():\n ret.update(v.serialize())\n return ret\n\n def define(self, name, value):\n \"\"\"\n Define a value for the given configuration name.\n\n :param name: Name of the configuration.\n :param value: Value of the configuration.\n \"\"\"\n self._values[name] = _ConfValue(name, value)\n\n def define_path(self, name, value):\n self._values[name] = _ConfValue(name, value, path=True)\n\n def unset(self, name):\n \"\"\"\n Clears the variable, equivalent to a unset or set XXX=\n\n :param name: Name of the configuration.\n \"\"\"\n self._values[name] = _ConfValue(name, None)\n\n def update(self, name, value):\n \"\"\"\n Update the value to the given configuration name.\n\n :param name: Name of the configuration.\n :param value: Value of the configuration.\n \"\"\"\n conf_value = _ConfValue(name, {})\n self._values.setdefault(name, conf_value).update(value)\n\n def update_path(self, name, value):\n conf_value = _ConfValue(name, {}, path=True)\n self._values.setdefault(name, conf_value).update(value)\n\n def append(self, name, value):\n \"\"\"\n Append a value to the given configuration name.\n\n :param name: Name of the configuration.\n :param value: Value to append.\n \"\"\"\n conf_value = _ConfValue(name, [_ConfVarPlaceHolder])\n self._values.setdefault(name, conf_value).append(value)\n\n def append_path(self, name, value):\n conf_value = _ConfValue(name, [_ConfVarPlaceHolder], path=True)\n self._values.setdefault(name, conf_value).append(value)\n\n def prepend(self, name, value):\n \"\"\"\n Prepend a value to the given configuration name.\n\n :param name: Name of the configuration.\n :param value: Value to prepend.\n \"\"\"\n conf_value = _ConfValue(name, [_ConfVarPlaceHolder])\n self._values.setdefault(name, conf_value).prepend(value)\n\n def prepend_path(self, name, value):\n conf_value = _ConfValue(name, [_ConfVarPlaceHolder], path=True)\n self._values.setdefault(name, conf_value).prepend(value)\n\n def remove(self, name, value):\n \"\"\"\n Remove a value from the given configuration name.\n\n :param name: 
Name of the configuration.\n :param value: Value to remove.\n \"\"\"\n conf_value = self._values.get(name)\n if conf_value:\n conf_value.remove(value)\n else:\n raise ConanException(\"Conf {} does not exist.\".format(name))\n\n def compose_conf(self, other):\n \"\"\"\n :param other: other has less priority than current one\n :type other: Conf\n \"\"\"\n for k, v in other._values.items():\n existing = self._values.get(k)\n if existing is None:\n self._values[k] = v.copy()\n else:\n existing.compose_conf_value(v)\n return self\n\n def filter_user_modules(self):\n result = Conf()\n for k, v in self._values.items():\n if _is_profile_module(k):\n result._values[k] = v\n return result\n\n def copy_conaninfo_conf(self):\n \"\"\"\n Get a new `Conf()` object with all the configurations required by the consumer\n to be included in the final `ConanInfo().package_id()` computation. For instance, let's\n suppose that we have this Conan `profile`:\n\n ```\n ...\n [conf]\n tools.info.package_id:confs=[\"tools.build:cxxflags\", \"tools.build:cflags\"]\n tools.build:cxxflags=[\"flag1xx\"]\n tools.build:cflags=[\"flag1\"]\n tools.build:defines=[\"DEF1\"]\n ...\n\n Then, the resulting `Conf()` will have only these configuration lines:\n\n tools.build:cxxflags=[\"flag1xx\"]\n tools.build:cflags=[\"flag1\"]\n ```\n\n :return: a new `< Conf object >` with the configuration selected by `tools.info.package_id:confs`.\n \"\"\"\n result = Conf()\n # Reading the list of all the configurations selected by the user to use for the package_id\n package_id_confs = self.get(\"tools.info.package_id:confs\", default=[], check_type=list)\n for conf_name in package_id_confs:\n value = self.get(conf_name)\n # Pruning any empty values, those should not affect package ID\n if value:\n result.define(conf_name, value)\n return result\n\n def set_relative_base_folder(self, folder):\n for v in self._values.values():\n v.set_relative_base_folder(folder)\n\n\nclass ConfDefinition:\n\n actions = ((\"+=\", \"append\"), (\"=+\", \"prepend\"),\n (\"=!\", \"unset\"), (\"=\", \"define\"))\n\n def __init__(self):\n self._pattern_confs = OrderedDict()\n\n def __repr__(self):\n return \"ConfDefinition: \" + repr(self._pattern_confs)\n\n def __bool__(self):\n return bool(self._pattern_confs)\n\n def get(self, conf_name, default=None, check_type=None):\n \"\"\"\n Get the value of the conf name requested and convert it to the [type]-like passed.\n \"\"\"\n pattern, name = self._split_pattern_name(conf_name)\n return self._pattern_confs.get(pattern, Conf()).get(name, default=default,\n check_type=check_type)\n\n def pop(self, conf_name, default=None):\n \"\"\"\n Remove the conf name passed.\n \"\"\"\n pattern, name = self._split_pattern_name(conf_name)\n return self._pattern_confs.get(pattern, Conf()).pop(name, default=default)\n\n @staticmethod\n def _split_pattern_name(pattern_name):\n if pattern_name.count(\":\") >= 2:\n pattern, name = pattern_name.split(\":\", 1)\n else:\n pattern, name = None, pattern_name\n return pattern, name\n\n def get_conanfile_conf(self, ref, is_consumer=False):\n \"\"\" computes package-specific Conf\n it is only called when conanfile.buildenv is called\n the last one found in the profile file has top priority\n \"\"\"\n result = Conf()\n for pattern, conf in self._pattern_confs.items():\n if pattern is None or ref_matches(ref, pattern, is_consumer):\n # Latest declared has priority, copy() necessary to not destroy data\n result = conf.copy().compose_conf(result)\n return result\n\n def update_conf_definition(self, 
other):\n \"\"\"\n :type other: ConfDefinition\n :param other: The argument profile has priority/precedence over the current one.\n \"\"\"\n for pattern, conf in other._pattern_confs.items():\n self._update_conf_definition(pattern, conf)\n\n def _update_conf_definition(self, pattern, conf):\n existing = self._pattern_confs.get(pattern)\n if existing:\n self._pattern_confs[pattern] = conf.compose_conf(existing)\n else:\n self._pattern_confs[pattern] = conf\n\n def rebase_conf_definition(self, other):\n \"\"\"\n for taking the new global.conf and composing with the profile [conf]\n :type other: ConfDefinition\n \"\"\"\n for pattern, conf in other._pattern_confs.items():\n new_conf = conf.filter_user_modules() # Creates a copy, filtered\n existing = self._pattern_confs.get(pattern)\n if existing:\n existing.compose_conf(new_conf)\n else:\n self._pattern_confs[pattern] = new_conf\n\n def update(self, key, value, profile=False, method=\"define\"):\n \"\"\"\n Define/append/prepend/unset any Conf line\n >> update(\"tools.microsoft.msbuild:verbosity\", \"Detailed\")\n \"\"\"\n pattern, name = self._split_pattern_name(key)\n\n if not _is_profile_module(name):\n if profile:\n raise ConanException(\"[conf] '{}' not allowed in profiles\".format(key))\n if pattern is not None:\n raise ConanException(\"Conf '{}' cannot have a package pattern\".format(key))\n\n # strip whitespaces before/after =\n # values are not strip() unless they are a path, to preserve potential whitespaces\n name = name.strip()\n\n # When loading from profile file, latest line has priority\n conf = Conf()\n if method == \"unset\":\n conf.unset(name)\n else:\n getattr(conf, method)(name, value)\n # Update\n self._update_conf_definition(pattern, conf)\n\n def dumps(self):\n result = []\n for pattern, conf in self._pattern_confs.items():\n if pattern is None:\n result.append(conf.dumps())\n else:\n result.append(\"\\n\".join(\"{}:{}\".format(pattern, line) if line else \"\"\n for line in conf.dumps().splitlines()))\n if result:\n result.append(\"\")\n return \"\\n\".join(result)\n\n def serialize(self):\n result = {}\n for pattern, conf in self._pattern_confs.items():\n if pattern is None:\n result.update(conf.serialize())\n else:\n for k, v in conf.serialize():\n result[f\"{pattern}:{k}\"] = v\n return result\n\n @staticmethod\n def _get_evaluated_value(__v):\n \"\"\"\n Function to avoid eval() catching local variables\n \"\"\"\n try:\n # Isolated eval\n parsed_value = eval(__v)\n if isinstance(parsed_value, str): # xxx:xxx = \"my string\"\n # Let's respect the quotes introduced by any user\n parsed_value = '\"{}\"'.format(parsed_value)\n except:\n # It means eval() failed because of a string without quotes\n parsed_value = __v.strip()\n return parsed_value\n\n def loads(self, text, profile=False):\n self._pattern_confs = {}\n\n for line in text.splitlines():\n line = line.strip()\n if not line or line.startswith(\"#\"):\n continue\n for op, method in ConfDefinition.actions:\n tokens = line.split(op, 1)\n if len(tokens) != 2:\n continue\n pattern_name, value = tokens\n parsed_value = ConfDefinition._get_evaluated_value(value)\n self.update(pattern_name, parsed_value, profile=profile, method=method)\n break\n else:\n raise ConanException(\"Bad conf definition: {}\".format(line))\n\n def validate(self):\n for conf in self._pattern_confs.values():\n conf.validate()\n"}
|
{"conan/api/subapi/config.py": [{"type": "function", "name": "ConfigAPI.show", "lines": [25, 27], "signature": "def show(self, pattern):", "doc": ""}], "conan/cli/commands/config.py": [{"type": "function", "name": "config_show", "lines": [72, 79], "signature": "def config_show(conan_api, parser, subparser, *args):", "doc": "Get the value of the specified conf"}], "conans/model/conf.py": [{"type": "function", "name": "Conf.show", "lines": [310, 313], "signature": "def show(self, fnpattern, pattern=\"\"):", "doc": ""}, {"type": "function", "name": "ConfDefinition.show", "lines": [490, 506], "signature": "def show(self, fnpattern):", "doc": "Get the value of the confs that match the requested pattern"}]}
| null |
["conans/test/integration/command/config_test.py::test_config_show"]
|
["conans/test/integration/command/config_test.py::test_missing_subarguments", "conans/test/integration/command/config_test.py::TestConfigHome::test_config_home_default", "conans/test/integration/command/config_test.py::TestConfigHome::test_api_uses_env_var_home", "conans/test/integration/command/config_test.py::test_config_list", "conans/test/integration/command/config_test.py::test_config_install", "conans/test/integration/command/config_test.py::test_config_install_conanignore"]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1678177288.0, "pr_title": "Add `conan config show <conf>` command", "pr_body": "Changelog: Feature: Add `conan config show <conf>` command.\r\nDocs: https://github.com/conan-io/docs/pull/3091\r\n\r\nLet's you get the value for any given configuration. This is the simplest implementation, which does not allow you to input additional profiles nor conf values in the cli, so for now just works from the global.conf, but can be extended in the future:\r\n\r\n - Allow to take into account profiles and conf values supplied by the CLI\r\n - Pass a pattern to match the confs against and show the values of every matching ones\r\n", "pr_timeline": [], "issues": {}}
|
|
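The record above adds a `conan config show <pattern>` command backed by new `Conf.show()` / `ConfDefinition.show()` helpers whose docstring says they "Get the value of the confs that match the requested pattern". The method bodies are not shown in this record, so the following is only a rough sketch of such wildcard matching over a dict shaped like `Conf.serialize()` output (the helper name `show_confs` and the sample values are made up for illustration; the real implementation may differ):

```python
from fnmatch import fnmatch


def show_confs(serialized_confs, fnpattern):
    """Return the confs whose names match the wildcard pattern (illustrative only)."""
    return {name: value
            for name, value in serialized_confs.items()
            if fnmatch(name, fnpattern)}


# Sample data shaped like Conf.serialize() output ({"tools.xxxx": "value1", ...}).
confs = {
    "tools.build:cxxflags": ["flag1xx"],
    "tools.build:cflags": ["flag1"],
    "tools.microsoft.msbuild:verbosity": "Detailed",
}

for name, value in show_confs(confs, "tools.build:*").items():
    print(f"{name}: {value}")
# tools.build:cxxflags: ['flag1xx']
# tools.build:cflags: ['flag1']
```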
conan-io/conan
| 13,509
|
https://github.com/conan-io/conan/pull/13509
|
conan-io__conan-13509
|
[]
|
03a29d352ae7d68751336fa10846e8b03ded5537
|
diff --git a/conan/tools/files/__init__.py b/conan/tools/files/__init__.py
index 2517002f97d..f47cd2beffd 100644
--- a/conan/tools/files/__init__.py
+++ b/conan/tools/files/__init__.py
@@ -1,5 +1,6 @@
from conan.tools.files.files import load, save, mkdir, rmdir, rm, ftp_download, download, get, \
- rename, chdir, unzip, replace_in_file, collect_libs, check_md5, check_sha1, check_sha256
+ rename, chdir, unzip, replace_in_file, collect_libs, check_md5, check_sha1, check_sha256, \
+ move_folder_contents
from conan.tools.files.patches import patch, apply_conandata_patches, export_conandata_patches
from conan.tools.files.cpp_package import CppPackage
diff --git a/conan/tools/files/files.py b/conan/tools/files/files.py
index e46e02b2f2f..36453bb30d4 100644
--- a/conan/tools/files/files.py
+++ b/conan/tools/files/files.py
@@ -544,18 +544,39 @@ def collect_libs(conanfile, folder=None):
# TODO: Do NOT document this yet. It is unclear the interface, maybe should be split
-def swap_child_folder(parent_folder, child_folder):
+def move_folder_contents(src_folder, dst_folder):
""" replaces the current folder contents with the contents of one child folder. This
is used in the SCM monorepo flow, when it is necessary to use one subproject subfolder
to replace the whole cloned git repo
+ /base-folder /base-folder
+ /pkg (src folder) /other/<otherfiles>
+ /other/<otherfiles> /pkg/<pkgfiles>
+ /pkg/<pkgfiles> <files>
+ <files>
+ /siblings
+ <siblingsfiles>
"""
- for f in os.listdir(parent_folder):
- if f != child_folder:
- path = os.path.join(parent_folder, f)
- if os.path.isfile(path):
- os.remove(path)
+ # Remove potential "siblings" folders not wanted
+ src_folder_name = os.path.basename(src_folder)
+ for f in os.listdir(dst_folder):
+ if f != src_folder_name: # FIXME: Only works for 1st level subfolder
+ dst = os.path.join(dst_folder, f)
+ if os.path.isfile(dst):
+ os.remove(dst)
else:
- _internal_rmdir(path)
- child = os.path.join(parent_folder, child_folder)
- for f in os.listdir(child):
- shutil.move(os.path.join(child, f), os.path.join(parent_folder, f))
+ _internal_rmdir(dst)
+
+ # Move all the contents
+ for f in os.listdir(src_folder):
+ src = os.path.join(src_folder, f)
+ dst = os.path.join(dst_folder, f)
+ if not os.path.exists(dst):
+ shutil.move(src, dst_folder)
+ else:
+ for sub_src in os.listdir(src):
+ shutil.move(os.path.join(src, sub_src), dst)
+ _internal_rmdir(src)
+ try:
+ os.rmdir(src_folder)
+ except OSError:
+ pass
|
diff --git a/conans/test/functional/tools/scm/test_git.py b/conans/test/functional/tools/scm/test_git.py
index 0b3d822a33f..a4e873b86a1 100644
--- a/conans/test/functional/tools/scm/test_git.py
+++ b/conans/test/functional/tools/scm/test_git.py
@@ -6,6 +6,8 @@
import pytest
import six
+from conans.test.assets.cmake import gen_cmakelists
+from conans.test.assets.sources import gen_function_cpp
from conans.test.utils.scm import create_local_git_repo, git_add_changes_commit, git_create_bare_repo
from conans.test.utils.test_files import temp_folder
from conans.test.utils.tools import TestClient
@@ -484,8 +486,7 @@ class TestGitMonorepoSCMFlow:
import os, shutil
from conan import ConanFile
from conan.tools.scm import Git
- from conan.tools.files import load, update_conandata
- from conan.tools.files.files import swap_child_folder
+ from conan.tools.files import load, update_conandata, move_folder_contents
class Pkg(ConanFile):
name = "{pkg}"
@@ -509,7 +510,8 @@ def source(self):
sources = self.conan_data["sources"]
git.clone(url=sources["url"], target=".")
git.checkout(commit=sources["commit"])
- swap_child_folder(self.source_folder, sources["folder"])
+ move_folder_contents(os.path.join(self.source_folder, sources["folder"]),
+ self.source_folder)
def build(self):
cmake = os.path.join(self.source_folder, "CMakeLists.txt")
@@ -558,6 +560,80 @@ def test_full_scm(self):
assert "pkg2/0.1: MYCMAKE-BUILD: mycmake2!" in c2.out
assert "pkg2/0.1: MYFILE-BUILD: my2header!" in c2.out
+ @pytest.mark.tool_cmake
+ def test_exports_sources_common_code_layout(self):
+ """ This is a copy of test_exports_sources_common_code_layout in test_in_subfolder.py
+ but instead of using "exports", trying to implement it with Git features
+ """
+ c = TestClient()
+ conanfile = textwrap.dedent("""
+ import os
+ from conan import ConanFile
+ from conan.tools.cmake import cmake_layout, CMake
+ from conan.tools.files import load, copy, save, update_conandata, move_folder_contents
+ from conan.tools.scm import Git
+
+ class Pkg(ConanFile):
+ name = "pkg"
+ version = "0.1"
+ settings = "os", "compiler", "build_type", "arch"
+ generators = "CMakeToolchain"
+
+ def export(self):
+ git = Git(self)
+ scm_url, scm_commit = git.get_url_and_commit()
+ update_conandata(self, {"sources": {"commit": scm_commit, "url": scm_url}})
+
+ def layout(self):
+ self.folders.root = ".."
+ self.folders.subproject = "pkg"
+ cmake_layout(self)
+
+ def source(self):
+ git = Git(self)
+ sources = self.conan_data["sources"]
+ git.clone(url=sources["url"], target=".")
+ git.checkout(commit=sources["commit"])
+ # Layout is pkg/pkg/<files> and pkg/common/<files>
+ # Final we want is pkg/<files> and common/<files>
+ # NOTE: This abs_path is IMPORTANT to avoid the trailing "."
+ src_folder = os.path.abspath(self.source_folder)
+ move_folder_contents(src_folder, os.path.dirname(src_folder))
+
+ def build(self):
+ cmake = CMake(self)
+ cmake.configure()
+ cmake.build()
+ self.run(os.path.join(self.cpp.build.bindirs[0], "myapp"))
+ """)
+ cmake_include = "include(${CMAKE_CURRENT_LIST_DIR}/../common/myutils.cmake)"
+ c.save({"pkg/conanfile.py": conanfile,
+ "pkg/app.cpp": gen_function_cpp(name="main", includes=["../common/myheader"],
+ preprocessor=["MYDEFINE"]),
+ "pkg/CMakeLists.txt": gen_cmakelists(appsources=["app.cpp"],
+ custom_content=cmake_include),
+ "common/myutils.cmake": 'message(STATUS "MYUTILS.CMAKE!")',
+ "common/myheader.h": '#define MYDEFINE "MYDEFINEVALUE"'})
+ c.init_git_repo()
+
+ c.run("create pkg")
+ assert "MYUTILS.CMAKE!" in c.out
+ assert "main: Release!" in c.out
+ assert "MYDEFINE: MYDEFINEVALUE" in c.out
+
+ # Local flow
+ c.run("install pkg")
+ c.run("build pkg")
+ assert "MYUTILS.CMAKE!" in c.out
+ assert "main: Release!" in c.out
+ assert "MYDEFINE: MYDEFINEVALUE" in c.out
+
+ c.run("install pkg -s build_type=Debug")
+ c.run("build pkg")
+ assert "MYUTILS.CMAKE!" in c.out
+ assert "main: Debug!" in c.out
+ assert "MYDEFINE: MYDEFINEVALUE" in c.out
+
class TestConanFileSubfolder:
"""verify that we can have a conanfile in a subfolder
| 2023-03-23T00:01:46
|
{}
|
{"conan/tools/files/__init__.py": "from conan.tools.files.files import load, save, mkdir, rmdir, rm, ftp_download, download, get, \\\n rename, chdir, unzip, replace_in_file, collect_libs, check_md5, check_sha1, check_sha256\n\nfrom conan.tools.files.patches import patch, apply_conandata_patches, export_conandata_patches\nfrom conan.tools.files.cpp_package import CppPackage\nfrom conan.tools.files.packager import AutoPackager\nfrom conan.tools.files.symlinks import symlinks\nfrom conan.tools.files.copy_pattern import copy\nfrom conan.tools.files.conandata import update_conandata\n", "conan/tools/files/files.py": "import configparser\nimport errno\nimport gzip\nimport hashlib\nimport os\nimport platform\nimport shutil\nimport subprocess\nimport sys\nfrom contextlib import contextmanager\nfrom fnmatch import fnmatch\n\nimport six\nfrom urllib.parse import urlparse\nfrom urllib.request import url2pathname\n\nfrom conan.tools import CONAN_TOOLCHAIN_ARGS_FILE, CONAN_TOOLCHAIN_ARGS_SECTION\nfrom conans.client.downloaders.download import run_downloader\nfrom conans.errors import ConanException\nfrom conans.util.files import rmdir as _internal_rmdir\nfrom conans.util.runners import check_output_runner\n\nif six.PY3: # Remove this IF in develop2\n from shutil import which\n\n\ndef load(conanfile, path, encoding=\"utf-8\"):\n \"\"\" Loads a file content \"\"\"\n with open(path, 'rb') as handle:\n tmp = handle.read()\n return tmp.decode(encoding)\n\n\ndef save(conanfile, path, content, append=False, encoding=\"utf-8\"):\n if append:\n mode = \"ab\"\n try:\n os.makedirs(os.path.dirname(path))\n except Exception:\n pass\n else:\n mode = \"wb\"\n dir_path = os.path.dirname(path)\n if not os.path.isdir(dir_path):\n try:\n os.makedirs(dir_path)\n except OSError as error:\n if error.errno not in (errno.EEXIST, errno.ENOENT):\n raise OSError(\"The folder {} does not exist and could not be created ({}).\"\n .format(dir_path, error.strerror))\n except Exception:\n raise\n\n with open(path, mode) as handle:\n if not isinstance(content, bytes):\n content = bytes(content, encoding=encoding)\n handle.write(content)\n\n\ndef mkdir(conanfile, path):\n \"\"\"Recursive mkdir, doesnt fail if already existing\"\"\"\n if os.path.exists(path):\n return\n os.makedirs(path)\n\n\ndef rmdir(conanfile, path):\n _internal_rmdir(path)\n\n\ndef rm(conanfile, pattern, folder, recursive=False):\n for root, _, filenames in os.walk(folder):\n for filename in filenames:\n if fnmatch(filename, pattern):\n fullname = os.path.join(root, filename)\n os.unlink(fullname)\n if not recursive:\n break\n\n\ndef get(conanfile, url, md5=None, sha1=None, sha256=None, destination=\".\", filename=\"\",\n keep_permissions=False, pattern=None, verify=True, retry=None, retry_wait=None,\n auth=None, headers=None, strip_root=False):\n \"\"\" high level downloader + unzipper + (optional hash checker) + delete temporary zip\n \"\"\"\n\n if not filename: # deduce filename from the URL\n url_base = url[0] if isinstance(url, (list, tuple)) else url\n if \"?\" in url_base or \"=\" in url_base:\n raise ConanException(\"Cannot deduce file name from the url: '{}'. 
Use 'filename' \"\n \"parameter.\".format(url_base))\n filename = os.path.basename(url_base)\n\n download(conanfile, url, filename, verify=verify,\n retry=retry, retry_wait=retry_wait, auth=auth, headers=headers,\n md5=md5, sha1=sha1, sha256=sha256)\n unzip(conanfile, filename, destination=destination, keep_permissions=keep_permissions,\n pattern=pattern, strip_root=strip_root)\n os.unlink(filename)\n\n\ndef ftp_download(conanfile, ip, filename, login='', password=''):\n # TODO: Check if we want to join this method with download() one, based on ftp:// protocol\n # this has been requested by some users, but the signature is a bit divergent\n import ftplib\n ftp = None\n try:\n ftp = ftplib.FTP(ip)\n ftp.login(login, password)\n filepath, filename = os.path.split(filename)\n if filepath:\n ftp.cwd(filepath)\n with open(filename, 'wb') as f:\n ftp.retrbinary('RETR ' + filename, f.write)\n except Exception as e:\n try:\n os.unlink(filename)\n except OSError:\n pass\n raise ConanException(\"Error in FTP download from %s\\n%s\" % (ip, str(e)))\n finally:\n if ftp:\n ftp.quit()\n\n\ndef download(conanfile, url, filename, verify=True, retry=None, retry_wait=None,\n auth=None, headers=None, md5=None, sha1=None, sha256=None):\n \"\"\"Retrieves a file from a given URL into a file with a given filename.\n It uses certificates from a list of known verifiers for https downloads,\n but this can be optionally disabled.\n\n :param conanfile:\n :param url: URL to download. It can be a list, which only the first one will be downloaded, and\n the follow URLs will be used as mirror in case of download error.\n :param filename: Name of the file to be created in the local storage\n :param verify: When False, disables https certificate validation\n :param retry: Number of retries in case of failure. Default is overriden by general.retry in the\n conan.conf file or an env variable CONAN_RETRY\n :param retry_wait: Seconds to wait between download attempts. 
Default is overriden by\n general.retry_wait in the conan.conf file or an env variable CONAN_RETRY_WAIT\n :param auth: A tuple of user and password to use HTTPBasic authentication\n :param headers: A dictionary with additional headers\n :param md5: MD5 hash code to check the downloaded file\n :param sha1: SHA-1 hash code to check the downloaded file\n :param sha256: SHA-256 hash code to check the downloaded file\n :return: None\n \"\"\"\n # TODO: Add all parameters to the new conf\n out = conanfile.output\n requester = conanfile._conan_requester\n config = conanfile.conf\n overwrite = True\n\n if config[\"tools.files.download:retry\"]:\n retry = int(config[\"tools.files.download:retry\"])\n elif retry is None:\n retry = 1\n\n if config[\"tools.files.download:retry_wait\"]:\n retry_wait = int(config[\"tools.files.download:retry_wait\"])\n elif retry_wait is None:\n retry_wait = 5\n\n checksum = sha256 or sha1 or md5\n download_cache = config[\"tools.files.download:download_cache\"] if checksum else None\n\n def _download_file(file_url):\n # The download cache is only used if a checksum is provided, otherwise, a normal download\n if file_url.startswith(\"file:\"):\n _copy_local_file_from_uri(conanfile, url=file_url, file_path=filename, md5=md5,\n sha1=sha1, sha256=sha256)\n else:\n run_downloader(requester=requester, output=out, verify=verify, download_cache=download_cache,\n user_download=True, url=file_url,\n file_path=filename, retry=retry, retry_wait=retry_wait, overwrite=overwrite,\n auth=auth, headers=headers, md5=md5, sha1=sha1, sha256=sha256)\n out.writeln(\"\")\n\n if not isinstance(url, (list, tuple)):\n _download_file(url)\n else: # We were provided several URLs to try\n for url_it in url:\n try:\n _download_file(url_it)\n break\n except Exception as error:\n message = \"Could not download from the URL {}: {}.\".format(url_it, str(error))\n out.warn(message + \" Trying another mirror.\")\n else:\n raise ConanException(\"All downloads from ({}) URLs have failed.\".format(len(url)))\n\n\ndef _copy_local_file_from_uri(conanfile, url, file_path, md5=None, sha1=None, sha256=None):\n file_origin = _path_from_file_uri(url)\n shutil.copyfile(file_origin, file_path)\n\n if md5 is not None:\n check_md5(conanfile, file_path, md5)\n if sha1 is not None:\n check_sha1(conanfile, file_path, sha1)\n if sha256 is not None:\n check_sha256(conanfile, file_path, sha256)\n\n\ndef _path_from_file_uri(uri):\n path = urlparse(uri).path\n return url2pathname(path)\n\n\ndef rename(conanfile, src, dst):\n \"\"\"\n rename a file or folder to avoid \"Access is denied\" error on Windows\n :param conanfile: conanfile object\n :param src: Source file or folder\n :param dst: Destination file or folder\n :return: None\n \"\"\"\n # FIXME: This function has been copied from legacy. Needs to fix: which() call and wrap subprocess call.\n if os.path.exists(dst):\n raise ConanException(\"rename {} to {} failed, dst exists.\".format(src, dst))\n\n if platform.system() == \"Windows\" and which(\"robocopy\") and os.path.isdir(src):\n # /move Moves files and directories, and deletes them from the source after they are copied.\n # /e Copies subdirectories. 
Note that this option includes empty directories.\n # /ndl Specifies that directory names are not to be logged.\n # /nfl Specifies that file names are not to be logged.\n process = subprocess.Popen([\"robocopy\", \"/move\", \"/e\", \"/ndl\", \"/nfl\", src, dst],\n stdout=subprocess.PIPE)\n process.communicate()\n if process.returncode > 7: # https://ss64.com/nt/robocopy-exit.html\n raise ConanException(\"rename {} to {} failed.\".format(src, dst))\n else:\n try:\n os.rename(src, dst)\n except Exception as err:\n raise ConanException(\"rename {} to {} failed: {}\".format(src, dst, err))\n\n\ndef load_toolchain_args(generators_folder=None, namespace=None):\n \"\"\"\n Helper function to load the content of any CONAN_TOOLCHAIN_ARGS_FILE\n\n :param generators_folder: `str` folder where is located the CONAN_TOOLCHAIN_ARGS_FILE.\n :param namespace: `str` namespace to be prepended to the filename.\n :return: <class 'configparser.SectionProxy'>\n \"\"\"\n namespace_name = \"{}_{}\".format(namespace, CONAN_TOOLCHAIN_ARGS_FILE) if namespace \\\n else CONAN_TOOLCHAIN_ARGS_FILE\n args_file = os.path.join(generators_folder, namespace_name) if generators_folder \\\n else namespace_name\n toolchain_config = configparser.ConfigParser()\n toolchain_file = toolchain_config.read(args_file)\n if not toolchain_file:\n raise ConanException(\"The file %s does not exist. Please, make sure that it was not\"\n \" generated in another folder.\" % args_file)\n try:\n return toolchain_config[CONAN_TOOLCHAIN_ARGS_SECTION]\n except KeyError:\n raise ConanException(\"The primary section [%s] does not exist in the file %s. Please, add it\"\n \" as the default one of all your configuration variables.\" %\n (CONAN_TOOLCHAIN_ARGS_SECTION, args_file))\n\n\ndef save_toolchain_args(content, generators_folder=None, namespace=None):\n \"\"\"\n Helper function to save the content into the CONAN_TOOLCHAIN_ARGS_FILE\n\n :param content: `dict` all the information to be saved into the toolchain file.\n :param namespace: `str` namespace to be prepended to the filename.\n :param generators_folder: `str` folder where is located the CONAN_TOOLCHAIN_ARGS_FILE\n \"\"\"\n # Let's prune None values\n content_ = {k: v for k, v in content.items() if v is not None}\n namespace_name = \"{}_{}\".format(namespace, CONAN_TOOLCHAIN_ARGS_FILE) if namespace \\\n else CONAN_TOOLCHAIN_ARGS_FILE\n args_file = os.path.join(generators_folder, namespace_name) if generators_folder \\\n else namespace_name\n toolchain_config = configparser.ConfigParser()\n toolchain_config[CONAN_TOOLCHAIN_ARGS_SECTION] = content_\n with open(args_file, \"w\") as f:\n toolchain_config.write(f)\n\n\n@contextmanager\ndef chdir(conanfile, newdir):\n old_path = os.getcwd()\n os.chdir(newdir)\n try:\n yield\n finally:\n os.chdir(old_path)\n\n\ndef unzip(conanfile, filename, destination=\".\", keep_permissions=False, pattern=None,\n strip_root=False):\n \"\"\"\n Unzip a zipped file\n :param filename: Path to the zip file\n :param destination: Destination folder (or file for .gz files)\n :param keep_permissions: Keep the zip permissions. WARNING: Can be\n dangerous if the zip was not created in a NIX system, the bits could\n produce undefined permission schema. Use this option only if you are sure\n that the zip was created correctly.\n :param pattern: Extract only paths matching the pattern. 
This should be a\n Unix shell-style wildcard, see fnmatch documentation for more details.\n :param flat: If all the contents are in a single dir, flat that directory.\n :return:\n \"\"\"\n\n output = conanfile.output\n if (filename.endswith(\".tar.gz\") or filename.endswith(\".tgz\") or\n filename.endswith(\".tbz2\") or filename.endswith(\".tar.bz2\") or\n filename.endswith(\".tar\")):\n return untargz(filename, destination, pattern, strip_root)\n if filename.endswith(\".gz\"):\n with gzip.open(filename, 'rb') as f:\n file_content = f.read()\n target_name = filename[:-3] if destination == \".\" else destination\n save(conanfile, target_name, file_content)\n return\n if filename.endswith(\".tar.xz\") or filename.endswith(\".txz\"):\n return untargz(filename, destination, pattern, strip_root)\n\n import zipfile\n full_path = os.path.normpath(os.path.join(os.getcwd(), destination))\n\n if hasattr(sys.stdout, \"isatty\") and sys.stdout.isatty():\n def print_progress(the_size, uncomp_size):\n the_size = (the_size * 100.0 / uncomp_size) if uncomp_size != 0 else 0\n txt_msg = \"Unzipping %d %%\"\n if the_size > print_progress.last_size + 1:\n output.rewrite_line(txt_msg % the_size)\n print_progress.last_size = the_size\n if int(the_size) == 99:\n output.rewrite_line(txt_msg % 100)\n else:\n def print_progress(_, __):\n pass\n\n with zipfile.ZipFile(filename, \"r\") as z:\n zip_info = z.infolist()\n if pattern:\n zip_info = [zi for zi in zip_info if fnmatch(zi.filename, pattern)]\n if strip_root:\n names = [n.replace(\"\\\\\", \"/\") for n in z.namelist()]\n common_folder = os.path.commonprefix(names).split(\"/\", 1)[0]\n if not common_folder and len(names) > 1:\n raise ConanException(\"The zip file contains more than 1 folder in the root\")\n if len(names) == 1 and len(names[0].split(\"/\", 1)) == 1:\n raise ConanException(\"The zip file contains a file in the root\")\n # Remove the directory entry if present\n # Note: The \"zip\" format contains the \"/\" at the end if it is a directory\n zip_info = [m for m in zip_info if m.filename != (common_folder + \"/\")]\n for member in zip_info:\n name = member.filename.replace(\"\\\\\", \"/\")\n member.filename = name.split(\"/\", 1)[1]\n\n uncompress_size = sum((file_.file_size for file_ in zip_info))\n if uncompress_size > 100000:\n output.info(\"Unzipping %s, this can take a while\" % _human_size(uncompress_size))\n else:\n output.info(\"Unzipping %s\" % _human_size(uncompress_size))\n extracted_size = 0\n\n print_progress.last_size = -1\n if platform.system() == \"Windows\":\n for file_ in zip_info:\n extracted_size += file_.file_size\n print_progress(extracted_size, uncompress_size)\n try:\n z.extract(file_, full_path)\n except Exception as e:\n output.error(\"Error extract %s\\n%s\" % (file_.filename, str(e)))\n else: # duplicated for, to avoid a platform check for each zipped file\n for file_ in zip_info:\n extracted_size += file_.file_size\n print_progress(extracted_size, uncompress_size)\n try:\n z.extract(file_, full_path)\n if keep_permissions:\n # Could be dangerous if the ZIP has been created in a non nix system\n # https://bugs.python.org/issue15795\n perm = file_.external_attr >> 16 & 0xFFF\n os.chmod(os.path.join(full_path, file_.filename), perm)\n except Exception as e:\n output.error(\"Error extract %s\\n%s\" % (file_.filename, str(e)))\n output.writeln(\"\")\n\n\ndef untargz(filename, destination=\".\", pattern=None, strip_root=False):\n # NOT EXPOSED at `conan.tools.files` but used in tests\n import tarfile\n with 
tarfile.TarFile.open(filename, 'r:*') as tarredgzippedFile:\n if not pattern and not strip_root:\n tarredgzippedFile.extractall(destination)\n else:\n members = tarredgzippedFile.getmembers()\n\n if strip_root:\n names = [n.replace(\"\\\\\", \"/\") for n in tarredgzippedFile.getnames()]\n common_folder = os.path.commonprefix(names).split(\"/\", 1)[0]\n if not common_folder and len(names) > 1:\n raise ConanException(\"The tgz file contains more than 1 folder in the root\")\n if len(names) == 1 and len(names[0].split(\"/\", 1)) == 1:\n raise ConanException(\"The tgz file contains a file in the root\")\n # Remove the directory entry if present\n members = [m for m in members if m.name != common_folder]\n for member in members:\n name = member.name.replace(\"\\\\\", \"/\")\n member.name = name.split(\"/\", 1)[1]\n member.path = member.name\n if member.linkpath.startswith(common_folder):\n # https://github.com/conan-io/conan/issues/11065\n linkpath = member.linkpath.replace(\"\\\\\", \"/\")\n member.linkpath = linkpath.split(\"/\", 1)[1]\n member.linkname = member.linkpath\n if pattern:\n members = list(filter(lambda m: fnmatch(m.name, pattern),\n tarredgzippedFile.getmembers()))\n tarredgzippedFile.extractall(destination, members=members)\n\n\ndef _human_size(size_bytes):\n \"\"\"\n format a size in bytes into a 'human' file size, e.g. B, KB, MB, GB, TB, PB\n Note that bytes will be reported in whole numbers but KB and above will have\n greater precision. e.g. 43 B, 443 KB, 4.3 MB, 4.43 GB, etc\n \"\"\"\n UNIT_SIZE = 1000.0\n\n suffixes_table = [('B', 0), ('KB', 1), ('MB', 1), ('GB', 2), ('TB', 2), ('PB', 2)]\n\n num = float(size_bytes)\n the_precision = None\n the_suffix = None\n for suffix, precision in suffixes_table:\n the_precision = precision\n the_suffix = suffix\n if num < UNIT_SIZE:\n break\n num /= UNIT_SIZE\n\n if the_precision == 0:\n formatted_size = \"%d\" % num\n else:\n formatted_size = str(round(num, ndigits=the_precision))\n\n return \"%s%s\" % (formatted_size, the_suffix)\n\n\ndef check_sha1(conanfile, file_path, signature):\n _check_with_algorithm_sum(\"sha1\", file_path, signature)\n\n\ndef check_md5(conanfile, file_path, signature):\n _check_with_algorithm_sum(\"md5\", file_path, signature)\n\n\ndef check_sha256(conanfile, file_path, signature):\n _check_with_algorithm_sum(\"sha256\", file_path, signature)\n\n\ndef _check_with_algorithm_sum(algorithm_name, file_path, signature):\n real_signature = _generic_algorithm_sum(file_path, algorithm_name)\n if real_signature != signature.lower():\n raise ConanException(\"%s signature failed for '%s' file. 
\\n\"\n \" Provided signature: %s \\n\"\n \" Computed signature: %s\" % (algorithm_name,\n os.path.basename(file_path),\n signature,\n real_signature))\n\n\ndef _generic_algorithm_sum(file_path, algorithm_name):\n\n with open(file_path, 'rb') as fh:\n try:\n m = hashlib.new(algorithm_name)\n except ValueError: # FIPS error https://github.com/conan-io/conan/issues/7800\n m = hashlib.new(algorithm_name, usedforsecurity=False)\n while True:\n data = fh.read(8192)\n if not data:\n break\n m.update(data)\n return m.hexdigest()\n\n\ndef replace_in_file(conanfile, file_path, search, replace, strict=True, encoding=\"utf-8\"):\n \"\"\"\n :param conanfile: Conanfile instance\n :param file_path: Path to the file\n :param search: Pattern to search\n :param replace: string to replace the matches\n :param strict: Raise in case \"search\" is not found in the file contents\n :return:\n \"\"\"\n output = conanfile.output\n content = load(conanfile, file_path, encoding=encoding)\n if -1 == content.find(search):\n message = \"replace_in_file didn't find pattern '%s' in '%s' file.\" % (search, file_path)\n if strict:\n raise ConanException(message)\n else:\n output.warn(message)\n return False\n content = content.replace(search, replace)\n save(conanfile, file_path, content, encoding=encoding)\n\n\ndef collect_libs(conanfile, folder=None):\n if not conanfile.package_folder:\n return []\n if folder:\n lib_folders = [os.path.join(conanfile.package_folder, folder)]\n else:\n lib_folders = [os.path.join(conanfile.package_folder, folder)\n for folder in conanfile.cpp_info.libdirs]\n\n ref_libs = {}\n for lib_folder in lib_folders:\n if not os.path.exists(lib_folder):\n conanfile.output.warn(\"Lib folder doesn't exist, can't collect libraries: \"\n \"{0}\".format(lib_folder))\n continue\n # In case of symlinks, only keep shortest file name in the same \"group\"\n files = os.listdir(lib_folder)\n for f in files:\n name, ext = os.path.splitext(f)\n if ext in (\".so\", \".lib\", \".a\", \".dylib\", \".bc\"):\n real_lib = os.path.basename(os.path.realpath(os.path.join(lib_folder, f)))\n if real_lib not in ref_libs or len(f) < len(ref_libs[real_lib]):\n ref_libs[real_lib] = f\n\n result = []\n for f in ref_libs.values():\n name, ext = os.path.splitext(f)\n if ext != \".lib\" and name.startswith(\"lib\"):\n name = name[3:]\n if name not in result:\n result.append(name)\n result.sort()\n return result\n\n\n# TODO: Do NOT document this yet. It is unclear the interface, maybe should be split\ndef swap_child_folder(parent_folder, child_folder):\n \"\"\" replaces the current folder contents with the contents of one child folder. This\n is used in the SCM monorepo flow, when it is necessary to use one subproject subfolder\n to replace the whole cloned git repo\n \"\"\"\n for f in os.listdir(parent_folder):\n if f != child_folder:\n path = os.path.join(parent_folder, f)\n if os.path.isfile(path):\n os.remove(path)\n else:\n _internal_rmdir(path)\n child = os.path.join(parent_folder, child_folder)\n for f in os.listdir(child):\n shutil.move(os.path.join(child, f), os.path.join(parent_folder, f))\n"}
|
{"conan/tools/files/files.py": [{"type": "function", "name": "move_folder_contents", "lines": [547, 582], "signature": "def move_folder_contents(src_folder, dst_folder):", "doc": "replaces the current folder contents with the contents of one child folder. This\nis used in the SCM monorepo flow, when it is necessary to use one subproject subfolder\nto replace the whole cloned git repo\n/base-folder /base-folder\n /pkg (src folder) /other/<otherfiles>\n /other/<otherfiles> /pkg/<pkgfiles>\n /pkg/<pkgfiles> <files>\n <files>\n /siblings\n <siblingsfiles>"}]}
| null |
["conans/test/functional/tools/scm/test_git.py::TestGitMonorepoSCMFlow::test_full_scm"]
|
["conans/test/functional/tools/scm/test_git.py::TestGitBasicCapture::test_capture_commit_local", "conans/test/functional/tools/scm/test_git.py::TestGitBasicCapture::test_capture_remote_url", "conans/test/functional/tools/scm/test_git.py::TestGitBasicCapture::test_capture_remote_pushed_commit", "conans/test/functional/tools/scm/test_git.py::TestGitCaptureSCM::test_capture_commit_local", "conans/test/functional/tools/scm/test_git.py::TestGitCaptureSCM::test_capture_remote_url", "conans/test/functional/tools/scm/test_git.py::TestGitCaptureSCM::test_capture_remote_pushed_commit", "conans/test/functional/tools/scm/test_git.py::TestGitBasicClone::test_clone_checkout", "conans/test/functional/tools/scm/test_git.py::TestGitCloneWithArgs::test_clone_specify_branch_or_tag", "conans/test/functional/tools/scm/test_git.py::TestGitCloneWithArgs::test_clone_invalid_branch_argument", "conans/test/functional/tools/scm/test_git.py::TestGitBasicSCMFlow::test_full_scm", "conans/test/functional/tools/scm/test_git.py::TestGitBasicSCMFlow::test_branch_flow", "conans/test/functional/tools/scm/test_git.py::TestGitBasicSCMFlowSubfolder::test_full_scm", "conans/test/functional/tools/scm/test_git.py::TestConanFileSubfolder::test_conanfile_subfolder", "conans/test/functional/tools/scm/test_git.py::TestConanFileSubfolder::test_git_run", "conans/test/functional/tools/scm/test_git.py::TestGitIncluded::test_git_included", "conans/test/functional/tools/scm/test_git.py::TestGitIncluded::test_git_included_subfolder"]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1679529410.0, "pr_title": "new move_folder_contents() file helper to re-arrange repos", "pr_body": "Changelog: Feature: New ``move_folder_contents()`` file helper to re-arrange repos folders.\r\nDocs: https://github.com/conan-io/docs/pull/3196\r\nClose https://github.com/conan-io/conan/issues/12360\r\n\r\nThis was already private internal to implement some tests, but tests are a public use case, so it should be made public, I have refactored, improved, and added more tests for this helper.\r\n\r\n\r\n", "pr_timeline": [], "issues": {}}
|
|
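The record above replaces the private `swap_child_folder()` helper with a public `move_folder_contents(src_folder, dst_folder)` that prunes unwanted siblings in the destination and then moves the source folder's contents up one level (the SCM monorepo flow). Below is a simplified, stdlib-only sketch of that behavior so it can run without Conan installed; it is not the actual helper (which, per the diff, also merges into destination subfolders that already exist), and the file names are invented:

```python
import os
import shutil
import tempfile


def demo_move_up(src_folder, dst_folder):
    """Simplified stand-in for conan.tools.files.move_folder_contents():
    remove dst entries that are not the src folder, then move src's contents up."""
    src_name = os.path.basename(src_folder)
    for entry in os.listdir(dst_folder):
        if entry != src_name:
            path = os.path.join(dst_folder, entry)
            if os.path.isdir(path):
                shutil.rmtree(path)
            else:
                os.remove(path)
    for entry in os.listdir(src_folder):
        shutil.move(os.path.join(src_folder, entry), dst_folder)
    os.rmdir(src_folder)


base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "pkg", "include"))
open(os.path.join(base, "pkg", "conanfile.py"), "w").close()
open(os.path.join(base, "sibling.txt"), "w").close()  # unwanted sibling, gets pruned

demo_move_up(os.path.join(base, "pkg"), base)
print(sorted(os.listdir(base)))  # ['conanfile.py', 'include']
```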
conan-io/conan
| 14,233
|
https://github.com/conan-io/conan/pull/14233
|
conan-io__conan-14233
|
[]
|
57c7049c758a685cdc02e870eb77617af5e45da6
|
diff --git a/conan/tools/gnu/pkgconfigdeps.py b/conan/tools/gnu/pkgconfigdeps.py
index e793711ee81..1aa86e9d472 100644
--- a/conan/tools/gnu/pkgconfigdeps.py
+++ b/conan/tools/gnu/pkgconfigdeps.py
@@ -1,12 +1,13 @@
import os
+import re
import textwrap
from collections import namedtuple
from jinja2 import Template, StrictUndefined
+from conan.errors import ConanException
from conan.internal import check_duplicated_generator
from conan.tools.gnu.gnudeps_flags import GnuDepsFlags
-from conan.errors import ConanException
from conans.model.dependencies import get_transitive_requires
from conans.util.files import save
@@ -75,8 +76,8 @@ def _get_suffix(req, build_context_suffix=None):
return build_context_suffix.get(req.ref.name, "")
-def _get_formatted_dirs(folders, prefix_path_):
- ret = []
+def _get_formatted_dirs(folder_name, folders, prefix_path_):
+ ret = {}
for i, directory in enumerate(folders):
directory = os.path.normpath(directory).replace("\\", "/")
prefix = ""
@@ -85,7 +86,9 @@ def _get_formatted_dirs(folders, prefix_path_):
elif directory.startswith(prefix_path_):
prefix = "${prefix}/"
directory = os.path.relpath(directory, prefix_path_).replace("\\", "/")
- ret.append("%s%s" % (prefix, directory))
+ suffix = str(i) if i else ""
+ var_name = f"{folder_name}{suffix}"
+ ret[var_name] = f"{prefix}{directory}"
return ret
@@ -95,71 +98,19 @@ def _get_formatted_dirs(folders, prefix_path_):
class _PCContentGenerator:
template = textwrap.dedent("""\
- {%- macro get_libs(libdirs, cpp_info, gnudeps_flags) -%}
- {%- for _ in libdirs -%}
- {{ '-L"${libdir%s}"' % (loop.index0 or "") + " " }}
- {%- endfor -%}
- {%- for sys_lib in (cpp_info.libs + cpp_info.system_libs) -%}
- {{ "-l%s" % sys_lib + " " }}
- {%- endfor -%}
- {%- for shared_flag in (cpp_info.sharedlinkflags + cpp_info.exelinkflags) -%}
- {{ shared_flag + " " }}
- {%- endfor -%}
- {%- for framework in (gnudeps_flags.frameworks + gnudeps_flags.framework_paths) -%}
- {{ framework + " " }}
- {%- endfor -%}
- {%- endmacro -%}
-
- {%- macro get_cflags(includedirs, cxxflags, cflags, defines) -%}
- {%- for _ in includedirs -%}
- {{ '-I"${includedir%s}"' % (loop.index0 or "") + " " }}
- {%- endfor -%}
- {%- for cxxflag in cxxflags -%}
- {{ cxxflag + " " }}
- {%- endfor -%}
- {%- for cflag in cflags-%}
- {{ cflag + " " }}
- {%- endfor -%}
- {%- for define in defines-%}
- {{ "-D%s" % define + " " }}
- {%- endfor -%}
- {%- endmacro -%}
-
- prefix={{ prefix_path }}
- {% for path in libdirs %}
- {{ "libdir{}={}".format((loop.index0 or ""), path) }}
- {% endfor %}
- {% for path in includedirs %}
- {{ "includedir%s=%s" % ((loop.index0 or ""), path) }}
- {% endfor %}
- {% for path in bindirs %}
- {{ "bindir{}={}".format((loop.index0 or ""), path) }}
+ {% for k, v in pc_variables.items() %}
+ {{ "{}={}".format(k, v) }}
{% endfor %}
- {% if pkg_config_custom_content %}
- # Custom PC content
- {{ pkg_config_custom_content }}
- {% endif %}
Name: {{ name }}
Description: {{ description }}
Version: {{ version }}
- Libs: {{ get_libs(libdirs, cpp_info, gnudeps_flags) }}
- Cflags: {{ get_cflags(includedirs, cxxflags, cflags, defines) }}
- {% if requires|length %}
- Requires: {{ requires|join(' ') }}
+ {% if libflags %}
+ Libs: {{ libflags }}
{% endif %}
- """)
-
- shortened_template = textwrap.dedent("""\
- prefix={{ prefix_path }}
- {% if pkg_config_custom_content %}
- # Custom PC content
- {{ pkg_config_custom_content }}
+ {% if cflags %}
+ Cflags: {{ cflags }}
{% endif %}
-
- Name: {{ name }}
- Description: {{ description }}
- Version: {{ version }}
{% if requires|length %}
Requires: {{ requires|join(' ') }}
{% endif %}
@@ -175,50 +126,74 @@ def _get_prefix_path(self):
else self._dep.package_folder
return root_folder.replace("\\", "/")
- def content(self, info):
- assert isinstance(info, _PCInfo) and info.cpp_info is not None
-
+ def _get_pc_variables(self, cpp_info):
+ """
+ Get all the freeform variables defined by Conan and
+ users (through ``pkg_config_custom_content``). This last ones will override the
+ Conan defined variables.
+ """
prefix_path = self._get_prefix_path()
- version = info.cpp_info.get_property("component_version") or self._dep.ref.version
- libdirs = _get_formatted_dirs(info.cpp_info.libdirs, prefix_path)
- includedirs = _get_formatted_dirs(info.cpp_info.includedirs, prefix_path)
- bindirs = _get_formatted_dirs(info.cpp_info.bindirs, prefix_path)
- custom_content = info.cpp_info.get_property("pkg_config_custom_content")
-
+ pc_variables = {"prefix": prefix_path}
+ if cpp_info is None:
+ return pc_variables
+ # Already formatted directories
+ pc_variables.update(_get_formatted_dirs("libdir", cpp_info.libdirs, prefix_path))
+ pc_variables.update(_get_formatted_dirs("includedir", cpp_info.includedirs, prefix_path))
+ pc_variables.update(_get_formatted_dirs("bindir", cpp_info.bindirs, prefix_path))
+ # Get the custom content introduced by user and sanitize it
+ custom_content = cpp_info.get_property("pkg_config_custom_content")
+ if isinstance(custom_content, dict):
+ pc_variables.update(custom_content)
+ elif custom_content: # Legacy: custom content is string
+ pc_variable_pattern = re.compile("^(.*)=(.*)")
+ for line in custom_content.splitlines():
+ match = pc_variable_pattern.match(line)
+ if match:
+ key, value = match.group(1).strip(), match.group(2).strip()
+ pc_variables[key] = value
+ return pc_variables
+
+ def _get_lib_flags(self, libdirvars, cpp_info):
+ gnudeps_flags = GnuDepsFlags(self._conanfile, cpp_info)
+ libdirsflags = ['-L"${%s}"' % d for d in libdirvars]
+ system_libs = ["-l%s" % l for l in (cpp_info.libs + cpp_info.system_libs)]
+ shared_flags = cpp_info.sharedlinkflags + cpp_info.exelinkflags
+ framework_flags = gnudeps_flags.frameworks + gnudeps_flags.framework_paths
+ return " ".join(libdirsflags + system_libs + shared_flags + framework_flags)
+
+ def _get_cflags(self, includedirvars, cpp_info):
+ includedirsflags = ['-I"${%s}"' % d for d in includedirvars]
+ cxxflags = [var.replace('"', '\\"') for var in cpp_info.cxxflags]
+ cflags = [var.replace('"', '\\"') for var in cpp_info.cflags]
+ defines = ["-D%s" % var.replace('"', '\\"') for var in cpp_info.defines]
+ return " ".join(includedirsflags + cxxflags + cflags + defines)
+
+ def _get_context(self, info):
+ pc_variables = self._get_pc_variables(info.cpp_info)
context = {
- "prefix_path": prefix_path,
- "libdirs": libdirs,
- "includedirs": includedirs,
- "bindirs": bindirs,
- "pkg_config_custom_content": custom_content,
"name": info.name,
"description": info.description,
- "version": version,
+ "version": self._dep.ref.version,
"requires": info.requires,
- "cpp_info": info.cpp_info,
- "cxxflags": [var.replace('"', '\\"') for var in info.cpp_info.cxxflags],
- "cflags": [var.replace('"', '\\"') for var in info.cpp_info.cflags],
- "defines": [var.replace('"', '\\"') for var in info.cpp_info.defines],
- "gnudeps_flags": GnuDepsFlags(self._conanfile, info.cpp_info)
+ "pc_variables": pc_variables,
+ "cflags": "",
+ "libflags": ""
}
- template = Template(self.template, trim_blocks=True, lstrip_blocks=True,
- undefined=StrictUndefined)
- return template.render(context)
+ if info.cpp_info is not None:
+ context.update({
+ "version": info.cpp_info.get_property("component_version") or self._dep.ref.version,
+ "cflags": self._get_cflags([d for d in pc_variables if d.startswith("includedir")],
+ info.cpp_info),
+ "libflags": self._get_lib_flags([d for d in pc_variables if d.startswith("libdir")],
+ info.cpp_info)
+ })
+ return context
- def shortened_content(self, info):
+ def content(self, info):
assert isinstance(info, _PCInfo)
- custom_content = info.cpp_info.get_property("pkg_config_custom_content") if info.cpp_info \
- else None
- context = {
- "prefix_path": self._get_prefix_path(),
- "pkg_config_custom_content": custom_content,
- "name": info.name,
- "description": info.description,
- "version": self._dep.ref.version,
- "requires": info.requires
- }
- template = Template(self.shortened_template, trim_blocks=True,
- lstrip_blocks=True, undefined=StrictUndefined)
+ context = self._get_context(info)
+ template = Template(self.template, trim_blocks=True, lstrip_blocks=True,
+ undefined=StrictUndefined)
return template.render(context)
@@ -340,7 +315,7 @@ def _update_pc_files(info):
pc_files[f"{info.name}.pc"] = self._content_generator.content(info)
for alias in info.aliases:
alias_info = _PCInfo(alias, [info.name], f"Alias {alias} for {info.name}", None, [])
- pc_files[f"{alias}.pc"] = self._content_generator.shortened_content(alias_info)
+ pc_files[f"{alias}.pc"] = self._content_generator.content(alias_info)
pc_files = {}
# If the package has no components, then we have to calculate only the root pc file
@@ -365,11 +340,11 @@ def _update_pc_files(info):
package_info = _PCInfo(pkg_name, pkg_requires, f"Conan package: {pkg_name}",
self._dep.cpp_info, _get_package_aliases(self._dep))
# It'll be enough creating a shortened PC file. This file will be like an alias
- pc_files[f"{package_info.name}.pc"] = self._content_generator.shortened_content(package_info)
+ pc_files[f"{package_info.name}.pc"] = self._content_generator.content(package_info)
for alias in package_info.aliases:
alias_info = _PCInfo(alias, [package_info.name],
f"Alias {alias} for {package_info.name}", None, [])
- pc_files[f"{alias}.pc"] = self._content_generator.shortened_content(alias_info)
+ pc_files[f"{alias}.pc"] = self._content_generator.content(alias_info)
return pc_files
|
diff --git a/conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py b/conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py
index e211bd6e889..fb6bc02c644 100644
--- a/conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py
+++ b/conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py
@@ -94,9 +94,7 @@ def package_info(self):
expected = textwrap.dedent("""
Name: mylib
Description: Conan package: mylib
- Version: 0.1
- Libs:%s
- Cflags: """ % " ") # ugly hack for trailing whitespace removed by IDEs
+ Version: 0.1""")
assert "\n".join(pc_content.splitlines()[1:]) == expected
@@ -173,9 +171,10 @@ def package(self):
def package_info(self):
custom_content = textwrap.dedent(\"""
+ bindir=${prefix}/my/bin/folder
+ fakelibdir=${prefix}/my/lib/folder
datadir=${prefix}/share
schemasdir=${datadir}/mylib/schemas
- bindir=${prefix}/bin
\""")
self.cpp_info.set_property("pkg_config_custom_content", custom_content)
self.cpp_info.includedirs = ["include"]
@@ -187,11 +186,23 @@ def package_info(self):
client.run("install --requires=pkg/0.1@ -g PkgConfigDeps")
pc_content = client.load("pkg.pc")
- assert "libdir=${prefix}/lib" in pc_content
- assert "datadir=${prefix}/share" in pc_content
- assert "schemasdir=${datadir}/mylib/schemas" in pc_content
- assert "bindir=${prefix}/bin" in pc_content
- assert "Name: pkg" in pc_content
+ prefix = pc_content.splitlines()[0]
+ expected = textwrap.dedent(f"""\
+ {prefix}
+ libdir=${{prefix}}/lib
+ includedir=${{prefix}}/include
+ bindir=${{prefix}}/my/bin/folder
+ fakelibdir=${{prefix}}/my/lib/folder
+ datadir=${{prefix}}/share
+ schemasdir=${{datadir}}/mylib/schemas
+
+ Name: pkg
+ Description: Conan package: pkg
+ Version: 0.1
+ Libs: -L"${{libdir}}"
+ Cflags: -I"${{includedir}}"
+ """)
+ assert expected == pc_content
def test_custom_content_and_version_components():
@@ -446,15 +457,17 @@ def package_info(self):
pc_content = client.load("pkg_other_name.pc")
content = textwrap.dedent(f"""\
{prefix}
- # Custom PC content
-
+ libdir=${{prefix}}/lib
+ includedir=${{prefix}}/include
+ bindir=${{prefix}}/bin
datadir=${{prefix}}/share
schemasdir=${{datadir}}/mylib/schemas
-
Name: pkg_other_name
Description: Conan package: pkg_other_name
Version: 0.3
+ Libs: -L"${{libdir}}"
+ Cflags: -I"${{includedir}}"
Requires: compo1
""")
assert content == pc_content
| 2023-07-06T06:04:41
|
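The refactor above collects all freeform `.pc` variables into a single `pc_variables` dict via `_get_pc_variables()`; when `pkg_config_custom_content` is the legacy string form, it scans `key=value` lines with the regex `^(.*)=(.*)` and lets the user-provided entries override Conan's own directory variables. A small self-contained illustration of that merge follows (the sample prefix and directory values are invented; the real generator derives them from the package folder):

```python
import re
import textwrap

# Variables Conan would define first (values invented for the example).
pc_variables = {
    "prefix": "/path/to/package",
    "libdir": "${prefix}/lib",
    "includedir": "${prefix}/include",
    "bindir": "${prefix}/bin",
}

# Legacy string form of pkg_config_custom_content, as in the updated test recipe.
custom_content = textwrap.dedent("""\
    bindir=${prefix}/my/bin/folder
    datadir=${prefix}/share
    schemasdir=${datadir}/mylib/schemas
    """)

# Same key=value scan the new _get_pc_variables() performs; note bindir is overridden.
pc_variable_pattern = re.compile(r"^(.*)=(.*)")
for line in custom_content.splitlines():
    match = pc_variable_pattern.match(line)
    if match:
        pc_variables[match.group(1).strip()] = match.group(2).strip()

for name, value in pc_variables.items():
    print(f"{name}={value}")
```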
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"conan/tools/gnu/pkgconfigdeps.py": "import os\nimport textwrap\nfrom collections import namedtuple\n\nfrom jinja2 import Template, StrictUndefined\n\nfrom conan.internal import check_duplicated_generator\nfrom conan.tools.gnu.gnudeps_flags import GnuDepsFlags\nfrom conan.errors import ConanException\nfrom conans.model.dependencies import get_transitive_requires\nfrom conans.util.files import save\n\n\ndef _get_name_with_namespace(namespace, name):\n \"\"\"\n Build a name with a namespace, e.g., openssl-crypto\n \"\"\"\n return f\"{namespace}-{name}\"\n\n\ndef _get_package_reference_name(dep):\n \"\"\"\n Get the reference name for the given package\n \"\"\"\n return dep.ref.name\n\n\ndef _get_package_aliases(dep):\n pkg_aliases = dep.cpp_info.get_property(\"pkg_config_aliases\")\n return pkg_aliases or []\n\n\ndef _get_component_aliases(dep, comp_name):\n if comp_name not in dep.cpp_info.components:\n # foo::foo might be referencing the root cppinfo\n if _get_package_reference_name(dep) == comp_name:\n return _get_package_aliases(dep)\n raise ConanException(\"Component '{name}::{cname}' not found in '{name}' \"\n \"package requirement\".format(name=_get_package_reference_name(dep),\n cname=comp_name))\n comp_aliases = dep.cpp_info.components[comp_name].get_property(\"pkg_config_aliases\")\n return comp_aliases or []\n\n\ndef _get_package_name(dep, build_context_suffix=None):\n pkg_name = dep.cpp_info.get_property(\"pkg_config_name\") or _get_package_reference_name(dep)\n suffix = _get_suffix(dep, build_context_suffix)\n return f\"{pkg_name}{suffix}\"\n\n\ndef _get_component_name(dep, comp_name, build_context_suffix=None):\n if comp_name not in dep.cpp_info.components:\n # foo::foo might be referencing the root cppinfo\n if _get_package_reference_name(dep) == comp_name:\n return _get_package_name(dep, build_context_suffix)\n raise ConanException(\"Component '{name}::{cname}' not found in '{name}' \"\n \"package requirement\".format(name=_get_package_reference_name(dep),\n cname=comp_name))\n comp_name = dep.cpp_info.components[comp_name].get_property(\"pkg_config_name\")\n suffix = _get_suffix(dep, build_context_suffix)\n return f\"{comp_name}{suffix}\" if comp_name else None\n\n\ndef _get_suffix(req, build_context_suffix=None):\n \"\"\"\n Get the package name suffix coming from PkgConfigDeps.build_context_suffix attribute, but only\n for requirements declared as build requirement.\n\n :param req: requirement ConanFile instance\n :param build_context_suffix: `dict` with all the suffixes\n :return: `str` with the suffix\n \"\"\"\n if not build_context_suffix or not req.is_build_context:\n return \"\"\n return build_context_suffix.get(req.ref.name, \"\")\n\n\ndef _get_formatted_dirs(folders, prefix_path_):\n ret = []\n for i, directory in enumerate(folders):\n directory = os.path.normpath(directory).replace(\"\\\\\", \"/\")\n prefix = \"\"\n if not os.path.isabs(directory):\n prefix = \"${prefix}/\"\n elif directory.startswith(prefix_path_):\n prefix = \"${prefix}/\"\n directory = os.path.relpath(directory, prefix_path_).replace(\"\\\\\", \"/\")\n ret.append(\"%s%s\" % (prefix, directory))\n return ret\n\n\n_PCInfo = namedtuple(\"PCInfo\", ['name', 'requires', 'description', 'cpp_info', 'aliases'])\n\n\nclass _PCContentGenerator:\n\n template = textwrap.dedent(\"\"\"\\\n {%- macro get_libs(libdirs, cpp_info, gnudeps_flags) -%}\n {%- for _ in libdirs -%}\n {{ '-L\"${libdir%s}\"' % (loop.index0 or \"\") + \" \" }}\n {%- endfor -%}\n {%- for sys_lib in (cpp_info.libs + cpp_info.system_libs) 
-%}\n {{ \"-l%s\" % sys_lib + \" \" }}\n {%- endfor -%}\n {%- for shared_flag in (cpp_info.sharedlinkflags + cpp_info.exelinkflags) -%}\n {{ shared_flag + \" \" }}\n {%- endfor -%}\n {%- for framework in (gnudeps_flags.frameworks + gnudeps_flags.framework_paths) -%}\n {{ framework + \" \" }}\n {%- endfor -%}\n {%- endmacro -%}\n\n {%- macro get_cflags(includedirs, cxxflags, cflags, defines) -%}\n {%- for _ in includedirs -%}\n {{ '-I\"${includedir%s}\"' % (loop.index0 or \"\") + \" \" }}\n {%- endfor -%}\n {%- for cxxflag in cxxflags -%}\n {{ cxxflag + \" \" }}\n {%- endfor -%}\n {%- for cflag in cflags-%}\n {{ cflag + \" \" }}\n {%- endfor -%}\n {%- for define in defines-%}\n {{ \"-D%s\" % define + \" \" }}\n {%- endfor -%}\n {%- endmacro -%}\n\n prefix={{ prefix_path }}\n {% for path in libdirs %}\n {{ \"libdir{}={}\".format((loop.index0 or \"\"), path) }}\n {% endfor %}\n {% for path in includedirs %}\n {{ \"includedir%s=%s\" % ((loop.index0 or \"\"), path) }}\n {% endfor %}\n {% for path in bindirs %}\n {{ \"bindir{}={}\".format((loop.index0 or \"\"), path) }}\n {% endfor %}\n {% if pkg_config_custom_content %}\n # Custom PC content\n {{ pkg_config_custom_content }}\n {% endif %}\n\n Name: {{ name }}\n Description: {{ description }}\n Version: {{ version }}\n Libs: {{ get_libs(libdirs, cpp_info, gnudeps_flags) }}\n Cflags: {{ get_cflags(includedirs, cxxflags, cflags, defines) }}\n {% if requires|length %}\n Requires: {{ requires|join(' ') }}\n {% endif %}\n \"\"\")\n\n shortened_template = textwrap.dedent(\"\"\"\\\n prefix={{ prefix_path }}\n {% if pkg_config_custom_content %}\n # Custom PC content\n {{ pkg_config_custom_content }}\n {% endif %}\n\n Name: {{ name }}\n Description: {{ description }}\n Version: {{ version }}\n {% if requires|length %}\n Requires: {{ requires|join(' ') }}\n {% endif %}\n \"\"\")\n\n def __init__(self, conanfile, dep):\n self._conanfile = conanfile\n self._dep = dep\n\n def _get_prefix_path(self):\n # If editable, package_folder can be None\n root_folder = self._dep.recipe_folder if self._dep.package_folder is None \\\n else self._dep.package_folder\n return root_folder.replace(\"\\\\\", \"/\")\n\n def content(self, info):\n assert isinstance(info, _PCInfo) and info.cpp_info is not None\n\n prefix_path = self._get_prefix_path()\n version = info.cpp_info.get_property(\"component_version\") or self._dep.ref.version\n libdirs = _get_formatted_dirs(info.cpp_info.libdirs, prefix_path)\n includedirs = _get_formatted_dirs(info.cpp_info.includedirs, prefix_path)\n bindirs = _get_formatted_dirs(info.cpp_info.bindirs, prefix_path)\n custom_content = info.cpp_info.get_property(\"pkg_config_custom_content\")\n\n context = {\n \"prefix_path\": prefix_path,\n \"libdirs\": libdirs,\n \"includedirs\": includedirs,\n \"bindirs\": bindirs,\n \"pkg_config_custom_content\": custom_content,\n \"name\": info.name,\n \"description\": info.description,\n \"version\": version,\n \"requires\": info.requires,\n \"cpp_info\": info.cpp_info,\n \"cxxflags\": [var.replace('\"', '\\\\\"') for var in info.cpp_info.cxxflags],\n \"cflags\": [var.replace('\"', '\\\\\"') for var in info.cpp_info.cflags],\n \"defines\": [var.replace('\"', '\\\\\"') for var in info.cpp_info.defines],\n \"gnudeps_flags\": GnuDepsFlags(self._conanfile, info.cpp_info)\n }\n template = Template(self.template, trim_blocks=True, lstrip_blocks=True,\n undefined=StrictUndefined)\n return template.render(context)\n\n def shortened_content(self, info):\n assert isinstance(info, _PCInfo)\n custom_content = 
info.cpp_info.get_property(\"pkg_config_custom_content\") if info.cpp_info \\\n else None\n context = {\n \"prefix_path\": self._get_prefix_path(),\n \"pkg_config_custom_content\": custom_content,\n \"name\": info.name,\n \"description\": info.description,\n \"version\": self._dep.ref.version,\n \"requires\": info.requires\n }\n template = Template(self.shortened_template, trim_blocks=True,\n lstrip_blocks=True, undefined=StrictUndefined)\n return template.render(context)\n\n\nclass _PCGenerator:\n\n def __init__(self, conanfile, dep, build_context_suffix=None):\n self._conanfile = conanfile\n self._build_context_suffix = build_context_suffix or {}\n self._dep = dep\n self._content_generator = _PCContentGenerator(self._conanfile, self._dep)\n self._transitive_reqs = get_transitive_requires(self._conanfile, dep)\n\n def _get_cpp_info_requires_names(self, cpp_info):\n \"\"\"\n Get all the pkg-config valid names from the requires ones given a CppInfo object.\n\n For instance, those requires could be coming from:\n\n ```python\n from conan import ConanFile\n class PkgConfigConan(ConanFile):\n requires = \"other/1.0\"\n\n def package_info(self):\n self.cpp_info.requires = [\"other::cmp1\"]\n\n # Or:\n\n def package_info(self):\n self.cpp_info.components[\"cmp\"].requires = [\"other::cmp1\"]\n ```\n \"\"\"\n dep_ref_name = _get_package_reference_name(self._dep)\n ret = []\n for req in cpp_info.requires:\n pkg_ref_name, comp_ref_name = req.split(\"::\") if \"::\" in req else (dep_ref_name, req)\n # For instance, dep == \"hello/1.0\" and req == \"other::cmp1\" -> hello != other\n if dep_ref_name != pkg_ref_name:\n try:\n req_conanfile = self._transitive_reqs[pkg_ref_name]\n except KeyError:\n continue # If the dependency is not in the transitive, might be skipped\n else: # For instance, dep == \"hello/1.0\" and req == \"hello::cmp1\" -> hello == hello\n req_conanfile = self._dep\n comp_name = _get_component_name(req_conanfile, comp_ref_name, self._build_context_suffix)\n if not comp_name:\n pkg_name = _get_package_name(req_conanfile, self._build_context_suffix)\n # Creating a component name with namespace, e.g., dep-comp1\n comp_name = _get_name_with_namespace(pkg_name, comp_ref_name)\n ret.append(comp_name)\n return ret\n\n @property\n def components_info(self):\n \"\"\"\n Get the whole package and its components information like their own requires, names and even\n the cpp_info for each component.\n\n :return: `list` of `_PCInfo` objects with all the components information\n \"\"\"\n pkg_name = _get_package_name(self._dep, self._build_context_suffix)\n components_info = []\n # Loop through all the package's components\n for comp_ref_name, cpp_info in self._dep.cpp_info.get_sorted_components().items():\n # At first, let's check if we have defined some components requires, e.g., \"dep::cmp1\"\n comp_requires_names = self._get_cpp_info_requires_names(cpp_info)\n comp_name = _get_component_name(self._dep, comp_ref_name, self._build_context_suffix)\n if not comp_name:\n comp_name = _get_name_with_namespace(pkg_name, comp_ref_name)\n comp_description = f\"Conan component: {comp_name}\"\n else:\n comp_description = f\"Conan component: {pkg_name}-{comp_name}\"\n comp_aliases = _get_component_aliases(self._dep, comp_ref_name)\n # Save each component information\n components_info.append(_PCInfo(comp_name, comp_requires_names, comp_description,\n cpp_info, comp_aliases))\n return components_info\n\n @property\n def package_info(self):\n \"\"\"\n Get the whole package information\n\n :return: `_PCInfo` 
object with the package information\n \"\"\"\n pkg_name = _get_package_name(self._dep, self._build_context_suffix)\n # At first, let's check if we have defined some global requires, e.g., \"other::cmp1\"\n requires = self._get_cpp_info_requires_names(self._dep.cpp_info)\n # If we have found some component requires it would be enough\n if not requires:\n # If no requires were found, let's try to get all the direct dependencies,\n # e.g., requires = \"other_pkg/1.0\"\n requires = [_get_package_name(req, self._build_context_suffix)\n for req in self._transitive_reqs.values()]\n description = \"Conan package: %s\" % pkg_name\n aliases = _get_package_aliases(self._dep)\n cpp_info = self._dep.cpp_info\n return _PCInfo(pkg_name, requires, description, cpp_info, aliases)\n\n @property\n def pc_files(self):\n \"\"\"\n Get all the PC files and contents for any dependency:\n\n * If the given dependency does not have components:\n The PC file will be the dependency one.\n\n * If the given dependency has components:\n The PC files will be saved in this order:\n 1- Package components.\n 2- Root component.\n\n Note: If the root-package PC name matches with any other of the components one, the first one\n is not going to be created. Components have more priority than root package.\n\n * Apart from those PC files, if there are any aliases declared, they will be created too.\n \"\"\"\n def _update_pc_files(info):\n pc_files[f\"{info.name}.pc\"] = self._content_generator.content(info)\n for alias in info.aliases:\n alias_info = _PCInfo(alias, [info.name], f\"Alias {alias} for {info.name}\", None, [])\n pc_files[f\"{alias}.pc\"] = self._content_generator.shortened_content(alias_info)\n\n pc_files = {}\n # If the package has no components, then we have to calculate only the root pc file\n if not self._dep.cpp_info.has_components:\n package_info = self.package_info\n _update_pc_files(package_info)\n return pc_files\n\n # First, let's load all the components PC files\n # Loop through all the package's components\n pkg_requires = []\n for component_info in self.components_info:\n _update_pc_files(component_info)\n # Saving components name as the package requires\n pkg_requires.append(component_info.name)\n\n # Second, let's load the root package's PC file ONLY\n # if it does not already exist in components one\n # Issue related: https://github.com/conan-io/conan/issues/10341\n pkg_name = _get_package_name(self._dep, self._build_context_suffix)\n if f\"{pkg_name}.pc\" not in pc_files:\n package_info = _PCInfo(pkg_name, pkg_requires, f\"Conan package: {pkg_name}\",\n self._dep.cpp_info, _get_package_aliases(self._dep))\n # It'll be enough creating a shortened PC file. This file will be like an alias\n pc_files[f\"{package_info.name}.pc\"] = self._content_generator.shortened_content(package_info)\n for alias in package_info.aliases:\n alias_info = _PCInfo(alias, [package_info.name],\n f\"Alias {alias} for {package_info.name}\", None, [])\n pc_files[f\"{alias}.pc\"] = self._content_generator.shortened_content(alias_info)\n\n return pc_files\n\n\nclass PkgConfigDeps:\n\n def __init__(self, conanfile):\n self._conanfile = conanfile\n # Activate the build *.pc files for the specified libraries\n self.build_context_activated = []\n # If specified, the files/requires/names for the build context will be renamed appending\n # a suffix. 
It is necessary in case of same require and build_require and will cause an error\n self.build_context_suffix = {}\n\n def _validate_build_requires(self, host_req, build_req):\n \"\"\"\n Check if any package exists at host and build context at the same time, and\n it doesn't have any suffix to avoid any name collisions\n\n :param host_req: list of host requires\n :param build_req: list of build requires\n \"\"\"\n activated_br = {r.ref.name for r in build_req.values()\n if r.ref.name in self.build_context_activated}\n common_names = {r.ref.name for r in host_req.values()}.intersection(activated_br)\n without_suffixes = [common_name for common_name in common_names\n if self.build_context_suffix.get(common_name) is None]\n if without_suffixes:\n raise ConanException(f\"The packages {without_suffixes} exist both as 'require' and as\"\n f\" 'build require'. You need to specify a suffix using the \"\n f\"'build_context_suffix' attribute at the PkgConfigDeps generator.\")\n\n @property\n def content(self):\n \"\"\"\n Get all the .pc files content\n \"\"\"\n pc_files = {}\n # Get all the dependencies\n host_req = self._conanfile.dependencies.host\n build_req = self._conanfile.dependencies.build # tool_requires\n test_req = self._conanfile.dependencies.test\n\n # Check if it exists both as require and as build require without a suffix\n self._validate_build_requires(host_req, build_req)\n\n for require, dep in list(host_req.items()) + list(build_req.items()) + list(test_req.items()):\n # Require is not used at the moment, but its information could be used,\n # and will be used in Conan 2.0\n # Filter the build_requires not activated with PkgConfigDeps.build_context_activated\n if require.build and dep.ref.name not in self.build_context_activated:\n continue\n\n pc_generator = _PCGenerator(self._conanfile, dep, build_context_suffix=self.build_context_suffix)\n pc_files.update(pc_generator.pc_files)\n return pc_files\n\n def generate(self):\n \"\"\"\n Save all the `*.pc` files\n \"\"\"\n check_duplicated_generator(self, self._conanfile)\n # Current directory is the generators_folder\n generator_files = self.content\n for generator_file, content in generator_files.items():\n save(generator_file, content)\n"}
|
{"conan/tools/gnu/pkgconfigdeps.py": [{"type": "function", "name": "_PCContentGenerator._get_pc_variables", "lines": [129, 154], "signature": "def _get_pc_variables(self, cpp_info):", "doc": "Get all the freeform variables defined by Conan and\nusers (through ``pkg_config_custom_content``). This last ones will override the\nConan defined variables."}, {"type": "function", "name": "_PCContentGenerator._get_lib_flags", "lines": [156, 162], "signature": "def _get_lib_flags(self, libdirvars, cpp_info):", "doc": ""}, {"type": "function", "name": "_PCContentGenerator._get_cflags", "lines": [164, 169], "signature": "def _get_cflags(self, includedirvars, cpp_info):", "doc": ""}, {"type": "function", "name": "_PCContentGenerator._get_context", "lines": [171, 190], "signature": "def _get_context(self, info):", "doc": ""}]}
| null |
["conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py::test_empty_dirs", "conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py::test_custom_content", "conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py::test_pkg_config_name_full_aliases"]
|
["conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py::test_pkg_config_dirs", "conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py::test_system_libs", "conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py::test_multiple_include", "conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py::test_custom_content_and_version_components", "conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py::test_pkg_with_public_deps_and_component_requires", "conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py::test_pkg_with_public_deps_and_component_requires_2", "conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py::test_components_and_package_pc_creation_order", "conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py::test_pkgconfigdeps_with_test_requires", "conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py::test_with_editable_layout", "conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py::test_tool_requires", "conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py::test_tool_requires_not_created_if_no_activated", "conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py::test_tool_requires_raise_exception_if_exist_both_require_and_build_one", "conans/test/integration/toolchains/gnu/test_pkgconfigdeps.py::test_error_missing_pc_build_context"]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1688557428.0, "pr_title": "[PkgConfigDeps] `pkg_config_custom_content` property overwrites variables", "pr_body": "Changelog: Feature: Let `pkg_config_custom_content` overwrite default `*.pc` variables created by `PkgConfigDeps`.\r\nChangelog: Feature: Let `pkg_config_custom_content` be a dict-like object too.\r\nDocs: https://github.com/conan-io/docs/pull/3293\r\n\r\nUPDATED:\r\n* Simplified internal `PkgConfigDeps` logic: only one template and more straightforward, one context, Python is formatting the template, etc.\r\n* `pkg_config_custom_content` overwrites Conan-defined freeform variables, not keyword metadata ones.\r\n* `pkg_config_custom_content` could be defined as a dictionary instead of a string.\r\n", "pr_timeline": [], "issues": {}}
|
|
conan-io/conan
| 15,284
|
https://github.com/conan-io/conan/pull/15284
|
conan-io__conan-15284
|
[]
|
98d8f6aaa074b2bf6b827a359daf415f447a86a3
|
diff --git a/conan/api/subapi/lockfile.py b/conan/api/subapi/lockfile.py
index e9fc4daf337..7e4bf7de760 100644
--- a/conan/api/subapi/lockfile.py
+++ b/conan/api/subapi/lockfile.py
@@ -88,6 +88,12 @@ def add_lockfile(lockfile=None, requires=None, build_requires=None, python_requi
python_requires=python_requires)
return lockfile
+ @staticmethod
+ def remove_lockfile(lockfile, requires=None, build_requires=None, python_requires=None):
+ lockfile.remove(requires=requires, build_requires=build_requires,
+ python_requires=python_requires)
+ return lockfile
+
@staticmethod
def save_lockfile(lockfile, lockfile_out, path=None):
if lockfile_out is not None:
diff --git a/conan/cli/commands/lock.py b/conan/cli/commands/lock.py
index 9831337c29c..31630863a43 100644
--- a/conan/cli/commands/lock.py
+++ b/conan/cli/commands/lock.py
@@ -6,7 +6,6 @@
from conan.cli import make_abs_path
from conan.cli.args import common_graph_args, validate_common_graph_args
from conan.cli.printers.graph import print_graph_packages, print_graph_basic
-from conans.client.cache.cache import ClientCache
from conans.model.graph_lock import Lockfile, LOCKFILE
from conans.model.recipe_ref import RecipeReference
@@ -124,3 +123,27 @@ def _parse_requires(reqs):
python_requires=python_requires,
build_requires=build_requires)
conan_api.lockfile.save_lockfile(lockfile, args.lockfile_out)
+
+
+@conan_subcommand()
+def lock_remove(conan_api, parser, subparser, *args):
+ """
+ Remove requires, build-requires or python-requires from an existing or new lockfile.
+ References can be supplied with and without revisions like "--requires=pkg/version",
+ """
+ subparser.add_argument('--requires', action="append", help='Remove references to lockfile.')
+ subparser.add_argument('--build-requires', action="append",
+ help='Remove build-requires from lockfile')
+ subparser.add_argument('--python-requires', action="append",
+ help='Remove python-requires from lockfile')
+ subparser.add_argument("--lockfile-out", action=OnceArgument, default=LOCKFILE,
+ help="Filename of the created lockfile")
+ subparser.add_argument("--lockfile", action=OnceArgument, help="Filename of the input lockfile")
+ args = parser.parse_args(*args)
+
+ lockfile = conan_api.lockfile.get_lockfile(lockfile=args.lockfile, partial=True)
+ lockfile = conan_api.lockfile.remove_lockfile(lockfile,
+ requires=args.requires,
+ python_requires=args.python_requires,
+ build_requires=args.build_requires)
+ conan_api.lockfile.save_lockfile(lockfile, args.lockfile_out)
diff --git a/conans/model/graph_lock.py b/conans/model/graph_lock.py
index 789bb081824..3786beb07d5 100644
--- a/conans/model/graph_lock.py
+++ b/conans/model/graph_lock.py
@@ -1,3 +1,4 @@
+import fnmatch
import json
import os
from collections import OrderedDict
@@ -6,6 +7,7 @@
from conans.client.graph.graph import RECIPE_VIRTUAL, RECIPE_CONSUMER, CONTEXT_BUILD, Overrides
from conans.errors import ConanException
from conans.model.recipe_ref import RecipeReference
+from conans.model.version_range import VersionRange
from conans.util.files import load, save
LOCKFILE = "conan.lock"
@@ -64,6 +66,23 @@ def add(self, ref, package_ids=None):
raise ConanException(f"Cannot add {ref} to lockfile, already exists")
self._requires[ref] = package_ids
+ def remove(self, pattern):
+ ref = RecipeReference.loads(pattern)
+ version = str(ref.version)
+ remove = []
+ if version.startswith("[") and version.endswith("]"):
+ version_range = VersionRange(version[1:-1])
+ for k, v in self._requires.items():
+ if fnmatch.fnmatch(k.name, ref.name) and version_range.contains(k.version, None):
+ new_pattern = f"{k.name}/*@{ref.user or ''}"
+ new_pattern += f"/{ref.channel}" if ref.channel else ""
+ if k.matches(new_pattern, False):
+ remove.append(k)
+ else:
+ remove = [k for k in self._requires if k.matches(pattern, False)]
+ self._requires = OrderedDict((k, v) for k, v in self._requires.items() if k not in remove)
+ return remove
+
def sort(self):
self._requires = OrderedDict(reversed(sorted(self._requires.items())))
@@ -171,6 +190,19 @@ def add(self, requires=None, build_requires=None, python_requires=None):
self._python_requires.add(r)
self._python_requires.sort()
+ def remove(self, requires=None, build_requires=None, python_requires=None):
+ def _remove(reqs, self_reqs, name):
+ if reqs:
+ removed = []
+ for r in reqs:
+ removed.extend(self_reqs.remove(r))
+ for d in removed:
+ ConanOutput().info(f"Removed locked {name}: {d.repr_notime()}")
+
+ _remove(requires, self._requires, "require")
+ _remove(build_requires, self._build_requires, "build_require")
+ _remove(python_requires, self._python_requires, "python_require")
+
@staticmethod
def deserialize(data):
""" constructs a GraphLock from a json like dict
|
diff --git a/conans/test/integration/lockfile/test_user_overrides.py b/conans/test/integration/lockfile/test_user_overrides.py
index f37a91c4259..53648a5c6c2 100644
--- a/conans/test/integration/lockfile/test_user_overrides.py
+++ b/conans/test/integration/lockfile/test_user_overrides.py
@@ -1,4 +1,7 @@
import json
+import textwrap
+
+import pytest
from conans.test.assets.genconanfile import GenConanfile
from conans.test.utils.tools import TestClient
@@ -211,3 +214,106 @@ def test_lock_add_error():
c = TestClient()
c.run(f"lock add --requires=math/1.0:pid1", assert_error=True)
assert "ERROR: Invalid recipe reference 'math/1.0:pid1' is a package reference" in c.out
+
+
+class TestLockRemove:
+ @pytest.mark.parametrize("args, removed", [
+ ("--requires=math/*", ["math"]),
+ ("--requires=math/2.0", []),
+ ("--build-requires=cmake/1.0", ["cmake"]),
+ # Not valid ("--build-requires=*", ["cmake", "ninja"]),
+ ("--build-requires=*/*", ["cmake", "ninja"]), # But this is valid
+ ("--python-requires=mytool/*", ["mytool"]),
+ ("--python-requires=*tool/*", ["mytool", "othertool"]),
+ # With version ranges
+ ('--requires="math/[>=1.0 <2]"', ["math"]),
+ ('--requires="math/[>1.0]"', []),
+ ('--requires="*/[>=1.0 <2]"', ["math", "engine"])
+ ])
+ def test_lock_remove(self, args, removed):
+ c = TestClient()
+ lock = textwrap.dedent("""\
+ {
+ "version": "0.5",
+ "requires": [
+ "math/1.0#85d927a4a067a531b1a9c7619522c015%1702683583.3411012",
+ "math/1.0#12345%1702683584.3411012",
+ "engine/1.0#fd2b006646a54397c16a1478ac4111ac%1702683583.3544693"
+ ],
+ "build_requires": [
+ "cmake/1.0#85d927a4a067a531b1a9c7619522c015%1702683583.3411012",
+ "ninja/1.0#fd2b006646a54397c16a1478ac4111ac%1702683583.3544693"
+ ],
+ "python_requires": [
+ "mytool/1.0#85d927a4a067a531b1a9c7619522c015%1702683583.3411012",
+ "othertool/1.0#fd2b006646a54397c16a1478ac4111ac%1702683583.3544693"
+ ]
+ }
+ """)
+ c.save({"conan.lock": lock})
+ c.run(f"lock remove {args}")
+ lock = c.load("conan.lock")
+ for remove in removed:
+ assert remove not in lock
+ for pkg in {"math", "engine", "cmake", "ninja", "mytool", "othertool"}.difference(removed):
+ assert pkg in lock
+
+ @pytest.mark.parametrize("args, removed", [
+ ("--requires=math/1.0#12345*", ["math/1.0#123456789abcdef"]),
+ ("--requires=math/1.0#*", ["math/1.0#123456789abcdef",
+ "math/1.0#85d927a4a067a531b1a9c7619522c015"]),
+ ])
+ def test_lock_remove_revisions(self, args, removed):
+ c = TestClient()
+ lock = textwrap.dedent("""\
+ {
+ "version": "0.5",
+ "requires": [
+ "math/1.0#123456789abcdef%1702683584.3411012",
+ "math/1.0#85d927a4a067a531b1a9c7619522c015%1702683583.3411012",
+ "engine/1.0#fd2b006646a54397c16a1478ac4111ac%1702683583.3544693"
+ ]
+ }
+ """)
+ c.save({"conan.lock": lock})
+ c.run(f"lock remove {args}")
+ lock = c.load("conan.lock")
+ for remove in removed:
+ assert remove not in lock
+ for pkg in {"math/1.0#123456789abcdef",
+ "math/1.0#85d927a4a067a531b1a9c7619522c015",
+ "engine/1.0#fd2b006646a54397c16a1478ac4111ac"}.difference(removed):
+ assert pkg in lock
+
+ @pytest.mark.parametrize("args, removed", [
+ ("--requires=*/*@team", ["pkg/1.0@team"]),
+ ("--requires=*/*@team*", ["pkg/1.0@team", "math/2.0@team/stable"]),
+ ("--requires=*/*@user", ["math/1.0@user", "other/1.0@user"]),
+ ("--requires=*/*@", ["engine/1.0"]), # Remove those without user
+ # with version ranges
+ ("--requires=math/[*]@user", ["math/1.0@user"]),
+ ("--requires=math/[*]@team*", ["math/2.0@team/stable"]),
+ ])
+ def test_lock_remove_user_channel(self, args, removed):
+ c = TestClient()
+ lock = textwrap.dedent("""\
+ {
+ "version": "0.5",
+ "requires": [
+ "math/1.0@user#123456789abcdef%1702683584.3411012",
+ "math/2.0@team/stable#123456789abcdef%1702683584.3411012",
+ "other/1.0@user#85d927a4a067a531b1a9c7619522c015%1702683583.3411012",
+ "pkg/1.0@team#85d927a4a067a531b1a9c7619522c015%1702683583.3411012",
+ "engine/1.0#fd2b006646a54397c16a1478ac4111ac%1702683583.3544693"
+ ]
+ }
+ """)
+ c.save({"conan.lock": lock})
+ c.run(f"lock remove {args}")
+ lock = c.load("conan.lock")
+ for remove in removed:
+ assert remove not in lock
+ rest = {"math/1.0@user", "math/2.0@team/stable",
+ "other/1.0@user", "pkg/1.0@team", "engine/1.0"}.difference(removed)
+ for pkg in rest:
+ assert pkg in lock
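The `TestLockRemove` cases above rely on fnmatch-style reference patterns (note the test comment that a bare `--build-requires=*` is not valid while `*/*` is, since the pattern is first parsed as a recipe reference). A minimal standalone illustration of that matching, using plain `fnmatch` rather than Conan's `_LockRequires.remove`:

```python
# Standalone illustration (not Conan's implementation) of the fnmatch-style
# matching the TestLockRemove cases above rely on.
import fnmatch

locked = ["math/1.0", "math/2.0", "engine/1.0", "cmake/1.0", "ninja/1.0"]


def remove(pattern, refs):
    # keep only the references that do NOT match the pattern
    return [r for r in refs if not fnmatch.fnmatch(r, pattern)]


print(remove("math/*", locked))   # ['engine/1.0', 'cmake/1.0', 'ninja/1.0']
print(remove("*/*", locked))      # [] -> every "name/version" reference matches
```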
| 2023-12-16T10:42:15
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"conan/api/subapi/lockfile.py": "import os\n\nfrom conan.api.output import ConanOutput\nfrom conan.cli import make_abs_path\nfrom conans.client.graph.graph import Overrides\nfrom conans.errors import ConanException\nfrom conans.model.graph_lock import Lockfile, LOCKFILE\n\n\nclass LockfileAPI:\n\n def __init__(self, conan_api):\n self.conan_api = conan_api\n\n @staticmethod\n def get_lockfile(lockfile=None, conanfile_path=None, cwd=None, partial=False, overrides=None):\n \"\"\" obtain a lockfile, following this logic:\n - If lockfile is explicitly defined, it would be either absolute or relative to cwd and\n the lockfile file must exist. If lockfile=\"\" (empty string) the default \"conan.lock\"\n lockfile will not be automatically used even if it is present.\n - If lockfile is not defined, it will still look for a default conan.lock:\n - if conanfile_path is defined, it will be besides it\n - if conanfile_path is not defined, the default conan.lock should be in cwd\n - if the default conan.lock cannot be found, it is not an error\n\n :param partial: If the obtained lockfile will allow partial resolving\n :param cwd: the current working dir, if None, os.getcwd() will be used\n :param conanfile_path: The full path to the conanfile, if existing\n :param lockfile: the name of the lockfile file\n :param overrides: Dictionary of overrides {overriden: [new_ref1, new_ref2]}\n \"\"\"\n if lockfile == \"\":\n # Allow a way with ``--lockfile=\"\"`` to optout automatic usage of conan.lock\n return\n\n cwd = cwd or os.getcwd()\n if lockfile is None: # Look for a default \"conan.lock\"\n # if path is defined, take it as reference\n base_path = os.path.dirname(conanfile_path) if conanfile_path else cwd\n lockfile_path = make_abs_path(LOCKFILE, base_path)\n if not os.path.isfile(lockfile_path):\n if overrides:\n raise ConanException(\"Cannot define overrides without a lockfile\")\n return\n else: # explicit lockfile given\n lockfile_path = make_abs_path(lockfile, cwd)\n if not os.path.isfile(lockfile_path):\n raise ConanException(\"Lockfile doesn't exist: {}\".format(lockfile_path))\n\n graph_lock = Lockfile.load(lockfile_path)\n graph_lock.partial = partial\n\n if overrides:\n graph_lock._overrides = Overrides.deserialize(overrides)\n ConanOutput().info(\"Using lockfile: '{}'\".format(lockfile_path))\n return graph_lock\n\n def update_lockfile_export(self, lockfile, conanfile, ref, is_build_require=False):\n # The package_type is not fully processed at export\n is_python_require = conanfile.package_type == \"python-require\"\n is_require = not is_python_require and not is_build_require\n if hasattr(conanfile, \"python_requires\"):\n python_requires = conanfile.python_requires.all_refs()\n else:\n python_requires = []\n python_requires = python_requires + ([ref] if is_python_require else [])\n lockfile = self.add_lockfile(lockfile,\n requires=[ref] if is_require else None,\n python_requires=python_requires,\n build_requires=[ref] if is_build_require else None)\n return lockfile\n\n @staticmethod\n def update_lockfile(lockfile, graph, lock_packages=False, clean=False):\n if lockfile is None or clean:\n lockfile = Lockfile(graph, lock_packages)\n else:\n lockfile.update_lock(graph, lock_packages)\n return lockfile\n\n @staticmethod\n def add_lockfile(lockfile=None, requires=None, build_requires=None, python_requires=None):\n if lockfile is None:\n lockfile = Lockfile() # create a new lockfile\n lockfile.partial = True\n\n lockfile.add(requires=requires, build_requires=build_requires,\n 
python_requires=python_requires)\n return lockfile\n\n @staticmethod\n def save_lockfile(lockfile, lockfile_out, path=None):\n if lockfile_out is not None:\n lockfile_out = make_abs_path(lockfile_out, path)\n lockfile.save(lockfile_out)\n ConanOutput().info(f\"Generated lockfile: {lockfile_out}\")\n", "conan/cli/commands/lock.py": "import os\n\nfrom conan.api.output import ConanOutput\nfrom conan.cli.command import conan_command, OnceArgument, conan_subcommand\n\nfrom conan.cli import make_abs_path\nfrom conan.cli.args import common_graph_args, validate_common_graph_args\nfrom conan.cli.printers.graph import print_graph_packages, print_graph_basic\nfrom conans.client.cache.cache import ClientCache\nfrom conans.model.graph_lock import Lockfile, LOCKFILE\nfrom conans.model.recipe_ref import RecipeReference\n\n\n@conan_command(group=\"Consumer\")\ndef lock(conan_api, parser, *args):\n \"\"\"\n Create or manage lockfiles.\n \"\"\"\n\n\n@conan_subcommand()\ndef lock_create(conan_api, parser, subparser, *args):\n \"\"\"\n Create a lockfile from a conanfile or a reference.\n \"\"\"\n common_graph_args(subparser)\n subparser.add_argument(\"--build-require\", action='store_true', default=False,\n help='Whether the provided reference is a build-require')\n args = parser.parse_args(*args)\n\n # parameter validation\n validate_common_graph_args(args)\n\n cwd = os.getcwd()\n path = conan_api.local.get_conanfile_path(args.path, cwd, py=None) if args.path else None\n remotes = conan_api.remotes.list(args.remote) if not args.no_remote else []\n overrides = eval(args.lockfile_overrides) if args.lockfile_overrides else None\n lockfile = conan_api.lockfile.get_lockfile(lockfile=args.lockfile, conanfile_path=path,\n cwd=cwd, partial=True, overrides=overrides)\n profile_host, profile_build = conan_api.profiles.get_profiles_from_args(args)\n\n if path:\n graph = conan_api.graph.load_graph_consumer(path, args.name, args.version,\n args.user, args.channel,\n profile_host, profile_build, lockfile,\n remotes, args.update,\n is_build_require=args.build_require)\n else:\n graph = conan_api.graph.load_graph_requires(args.requires, args.tool_requires,\n profile_host, profile_build, lockfile,\n remotes, args.update)\n\n print_graph_basic(graph)\n graph.report_graph_error()\n conan_api.graph.analyze_binaries(graph, args.build, remotes=remotes, update=args.update,\n lockfile=lockfile)\n print_graph_packages(graph)\n\n lockfile = conan_api.lockfile.update_lockfile(lockfile, graph, args.lockfile_packages,\n clean=args.lockfile_clean)\n conanfile_path = os.path.dirname(graph.root.path) \\\n if graph.root.path and args.lockfile_out is None else cwd\n conan_api.lockfile.save_lockfile(lockfile, args.lockfile_out or \"conan.lock\", conanfile_path)\n\n\n@conan_subcommand()\ndef lock_merge(conan_api, parser, subparser, *args):\n \"\"\"\n Merge 2 or more lockfiles.\n \"\"\"\n subparser.add_argument('--lockfile', action=\"append\", help='Path to lockfile to be merged')\n subparser.add_argument(\"--lockfile-out\", action=OnceArgument, default=LOCKFILE,\n help=\"Filename of the created lockfile\")\n\n args = parser.parse_args(*args)\n\n result = Lockfile()\n for lockfile in args.lockfile:\n lockfile = make_abs_path(lockfile)\n graph_lock = Lockfile.load(lockfile)\n result.merge(graph_lock)\n\n lockfile_out = make_abs_path(args.lockfile_out)\n result.save(lockfile_out)\n ConanOutput().info(\"Generated lockfile: %s\" % lockfile_out)\n\n\n@conan_subcommand()\ndef lock_add(conan_api, parser, subparser, *args):\n \"\"\"\n Add requires, 
build-requires or python-requires to an existing or new lockfile.\n The resulting lockfile will be ordered, newer versions/revisions first.\n References can be supplied with and without revisions like \"--requires=pkg/version\",\n but they must be package references, including at least the version,\n and they cannot contain a version range.\n \"\"\"\n subparser.add_argument('--requires', action=\"append\", help='Add references to lockfile.')\n subparser.add_argument('--build-requires', action=\"append\",\n help='Add build-requires to lockfile')\n subparser.add_argument('--python-requires', action=\"append\",\n help='Add python-requires to lockfile')\n subparser.add_argument(\"--lockfile-out\", action=OnceArgument, default=LOCKFILE,\n help=\"Filename of the created lockfile\")\n subparser.add_argument(\"--lockfile\", action=OnceArgument, help=\"Filename of the input lockfile\")\n args = parser.parse_args(*args)\n\n lockfile = conan_api.lockfile.get_lockfile(lockfile=args.lockfile, partial=True)\n\n global_conf = conan_api.config.global_conf\n allow_uppercase = global_conf.get(\"core:allow_uppercase_pkg_names\", check_type=bool)\n\n def _parse_requires(reqs):\n if reqs:\n result = [RecipeReference.loads(r) for r in reqs]\n [r.validate_ref(allow_uppercase) for r in result]\n return result\n\n requires = _parse_requires(args.requires)\n build_requires = _parse_requires(args.build_requires)\n python_requires = _parse_requires(args.python_requires)\n\n lockfile = conan_api.lockfile.add_lockfile(lockfile,\n requires=requires,\n python_requires=python_requires,\n build_requires=build_requires)\n conan_api.lockfile.save_lockfile(lockfile, args.lockfile_out)\n", "conans/model/graph_lock.py": "import json\nimport os\nfrom collections import OrderedDict\n\nfrom conan.api.output import ConanOutput\nfrom conans.client.graph.graph import RECIPE_VIRTUAL, RECIPE_CONSUMER, CONTEXT_BUILD, Overrides\nfrom conans.errors import ConanException\nfrom conans.model.recipe_ref import RecipeReference\nfrom conans.util.files import load, save\n\nLOCKFILE = \"conan.lock\"\nLOCKFILE_VERSION = \"0.5\"\n\n\nclass _LockRequires:\n \"\"\"\n This is an ordered set of locked references.\n It is implemented this way to allow adding package_id:prev information later,\n otherwise it could be a bare list\n \"\"\"\n def __init__(self):\n self._requires = OrderedDict() # {require: package_ids}\n\n def __contains__(self, item):\n return item in self._requires\n\n def refs(self):\n return self._requires.keys()\n\n def get(self, item):\n return self._requires.get(item)\n\n def serialize(self):\n result = []\n for k, v in self._requires.items():\n if v is None:\n result.append(repr(k))\n else:\n result.append((repr(k), v))\n return result\n\n @staticmethod\n def deserialize(data):\n result = _LockRequires()\n for d in data:\n if isinstance(d, str):\n result._requires[RecipeReference.loads(d)] = None\n else:\n result._requires[RecipeReference.loads(d[0])] = d[1]\n return result\n\n def add(self, ref, package_ids=None):\n if ref.revision is not None:\n old_package_ids = self._requires.pop(ref, None) # Get existing one\n if old_package_ids is not None:\n if package_ids is not None:\n package_ids = old_package_ids.update(package_ids)\n else:\n package_ids = old_package_ids\n self._requires[ref] = package_ids\n else: # Manual addition of something without revision\n existing = {r: r for r in self._requires}.get(ref)\n if existing and existing.revision is not None:\n raise ConanException(f\"Cannot add {ref} to lockfile, already exists\")\n 
self._requires[ref] = package_ids\n\n def sort(self):\n self._requires = OrderedDict(reversed(sorted(self._requires.items())))\n\n def merge(self, other):\n \"\"\"\n :type other: _LockRequires\n \"\"\"\n # TODO: What happens when merging incomplete refs? Probably str(ref) should be used\n for k, v in other._requires.items():\n if k in self._requires:\n if v is not None:\n self._requires.setdefault(k, {}).update(v)\n else:\n self._requires[k] = v\n self.sort()\n\n\nclass Lockfile(object):\n\n def __init__(self, deps_graph=None, lock_packages=False):\n self._requires = _LockRequires()\n self._python_requires = _LockRequires()\n self._build_requires = _LockRequires()\n self._alias = {}\n self._overrides = Overrides()\n self.partial = False\n\n if deps_graph is None:\n return\n\n self.update_lock(deps_graph, lock_packages)\n\n def update_lock(self, deps_graph, lock_packages=False):\n for graph_node in deps_graph.nodes:\n try:\n for r in graph_node.conanfile.python_requires.all_refs():\n self._python_requires.add(r)\n except AttributeError:\n pass\n if graph_node.recipe in (RECIPE_VIRTUAL, RECIPE_CONSUMER) or graph_node.ref is None:\n continue\n assert graph_node.conanfile is not None\n\n pids = {graph_node.package_id: graph_node.prev} if lock_packages else None\n if graph_node.context == CONTEXT_BUILD:\n self._build_requires.add(graph_node.ref, pids)\n else:\n self._requires.add(graph_node.ref, pids)\n\n self._alias.update(deps_graph.aliased)\n self._overrides.update(deps_graph.overrides())\n\n self._requires.sort()\n self._build_requires.sort()\n self._python_requires.sort()\n\n @staticmethod\n def load(path):\n if not path:\n raise IOError(\"Invalid path\")\n if not os.path.isfile(path):\n raise ConanException(\"Missing lockfile in: %s\" % path)\n content = load(path)\n try:\n return Lockfile.loads(content)\n except Exception as e:\n raise ConanException(\"Error parsing lockfile '{}': {}\".format(path, e))\n\n @staticmethod\n def loads(content):\n return Lockfile.deserialize(json.loads(content))\n\n def dumps(self):\n return json.dumps(self.serialize(), indent=4)\n\n def save(self, path):\n save(path, self.dumps())\n\n def merge(self, other):\n \"\"\"\n :type other: Lockfile\n \"\"\"\n self._requires.merge(other._requires)\n self._build_requires.merge(other._build_requires)\n self._python_requires.merge(other._python_requires)\n self._alias.update(other._alias)\n self._overrides.update(other._overrides)\n\n def add(self, requires=None, build_requires=None, python_requires=None):\n \"\"\" adding new things manually will trigger the sort() of the locked list, so lockfiles\n alwasys keep the ordered lists. This means that for some especial edge cases it might\n be necessary to allow removing from a lockfile, for example to test an older version\n than the one locked (in general adding works better for moving forward to newer versions)\n \"\"\"\n if requires:\n for r in requires:\n self._requires.add(r)\n self._requires.sort()\n if build_requires:\n for r in build_requires:\n self._build_requires.add(r)\n self._build_requires.sort()\n if python_requires:\n for r in python_requires:\n self._python_requires.add(r)\n self._python_requires.sort()\n\n @staticmethod\n def deserialize(data):\n \"\"\" constructs a GraphLock from a json like dict\n \"\"\"\n graph_lock = Lockfile()\n version = data.get(\"version\")\n if version and version != LOCKFILE_VERSION:\n raise ConanException(\"This lockfile was created with an incompatible \"\n \"version. 
Please regenerate the lockfile\")\n if \"requires\" in data:\n graph_lock._requires = _LockRequires.deserialize(data[\"requires\"])\n if \"build_requires\" in data:\n graph_lock._build_requires = _LockRequires.deserialize(data[\"build_requires\"])\n if \"python_requires\" in data:\n graph_lock._python_requires = _LockRequires.deserialize(data[\"python_requires\"])\n if \"alias\" in data:\n graph_lock._alias = {RecipeReference.loads(k): RecipeReference.loads(v)\n for k, v in data[\"alias\"].items()}\n if \"overrides\" in data:\n graph_lock._overrides = Overrides.deserialize(data[\"overrides\"])\n return graph_lock\n\n def serialize(self):\n \"\"\" returns the object serialized as a dict of plain python types\n that can be converted to json\n \"\"\"\n result = {\"version\": LOCKFILE_VERSION}\n if self._requires:\n result[\"requires\"] = self._requires.serialize()\n if self._build_requires:\n result[\"build_requires\"] = self._build_requires.serialize()\n if self._python_requires:\n result[\"python_requires\"] = self._python_requires.serialize()\n if self._alias:\n result[\"alias\"] = {repr(k): repr(v) for k, v in self._alias.items()}\n if self._overrides:\n result[\"overrides\"] = self._overrides.serialize()\n return result\n\n def resolve_locked(self, node, require, resolve_prereleases):\n if require.build or node.context == CONTEXT_BUILD:\n locked_refs = self._build_requires.refs()\n else:\n locked_refs = self._requires.refs()\n self._resolve_overrides(require)\n try:\n self._resolve(require, locked_refs, resolve_prereleases)\n except ConanException:\n overrides = self._overrides.get(require.ref)\n if overrides is not None and len(overrides) > 1:\n msg = f\"Override defined for {require.ref}, but multiple possible overrides\" \\\n f\" {overrides}. You might need to apply the 'conan graph build-order'\" \\\n f\" overrides for correctly building this package with this lockfile\"\n ConanOutput().error(msg)\n raise\n\n def _resolve_overrides(self, require):\n existing = self._overrides.get(require.ref)\n if existing is not None and len(existing) == 1:\n require.overriden_ref = require.ref # Store that the require has been overriden\n ref = next(iter(existing))\n require.ref = ref\n require.override_ref = ref\n\n def resolve_prev(self, node):\n if node.context == CONTEXT_BUILD:\n prevs = self._build_requires.get(node.ref)\n else:\n prevs = self._requires.get(node.ref)\n if prevs:\n return prevs.get(node.package_id)\n\n def _resolve(self, require, locked_refs, resolve_prereleases):\n version_range = require.version_range\n ref = require.ref\n matches = [r for r in locked_refs if r.name == ref.name and r.user == ref.user and\n r.channel == ref.channel]\n if version_range:\n for m in matches:\n if version_range.contains(m.version, resolve_prereleases):\n require.ref = m\n break\n else:\n if not self.partial:\n raise ConanException(f\"Requirement '{ref}' not in lockfile\")\n else:\n ref = require.ref\n if ref.revision is None:\n for m in matches:\n if m.version == ref.version:\n require.ref = m\n break\n else:\n if not self.partial:\n raise ConanException(f\"Requirement '{ref}' not in lockfile\")\n else:\n if ref not in matches and not self.partial:\n raise ConanException(f\"Requirement '{repr(ref)}' not in lockfile\")\n\n def replace_alias(self, require, alias):\n locked_alias = self._alias.get(alias)\n if locked_alias is not None:\n require.ref = locked_alias\n return True\n elif not self.partial:\n raise ConanException(f\"Requirement alias '{alias}' not in lockfile\")\n\n def 
resolve_locked_pyrequires(self, require, resolve_prereleases=None):\n locked_refs = self._python_requires.refs() # CHANGE\n self._resolve(require, locked_refs, resolve_prereleases)\n"}
|
{"conan/api/subapi/lockfile.py": [{"type": "function", "name": "LockfileAPI.remove_lockfile", "lines": [92, 95], "signature": "def remove_lockfile(lockfile, requires=None, build_requires=None, python_requires=None):", "doc": ""}], "conan/cli/commands/lock.py": [{"type": "function", "name": "lock_remove", "lines": [129, 149], "signature": "def lock_remove(conan_api, parser, subparser, *args):", "doc": "Remove requires, build-requires or python-requires from an existing or new lockfile.\nReferences can be supplied with and without revisions like \"--requires=pkg/version\","}], "conans/model/graph_lock.py": [{"type": "function", "name": "_LockRequires.remove", "lines": [69, 84], "signature": "def remove(self, pattern):", "doc": ""}, {"type": "function", "name": "Lockfile.remove", "lines": [193, 204], "signature": "def remove(self, requires=None, build_requires=None, python_requires=None):", "doc": ""}, {"type": "function", "name": "Lockfile.remove._remove", "lines": [194, 200], "signature": "def _remove(reqs, self_reqs, name):", "doc": ""}]}
| null |
["conans/test/integration/lockfile/test_user_overrides.py::TestLockRemove::test_lock_remove[--requires=math/*-removed0]", "conans/test/integration/lockfile/test_user_overrides.py::TestLockRemove::test_lock_remove[--requires=math/2.0-removed1]", "conans/test/integration/lockfile/test_user_overrides.py::TestLockRemove::test_lock_remove[--build-requires=cmake/1.0-removed2]", "conans/test/integration/lockfile/test_user_overrides.py::TestLockRemove::test_lock_remove[--build-requires=*/*-removed3]", "conans/test/integration/lockfile/test_user_overrides.py::TestLockRemove::test_lock_remove[--python-requires=mytool/*-removed4]", "conans/test/integration/lockfile/test_user_overrides.py::TestLockRemove::test_lock_remove[--python-requires=*tool/*-removed5]", "conans/test/integration/lockfile/test_user_overrides.py::TestLockRemove::test_lock_remove[--requires=\"math/[>=1.0", "conans/test/integration/lockfile/test_user_overrides.py::TestLockRemove::test_lock_remove[--requires=\"math/[>1.0]\"-removed7]", "conans/test/integration/lockfile/test_user_overrides.py::TestLockRemove::test_lock_remove[--requires=\"*/[>=1.0", "conans/test/integration/lockfile/test_user_overrides.py::TestLockRemove::test_lock_remove_revisions[--requires=math/1.0#12345*-removed0]", "conans/test/integration/lockfile/test_user_overrides.py::TestLockRemove::test_lock_remove_revisions[--requires=math/1.0#*-removed1]", "conans/test/integration/lockfile/test_user_overrides.py::TestLockRemove::test_lock_remove_user_channel[--requires=*/*@team-removed0]", "conans/test/integration/lockfile/test_user_overrides.py::TestLockRemove::test_lock_remove_user_channel[--requires=*/*@team*-removed1]", "conans/test/integration/lockfile/test_user_overrides.py::TestLockRemove::test_lock_remove_user_channel[--requires=*/*@user-removed2]", "conans/test/integration/lockfile/test_user_overrides.py::TestLockRemove::test_lock_remove_user_channel[--requires=*/*@-removed3]", "conans/test/integration/lockfile/test_user_overrides.py::TestLockRemove::test_lock_remove_user_channel[--requires=math/[*]@user-removed4]", "conans/test/integration/lockfile/test_user_overrides.py::TestLockRemove::test_lock_remove_user_channel[--requires=math/[*]@team*-removed5]"]
|
["conans/test/integration/lockfile/test_user_overrides.py::test_user_overrides", "conans/test/integration/lockfile/test_user_overrides.py::test_user_build_overrides", "conans/test/integration/lockfile/test_user_overrides.py::test_user_python_overrides", "conans/test/integration/lockfile/test_user_overrides.py::test_add_revisions", "conans/test/integration/lockfile/test_user_overrides.py::test_add_multiple_revisions", "conans/test/integration/lockfile/test_user_overrides.py::test_timestamps_are_updated", "conans/test/integration/lockfile/test_user_overrides.py::test_lock_add_error"]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1702723238.0, "pr_title": "new lock remove command", "pr_body": "Changelog: Feature: New ``conan lock remove`` command to remove requires from lockfiles.\r\nDocs: https://github.com/conan-io/docs/pull/3496\r\n\r\nI am proposing this PR to close:\r\n\r\nClose https://github.com/conan-io/conan/issues/14523\r\nClose https://github.com/conan-io/conan/issues/14524\r\n\r\nThe idea is that updates are equivalent to something like:\r\n```bash\r\n$ conan remove --requires=zlib/* && conan add --requires=zlib/1.2.11\r\n```\r\nBut it will allow us to first iterate and learn about the logic of removal of versions from lockfiles, because ``conan remove`` makes sense in isolation, even without ``lock update`` or ``lock add``, like for example using it before resolving it to latest with ``--lockfile-partial``. When we have clear what is the UX and UI for the removal, we can do a single command ``conan update`` or ``conan add`` with extra arguments\r\n\r\n\r\n\r\n", "pr_timeline": [], "issues": {}}
|
|
conan-io/conan
| 15,731
|
https://github.com/conan-io/conan/pull/15731
|
conan-io__conan-15731
|
[]
|
1763159dc74a54cb4920a55a7620557687e1dc25
|
diff --git a/conan/tools/build/__init__.py b/conan/tools/build/__init__.py
index 829886a8de3..da2223ff12e 100644
--- a/conan/tools/build/__init__.py
+++ b/conan/tools/build/__init__.py
@@ -3,3 +3,4 @@
from conan.tools.build.cppstd import check_min_cppstd, valid_min_cppstd, default_cppstd, \
supported_cppstd
from conan.tools.build.stdcpp_library import stdcpp_library
+from conan.tools.build.flags import cppstd_flag
diff --git a/conan/tools/build/flags.py b/conan/tools/build/flags.py
new file mode 100644
index 00000000000..9cd9f06385a
--- /dev/null
+++ b/conan/tools/build/flags.py
@@ -0,0 +1,16 @@
+from conan.tools._compilers import cppstd_flag as cppstd_flag_settings
+
+
+def cppstd_flag(conanfile):
+ """
+ Returns flags specific to the C++ standard based on the ``conanfile.settings.compiler``,
+ ``conanfile.settings.compiler.version`` and ``conanfile.settings.compiler.cppstd``.
+ It also considers when using GNU extension in ``settings.compiler.cppstd``, reflecting it in the
+ compiler flag. Currently, it supports GCC, Clang, AppleClang, MSVC, Intel, MCST-LCC.
+ In case there is no ``settings.compiler`` or ``settings.cppstd`` in the profile, the result will
+ be an **empty string**.
+ :param conanfile: The current recipe object. Always use ``self``.
+ :return: ``str`` with the standard C++ flag used by the compiler. e.g. "-std=c++11", "/std:c++latest"
+ """
+ settings = conanfile.settings
+ return cppstd_flag_settings(settings)
|
diff --git a/conans/test/unittests/client/build/cpp_std_flags_test.py b/conans/test/unittests/client/build/cpp_std_flags_test.py
index 1badf2c90ab..3c30e83717a 100644
--- a/conans/test/unittests/client/build/cpp_std_flags_test.py
+++ b/conans/test/unittests/client/build/cpp_std_flags_test.py
@@ -1,9 +1,11 @@
import unittest
from conans.client.build.cppstd_flags import cppstd_default
-from conans.test.utils.mocks import MockSettings
+from conans.test.utils.mocks import MockSettings, ConanFileMock
from conans.tools import cppstd_flag
+from conan.tools.build import cppstd_flag as cppstd_flag_conanfile
+
def _make_cppstd_flag(compiler, compiler_version, cppstd=None, compiler_base=None):
settings = MockSettings({"compiler": compiler,
@@ -399,3 +401,12 @@ def test_mcst_lcc_cppstd_flag(self):
self.assertEqual(_make_cppstd_flag("mcst-lcc", "1.25", "14", "gcc"), "-std=c++14")
self.assertEqual(_make_cppstd_flag("mcst-lcc", "1.25", "17", "gcc"), "-std=c++17")
self.assertEqual(_make_cppstd_flag("mcst-lcc", "1.25", "20", "gcc"), "-std=c++2a")
+
+ def test_cppstd_flag_conanfile(self):
+ """The conan.tools.build.cppstd_flag should work when passing a ConanFile instance
+ """
+ conanfile = ConanFileMock()
+ conanfile.settings = MockSettings({"compiler": "gcc",
+ "compiler.version": "9",
+ "compiler.cppstd": "17"})
+ self.assertEqual(cppstd_flag_conanfile(conanfile), "-std=c++17")
| 2024-02-22T07:51:29
|
{}
|
{"conan/tools/build/__init__.py": "from conan.tools.build.cpu import build_jobs\nfrom conan.tools.build.cross_building import cross_building, can_run\nfrom conan.tools.build.cppstd import check_min_cppstd, valid_min_cppstd, default_cppstd, \\\n supported_cppstd\nfrom conan.tools.build.stdcpp_library import stdcpp_library\n", "conan/tools/build/flags.py": null}
|
{"conan/tools/build/flags.py": [{"type": "function", "name": "cppstd_flag", "lines": [4, 16], "signature": "def cppstd_flag(conanfile):", "doc": "Returns flags specific to the C++ standard based on the ``conanfile.settings.compiler``,\n``conanfile.settings.compiler.version`` and ``conanfile.settings.compiler.cppstd``.\nIt also considers when using GNU extension in ``settings.compiler.cppstd``, reflecting it in the\ncompiler flag. Currently, it supports GCC, Clang, AppleClang, MSVC, Intel, MCST-LCC.\nIn case there is no ``settings.compiler`` or ``settings.cppstd`` in the profile, the result will\nbe an **empty string**.\n:param conanfile: The current recipe object. Always use ``self``.\n:return: ``str`` with the standard C++ flag used by the compiler. e.g. \"-std=c++11\", \"/std:c++latest\""}]}
| null |
["conans/test/unittests/client/build/cpp_std_flags_test.py::CompilerFlagsTest::test_apple_clang_cppstd_defaults", "conans/test/unittests/client/build/cpp_std_flags_test.py::CompilerFlagsTest::test_apple_clang_cppstd_flags", "conans/test/unittests/client/build/cpp_std_flags_test.py::CompilerFlagsTest::test_clang_cppstd_defaults", "conans/test/unittests/client/build/cpp_std_flags_test.py::CompilerFlagsTest::test_clang_cppstd_flags", "conans/test/unittests/client/build/cpp_std_flags_test.py::CompilerFlagsTest::test_cppstd_flag_conanfile", "conans/test/unittests/client/build/cpp_std_flags_test.py::CompilerFlagsTest::test_gcc_cppstd_defaults", "conans/test/unittests/client/build/cpp_std_flags_test.py::CompilerFlagsTest::test_gcc_cppstd_flags", "conans/test/unittests/client/build/cpp_std_flags_test.py::CompilerFlagsTest::test_intel_gcc_cppstd_defaults", "conans/test/unittests/client/build/cpp_std_flags_test.py::CompilerFlagsTest::test_intel_gcc_cppstd_flag", "conans/test/unittests/client/build/cpp_std_flags_test.py::CompilerFlagsTest::test_intel_visual_cppstd_defaults", "conans/test/unittests/client/build/cpp_std_flags_test.py::CompilerFlagsTest::test_intel_visual_cppstd_flag", "conans/test/unittests/client/build/cpp_std_flags_test.py::CompilerFlagsTest::test_mcst_lcc_cppstd_defaults", "conans/test/unittests/client/build/cpp_std_flags_test.py::CompilerFlagsTest::test_mcst_lcc_cppstd_flag", "conans/test/unittests/client/build/cpp_std_flags_test.py::CompilerFlagsTest::test_visual_cppstd_defaults", "conans/test/unittests/client/build/cpp_std_flags_test.py::CompilerFlagsTest::test_visual_cppstd_flags"]
|
[]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1708587880.0, "pr_title": "Backport cppstd_flag from Conan 2.x", "pr_body": "Related to the PR #15710, this PR backports the method available in Conan 2.x\r\n\r\nIn Conan 1.x we have [cppstd_flag](https://docs.conan.io/1/reference/tools.html#tools-cppstd-flag) available already, but it receives `settings` only is under `conans.tools`.\r\n\r\nThis change creates a wrapper to make it available under `conan.tools.build` and should pass `conanfile` instead.\r\n\r\nChangelog: Feature: Promote cppstd_flag in the new conan.tools.build module.\r\n\r\nDocs: https://github.com/conan-io/docs/pull/3602\r\n\r\n- [x] Refer to the issue that supports this Pull Request.\r\n- [x] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.\r\n- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).\r\n- [x] I've followed the PEP8 style guides for Python code.\r\n- [x] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.\r\n", "pr_timeline": [], "issues": {}}
|
|
conan-io/conan
| 16231
|
https://github.com/conan-io/conan/pull/16231
|
conan-io__conan-16231
|
[]
|
80fee05d5811608511bbb30a965afd66bdc13311
|
diff --git a/conan/tools/cmake/cmakedeps/templates/__init__.py b/conan/tools/cmake/cmakedeps/templates/__init__.py
index 64e90955bf1..f1cd8ba1b49 100644
--- a/conan/tools/cmake/cmakedeps/templates/__init__.py
+++ b/conan/tools/cmake/cmakedeps/templates/__init__.py
@@ -24,6 +24,12 @@ def root_target_name(self):
def file_name(self):
return self.cmakedeps.get_cmake_package_name(self.conanfile, module_mode=self.generating_module) + self.suffix
+ @property
+ def additional_variables_prefixes(self):
+ prefix_list = (
+ self.cmakedeps.get_property("cmake_additional_variables_prefixes", self.conanfile) or [])
+ return list(set([self.file_name] + prefix_list))
+
@property
def suffix(self):
if not self.require.build:
diff --git a/conan/tools/cmake/cmakedeps/templates/config.py b/conan/tools/cmake/cmakedeps/templates/config.py
index ea01844579a..a4df8a399cc 100644
--- a/conan/tools/cmake/cmakedeps/templates/config.py
+++ b/conan/tools/cmake/cmakedeps/templates/config.py
@@ -28,7 +28,8 @@ def context(self):
targets_include += "{}Targets.cmake".format(self.file_name)
return {"is_module": self.generating_module,
"version": self.conanfile.ref.version,
- "file_name": self.file_name,
+ "file_name": self.file_name,
+ "additional_variables_prefixes": self.additional_variables_prefixes,
"pkg_name": self.pkg_name,
"config_suffix": self.config_suffix,
"check_components_exist": self.cmakedeps.check_components_exist,
@@ -67,11 +68,14 @@ def template(self):
endif()
endforeach()
- set({{ file_name }}_VERSION_STRING "{{ version }}")
- set({{ file_name }}_INCLUDE_DIRS {{ pkg_var(pkg_name, 'INCLUDE_DIRS', config_suffix) }} )
- set({{ file_name }}_INCLUDE_DIR {{ pkg_var(pkg_name, 'INCLUDE_DIRS', config_suffix) }} )
- set({{ file_name }}_LIBRARIES {{ pkg_var(pkg_name, 'LIBRARIES', config_suffix) }} )
- set({{ file_name }}_DEFINITIONS {{ pkg_var(pkg_name, 'DEFINITIONS', config_suffix) }} )
+ {% for prefix in additional_variables_prefixes %}
+ set({{ prefix }}_VERSION_STRING "{{ version }}")
+ set({{ prefix }}_INCLUDE_DIRS {{ pkg_var(pkg_name, 'INCLUDE_DIRS', config_suffix) }} )
+ set({{ prefix }}_INCLUDE_DIR {{ pkg_var(pkg_name, 'INCLUDE_DIRS', config_suffix) }} )
+ set({{ prefix }}_LIBRARIES {{ pkg_var(pkg_name, 'LIBRARIES', config_suffix) }} )
+ set({{ prefix }}_DEFINITIONS {{ pkg_var(pkg_name, 'DEFINITIONS', config_suffix) }} )
+
+ {% endfor %}
# Only the first installed configuration is included to avoid the collision
foreach(_BUILD_MODULE {{ pkg_var(pkg_name, 'BUILD_MODULES_PATHS', config_suffix) }} )
|
diff --git a/conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py b/conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py
index f8a7eddf5b1..53dee803222 100644
--- a/conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py
+++ b/conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py
@@ -707,3 +707,38 @@ def generate(self):
assert 'set(dep_NO_SONAME_MODE_RELEASE TRUE)' in dep
other = c.load("app/other-release-data.cmake")
assert 'set(other_other_mycomp1_NO_SONAME_MODE_RELEASE TRUE)' in other
+
+def test_cmakedeps_set_legacy_variable_name():
+ client = TestClient()
+ base_conanfile = str(GenConanfile("dep", "1.0"))
+ conanfile = base_conanfile + """
+ def package_info(self):
+ self.cpp_info.set_property("cmake_file_name", "CMakeFileName")
+ """
+ client.save({"dep/conanfile.py": conanfile})
+ client.run("create dep")
+ client.run("install --requires=dep/1.0 -g CMakeDeps")
+
+ # Check that all the CMake variables are generated with the file_name
+ dep_config = client.load("CMakeFileNameConfig.cmake")
+ cmake_variables = ["VERSION_STRING", "INCLUDE_DIRS", "INCLUDE_DIR", "LIBRARIES", "DEFINITIONS"]
+ for variable in cmake_variables:
+ assert f"CMakeFileName_{variable}" in dep_config
+
+ conanfile = base_conanfile + """
+ def package_info(self):
+ self.cpp_info.set_property("cmake_file_name", "NewCMakeFileName")
+ self.cpp_info.set_property("cmake_additional_variables_prefixes", ["PREFIX", "prefix", "PREFIX"])
+ """
+ client.save({"dep/conanfile.py": conanfile})
+ client.run("create dep")
+ client.run("install --requires=dep/1.0 -g CMakeDeps")
+
+ # Check that all the CMake variables are generated with the file_name and both prefixes
+ dep_config = client.load("NewCMakeFileNameConfig.cmake")
+ for variable in cmake_variables:
+ assert f"NewCMakeFileName_{variable}" in dep_config
+ assert f"PREFIX_{variable}" in dep_config
+ assert f"prefix_{variable}" in dep_config
+ # Check that variables are not duplicated
+ assert dep_config.count("PREFIX_VERSION") == 1
| 2024-05-09T12:29:23
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"conan/tools/cmake/cmakedeps/templates/__init__.py": "import jinja2\nfrom jinja2 import Template\n\nfrom conan.errors import ConanException\n\n\nclass CMakeDepsFileTemplate(object):\n\n def __init__(self, cmakedeps, require, conanfile, generating_module=False):\n self.cmakedeps = cmakedeps\n self.require = require\n self.conanfile = conanfile\n self.generating_module = generating_module\n\n @property\n def pkg_name(self):\n return self.conanfile.ref.name + self.suffix\n\n @property\n def root_target_name(self):\n return self.get_root_target_name(self.conanfile, self.suffix)\n\n @property\n def file_name(self):\n return self.cmakedeps.get_cmake_package_name(self.conanfile, module_mode=self.generating_module) + self.suffix\n\n @property\n def suffix(self):\n if not self.require.build:\n return \"\"\n return self.cmakedeps.build_context_suffix.get(self.conanfile.ref.name, \"\")\n\n def render(self):\n try:\n context = self.context\n except Exception as e:\n raise ConanException(\"error generating context for '{}': {}\".format(self.conanfile, e))\n\n # Cache the template instance as a class attribute to greatly speed up the rendering\n # NOTE: this assumes that self.template always returns the same string\n template_instance = getattr(type(self), \"template_instance\", None)\n if template_instance is None:\n template_instance = Template(self.template, trim_blocks=True, lstrip_blocks=True,\n undefined=jinja2.StrictUndefined)\n setattr(type(self), \"template_instance\", template_instance)\n return template_instance.render(context)\n\n def context(self):\n raise NotImplementedError()\n\n @property\n def template(self):\n raise NotImplementedError()\n\n @property\n def filename(self):\n raise NotImplementedError()\n\n @property\n def configuration(self):\n return self.cmakedeps.configuration\n\n @property\n def arch(self):\n return self.cmakedeps.arch\n\n @property\n def config_suffix(self):\n return \"_{}\".format(self.configuration.upper()) if self.configuration else \"\"\n\n @staticmethod\n def _get_target_default_name(req, component_name=\"\", suffix=\"\"):\n return \"{name}{suffix}::{cname}{suffix}\".format(cname=component_name or req.ref.name,\n name=req.ref.name, suffix=suffix)\n\n def get_root_target_name(self, req, suffix=\"\"):\n if self.generating_module:\n ret = self.cmakedeps.get_property(\"cmake_module_target_name\", req)\n if ret:\n return ret\n ret = self.cmakedeps.get_property(\"cmake_target_name\", req)\n return ret or self._get_target_default_name(req, suffix=suffix)\n\n def get_component_alias(self, req, comp_name):\n if comp_name not in req.cpp_info.components:\n # foo::foo might be referencing the root cppinfo\n if req.ref.name == comp_name:\n return self.get_root_target_name(req)\n raise ConanException(\"Component '{name}::{cname}' not found in '{name}' \"\n \"package requirement\".format(name=req.ref.name, cname=comp_name))\n if self.generating_module:\n ret = self.cmakedeps.get_property(\"cmake_module_target_name\", req, comp_name=comp_name)\n if ret:\n return ret\n ret = self.cmakedeps.get_property(\"cmake_target_name\", req, comp_name=comp_name)\n\n # If we don't specify the `cmake_target_name` property for the component it will\n # fallback to the pkg_name::comp_name, it wont use the root cpp_info cmake_target_name\n # property because that is also an absolute name (Greetings::Greetings), it is not a namespace\n # and we don't want to split and do tricks.\n return ret or self._get_target_default_name(req, component_name=comp_name)\n", 
"conan/tools/cmake/cmakedeps/templates/config.py": "import textwrap\n\nfrom conan.tools.cmake.cmakedeps.templates import CMakeDepsFileTemplate\n\n\"\"\"\n\nFooConfig.cmake\nfoo-config.cmake\n\n\"\"\"\n\n\nclass ConfigTemplate(CMakeDepsFileTemplate):\n\n @property\n def filename(self):\n if self.generating_module:\n return \"Find{}.cmake\".format(self.file_name)\n else:\n if self.file_name == self.file_name.lower():\n return \"{}-config.cmake\".format(self.file_name)\n else:\n return \"{}Config.cmake\".format(self.file_name)\n\n @property\n def context(self):\n targets_include = \"\" if not self.generating_module else \"module-\"\n targets_include += \"{}Targets.cmake\".format(self.file_name)\n return {\"is_module\": self.generating_module,\n \"version\": self.conanfile.ref.version,\n \"file_name\": self.file_name,\n \"pkg_name\": self.pkg_name,\n \"config_suffix\": self.config_suffix,\n \"check_components_exist\": self.cmakedeps.check_components_exist,\n \"targets_include_file\": targets_include}\n\n @property\n def template(self):\n return textwrap.dedent(\"\"\"\\\n {%- macro pkg_var(pkg_name, var, config_suffix) -%}\n {{'${'+pkg_name+'_'+var+config_suffix+'}'}}\n {%- endmacro -%}\n ########## MACROS ###########################################################################\n #############################################################################################\n\n # Requires CMake > 3.15\n if(${CMAKE_VERSION} VERSION_LESS \"3.15\")\n message(FATAL_ERROR \"The 'CMakeDeps' generator only works with CMake >= 3.15\")\n endif()\n\n if({{ file_name }}_FIND_QUIETLY)\n set({{ file_name }}_MESSAGE_MODE VERBOSE)\n else()\n set({{ file_name }}_MESSAGE_MODE STATUS)\n endif()\n\n include(${CMAKE_CURRENT_LIST_DIR}/cmakedeps_macros.cmake)\n include(${CMAKE_CURRENT_LIST_DIR}/{{ targets_include_file }})\n include(CMakeFindDependencyMacro)\n\n check_build_type_defined()\n\n foreach(_DEPENDENCY {{ pkg_var(pkg_name, 'FIND_DEPENDENCY_NAMES', '') }} )\n # Check that we have not already called a find_package with the transitive dependency\n if(NOT {{ '${_DEPENDENCY}' }}_FOUND)\n find_dependency({{ '${_DEPENDENCY}' }} REQUIRED ${${_DEPENDENCY}_FIND_MODE})\n endif()\n endforeach()\n\n set({{ file_name }}_VERSION_STRING \"{{ version }}\")\n set({{ file_name }}_INCLUDE_DIRS {{ pkg_var(pkg_name, 'INCLUDE_DIRS', config_suffix) }} )\n set({{ file_name }}_INCLUDE_DIR {{ pkg_var(pkg_name, 'INCLUDE_DIRS', config_suffix) }} )\n set({{ file_name }}_LIBRARIES {{ pkg_var(pkg_name, 'LIBRARIES', config_suffix) }} )\n set({{ file_name }}_DEFINITIONS {{ pkg_var(pkg_name, 'DEFINITIONS', config_suffix) }} )\n\n # Only the first installed configuration is included to avoid the collision\n foreach(_BUILD_MODULE {{ pkg_var(pkg_name, 'BUILD_MODULES_PATHS', config_suffix) }} )\n message({% raw %}${{% endraw %}{{ file_name }}_MESSAGE_MODE} \"Conan: Including build module from '${_BUILD_MODULE}'\")\n include({{ '${_BUILD_MODULE}' }})\n endforeach()\n\n {% if check_components_exist %}\n # Check that the specified components in the find_package(Foo COMPONENTS x y z) are there\n # This is the variable filled by CMake with the requested components in find_package\n if({{ file_name }}_FIND_COMPONENTS)\n foreach(_FIND_COMPONENT {{ pkg_var(file_name, 'FIND_COMPONENTS', '') }})\n if (TARGET ${_FIND_COMPONENT})\n message({% raw %}${{% endraw %}{{ file_name }}_MESSAGE_MODE} \"Conan: Component '${_FIND_COMPONENT}' found in package '{{ pkg_name }}'\")\n else()\n message(FATAL_ERROR \"Conan: Component '${_FIND_COMPONENT}' NOT found in 
package '{{ pkg_name }}'\")\n endif()\n endforeach()\n endif()\n {% endif %}\n\n {% if is_module %}\n include(FindPackageHandleStandardArgs)\n set({{ file_name }}_FOUND 1)\n set({{ file_name }}_VERSION \"{{ version }}\")\n\n find_package_handle_standard_args({{ file_name }}\n REQUIRED_VARS {{ file_name }}_VERSION\n VERSION_VAR {{ file_name }}_VERSION)\n mark_as_advanced({{ file_name }}_FOUND {{ file_name }}_VERSION)\n {% endif %}\n \"\"\")\n"}
|
{"conan/tools/cmake/cmakedeps/templates/__init__.py": [{"type": "function", "name": "CMakeDepsFileTemplate.additional_variables_prefixes", "lines": [28, 31], "signature": "def additional_variables_prefixes(self):", "doc": ""}]}
| null |
["conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::test_cmakedeps_set_legacy_variable_name"]
|
["conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::test_package_from_system", "conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::test_test_package", "conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::test_components_error", "conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::test_cpp_info_component_objects", "conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::test_cpp_info_component_error_aggregate", "conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::test_cmakedeps_cppinfo_complex_strings", "conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::test_dependency_props_from_consumer", "conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::test_props_from_consumer_build_context_activated", "conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::test_skip_transitive_components", "conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::test_error_missing_config_build_context", "conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::test_using_package_module", "conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::test_system_libs_transitivity", "conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::TestCMakeVersionConfigCompat::test_cmake_version_config_compatibility", "conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::TestCMakeVersionConfigCompat::test_cmake_version_config_compatibility_error", "conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::TestCMakeVersionConfigCompat::test_cmake_version_config_compatibility_consumer", "conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::TestSystemPackageVersion::test_component_version", "conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::TestSystemPackageVersion::test_component_version_consumer", "conans/test/integration/toolchains/cmake/cmakedeps/test_cmakedeps.py::test_cmakedeps_set_property_overrides"]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1715257479.0, "pr_title": "Add property to change the CMake variable names generated with CMakeDeps", "pr_body": "Changelog: Feature: Add `cmake_additional_variables_prefixes` variable to CMakeDeps generator to allow adding extra names for declared CMake variables.\r\nDocs: https://github.com/conan-io/docs/pull/3721\r\n\r\nCreated new property for CMakeDeps generator: `cmake_legacy_variable_prefix` that can be set in `package_info()` of the conanfile.\r\nIt allows changing the prefix used when creating CMake variables instead of the package name that was currently being used.\r\n\r\nCloses: https://github.com/conan-io/conan/issues/14788\r\n\r\n- [x] Refer to the issue that supports this Pull Request.\r\n- [x] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.\r\n- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop2/.github/CONTRIBUTING.md).\r\n- [x] I've followed the PEP8 style guides for Python code.\r\n- [x] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.", "pr_timeline": [{"time": 1715638929.0, "comment": "Just wanted to say thanks for adding this feature. The simple difference in capitalization of variable prefixes has been by far the most common reason for having to apply patches to public CMake projects on CCI. Also the most common reason for having to include custom CMake modules in the recipe. Two birds with one stone. Thanks!"}], "issues": {}}
|
|
conan-io/conan
| 16871
|
https://github.com/conan-io/conan/pull/16871
|
conan-io__conan-16871
|
[]
|
d99c7149ff64c2dd98b5d9752f5308f8ffd70474
|
diff --git a/conans/client/graph/build_mode.py b/conans/client/graph/build_mode.py
index f7a3cd61f81..8efbd99da3b 100644
--- a/conans/client/graph/build_mode.py
+++ b/conans/client/graph/build_mode.py
@@ -16,6 +16,8 @@ def __init__(self, params):
self.patterns = []
self.build_missing_patterns = []
self._build_missing_excluded = []
+ self._build_compatible_patterns = []
+ self._build_compatible_excluded = []
self._excluded_patterns = []
if params is None:
return
@@ -39,6 +41,14 @@ def __init__(self, params):
self._build_missing_excluded.append(clean_pattern[1:])
else:
self.build_missing_patterns.append(clean_pattern)
+ elif param == "compatible":
+ self._build_compatible_patterns = ["*"]
+ elif param.startswith("compatible:"):
+ clean_pattern = param[len("compatible:"):]
+ if clean_pattern and clean_pattern[0] in ["!", "~"]:
+ self._build_compatible_excluded.append(clean_pattern[1:])
+ else:
+ self._build_compatible_patterns.append(clean_pattern)
else:
clean_pattern = param
if clean_pattern and clean_pattern[0] in ["!", "~"]:
@@ -87,8 +97,21 @@ def allowed(self, conan_file):
return True
if self.should_build_missing(conan_file):
return True
+ if self.allowed_compatible(conan_file):
+ return True
return False
+ def allowed_compatible(self, conanfile):
+ if self._build_compatible_excluded:
+ for pattern in self._build_compatible_excluded:
+ if ref_matches(conanfile.ref, pattern, is_consumer=False):
+ return False
+ return True # If it has not been excluded by the negated patterns, it is included
+
+ for pattern in self._build_compatible_patterns:
+ if ref_matches(conanfile.ref, pattern, is_consumer=conanfile._conan_is_consumer):
+ return True
+
def should_build_missing(self, conanfile):
if self._build_missing_excluded:
for pattern in self._build_missing_excluded:
diff --git a/conans/client/graph/compute_pid.py b/conans/client/graph/compute_pid.py
index 2ed1d6a5f4c..5855eafb63e 100644
--- a/conans/client/graph/compute_pid.py
+++ b/conans/client/graph/compute_pid.py
@@ -58,15 +58,6 @@ def compute_package_id(node, new_config, config_version):
config_version=config_version.copy() if config_version else None)
conanfile.original_info = conanfile.info.clone()
- if hasattr(conanfile, "validate_build"):
- with conanfile_exception_formatter(conanfile, "validate_build"):
- with conanfile_remove_attr(conanfile, ['cpp_info'], "validate_build"):
- try:
- conanfile.validate_build()
- except ConanInvalidConfiguration as e:
- # This 'cant_build' will be ignored if we don't have to build the node.
- conanfile.info.cant_build = str(e)
-
run_validate_package_id(conanfile)
if conanfile.info.settings_target:
@@ -79,6 +70,15 @@ def compute_package_id(node, new_config, config_version):
def run_validate_package_id(conanfile):
# IMPORTANT: This validation code must run before calling info.package_id(), to mark "invalid"
+ if hasattr(conanfile, "validate_build"):
+ with conanfile_exception_formatter(conanfile, "validate_build"):
+ with conanfile_remove_attr(conanfile, ['cpp_info'], "validate_build"):
+ try:
+ conanfile.validate_build()
+ except ConanInvalidConfiguration as e:
+ # This 'cant_build' will be ignored if we don't have to build the node.
+ conanfile.info.cant_build = str(e)
+
if hasattr(conanfile, "validate"):
with conanfile_exception_formatter(conanfile, "validate"):
with conanfile_remove_attr(conanfile, ['cpp_info'], "validate"):
diff --git a/conans/client/graph/graph_binaries.py b/conans/client/graph/graph_binaries.py
index f10991628e6..cecd9f88efa 100644
--- a/conans/client/graph/graph_binaries.py
+++ b/conans/client/graph/graph_binaries.py
@@ -21,7 +21,7 @@
from conans.util.files import load
-class GraphBinariesAnalyzer(object):
+class GraphBinariesAnalyzer:
def __init__(self, conan_app, global_conf):
self._cache = conan_app.cache
@@ -145,7 +145,7 @@ def _find_existing_compatible_binaries(self, node, compatibles, remotes, update)
conanfile = node.conanfile
original_binary = node.binary
original_package_id = node.package_id
-
+ conanfile.output.info(f"Main binary package '{original_package_id}' missing")
conanfile.output.info(f"Checking {len(compatibles)} compatible configurations")
for package_id, compatible_package in compatibles.items():
if should_update_reference(node.ref, update):
@@ -173,24 +173,47 @@ def _find_existing_compatible_binaries(self, node, compatibles, remotes, update)
node.binary = original_binary
node._package_id = original_package_id
+ def _find_build_compatible_binary(self, node, compatibles):
+ original_binary = node.binary
+ original_package_id = node.package_id
+ output = node.conanfile.output
+ output.info(f"Requested binary package '{original_package_id}' invalid, can't be built")
+ output.info(f"Checking {len(compatibles)} configurations, to build a compatible one, "
+ f"as requested by '--build=compatible'")
+ for pkg_id, compatible in compatibles.items():
+ if not compatible.cant_build:
+ node._package_id = pkg_id # Modifying package id under the hood, FIXME
+ self._compatible_found(node.conanfile, pkg_id, compatible)
+ node.binary = BINARY_BUILD
+ return
+ node.binary = original_binary
+ node._package_id = original_package_id
+
def _evaluate_node(self, node, build_mode, remotes, update):
assert node.binary is None, "Node.binary should be None"
assert node.package_id is not None, "Node.package_id shouldn't be None"
assert node.prev is None, "Node.prev should be None"
self._process_node(node, build_mode, remotes, update)
- original_package_id = node.package_id
+ compatibles = None
+
if node.binary == BINARY_MISSING \
and not build_mode.should_build_missing(node.conanfile) and not node.should_build:
compatibles = self._get_compatible_packages(node)
- node.conanfile.output.info(f"Main binary package '{original_package_id}' missing")
- self._find_existing_compatible_binaries(node, compatibles, remotes, update)
+ if compatibles:
+ self._find_existing_compatible_binaries(node, compatibles, remotes, update)
if node.binary == BINARY_MISSING and build_mode.allowed(node.conanfile):
node.should_build = True
node.build_allowed = True
node.binary = BINARY_BUILD if not node.conanfile.info.cant_build else BINARY_INVALID
+ if node.binary == BINARY_INVALID and build_mode.allowed_compatible(node.conanfile):
+ if compatibles is None:
+ compatibles = self._get_compatible_packages(node)
+ if compatibles:
+ self._find_build_compatible_binary(node, compatibles)
+
if node.binary == BINARY_BUILD:
conanfile = node.conanfile
if conanfile.vendor and not conanfile.conf.get("tools.graph:vendor", choices=("build",)):
|
diff --git a/test/integration/package_id/compatible_test.py b/test/integration/package_id/compatible_test.py
index 5e27f7816f6..b7719e41711 100644
--- a/test/integration/package_id/compatible_test.py
+++ b/test/integration/package_id/compatible_test.py
@@ -448,3 +448,173 @@ def test_compatibility_msvc_and_cppstd(self):
tc.run("create dep -pr=profile -s compiler.cppstd=20")
tc.run("create . -pr=profile -s compiler.cppstd=17")
tc.assert_listed_binary({"dep/1.0": ("b6d26a6bc439b25b434113982791edf9cab4d004", "Cache")})
+
+
+class TestCompatibleBuild:
+ def test_build_compatible(self):
+ c = TestClient()
+ conanfile = textwrap.dedent("""
+ from conan import ConanFile
+ from conan.tools.build import check_min_cppstd
+
+ class Pkg(ConanFile):
+ name = "pkg"
+ version = "0.1"
+ settings = "os", "compiler"
+
+ def validate(self):
+ check_min_cppstd(self, 14)
+ """)
+ c.save({"conanfile.py": conanfile})
+ settings = "-s os=Windows -s compiler=gcc -s compiler.version=11 " \
+ "-s compiler.libcxx=libstdc++11 -s compiler.cppstd=11"
+ c.run(f"create . {settings}", assert_error=True)
+ c.assert_listed_binary({"pkg/0.1": ("bb33db23c961978d08dc0cdd6bc786b45b3e5943", "Invalid")})
+ assert "pkg/0.1: Invalid: Current cppstd (11)" in c.out
+
+ c.run(f"create . {settings} --build=compatible:&")
+ # the one for cppstd=14 is built!!
+ c.assert_listed_binary({"pkg/0.1": ("389803bed06200476fcee1af2023d4e9bfa24ff9", "Build")})
+ c.run("list *:*")
+ assert "compiler.cppstd: 14" in c.out
+
+ def test_build_compatible_cant_build(self):
+ # requires c++17 to build, can be consumed with c++14
+ c = TestClient()
+ conanfile = textwrap.dedent("""
+ from conan import ConanFile
+ from conan.tools.build import check_min_cppstd
+
+ class Pkg(ConanFile):
+ name = "pkg"
+ version = "0.1"
+ settings = "os", "compiler"
+
+ def validate(self):
+ check_min_cppstd(self, 14)
+
+ def validate_build(self):
+ check_min_cppstd(self, 17)
+ """)
+ c.save({"conanfile.py": conanfile})
+ settings = "-s os=Windows -s compiler=gcc -s compiler.version=11 " \
+ "-s compiler.libcxx=libstdc++11 -s compiler.cppstd=11"
+ c.run(f"create . {settings}", assert_error=True)
+ c.assert_listed_binary({"pkg/0.1": ("bb33db23c961978d08dc0cdd6bc786b45b3e5943", "Invalid")})
+ assert "pkg/0.1: Invalid: Current cppstd (11)" in c.out
+
+ c.run(f"create . {settings} --build=missing", assert_error=True)
+ c.assert_listed_binary({"pkg/0.1": ("bb33db23c961978d08dc0cdd6bc786b45b3e5943", "Invalid")})
+ assert "pkg/0.1: Invalid: Current cppstd (11)" in c.out
+
+ c.run(f"create . {settings} --build=compatible:&")
+ # the one for cppstd=17 is built!!
+ c.assert_listed_binary({"pkg/0.1": ("58fb8ac6c2dc3e3f837253ce1a6ea59011525866", "Build")})
+ c.run("list *:*")
+ assert "compiler.cppstd: 17" in c.out
+
+ def test_build_compatible_cant_build2(self):
+ # requires c++17 to build, can be consumed with c++11
+ c = TestClient()
+ conanfile = textwrap.dedent("""
+ from conan import ConanFile
+ from conan.tools.build import check_min_cppstd
+
+ class Pkg(ConanFile):
+ name = "pkg"
+ version = "0.1"
+ settings = "os", "compiler"
+
+ def validate(self):
+ check_min_cppstd(self, 11)
+
+ def validate_build(self):
+ check_min_cppstd(self, 17)
+ """)
+ c.save({"conanfile.py": conanfile})
+ settings = "-s os=Windows -s compiler=gcc -s compiler.version=11 " \
+ "-s compiler.libcxx=libstdc++11 -s compiler.cppstd=11"
+ c.run(f"create . {settings}", assert_error=True)
+ c.assert_listed_binary({"pkg/0.1": ("bb33db23c961978d08dc0cdd6bc786b45b3e5943", "Invalid")})
+ assert "pkg/0.1: Cannot build for this configuration: Current cppstd (11)" in c.out
+
+ c.run(f"create . {settings} --build=missing", assert_error=True)
+ # the one for cppstd=17 is built!!
+ c.assert_listed_binary({"pkg/0.1": ("bb33db23c961978d08dc0cdd6bc786b45b3e5943", "Invalid")})
+ assert "pkg/0.1: Cannot build for this configuration: Current cppstd (11)" in c.out
+
+ c.run(f"create . {settings} --build=compatible:&")
+ # the one for cppstd=17 is built!!
+ c.assert_listed_binary({"pkg/0.1": ("58fb8ac6c2dc3e3f837253ce1a6ea59011525866", "Build")})
+ c.run("list *:*")
+ assert "compiler.cppstd: 17" in c.out
+
+ def test_build_compatible_cant_build_only(self):
+ # requires c++17 to build, but don't specify consumption
+ c = TestClient()
+ conanfile = textwrap.dedent("""
+ from conan import ConanFile
+ from conan.tools.build import check_min_cppstd
+
+ class Pkg(ConanFile):
+ name = "pkg"
+ version = "0.1"
+ settings = "os", "compiler"
+
+ def validate_build(self):
+ check_min_cppstd(self, 17)
+ """)
+ c.save({"conanfile.py": conanfile})
+ settings = "-s os=Windows -s compiler=gcc -s compiler.version=11 " \
+ "-s compiler.libcxx=libstdc++11 -s compiler.cppstd=11"
+ c.run(f"create . {settings}", assert_error=True)
+ c.assert_listed_binary({"pkg/0.1": ("bb33db23c961978d08dc0cdd6bc786b45b3e5943", "Invalid")})
+ assert "pkg/0.1: Cannot build for this configuration: Current cppstd (11)" in c.out
+
+ c.run(f"create . {settings} --build=missing", assert_error=True)
+ # the one for cppstd=17 is built!!
+ c.assert_listed_binary({"pkg/0.1": ("bb33db23c961978d08dc0cdd6bc786b45b3e5943", "Invalid")})
+ assert "pkg/0.1: Cannot build for this configuration: Current cppstd (11)" in c.out
+
+ c.run(f"create . {settings} --build=compatible:&")
+ # the one for cppstd=17 is built!!
+ c.assert_listed_binary({"pkg/0.1": ("58fb8ac6c2dc3e3f837253ce1a6ea59011525866", "Build")})
+ c.run("list *:*")
+ assert "compiler.cppstd: 17" in c.out
+
+ def test_multi_level_build_compatible(self):
+ c = TestClient()
+ conanfile = textwrap.dedent("""
+ from conan import ConanFile
+ from conan.tools.build import check_min_cppstd
+
+ class Pkg(ConanFile):
+ name = "{name}"
+ version = "0.1"
+ settings = "os", "compiler"
+ {requires}
+
+ def validate(self):
+ check_min_cppstd(self, {cppstd})
+ """)
+ c.save({"liba/conanfile.py": conanfile.format(name="liba", cppstd=14, requires=""),
+ "libb/conanfile.py": conanfile.format(name="libb", cppstd=17,
+ requires='requires="liba/0.1"')})
+ c.run("export liba")
+ c.run("export libb")
+ settings = "-s os=Windows -s compiler=gcc -s compiler.version=11 " \
+ "-s compiler.libcxx=libstdc++11 -s compiler.cppstd=11"
+ c.run(f"install --requires=libb/0.1 {settings}", assert_error=True)
+ c.assert_listed_binary({"liba/0.1": ("bb33db23c961978d08dc0cdd6bc786b45b3e5943", "Invalid"),
+ "libb/0.1": ("144910d65b27bcbf7d544201f5578555bbd0376e", "Invalid")})
+ assert "liba/0.1: Invalid: Current cppstd (11)" in c.out
+ assert "libb/0.1: Invalid: Current cppstd (11)" in c.out
+
+ c.run(f"install --requires=libb/0.1 {settings} --build=compatible")
+ # the one for cppstd=14 is built!!
+ c.assert_listed_binary({"liba/0.1": ("389803bed06200476fcee1af2023d4e9bfa24ff9", "Build"),
+ "libb/0.1": ("8f29f49be3ba2b6cbc9fa1e05432ce928b96ae5d", "Build")})
+ c.run("list liba:*")
+ assert "compiler.cppstd: 14" in c.out
+ c.run("list libb:*")
+ assert "compiler.cppstd: 17" in c.out
| 2024-08-25T21:10:13
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"conans/client/graph/build_mode.py": "from conans.errors import ConanException\nfrom conans.model.recipe_ref import ref_matches\n\n\nclass BuildMode:\n \"\"\" build_mode => [\"*\"] if user wrote \"--build=*\"\n => [\"hello\", \"bye\"] if user wrote \"--build hello --build bye\"\n => [\"hello/0.1@foo/bar\"] if user wrote \"--build hello/0.1@foo/bar\"\n => [\"!foo\"] or [\"~foo\"] means exclude when building all from sources\n \"\"\"\n def __init__(self, params):\n self.missing = False\n self.never = False\n self.cascade = False\n self.editable = False\n self.patterns = []\n self.build_missing_patterns = []\n self._build_missing_excluded = []\n self._excluded_patterns = []\n if params is None:\n return\n\n assert isinstance(params, list)\n assert len(params) > 0 # Not empty list\n\n for param in params:\n if param == \"missing\":\n self.missing = True\n elif param == \"editable\":\n self.editable = True\n elif param == \"never\":\n self.never = True\n elif param == \"cascade\":\n self.cascade = True\n else:\n if param.startswith(\"missing:\"):\n clean_pattern = param[len(\"missing:\"):]\n if clean_pattern and clean_pattern[0] in [\"!\", \"~\"]:\n self._build_missing_excluded.append(clean_pattern[1:])\n else:\n self.build_missing_patterns.append(clean_pattern)\n else:\n clean_pattern = param\n if clean_pattern and clean_pattern[0] in [\"!\", \"~\"]:\n self._excluded_patterns.append(clean_pattern[1:])\n else:\n self.patterns.append(clean_pattern)\n\n if self.never and (self.missing or self.patterns or self.cascade):\n raise ConanException(\"--build=never not compatible with other options\")\n\n def forced(self, conan_file, ref, with_deps_to_build=False):\n # TODO: ref can be obtained from conan_file\n\n for pattern in self._excluded_patterns:\n if ref_matches(ref, pattern, is_consumer=conan_file._conan_is_consumer):\n conan_file.output.info(\"Excluded build from source\")\n return False\n\n if conan_file.build_policy == \"never\": # this package has been export-pkg\n return False\n\n if self.never:\n return False\n\n if conan_file.build_policy == \"always\":\n raise ConanException(\"{}: build_policy='always' has been removed. 
\"\n \"Please use 'missing' only\".format(conan_file))\n\n if self.cascade and with_deps_to_build:\n return True\n\n # Patterns to match, if package matches pattern, build is forced\n for pattern in self.patterns:\n if ref_matches(ref, pattern, is_consumer=conan_file._conan_is_consumer):\n return True\n return False\n\n def allowed(self, conan_file):\n if self.never or conan_file.build_policy == \"never\": # this package has been export-pkg\n return False\n if self.missing:\n return True\n if conan_file.build_policy == \"missing\":\n conan_file.output.info(\"Building package from source as defined by \"\n \"build_policy='missing'\")\n return True\n if self.should_build_missing(conan_file):\n return True\n return False\n\n def should_build_missing(self, conanfile):\n if self._build_missing_excluded:\n for pattern in self._build_missing_excluded:\n if ref_matches(conanfile.ref, pattern, is_consumer=False):\n return False\n return True # If it has not been excluded by the negated patterns, it is included\n\n for pattern in self.build_missing_patterns:\n if ref_matches(conanfile.ref, pattern, is_consumer=conanfile._conan_is_consumer):\n return True\n", "conans/client/graph/compute_pid.py": "from collections import OrderedDict\n\nfrom conans.errors import conanfile_exception_formatter, ConanInvalidConfiguration, \\\n conanfile_remove_attr, ConanException\nfrom conans.model.info import ConanInfo, RequirementsInfo, RequirementInfo, PythonRequiresInfo\nfrom conans.client.conanfile.implementations import auto_header_only_package_id\n\n\ndef compute_package_id(node, new_config, config_version):\n \"\"\"\n Compute the binary package ID of this node\n \"\"\"\n conanfile = node.conanfile\n\n # TODO: Extract\n unknown_mode = new_config.get(\"core.package_id:default_unknown_mode\", default=\"semver_mode\")\n non_embed_mode = new_config.get(\"core.package_id:default_non_embed_mode\", default=\"minor_mode\")\n # recipe_revision_mode already takes into account the package_id\n embed_mode = new_config.get(\"core.package_id:default_embed_mode\", default=\"full_mode\")\n python_mode = new_config.get(\"core.package_id:default_python_mode\", default=\"minor_mode\")\n build_mode = new_config.get(\"core.package_id:default_build_mode\", default=None)\n\n python_requires = getattr(conanfile, \"python_requires\", None)\n if python_requires:\n python_requires = python_requires.info_requires()\n\n data = OrderedDict()\n build_data = OrderedDict()\n for require, transitive in node.transitive_deps.items():\n dep_node = transitive.node\n require.deduce_package_id_mode(conanfile.package_type, dep_node,\n non_embed_mode, embed_mode, build_mode, unknown_mode)\n if require.package_id_mode is not None:\n req_info = RequirementInfo(dep_node.pref.ref, dep_node.pref.package_id,\n require.package_id_mode)\n if require.build:\n build_data[require] = req_info\n else:\n data[require] = req_info\n\n if conanfile.vendor: # Make the package_id fully independent of dependencies versions\n data, build_data = OrderedDict(), OrderedDict() # TODO, cleaner, now minimal diff\n\n reqs_info = RequirementsInfo(data)\n build_requires_info = RequirementsInfo(build_data)\n python_requires = PythonRequiresInfo(python_requires, python_mode)\n try:\n copied_options = conanfile.options.copy_conaninfo_options()\n except ConanException as e:\n raise ConanException(f\"{conanfile}: {e}\")\n\n conanfile.info = ConanInfo(settings=conanfile.settings.copy_conaninfo_settings(),\n options=copied_options,\n reqs_info=reqs_info,\n 
build_requires_info=build_requires_info,\n python_requires=python_requires,\n conf=conanfile.conf.copy_conaninfo_conf(),\n config_version=config_version.copy() if config_version else None)\n conanfile.original_info = conanfile.info.clone()\n\n if hasattr(conanfile, \"validate_build\"):\n with conanfile_exception_formatter(conanfile, \"validate_build\"):\n with conanfile_remove_attr(conanfile, ['cpp_info'], \"validate_build\"):\n try:\n conanfile.validate_build()\n except ConanInvalidConfiguration as e:\n # This 'cant_build' will be ignored if we don't have to build the node.\n conanfile.info.cant_build = str(e)\n\n run_validate_package_id(conanfile)\n\n if conanfile.info.settings_target:\n # settings_target has beed added to conan package via package_id api\n conanfile.original_info.settings_target = conanfile.info.settings_target\n\n info = conanfile.info\n node.package_id = info.package_id()\n\n\ndef run_validate_package_id(conanfile):\n # IMPORTANT: This validation code must run before calling info.package_id(), to mark \"invalid\"\n if hasattr(conanfile, \"validate\"):\n with conanfile_exception_formatter(conanfile, \"validate\"):\n with conanfile_remove_attr(conanfile, ['cpp_info'], \"validate\"):\n try:\n conanfile.validate()\n except ConanInvalidConfiguration as e:\n conanfile.info.invalid = str(e)\n\n # Once we are done, call package_id() to narrow and change possible values\n if hasattr(conanfile, \"package_id\"):\n with conanfile_exception_formatter(conanfile, \"package_id\"):\n with conanfile_remove_attr(conanfile, ['cpp_info', 'settings', 'options'], \"package_id\"):\n conanfile.package_id()\n elif \"auto_header_only\" in conanfile.implements:\n auto_header_only_package_id(conanfile)\n\n conanfile.info.validate()\n", "conans/client/graph/graph_binaries.py": "import json\nimport os\nfrom collections import OrderedDict\n\nfrom conan.api.output import ConanOutput\nfrom conan.internal.cache.home_paths import HomePaths\nfrom conans.client.graph.build_mode import BuildMode\nfrom conans.client.graph.compatibility import BinaryCompatibility\nfrom conans.client.graph.compute_pid import compute_package_id\nfrom conans.client.graph.graph import (BINARY_BUILD, BINARY_CACHE, BINARY_DOWNLOAD, BINARY_MISSING,\n BINARY_UPDATE, RECIPE_EDITABLE, BINARY_EDITABLE,\n RECIPE_CONSUMER, RECIPE_VIRTUAL, BINARY_SKIP,\n BINARY_INVALID, BINARY_EDITABLE_BUILD, RECIPE_PLATFORM,\n BINARY_PLATFORM)\nfrom conans.client.graph.proxy import should_update_reference\nfrom conans.errors import NoRemoteAvailable, NotFoundException, \\\n PackageNotFoundException, conanfile_exception_formatter, ConanConnectionError, ConanException\nfrom conans.model.info import RequirementInfo, RequirementsInfo\nfrom conans.model.package_ref import PkgReference\nfrom conans.model.recipe_ref import RecipeReference\nfrom conans.util.files import load\n\n\nclass GraphBinariesAnalyzer(object):\n\n def __init__(self, conan_app, global_conf):\n self._cache = conan_app.cache\n self._home_folder = conan_app.cache_folder\n self._global_conf = global_conf\n self._remote_manager = conan_app.remote_manager\n # These are the nodes with pref (not including PREV) that have been evaluated\n self._evaluated = {} # {pref: [nodes]}\n compat_folder = HomePaths(conan_app.cache_folder).compatibility_plugin_path\n self._compatibility = BinaryCompatibility(compat_folder)\n\n @staticmethod\n def _evaluate_build(node, build_mode):\n ref, conanfile = node.ref, node.conanfile\n with_deps_to_build = False\n # check dependencies, if they are being built, 
\"cascade\" will try to build this one too\n if build_mode.cascade:\n with_deps_to_build = any(dep.dst.binary in (BINARY_BUILD, BINARY_EDITABLE_BUILD)\n for dep in node.dependencies)\n if build_mode.forced(conanfile, ref, with_deps_to_build):\n node.should_build = True\n conanfile.output.info('Forced build from source')\n node.binary = BINARY_BUILD if not conanfile.info.cant_build else BINARY_INVALID\n node.prev = None\n return True\n\n @staticmethod\n def _evaluate_clean_pkg_folder_dirty(node, package_layout):\n # Check if dirty, to remove it\n with package_layout.package_lock():\n assert node.recipe != RECIPE_EDITABLE, \"Editable package shouldn't reach this code\"\n if package_layout.package_is_dirty():\n node.conanfile.output.warning(\"Package binary is corrupted, \"\n \"removing: %s\" % node.package_id)\n package_layout.package_remove()\n return True\n\n # check through all the selected remotes:\n # - if not --update: get the first package found\n # - if --update: get the latest remote searching in all of them\n def _get_package_from_remotes(self, node, remotes, update):\n results = []\n pref = node.pref\n for r in remotes:\n try:\n info = node.conanfile.info\n latest_pref = self._remote_manager.get_latest_package_reference(pref, r, info)\n results.append({'pref': latest_pref, 'remote': r})\n if len(results) > 0 and not should_update_reference(node.ref, update):\n break\n except NotFoundException:\n pass\n except ConanConnectionError:\n ConanOutput().error(f\"Failed checking for binary '{pref}' in remote '{r.name}': \"\n \"remote not available\")\n raise\n if not remotes and should_update_reference(node.ref, update):\n node.conanfile.output.warning(\"Can't update, there are no remotes defined\")\n\n if len(results) > 0:\n remotes_results = sorted(results, key=lambda k: k['pref'].timestamp, reverse=True)\n result = remotes_results[0]\n node.prev = result.get(\"pref\").revision\n node.pref_timestamp = result.get(\"pref\").timestamp\n node.binary_remote = result.get('remote')\n else:\n node.binary_remote = None\n node.prev = None\n raise PackageNotFoundException(pref)\n\n def _evaluate_is_cached(self, node):\n \"\"\" Each pref has to be evaluated just once, and the action for all of them should be\n exactly the same\n \"\"\"\n pref = node.pref\n previous_nodes = self._evaluated.get(pref)\n if previous_nodes:\n previous_nodes.append(node)\n previous_node = previous_nodes[0]\n node.binary = previous_node.binary\n node.binary_remote = previous_node.binary_remote\n node.prev = previous_node.prev\n node.pref_timestamp = previous_node.pref_timestamp\n node.should_build = previous_node.should_build\n node.build_allowed = previous_node.build_allowed\n\n # this line fixed the compatible_packages with private case.\n # https://github.com/conan-io/conan/issues/9880\n node._package_id = previous_node.package_id\n return True\n self._evaluated[pref] = [node]\n\n def _get_compatible_packages(self, node):\n conanfile = node.conanfile\n original_package_id = node.package_id\n\n compatibles = self._compatibility.compatibles(conanfile)\n existing = compatibles.pop(original_package_id, None) # Skip main package_id\n if existing: # Skip the check if same package_id\n conanfile.output.debug(f\"Compatible package ID {original_package_id} equal to \"\n \"the default package ID: Skipping it.\")\n return compatibles\n\n @staticmethod\n def _compatible_found(conanfile, pkg_id, compatible_pkg):\n diff = conanfile.info.dump_diff(compatible_pkg)\n conanfile.output.success(f\"Found compatible package '{pkg_id}': 
{diff}\")\n # So they are available in package_info() method\n conanfile.info = compatible_pkg # Redefine current\n\n # TODO: Improve this interface\n # The package_id method might have modified the settings to erase information,\n # ensure we allow those new values\n conanfile.settings = conanfile.settings.copy_conaninfo_settings()\n conanfile.settings.update_values(compatible_pkg.settings.values_list)\n # Trick to allow mutating the options (they were freeze=True)\n conanfile.options = conanfile.options.copy_conaninfo_options()\n conanfile.options.update_options(compatible_pkg.options)\n\n def _find_existing_compatible_binaries(self, node, compatibles, remotes, update):\n conanfile = node.conanfile\n original_binary = node.binary\n original_package_id = node.package_id\n\n conanfile.output.info(f\"Checking {len(compatibles)} compatible configurations\")\n for package_id, compatible_package in compatibles.items():\n if should_update_reference(node.ref, update):\n conanfile.output.info(f\"'{package_id}': \"\n f\"{conanfile.info.dump_diff(compatible_package)}\")\n node._package_id = package_id # Modifying package id under the hood, FIXME\n node.binary = None # Invalidate it\n self._process_compatible_node(node, remotes, update) # TODO: what if BINARY_BUILD\n if node.binary in (BINARY_CACHE, BINARY_UPDATE, BINARY_DOWNLOAD):\n self._compatible_found(conanfile, package_id, compatible_package)\n return\n if not should_update_reference(conanfile.ref, update):\n conanfile.output.info(f\"Compatible configurations not found in cache, checking servers\")\n for package_id, compatible_package in compatibles.items():\n conanfile.output.info(f\"'{package_id}': \"\n f\"{conanfile.info.dump_diff(compatible_package)}\")\n node._package_id = package_id # Modifying package id under the hood, FIXME\n node.binary = None # Invalidate it\n self._evaluate_download(node, remotes, update)\n if node.binary == BINARY_DOWNLOAD:\n self._compatible_found(conanfile, package_id, compatible_package)\n return\n\n # If no compatible is found, restore original state\n node.binary = original_binary\n node._package_id = original_package_id\n\n def _evaluate_node(self, node, build_mode, remotes, update):\n assert node.binary is None, \"Node.binary should be None\"\n assert node.package_id is not None, \"Node.package_id shouldn't be None\"\n assert node.prev is None, \"Node.prev should be None\"\n\n self._process_node(node, build_mode, remotes, update)\n original_package_id = node.package_id\n if node.binary == BINARY_MISSING \\\n and not build_mode.should_build_missing(node.conanfile) and not node.should_build:\n compatibles = self._get_compatible_packages(node)\n node.conanfile.output.info(f\"Main binary package '{original_package_id}' missing\")\n self._find_existing_compatible_binaries(node, compatibles, remotes, update)\n\n if node.binary == BINARY_MISSING and build_mode.allowed(node.conanfile):\n node.should_build = True\n node.build_allowed = True\n node.binary = BINARY_BUILD if not node.conanfile.info.cant_build else BINARY_INVALID\n\n if node.binary == BINARY_BUILD:\n conanfile = node.conanfile\n if conanfile.vendor and not conanfile.conf.get(\"tools.graph:vendor\", choices=(\"build\",)):\n node.conanfile.info.invalid = f\"The package '{conanfile.ref}' is a vendoring one, \" \\\n f\"needs to be built from source, but it \" \\\n \"didn't enable 'tools.graph:vendor=build' to compute \" \\\n \"its dependencies\"\n node.binary = BINARY_INVALID\n\n def _process_node(self, node, build_mode, remotes, update):\n # Check that 
this same reference hasn't already been checked\n if self._evaluate_is_cached(node):\n return\n\n if node.conanfile.info.invalid:\n node.binary = BINARY_INVALID\n return\n if node.recipe == RECIPE_PLATFORM:\n node.binary = BINARY_PLATFORM\n return\n\n if node.recipe == RECIPE_EDITABLE:\n # TODO: Check what happens when editable is passed an Invalid configuration\n if build_mode.editable or self._evaluate_build(node, build_mode) or \\\n build_mode.should_build_missing(node.conanfile):\n node.binary = BINARY_EDITABLE_BUILD\n else:\n node.binary = BINARY_EDITABLE # TODO: PREV?\n return\n\n # If the CLI says this package needs to be built, it doesn't make sense to mark\n # it as invalid\n if self._evaluate_build(node, build_mode):\n return\n\n # Obtain the cache_latest valid one, cleaning things if dirty\n while True:\n cache_latest_prev = self._cache.get_latest_package_reference(node.pref)\n if cache_latest_prev is None:\n break\n package_layout = self._cache.pkg_layout(cache_latest_prev)\n if not self._evaluate_clean_pkg_folder_dirty(node, package_layout):\n break\n\n if node.conanfile.upload_policy == \"skip\":\n # Download/update shouldn't be checked in the servers if this is \"skip-upload\"\n # The binary can only be in cache or missing.\n if cache_latest_prev:\n node.binary = BINARY_CACHE\n node.prev = cache_latest_prev.revision\n else:\n node.binary = BINARY_MISSING\n elif cache_latest_prev is None: # This binary does NOT exist in the cache\n self._evaluate_download(node, remotes, update)\n else: # This binary already exists in the cache, maybe can be updated\n self._evaluate_in_cache(cache_latest_prev, node, remotes, update)\n\n def _process_compatible_node(self, node, remotes, update):\n \"\"\" simplified checking of compatible_packages, that should be found existing, but\n will never be built, for example. 
They cannot be editable either at this point.\n \"\"\"\n # Check that this same reference hasn't already been checked\n if self._evaluate_is_cached(node):\n return\n\n # TODO: Test that this works\n if node.conanfile.info.invalid:\n node.binary = BINARY_INVALID\n return\n\n # Obtain the cache_latest valid one, cleaning things if dirty\n while True:\n cache_latest_prev = self._cache.get_latest_package_reference(node.pref)\n if cache_latest_prev is None:\n break\n package_layout = self._cache.pkg_layout(cache_latest_prev)\n if not self._evaluate_clean_pkg_folder_dirty(node, package_layout):\n break\n\n if cache_latest_prev is not None:\n # This binary already exists in the cache, maybe can be updated\n self._evaluate_in_cache(cache_latest_prev, node, remotes, update)\n elif should_update_reference(node.ref, update):\n self._evaluate_download(node, remotes, update)\n\n def _process_locked_node(self, node, build_mode, locked_prev):\n # Check that this same reference hasn't already been checked\n if self._evaluate_is_cached(node):\n return\n\n # If the CLI says this package needs to be built, it doesn't make sense to mark\n # it as invalid\n if self._evaluate_build(node, build_mode):\n # TODO: We migth want to rais if strict\n return\n\n if node.recipe == RECIPE_EDITABLE:\n # TODO: Raise if strict\n node.binary = BINARY_EDITABLE # TODO: PREV?\n return\n\n # in cache:\n node.prev = locked_prev\n if self._cache.exists_prev(node.pref):\n node.binary = BINARY_CACHE\n node.binary_remote = None\n # TODO: Dirty\n return\n\n # TODO: Check in remotes for download\n\n def _evaluate_download(self, node, remotes, update):\n try:\n self._get_package_from_remotes(node, remotes, update)\n except NotFoundException:\n node.binary = BINARY_MISSING\n else:\n node.binary = BINARY_DOWNLOAD\n\n def _evaluate_in_cache(self, cache_latest_prev, node, remotes, update):\n assert cache_latest_prev.revision\n if should_update_reference(node.ref, update):\n output = node.conanfile.output\n try:\n self._get_package_from_remotes(node, remotes, update)\n except NotFoundException:\n output.warning(\"Can't update, no package in remote\")\n except NoRemoteAvailable:\n output.warning(\"Can't update, there are no remotes configured or enabled\")\n else:\n cache_time = cache_latest_prev.timestamp\n # TODO: cache 2.0 should we update the date if the prev is the same?\n if cache_time < node.pref_timestamp and cache_latest_prev != node.pref:\n node.binary = BINARY_UPDATE\n output.info(\"Current package revision is older than the remote one\")\n else:\n node.binary = BINARY_CACHE\n # The final data is the cache one, not the server one\n node.binary_remote = None\n node.prev = cache_latest_prev.revision\n if cache_time > node.pref_timestamp:\n output.info(\"Current package revision is newer than the remote one\")\n node.pref_timestamp = cache_time\n if not node.binary:\n node.binary = BINARY_CACHE\n node.binary_remote = None\n node.prev = cache_latest_prev.revision\n node.pref_timestamp = cache_latest_prev.timestamp\n assert node.prev, \"PREV for %s is None\" % str(node.pref)\n\n def _config_version(self):\n config_mode = self._global_conf.get(\"core.package_id:config_mode\", default=None)\n if config_mode is None:\n return\n config_version_file = HomePaths(self._home_folder).config_version_path\n try:\n config_refs = json.loads(load(config_version_file))[\"config_version\"]\n result = OrderedDict()\n for r in config_refs:\n try:\n config_ref = PkgReference.loads(r)\n req_info = RequirementInfo(config_ref.ref, config_ref.package_id, 
config_mode)\n except ConanException:\n config_ref = RecipeReference.loads(r)\n req_info = RequirementInfo(config_ref, None, config_mode)\n result[config_ref] = req_info\n except Exception as e:\n raise ConanException(f\"core.package_id:config_mode defined, but error while loading \"\n f\"'{os.path.basename(config_version_file)}'\"\n f\" file in cache: {self._home_folder}: {e}\")\n return RequirementsInfo(result)\n\n def _evaluate_package_id(self, node, config_version):\n compute_package_id(node, self._global_conf, config_version=config_version)\n\n # TODO: layout() execution don't need to be evaluated at GraphBuilder time.\n # it could even be delayed until installation time, but if we got enough info here for\n # package_id, we can run it\n conanfile = node.conanfile\n if hasattr(conanfile, \"layout\"):\n with conanfile_exception_formatter(conanfile, \"layout\"):\n conanfile.layout()\n\n def evaluate_graph(self, deps_graph, build_mode, lockfile, remotes, update, build_mode_test=None,\n tested_graph=None):\n if tested_graph is None:\n main_mode = BuildMode(build_mode)\n test_mode = None # Should not be used at all\n mainprefs = None\n else:\n main_mode = BuildMode([\"never\"])\n test_mode = BuildMode(build_mode_test)\n mainprefs = [str(n.pref) for n in tested_graph.nodes\n if n.recipe not in (RECIPE_CONSUMER, RECIPE_VIRTUAL)]\n\n if main_mode.cascade:\n ConanOutput().warning(\"Using build-mode 'cascade' is generally inefficient and it \"\n \"shouldn't be used. Use 'package_id' and 'package_id_modes' for\"\n \"more efficient re-builds\")\n\n def _evaluate_single(n):\n mode = main_mode if mainprefs is None or str(n.pref) in mainprefs else test_mode\n if lockfile:\n locked_prev = lockfile.resolve_prev(n) # this is not public, should never happen\n if locked_prev:\n self._process_locked_node(n, mode, locked_prev)\n return\n self._evaluate_node(n, mode, remotes, update)\n\n levels = deps_graph.by_levels()\n config_version = self._config_version()\n for level in levels[:-1]: # all levels but the last one, which is the single consumer\n for node in level:\n self._evaluate_package_id(node, config_version)\n # group by pref to paralelize, so evaluation is done only 1 per pref\n nodes = {}\n for node in level:\n nodes.setdefault(node.pref, []).append(node)\n # PARALLEL, this is the slow part that can query servers for packages, and compatibility\n for pref, pref_nodes in nodes.items():\n _evaluate_single(pref_nodes[0])\n # END OF PARALLEL\n # Evaluate the possible nodes with repeated \"prefs\" that haven't been evaluated\n for pref, pref_nodes in nodes.items():\n for n in pref_nodes[1:]:\n _evaluate_single(n)\n\n # Last level is always necessarily a consumer or a virtual\n assert len(levels[-1]) == 1\n node = levels[-1][0]\n assert node.recipe in (RECIPE_CONSUMER, RECIPE_VIRTUAL)\n if node.path is not None:\n if node.path.endswith(\".py\"):\n # For .py we keep evaluating the package_id, validate(), etc\n compute_package_id(node, self._global_conf, config_version=config_version)\n # To support the ``[layout]`` in conanfile.txt\n if hasattr(node.conanfile, \"layout\"):\n with conanfile_exception_formatter(node.conanfile, \"layout\"):\n node.conanfile.layout()\n\n self._skip_binaries(deps_graph)\n\n @staticmethod\n def _skip_binaries(graph):\n required_nodes = set()\n # Aggregate all necessary starting nodes\n required_nodes.add(graph.root)\n for node in graph.nodes:\n if node.binary in (BINARY_BUILD, BINARY_EDITABLE_BUILD, BINARY_EDITABLE):\n if not node.build_allowed: # Only those that are forced to 
build, not only \"missing\"\n required_nodes.add(node)\n\n root_nodes = required_nodes.copy()\n while root_nodes:\n new_root_nodes = set()\n for node in root_nodes:\n # The nodes that are directly required by this one to build correctly\n is_consumer = not (node.recipe != RECIPE_CONSUMER and\n node.binary not in (BINARY_BUILD, BINARY_EDITABLE_BUILD,\n BINARY_EDITABLE))\n deps_required = set(d.node for d in node.transitive_deps.values() if d.require.files\n or (d.require.direct and is_consumer))\n\n # second pass, transitive affected. Packages that have some dependency that is required\n # cannot be skipped either. In theory the binary could be skipped, but build system\n # integrations like CMakeDeps rely on find_package() to correctly find transitive deps\n indirect = (d.node for d in node.transitive_deps.values()\n if any(t.node in deps_required for t in d.node.transitive_deps.values()))\n deps_required.update(indirect)\n\n # Third pass, mark requires as skippeable\n for dep in node.transitive_deps.values():\n dep.require.skip = dep.node not in deps_required\n\n # Finally accumulate all needed nodes for marking binaries as SKIP download\n news_req = [r for r in deps_required\n if r.binary in (BINARY_BUILD, BINARY_EDITABLE_BUILD, BINARY_EDITABLE)\n if r not in required_nodes] # Avoid already expanded before\n new_root_nodes.update(news_req) # For expanding the next iteration\n required_nodes.update(deps_required)\n\n root_nodes = new_root_nodes\n\n for node in graph.nodes:\n if node not in required_nodes and node.conanfile.conf.get(\"tools.graph:skip_binaries\",\n check_type=bool, default=True):\n node.binary = BINARY_SKIP\n"}
|
{"conans/client/graph/build_mode.py": [{"type": "function", "name": "BuildMode.allowed_compatible", "lines": [104, 113], "signature": "def allowed_compatible(self, conanfile):", "doc": ""}], "conans/client/graph/graph_binaries.py": [{"type": "function", "name": "GraphBinariesAnalyzer._find_build_compatible_binary", "lines": [176, 190], "signature": "def _find_build_compatible_binary(self, node, compatibles):", "doc": ""}]}
| null |
["test/integration/package_id/compatible_test.py::TestCompatibleBuild::test_build_compatible", "test/integration/package_id/compatible_test.py::TestCompatibleBuild::test_build_compatible_cant_build", "test/integration/package_id/compatible_test.py::TestCompatibleBuild::test_build_compatible_cant_build2", "test/integration/package_id/compatible_test.py::TestCompatibleBuild::test_build_compatible_cant_build_only", "test/integration/package_id/compatible_test.py::TestCompatibleBuild::test_multi_level_build_compatible"]
|
["test/integration/package_id/compatible_test.py::CompatibleIDsTest::test_build_missing", "test/integration/package_id/compatible_test.py::CompatibleIDsTest::test_compatible_diamond", "test/integration/package_id/compatible_test.py::CompatibleIDsTest::test_compatible_lockfile", "test/integration/package_id/compatible_test.py::CompatibleIDsTest::test_compatible_option", "test/integration/package_id/compatible_test.py::CompatibleIDsTest::test_compatible_package_python_requires", "test/integration/package_id/compatible_test.py::CompatibleIDsTest::test_compatible_setting", "test/integration/package_id/compatible_test.py::CompatibleIDsTest::test_compatible_setting_no_binary", "test/integration/package_id/compatible_test.py::CompatibleIDsTest::test_compatible_setting_no_user_channel", "test/integration/package_id/compatible_test.py::CompatibleIDsTest::test_package_id_consumers", "test/integration/package_id/compatible_test.py::TestNewCompatibility::test_compatible_setting", "test/integration/package_id/compatible_test.py::TestNewCompatibility::test_compatibility_remove_package_id", "test/integration/package_id/compatible_test.py::TestNewCompatibility::test_compatibility_erase_package_id", "test/integration/package_id/compatible_test.py::TestNewCompatibility::test_compatibility_msvc_and_cppstd"]
|
4a5b19a75db9225316c8cb022a2dfb9705a2af34
|
{"first_commit_time": 1724225223.0, "pr_title": "Feature/build compatibles", "pr_body": "Changelog: Feature: Allow building a compatible package still of the current profile one.\r\nDocs: Omit\r\n\r\nLets keep this not documented at the moment, as there is a gap in ``conan graph build-order``\r\n\r\nClose https://github.com/conan-io/conan/issues/16148\r\nClose https://github.com/conan-io/conan/issues/14291\r\n", "pr_timeline": [{"time": 1725232463.0, "comment": "@czoido @franramirez688 @AbrilRBS \r\n\r\nThis PR contains both the feature + some necessary refactors to support the feature. \r\n\r\nI have done the refactor in its own PR to simplify the review, please check it first before reviewing this one: https://github.com/conan-io/conan/pull/16919"}], "issues": {}}
|
|
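For context on the Conan instance above: the gold patch adds `BuildMode.allowed_compatible` and `GraphBinariesAnalyzer._find_build_compatible_binary` so that a compatible binary can still be built from source when the exact package is missing. The sketch below is only a simplified, self-contained mirror of the `--build` argument parsing already visible in `build_mode.py` in this row; the function name `classify_build_params` and the sample patterns are hypothetical and are not Conan's actual API.

```python
def classify_build_params(params):
    """Toy mirror of the parsing loop in BuildMode.__init__ (see build_mode.py above)."""
    result = {"missing": False, "patterns": [], "excluded": [],
              "missing_patterns": [], "missing_excluded": []}
    for param in params:
        if param == "missing":
            result["missing"] = True
        elif param.startswith("missing:"):
            pattern = param[len("missing:"):]
            if pattern and pattern[0] in ("!", "~"):      # negated "build missing" pattern
                result["missing_excluded"].append(pattern[1:])
            else:
                result["missing_patterns"].append(pattern)
        elif param and param[0] in ("!", "~"):            # excluded from forced builds
            result["excluded"].append(param[1:])
        else:                                             # forced build-from-source pattern
            result["patterns"].append(param)
    return result


print(classify_build_params(["missing", "zlib/*", "~openssl/*", "missing:!boost/*"]))
# {'missing': True, 'patterns': ['zlib/*'], 'excluded': ['openssl/*'],
#  'missing_patterns': [], 'missing_excluded': ['boost/*']}
```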
deepset-ai/haystack
| 6,758
|
https://github.com/deepset-ai/haystack/pull/6758
|
deepset-ai__haystack-6758
|
["6588"]
|
88191e74bf72345924ed703c65edb9fdf6bd8edd
|
diff --git a/haystack/components/converters/html.py b/haystack/components/converters/html.py
index 0586065c78..dea38dbd1a 100644
--- a/haystack/components/converters/html.py
+++ b/haystack/components/converters/html.py
@@ -3,7 +3,7 @@
from typing import Any, Dict, List, Optional, Union, Literal
from boilerpy3 import extractors
-from haystack import Document, component
+from haystack import Document, component, default_from_dict, default_to_dict
from haystack.dataclasses import ByteStream
from haystack.components.converters.utils import get_bytestream_from_source, normalize_metadata
@@ -49,6 +49,13 @@ def __init__(
"""
self.extractor_type = extractor_type
+ def to_dict(self) -> Dict[str, Any]:
+ return default_to_dict(self, extractor_type=self.extractor_type)
+
+ @classmethod
+ def from_dict(cls, data: Dict[str, Any]) -> "HTMLToDocument":
+ return default_from_dict(cls, data)
+
@component.output_types(documents=List[Document])
def run(
self,
|
diff --git a/test/components/converters/test_html_to_document.py b/test/components/converters/test_html_to_document.py
index aa8df51197..519a1c053e 100644
--- a/test/components/converters/test_html_to_document.py
+++ b/test/components/converters/test_html_to_document.py
@@ -160,3 +160,12 @@ def test_mixed_sources_run(self, test_files_path):
assert len(docs) == 3
for doc in docs:
assert "Haystack" in doc.content
+
+ def test_serde(self):
+ """
+ Test if the component runs correctly gets serialized and deserialized.
+ """
+ converter = HTMLToDocument("ArticleExtractor")
+ serde_data = converter.to_dict()
+ new_converter = HTMLToDocument.from_dict(serde_data)
+ assert new_converter.extractor_type == converter.extractor_type
| 2024-01-17T14:05:22
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"haystack/components/converters/html.py": "import logging\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional, Union, Literal\nfrom boilerpy3 import extractors\n\nfrom haystack import Document, component\nfrom haystack.dataclasses import ByteStream\nfrom haystack.components.converters.utils import get_bytestream_from_source, normalize_metadata\n\nlogger = logging.getLogger(__name__)\n\n\n@component\nclass HTMLToDocument:\n \"\"\"\n Converts an HTML file to a Document.\n\n Usage example:\n ```python\n from haystack.components.converters.html import HTMLToDocument\n\n converter = HTMLToDocument()\n results = converter.run(sources=[\"sample.html\"])\n documents = results[\"documents\"]\n print(documents[0].content)\n # 'This is a text from the HTML file.'\n ```\n\n \"\"\"\n\n def __init__(\n self,\n extractor_type: Literal[\n \"DefaultExtractor\",\n \"ArticleExtractor\",\n \"ArticleSentencesExtractor\",\n \"LargestContentExtractor\",\n \"CanolaExtractor\",\n \"KeepEverythingExtractor\",\n \"NumWordsRulesExtractor\",\n ] = \"DefaultExtractor\",\n ):\n \"\"\"\n Create an HTMLToDocument component.\n\n :param extractor_type: The type of boilerpy3 extractor to use. Defaults to `DefaultExtractor`.\n For more information on the different types of extractors,\n see [boilerpy3 documentation](https://github.com/jmriebold/BoilerPy3?tab=readme-ov-file#extractors).\n \"\"\"\n self.extractor_type = extractor_type\n\n @component.output_types(documents=List[Document])\n def run(\n self,\n sources: List[Union[str, Path, ByteStream]],\n meta: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None,\n ):\n \"\"\"\n Converts a list of HTML files to Documents.\n\n :param sources: List of HTML file paths or ByteStream objects.\n :param meta: Optional metadata to attach to the Documents.\n This value can be either a list of dictionaries or a single dictionary.\n If it's a single dictionary, its content is added to the metadata of all produced Documents.\n If it's a list, the length of the list must match the number of sources, because the two lists will be zipped.\n Defaults to `None`.\n :return: A dictionary containing a list of Document objects under the 'documents' key.\n \"\"\"\n\n documents = []\n meta_list = normalize_metadata(meta=meta, sources_count=len(sources))\n\n extractor_class = getattr(extractors, self.extractor_type)\n extractor = extractor_class(raise_on_failure=False)\n\n for source, metadata in zip(sources, meta_list):\n try:\n bytestream = get_bytestream_from_source(source=source)\n except Exception as e:\n logger.warning(\"Could not read %s. Skipping it. Error: %s\", source, e)\n continue\n try:\n file_content = bytestream.data.decode(\"utf-8\")\n text = extractor.get_content(file_content)\n except Exception as conversion_e:\n logger.warning(\"Failed to extract text from %s. Skipping it. Error: %s\", source, conversion_e)\n continue\n\n merged_metadata = {**bytestream.meta, **metadata}\n document = Document(content=text, meta=merged_metadata)\n documents.append(document)\n\n return {\"documents\": documents}\n"}
|
{"haystack/components/converters/html.py": [{"type": "function", "name": "HTMLToDocument.to_dict", "lines": [52, 53], "signature": "def to_dict(self) -> Dict[str, Any]:", "doc": ""}, {"type": "function", "name": "HTMLToDocument.from_dict", "lines": [56, 57], "signature": "def from_dict(cls, data: Dict[str, Any]) -> \"HTMLToDocument\":", "doc": ""}]}
| null |
["test/components/converters/test_html_to_document.py::TestHTMLToDocument::test_serde"]
|
["[", "test/components/converters/test_html_to_document.py::TestHTMLToDocument::test_run", "test/components/converters/test_html_to_document.py::TestHTMLToDocument::test_run_different_extractors", "test/components/converters/test_html_to_document.py::TestHTMLToDocument::test_run_doc_metadata", "test/components/converters/test_html_to_document.py::TestHTMLToDocument::test_incorrect_meta", "test/components/converters/test_html_to_document.py::TestHTMLToDocument::test_run_bytestream_metadata", "test/components/converters/test_html_to_document.py::TestHTMLToDocument::test_run_bytestream_and_doc_metadata", "test/components/converters/test_html_to_document.py::TestHTMLToDocument::test_run_bytestream_doc_overlapping_metadata", "test/components/converters/test_html_to_document.py::TestHTMLToDocument::test_run_wrong_file_type", "test/components/converters/test_html_to_document.py::TestHTMLToDocument::test_run_error_handling", "test/components/converters/test_html_to_document.py::TestHTMLToDocument::test_mixed_sources_run"]
|
f4d9c2bb917be0ffe132dffcc2ad4f1b0fcc5967
|
{"first_commit_time": 1705500015.0, "pr_title": "feat: Add serde methods to `HTMLToDocument`", "pr_body": "### Related Issues\r\n\r\n- fixes #6588\r\n\r\n### How did you test it?\r\n\r\n<!-- unit tests, integration tests, manual verification, instructions for manual tests -->\r\nUnit.\r\n\r\n### Checklist\r\n\r\n- I have read the [contributors guidelines](https://github.com/deepset-ai/haystack/blob/main/CONTRIBUTING.md) and the [code of conduct](https://github.com/deepset-ai/haystack/blob/main/code_of_conduct.txt)\r\n- I have updated the related issue with new insights and changes\r\n- I added unit tests and updated the docstrings\r\n- I've used one of the [conventional commit types](https://www.conventionalcommits.org/en/v1.0.0/) for my PR title: `fix:`, `feat:`, `build:`, `chore:`, `ci:`, `docs:`, `style:`, `refactor:`, `perf:`, `test:`.\r\n- I documented my code\r\n- I ran [pre-commit hooks](https://github.com/deepset-ai/haystack/blob/main/CONTRIBUTING.md#installation) and fixed any issue\r\n", "pr_timeline": [{"time": 1705500721.0, "comment": "Hey, @shadeMe...\r\n\r\nI think that `to_dict/from_dict` methods are only needed if they diverge from [`default_to_dict`](https://github.com/deepset-ai/haystack/blob/88191e74bf72345924ed703c65edb9fdf6bd8edd/haystack/core/serialization.py#L53) and `default_from_dict`.\r\n\r\nSo in this case, it makes sense to add them if we want to directly pass a class in `extractor_type` (see https://github.com/deepset-ai/haystack/pull/6582#discussion_r1431094215).\r\n\r\n@silvanocerza am I wrong?"}, {"time": 1705501045.0, "comment": "## Pull Request Test Coverage Report for [Build 7556878925](https://coveralls.io/builds/65124740)\n\n### Warning: This coverage report may be inaccurate.\n\nWe've detected an issue with your CI configuration that might affect the accuracy of this pull request's coverage report.\nTo ensure accuracy in future PRs, please see [these guidelines](https://docs.coveralls.io/build-types#recommended-ci-configurations).\nA quick fix for this PR: rebase it; your next report should be accurate.\n\n* **0** of **0** changed or added relevant lines in **0** files are covered.\n* No unchanged relevant lines lost coverage.\n* Overall coverage increased (+**0.01%**) to **88.022%**\n\n---\n\n\n\n| Totals | [](https://coveralls.io/builds/65124740) |\n| :-- | --: |\n| Change from base [Build 7556480704](https://coveralls.io/builds/65123773): | 0.01% |\n| Covered Lines: | 4534 |\n| Relevant Lines: | 5151 |\n\n---\n##### \ud83d\udc9b - [Coveralls](https://coveralls.io)\n"}, {"time": 1705508970.0, "comment": "You're not wrong @anakin87. \r\n\r\nMy bad though, I created #6588 cause at the time I thought serde methods were mandatory. I always forget we automated that. \ud83e\udd26 \r\n\r\nNothing wrong having them explicit though, I actually prefer it this way."}], "issues": {"6588": {"issue_title": "Add `to_dict` and `from_dict` methods in `HTMLToDocument`", "issue_body": "`HTMLToDocument` Component is missing serialization methods.\r\n\r\nWe need to add them so it can be properly serialized.", "issue_timeline": []}}}
|
|
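For the haystack 6758 instance above, the patch wires `HTMLToDocument` into Haystack's standard component serialization via `default_to_dict`/`default_from_dict`. A minimal usage sketch, assuming a Haystack 2.x install that already contains this patch; it simply mirrors the `test_serde` case from the row's test patch.

```python
from haystack.components.converters.html import HTMLToDocument

converter = HTMLToDocument(extractor_type="ArticleExtractor")
data = converter.to_dict()                 # serialized form produced by default_to_dict
restored = HTMLToDocument.from_dict(data)  # rebuilds the component from that dict
assert restored.extractor_type == converter.extractor_type == "ArticleExtractor"
```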
deepset-ai/haystack
| 7,009
|
https://github.com/deepset-ai/haystack/pull/7009
|
deepset-ai__haystack-7009
|
[]
|
a7209f64136d7cc8bd446f6801d8695fc367608f
|
diff --git a/haystack/dataclasses/byte_stream.py b/haystack/dataclasses/byte_stream.py
index 80b1c50c3b..ee736c001d 100644
--- a/haystack/dataclasses/byte_stream.py
+++ b/haystack/dataclasses/byte_stream.py
@@ -49,3 +49,13 @@ def from_string(
:param meta: Additional metadata to be stored with the ByteStream.
"""
return cls(data=text.encode(encoding), mime_type=mime_type, meta=meta or {})
+
+ def to_string(self, encoding: str = "utf-8") -> str:
+ """
+ Convert the ByteStream to a string, metadata will not be included.
+
+ :param encoding: The encoding used to convert the bytes to a string. Defaults to "utf-8".
+ :return: The string representation of the ByteStream.
+ :raises UnicodeDecodeError: If the ByteStream data cannot be decoded with the specified encoding.
+ """
+ return self.data.decode(encoding)
|
diff --git a/test/dataclasses/test_byte_stream.py b/test/dataclasses/test_byte_stream.py
index 57d444b038..4e4199ba19 100644
--- a/test/dataclasses/test_byte_stream.py
+++ b/test/dataclasses/test_byte_stream.py
@@ -1,3 +1,5 @@
+import pytest
+
from haystack.dataclasses import ByteStream
@@ -35,6 +37,30 @@ def test_from_string():
assert b.meta == {"foo": "bar"}
+def test_to_string():
+ test_string = "Hello, world!"
+ b = ByteStream.from_string(test_string)
+ assert b.to_string() == test_string
+
+
+def test_to_from_string_encoding():
+ test_string = "Hello Baščaršija!"
+ with pytest.raises(UnicodeEncodeError):
+ ByteStream.from_string(test_string, encoding="ISO-8859-1")
+
+ bs = ByteStream.from_string(test_string) # default encoding is utf-8
+
+ assert bs.to_string(encoding="ISO-8859-1") != test_string
+ assert bs.to_string(encoding="utf-8") == test_string
+
+
+def test_to_string_encoding_error():
+ # test that it raises ValueError if the encoding is not valid
+ b = ByteStream.from_string("Hello, world!")
+ with pytest.raises(UnicodeDecodeError):
+ b.to_string("utf-16")
+
+
def test_to_file(tmp_path, request):
test_str = "Hello, world!\n"
test_path = tmp_path / request.node.name
| 2024-02-16T09:06:38
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"haystack/dataclasses/byte_stream.py": "from dataclasses import dataclass, field\nfrom pathlib import Path\nfrom typing import Optional, Dict, Any\n\n\n@dataclass\nclass ByteStream:\n \"\"\"\n Base data class representing a binary object in the Haystack API.\n \"\"\"\n\n data: bytes\n meta: Dict[str, Any] = field(default_factory=dict, hash=False)\n mime_type: Optional[str] = field(default=None)\n\n def to_file(self, destination_path: Path):\n \"\"\"\n Write the ByteStream to a file. Note: the metadata will be lost.\n\n :param destination_path: The path to write the ByteStream to.\n \"\"\"\n with open(destination_path, \"wb\") as fd:\n fd.write(self.data)\n\n @classmethod\n def from_file_path(\n cls, filepath: Path, mime_type: Optional[str] = None, meta: Optional[Dict[str, Any]] = None\n ) -> \"ByteStream\":\n \"\"\"\n Create a ByteStream from the contents read from a file.\n\n :param filepath: A valid path to a file.\n :param mime_type: The mime type of the file.\n :param meta: Additional metadata to be stored with the ByteStream.\n \"\"\"\n with open(filepath, \"rb\") as fd:\n return cls(data=fd.read(), mime_type=mime_type, meta=meta or {})\n\n @classmethod\n def from_string(\n cls, text: str, encoding: str = \"utf-8\", mime_type: Optional[str] = None, meta: Optional[Dict[str, Any]] = None\n ) -> \"ByteStream\":\n \"\"\"\n Create a ByteStream encoding a string.\n\n :param text: The string to encode\n :param encoding: The encoding used to convert the string into bytes\n :param mime_type: The mime type of the file.\n :param meta: Additional metadata to be stored with the ByteStream.\n \"\"\"\n return cls(data=text.encode(encoding), mime_type=mime_type, meta=meta or {})\n"}
|
{"haystack/dataclasses/byte_stream.py": [{"type": "function", "name": "ByteStream.to_string", "lines": [53, 61], "signature": "def to_string(self, encoding: str = \"utf-8\") -> str:", "doc": "Convert the ByteStream to a string, metadata will not be included.\n\n:param encoding: The encoding used to convert the bytes to a string. Defaults to \"utf-8\".\n:return: The string representation of the ByteStream.\n:raises UnicodeDecodeError: If the ByteStream data cannot be decoded with the specified encoding."}]}
| null |
["test/dataclasses/test_byte_stream.py::test_to_string", "test/dataclasses/test_byte_stream.py::test_to_from_string_encoding", "test/dataclasses/test_byte_stream.py::test_to_string_encoding_error"]
|
["test/dataclasses/test_byte_stream.py::test_from_file_path", "test/dataclasses/test_byte_stream.py::test_from_string", "test/dataclasses/test_byte_stream.py::test_to_file"]
|
f4d9c2bb917be0ffe132dffcc2ad4f1b0fcc5967
|
{"first_commit_time": 1708073900.0, "pr_title": "feat: Add ByteStream to_string method", "pr_body": "### Why:\r\nAdds `to_string` method in the `ByteStream` class which arises from the requirement to convert byte data back into its original string format for certain use cases. This enhancement provides a convenient and intuitive way to achieve this, enriching the class's functionalities.\r\n\r\nIn addition, we already have a method from_string that creates a ByteStream instance from a string by encoding it to bytes. A to_string method provides the symmetric operation, converting the byte stream to a string. This symmetry in API design improves intuitiveness and usability.\r\n\r\n### What:\r\n* A new method named `to_string` has been added to the `ByteStream` class in `haystack/dataclasses/byte_stream.py`, enabling conversion of byte data to its original string representation.\r\n* The method accepts an optional encoding parameter for controlling the decoding process and defaults to `utf-8`.\r\n* Custom error handling, specifically a `UnicodeDecodeError`, is implemented to ensure graceful failure in case of encoding issues.\r\n\r\n### How can it be used:\r\n* The `to_string` method can be applied to `ByteStream` instances when users wish to extract the original text content.\r\n\r\nExample usage:\r\n```python\r\nbs = ByteStream.from_string('hello, world!')\r\ntext = bs.to_string()\r\nassert text == 'hello, world!'\r\n```\r\n\r\n### How did you test it:\r\n* Unit tests have been added to `test/dataclasses/test_byte_stream.py` verifying correct functionality of the `to_string` method in different scenarios, including valid and invalid encoding cases.\r\n\r\n### Notes for the reviewer:\r\n* The changes mainly concern the `ByteStream` class and its test class, keeping the impact localized.\r\n", "pr_timeline": [{"time": 1708097939.0, "comment": "## Pull Request Test Coverage Report for [Build 7932723812](https://coveralls.io/builds/65746831)\n\n### Warning: This coverage report may be inaccurate.\n\nThis pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.\n\n- For more information on this, see <a target=\"_blank\" href=\"https://docs.coveralls.io/build-types#tracking-coverage-changes-with-pull_request-builds\">Tracking coverage changes with pull request builds</a>.\n- To avoid this issue with future PRs, see these <a target=\"_blank\" href=\"https://docs.coveralls.io/build-types#recommended-ci-configurations\">Recommended CI Configurations</a>.\n- For a quick fix, <a target=\"_blank\" href=\"https://github.blog/changelog/2022-02-03-more-ways-to-keep-your-pull-request-branch-up-to-date/#update-your-pull-request-branch-by-rebasing\">rebase this PR at GitHub</a>. 
Your next report should be accurate.\n\n### Details\n\n* **0** of **0** changed or added relevant lines in **0** files are covered.\n* **12** unchanged lines in **2** files lost coverage.\n* Overall coverage increased (+**0.007%**) to **89.208%**\n\n---\n\n\n| Files with Coverage Reduction | New Missed Lines | % |\n| :-----|--------------|--: |\n| [components/audio/whisper_local.py](https://coveralls.io/builds/65746831/source?filename=components%2Faudio%2Fwhisper_local.py#L91) | 6 | 91.3% |\n| [utils/auth.py](https://coveralls.io/builds/65746831/source?filename=utils%2Fauth.py#L95) | 6 | 93.27% |\n<!-- | **Total:** | **12** | | -->\n\n| Totals | [](https://coveralls.io/builds/65746831) |\n| :-- | --: |\n| Change from base [Build 7920674498](https://coveralls.io/builds/65727254): | 0.007% |\n| Covered Lines: | 4968 |\n| Relevant Lines: | 5569 |\n\n---\n##### \ud83d\udc9b - [Coveralls](https://coveralls.io)\n"}], "issues": {}}
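The record above documents a symmetric from_string/to_string pair on a byte-stream dataclass, with an optional encoding and a UnicodeDecodeError on bad input. A minimal, self-contained sketch of that round trip (using an illustrative SimpleByteStream stand-in, not Haystack's own ByteStream) looks like this:

```python
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class SimpleByteStream:
    """Illustrative stand-in for a byte-stream dataclass with string round-tripping."""

    data: bytes
    meta: Dict[str, Any] = field(default_factory=dict)

    @classmethod
    def from_string(cls, text: str, encoding: str = "utf-8") -> "SimpleByteStream":
        # Encode the text to bytes; metadata is left empty.
        return cls(data=text.encode(encoding))

    def to_string(self, encoding: str = "utf-8") -> str:
        # Decode the stored bytes back to text; raises UnicodeDecodeError
        # if the bytes are not valid for the given encoding.
        return self.data.decode(encoding)


bs = SimpleByteStream.from_string("hello, world!")
assert bs.to_string() == "hello, world!"

# Decoding with a mismatched encoding surfaces a UnicodeDecodeError.
utf16_stream = SimpleByteStream.from_string("hello", encoding="utf-16")
try:
    utf16_stream.to_string(encoding="ascii")
except UnicodeDecodeError:
    pass
```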
|
|
deepset-ai/haystack
| 7,902
|
https://github.com/deepset-ai/haystack/pull/7902
|
deepset-ai__haystack-7902
|
[]
|
57c1d47c7d55caf1385e8315e18bab2bfe1ce2f6
|
diff --git a/haystack/document_stores/types/filter_policy.py b/haystack/document_stores/types/filter_policy.py
index b30b9d3352..a2be576d20 100644
--- a/haystack/document_stores/types/filter_policy.py
+++ b/haystack/document_stores/types/filter_policy.py
@@ -3,6 +3,7 @@
# SPDX-License-Identifier: Apache-2.0
from enum import Enum
+from typing import Any, Dict, Optional
class FilterPolicy(Enum):
@@ -33,3 +34,28 @@ def from_str(filter_policy: str) -> "FilterPolicy":
msg = f"Unknown FilterPolicy type '{filter_policy}'. Supported types are: {list(enum_map.keys())}"
raise ValueError(msg)
return policy
+
+
+def apply_filter_policy(
+ filter_policy: FilterPolicy,
+ init_filters: Optional[Dict[str, Any]] = None,
+ runtime_filters: Optional[Dict[str, Any]] = None,
+) -> Optional[Dict[str, Any]]:
+ """
+ Apply the filter policy to the given initial and runtime filters to determine the final set of filters used.
+
+ The function combines or replaces the initial and runtime filters based on the specified filter policy.
+
+ :param filter_policy: The policy to apply when handling the filters. It can be one of the following:
+ - `FilterPolicy.REPLACE`: Runtime filters will replace the initial filters.
+ - `FilterPolicy.MERGE`: Runtime filters will be merged with the initial filters. If there are overlapping keys,
+ values from the runtime filters will overwrite those from the initial filters.
+ :param init_filters: The initial filters set during the initialization of the relevant retriever.
+ :param runtime_filters: The filters provided at runtime, usually during a query operation execution. These filters
+ can change for each query/retreiver run invocation.
+ :returns: A dictionary containing the resulting filters based on the provided policy.
+ """
+ if filter_policy == FilterPolicy.MERGE and runtime_filters:
+ return {**(init_filters or {}), **runtime_filters}
+ else:
+ return runtime_filters or init_filters
diff --git a/releasenotes/notes/add-apply_filter_policy-function-ae3152e6afe0ca57.yaml b/releasenotes/notes/add-apply_filter_policy-function-ae3152e6afe0ca57.yaml
new file mode 100644
index 0000000000..c890a44297
--- /dev/null
+++ b/releasenotes/notes/add-apply_filter_policy-function-ae3152e6afe0ca57.yaml
@@ -0,0 +1,4 @@
+---
+enhancements:
+ - |
+ Added the apply_filter_policy function to standardize the application of filter policies across all document store-specific retrievers, allowing for consistent handling of initial and runtime filters based on the chosen policy (replace or merge).
|
diff --git a/test/document_stores/test_filter_policy.py b/test/document_stores/test_filter_policy.py
new file mode 100644
index 0000000000..b7efcd0672
--- /dev/null
+++ b/test/document_stores/test_filter_policy.py
@@ -0,0 +1,45 @@
+# SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>
+#
+# SPDX-License-Identifier: Apache-2.0
+
+import pytest
+from typing import Any, Dict, Optional
+from enum import Enum
+
+from haystack.document_stores.types import FilterPolicy
+from haystack.document_stores.types.filter_policy import apply_filter_policy
+
+
+def test_replace_policy_with_both_filters():
+ init_filters = {"status": "active", "category": "news"}
+ runtime_filters = {"author": "John Doe"}
+ result = apply_filter_policy(FilterPolicy.REPLACE, init_filters, runtime_filters)
+ assert result == runtime_filters
+
+
+def test_merge_policy_with_both_filters():
+ init_filters = {"status": "active", "category": "news"}
+ runtime_filters = {"author": "John Doe"}
+ result = apply_filter_policy(FilterPolicy.MERGE, init_filters, runtime_filters)
+ assert result == {"status": "active", "category": "news", "author": "John Doe"}
+
+
+def test_replace_policy_with_only_init_filters():
+ init_filters = {"status": "active", "category": "news"}
+ runtime_filters = None
+ result = apply_filter_policy(FilterPolicy.REPLACE, init_filters, runtime_filters)
+ assert result == init_filters
+
+
+def test_merge_policy_with_only_init_filters():
+ init_filters = {"status": "active", "category": "news"}
+ runtime_filters = None
+ result = apply_filter_policy(FilterPolicy.MERGE, init_filters, runtime_filters)
+ assert result == init_filters
+
+
+def test_merge_policy_with_overlapping_keys():
+ init_filters = {"status": "active", "category": "news"}
+ runtime_filters = {"category": "science", "author": "John Doe"}
+ result = apply_filter_policy(FilterPolicy.MERGE, init_filters, runtime_filters)
+ assert result == {"status": "active", "category": "science", "author": "John Doe"}
| 2024-06-20T11:21:56
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"haystack/document_stores/types/filter_policy.py": "# SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>\n#\n# SPDX-License-Identifier: Apache-2.0\n\nfrom enum import Enum\n\n\nclass FilterPolicy(Enum):\n \"\"\"\n Policy to determine how filters are applied in retrievers interacting with document stores.\n \"\"\"\n\n # Runtime filters replace init filters during retriever run invocation.\n REPLACE = \"replace\"\n\n # Runtime filters are merged with init filters, with runtime filters overwriting init values.\n MERGE = \"merge\"\n\n def __str__(self):\n return self.value\n\n @staticmethod\n def from_str(filter_policy: str) -> \"FilterPolicy\":\n \"\"\"\n Convert a string to a FilterPolicy enum.\n\n :param filter_policy: The string to convert.\n :return: The corresponding FilterPolicy enum.\n \"\"\"\n enum_map = {e.value: e for e in FilterPolicy}\n policy = enum_map.get(filter_policy)\n if policy is None:\n msg = f\"Unknown FilterPolicy type '{filter_policy}'. Supported types are: {list(enum_map.keys())}\"\n raise ValueError(msg)\n return policy\n", "releasenotes/notes/add-apply_filter_policy-function-ae3152e6afe0ca57.yaml": null}
|
diff --git a/releasenotes/notes/add-apply_filter_policy-function-ae3152e6afe0ca57.yaml b/releasenotes/notes/add-apply_filter_policy-function-ae3152e6afe0ca57.yaml
new file mode 100644
index 0000000000..c890a44297
--- /dev/null
+++ b/releasenotes/notes/add-apply_filter_policy-function-ae3152e6afe0ca57.yaml
@@ -0,0 +1,4 @@
+---
+enhancements:
+ - |
+ Added the apply_filter_policy function to standardize the application of filter policies across all document store-specific retrievers, allowing for consistent handling of initial and runtime filters based on the chosen policy (replace or merge).
|
{"haystack/document_stores/types/filter_policy.py": [{"type": "function", "name": "apply_filter_policy", "lines": [39, 61], "signature": "def apply_filter_policy( filter_policy: FilterPolicy, init_filters: Optional[Dict[str, Any]] = None, runtime_filters: Optional[Dict[str, Any]] = None, ) -> Optional[Dict[str, Any]]:", "doc": "Apply the filter policy to the given initial and runtime filters to determine the final set of filters used.\n\nThe function combines or replaces the initial and runtime filters based on the specified filter policy.\n\n:param filter_policy: The policy to apply when handling the filters. It can be one of the following:\n - `FilterPolicy.REPLACE`: Runtime filters will replace the initial filters.\n - `FilterPolicy.MERGE`: Runtime filters will be merged with the initial filters. If there are overlapping keys,\n values from the runtime filters will overwrite those from the initial filters.\n:param init_filters: The initial filters set during the initialization of the relevant retriever.\n:param runtime_filters: The filters provided at runtime, usually during a query operation execution. These filters\n can change for each query/retreiver run invocation.\n:returns: A dictionary containing the resulting filters based on the provided policy."}]}
| null |
["test/document_stores/test_filter_policy.py::test_replace_policy_with_both_filters", "test/document_stores/test_filter_policy.py::test_merge_policy_with_both_filters", "test/document_stores/test_filter_policy.py::test_replace_policy_with_only_init_filters", "test/document_stores/test_filter_policy.py::test_merge_policy_with_only_init_filters", "test/document_stores/test_filter_policy.py::test_merge_policy_with_overlapping_keys"]
|
[]
|
f4d9c2bb917be0ffe132dffcc2ad4f1b0fcc5967
|
{"first_commit_time": 1718881866.0, "pr_title": "feat: Add apply_filter_policy function", "pr_body": "Added the apply_filter_policy function to standardize the application of filter policies across all document store-specific retrievers, allowing for consistent handling of initial and runtime filters based on the chosen policy (replace or merge).\r\nAdd unit tests for `apply_filter_policy` function", "pr_timeline": [{"time": 1718883187.0, "comment": "@silvanocerza @shadeMe has more context about this, we talked about it over there in integrations"}, {"time": 1718883472.0, "comment": "## Pull Request Test Coverage Report for [Build 9596545677](https://coveralls.io/builds/68196887)\n\n\n### Details\n\n* **0** of **0** changed or added relevant lines in **0** files are covered.\n* **2** unchanged lines in **1** file lost coverage.\n* Overall coverage increased (+**0.006%**) to **90.171%**\n\n---\n\n\n| Files with Coverage Reduction | New Missed Lines | % |\n| :-----|--------------|--: |\n| [document_stores/types/filter_policy.py](https://coveralls.io/builds/68196887/source?filename=document_stores%2Ftypes%2Ffilter_policy.py#L21) | 2 | 84.21% |\n<!-- | **Total:** | **2** | | -->\n\n| Totals | [](https://coveralls.io/builds/68196887) |\n| :-- | --: |\n| Change from base [Build 9596534228](https://coveralls.io/builds/68196850): | 0.006% |\n| Covered Lines: | 7018 |\n| Relevant Lines: | 7783 |\n\n---\n##### \ud83d\udc9b - [Coveralls](https://coveralls.io)\n"}], "issues": {}}
|
deepset-ai/haystack
| 7,920
|
https://github.com/deepset-ai/haystack/pull/7920
|
deepset-ai__haystack-7920
|
[]
|
c51f8ffb865db54f39103a202d2b6ba81fc68fb9
|
diff --git a/haystack/components/fetchers/link_content.py b/haystack/components/fetchers/link_content.py
index b658ff2450..ccc698f613 100644
--- a/haystack/components/fetchers/link_content.py
+++ b/haystack/components/fetchers/link_content.py
@@ -4,6 +4,7 @@
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor
+from fnmatch import fnmatch
from typing import Callable, Dict, List, Optional, Tuple
import requests
@@ -94,10 +95,12 @@ def __init__(
# register default content handlers that extract data from the response
self.handlers: Dict[str, Callable[[Response], ByteStream]] = defaultdict(lambda: _text_content_handler)
- self.handlers["text/html"] = _text_content_handler
- self.handlers["text/plain"] = _text_content_handler
- self.handlers["application/pdf"] = _binary_content_handler
- self.handlers["application/octet-stream"] = _binary_content_handler
+ self.handlers["text/*"] = _text_content_handler
+ self.handlers["application/json"] = _text_content_handler
+ self.handlers["application/*"] = _binary_content_handler
+ self.handlers["image/*"] = _binary_content_handler
+ self.handlers["audio/*"] = _binary_content_handler
+ self.handlers["video/*"] = _binary_content_handler
@retry(
reraise=True,
@@ -175,7 +178,7 @@ def _fetch(self, url: str) -> Tuple[Dict[str, str], ByteStream]:
try:
response = self._get_response(url)
content_type = self._get_content_type(response)
- handler: Callable = self.handlers[content_type]
+ handler: Callable = self._resolve_handler(content_type)
stream = handler(response)
except Exception as e:
if self.raise_on_failure:
@@ -217,6 +220,29 @@ def _get_content_type(self, response: Response):
content_type = response.headers.get("Content-Type", "")
return content_type.split(";")[0]
+ def _resolve_handler(self, content_type: str) -> Callable[[Response], ByteStream]:
+ """
+ Resolves the handler for the given content type.
+
+ First, it tries to find a direct match for the content type in the handlers dictionary.
+ If no direct match is found, it tries to find a pattern match using the fnmatch function.
+ If no pattern match is found, it returns the default handler for text/plain.
+
+ :param content_type: The content type to resolve the handler for.
+ :returns: The handler for the given content type, if found. Otherwise, the default handler for text/plain.
+ """
+ # direct match
+ if content_type in self.handlers:
+ return self.handlers[content_type]
+
+ # pattern matches
+ for pattern, handler in self.handlers.items():
+ if fnmatch(content_type, pattern):
+ return handler
+
+ # default handler
+ return self.handlers["text/plain"]
+
def _switch_user_agent(self, retry_state: RetryCallState) -> None:
"""
Switches the User-Agent for this LinkContentRetriever to the next one in the list of user agents.
diff --git a/releasenotes/notes/link-content-fetcher-enhancements-49babe1c60888043.yaml b/releasenotes/notes/link-content-fetcher-enhancements-49babe1c60888043.yaml
new file mode 100644
index 0000000000..d6a7d24285
--- /dev/null
+++ b/releasenotes/notes/link-content-fetcher-enhancements-49babe1c60888043.yaml
@@ -0,0 +1,4 @@
+---
+enhancements:
+ - |
+ Improve LinkContentFetcher to support a broader range of content types, including glob patterns for text, application, audio, and video types. This update introduces a more flexible content handler resolution mechanism, allowing for direct matches and pattern matching, thereby greatly improving the handler's adaptability to various content types encountered on the web.
|
diff --git a/test/components/fetchers/test_link_content_fetcher.py b/test/components/fetchers/test_link_content_fetcher.py
index ac99bc4cfa..c6a4d5c55f 100644
--- a/test/components/fetchers/test_link_content_fetcher.py
+++ b/test/components/fetchers/test_link_content_fetcher.py
@@ -46,10 +46,12 @@ def test_init(self):
assert fetcher.retry_attempts == 2
assert fetcher.timeout == 3
assert fetcher.handlers == {
- "text/html": _text_content_handler,
- "text/plain": _text_content_handler,
- "application/pdf": _binary_content_handler,
- "application/octet-stream": _binary_content_handler,
+ "text/*": _text_content_handler,
+ "application/json": _text_content_handler,
+ "application/*": _binary_content_handler,
+ "image/*": _binary_content_handler,
+ "audio/*": _binary_content_handler,
+ "video/*": _binary_content_handler,
}
assert hasattr(fetcher, "_get_response")
@@ -191,3 +193,11 @@ def test_bad_request_exception_raised(self):
fetcher = LinkContentFetcher()
with pytest.raises(requests.exceptions.ConnectionError):
fetcher.run(["https://non_existent_website_dot.com/"])
+
+ @pytest.mark.integration
+ def test_link_content_fetcher_audio(self):
+ fetcher = LinkContentFetcher()
+ streams = fetcher.run(["https://download.samplelib.com/mp3/sample-3s.mp3"])["streams"]
+ first_stream = streams[0]
+ assert first_stream.meta["content_type"] == "audio/mpeg"
+ assert len(first_stream.data) > 0
| 2024-06-24T14:22:50
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"haystack/components/fetchers/link_content.py": "# SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>\n#\n# SPDX-License-Identifier: Apache-2.0\n\nfrom collections import defaultdict\nfrom concurrent.futures import ThreadPoolExecutor\nfrom typing import Callable, Dict, List, Optional, Tuple\n\nimport requests\nfrom requests import Response\nfrom requests.exceptions import HTTPError\nfrom tenacity import RetryCallState, retry, retry_if_exception_type, stop_after_attempt, wait_exponential\n\nfrom haystack import component, logging\nfrom haystack.dataclasses import ByteStream\nfrom haystack.version import __version__\n\nlogger = logging.getLogger(__name__)\n\n\nDEFAULT_USER_AGENT = f\"haystack/LinkContentFetcher/{__version__}\"\n\nREQUEST_HEADERS = {\n \"accept\": \"*/*\",\n \"User-Agent\": DEFAULT_USER_AGENT,\n \"Accept-Language\": \"en-US,en;q=0.9,it;q=0.8,es;q=0.7\",\n \"referer\": \"https://www.google.com/\",\n}\n\n\ndef _text_content_handler(response: Response) -> ByteStream:\n \"\"\"\n Handles text content.\n\n :param response: Response object from the request.\n :return: The extracted text.\n \"\"\"\n return ByteStream.from_string(response.text)\n\n\ndef _binary_content_handler(response: Response) -> ByteStream:\n \"\"\"\n Handles binary content.\n\n :param response: Response object from the request.\n :return: The extracted binary file-like object.\n \"\"\"\n return ByteStream(data=response.content)\n\n\n@component\nclass LinkContentFetcher:\n \"\"\"\n LinkContentFetcher is a component for fetching and extracting content from URLs.\n\n It supports handling various content types, retries on failures, and automatic user-agent rotation for failed web\n requests.\n\n Usage example:\n ```python\n from haystack.components.fetchers.link_content import LinkContentFetcher\n\n fetcher = LinkContentFetcher()\n streams = fetcher.run(urls=[\"https://www.google.com\"])[\"streams\"]\n\n assert len(streams) == 1\n assert streams[0].meta == {'content_type': 'text/html', 'url': 'https://www.google.com'}\n assert streams[0].data\n ```\n \"\"\"\n\n def __init__(\n self,\n raise_on_failure: bool = True,\n user_agents: Optional[List[str]] = None,\n retry_attempts: int = 2,\n timeout: int = 3,\n ):\n \"\"\"\n Initializes the component.\n\n :param raise_on_failure: If `True`, raises an exception if it fails to fetch a single URL.\n For multiple URLs, it logs errors and returns the content it successfully fetched.\n :param user_agents: [User agents](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent)\n for fetching content. 
If `None`, a default user agent is used.\n :param retry_attempts: Specifies how many times you want it to retry to fetch the URL's content.\n :param timeout: Timeout in seconds for the request.\n \"\"\"\n self.raise_on_failure = raise_on_failure\n self.user_agents = user_agents or [DEFAULT_USER_AGENT]\n self.current_user_agent_idx: int = 0\n self.retry_attempts = retry_attempts\n self.timeout = timeout\n\n # register default content handlers that extract data from the response\n self.handlers: Dict[str, Callable[[Response], ByteStream]] = defaultdict(lambda: _text_content_handler)\n self.handlers[\"text/html\"] = _text_content_handler\n self.handlers[\"text/plain\"] = _text_content_handler\n self.handlers[\"application/pdf\"] = _binary_content_handler\n self.handlers[\"application/octet-stream\"] = _binary_content_handler\n\n @retry(\n reraise=True,\n stop=stop_after_attempt(self.retry_attempts),\n wait=wait_exponential(multiplier=1, min=2, max=10),\n retry=(retry_if_exception_type((HTTPError, requests.RequestException))),\n # This method is invoked only after failed requests (exception raised)\n after=self._switch_user_agent,\n )\n def get_response(url):\n # we need to copy because we modify the headers\n headers = REQUEST_HEADERS.copy()\n headers[\"User-Agent\"] = self.user_agents[self.current_user_agent_idx]\n response = requests.get(url, headers=headers, timeout=timeout or 3)\n response.raise_for_status()\n return response\n\n self._get_response: Callable = get_response\n\n @component.output_types(streams=List[ByteStream])\n def run(self, urls: List[str]):\n \"\"\"\n Fetches content from a list of URLs and returns a list of extracted content streams.\n\n Each content stream is a `ByteStream` object containing the extracted content as binary data.\n Each ByteStream object in the returned list corresponds to the contents of a single URL.\n The content type of each stream is stored in the metadata of the ByteStream object under\n the key \"content_type\". 
The URL of the fetched content is stored under the key \"url\".\n\n :param urls: A list of URLs to fetch content from.\n :return: `ByteStream` objects representing the extracted content.\n\n :raises Exception: If the provided list of URLs contains only a single URL, and `raise_on_failure` is set to\n `True`, an exception will be raised in case of an error during content retrieval.\n In all other scenarios, any retrieval errors are logged, and a list of successfully retrieved `ByteStream`\n objects is returned.\n \"\"\"\n streams: List[ByteStream] = []\n if not urls:\n return {\"streams\": streams}\n\n # don't use multithreading if there's only one URL\n if len(urls) == 1:\n stream_metadata, stream = self._fetch(urls[0])\n stream.meta.update(stream_metadata)\n streams.append(stream)\n else:\n with ThreadPoolExecutor() as executor:\n results = executor.map(self._fetch_with_exception_suppression, urls)\n\n for stream_metadata, stream in results: # type: ignore\n if stream_metadata is not None and stream is not None:\n stream.meta.update(stream_metadata)\n stream.mime_type = stream.meta.get(\"content_type\", None)\n streams.append(stream)\n\n return {\"streams\": streams}\n\n def _fetch(self, url: str) -> Tuple[Dict[str, str], ByteStream]:\n \"\"\"\n Fetches content from a URL and returns it as a ByteStream.\n\n :param url: The URL to fetch content from.\n :return: A tuple containing the ByteStream metadata dict and the corresponding ByteStream.\n ByteStream metadata contains the URL and the content type of the fetched content.\n The content type is a string indicating the type of content fetched (for example, \"text/html\",\n \"application/pdf\"). The ByteStream object contains the fetched content as binary data.\n\n :raises: If an error occurs during content retrieval and `raise_on_failure` is set to True, this method will\n raise an exception. 
Otherwise, all fetching errors are logged, and an empty ByteStream is returned.\n\n \"\"\"\n content_type: str = \"text/html\"\n stream: ByteStream = ByteStream(data=b\"\")\n try:\n response = self._get_response(url)\n content_type = self._get_content_type(response)\n handler: Callable = self.handlers[content_type]\n stream = handler(response)\n except Exception as e:\n if self.raise_on_failure:\n raise e\n # less verbose log as this is expected to happen often (requests failing, blocked, etc.)\n logger.debug(\"Couldn't retrieve content from {url} because {error}\", url=url, error=str(e))\n\n finally:\n self.current_user_agent_idx = 0\n\n return {\"content_type\": content_type, \"url\": url}, stream\n\n def _fetch_with_exception_suppression(self, url: str) -> Tuple[Optional[Dict[str, str]], Optional[ByteStream]]:\n \"\"\"\n Fetches content from a URL and returns it as a ByteStream.\n\n If `raise_on_failure` is set to True, this method will wrap the fetch() method and catch any exceptions.\n Otherwise, it will simply call the fetch() method.\n :param url: The URL to fetch content from.\n :return: A tuple containing the ByteStream metadata dict and the corresponding ByteStream.\n\n \"\"\"\n if self.raise_on_failure:\n try:\n return self._fetch(url)\n except Exception as e:\n logger.warning(\"Error fetching {url}: {error}\", url=url, error=str(e))\n return {\"content_type\": \"Unknown\", \"url\": url}, None\n else:\n return self._fetch(url)\n\n def _get_content_type(self, response: Response):\n \"\"\"\n Get the content type of the response.\n\n :param response: The response object.\n :return: The content type of the response.\n \"\"\"\n content_type = response.headers.get(\"Content-Type\", \"\")\n return content_type.split(\";\")[0]\n\n def _switch_user_agent(self, retry_state: RetryCallState) -> None:\n \"\"\"\n Switches the User-Agent for this LinkContentRetriever to the next one in the list of user agents.\n\n Used by tenacity to retry the requests with a different user agent.\n\n :param retry_state: The retry state (unused, required by tenacity).\n \"\"\"\n self.current_user_agent_idx = (self.current_user_agent_idx + 1) % len(self.user_agents)\n logger.debug(\"Switched user agent to {user_agent}\", user_agent=self.user_agents[self.current_user_agent_idx])\n", "releasenotes/notes/link-content-fetcher-enhancements-49babe1c60888043.yaml": null}
|
diff --git a/releasenotes/notes/link-content-fetcher-enhancements-49babe1c60888043.yaml b/releasenotes/notes/link-content-fetcher-enhancements-49babe1c60888043.yaml
new file mode 100644
index 0000000000..d6a7d24285
--- /dev/null
+++ b/releasenotes/notes/link-content-fetcher-enhancements-49babe1c60888043.yaml
@@ -0,0 +1,4 @@
+---
+enhancements:
+ - |
+ Improve LinkContentFetcher to support a broader range of content types, including glob patterns for text, application, audio, and video types. This update introduces a more flexible content handler resolution mechanism, allowing for direct matches and pattern matching, thereby greatly improving the handler's adaptability to various content types encountered on the web.
|
{"haystack/components/fetchers/link_content.py": [{"type": "function", "name": "LinkContentFetcher._resolve_handler", "lines": [223, 244], "signature": "def _resolve_handler(self, content_type: str) -> Callable[[Response], ByteStream]:", "doc": "Resolves the handler for the given content type.\n\nFirst, it tries to find a direct match for the content type in the handlers dictionary.\nIf no direct match is found, it tries to find a pattern match using the fnmatch function.\nIf no pattern match is found, it returns the default handler for text/plain.\n\n:param content_type: The content type to resolve the handler for.\n:returns: The handler for the given content type, if found. Otherwise, the default handler for text/plain."}]}
| null |
["test/components/fetchers/test_link_content_fetcher.py::TestLinkContentFetcher::test_init"]
|
["[", "test/components/fetchers/test_link_content_fetcher.py::TestLinkContentFetcher::test_init_with_params", "test/components/fetchers/test_link_content_fetcher.py::TestLinkContentFetcher::test_run_text", "test/components/fetchers/test_link_content_fetcher.py::TestLinkContentFetcher::test_run_html", "test/components/fetchers/test_link_content_fetcher.py::TestLinkContentFetcher::test_run_binary", "test/components/fetchers/test_link_content_fetcher.py::TestLinkContentFetcher::test_run_bad_status_code", "test/components/fetchers/test_link_content_fetcher.py::TestLinkContentFetcher::test_link_content_fetcher_html", "test/components/fetchers/test_link_content_fetcher.py::TestLinkContentFetcher::test_link_content_fetcher_text", "test/components/fetchers/test_link_content_fetcher.py::TestLinkContentFetcher::test_link_content_fetcher_pdf", "test/components/fetchers/test_link_content_fetcher.py::TestLinkContentFetcher::test_link_content_fetcher_multiple_different_content_types", "test/components/fetchers/test_link_content_fetcher.py::TestLinkContentFetcher::test_link_content_fetcher_multiple_html_streams", "test/components/fetchers/test_link_content_fetcher.py::TestLinkContentFetcher::test_mix_of_good_and_failed_requests", "test/components/fetchers/test_link_content_fetcher.py::TestLinkContentFetcher::test_bad_request_exception_raised", "test/components/fetchers/test_link_content_fetcher.py::TestLinkContentFetcher::test_link_content_fetcher_audio"]
|
f4d9c2bb917be0ffe132dffcc2ad4f1b0fcc5967
|
{"first_commit_time": 1719236919.0, "pr_title": "feat: Improve LinkContentFetcher content type handling", "pr_body": "### Why:\r\nEnhances `LinkContentFetcher` to broaden its content handling capabilities. The primary motivation behind these changes is to allow more content type support utilizing pattern matching for content types and allow fetching web content that includes text, applications, audio, and video files.\r\n\r\nDirect motivation for this enhancement was an experiment to fetch an audio file and transcribe it via Groq i.e:\r\n\r\n```\r\nfrom haystack.components.audio import RemoteWhisperTranscriber\r\nfrom haystack.components.fetchers import LinkContentFetcher\r\nfrom haystack import Pipeline\r\nfrom haystack.utils import Secret\r\n\r\ntranscriber = RemoteWhisperTranscriber(Secret.from_env_var(\"GROQ_API_KEY\"),\r\n api_base_url=\"https://api.groq.com/openai/v1\",\r\n model=\"whisper-large-v3\")\r\n\r\npipe = Pipeline()\r\npipe.add_component(\"fetcher\", LinkContentFetcher())\r\npipe.add_component(\"transcriber\", RemoteWhisperTranscriber(Secret.from_env_var(\"GROQ_API_KEY\"),\r\n api_base_url=\"https://api.groq.com/openai/v1\",\r\n model=\"whisper-large-v3\"))\r\n\r\npipe.connect(\"fetcher\", \"transcriber\")\r\nresult = pipe.run(\r\n data={\"fetcher\": {\"urls\": [\"https://ia601309.us.archive.org/29/items/jfks19610427/jfk_1961_0427_press_64kb.mp3\"]}})\r\nprint(result[\"transcriber\"][\"documents\"])\r\n```\r\n\r\nWe couldn't however do this out-of-the box because we didn't have content type handler registered for audio files. \r\nThis PR remedies this oversight. And adds more flexibility by adding pattern matching for content types\r\n\r\n\r\n### What:\r\n- Simplified and expanded content type handling by implementing wildcard patterns (e.g., `text/*`, `application/*`, `audio/*`, `video/*`) in the fetcher's handlers to efficiently map multiple content types to their appropriate handlers.\r\n- Added support for `application/json` as a text content type.\r\n- Introduced a `_resolve_handler` method to dynamically determine the correct handler for a content type, utilizing the `fnmatch` module for pattern matching.\r\n- Adjusted existing tests and added a new integration test to cover the handling of an `audio/mpeg` content type.\r\n\r\n### How can it be used:\r\n- All previous use cases are still supported\r\n- The above audio transcription example works out of the box\r\n \r\n### How did you test it:\r\n- Modified unit tests to assert the correct mapping of MIME types to handlers according to the new pattern matching logic.\r\n- Added an integration test (`test_link_content_fetcher_audio`) which specifically tests fetching an MP3 file, asserting both the `content_type` to be `audio/mpeg` and that the data stream is correctly retrieved and non-empty.\r\n\r\n### Notes for the reviewer:\r\n- Pay special attention to the pattern matching implementation in the `_resolve_handler` method to ensure that it properly covers all anticipated content types and does not inadvertently match undesired types.\r\n- Review the added integration test for audio content", "pr_timeline": [{"time": 1719239069.0, "comment": "Removing @anakin87 who has his hands full already and assigning @Amnah199"}, {"time": 1719239524.0, "comment": "## Pull Request Test Coverage Report for [Build 9647099885](https://coveralls.io/builds/68261487)\n\n\n### Details\n\n* **0** of **0** changed or added relevant lines in **0** files are covered.\n* **11** unchanged lines in **1** file lost coverage.\n* Overall coverage 
decreased (**-0.01%**) to **89.932%**\n\n---\n\n\n| Files with Coverage Reduction | New Missed Lines | % |\n| :-----|--------------|--: |\n| [components/fetchers/link_content.py](https://coveralls.io/builds/68261487/source?filename=components%2Ffetchers%2Flink_content.py#L142) | 11 | 78.26% |\n<!-- | **Total:** | **11** | | -->\n\n| Totals | [](https://coveralls.io/builds/68261487) |\n| :-- | --: |\n| Change from base [Build 9615573436](https://coveralls.io/builds/68226953): | -0.01% |\n| Covered Lines: | 6726 |\n| Relevant Lines: | 7479 |\n\n---\n##### \ud83d\udc9b - [Coveralls](https://coveralls.io)\n"}, {"time": 1719476558.0, "comment": "## Pull Request Test Coverage Report for [Build 9693013480](https://coveralls.io/builds/68332850)\n\n### Warning: This coverage report may be inaccurate.\n\nThis pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.\n\n- For more information on this, see <a target=\"_blank\" href=\"https://docs.coveralls.io/build-types#tracking-coverage-changes-with-pull_request-builds\">Tracking coverage changes with pull request builds</a>.\n- To avoid this issue with future PRs, see these <a target=\"_blank\" href=\"https://docs.coveralls.io/build-types#recommended-ci-configurations\">Recommended CI Configurations</a>.\n- For a quick fix, <a target=\"_blank\" href=\"https://github.blog/changelog/2022-02-03-more-ways-to-keep-your-pull-request-branch-up-to-date/#update-your-pull-request-branch-by-rebasing\">rebase this PR at GitHub</a>. Your next report should be accurate.\n\n### Details\n\n* **0** of **0** changed or added relevant lines in **0** files are covered.\n* **17** unchanged lines in **5** files lost coverage.\n* Overall coverage decreased (**-0.005%**) to **89.941%**\n\n---\n\n\n| Files with Coverage Reduction | New Missed Lines | % |\n| :-----|--------------|--: |\n| [components/evaluators/document_map.py](https://coveralls.io/builds/68332850/source?filename=components%2Fevaluators%2Fdocument_map.py#L80) | 1 | 96.15% |\n| [components/evaluators/document_mrr.py](https://coveralls.io/builds/68332850/source?filename=components%2Fevaluators%2Fdocument_mrr.py#L76) | 1 | 95.45% |\n| [tracing/datadog.py](https://coveralls.io/builds/68332850/source?filename=tracing%2Fdatadog.py#L75) | 1 | 94.59% |\n| [components/generators/openai.py](https://coveralls.io/builds/68332850/source?filename=components%2Fgenerators%2Fopenai.py#L189) | 2 | 96.34% |\n| [components/fetchers/link_content.py](https://coveralls.io/builds/68332850/source?filename=components%2Ffetchers%2Flink_content.py#L143) | 12 | 78.49% |\n<!-- | **Total:** | **17** | | -->\n\n| Totals | [](https://coveralls.io/builds/68332850) |\n| :-- | --: |\n| Change from base [Build 9615573436](https://coveralls.io/builds/68226953): | -0.005% |\n| Covered Lines: | 6724 |\n| Relevant Lines: | 7476 |\n\n---\n##### \ud83d\udc9b - [Coveralls](https://coveralls.io)\n"}, {"time": 1719477066.0, "comment": "Thanks for the review @Amnah199 please have another pass \ud83d\ude4f "}], "issues": {}}
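This record's patch resolves content handlers by exact match first, then glob-pattern match via fnmatch, then a text fallback. A minimal sketch of that resolution order, with toy handlers standing in for the ByteStream handlers in the diff, might look like:

```python
from fnmatch import fnmatch
from typing import Callable, Dict


# Toy handlers standing in for the text/binary ByteStream handlers in the patch.
def handle_text(payload: bytes) -> str:
    return payload.decode("utf-8", errors="replace")


def handle_binary(payload: bytes) -> bytes:
    return payload


HANDLERS: Dict[str, Callable] = {
    "text/*": handle_text,
    "application/json": handle_text,
    "application/*": handle_binary,
    "image/*": handle_binary,
    "audio/*": handle_binary,
    "video/*": handle_binary,
}


def resolve_handler(content_type: str) -> Callable:
    # 1) exact match, 2) glob-pattern match, 3) fall back to text handling.
    if content_type in HANDLERS:
        return HANDLERS[content_type]
    for pattern, handler in HANDLERS.items():
        if fnmatch(content_type, pattern):
            return handler
    return handle_text


assert resolve_handler("application/json") is handle_text  # direct match
assert resolve_handler("audio/mpeg") is handle_binary      # pattern match
assert resolve_handler("x-custom/unknown") is handle_text  # default fallback
```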
|
deepset-ai/haystack
| 7,933
|
https://github.com/deepset-ai/haystack/pull/7933
|
deepset-ai__haystack-7933
|
[]
|
8b9eddcd948e972c844e03000c2802e85e462c2a
|
diff --git a/haystack/components/preprocessors/document_splitter.py b/haystack/components/preprocessors/document_splitter.py
index f5e048db6e..200fa8aa92 100644
--- a/haystack/components/preprocessors/document_splitter.py
+++ b/haystack/components/preprocessors/document_splitter.py
@@ -90,38 +90,38 @@ def run(self, documents: List[Document]):
f"DocumentSplitter only works with text documents but content for document ID {doc.id} is None."
)
units = self._split_into_units(doc.content, self.split_by)
- text_splits, splits_pages = self._concatenate_units(
+ text_splits, splits_pages, splits_start_idxs = self._concatenate_units(
units, self.split_length, self.split_overlap, self.split_threshold
)
metadata = deepcopy(doc.meta)
metadata["source_id"] = doc.id
split_docs += self._create_docs_from_splits(
- text_splits=text_splits, splits_pages=splits_pages, meta=metadata
+ text_splits=text_splits, splits_pages=splits_pages, splits_start_idxs=splits_start_idxs, meta=metadata
)
return {"documents": split_docs}
def _split_into_units(self, text: str, split_by: Literal["word", "sentence", "passage", "page"]) -> List[str]:
if split_by == "page":
- split_at = "\f"
+ self.split_at = "\f"
elif split_by == "passage":
- split_at = "\n\n"
+ self.split_at = "\n\n"
elif split_by == "sentence":
- split_at = "."
+ self.split_at = "."
elif split_by == "word":
- split_at = " "
+ self.split_at = " "
else:
raise NotImplementedError(
"DocumentSplitter only supports 'word', 'sentence', 'page' or 'passage' split_by options."
)
- units = text.split(split_at)
+ units = text.split(self.split_at)
# Add the delimiter back to all units except the last one
for i in range(len(units) - 1):
- units[i] += split_at
+ units[i] += self.split_at
return units
def _concatenate_units(
self, elements: List[str], split_length: int, split_overlap: int, split_threshold: int
- ) -> Tuple[List[str], List[int]]:
+ ) -> Tuple[List[str], List[int], List[int]]:
"""
Concatenates the elements into parts of split_length units.
@@ -132,36 +132,90 @@ def _concatenate_units(
text_splits: List[str] = []
splits_pages = []
+ splits_start_idxs = []
+ split_at_len = len(self.split_at)
+ cur_start_idx = 0
cur_page = 1
segments = windowed(elements, n=split_length, step=split_length - split_overlap)
+
for seg in segments:
current_units = [unit for unit in seg if unit is not None]
txt = "".join(current_units)
+
# check if length of current units is below split_threshold
if len(current_units) < split_threshold and len(text_splits) > 0:
# concatenate the last split with the current one
text_splits[-1] += txt
+
elif len(txt) > 0:
text_splits.append(txt)
splits_pages.append(cur_page)
+ splits_start_idxs.append(cur_start_idx)
+
processed_units = current_units[: split_length - split_overlap]
+ cur_start_idx += len("".join(processed_units)) + split_at_len
+
if self.split_by == "page":
num_page_breaks = len(processed_units)
else:
num_page_breaks = sum(processed_unit.count("\f") for processed_unit in processed_units)
+
cur_page += num_page_breaks
- return text_splits, splits_pages
- @staticmethod
- def _create_docs_from_splits(text_splits: List[str], splits_pages: List[int], meta: Dict) -> List[Document]:
+ return text_splits, splits_pages, splits_start_idxs
+
+ def _create_docs_from_splits(
+ self, text_splits: List[str], splits_pages: List[int], splits_start_idxs: List[int], meta: Dict
+ ) -> List[Document]:
"""
Creates Document objects from splits enriching them with page number and the metadata of the original document.
"""
documents: List[Document] = []
- for i, txt in enumerate(text_splits):
+ for i, (txt, split_idx) in enumerate(zip(text_splits, splits_start_idxs)):
meta = deepcopy(meta)
doc = Document(content=txt, meta=meta)
doc.meta["page_number"] = splits_pages[i]
+ doc.meta["split_id"] = i
+ doc.meta["split_idx_start"] = split_idx
documents.append(doc)
+
+ if self.split_overlap <= 0:
+ continue
+
+ doc.meta["_split_overlap"] = []
+
+ if i == 0:
+ continue
+
+ doc_start_idx = splits_start_idxs[i]
+ previous_doc = documents[i - 1]
+ previous_doc_start_idx = splits_start_idxs[i - 1]
+ self._add_split_overlap_information(doc, doc_start_idx, previous_doc, previous_doc_start_idx)
+
return documents
+
+ @staticmethod
+ def _add_split_overlap_information(
+ current_doc: Document, current_doc_start_idx: int, previous_doc: Document, previous_doc_start_idx: int
+ ):
+ """
+ Adds split overlap information to the current and previous Document's meta.
+
+ :param current_doc: The Document that is being split.
+ :param current_doc_start_idx: The starting index of the current Document.
+ :param previous_doc: The Document that was split before the current Document.
+ :param previous_doc_start_idx: The starting index of the previous Document.
+ """
+ overlapping_range = (current_doc_start_idx - previous_doc_start_idx - 1, len(previous_doc.content) - 1) # type: ignore
+
+ if overlapping_range[0] < overlapping_range[1]:
+ overlapping_str = previous_doc.content[overlapping_range[0] : overlapping_range[1]] # type: ignore
+
+ if current_doc.content.startswith(overlapping_str): # type: ignore
+ # add split overlap information to this Document regarding the previous Document
+ current_doc.meta["_split_overlap"].append({"doc_id": previous_doc.id, "range": overlapping_range})
+
+ # add split overlap information to previous Document regarding this Document
+ overlapping_range = (0, overlapping_range[1] - overlapping_range[0])
+ previous_doc.meta["_split_overlap"].append({"doc_id": current_doc.id, "range": overlapping_range})
diff --git a/releasenotes/notes/add-split_id_and_overlap_to_DocumentSplitter-8180ad8f13495741.yaml b/releasenotes/notes/add-split_id_and_overlap_to_DocumentSplitter-8180ad8f13495741.yaml
new file mode 100644
index 0000000000..e3eba2d57b
--- /dev/null
+++ b/releasenotes/notes/add-split_id_and_overlap_to_DocumentSplitter-8180ad8f13495741.yaml
@@ -0,0 +1,4 @@
+---
+features:
+ - |
+ The `DocumentSplitter` now has support for the `split_id` and `split_overlap` to allow for more control over the splitting process.
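As an editorial illustration only (not part of the recorded patch or release note), here is a minimal usage sketch of the metadata the patched `DocumentSplitter` attaches to each chunk. The input text and settings mirror the new unit test further below; the printed values are not asserted here.

```python
from haystack import Document
from haystack.components.preprocessors import DocumentSplitter

# Same input and settings as test_add_split_overlap_information below.
doc = Document(content="This is a text with some words. There is a second sentence. And a third sentence.")
splitter = DocumentSplitter(split_by="word", split_length=10, split_overlap=5)
chunks = splitter.run(documents=[doc])["documents"]

for chunk in chunks:
    # split_id orders the chunks, split_idx_start is the chunk's character
    # offset in the original text, and _split_overlap lists the neighbouring
    # chunks it shares text with as {"doc_id": ..., "range": (start, end)}.
    print(chunk.meta["split_id"], chunk.meta["split_idx_start"], chunk.meta["_split_overlap"])

# Each stored range indexes into the *neighbouring* chunk's content, so the
# shared text can be recovered by slicing that neighbour:
for entry in chunks[1].meta["_split_overlap"]:
    neighbour = next(c for c in chunks if c.id == entry["doc_id"])
    start, end = entry["range"]
    print(neighbour.content[start:end])
```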
|
diff --git a/test/components/preprocessors/test_document_splitter.py b/test/components/preprocessors/test_document_splitter.py
index 4351457f79..d6fcaa9d1c 100644
--- a/test/components/preprocessors/test_document_splitter.py
+++ b/test/components/preprocessors/test_document_splitter.py
@@ -7,6 +7,28 @@
from haystack.components.preprocessors import DocumentSplitter
+def merge_documents(documents):
+ """Merge a list of doc chunks into a single doc by concatenating their content, eliminating overlapping content."""
+ sorted_docs = sorted(documents, key=lambda doc: doc.meta["split_idx_start"])
+ merged_text = ""
+ last_idx_end = 0
+ for doc in sorted_docs:
+ start = doc.meta["split_idx_start"] # start of the current content
+
+ # if the start of the current content is before the end of the last appended content, adjust it
+ if start < last_idx_end:
+ start = last_idx_end
+
+ # append the non-overlapping part to the merged text
+ merged_text = merged_text.strip()
+ merged_text += doc.content[start - doc.meta["split_idx_start"] :]
+
+ # update the last end index
+ last_idx_end = doc.meta["split_idx_start"] + len(doc.content)
+
+ return merged_text
+
+
class TestDocumentSplitter:
def test_non_text_document(self):
with pytest.raises(
@@ -219,7 +241,6 @@ def test_add_page_number_to_metadata_with_overlap_word_split(self):
expected_pages = [1, 1, 1, 2, 2, 1, 1, 3]
for doc, p in zip(result["documents"], expected_pages):
- print(doc.content, doc.meta, p)
assert doc.meta["page_number"] == p
def test_add_page_number_to_metadata_with_overlap_sentence_split(self):
@@ -230,7 +251,6 @@ def test_add_page_number_to_metadata_with_overlap_sentence_split(self):
expected_pages = [1, 1, 1, 2, 1, 1]
for doc, p in zip(result["documents"], expected_pages):
- print(doc.content, doc.meta, p)
assert doc.meta["page_number"] == p
def test_add_page_number_to_metadata_with_overlap_passage_split(self):
@@ -254,3 +274,16 @@ def test_add_page_number_to_metadata_with_overlap_page_split(self):
for doc, p in zip(result["documents"], expected_pages):
assert doc.meta["page_number"] == p
+
+ def test_add_split_overlap_information(self):
+ splitter = DocumentSplitter(split_length=10, split_overlap=5, split_by="word")
+ doc = Document(content="This is a text with some words. There is a second sentence. And a third sentence.")
+ docs = splitter.run(documents=[doc])
+
+ # check split_overlap is added to all the documents
+ assert len(docs["documents"]) == 3
+ for d in docs["documents"]:
+ assert "_split_overlap" in d.meta
+
+ # reconstruct the original document content from the split documents
+ assert doc.content == merge_documents(docs["documents"])
| 2024-06-26T08:47:37
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"haystack/components/preprocessors/document_splitter.py": "# SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>\n#\n# SPDX-License-Identifier: Apache-2.0\n\nfrom copy import deepcopy\nfrom typing import Dict, List, Literal, Tuple\n\nfrom more_itertools import windowed\n\nfrom haystack import Document, component\n\n\n@component\nclass DocumentSplitter:\n \"\"\"\n Splits a list of text documents into a list of text documents with shorter texts.\n\n Splitting documents with long texts is a common preprocessing step during indexing.\n This allows Embedders to create significant semantic representations\n and avoids exceeding the maximum context length of language models.\n\n Usage example:\n ```python\n from haystack import Document\n from haystack.components.preprocessors import DocumentSplitter\n\n doc = Document(content=\"Moonlight shimmered softly, wolves howled nearby, night enveloped everything.\")\n\n splitter = DocumentSplitter(split_by=\"word\", split_length=3, split_overlap=0)\n result = splitter.run(documents=[doc])\n ```\n \"\"\"\n\n def __init__(\n self,\n split_by: Literal[\"word\", \"sentence\", \"page\", \"passage\"] = \"word\",\n split_length: int = 200,\n split_overlap: int = 0,\n split_threshold: int = 0,\n ):\n \"\"\"\n Initialize the DocumentSplitter.\n\n :param split_by: The unit by which the document should be split. Choose from \"word\" for splitting by \" \",\n \"sentence\" for splitting by \".\", \"page\" for splitting by \"\\\\f\" or \"passage\" for splitting by \"\\\\n\\\\n\".\n :param split_length: The maximum number of units in each split.\n :param split_overlap: The number of units that each split should overlap.\n :param split_threshold: The minimum number of units that the split should have. If the split has fewer units\n than the threshold, it will be attached to the previous split.\n \"\"\"\n\n self.split_by = split_by\n if split_by not in [\"word\", \"sentence\", \"page\", \"passage\"]:\n raise ValueError(\"split_by must be one of 'word', 'sentence', 'page' or 'passage'.\")\n if split_length <= 0:\n raise ValueError(\"split_length must be greater than 0.\")\n self.split_length = split_length\n if split_overlap < 0:\n raise ValueError(\"split_overlap must be greater than or equal to 0.\")\n self.split_overlap = split_overlap\n self.split_threshold = split_threshold\n\n @component.output_types(documents=List[Document])\n def run(self, documents: List[Document]):\n \"\"\"\n Split documents into smaller parts.\n\n Splits documents by the unit expressed in `split_by`, with a length of `split_length`\n and an overlap of `split_overlap`.\n\n :param documents: The documents to split.\n\n :returns: A dictionary with the following key:\n - `documents`: List of documents with the split texts. A metadata field \"source_id\" is added to each\n document to keep track of the original document that was split. Another metadata field \"page_number\"\n is added to each number to keep track of the page it belonged to in the original document. 
Other metadata\n are copied from the original document.\n\n :raises TypeError: if the input is not a list of Documents.\n :raises ValueError: if the content of a document is None.\n \"\"\"\n\n if not isinstance(documents, list) or (documents and not isinstance(documents[0], Document)):\n raise TypeError(\"DocumentSplitter expects a List of Documents as input.\")\n\n split_docs = []\n for doc in documents:\n if doc.content is None:\n raise ValueError(\n f\"DocumentSplitter only works with text documents but content for document ID {doc.id} is None.\"\n )\n units = self._split_into_units(doc.content, self.split_by)\n text_splits, splits_pages = self._concatenate_units(\n units, self.split_length, self.split_overlap, self.split_threshold\n )\n metadata = deepcopy(doc.meta)\n metadata[\"source_id\"] = doc.id\n split_docs += self._create_docs_from_splits(\n text_splits=text_splits, splits_pages=splits_pages, meta=metadata\n )\n return {\"documents\": split_docs}\n\n def _split_into_units(self, text: str, split_by: Literal[\"word\", \"sentence\", \"passage\", \"page\"]) -> List[str]:\n if split_by == \"page\":\n split_at = \"\\f\"\n elif split_by == \"passage\":\n split_at = \"\\n\\n\"\n elif split_by == \"sentence\":\n split_at = \".\"\n elif split_by == \"word\":\n split_at = \" \"\n else:\n raise NotImplementedError(\n \"DocumentSplitter only supports 'word', 'sentence', 'page' or 'passage' split_by options.\"\n )\n units = text.split(split_at)\n # Add the delimiter back to all units except the last one\n for i in range(len(units) - 1):\n units[i] += split_at\n return units\n\n def _concatenate_units(\n self, elements: List[str], split_length: int, split_overlap: int, split_threshold: int\n ) -> Tuple[List[str], List[int]]:\n \"\"\"\n Concatenates the elements into parts of split_length units.\n\n Keeps track of the original page number that each element belongs. If the length of the current units is less\n than the pre-defined `split_threshold`, it does not create a new split. 
Instead, it concatenates the current\n units with the last split, preventing the creation of excessively small splits.\n \"\"\"\n\n text_splits: List[str] = []\n splits_pages = []\n cur_page = 1\n segments = windowed(elements, n=split_length, step=split_length - split_overlap)\n for seg in segments:\n current_units = [unit for unit in seg if unit is not None]\n txt = \"\".join(current_units)\n # check if length of current units is below split_threshold\n if len(current_units) < split_threshold and len(text_splits) > 0:\n # concatenate the last split with the current one\n text_splits[-1] += txt\n elif len(txt) > 0:\n text_splits.append(txt)\n splits_pages.append(cur_page)\n processed_units = current_units[: split_length - split_overlap]\n if self.split_by == \"page\":\n num_page_breaks = len(processed_units)\n else:\n num_page_breaks = sum(processed_unit.count(\"\\f\") for processed_unit in processed_units)\n cur_page += num_page_breaks\n return text_splits, splits_pages\n\n @staticmethod\n def _create_docs_from_splits(text_splits: List[str], splits_pages: List[int], meta: Dict) -> List[Document]:\n \"\"\"\n Creates Document objects from splits enriching them with page number and the metadata of the original document.\n \"\"\"\n documents: List[Document] = []\n\n for i, txt in enumerate(text_splits):\n meta = deepcopy(meta)\n doc = Document(content=txt, meta=meta)\n doc.meta[\"page_number\"] = splits_pages[i]\n documents.append(doc)\n return documents\n", "releasenotes/notes/add-split_id_and_overlap_to_DocumentSplitter-8180ad8f13495741.yaml": null}
|
diff --git a/releasenotes/notes/add-split_id_and_overlap_to_DocumentSplitter-8180ad8f13495741.yaml b/releasenotes/notes/add-split_id_and_overlap_to_DocumentSplitter-8180ad8f13495741.yaml
new file mode 100644
index 0000000000..e3eba2d57b
--- /dev/null
+++ b/releasenotes/notes/add-split_id_and_overlap_to_DocumentSplitter-8180ad8f13495741.yaml
@@ -0,0 +1,4 @@
+---
+features:
+ - |
+ The `DocumentSplitter` now has support for the `split_id` and `split_overlap` to allow for more control over the splitting process.
|
{"haystack/components/preprocessors/document_splitter.py": [{"type": "function", "name": "DocumentSplitter._add_split_overlap_information", "lines": [199, 221], "signature": "def _add_split_overlap_information( current_doc: Document, current_doc_start_idx: int, previous_doc: Document, previous_doc_start_idx: int ):", "doc": "Adds split overlap information to the current and previous Document's meta.\n\n:param current_doc: The Document that is being split.\n:param current_doc_start_idx: The starting index of the current Document.\n:param previous_doc: The Document that was split before the current Document.\n:param previous_doc_start_idx: The starting index of the previous Document."}]}
| null |
["test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_add_split_overlap_information"]
|
["test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_non_text_document", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_single_doc", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_empty_list", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_unsupported_split_by", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_unsupported_split_length", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_unsupported_split_overlap", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_split_by_word", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_split_by_word_with_threshold", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_split_by_word_multiple_input_docs", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_split_by_sentence", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_split_by_passage", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_split_by_page", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_split_by_word_with_overlap", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_source_id_stored_in_metadata", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_copy_metadata", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_add_page_number_to_metadata_with_no_overlap_word_split", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_add_page_number_to_metadata_with_no_overlap_sentence_split", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_add_page_number_to_metadata_with_no_overlap_passage_split", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_add_page_number_to_metadata_with_no_overlap_page_split", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_add_page_number_to_metadata_with_overlap_word_split", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_add_page_number_to_metadata_with_overlap_sentence_split", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_add_page_number_to_metadata_with_overlap_passage_split", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_add_page_number_to_metadata_with_overlap_page_split"]
|
f4d9c2bb917be0ffe132dffcc2ad4f1b0fcc5967
|
{"first_commit_time": 1719248237.0, "pr_title": "feat : adding `split_id` and `split_overlap` to `DocumentSplitter`", "pr_body": "### Related Issues\r\n\r\n- fixes [#7389](https://github.com/deepset-ai/haystack/issues/7389)\r\n\r\n### Proposed Changes:\r\n\r\nWhen a `split_overlap` is set each produced chunk Document will have information:\r\n- about the `split_id` allowing an ordering over the document chunks\r\n- each document chunk will have in the `meta` the `_split_overlap`, telling with which other docs it overlaps and on what range\r\n\r\n### How did you test it?\r\n\r\n- added new unit tests, did manual verification and run integration tests\r\n\r\n### Checklist\r\n\r\n- I have read the [contributors guidelines](https://github.com/deepset-ai/haystack/blob/main/CONTRIBUTING.md) and the [code of conduct](https://github.com/deepset-ai/haystack/blob/main/code_of_conduct.txt)\r\n- I have updated the related issue with new insights and changes\r\n- I added unit tests and updated the docstrings\r\n- I've used one of the [conventional commit types](https://www.conventionalcommits.org/en/v1.0.0/) for my PR title: `fix:`, `feat:`, `build:`, `chore:`, `ci:`, `docs:`, `style:`, `refactor:`, `perf:`, `test:`.\r\n- I documented my code\r\n- I ran [pre-commit hooks](https://github.com/deepset-ai/haystack/blob/main/CONTRIBUTING.md#installation) and fixed any issue\r\n", "pr_timeline": [{"time": 1719392177.0, "comment": "## Pull Request Test Coverage Report for [Build 9676516194](https://coveralls.io/builds/68307687)\n\n\n### Details\n\n* **0** of **0** changed or added relevant lines in **0** files are covered.\n* No unchanged relevant lines lost coverage.\n* Overall coverage increased (+**0.03%**) to **89.997%**\n\n---\n\n\n\n| Totals | [](https://coveralls.io/builds/68307687) |\n| :-- | --: |\n| Change from base [Build 9664851641](https://coveralls.io/builds/68289438): | 0.03% |\n| Covered Lines: | 6739 |\n| Relevant Lines: | 7488 |\n\n---\n##### \ud83d\udc9b - [Coveralls](https://coveralls.io)\n"}, {"time": 1719392175.0, "comment": "## Pull Request Test Coverage Report for [Build 9676513721](https://coveralls.io/builds/68307683)\n\n\n### Details\n\n* **0** of **0** changed or added relevant lines in **0** files are covered.\n* No unchanged relevant lines lost coverage.\n* Overall coverage increased (+**0.03%**) to **89.997%**\n\n---\n\n\n\n| Totals | [](https://coveralls.io/builds/68307683) |\n| :-- | --: |\n| Change from base [Build 9664851641](https://coveralls.io/builds/68289438): | 0.03% |\n| Covered Lines: | 6739 |\n| Relevant Lines: | 7488 |\n\n---\n##### \ud83d\udc9b - [Coveralls](https://coveralls.io)\n"}, {"time": 1719393626.0, "comment": "## Pull Request Test Coverage Report for [Build 9676872651](https://coveralls.io/builds/68308275)\n\n### Warning: This coverage report may be inaccurate.\n\nThis pull request's base commit is no longer the HEAD commit of its target branch. 
This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.\n\n- For more information on this, see <a target=\"_blank\" href=\"https://docs.coveralls.io/build-types#tracking-coverage-changes-with-pull_request-builds\">Tracking coverage changes with pull request builds</a>.\n- To avoid this issue with future PRs, see these <a target=\"_blank\" href=\"https://docs.coveralls.io/build-types#recommended-ci-configurations\">Recommended CI Configurations</a>.\n- For a quick fix, <a target=\"_blank\" href=\"https://github.blog/changelog/2022-02-03-more-ways-to-keep-your-pull-request-branch-up-to-date/#update-your-pull-request-branch-by-rebasing\">rebase this PR at GitHub</a>. Your next report should be accurate.\n\n### Details\n\n* **0** of **0** changed or added relevant lines in **0** files are covered.\n* **2** unchanged lines in **1** file lost coverage.\n* Overall coverage increased (+**0.03%**) to **89.997%**\n\n---\n\n\n| Files with Coverage Reduction | New Missed Lines | % |\n| :-----|--------------|--: |\n| [components/generators/openai.py](https://coveralls.io/builds/68308275/source?filename=components%2Fgenerators%2Fopenai.py#L189) | 2 | 96.34% |\n<!-- | **Total:** | **2** | | -->\n\n| Totals | [](https://coveralls.io/builds/68308275) |\n| :-- | --: |\n| Change from base [Build 9664851641](https://coveralls.io/builds/68289438): | 0.03% |\n| Covered Lines: | 6739 |\n| Relevant Lines: | 7488 |\n\n---\n##### \ud83d\udc9b - [Coveralls](https://coveralls.io)\n"}, {"time": 1719395332.0, "comment": "## Pull Request Test Coverage Report for [Build 9677319700](https://coveralls.io/builds/68308860)\n\n### Warning: This coverage report may be inaccurate.\n\nThis pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.\n\n- For more information on this, see <a target=\"_blank\" href=\"https://docs.coveralls.io/build-types#tracking-coverage-changes-with-pull_request-builds\">Tracking coverage changes with pull request builds</a>.\n- To avoid this issue with future PRs, see these <a target=\"_blank\" href=\"https://docs.coveralls.io/build-types#recommended-ci-configurations\">Recommended CI Configurations</a>.\n- For a quick fix, <a target=\"_blank\" href=\"https://github.blog/changelog/2022-02-03-more-ways-to-keep-your-pull-request-branch-up-to-date/#update-your-pull-request-branch-by-rebasing\">rebase this PR at GitHub</a>. 
Your next report should be accurate.\n\n### Details\n\n* **0** of **0** changed or added relevant lines in **0** files are covered.\n* **2** unchanged lines in **1** file lost coverage.\n* Overall coverage increased (+**0.03%**) to **89.997%**\n\n---\n\n\n| Files with Coverage Reduction | New Missed Lines | % |\n| :-----|--------------|--: |\n| [components/generators/openai.py](https://coveralls.io/builds/68308860/source?filename=components%2Fgenerators%2Fopenai.py#L189) | 2 | 96.34% |\n<!-- | **Total:** | **2** | | -->\n\n| Totals | [](https://coveralls.io/builds/68308860) |\n| :-- | --: |\n| Change from base [Build 9664851641](https://coveralls.io/builds/68289438): | 0.03% |\n| Covered Lines: | 6739 |\n| Relevant Lines: | 7488 |\n\n---\n##### \ud83d\udc9b - [Coveralls](https://coveralls.io)\n"}, {"time": 1719396142.0, "comment": "## Pull Request Test Coverage Report for [Build 9677523888](https://coveralls.io/builds/68309115)\n\n### Warning: This coverage report may be inaccurate.\n\nThis pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.\n\n- For more information on this, see <a target=\"_blank\" href=\"https://docs.coveralls.io/build-types#tracking-coverage-changes-with-pull_request-builds\">Tracking coverage changes with pull request builds</a>.\n- To avoid this issue with future PRs, see these <a target=\"_blank\" href=\"https://docs.coveralls.io/build-types#recommended-ci-configurations\">Recommended CI Configurations</a>.\n- For a quick fix, <a target=\"_blank\" href=\"https://github.blog/changelog/2022-02-03-more-ways-to-keep-your-pull-request-branch-up-to-date/#update-your-pull-request-branch-by-rebasing\">rebase this PR at GitHub</a>. Your next report should be accurate.\n\n### Details\n\n* **0** of **0** changed or added relevant lines in **0** files are covered.\n* **2** unchanged lines in **1** file lost coverage.\n* Overall coverage increased (+**0.03%**) to **89.997%**\n\n---\n\n\n| Files with Coverage Reduction | New Missed Lines | % |\n| :-----|--------------|--: |\n| [components/generators/openai.py](https://coveralls.io/builds/68309115/source?filename=components%2Fgenerators%2Fopenai.py#L189) | 2 | 96.34% |\n<!-- | **Total:** | **2** | | -->\n\n| Totals | [](https://coveralls.io/builds/68309115) |\n| :-- | --: |\n| Change from base [Build 9664851641](https://coveralls.io/builds/68289438): | 0.03% |\n| Covered Lines: | 6739 |\n| Relevant Lines: | 7488 |\n\n---\n##### \ud83d\udc9b - [Coveralls](https://coveralls.io)\n"}, {"time": 1719481818.0, "comment": "## Pull Request Test Coverage Report for [Build 9694289136](https://coveralls.io/builds/68334721)\n\n### Warning: This coverage report may be inaccurate.\n\nThis pull request's base commit is no longer the HEAD commit of its target branch. 
This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.\n\n- For more information on this, see <a target=\"_blank\" href=\"https://docs.coveralls.io/build-types#tracking-coverage-changes-with-pull_request-builds\">Tracking coverage changes with pull request builds</a>.\n- To avoid this issue with future PRs, see these <a target=\"_blank\" href=\"https://docs.coveralls.io/build-types#recommended-ci-configurations\">Recommended CI Configurations</a>.\n- For a quick fix, <a target=\"_blank\" href=\"https://github.blog/changelog/2022-02-03-more-ways-to-keep-your-pull-request-branch-up-to-date/#update-your-pull-request-branch-by-rebasing\">rebase this PR at GitHub</a>. Your next report should be accurate.\n\n### Details\n\n* **0** of **0** changed or added relevant lines in **0** files are covered.\n* **3** unchanged lines in **2** files lost coverage.\n* Overall coverage increased (+**0.02%**) to **89.987%**\n\n---\n\n\n| Files with Coverage Reduction | New Missed Lines | % |\n| :-----|--------------|--: |\n| [tracing/datadog.py](https://coveralls.io/builds/68334721/source?filename=tracing%2Fdatadog.py#L75) | 1 | 94.59% |\n| [components/generators/openai.py](https://coveralls.io/builds/68334721/source?filename=components%2Fgenerators%2Fopenai.py#L189) | 2 | 96.34% |\n<!-- | **Total:** | **3** | | -->\n\n| Totals | [](https://coveralls.io/builds/68334721) |\n| :-- | --: |\n| Change from base [Build 9664851641](https://coveralls.io/builds/68289438): | 0.02% |\n| Covered Lines: | 6740 |\n| Relevant Lines: | 7490 |\n\n---\n##### \ud83d\udc9b - [Coveralls](https://coveralls.io)\n"}], "issues": {}}
|
deepset-ai/haystack
| 8042
|
https://github.com/deepset-ai/haystack/pull/8042
|
deepset-ai__haystack-8042
|
[]
|
031b0bfbd836f128736220589917a62c43c9c512
|
diff --git a/haystack/document_stores/types/__init__.py b/haystack/document_stores/types/__init__.py
index df2032f79c..ed6becf8b4 100644
--- a/haystack/document_stores/types/__init__.py
+++ b/haystack/document_stores/types/__init__.py
@@ -2,8 +2,8 @@
#
# SPDX-License-Identifier: Apache-2.0
-from .filter_policy import FilterPolicy
+from .filter_policy import FilterPolicy, apply_filter_policy
from .policy import DuplicatePolicy
from .protocol import DocumentStore
-__all__ = ["DocumentStore", "DuplicatePolicy", "FilterPolicy"]
+__all__ = ["apply_filter_policy", "DocumentStore", "DuplicatePolicy", "FilterPolicy"]
diff --git a/haystack/document_stores/types/filter_policy.py b/haystack/document_stores/types/filter_policy.py
index a2be576d20..b0dc58d895 100644
--- a/haystack/document_stores/types/filter_policy.py
+++ b/haystack/document_stores/types/filter_policy.py
@@ -3,7 +3,11 @@
# SPDX-License-Identifier: Apache-2.0
from enum import Enum
-from typing import Any, Dict, Optional
+from typing import Any, Dict, Literal, Optional
+
+from haystack import logging
+
+logger = logging.getLogger(__name__)
class FilterPolicy(Enum):
@@ -28,18 +32,259 @@ def from_str(filter_policy: str) -> "FilterPolicy":
:param filter_policy: The string to convert.
:return: The corresponding FilterPolicy enum.
"""
- enum_map = {e.value: e for e in FilterPolicy}
- policy = enum_map.get(filter_policy)
+ enum_map = {e.value.lower(): e for e in FilterPolicy}
+ policy = enum_map.get(filter_policy.lower() if filter_policy else "")
if policy is None:
msg = f"Unknown FilterPolicy type '{filter_policy}'. Supported types are: {list(enum_map.keys())}"
raise ValueError(msg)
return policy
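A small sketch (editorial illustration, not part of the patch) of what this change to `from_str` enables: the lookup no longer depends on casing.

```python
from haystack.document_stores.types import FilterPolicy

# Both spellings resolve to the same enum member after this change;
# before it, the non-lower-case forms raised a ValueError.
assert FilterPolicy.from_str("merge") is FilterPolicy.MERGE
assert FilterPolicy.from_str("MERGE") is FilterPolicy.MERGE
assert FilterPolicy.from_str("Replace") is FilterPolicy.REPLACE
```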
+def is_comparison_filter(filter_item: Dict[str, Any]) -> bool:
+ """
+ Check if the given filter is a comparison filter.
+
+ :param filter_item: The filter to check.
+ :returns: True if the filter is a comparison filter, False otherwise.
+ """
+ return all(key in filter_item for key in ["field", "operator", "value"])
+
+
+def is_logical_filter(filter_item: Dict[str, Any]) -> bool:
+ """
+ Check if the given filter is a logical filter.
+
+ :param filter_item: The filter to check.
+ :returns: True if the filter is a logical filter, False otherwise.
+ """
+ return "operator" in filter_item and "conditions" in filter_item
+
+
+def combine_two_logical_filters(
+ init_logical_filter: Dict[str, Any], runtime_logical_filter: Dict[str, Any]
+) -> Dict[str, Any]:
+ """
+ Combine two logical filters, they must have the same operator.
+
+ If `init_logical_filter["operator"]` and `runtime_logical_filter["operator"]` are the same, the conditions
+ of both filters are combined. Otherwise, the `init_logical_filter` is ignored and `
+ runtime_logical_filter` is returned.
+
+ __Example__:
+
+ ```python
+ init_logical_filter = {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.type", "operator": "==", "value": "article"},
+ {"field": "meta.rating", "operator": ">=", "value": 3},
+ ]
+ }
+ runtime_logical_filter = {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.genre", "operator": "IN", "value": ["economy", "politics"]},
+ {"field": "meta.publisher", "operator": "==", "value": "nytimes"},
+ ]
+ }
+ new_filters = combine_two_logical_filters(
+ init_logical_filter, runtime_logical_filter, "AND"
+ )
+ # Output:
+ {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.type", "operator": "==", "value": "article"},
+ {"field": "meta.rating", "operator": ">=", "value": 3},
+ {"field": "meta.genre", "operator": "IN", "value": ["economy", "politics"]},
+ {"field": "meta.publisher", "operator": "==", "value": "nytimes"},
+ ]
+ }
+ ```
+ """
+ if init_logical_filter["operator"] == runtime_logical_filter["operator"]:
+ return {
+ "operator": str(init_logical_filter["operator"]),
+ "conditions": init_logical_filter["conditions"] + runtime_logical_filter["conditions"],
+ }
+
+ logger.warning(
+ "The provided logical operators, {parsed_operator} and {operator}, do not match so the parsed logical "
+ "filter, {init_logical_filter}, will be ignored and only the provided logical filter,{runtime_logical_filter}, "
+ "will be used. Update the logical operators to match to include the parsed filter.",
+ parsed_operator=init_logical_filter["operator"],
+ operator=runtime_logical_filter["operator"],
+ init_logical_filter=init_logical_filter,
+ runtime_logical_filter=runtime_logical_filter,
+ )
+ runtime_logical_filter["operator"] = str(runtime_logical_filter["operator"])
+ return runtime_logical_filter
+
+
+def combine_init_comparison_and_runtime_logical_filters(
+ init_comparison_filter: Dict[str, Any],
+ runtime_logical_filter: Dict[str, Any],
+ logical_operator: Literal["AND", "OR", "NOT"],
+) -> Dict[str, Any]:
+ """
+ Combine a runtime logical filter with the init comparison filter using the provided logical_operator.
+
+ We only add the init_comparison_filter if logical_operator matches the existing
+ runtime_logical_filter["operator"]. Otherwise, we return the runtime_logical_filter unchanged.
+
+ __Example__:
+
+ ```python
+ runtime_logical_filter = {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.type", "operator": "==", "value": "article"},
+ {"field": "meta.rating", "operator": ">=", "value": 3},
+ ]
+ }
+ init_comparison_filter = {"field": "meta.date", "operator": ">=", "value": "2015-01-01"}
+ new_filters = combine_init_comparison_and_runtime_logical_filters(
+ init_comparison_filter, runtime_logical_filter, "AND"
+ )
+ # Output:
+ {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.type", "operator": "==", "value": "article"},
+ {"field": "meta.rating", "operator": ">=", "value": 3},
+ {"field": "meta.date", "operator": ">=", "value": "2015-01-01"},
+ ]
+ }
+ ```
+ """
+ if runtime_logical_filter["operator"] == logical_operator:
+ conditions = runtime_logical_filter["conditions"]
+ fields = {c.get("field") for c in conditions}
+ if init_comparison_filter["field"] not in fields:
+ conditions.append(init_comparison_filter)
+ else:
+ logger.warning(
+ "The init filter, {init_filter}, is ignored as the field is already present in the existing "
+ "filters, {filters}.",
+ init_filter=init_comparison_filter,
+ filters=runtime_logical_filter,
+ )
+ return {"operator": str(runtime_logical_filter["operator"]), "conditions": conditions}
+
+ logger.warning(
+ "The provided logical_operator, {logical_operator}, does not match the logical operator found in "
+ "the runtime filters, {filters_logical_operator}, so the init filter will be ignored.",
+ logical_operator=logical_operator,
+ filters_logical_operator=runtime_logical_filter["operator"],
+ )
+ runtime_logical_filter["operator"] = str(runtime_logical_filter["operator"])
+ return runtime_logical_filter
+
+
+def combine_runtime_comparison_and_init_logical_filters(
+ runtime_comparison_filter: Dict[str, Any],
+ init_logical_filter: Dict[str, Any],
+ logical_operator: Literal["AND", "OR", "NOT"],
+) -> Dict[str, Any]:
+ """
+ Combine an init logical filter with the runtime comparison filter using the provided logical_operator.
+
+ We only add the runtime_comparison_filter if logical_operator matches the existing
+ init_logical_filter["operator"]. Otherwise, we return the runtime_comparison_filter unchanged.
+
+ __Example__:
+
+ ```python
+ init_logical_filter = {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.type", "operator": "==", "value": "article"},
+ {"field": "meta.rating", "operator": ">=", "value": 3},
+ ]
+ }
+ runtime_comparison_filter = {"field": "meta.date", "operator": ">=", "value": "2015-01-01"}
+ new_filters = combine_runtime_comparison_and_init_logical_filters(
+ runtime_comparison_filter, init_logical_filter, "AND"
+ )
+ # Output:
+ {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.type", "operator": "==", "value": "article"},
+ {"field": "meta.rating", "operator": ">=", "value": 3},
+ {"field": "meta.date", "operator": ">=", "value": "2015-01-01"},
+ ]
+ }
+ ```
+ """
+ if init_logical_filter["operator"] == logical_operator:
+ conditions = init_logical_filter["conditions"]
+ fields = {c.get("field") for c in conditions}
+ if runtime_comparison_filter["field"] in fields:
+ logger.warning(
+ "The runtime filter, {runtime_filter}, will overwrite the existing filter with the same "
+ "field in the init logical filter.",
+ runtime_filter=runtime_comparison_filter,
+ )
+ conditions = [c for c in conditions if c.get("field") != runtime_comparison_filter["field"]]
+ conditions.append(runtime_comparison_filter)
+ return {"operator": str(init_logical_filter["operator"]), "conditions": conditions}
+
+ logger.warning(
+ "The provided logical_operator, {logical_operator}, does not match the logical operator found in "
+ "the init logical filter, {filters_logical_operator}, so the init logical filter will be ignored.",
+ logical_operator=logical_operator,
+ filters_logical_operator=init_logical_filter["operator"],
+ )
+ return runtime_comparison_filter
+
+
+def combine_two_comparison_filters(
+ init_comparison_filter: Dict[str, Any],
+ runtime_comparison_filter: Dict[str, Any],
+ logical_operator: Literal["AND", "OR", "NOT"],
+) -> Dict[str, Any]:
+ """
+ Combine a comparison filter with the `init_comparison_filter` using the provided `logical_operator`.
+
+ If `runtime_comparison_filter` and `init_comparison_filter` target the same field, `init_comparison_filter`
+ is ignored and `runtime_comparison_filter` is returned unchanged.
+
+ __Example__:
+
+ ```python
+ runtime_comparison_filter = {"field": "meta.type", "operator": "==", "value": "article"},
+ init_comparison_filter = {"field": "meta.date", "operator": ">=", "value": "2015-01-01"},
+ new_filters = combine_two_comparison_filters(
+ init_comparison_filter, runtime_comparison_filter, "AND"
+ )
+ # Output:
+ {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.type", "operator": "==", "value": "article"},
+ {"field": "meta.date", "operator": ">=", "value": "2015-01-01"},
+ ]
+ }
+ ```
+ """
+ if runtime_comparison_filter["field"] == init_comparison_filter["field"]:
+ logger.warning(
+ "The parsed filter, {parsed_filter}, is ignored as the field is already present in the existing "
+ "filters, {filters}.",
+ parsed_filter=init_comparison_filter,
+ filters=runtime_comparison_filter,
+ )
+ return runtime_comparison_filter
+
+ return {"operator": str(logical_operator), "conditions": [init_comparison_filter, runtime_comparison_filter]}
+
+
def apply_filter_policy(
filter_policy: FilterPolicy,
init_filters: Optional[Dict[str, Any]] = None,
runtime_filters: Optional[Dict[str, Any]] = None,
+ default_logical_operator: Literal["AND", "OR", "NOT"] = "AND",
) -> Optional[Dict[str, Any]]:
"""
Apply the filter policy to the given initial and runtime filters to determine the final set of filters used.
@@ -52,10 +297,23 @@ def apply_filter_policy(
values from the runtime filters will overwrite those from the initial filters.
:param init_filters: The initial filters set during the initialization of the relevant retriever.
:param runtime_filters: The filters provided at runtime, usually during a query operation execution. These filters
- can change for each query/retreiver run invocation.
+ can change for each query/retriever run invocation.
+ :param default_logical_operator: The default logical operator to use when merging filters (non-legacy filters only).
:returns: A dictionary containing the resulting filters based on the provided policy.
"""
- if filter_policy == FilterPolicy.MERGE and runtime_filters:
- return {**(init_filters or {}), **runtime_filters}
- else:
- return runtime_filters or init_filters
+ if filter_policy == FilterPolicy.MERGE and runtime_filters and init_filters:
+ # now we merge filters
+ if is_comparison_filter(init_filters) and is_comparison_filter(runtime_filters):
+ return combine_two_comparison_filters(init_filters, runtime_filters, default_logical_operator)
+ elif is_comparison_filter(init_filters) and is_logical_filter(runtime_filters):
+ return combine_init_comparison_and_runtime_logical_filters(
+ init_filters, runtime_filters, default_logical_operator
+ )
+ elif is_logical_filter(init_filters) and is_comparison_filter(runtime_filters):
+ return combine_runtime_comparison_and_init_logical_filters(
+ runtime_filters, init_filters, default_logical_operator
+ )
+ elif is_logical_filter(init_filters) and is_logical_filter(runtime_filters):
+ return combine_two_logical_filters(init_filters, runtime_filters)
+
+ return runtime_filters or init_filters
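Putting the dispatch above together, a minimal sketch (editorial, values mirror the new tests) of merging an init comparison filter into a runtime logical filter:

```python
from haystack.document_stores.types import FilterPolicy, apply_filter_policy

init_filters = {"field": "meta.date", "operator": ">=", "value": "2015-01-01"}
runtime_filters = {
    "operator": "AND",
    "conditions": [{"field": "meta.type", "operator": "==", "value": "article"}],
}

merged = apply_filter_policy(FilterPolicy.MERGE, init_filters, runtime_filters)
# merged == {
#     "operator": "AND",
#     "conditions": [
#         {"field": "meta.type", "operator": "==", "value": "article"},
#         {"field": "meta.date", "operator": ">=", "value": "2015-01-01"},
#     ],
# }
```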
diff --git a/releasenotes/notes/implement-merge-filter-logic-99e6785a78f80ae9.yaml b/releasenotes/notes/implement-merge-filter-logic-99e6785a78f80ae9.yaml
new file mode 100644
index 0000000000..c90479c2c6
--- /dev/null
+++ b/releasenotes/notes/implement-merge-filter-logic-99e6785a78f80ae9.yaml
@@ -0,0 +1,4 @@
+---
+enhancements:
+ - |
+ Enhanced filter application logic to support merging of filters. It facilitates more precise retrieval filtering, allowing for both init and runtime complex filter combinations with logical operators. For more details see https://docs.haystack.deepset.ai/docs/metadata-filtering
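And a sketch of the fallback paths (editorial illustration, values are made up): when the logical operators of the two filters disagree, or two comparison filters target the same field, the runtime filters win and the init filters are dropped with a warning.

```python
from haystack.document_stores.types import FilterPolicy, apply_filter_policy

# Mismatched logical operators: runtime filters are returned unchanged.
init_filters = {"operator": "AND", "conditions": [{"field": "meta.type", "operator": "==", "value": "article"}]}
runtime_filters = {"operator": "OR", "conditions": [{"field": "meta.genre", "operator": "IN", "value": ["economy"]}]}
assert apply_filter_policy(FilterPolicy.MERGE, init_filters, runtime_filters) == runtime_filters

# Same field in two comparison filters: the runtime filter takes precedence.
init_filters = {"field": "meta.date", "operator": ">=", "value": "2015-01-01"}
runtime_filters = {"field": "meta.date", "operator": "<=", "value": "2020-12-31"}
assert apply_filter_policy(FilterPolicy.MERGE, init_filters, runtime_filters) == runtime_filters
```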
|
diff --git a/test/document_stores/test_filter_policy.py b/test/document_stores/test_filter_policy.py
index b7efcd0672..d775ee356e 100644
--- a/test/document_stores/test_filter_policy.py
+++ b/test/document_stores/test_filter_policy.py
@@ -3,43 +3,178 @@
# SPDX-License-Identifier: Apache-2.0
import pytest
-from typing import Any, Dict, Optional
-from enum import Enum
-from haystack.document_stores.types import FilterPolicy
-from haystack.document_stores.types.filter_policy import apply_filter_policy
+from haystack.document_stores.types import apply_filter_policy, FilterPolicy
-def test_replace_policy_with_both_filters():
- init_filters = {"status": "active", "category": "news"}
- runtime_filters = {"author": "John Doe"}
- result = apply_filter_policy(FilterPolicy.REPLACE, init_filters, runtime_filters)
- assert result == runtime_filters
+def test_merge_two_comparison_filters():
+ """
+ Merging two comparison filters
+ Result: AND operator with both filters
+ """
+ init_filters = {"field": "meta.date", "operator": ">=", "value": "2015-01-01"}
+ runtime_filters = {"field": "meta.type", "operator": "==", "value": "article"}
+ result = apply_filter_policy(FilterPolicy.MERGE, init_filters, runtime_filters)
+ assert result == {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.date", "operator": ">=", "value": "2015-01-01"},
+ {"field": "meta.type", "operator": "==", "value": "article"},
+ ],
+ }
-def test_merge_policy_with_both_filters():
- init_filters = {"status": "active", "category": "news"}
- runtime_filters = {"author": "John Doe"}
+def test_merge_init_comparison_and_runtime_logical_filters():
+ """
+ Merging init comparison and runtime logical filters
+ Result: AND operator with both filters
+ """
+ init_filters = {"field": "meta.date", "operator": ">=", "value": "2015-01-01"}
+ runtime_filters = {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.type", "operator": "==", "value": "article"},
+ {"field": "meta.rating", "operator": ">=", "value": 3},
+ ],
+ }
result = apply_filter_policy(FilterPolicy.MERGE, init_filters, runtime_filters)
- assert result == {"status": "active", "category": "news", "author": "John Doe"}
+ assert result == {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.type", "operator": "==", "value": "article"},
+ {"field": "meta.rating", "operator": ">=", "value": 3},
+ {"field": "meta.date", "operator": ">=", "value": "2015-01-01"},
+ ],
+ }
-def test_replace_policy_with_only_init_filters():
- init_filters = {"status": "active", "category": "news"}
- runtime_filters = None
- result = apply_filter_policy(FilterPolicy.REPLACE, init_filters, runtime_filters)
- assert result == init_filters
+def test_merge_runtime_comparison_and_init_logical_filters_with_string_operators():
+ """
+ Merging a runtime comparison filter with an init logical filter, but with string-based logical operators
+ Result: AND operator with both filters
+ """
+ # Test with string-based logical operators
+ init_filters = {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.type", "operator": "==", "value": "article"},
+ {"field": "meta.rating", "operator": ">=", "value": 3},
+ ],
+ }
+ runtime_filters = {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.genre", "operator": "IN", "value": ["economy", "politics"]},
+ {"field": "meta.publisher", "operator": "==", "value": "nytimes"},
+ ],
+ }
+ result = apply_filter_policy(FilterPolicy.MERGE, init_filters, runtime_filters)
+ assert result == {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.type", "operator": "==", "value": "article"},
+ {"field": "meta.rating", "operator": ">=", "value": 3},
+ {"field": "meta.genre", "operator": "IN", "value": ["economy", "politics"]},
+ {"field": "meta.publisher", "operator": "==", "value": "nytimes"},
+ ],
+ }
-def test_merge_policy_with_only_init_filters():
- init_filters = {"status": "active", "category": "news"}
- runtime_filters = None
+def test_merge_runtime_comparison_and_init_logical_filters():
+ """
+ Merging a runtime comparison filter with an init logical filter
+ Result: AND operator with both filters
+ """
+ init_filters = {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.type", "operator": "==", "value": "article"},
+ {"field": "meta.rating", "operator": ">=", "value": 3},
+ ],
+ }
+ runtime_filters = {"field": "meta.date", "operator": ">=", "value": "2015-01-01"}
result = apply_filter_policy(FilterPolicy.MERGE, init_filters, runtime_filters)
- assert result == init_filters
+ assert result == {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.type", "operator": "==", "value": "article"},
+ {"field": "meta.rating", "operator": ">=", "value": 3},
+ {"field": "meta.date", "operator": ">=", "value": "2015-01-01"},
+ ],
+ }
-def test_merge_policy_with_overlapping_keys():
- init_filters = {"status": "active", "category": "news"}
- runtime_filters = {"category": "science", "author": "John Doe"}
+def test_merge_two_logical_filters():
+ """
+ Merging two logical filters
+ Result: AND operator with both filters
+ """
+ init_filters = {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.type", "operator": "==", "value": "article"},
+ {"field": "meta.rating", "operator": ">=", "value": 3},
+ ],
+ }
+ runtime_filters = {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.genre", "operator": "IN", "value": ["economy", "politics"]},
+ {"field": "meta.publisher", "operator": "==", "value": "nytimes"},
+ ],
+ }
result = apply_filter_policy(FilterPolicy.MERGE, init_filters, runtime_filters)
- assert result == {"status": "active", "category": "science", "author": "John Doe"}
+ assert result == {
+ "operator": "AND",
+ "conditions": [
+ {"field": "meta.type", "operator": "==", "value": "article"},
+ {"field": "meta.rating", "operator": ">=", "value": 3},
+ {"field": "meta.genre", "operator": "IN", "value": ["economy", "politics"]},
+ {"field": "meta.publisher", "operator": "==", "value": "nytimes"},
+ ],
+ }
+
+
+def test_merge_with_different_logical_operators():
+ """
+ Merging with a different logical operator
+ Result: warnings and runtime filters
+ """
+ init_filters = {"operator": "AND", "conditions": [{"field": "meta.type", "operator": "==", "value": "article"}]}
+ runtime_filters = {
+ "operator": "OR",
+ "conditions": [{"field": "meta.genre", "operator": "IN", "value": ["economy", "politics"]}],
+ }
+ result = apply_filter_policy(FilterPolicy.MERGE, init_filters, runtime_filters)
+ assert result == runtime_filters
+
+
+def test_merge_comparison_filters_with_same_field():
+ """
+ Merging comparison filters with the same field
+ Result: warnings and runtime filters
+ """
+ init_filters = {"field": "meta.date", "operator": ">=", "value": "2015-01-01"}
+ runtime_filters = {"field": "meta.date", "operator": "<=", "value": "2020-12-31"}
+ result = apply_filter_policy(FilterPolicy.MERGE, init_filters, runtime_filters)
+ assert result == runtime_filters
+
+
[email protected]("logical_operator", ["AND", "OR", "NOT"])
+def test_merge_with_custom_logical_operator(logical_operator: str):
+ """
+ Merging with a custom logical operator
+ Result: The given logical operator with both filters
+ """
+ init_filters = {"field": "meta.date", "operator": ">=", "value": "2015-01-01"}
+ runtime_filters = {"field": "meta.type", "operator": "==", "value": "article"}
+ result = apply_filter_policy(
+ FilterPolicy.MERGE, init_filters, runtime_filters, default_logical_operator=logical_operator
+ )
+ assert result == {
+ "operator": logical_operator,
+ "conditions": [
+ {"field": "meta.date", "operator": ">=", "value": "2015-01-01"},
+ {"field": "meta.type", "operator": "==", "value": "article"},
+ ],
+ }
| 2024-07-18T09:14:08
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"haystack/document_stores/types/__init__.py": "# SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>\n#\n# SPDX-License-Identifier: Apache-2.0\n\nfrom .filter_policy import FilterPolicy\nfrom .policy import DuplicatePolicy\nfrom .protocol import DocumentStore\n\n__all__ = [\"DocumentStore\", \"DuplicatePolicy\", \"FilterPolicy\"]\n", "haystack/document_stores/types/filter_policy.py": "# SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>\n#\n# SPDX-License-Identifier: Apache-2.0\n\nfrom enum import Enum\nfrom typing import Any, Dict, Optional\n\n\nclass FilterPolicy(Enum):\n \"\"\"\n Policy to determine how filters are applied in retrievers interacting with document stores.\n \"\"\"\n\n # Runtime filters replace init filters during retriever run invocation.\n REPLACE = \"replace\"\n\n # Runtime filters are merged with init filters, with runtime filters overwriting init values.\n MERGE = \"merge\"\n\n def __str__(self):\n return self.value\n\n @staticmethod\n def from_str(filter_policy: str) -> \"FilterPolicy\":\n \"\"\"\n Convert a string to a FilterPolicy enum.\n\n :param filter_policy: The string to convert.\n :return: The corresponding FilterPolicy enum.\n \"\"\"\n enum_map = {e.value: e for e in FilterPolicy}\n policy = enum_map.get(filter_policy)\n if policy is None:\n msg = f\"Unknown FilterPolicy type '{filter_policy}'. Supported types are: {list(enum_map.keys())}\"\n raise ValueError(msg)\n return policy\n\n\ndef apply_filter_policy(\n filter_policy: FilterPolicy,\n init_filters: Optional[Dict[str, Any]] = None,\n runtime_filters: Optional[Dict[str, Any]] = None,\n) -> Optional[Dict[str, Any]]:\n \"\"\"\n Apply the filter policy to the given initial and runtime filters to determine the final set of filters used.\n\n The function combines or replaces the initial and runtime filters based on the specified filter policy.\n\n :param filter_policy: The policy to apply when handling the filters. It can be one of the following:\n - `FilterPolicy.REPLACE`: Runtime filters will replace the initial filters.\n - `FilterPolicy.MERGE`: Runtime filters will be merged with the initial filters. If there are overlapping keys,\n values from the runtime filters will overwrite those from the initial filters.\n :param init_filters: The initial filters set during the initialization of the relevant retriever.\n :param runtime_filters: The filters provided at runtime, usually during a query operation execution. These filters\n can change for each query/retreiver run invocation.\n :returns: A dictionary containing the resulting filters based on the provided policy.\n \"\"\"\n if filter_policy == FilterPolicy.MERGE and runtime_filters:\n return {**(init_filters or {}), **runtime_filters}\n else:\n return runtime_filters or init_filters\n", "releasenotes/notes/implement-merge-filter-logic-99e6785a78f80ae9.yaml": null}
|
diff --git a/releasenotes/notes/implement-merge-filter-logic-99e6785a78f80ae9.yaml b/releasenotes/notes/implement-merge-filter-logic-99e6785a78f80ae9.yaml
new file mode 100644
index 0000000000..c90479c2c6
--- /dev/null
+++ b/releasenotes/notes/implement-merge-filter-logic-99e6785a78f80ae9.yaml
@@ -0,0 +1,4 @@
+---
+enhancements:
+ - |
+ Enhanced filter application logic to support merging of filters. It facilitates more precise retrieval filtering, allowing for both init and runtime complex filter combinations with logical operators. For more details see https://docs.haystack.deepset.ai/docs/metadata-filtering
|
{"haystack/document_stores/types/filter_policy.py": [{"type": "function", "name": "is_comparison_filter", "lines": [43, 50], "signature": "def is_comparison_filter(filter_item: Dict[str, Any]) -> bool:", "doc": "Check if the given filter is a comparison filter.\n\n:param filter_item: The filter to check.\n:returns: True if the filter is a comparison filter, False otherwise."}, {"type": "function", "name": "is_logical_filter", "lines": [53, 60], "signature": "def is_logical_filter(filter_item: Dict[str, Any]) -> bool:", "doc": "Check if the given filter is a logical filter.\n\n:param filter_item: The filter to check.\n:returns: True if the filter is a logical filter, False otherwise."}, {"type": "function", "name": "combine_two_logical_filters", "lines": [63, 121], "signature": "def combine_two_logical_filters( init_logical_filter: Dict[str, Any], runtime_logical_filter: Dict[str, Any] ) -> Dict[str, Any]:", "doc": "Combine two logical filters, they must have the same operator.\n\nIf `init_logical_filter[\"operator\"]` and `runtime_logical_filter[\"operator\"]` are the same, the conditions\nof both filters are combined. Otherwise, the `init_logical_filter` is ignored and `\nruntime_logical_filter` is returned.\n\n __Example__:\n\n ```python\n init_logical_filter = {\n \"operator\": \"AND\",\n \"conditions\": [\n {\"field\": \"meta.type\", \"operator\": \"==\", \"value\": \"article\"},\n {\"field\": \"meta.rating\", \"operator\": \">=\", \"value\": 3},\n ]\n }\n runtime_logical_filter = {\n \"operator\": \"AND\",\n \"conditions\": [\n {\"field\": \"meta.genre\", \"operator\": \"IN\", \"value\": [\"economy\", \"politics\"]},\n {\"field\": \"meta.publisher\", \"operator\": \"==\", \"value\": \"nytimes\"},\n ]\n }\n new_filters = combine_two_logical_filters(\n init_logical_filter, runtime_logical_filter, \"AND\"\n )\n # Output:\n {\n \"operator\": \"AND\",\n \"conditions\": [\n {\"field\": \"meta.type\", \"operator\": \"==\", \"value\": \"article\"},\n {\"field\": \"meta.rating\", \"operator\": \">=\", \"value\": 3},\n {\"field\": \"meta.genre\", \"operator\": \"IN\", \"value\": [\"economy\", \"politics\"]},\n {\"field\": \"meta.publisher\", \"operator\": \"==\", \"value\": \"nytimes\"},\n ]\n }\n ```"}, {"type": "function", "name": "combine_init_comparison_and_runtime_logical_filters", "lines": [124, 181], "signature": "def combine_init_comparison_and_runtime_logical_filters( init_comparison_filter: Dict[str, Any], runtime_logical_filter: Dict[str, Any], logical_operator: Literal[\"AND\", \"OR\", \"NOT\"], ) -> Dict[str, Any]:", "doc": "Combine a runtime logical filter with the init comparison filter using the provided logical_operator.\n\nWe only add the init_comparison_filter if logical_operator matches the existing\nruntime_logical_filter[\"operator\"]. 
Otherwise, we return the runtime_logical_filter unchanged.\n\n__Example__:\n\n```python\nruntime_logical_filter = {\n \"operator\": \"AND\",\n \"conditions\": [\n {\"field\": \"meta.type\", \"operator\": \"==\", \"value\": \"article\"},\n {\"field\": \"meta.rating\", \"operator\": \">=\", \"value\": 3},\n ]\n}\ninit_comparison_filter = {\"field\": \"meta.date\", \"operator\": \">=\", \"value\": \"2015-01-01\"}\nnew_filters = combine_init_comparison_and_runtime_logical_filters(\n init_comparison_filter, runtime_logical_filter, \"AND\"\n)\n# Output:\n{\n \"operator\": \"AND\",\n \"conditions\": [\n {\"field\": \"meta.type\", \"operator\": \"==\", \"value\": \"article\"},\n {\"field\": \"meta.rating\", \"operator\": \">=\", \"value\": 3},\n {\"field\": \"meta.date\", \"operator\": \">=\", \"value\": \"2015-01-01\"},\n ]\n}\n```"}, {"type": "function", "name": "combine_runtime_comparison_and_init_logical_filters", "lines": [184, 239], "signature": "def combine_runtime_comparison_and_init_logical_filters( runtime_comparison_filter: Dict[str, Any], init_logical_filter: Dict[str, Any], logical_operator: Literal[\"AND\", \"OR\", \"NOT\"], ) -> Dict[str, Any]:", "doc": "Combine an init logical filter with the runtime comparison filter using the provided logical_operator.\n\nWe only add the runtime_comparison_filter if logical_operator matches the existing\ninit_logical_filter[\"operator\"]. Otherwise, we return the runtime_comparison_filter unchanged.\n\n__Example__:\n\n```python\ninit_logical_filter = {\n \"operator\": \"AND\",\n \"conditions\": [\n {\"field\": \"meta.type\", \"operator\": \"==\", \"value\": \"article\"},\n {\"field\": \"meta.rating\", \"operator\": \">=\", \"value\": 3},\n ]\n}\nruntime_comparison_filter = {\"field\": \"meta.date\", \"operator\": \">=\", \"value\": \"2015-01-01\"}\nnew_filters = combine_runtime_comparison_and_init_logical_filters(\n runtime_comparison_filter, init_logical_filter, \"AND\"\n)\n# Output:\n{\n \"operator\": \"AND\",\n \"conditions\": [\n {\"field\": \"meta.type\", \"operator\": \"==\", \"value\": \"article\"},\n {\"field\": \"meta.rating\", \"operator\": \">=\", \"value\": 3},\n {\"field\": \"meta.date\", \"operator\": \">=\", \"value\": \"2015-01-01\"},\n ]\n}\n```"}, {"type": "function", "name": "combine_two_comparison_filters", "lines": [242, 280], "signature": "def combine_two_comparison_filters( init_comparison_filter: Dict[str, Any], runtime_comparison_filter: Dict[str, Any], logical_operator: Literal[\"AND\", \"OR\", \"NOT\"], ) -> Dict[str, Any]:", "doc": "Combine a comparison filter with the `init_comparison_filter` using the provided `logical_operator`.\n\nIf `runtime_comparison_filter` and `init_comparison_filter` target the same field, `init_comparison_filter`\nis ignored and `runtime_comparison_filter` is returned unchanged.\n\n __Example__:\n\n ```python\n runtime_comparison_filter = {\"field\": \"meta.type\", \"operator\": \"==\", \"value\": \"article\"},\n init_comparison_filter = {\"field\": \"meta.date\", \"operator\": \">=\", \"value\": \"2015-01-01\"},\n new_filters = combine_two_comparison_filters(\n init_comparison_filter, runtime_comparison_filter, \"AND\"\n )\n # Output:\n {\n \"operator\": \"AND\",\n \"conditions\": [\n {\"field\": \"meta.type\", \"operator\": \"==\", \"value\": \"article\"},\n {\"field\": \"meta.date\", \"operator\": \">=\", \"value\": \"2015-01-01\"},\n ]\n }\n ```"}]}
| null |
["[", "test/document_stores/test_filter_policy.py::test_merge_two_comparison_filters", "test/document_stores/test_filter_policy.py::test_merge_init_comparison_and_runtime_logical_filters", "test/document_stores/test_filter_policy.py::test_merge_runtime_comparison_and_init_logical_filters_with_string_operators", "test/document_stores/test_filter_policy.py::test_merge_runtime_comparison_and_init_logical_filters", "test/document_stores/test_filter_policy.py::test_merge_two_logical_filters", "test/document_stores/test_filter_policy.py::test_merge_with_different_logical_operators", "test/document_stores/test_filter_policy.py::test_merge_comparison_filters_with_same_field", "test/document_stores/test_filter_policy.py::test_merge_with_custom_logical_operator[AND]", "test/document_stores/test_filter_policy.py::test_merge_with_custom_logical_operator[OR]", "test/document_stores/test_filter_policy.py::test_merge_with_custom_logical_operator[NOT]"]
|
[]
|
f4d9c2bb917be0ffe132dffcc2ad4f1b0fcc5967
|
{"first_commit_time": 1721293309.0, "pr_title": "feat: Implement apply_filter_policy and FilterPolicy.MERGE for the new filters", "pr_body": "### Why:\r\nImplements proper merging of new filters.\r\n\r\n- fixes: https://github.com/deepset-ai/haystack/issues/7995\r\n\r\n### What:\r\n- Implemented new utility functions to determine filter types (`is_legacy`, `is_comparison_filter`, `is_logical_filter`).\r\n- Extended the `apply_filter: policy` function to support merging of `init_filters` with `runtime_filters` using a logical operator, facilitating complex filter scenarios.\r\n- Added a comprehensive suite of tests to validate the functionality of applying filter policies including scenarios with no filters, comparison filters, logical operators, and user-defined logical operators.\r\n\r\n### How can it be used:\r\nThe enhancements enable complex filter logic to be applied seamlessly in document query operations, such as:\r\n- Merging initial and runtime filters with support for legacy filter formats.\r\n- Applying logical operators (`AND`, `OR`) when merging comparison or logical filters, allowing for a more nuanced filter logic that can accurately reflect user needs or query specifics.\r\n\r\nExample usage of merging comparison and logical filters:\r\n```python\r\ninit_filter = {\"field\": \"meta.type\", \"operator\": \"==\", \"value\": \"pdf\"}\r\nruntime_filter = {\r\n \"operator\": \"AND\",\r\n \"conditions\": [\r\n {\"field\": \"meta.name\", \"operator\": \"==\", \"value\": \"John\"},\r\n {\"field\": \"meta.year\", \"operator\": \"==\", \"value\": \"2022\"},\r\n ],\r\n}\r\n# Merging the above would result in runtime_filter including the init_filter as another condition under the same logical operator \"AND\".\r\n```\r\n\r\n### How did you test it:\r\nA series of unit tests were added covering various scenarios including:\r\n- Merging filters under both `MERGE` and `REPLACE` policies with different combinations of comparison and logical filters.\r\n- Ensuring that the correct logical operator is applied during the merge process.\r\n- Testing the behavior with no filters provided, ensuring backward compatibility and robust error handling.\r\n\r\n### Notes for the reviewer:\r\n- Special attention should be given to the logic involving merging different types of filters (comparison vs. logical) and ensuring the intended behavior aligns with real-world use cases.\r\n- Review the test cases for `apply_filter_policy` to ensure all potential scenarios are covered and the expected behavior is clearly documented and verified.", "pr_timeline": [{"time": 1723131802.0, "comment": "## Pull Request Test Coverage Report for [Build 10305030806](https://coveralls.io/builds/69121892)\n\n### Warning: This coverage report may be inaccurate.\n\nThis pull request's base commit is no longer the HEAD commit of its target branch. 
This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.\n\n- For more information on this, see <a target=\"_blank\" href=\"https://docs.coveralls.io/build-types#tracking-coverage-changes-with-pull_request-builds\">Tracking coverage changes with pull request builds</a>.\n- To avoid this issue with future PRs, see these <a target=\"_blank\" href=\"https://docs.coveralls.io/build-types#recommended-ci-configurations\">Recommended CI Configurations</a>.\n- For a quick fix, <a target=\"_blank\" href=\"https://github.blog/changelog/2022-02-03-more-ways-to-keep-your-pull-request-branch-up-to-date/#update-your-pull-request-branch-by-rebasing\">rebase this PR at GitHub</a>. Your next report should be accurate.\n\n### Details\n\n* **0** of **0** changed or added relevant lines in **0** files are covered.\n* **14** unchanged lines in **3** files lost coverage.\n* Overall coverage decreased (**-0.03%**) to **90.11%**\n\n---\n\n\n| Files with Coverage Reduction | New Missed Lines | % |\n| :-----|--------------|--: |\n| [components/generators/utils.py](https://coveralls.io/builds/69121892/source?filename=components%2Fgenerators%2Futils.py#L14) | 1 | 66.67% |\n| [components/preprocessors/document_cleaner.py](https://coveralls.io/builds/69121892/source?filename=components%2Fpreprocessors%2Fdocument_cleaner.py#L311) | 1 | 98.08% |\n| [document_stores/types/filter_policy.py](https://coveralls.io/builds/69121892/source?filename=document_stores%2Ftypes%2Ffilter_policy.py#L25) | 12 | 81.54% |\n<!-- | **Total:** | **14** | | -->\n\n| Totals | [](https://coveralls.io/builds/69121892) |\n| :-- | --: |\n| Change from base [Build 10266828865](https://coveralls.io/builds/69067290): | -0.03% |\n| Covered Lines: | 6952 |\n| Relevant Lines: | 7715 |\n\n---\n##### \ud83d\udc9b - [Coveralls](https://coveralls.io)\n"}, {"time": 1723027372.0, "comment": "This should be it now, see commits since [c8a53f2](https://github.com/deepset-ai/haystack/pull/8042/commits/c8a53f2884635cac70e7820da8eb34e1a6326aeb) onward"}, {"time": 1723131914.0, "comment": "Offline we agreed to eviscerate LogicalOperator enum and use Literal strings for operators. The change is in [a1cc6f9](https://github.com/deepset-ai/haystack/pull/8042/commits/a1cc6f942270682104fa850980a9204fefacea98) "}], "issues": {}}
|
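The `new_components` entry for the row above documents helpers that fold an init comparison filter into a runtime logical filter under the MERGE policy. The following is a minimal standalone sketch of that behaviour, using the hypothetical name `merge_comparison_into_logical` rather than the actual helpers in `haystack/document_stores/types/filter_policy.py`:

```python
from typing import Any, Dict

def merge_comparison_into_logical(
    init_comparison: Dict[str, Any],
    runtime_logical: Dict[str, Any],
    logical_operator: str = "AND",
) -> Dict[str, Any]:
    """Hypothetical sketch: append an init comparison filter to a runtime
    logical filter when the logical operators match; otherwise keep the
    runtime filter unchanged, mirroring the documented MERGE semantics."""
    if runtime_logical.get("operator") != logical_operator:
        return runtime_logical
    return {
        "operator": logical_operator,
        "conditions": [*runtime_logical.get("conditions", []), init_comparison],
    }

init_filter = {"field": "meta.date", "operator": ">=", "value": "2015-01-01"}
runtime_filter = {
    "operator": "AND",
    "conditions": [
        {"field": "meta.type", "operator": "==", "value": "article"},
        {"field": "meta.rating", "operator": ">=", "value": 3},
    ],
}
# The init comparison becomes one more condition under the shared "AND".
print(merge_comparison_into_logical(init_filter, runtime_filter))
```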
deepset-ai/haystack
| 8,103
|
https://github.com/deepset-ai/haystack/pull/8103
|
deepset-ai__haystack-8103
|
[]
|
e17d0c41926849975166e70ec46411f402c8098e
|
diff --git a/haystack/components/preprocessors/document_cleaner.py b/haystack/components/preprocessors/document_cleaner.py
index 233dfed50a..282728ea74 100644
--- a/haystack/components/preprocessors/document_cleaner.py
+++ b/haystack/components/preprocessors/document_cleaner.py
@@ -6,7 +6,8 @@
from copy import deepcopy
from functools import partial, reduce
from itertools import chain
-from typing import Generator, List, Optional, Set
+from typing import Generator, List, Literal, Optional, Set
+from unicodedata import normalize
from haystack import Document, component, logging
@@ -45,6 +46,8 @@ def __init__(
keep_id: bool = False,
remove_substrings: Optional[List[str]] = None,
remove_regex: Optional[str] = None,
+ unicode_normalization: Optional[Literal["NFC", "NFKC", "NFD", "NFKD"]] = None,
+ ascii_only: bool = False,
):
"""
Initialize DocumentCleaner.
@@ -57,14 +60,34 @@ def __init__(
:param remove_substrings: List of substrings to remove from the text.
:param remove_regex: Regex to match and replace substrings by "".
:param keep_id: If `True`, keeps the IDs of the original documents.
+ :param unicode_normalization: Unicode normalization form to apply to the text.
+ Note: This will run before any other steps.
+ :param ascii_only: Whether to convert the text to ASCII only.
+ Will remove accents from characters and replace them with ASCII characters.
+ Other non-ASCII characters will be removed.
+ Note: This will run before any pattern matching or removal.
"""
+ self._validate_params(unicode_normalization=unicode_normalization)
+
self.remove_empty_lines = remove_empty_lines
self.remove_extra_whitespaces = remove_extra_whitespaces
self.remove_repeated_substrings = remove_repeated_substrings
self.remove_substrings = remove_substrings
self.remove_regex = remove_regex
self.keep_id = keep_id
+ self.unicode_normalization = unicode_normalization
+ self.ascii_only = ascii_only
+
+ def _validate_params(self, unicode_normalization: Optional[str]):
+ """
+ Validate the parameters of the DocumentCleaner.
+
+ :param unicode_normalization: Unicode normalization form to apply to the text.
+ :raises ValueError: if the parameters are not valid.
+ """
+ if unicode_normalization and unicode_normalization not in ["NFC", "NFKC", "NFD", "NFKD"]:
+ raise ValueError("unicode_normalization must be one of 'NFC', 'NFKC', 'NFD', 'NFKD'.")
@component.output_types(documents=List[Document])
def run(self, documents: List[Document]):
@@ -93,6 +116,10 @@ def run(self, documents: List[Document]):
continue
text = doc.content
+ if self.unicode_normalization:
+ text = self._normalize_unicode(text, self.unicode_normalization)
+ if self.ascii_only:
+ text = self._ascii_only(text)
if self.remove_extra_whitespaces:
text = self._remove_extra_whitespaces(text)
if self.remove_empty_lines:
@@ -108,6 +135,32 @@ def run(self, documents: List[Document]):
return {"documents": cleaned_docs}
+ def _normalize_unicode(self, text: str, form: Literal["NFC", "NFKC", "NFD", "NFKD"]) -> str:
+ """
+ Normalize the unicode of the text.
+
+ :param text: Text to normalize.
+ :param form: Unicode normalization form to apply to the text.
+ Options: "NFC", "NFKC", "NFD", "NFKD".
+ :returns: The normalized text.
+ """
+ return normalize(form, text)
+
+ def _ascii_only(self, text: str) -> str:
+ """
+ Convert the text to ASCII only.
+
+ Will remove accents from characters and replace them with ASCII characters.
+ Other non-ASCII characters will be removed.
+
+ :param text: Text to convert to ASCII only.
+ :returns: The text in ASCII only.
+ """
+
+ # First normalize the text to NFKD to separate the characters and their diacritics
+ # Then encode it to ASCII and ignore any characters that can't be encoded
+ return self._normalize_unicode(text, "NFKD").encode("ascii", "ignore").decode("utf-8")
+
def _remove_empty_lines(self, text: str) -> str:
"""
Remove empty lines and lines that contain nothing but whitespaces from text.
diff --git a/releasenotes/notes/add-unicode-normalization-and-ascii-mode-to-document-cleaner-ba536b46e499663c.yaml b/releasenotes/notes/add-unicode-normalization-and-ascii-mode-to-document-cleaner-ba536b46e499663c.yaml
new file mode 100644
index 0000000000..d4d28ee47b
--- /dev/null
+++ b/releasenotes/notes/add-unicode-normalization-and-ascii-mode-to-document-cleaner-ba536b46e499663c.yaml
@@ -0,0 +1,6 @@
+---
+enhancements:
+ - |
+ Added `unicode_normalization` parameter to the DocumentCleaner, allowing to normalize the text to NFC, NFD, NFKC, or NFKD.
+ - |
+ Added `ascii_only` parameter to the DocumentCleaner, transforming letters with diacritics to their ASCII equivalent and removing other non-ASCII characters.
|
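The patch above normalizes text with `unicodedata.normalize` before any other cleaning step and, when `ascii_only` is set, strips everything that cannot be encoded as ASCII after NFKD decomposition. A minimal sketch of that transformation on its own, outside the DocumentCleaner class:

```python
from unicodedata import normalize

def to_ascii(text: str) -> str:
    # NFKD separates base characters from their diacritics, so encoding to
    # ASCII with errors="ignore" keeps the base letters and drops the marks
    # along with any character that has no ASCII equivalent.
    return normalize("NFKD", text).encode("ascii", "ignore").decode("ascii")

print(to_ascii("Á ç ñ"))  # prints: A c n
```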
diff --git a/test/components/preprocessors/test_document_cleaner.py b/test/components/preprocessors/test_document_cleaner.py
index 0acd9e8e8c..9bd5df549e 100644
--- a/test/components/preprocessors/test_document_cleaner.py
+++ b/test/components/preprocessors/test_document_cleaner.py
@@ -139,3 +139,68 @@ def test_keep_id_does_not_alter_document_ids(self):
assert len(result["documents"]) == 2
assert result["documents"][0].id == "1"
assert result["documents"][1].id == "2"
+
+ def test_unicode_normalization(self):
+ text = """\
+ アイウエオ
+ Comment ça va
+ مرحبا بالعالم
+ em Space"""
+
+ expected_text_NFC = """\
+ アイウエオ
+ Comment ça va
+ مرحبا بالعالم
+ em Space"""
+
+ expected_text_NFD = """\
+ アイウエオ
+ Comment ça va
+ مرحبا بالعالم
+ em Space"""
+
+ expected_text_NFKC = """\
+ アイウエオ
+ Comment ça va
+ مرحبا بالعالم
+ em Space"""
+
+ expected_text_NFKD = """\
+ アイウエオ
+ Comment ça va
+ مرحبا بالعالم
+ em Space"""
+
+ nfc_cleaner = DocumentCleaner(unicode_normalization="NFC", remove_extra_whitespaces=False)
+ nfd_cleaner = DocumentCleaner(unicode_normalization="NFD", remove_extra_whitespaces=False)
+ nfkc_cleaner = DocumentCleaner(unicode_normalization="NFKC", remove_extra_whitespaces=False)
+ nfkd_cleaner = DocumentCleaner(unicode_normalization="NFKD", remove_extra_whitespaces=False)
+
+ nfc_result = nfc_cleaner.run(documents=[Document(content=text)])
+ nfd_result = nfd_cleaner.run(documents=[Document(content=text)])
+ nfkc_result = nfkc_cleaner.run(documents=[Document(content=text)])
+ nfkd_result = nfkd_cleaner.run(documents=[Document(content=text)])
+
+ assert nfc_result["documents"][0].content == expected_text_NFC
+ assert nfd_result["documents"][0].content == expected_text_NFD
+ assert nfkc_result["documents"][0].content == expected_text_NFKC
+ assert nfkd_result["documents"][0].content == expected_text_NFKD
+
+ def test_ascii_only(self):
+ text = """\
+ アイウエオ
+ Comment ça va
+ Á
+ مرحبا بالعالم
+ em Space"""
+
+ expected_text = """\
+ \n\
+ Comment ca va
+ A
+ \n\
+ em Space"""
+
+ cleaner = DocumentCleaner(ascii_only=True, remove_extra_whitespaces=False, remove_empty_lines=False)
+ result = cleaner.run(documents=[Document(content=text)])
+ assert result["documents"][0].content == expected_text
| 2024-07-29T04:06:24
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"haystack/components/preprocessors/document_cleaner.py": "# SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>\n#\n# SPDX-License-Identifier: Apache-2.0\n\nimport re\nfrom copy import deepcopy\nfrom functools import partial, reduce\nfrom itertools import chain\nfrom typing import Generator, List, Optional, Set\n\nfrom haystack import Document, component, logging\n\nlogger = logging.getLogger(__name__)\n\n\n@component\nclass DocumentCleaner:\n \"\"\"\n Cleans the text in the documents.\n\n It removes extra whitespaces,\n empty lines, specified substrings, regexes,\n page headers and footers (in this order).\n\n ### Usage example:\n\n ```python\n from haystack import Document\n from haystack.components.preprocessors import DocumentCleaner\n\n doc = Document(content=\"This is a document to clean\\\\n\\\\n\\\\nsubstring to remove\")\n\n cleaner = DocumentCleaner(remove_substrings = [\"substring to remove\"])\n result = cleaner.run(documents=[doc])\n\n assert result[\"documents\"][0].content == \"This is a document to clean \"\n ```\n \"\"\"\n\n def __init__(\n self,\n remove_empty_lines: bool = True,\n remove_extra_whitespaces: bool = True,\n remove_repeated_substrings: bool = False,\n keep_id: bool = False,\n remove_substrings: Optional[List[str]] = None,\n remove_regex: Optional[str] = None,\n ):\n \"\"\"\n Initialize DocumentCleaner.\n\n :param remove_empty_lines: If `True`, removes empty lines.\n :param remove_extra_whitespaces: If `True`, removes extra whitespaces.\n :param remove_repeated_substrings: If `True`, removes repeated substrings (headers and footers) from pages.\n Pages must be separated by a form feed character \"\\\\f\",\n which is supported by `TextFileToDocument` and `AzureOCRDocumentConverter`.\n :param remove_substrings: List of substrings to remove from the text.\n :param remove_regex: Regex to match and replace substrings by \"\".\n :param keep_id: If `True`, keeps the IDs of the original documents.\n \"\"\"\n\n self.remove_empty_lines = remove_empty_lines\n self.remove_extra_whitespaces = remove_extra_whitespaces\n self.remove_repeated_substrings = remove_repeated_substrings\n self.remove_substrings = remove_substrings\n self.remove_regex = remove_regex\n self.keep_id = keep_id\n\n @component.output_types(documents=List[Document])\n def run(self, documents: List[Document]):\n \"\"\"\n Cleans up the documents.\n\n :param documents: List of Documents to clean.\n\n :returns: A dictionary with the following key:\n - `documents`: List of cleaned Documents.\n\n :raises TypeError: if documents is not a list of Documents.\n \"\"\"\n if not isinstance(documents, list) or documents and not isinstance(documents[0], Document):\n raise TypeError(\"DocumentCleaner expects a List of Documents as input.\")\n\n cleaned_docs = []\n for doc in documents:\n if doc.content is None:\n logger.warning(\n \"DocumentCleaner only cleans text documents but document.content for document ID\"\n \" %{document_id} is None.\",\n document_id=doc.id,\n )\n cleaned_docs.append(doc)\n continue\n text = doc.content\n\n if self.remove_extra_whitespaces:\n text = self._remove_extra_whitespaces(text)\n if self.remove_empty_lines:\n text = self._remove_empty_lines(text)\n if self.remove_substrings:\n text = self._remove_substrings(text, self.remove_substrings)\n if self.remove_regex:\n text = self._remove_regex(text, self.remove_regex)\n if self.remove_repeated_substrings:\n text = self._remove_repeated_substrings(text)\n\n cleaned_docs.append(Document(content=text, meta=deepcopy(doc.meta), 
id=doc.id if self.keep_id else \"\"))\n\n return {\"documents\": cleaned_docs}\n\n def _remove_empty_lines(self, text: str) -> str:\n \"\"\"\n Remove empty lines and lines that contain nothing but whitespaces from text.\n\n :param text: Text to clean.\n :returns: The text without empty lines.\n \"\"\"\n lines = text.split(\"\\n\")\n non_empty_lines = filter(lambda line: line.strip() != \"\", lines)\n return \"\\n\".join(non_empty_lines)\n\n def _remove_extra_whitespaces(self, text: str) -> str:\n \"\"\"\n Remove extra whitespaces from text.\n\n :param text: Text to clean.\n :returns: The text without extra whitespaces.\n \"\"\"\n return re.sub(r\"\\s\\s+\", \" \", text).strip()\n\n def _remove_regex(self, text: str, regex: str) -> str:\n \"\"\"\n Remove substrings that match the specified regex from the text.\n\n :param text: Text to clean.\n :param regex: Regex to match and replace substrings by \"\".\n :returns: The text without the substrings that match the regex.\n \"\"\"\n return re.sub(regex, \"\", text).strip()\n\n def _remove_substrings(self, text: str, substrings: List[str]) -> str:\n \"\"\"\n Remove all specified substrings from the text.\n\n :param text: Text to clean.\n :param substrings: Substrings to remove.\n :returns: The text without the specified substrings.\n \"\"\"\n for substring in substrings:\n text = text.replace(substring, \"\")\n return text\n\n def _remove_repeated_substrings(self, text: str) -> str:\n \"\"\"\n Remove any substrings from the text that occur repeatedly on every page. For example headers or footers.\n\n Pages in the text need to be separated by form feed character \"\\f\".\n :param text: Text to clean.\n :returns: The text without the repeated substrings.\n \"\"\"\n return self._find_and_remove_header_footer(\n text, n_chars=300, n_first_pages_to_ignore=1, n_last_pages_to_ignore=1\n )\n\n def _find_and_remove_header_footer(\n self, text: str, n_chars: int, n_first_pages_to_ignore: int, n_last_pages_to_ignore: int\n ) -> str:\n \"\"\"\n Heuristic to find footers and headers across different pages by searching for the longest common string.\n\n Pages in the text need to be separated by form feed character \"\\f\".\n For headers, we only search in the first n_chars characters (for footer: last n_chars).\n Note: This heuristic uses exact matches and therefore works well for footers like \"Copyright 2019 by XXX\",\n but won't detect \"Page 3 of 4\" or similar.\n\n :param n_chars: The number of first/last characters where the header/footer shall be searched in.\n :param n_first_pages_to_ignore: The number of first pages to ignore\n (e.g. 
TOCs often don't contain footer/header).\n :param n_last_pages_to_ignore: The number of last pages to ignore.\n :returns: The text without the found headers and footers.\n \"\"\"\n\n pages = text.split(\"\\f\")\n\n # header\n start_of_pages = [p[:n_chars] for p in pages[n_first_pages_to_ignore:-n_last_pages_to_ignore]]\n found_header = self._find_longest_common_ngram(start_of_pages)\n if found_header:\n pages = [page.replace(found_header, \"\") for page in pages]\n\n # footer\n end_of_pages = [p[-n_chars:] for p in pages[n_first_pages_to_ignore:-n_last_pages_to_ignore]]\n found_footer = self._find_longest_common_ngram(end_of_pages)\n if found_footer:\n pages = [page.replace(found_footer, \"\") for page in pages]\n\n logger.debug(\n \"Removed header '{header}' and footer '{footer}' in document\", header=found_header, footer=found_footer\n )\n text = \"\\f\".join(pages)\n return text\n\n def _ngram(self, seq: str, n: int) -> Generator[str, None, None]:\n \"\"\"\n Return all ngrams of length n from a text sequence. Each ngram consists of n words split by whitespace.\n\n :param seq: The sequence to generate ngrams from.\n :param n: The length of the ngrams to generate.\n :returns: A Generator generating all ngrams of length n from the given sequence.\n \"\"\"\n\n # In order to maintain the original whitespace, but still consider \\n and \\t for n-gram tokenization,\n # we add a space here and remove it after creation of the ngrams again (see below)\n seq = seq.replace(\"\\n\", \" \\n\")\n seq = seq.replace(\"\\t\", \" \\t\")\n\n words = seq.split(\" \")\n ngrams = (\n \" \".join(words[i : i + n]).replace(\" \\n\", \"\\n\").replace(\" \\t\", \"\\t\") for i in range(0, len(words) - n + 1)\n )\n\n return ngrams\n\n def _allngram(self, seq: str, min_ngram: int, max_ngram: int) -> Set[str]:\n \"\"\"\n Generates all possible ngrams from a given sequence of text.\n\n Considering all ngram lengths between the minimum and maximum length.\n\n :param seq: The sequence to generate ngrams from.\n :param min_ngram: The minimum length of ngram to consider.\n :param max_ngram: The maximum length of ngram to consider.\n :returns: A set of all ngrams from the given sequence.\n \"\"\"\n lengths = range(min_ngram, max_ngram) if max_ngram else range(min_ngram, len(seq))\n ngrams = map(partial(self._ngram, seq), lengths)\n res = set(chain.from_iterable(ngrams))\n return res\n\n def _find_longest_common_ngram(self, sequences: List[str], min_ngram: int = 3, max_ngram: int = 30) -> str:\n \"\"\"\n Find the longest common ngram across a list of text sequences (e.g. start of pages).\n\n Considering all ngram lengths between the minimum and maximum length. Helpful for finding footers, headers etc.\n Empty sequences are ignored.\n\n :param sequences: The list of strings that shall be searched for common n_grams.\n :param max_ngram: The maximum length of ngram to consider.\n :param min_ngram: The minimum length of ngram to consider.\n :returns: The longest ngram that all sequences have in common.\n \"\"\"\n sequences = [s for s in sequences if s] # filter empty sequences\n if not sequences:\n return \"\"\n seqs_ngrams = map(partial(self._allngram, min_ngram=min_ngram, max_ngram=max_ngram), sequences)\n intersection = reduce(set.intersection, seqs_ngrams)\n\n longest = max(intersection, key=len, default=\"\")\n return longest if longest.strip() else \"\"\n", "releasenotes/notes/add-unicode-normalization-and-ascii-mode-to-document-cleaner-ba536b46e499663c.yaml": null}
|
diff --git a/releasenotes/notes/add-unicode-normalization-and-ascii-mode-to-document-cleaner-ba536b46e499663c.yaml b/releasenotes/notes/add-unicode-normalization-and-ascii-mode-to-document-cleaner-ba536b46e499663c.yaml
new file mode 100644
index 0000000000..d4d28ee47b
--- /dev/null
+++ b/releasenotes/notes/add-unicode-normalization-and-ascii-mode-to-document-cleaner-ba536b46e499663c.yaml
@@ -0,0 +1,6 @@
+---
+enhancements:
+ - |
+ Added `unicode_normalization` parameter to the DocumentCleaner, allowing to normalize the text to NFC, NFD, NFKC, or NFKD.
+ - |
+ Added `ascii_only` parameter to the DocumentCleaner, transforming letters with diacritics to their ASCII equivalent and removing other non-ASCII characters.
|
{"haystack/components/preprocessors/document_cleaner.py": [{"type": "function", "name": "DocumentCleaner._validate_params", "lines": [82, 90], "signature": "def _validate_params(self, unicode_normalization: Optional[str]):", "doc": "Validate the parameters of the DocumentCleaner.\n\n:param unicode_normalization: Unicode normalization form to apply to the text.\n:raises ValueError: if the parameters are not valid."}, {"type": "function", "name": "DocumentCleaner._normalize_unicode", "lines": [138, 147], "signature": "def _normalize_unicode(self, text: str, form: Literal[\"NFC\", \"NFKC\", \"NFD\", \"NFKD\"]) -> str:", "doc": "Normalize the unicode of the text.\n\n:param text: Text to normalize.\n:param form: Unicode normalization form to apply to the text.\n Options: \"NFC\", \"NFKC\", \"NFD\", \"NFKD\".\n:returns: The normalized text."}, {"type": "function", "name": "DocumentCleaner._ascii_only", "lines": [149, 162], "signature": "def _ascii_only(self, text: str) -> str:", "doc": "Convert the text to ASCII only.\n\nWill remove accents from characters and replace them with ASCII characters.\nOther non-ASCII characters will be removed.\n\n:param text: Text to convert to ASCII only.\n:returns: The text in ASCII only."}]}
| null |
["test/components/preprocessors/test_document_cleaner.py::TestDocumentCleaner::test_unicode_normalization", "test/components/preprocessors/test_document_cleaner.py::TestDocumentCleaner::test_ascii_only"]
|
["[", "test/components/preprocessors/test_document_cleaner.py::TestDocumentCleaner::test_init", "test/components/preprocessors/test_document_cleaner.py::TestDocumentCleaner::test_non_text_document", "test/components/preprocessors/test_document_cleaner.py::TestDocumentCleaner::test_single_document", "test/components/preprocessors/test_document_cleaner.py::TestDocumentCleaner::test_empty_list", "test/components/preprocessors/test_document_cleaner.py::TestDocumentCleaner::test_remove_empty_lines", "test/components/preprocessors/test_document_cleaner.py::TestDocumentCleaner::test_remove_whitespaces", "test/components/preprocessors/test_document_cleaner.py::TestDocumentCleaner::test_remove_substrings", "test/components/preprocessors/test_document_cleaner.py::TestDocumentCleaner::test_remove_regex", "test/components/preprocessors/test_document_cleaner.py::TestDocumentCleaner::test_remove_repeated_substrings", "test/components/preprocessors/test_document_cleaner.py::TestDocumentCleaner::test_copy_metadata", "test/components/preprocessors/test_document_cleaner.py::TestDocumentCleaner::test_keep_id_does_not_alter_document_ids"]
|
f4d9c2bb917be0ffe132dffcc2ad4f1b0fcc5967
|
{"first_commit_time": 1722225270.0, "pr_title": "feat: add unicode normalization & ascii_only mode for DocumentCleaner", "pr_body": "### Proposed Changes:\r\n\r\nAdded two new parameters to the `DocumentCleaner` Component, `unicode_normalization` and `ascii_only`.\r\n\r\n`unicode_normalization` allows to normalize unicodes within documents using python's [unicodedata module](https://docs.python.org/3/library/unicodedata.html#unicodedata.normalize).\r\n\r\n`ascii_only` mode converts letters with accents to regular ascii letters, removes other non ascii characters. \r\nI decided not to use [unidecode](https://pypi.org/project/Unidecode) since it would add a new dependency with possible unknown transformations but could be a path for improvement, adding support for languages / alphabets.\r\n\r\n### How did you test it?\r\n\r\nAdded 2 new unit tests to the DocumentCleaner component (1 per new parameter/step).\r\n\r\n### Notes for the reviewer\r\n\r\nI made the arbitrary decision to have the unicode normalization run first, followed by ascii before any other steps in the DocumentCleaner.\r\n**This means regex & substring removal will be made against the normalized documents.** \r\n\r\n### Checklist\r\n\r\n- [x] I have read the [contributors guidelines](https://github.com/deepset-ai/haystack/blob/main/CONTRIBUTING.md) and the [code of conduct](https://github.com/deepset-ai/haystack/blob/main/code_of_conduct.txt)\r\n- [ ] I have updated the related issue with new insights and changes\r\n- [x] I added unit tests and updated the docstrings\r\n- [x] I've used one of the [conventional commit types](https://www.conventionalcommits.org/en/v1.0.0/) for my PR title: `fix:`, `feat:`, `build:`, `chore:`, `ci:`, `docs:`, `style:`, `refactor:`, `perf:`, `test:`.\r\n- [x] I documented my code\r\n- [x] I ran [pre-commit hooks](https://github.com/deepset-ai/haystack/blob/main/CONTRIBUTING.md#installation) and fixed any issue\r\n", "pr_timeline": [{"time": 1722226046.0, "comment": "[](https://cla-assistant.io/deepset-ai/haystack?pullRequest=8103) <br/>All committers have signed the CLA."}, {"time": 1722521480.0, "comment": "The PR looks good. @julian-risch it would be good to have your opinion on the addition of these new parameters."}, {"time": 1722612722.0, "comment": "## Pull Request Test Coverage Report for [Build 10218067758](https://coveralls.io/builds/69013176)\n\n\n### Details\n\n* **0** of **0** changed or added relevant lines in **0** files are covered.\n* **2** unchanged lines in **1** file lost coverage.\n* Overall coverage increased (+**0.006%**) to **90.142%**\n\n---\n\n\n| Files with Coverage Reduction | New Missed Lines | % |\n| :-----|--------------|--: |\n| [components/preprocessors/document_cleaner.py](https://coveralls.io/builds/69013176/source?filename=components%2Fpreprocessors%2Fdocument_cleaner.py#L90) | 2 | 98.0% |\n<!-- | **Total:** | **2** | | -->\n\n| Totals | [](https://coveralls.io/builds/69013176) |\n| :-- | --: |\n| Change from base [Build 10217550097](https://coveralls.io/builds/69012465): | 0.006% |\n| Covered Lines: | 6913 |\n| Relevant Lines: | 7669 |\n\n---\n##### \ud83d\udc9b - [Coveralls](https://coveralls.io)\n"}, {"time": 1722613272.0, "comment": "@twellck I rebased this PR on top of main to remove conflicts."}], "issues": {}}
|
deepset-ai/haystack
| 8,336
|
https://github.com/deepset-ai/haystack/pull/8336
|
deepset-ai__haystack-8336
|
[]
|
5514676b5ecea45c5b93da75bfa271059f89197d
|
diff --git a/haystack/components/preprocessors/document_splitter.py b/haystack/components/preprocessors/document_splitter.py
index c0c39ea82f..556878a965 100644
--- a/haystack/components/preprocessors/document_splitter.py
+++ b/haystack/components/preprocessors/document_splitter.py
@@ -3,11 +3,13 @@
# SPDX-License-Identifier: Apache-2.0
from copy import deepcopy
-from typing import Dict, List, Literal, Tuple
+from typing import Any, Callable, Dict, List, Literal, Optional, Tuple
from more_itertools import windowed
from haystack import Document, component
+from haystack.core.serialization import default_from_dict, default_to_dict
+from haystack.utils import deserialize_callable, serialize_callable
@component
@@ -46,10 +48,11 @@ class DocumentSplitter:
def __init__(
self,
- split_by: Literal["word", "sentence", "page", "passage"] = "word",
+ split_by: Literal["function", "page", "passage", "sentence", "word"] = "word",
split_length: int = 200,
split_overlap: int = 0,
split_threshold: int = 0,
+ splitting_function: Optional[Callable[[str], List[str]]] = None,
):
"""
Initialize DocumentSplitter.
@@ -61,11 +64,16 @@ def __init__(
:param split_overlap: The number of overlapping units for each split.
:param split_threshold: The minimum number of units per split. If a split has fewer units
than the threshold, it's attached to the previous split.
+ :param splitting_function: Necessary when `split_by` is set to "function".
+ This is a function which must accept a single `str` as input and return a `list` of `str` as output,
+ representing the chunks after splitting.
"""
self.split_by = split_by
- if split_by not in ["word", "sentence", "page", "passage"]:
+ if split_by not in ["function", "page", "passage", "sentence", "word"]:
raise ValueError("split_by must be one of 'word', 'sentence', 'page' or 'passage'.")
+ if split_by == "function" and splitting_function is None:
+ raise ValueError("When 'split_by' is set to 'function', a valid 'splitting_function' must be provided.")
if split_length <= 0:
raise ValueError("split_length must be greater than 0.")
self.split_length = split_length
@@ -73,6 +81,7 @@ def __init__(
raise ValueError("split_overlap must be greater than or equal to 0.")
self.split_overlap = split_overlap
self.split_threshold = split_threshold
+ self.splitting_function = splitting_function
@component.output_types(documents=List[Document])
def run(self, documents: List[Document]):
@@ -114,7 +123,9 @@ def run(self, documents: List[Document]):
)
return {"documents": split_docs}
- def _split_into_units(self, text: str, split_by: Literal["word", "sentence", "passage", "page"]) -> List[str]:
+ def _split_into_units(
+ self, text: str, split_by: Literal["function", "page", "passage", "sentence", "word"]
+ ) -> List[str]:
if split_by == "page":
self.split_at = "\f"
elif split_by == "passage":
@@ -123,9 +134,11 @@ def _split_into_units(self, text: str, split_by: Literal["word", "sentence", "pa
self.split_at = "."
elif split_by == "word":
self.split_at = " "
+ elif split_by == "function" and self.splitting_function is not None:
+ return self.splitting_function(text)
else:
raise NotImplementedError(
- "DocumentSplitter only supports 'word', 'sentence', 'page' or 'passage' split_by options."
+ "DocumentSplitter only supports 'function', 'page', 'passage', 'sentence' or 'word' split_by options."
)
units = text.split(self.split_at)
# Add the delimiter back to all units except the last one
@@ -232,3 +245,31 @@ def _add_split_overlap_information(
# add split overlap information to previous Document regarding this Document
overlapping_range = (0, overlapping_range[1] - overlapping_range[0])
previous_doc.meta["_split_overlap"].append({"doc_id": current_doc.id, "range": overlapping_range})
+
+ def to_dict(self) -> Dict[str, Any]:
+ """
+ Serializes the component to a dictionary.
+ """
+ serialized = default_to_dict(
+ self,
+ split_by=self.split_by,
+ split_length=self.split_length,
+ split_overlap=self.split_overlap,
+ split_threshold=self.split_threshold,
+ )
+ if self.splitting_function:
+ serialized["init_parameters"]["splitting_function"] = serialize_callable(self.splitting_function)
+ return serialized
+
+ @classmethod
+ def from_dict(cls, data: Dict[str, Any]) -> "DocumentSplitter":
+ """
+ Deserializes the component from a dictionary.
+ """
+ init_params = data.get("init_parameters", {})
+
+ splitting_function = init_params.get("splitting_function", None)
+ if splitting_function:
+ init_params["splitting_function"] = deserialize_callable(splitting_function)
+
+ return default_from_dict(cls, data)
diff --git a/releasenotes/notes/feat-documentsplitter-add-split-by-function-77501f439b63bb49.yaml b/releasenotes/notes/feat-documentsplitter-add-split-by-function-77501f439b63bb49.yaml
new file mode 100644
index 0000000000..e8b170442a
--- /dev/null
+++ b/releasenotes/notes/feat-documentsplitter-add-split-by-function-77501f439b63bb49.yaml
@@ -0,0 +1,6 @@
+---
+features:
+ - |
+ Added the option to use a custom splitting function in DocumentSplitter. The function must accept a string as
+ input and return a list of strings, representing the split units. To use the feature initialise `DocumentSplitter`
+ with `split_by="function"` providing the custom splitting function as `splitting_function=custom_function`.
|
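The patch above adds `split_by="function"` together with a `splitting_function` callable and serializes that callable in `to_dict`/`from_dict`. A short usage sketch, assuming a Haystack build that already includes this change and mirroring the accompanying test patch below:

```python
from typing import List

from haystack import Document
from haystack.components.preprocessors import DocumentSplitter

def split_on_periods(text: str) -> List[str]:
    # Custom unit splitter: one unit per "."-separated fragment.
    return text.split(".")

splitter = DocumentSplitter(
    split_by="function", splitting_function=split_on_periods, split_length=1
)
result = splitter.run(documents=[Document(content="This.Is.A.Test")])
print([doc.content for doc in result["documents"]])  # ['This', 'Is', 'A', 'Test']
```

Because `to_dict` stores the callable via `serialize_callable`, a module-level function like the one above is presumably the safer choice when the splitter has to survive a serialization round trip; a lambda would be harder to resolve again in `from_dict`.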
diff --git a/test/components/preprocessors/test_document_splitter.py b/test/components/preprocessors/test_document_splitter.py
index d9fa85f005..7c942ab4cc 100644
--- a/test/components/preprocessors/test_document_splitter.py
+++ b/test/components/preprocessors/test_document_splitter.py
@@ -1,10 +1,18 @@
# SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>
#
# SPDX-License-Identifier: Apache-2.0
+import re
+
import pytest
from haystack import Document
from haystack.components.preprocessors import DocumentSplitter
+from haystack.utils import deserialize_callable, serialize_callable
+
+
+# custom split function for testing
+def custom_split(text):
+ return text.split(".")
def merge_documents(documents):
@@ -165,6 +173,27 @@ def test_split_by_page(self):
assert docs[2].meta["split_idx_start"] == text.index(docs[2].content)
assert docs[2].meta["page_number"] == 3
+ def test_split_by_function(self):
+ splitting_function = lambda input_str: input_str.split(".")
+ splitter = DocumentSplitter(split_by="function", splitting_function=splitting_function, split_length=1)
+ text = "This.Is.A.Test"
+ result = splitter.run(documents=[Document(content=text)])
+ docs = result["documents"]
+
+ word_list = ["This", "Is", "A", "Test"]
+ assert len(docs) == 4
+ for w_target, w_split in zip(word_list, docs):
+ assert w_split.content == w_target
+
+ splitting_function = lambda input_str: re.split("[\s]{2,}", input_str)
+ splitter = DocumentSplitter(split_by="function", splitting_function=splitting_function, split_length=1)
+ text = "This Is\n A Test"
+ result = splitter.run(documents=[Document(content=text)])
+ docs = result["documents"]
+ assert len(docs) == 4
+ for w_target, w_split in zip(word_list, docs):
+ assert w_split.content == w_target
+
def test_split_by_word_with_overlap(self):
splitter = DocumentSplitter(split_by="word", split_length=10, split_overlap=2)
text = "This is a text with some words. There is a second sentence. And there is a third sentence."
@@ -329,3 +358,90 @@ def test_add_split_overlap_information(self):
# reconstruct the original document content from the split documents
assert doc.content == merge_documents(docs)
+
+ def test_to_dict(self):
+ """
+ Test the to_dict method of the DocumentSplitter class.
+ """
+ splitter = DocumentSplitter(split_by="word", split_length=10, split_overlap=2, split_threshold=5)
+ serialized = splitter.to_dict()
+
+ assert serialized["type"] == "haystack.components.preprocessors.document_splitter.DocumentSplitter"
+ assert serialized["init_parameters"]["split_by"] == "word"
+ assert serialized["init_parameters"]["split_length"] == 10
+ assert serialized["init_parameters"]["split_overlap"] == 2
+ assert serialized["init_parameters"]["split_threshold"] == 5
+ assert "splitting_function" not in serialized["init_parameters"]
+
+ def test_to_dict_with_splitting_function(self):
+ """
+ Test the to_dict method of the DocumentSplitter class when a custom splitting function is provided.
+ """
+
+ splitter = DocumentSplitter(split_by="function", splitting_function=custom_split)
+ serialized = splitter.to_dict()
+
+ assert serialized["type"] == "haystack.components.preprocessors.document_splitter.DocumentSplitter"
+ assert serialized["init_parameters"]["split_by"] == "function"
+ assert "splitting_function" in serialized["init_parameters"]
+ assert callable(deserialize_callable(serialized["init_parameters"]["splitting_function"]))
+
+ def test_from_dict(self):
+ """
+ Test the from_dict class method of the DocumentSplitter class.
+ """
+ data = {
+ "type": "haystack.components.preprocessors.document_splitter.DocumentSplitter",
+ "init_parameters": {"split_by": "word", "split_length": 10, "split_overlap": 2, "split_threshold": 5},
+ }
+ splitter = DocumentSplitter.from_dict(data)
+
+ assert splitter.split_by == "word"
+ assert splitter.split_length == 10
+ assert splitter.split_overlap == 2
+ assert splitter.split_threshold == 5
+ assert splitter.splitting_function is None
+
+ def test_from_dict_with_splitting_function(self):
+ """
+ Test the from_dict class method of the DocumentSplitter class when a custom splitting function is provided.
+ """
+
+ def custom_split(text):
+ return text.split(".")
+
+ data = {
+ "type": "haystack.components.preprocessors.document_splitter.DocumentSplitter",
+ "init_parameters": {"split_by": "function", "splitting_function": serialize_callable(custom_split)},
+ }
+ splitter = DocumentSplitter.from_dict(data)
+
+ assert splitter.split_by == "function"
+ assert callable(splitter.splitting_function)
+ assert splitter.splitting_function("a.b.c") == ["a", "b", "c"]
+
+ def test_roundtrip_serialization(self):
+ """
+ Test the round-trip serialization of the DocumentSplitter class.
+ """
+ original_splitter = DocumentSplitter(split_by="word", split_length=10, split_overlap=2, split_threshold=5)
+ serialized = original_splitter.to_dict()
+ deserialized_splitter = DocumentSplitter.from_dict(serialized)
+
+ assert original_splitter.split_by == deserialized_splitter.split_by
+ assert original_splitter.split_length == deserialized_splitter.split_length
+ assert original_splitter.split_overlap == deserialized_splitter.split_overlap
+ assert original_splitter.split_threshold == deserialized_splitter.split_threshold
+
+ def test_roundtrip_serialization_with_splitting_function(self):
+ """
+ Test the round-trip serialization of the DocumentSplitter class when a custom splitting function is provided.
+ """
+
+ original_splitter = DocumentSplitter(split_by="function", splitting_function=custom_split)
+ serialized = original_splitter.to_dict()
+ deserialized_splitter = DocumentSplitter.from_dict(serialized)
+
+ assert original_splitter.split_by == deserialized_splitter.split_by
+ assert callable(deserialized_splitter.splitting_function)
+ assert deserialized_splitter.splitting_function("a.b.c") == ["a", "b", "c"]
| 2024-09-05T20:21:06
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"haystack/components/preprocessors/document_splitter.py": "# SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>\n#\n# SPDX-License-Identifier: Apache-2.0\n\nfrom copy import deepcopy\nfrom typing import Dict, List, Literal, Tuple\n\nfrom more_itertools import windowed\n\nfrom haystack import Document, component\n\n\n@component\nclass DocumentSplitter:\n \"\"\"\n Splits long documents into smaller chunks.\n\n This is a common preprocessing step during indexing.\n It helps Embedders create meaningful semantic representations\n and prevents exceeding language model context limits.\n\n The DocumentSplitter is compatible with the following DocumentStores:\n - [Astra](https://docs.haystack.deepset.ai/docs/astradocumentstore)\n - [Chroma](https://docs.haystack.deepset.ai/docs/chromadocumentstore) limited support, overlapping information is\n not stored\n - [Elasticsearch](https://docs.haystack.deepset.ai/docs/elasticsearch-document-store)\n - [OpenSearch](https://docs.haystack.deepset.ai/docs/opensearch-document-store)\n - [Pgvector](https://docs.haystack.deepset.ai/docs/pgvectordocumentstore)\n - [Pinecone](https://docs.haystack.deepset.ai/docs/pinecone-document-store) limited support, overlapping\n information is not stored\n - [Qdrant](https://docs.haystack.deepset.ai/docs/qdrant-document-store)\n - [Weaviate](https://docs.haystack.deepset.ai/docs/weaviatedocumentstore)\n\n ### Usage example\n\n ```python\n from haystack import Document\n from haystack.components.preprocessors import DocumentSplitter\n\n doc = Document(content=\"Moonlight shimmered softly, wolves howled nearby, night enveloped everything.\")\n\n splitter = DocumentSplitter(split_by=\"word\", split_length=3, split_overlap=0)\n result = splitter.run(documents=[doc])\n ```\n \"\"\"\n\n def __init__(\n self,\n split_by: Literal[\"word\", \"sentence\", \"page\", \"passage\"] = \"word\",\n split_length: int = 200,\n split_overlap: int = 0,\n split_threshold: int = 0,\n ):\n \"\"\"\n Initialize DocumentSplitter.\n\n :param split_by: The unit for splitting your documents. Choose from `word` for splitting by spaces (\" \"),\n `sentence` for splitting by periods (\".\"), `page` for splitting by form feed (\"\\\\f\"),\n or `passage` for splitting by double line breaks (\"\\\\n\\\\n\").\n :param split_length: The maximum number of units in each split.\n :param split_overlap: The number of overlapping units for each split.\n :param split_threshold: The minimum number of units per split. If a split has fewer units\n than the threshold, it's attached to the previous split.\n \"\"\"\n\n self.split_by = split_by\n if split_by not in [\"word\", \"sentence\", \"page\", \"passage\"]:\n raise ValueError(\"split_by must be one of 'word', 'sentence', 'page' or 'passage'.\")\n if split_length <= 0:\n raise ValueError(\"split_length must be greater than 0.\")\n self.split_length = split_length\n if split_overlap < 0:\n raise ValueError(\"split_overlap must be greater than or equal to 0.\")\n self.split_overlap = split_overlap\n self.split_threshold = split_threshold\n\n @component.output_types(documents=List[Document])\n def run(self, documents: List[Document]):\n \"\"\"\n Split documents into smaller parts.\n\n Splits documents by the unit expressed in `split_by`, with a length of `split_length`\n and an overlap of `split_overlap`.\n\n :param documents: The documents to split.\n\n :returns: A dictionary with the following key:\n - `documents`: List of documents with the split texts. 
Each document includes:\n - A metadata field `source_id` to track the original document.\n - A metadata field `page_number` to track the original page number.\n - All other metadata copied from the original document.\n\n :raises TypeError: if the input is not a list of Documents.\n :raises ValueError: if the content of a document is None.\n \"\"\"\n\n if not isinstance(documents, list) or (documents and not isinstance(documents[0], Document)):\n raise TypeError(\"DocumentSplitter expects a List of Documents as input.\")\n\n split_docs = []\n for doc in documents:\n if doc.content is None:\n raise ValueError(\n f\"DocumentSplitter only works with text documents but content for document ID {doc.id} is None.\"\n )\n units = self._split_into_units(doc.content, self.split_by)\n text_splits, splits_pages, splits_start_idxs = self._concatenate_units(\n units, self.split_length, self.split_overlap, self.split_threshold\n )\n metadata = deepcopy(doc.meta)\n metadata[\"source_id\"] = doc.id\n split_docs += self._create_docs_from_splits(\n text_splits=text_splits, splits_pages=splits_pages, splits_start_idxs=splits_start_idxs, meta=metadata\n )\n return {\"documents\": split_docs}\n\n def _split_into_units(self, text: str, split_by: Literal[\"word\", \"sentence\", \"passage\", \"page\"]) -> List[str]:\n if split_by == \"page\":\n self.split_at = \"\\f\"\n elif split_by == \"passage\":\n self.split_at = \"\\n\\n\"\n elif split_by == \"sentence\":\n self.split_at = \".\"\n elif split_by == \"word\":\n self.split_at = \" \"\n else:\n raise NotImplementedError(\n \"DocumentSplitter only supports 'word', 'sentence', 'page' or 'passage' split_by options.\"\n )\n units = text.split(self.split_at)\n # Add the delimiter back to all units except the last one\n for i in range(len(units) - 1):\n units[i] += self.split_at\n return units\n\n def _concatenate_units(\n self, elements: List[str], split_length: int, split_overlap: int, split_threshold: int\n ) -> Tuple[List[str], List[int], List[int]]:\n \"\"\"\n Concatenates the elements into parts of split_length units.\n\n Keeps track of the original page number that each element belongs. If the length of the current units is less\n than the pre-defined `split_threshold`, it does not create a new split. 
Instead, it concatenates the current\n units with the last split, preventing the creation of excessively small splits.\n \"\"\"\n\n text_splits: List[str] = []\n splits_pages = []\n splits_start_idxs = []\n cur_start_idx = 0\n cur_page = 1\n segments = windowed(elements, n=split_length, step=split_length - split_overlap)\n\n for seg in segments:\n current_units = [unit for unit in seg if unit is not None]\n txt = \"\".join(current_units)\n\n # check if length of current units is below split_threshold\n if len(current_units) < split_threshold and len(text_splits) > 0:\n # concatenate the last split with the current one\n text_splits[-1] += txt\n\n elif len(txt) > 0:\n text_splits.append(txt)\n splits_pages.append(cur_page)\n splits_start_idxs.append(cur_start_idx)\n\n processed_units = current_units[: split_length - split_overlap]\n cur_start_idx += len(\"\".join(processed_units))\n\n if self.split_by == \"page\":\n num_page_breaks = len(processed_units)\n else:\n num_page_breaks = sum(processed_unit.count(\"\\f\") for processed_unit in processed_units)\n\n cur_page += num_page_breaks\n\n return text_splits, splits_pages, splits_start_idxs\n\n def _create_docs_from_splits(\n self, text_splits: List[str], splits_pages: List[int], splits_start_idxs: List[int], meta: Dict\n ) -> List[Document]:\n \"\"\"\n Creates Document objects from splits enriching them with page number and the metadata of the original document.\n \"\"\"\n documents: List[Document] = []\n\n for i, (txt, split_idx) in enumerate(zip(text_splits, splits_start_idxs)):\n meta = deepcopy(meta)\n doc = Document(content=txt, meta=meta)\n doc.meta[\"page_number\"] = splits_pages[i]\n doc.meta[\"split_id\"] = i\n doc.meta[\"split_idx_start\"] = split_idx\n documents.append(doc)\n\n if self.split_overlap <= 0:\n continue\n\n doc.meta[\"_split_overlap\"] = []\n\n if i == 0:\n continue\n\n doc_start_idx = splits_start_idxs[i]\n previous_doc = documents[i - 1]\n previous_doc_start_idx = splits_start_idxs[i - 1]\n self._add_split_overlap_information(doc, doc_start_idx, previous_doc, previous_doc_start_idx)\n\n return documents\n\n @staticmethod\n def _add_split_overlap_information(\n current_doc: Document, current_doc_start_idx: int, previous_doc: Document, previous_doc_start_idx: int\n ):\n \"\"\"\n Adds split overlap information to the current and previous Document's meta.\n\n :param current_doc: The Document that is being split.\n :param current_doc_start_idx: The starting index of the current Document.\n :param previous_doc: The Document that was split before the current Document.\n :param previous_doc_start_idx: The starting index of the previous Document.\n \"\"\"\n overlapping_range = (current_doc_start_idx - previous_doc_start_idx, len(previous_doc.content)) # type: ignore\n\n if overlapping_range[0] < overlapping_range[1]:\n overlapping_str = previous_doc.content[overlapping_range[0] : overlapping_range[1]] # type: ignore\n\n if current_doc.content.startswith(overlapping_str): # type: ignore\n # add split overlap information to this Document regarding the previous Document\n current_doc.meta[\"_split_overlap\"].append({\"doc_id\": previous_doc.id, \"range\": overlapping_range})\n\n # add split overlap information to previous Document regarding this Document\n overlapping_range = (0, overlapping_range[1] - overlapping_range[0])\n previous_doc.meta[\"_split_overlap\"].append({\"doc_id\": current_doc.id, \"range\": overlapping_range})\n", "releasenotes/notes/feat-documentsplitter-add-split-by-function-77501f439b63bb49.yaml": null}
|
diff --git a/releasenotes/notes/feat-documentsplitter-add-split-by-function-77501f439b63bb49.yaml b/releasenotes/notes/feat-documentsplitter-add-split-by-function-77501f439b63bb49.yaml
new file mode 100644
index 0000000000..e8b170442a
--- /dev/null
+++ b/releasenotes/notes/feat-documentsplitter-add-split-by-function-77501f439b63bb49.yaml
@@ -0,0 +1,6 @@
+---
+features:
+ - |
+ Added the option to use a custom splitting function in DocumentSplitter. The function must accept a string as
+    input and return a list of strings, representing the split units. To use the feature, initialise `DocumentSplitter`
+    with `split_by="function"`, providing the custom splitting function as `splitting_function=custom_function`.
|
{"haystack/components/preprocessors/document_splitter.py": [{"type": "function", "name": "DocumentSplitter.to_dict", "lines": [249, 262], "signature": "def to_dict(self) -> Dict[str, Any]:", "doc": "Serializes the component to a dictionary."}, {"type": "function", "name": "DocumentSplitter.from_dict", "lines": [265, 275], "signature": "def from_dict(cls, data: Dict[str, Any]) -> \"DocumentSplitter\":", "doc": "Deserializes the component from a dictionary."}]}
| null |
["test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_split_by_function", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_to_dict", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_to_dict_with_splitting_function", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_from_dict", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_from_dict_with_splitting_function", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_roundtrip_serialization", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_roundtrip_serialization_with_splitting_function"]
|
["test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_non_text_document", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_single_doc", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_empty_list", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_unsupported_split_by", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_unsupported_split_length", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_unsupported_split_overlap", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_split_by_word", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_split_by_word_with_threshold", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_split_by_word_multiple_input_docs", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_split_by_sentence", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_split_by_passage", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_split_by_page", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_split_by_word_with_overlap", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_source_id_stored_in_metadata", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_copy_metadata", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_add_page_number_to_metadata_with_no_overlap_word_split", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_add_page_number_to_metadata_with_no_overlap_sentence_split", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_add_page_number_to_metadata_with_no_overlap_passage_split", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_add_page_number_to_metadata_with_no_overlap_page_split", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_add_page_number_to_metadata_with_overlap_word_split", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_add_page_number_to_metadata_with_overlap_sentence_split", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_add_page_number_to_metadata_with_overlap_passage_split", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_add_page_number_to_metadata_with_overlap_page_split", "test/components/preprocessors/test_document_splitter.py::TestDocumentSplitter::test_add_split_overlap_information"]
|
f4d9c2bb917be0ffe132dffcc2ad4f1b0fcc5967
|
{"first_commit_time": 1725555637.0, "pr_title": "feat : DocumentSplitter, adding the option to split_by function", "pr_body": "### Proposed Changes:\r\n\r\nAdding the possibility to pass a function and personalise the way in which DocumentSplitter defines a unit.\r\n\r\nThis means a user can, for example, use the following to split and define units:\r\n\r\n`splitter_function = lambda text: re.split('[\\n]{2,}, text)`\r\n\r\n(or use spacy or anything else).\r\n\r\n### How did you test it?\r\n\r\nAdded two tests with two \"mock\" splitter functions.\r\n\r\n### Notes for the reviewer\r\n\r\nThere are some issues related to document splitting #5922 . Given the fact that the current methods are very basic and the issues have been open for moths, I think it would make sense to let the user define how text is split.\r\n", "pr_timeline": [{"time": 1726067765.0, "comment": "[](https://cla-assistant.io/deepset-ai/haystack?pullRequest=8336) <br/>All committers have signed the CLA."}, {"time": 1725960066.0, "comment": "Hey @GivAlz this is an excellent idea, thank you for opening this PR. Would you please add a reno release note to this PRs branch so we can generate a nice release note about this feature in the upcoming release. See https://github.com/deepset-ai/haystack/blob/main/CONTRIBUTING.md#release-notes for more details on how to create reno release note \ud83d\ude4f "}, {"time": 1726145734.0, "comment": "## Pull Request Test Coverage Report for [Build 10831052207](https://coveralls.io/builds/69760982)\n\n### Warning: This coverage report may be inaccurate.\n\nThis pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.\n\n- For more information on this, see <a target=\"_blank\" href=\"https://docs.coveralls.io/build-types#tracking-coverage-changes-with-pull_request-builds\">Tracking coverage changes with pull request builds</a>.\n- To avoid this issue with future PRs, see these <a target=\"_blank\" href=\"https://docs.coveralls.io/build-types#recommended-ci-configurations\">Recommended CI Configurations</a>.\n- For a quick fix, <a target=\"_blank\" href=\"https://github.blog/changelog/2022-02-03-more-ways-to-keep-your-pull-request-branch-up-to-date/#update-your-pull-request-branch-by-rebasing\">rebase this PR at GitHub</a>. Your next report should be accurate.\n\n### Details\n\n* **0** of **0** changed or added relevant lines in **0** files are covered.\n* **2** unchanged lines in **1** file lost coverage.\n* Overall coverage increased (+**0.01%**) to **90.319%**\n\n---\n\n\n| Files with Coverage Reduction | New Missed Lines | % |\n| :-----|--------------|--: |\n| [components/preprocessors/document_splitter.py](https://coveralls.io/builds/69760982/source?filename=components%2Fpreprocessors%2Fdocument_splitter.py#L76) | 2 | 98.26% |\n<!-- | **Total:** | **2** | | -->\n\n| Totals | [](https://coveralls.io/builds/69760982) |\n| :-- | --: |\n| Change from base [Build 10827600883](https://coveralls.io/builds/69756769): | 0.01% |\n| Covered Lines: | 7202 |\n| Relevant Lines: | 7974 |\n\n---\n##### \ud83d\udc9b - [Coveralls](https://coveralls.io)\n"}, {"time": 1725978189.0, "comment": "I've added a release note. Please let me know if I need to modify its name or text content.\r\n\r\nThank you!!"}, {"time": 1725979209.0, "comment": "@GivAlz thanks for a quick turnaround. 
To stay consistent let's use all small caps in reno release note name (with - between words). And let's remove highlights as that entry is reserved only for major features we want to highlight to users. Although cool perhaps this feature doesn't cross the highlights threshold this time :-) "}, {"time": 1726054994.0, "comment": "Looks much better now @GivAlz - to integrate you need to sign the contribution agreement- pretty much standard procedure in most bigger open source projects \ud83d\ude4f"}, {"time": 1726141263.0, "comment": "The change will be reflected in [docs](https://docs.haystack.deepset.ai/v2.6-unstable/docs/documentsplitter) in the upcoming 2.6 version. "}, {"time": 1726142238.0, "comment": "@GivAlz on my last pass through b4 integration I realized we don't (de)serialize the function in this component. I'll add those changes directly on your branch"}, {"time": 1726144826.0, "comment": "> @GivAlz on my last pass through b4 integration I realized we don't (de)serialize the function in this component. I'll add those changes directly on your branch\r\n\r\nSorry I forgot about that; I guess it could be useful to note this in the doc string for the function."}, {"time": 1726145126.0, "comment": "> > @GivAlz on my last pass through b4 integration I realized we don't (de)serialize the function in this component. I'll add those changes directly on your branch\r\n> \r\n> Sorry I forgot about that; I guess it could be useful to note this in the doc string for the function.\r\n\r\nNo worries, I've been doing this for over a year and I forget all the time as well. Now I have pre-commit check notes :-) Please review https://github.com/deepset-ai/haystack/pull/8336/commits/6a592503e454b276cb7106b88c2063a6908ce113 and say if there is something off"}, {"time": 1726150408.0, "comment": "> > > @GivAlz on my last pass through b4 integration I realized we don't (de)serialize the function in this component. I'll add those changes directly on your branch\r\n> > \r\n> > \r\n> > Sorry I forgot about that; I guess it could be useful to note this in the doc string for the function.\r\n> \r\n> No worries, I've been doing this for over a year and I forget all the time as well. Now I have pre-commit check notes :-) Please review [6a59250](https://github.com/deepset-ai/haystack/commit/6a592503e454b276cb7106b88c2063a6908ce113) and say if there is something off\r\n\r\nLGTM! Just wondering if it makes sense to add a note on the fact that, if the method to_dict is used, the function must be serialisable, but it should be obvious and I think that the error thrown would be pretty clear...if you don't think it is necessary (or maybe a note could be added in the documentation), then I guess it's good to merge!"}, {"time": 1726151578.0, "comment": "I think we treat it as serializable due to its simple string interface. We do this quite often throughout the codebase - it is ok most likely. I think we can merge this now \ud83d\ude80 "}], "issues": {}}
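As an aside, the lambda quoted in the PR body above appears to have lost its closing quote in transit; a runnable version of what was presumably intended is:

```python
import re

# Presumed intent of the PR body's example: split on runs of two or more newlines.
splitter_function = lambda text: re.split(r"[\n]{2,}", text)
```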
|
django/django
 | 16369
|
https://github.com/django/django/pull/16369
|
django__django-16369
|
["33662"]
|
ab7a85ac297464df82d8363455609979ca3603db
|
diff --git a/django/contrib/sitemaps/__init__.py b/django/contrib/sitemaps/__init__.py
index 3d276b60d490..df57f1cd5c70 100644
--- a/django/contrib/sitemaps/__init__.py
+++ b/django/contrib/sitemaps/__init__.py
@@ -92,6 +92,10 @@ def _get(self, name, item, default=None):
return attr(item)
return attr
+ def get_languages_for_item(self, item):
+ """Languages for which this item is displayed."""
+ return self._languages()
+
def _languages(self):
if self.languages is not None:
return self.languages
@@ -103,8 +107,8 @@ def _items(self):
# This is necessary to paginate with all languages already considered.
items = [
(item, lang_code)
- for lang_code in self._languages()
for item in self.items()
+ for lang_code in self.get_languages_for_item(item)
]
return items
return self.items()
@@ -201,7 +205,8 @@ def _urls(self, page, protocol, domain):
}
if self.i18n and self.alternates:
- for lang_code in self._languages():
+ item_languages = self.get_languages_for_item(item[0])
+ for lang_code in item_languages:
loc = f"{protocol}://{domain}{self._location(item, lang_code)}"
url_info["alternates"].append(
{
@@ -209,7 +214,7 @@ def _urls(self, page, protocol, domain):
"lang_code": lang_code,
}
)
- if self.x_default:
+ if self.x_default and settings.LANGUAGE_CODE in item_languages:
lang_code = settings.LANGUAGE_CODE
loc = f"{protocol}://{domain}{self._location(item, lang_code)}"
loc = loc.replace(f"/{lang_code}/", "/", 1)
diff --git a/docs/ref/contrib/sitemaps.txt b/docs/ref/contrib/sitemaps.txt
index d3225405a38d..7dc3dced5183 100644
--- a/docs/ref/contrib/sitemaps.txt
+++ b/docs/ref/contrib/sitemaps.txt
@@ -311,6 +311,15 @@ Note:
The latest ``lastmod`` returned by calling the method with all
items returned by :meth:`Sitemap.items`.
+    .. method:: Sitemap.get_languages_for_item(item)
+
+ .. versionadded:: 4.2
+
+ **Optional.** A method that returns the sequence of language codes for
+ which the item is displayed. By default
+ :meth:`~Sitemap.get_languages_for_item` returns
+ :attr:`~Sitemap.languages`.
+
Shortcuts
=========
diff --git a/docs/releases/4.2.txt b/docs/releases/4.2.txt
index 682fce2a5362..7e93df0702ce 100644
--- a/docs/releases/4.2.txt
+++ b/docs/releases/4.2.txt
@@ -145,7 +145,8 @@ Minor features
:mod:`django.contrib.sitemaps`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-* ...
+* The new :meth:`.Sitemap.get_languages_for_item` method allows customizing the
+ list of languages for which the item is displayed.
:mod:`django.contrib.sites`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
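For context, the hook added by this patch is meant to be overridden on a `Sitemap` subclass; a minimal sketch in the spirit of Django's static-views sitemap example (the URL names and the `PAGE_LANGUAGES` mapping are illustrative, not part of the patch):

```python
from django.contrib.sitemaps import Sitemap
from django.urls import reverse

# Illustrative data: which URL names should be listed for which language codes.
PAGE_LANGUAGES = {"about": ["en", "pt"], "legal": ["en"]}


class StaticPageSitemap(Sitemap):
    i18n = True
    alternates = True

    def items(self):
        return list(PAGE_LANGUAGES)

    def location(self, item):
        return reverse(item)

    def get_languages_for_item(self, item):
        # Restrict each entry to its own languages instead of every configured language.
        return PAGE_LANGUAGES[item]
```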
diff --git a/tests/sitemaps_tests/test_http.py b/tests/sitemaps_tests/test_http.py
index 8c16f66896bd..12e387757be7 100644
--- a/tests/sitemaps_tests/test_http.py
+++ b/tests/sitemaps_tests/test_http.py
@@ -10,7 +10,7 @@
from django.utils.formats import localize
from .base import SitemapTestsBase
-from .models import TestModel
+from .models import I18nTestModel, TestModel
class HTTPSitemapTests(SitemapTestsBase):
@@ -440,6 +440,72 @@ def test_alternate_i18n_sitemap_xdefault(self):
)
self.assertXMLEqual(response.content.decode(), expected_content)
+ @override_settings(LANGUAGES=(("en", "English"), ("pt", "Portuguese")))
+ def test_language_for_item_i18n_sitemap(self):
+ """
+        An i18n sitemap in which individual items can be restricted to a
+        subset of the configured languages.
+ """
+ only_pt = I18nTestModel.objects.create(name="Only for PT")
+ response = self.client.get("/item-by-lang/i18n.xml")
+ url, pk, only_pt_pk = self.base_url, self.i18n_model.pk, only_pt.pk
+ expected_urls = (
+ f"<url><loc>{url}/en/i18n/testmodel/{pk}/</loc>"
+ f"<changefreq>never</changefreq><priority>0.5</priority></url>"
+ f"<url><loc>{url}/pt/i18n/testmodel/{pk}/</loc>"
+ f"<changefreq>never</changefreq><priority>0.5</priority></url>"
+ f"<url><loc>{url}/pt/i18n/testmodel/{only_pt_pk}/</loc>"
+ f"<changefreq>never</changefreq><priority>0.5</priority></url>"
+ )
+ expected_content = (
+ f'<?xml version="1.0" encoding="UTF-8"?>\n'
+ f'<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" '
+ f'xmlns:xhtml="http://www.w3.org/1999/xhtml">\n'
+ f"{expected_urls}\n"
+ f"</urlset>"
+ )
+ self.assertXMLEqual(response.content.decode(), expected_content)
+
+ @override_settings(LANGUAGES=(("en", "English"), ("pt", "Portuguese")))
+ def test_alternate_language_for_item_i18n_sitemap(self):
+ """
+        An i18n sitemap with alternate links in which individual items can be
+        restricted to a subset of the configured languages.
+ """
+ only_pt = I18nTestModel.objects.create(name="Only for PT")
+ response = self.client.get("/item-by-lang-alternates/i18n.xml")
+ url, pk, only_pt_pk = self.base_url, self.i18n_model.pk, only_pt.pk
+ expected_urls = (
+ f"<url><loc>{url}/en/i18n/testmodel/{pk}/</loc>"
+ f"<changefreq>never</changefreq><priority>0.5</priority>"
+ f'<xhtml:link rel="alternate" '
+ f'hreflang="en" href="{url}/en/i18n/testmodel/{pk}/"/>'
+ f'<xhtml:link rel="alternate" '
+ f'hreflang="pt" href="{url}/pt/i18n/testmodel/{pk}/"/>'
+ f'<xhtml:link rel="alternate" '
+ f'hreflang="x-default" href="{url}/i18n/testmodel/{pk}/"/></url>'
+ f"<url><loc>{url}/pt/i18n/testmodel/{pk}/</loc>"
+ f"<changefreq>never</changefreq><priority>0.5</priority>"
+ f'<xhtml:link rel="alternate" '
+ f'hreflang="en" href="{url}/en/i18n/testmodel/{pk}/"/>'
+ f'<xhtml:link rel="alternate" '
+ f'hreflang="pt" href="{url}/pt/i18n/testmodel/{pk}/"/>'
+ f'<xhtml:link rel="alternate" '
+ f'hreflang="x-default" href="{url}/i18n/testmodel/{pk}/"/></url>'
+ f"<url><loc>{url}/pt/i18n/testmodel/{only_pt_pk}/</loc>"
+ f"<changefreq>never</changefreq><priority>0.5</priority>"
+ f'<xhtml:link rel="alternate" '
+ f'hreflang="pt" href="{url}/pt/i18n/testmodel/{only_pt_pk}/"/></url>'
+ )
+ expected_content = (
+ f'<?xml version="1.0" encoding="UTF-8"?>\n'
+ f'<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" '
+ f'xmlns:xhtml="http://www.w3.org/1999/xhtml">\n'
+ f"{expected_urls}\n"
+ f"</urlset>"
+ )
+ self.assertXMLEqual(response.content.decode(), expected_content)
+
def test_sitemap_without_entries(self):
response = self.client.get("/sitemap-without-entries/sitemap.xml")
expected_content = (
diff --git a/tests/sitemaps_tests/urls/http.py b/tests/sitemaps_tests/urls/http.py
index 75dd4834c0a5..2b512cfd6918 100644
--- a/tests/sitemaps_tests/urls/http.py
+++ b/tests/sitemaps_tests/urls/http.py
@@ -48,6 +48,22 @@ class XDefaultI18nSitemap(AlternatesI18nSitemap):
x_default = True
+class ItemByLangSitemap(SimpleI18nSitemap):
+ def get_languages_for_item(self, item):
+ if item.name == "Only for PT":
+ return ["pt"]
+ return super().get_languages_for_item(item)
+
+
+class ItemByLangAlternatesSitemap(AlternatesI18nSitemap):
+ x_default = True
+
+ def get_languages_for_item(self, item):
+ if item.name == "Only for PT":
+ return ["pt"]
+ return super().get_languages_for_item(item)
+
+
class EmptySitemap(Sitemap):
changefreq = "never"
priority = 0.5
@@ -168,6 +184,14 @@ def testmodelview(request, id):
"i18n-xdefault": XDefaultI18nSitemap,
}
+item_by_lang_i18n_sitemaps = {
+ "i18n-item-by-lang": ItemByLangSitemap,
+}
+
+item_by_lang_alternates_i18n_sitemaps = {
+ "i18n-item-by-lang-alternates": ItemByLangAlternatesSitemap,
+}
+
simple_sitemaps_not_callable = {
"simple": SimpleSitemap(),
}
@@ -358,6 +382,18 @@ def testmodelview(request, id):
{"sitemaps": sitemaps_lastmod_ascending},
name="django.contrib.sitemaps.views.sitemap",
),
+ path(
+ "item-by-lang/i18n.xml",
+ views.sitemap,
+ {"sitemaps": item_by_lang_i18n_sitemaps},
+ name="django.contrib.sitemaps.views.sitemap",
+ ),
+ path(
+ "item-by-lang-alternates/i18n.xml",
+ views.sitemap,
+ {"sitemaps": item_by_lang_alternates_i18n_sitemaps},
+ name="django.contrib.sitemaps.views.sitemap",
+ ),
path(
"lastmod-sitemaps/descending.xml",
views.sitemap,
| 2022-12-07T15:09:14
|
{}
|
{"django/contrib/sitemaps/__init__.py": "import warnings\nfrom urllib.parse import urlencode\nfrom urllib.request import urlopen\n\nfrom django.apps import apps as django_apps\nfrom django.conf import settings\nfrom django.core import paginator\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.urls import NoReverseMatch, reverse\nfrom django.utils import translation\nfrom django.utils.deprecation import RemovedInDjango50Warning\n\nPING_URL = \"https://www.google.com/webmasters/tools/ping\"\n\n\nclass SitemapNotFound(Exception):\n pass\n\n\ndef ping_google(sitemap_url=None, ping_url=PING_URL, sitemap_uses_https=True):\n \"\"\"\n Alert Google that the sitemap for the current site has been updated.\n If sitemap_url is provided, it should be an absolute path to the sitemap\n for this site -- e.g., '/sitemap.xml'. If sitemap_url is not provided, this\n function will attempt to deduce it by using urls.reverse().\n \"\"\"\n sitemap_full_url = _get_sitemap_full_url(sitemap_url, sitemap_uses_https)\n params = urlencode({\"sitemap\": sitemap_full_url})\n urlopen(\"%s?%s\" % (ping_url, params))\n\n\ndef _get_sitemap_full_url(sitemap_url, sitemap_uses_https=True):\n if not django_apps.is_installed(\"django.contrib.sites\"):\n raise ImproperlyConfigured(\n \"ping_google requires django.contrib.sites, which isn't installed.\"\n )\n\n if sitemap_url is None:\n try:\n # First, try to get the \"index\" sitemap URL.\n sitemap_url = reverse(\"django.contrib.sitemaps.views.index\")\n except NoReverseMatch:\n try:\n # Next, try for the \"global\" sitemap URL.\n sitemap_url = reverse(\"django.contrib.sitemaps.views.sitemap\")\n except NoReverseMatch:\n pass\n\n if sitemap_url is None:\n raise SitemapNotFound(\n \"You didn't provide a sitemap_url, and the sitemap URL couldn't be \"\n \"auto-detected.\"\n )\n\n Site = django_apps.get_model(\"sites.Site\")\n current_site = Site.objects.get_current()\n scheme = \"https\" if sitemap_uses_https else \"http\"\n return \"%s://%s%s\" % (scheme, current_site.domain, sitemap_url)\n\n\nclass Sitemap:\n # This limit is defined by Google. 
See the index documentation at\n # https://www.sitemaps.org/protocol.html#index.\n limit = 50000\n\n # If protocol is None, the URLs in the sitemap will use the protocol\n # with which the sitemap was requested.\n protocol = None\n\n # Enables generating URLs for all languages.\n i18n = False\n\n # Override list of languages to use.\n languages = None\n\n # Enables generating alternate/hreflang links.\n alternates = False\n\n # Add an alternate/hreflang link with value 'x-default'.\n x_default = False\n\n def _get(self, name, item, default=None):\n try:\n attr = getattr(self, name)\n except AttributeError:\n return default\n if callable(attr):\n if self.i18n:\n # Split the (item, lang_code) tuples again for the location,\n # priority, lastmod and changefreq method calls.\n item, lang_code = item\n return attr(item)\n return attr\n\n def _languages(self):\n if self.languages is not None:\n return self.languages\n return [lang_code for lang_code, _ in settings.LANGUAGES]\n\n def _items(self):\n if self.i18n:\n # Create (item, lang_code) tuples for all items and languages.\n # This is necessary to paginate with all languages already considered.\n items = [\n (item, lang_code)\n for lang_code in self._languages()\n for item in self.items()\n ]\n return items\n return self.items()\n\n def _location(self, item, force_lang_code=None):\n if self.i18n:\n obj, lang_code = item\n # Activate language from item-tuple or forced one before calling location.\n with translation.override(force_lang_code or lang_code):\n return self._get(\"location\", item)\n return self._get(\"location\", item)\n\n @property\n def paginator(self):\n return paginator.Paginator(self._items(), self.limit)\n\n def items(self):\n return []\n\n def location(self, item):\n return item.get_absolute_url()\n\n def get_protocol(self, protocol=None):\n # Determine protocol\n if self.protocol is None and protocol is None:\n warnings.warn(\n \"The default sitemap protocol will be changed from 'http' to \"\n \"'https' in Django 5.0. 
Set Sitemap.protocol to silence this \"\n \"warning.\",\n category=RemovedInDjango50Warning,\n stacklevel=2,\n )\n # RemovedInDjango50Warning: when the deprecation ends, replace 'http'\n # with 'https'.\n return self.protocol or protocol or \"http\"\n\n def get_domain(self, site=None):\n # Determine domain\n if site is None:\n if django_apps.is_installed(\"django.contrib.sites\"):\n Site = django_apps.get_model(\"sites.Site\")\n try:\n site = Site.objects.get_current()\n except Site.DoesNotExist:\n pass\n if site is None:\n raise ImproperlyConfigured(\n \"To use sitemaps, either enable the sites framework or pass \"\n \"a Site/RequestSite object in your view.\"\n )\n return site.domain\n\n def get_urls(self, page=1, site=None, protocol=None):\n protocol = self.get_protocol(protocol)\n domain = self.get_domain(site)\n return self._urls(page, protocol, domain)\n\n def get_latest_lastmod(self):\n if not hasattr(self, \"lastmod\"):\n return None\n if callable(self.lastmod):\n try:\n return max([self.lastmod(item) for item in self.items()], default=None)\n except TypeError:\n return None\n else:\n return self.lastmod\n\n def _urls(self, page, protocol, domain):\n urls = []\n latest_lastmod = None\n all_items_lastmod = True # track if all items have a lastmod\n\n paginator_page = self.paginator.page(page)\n for item in paginator_page.object_list:\n loc = f\"{protocol}://{domain}{self._location(item)}\"\n priority = self._get(\"priority\", item)\n lastmod = self._get(\"lastmod\", item)\n\n if all_items_lastmod:\n all_items_lastmod = lastmod is not None\n if all_items_lastmod and (\n latest_lastmod is None or lastmod > latest_lastmod\n ):\n latest_lastmod = lastmod\n\n url_info = {\n \"item\": item,\n \"location\": loc,\n \"lastmod\": lastmod,\n \"changefreq\": self._get(\"changefreq\", item),\n \"priority\": str(priority if priority is not None else \"\"),\n \"alternates\": [],\n }\n\n if self.i18n and self.alternates:\n for lang_code in self._languages():\n loc = f\"{protocol}://{domain}{self._location(item, lang_code)}\"\n url_info[\"alternates\"].append(\n {\n \"location\": loc,\n \"lang_code\": lang_code,\n }\n )\n if self.x_default:\n lang_code = settings.LANGUAGE_CODE\n loc = f\"{protocol}://{domain}{self._location(item, lang_code)}\"\n loc = loc.replace(f\"/{lang_code}/\", \"/\", 1)\n url_info[\"alternates\"].append(\n {\n \"location\": loc,\n \"lang_code\": \"x-default\",\n }\n )\n\n urls.append(url_info)\n\n if all_items_lastmod and latest_lastmod:\n self.latest_lastmod = latest_lastmod\n\n return urls\n\n\nclass GenericSitemap(Sitemap):\n priority = None\n changefreq = None\n\n def __init__(self, info_dict, priority=None, changefreq=None, protocol=None):\n self.queryset = info_dict[\"queryset\"]\n self.date_field = info_dict.get(\"date_field\")\n self.priority = self.priority or priority\n self.changefreq = self.changefreq or changefreq\n self.protocol = self.protocol or protocol\n\n def items(self):\n # Make sure to return a clone; we don't want premature evaluation.\n return self.queryset.filter()\n\n def lastmod(self, item):\n if self.date_field is not None:\n return getattr(item, self.date_field)\n return None\n\n def get_latest_lastmod(self):\n if self.date_field is not None:\n return (\n self.queryset.order_by(\"-\" + self.date_field)\n .values_list(self.date_field, flat=True)\n .first()\n )\n return None\n", "docs/ref/contrib/sitemaps.txt": "=====================\nThe sitemap framework\n=====================\n\n.. 
module:: django.contrib.sitemaps\n :synopsis: A framework for generating Google sitemap XML files.\n\nDjango comes with a high-level sitemap-generating framework to create sitemap_\nXML files.\n\n.. _sitemap: https://www.sitemaps.org/\n\nOverview\n========\n\nA sitemap is an XML file on your website that tells search-engine indexers how\nfrequently your pages change and how \"important\" certain pages are in relation\nto other pages on your site. This information helps search engines index your\nsite.\n\nThe Django sitemap framework automates the creation of this XML file by letting\nyou express this information in Python code.\n\nIt works much like Django's :doc:`syndication framework\n</ref/contrib/syndication>`. To create a sitemap, write a\n:class:`~django.contrib.sitemaps.Sitemap` class and point to it in your\n:doc:`URLconf </topics/http/urls>`.\n\nInstallation\n============\n\nTo install the sitemap app, follow these steps:\n\n#. Add ``'django.contrib.sitemaps'`` to your :setting:`INSTALLED_APPS` setting.\n\n#. Make sure your :setting:`TEMPLATES` setting contains a ``DjangoTemplates``\n backend whose ``APP_DIRS`` options is set to ``True``. It's in there by\n default, so you'll only need to change this if you've changed that setting.\n\n#. Make sure you've installed the :mod:`sites framework<django.contrib.sites>`.\n\n(Note: The sitemap application doesn't install any database tables. The only\nreason it needs to go into :setting:`INSTALLED_APPS` is so that the\n:func:`~django.template.loaders.app_directories.Loader` template\nloader can find the default templates.)\n\nInitialization\n==============\n\n.. function:: views.sitemap(request, sitemaps, section=None, template_name='sitemap.xml', content_type='application/xml')\n\nTo activate sitemap generation on your Django site, add this line to your\n:doc:`URLconf </topics/http/urls>`::\n\n from django.contrib.sitemaps.views import sitemap\n\n path('sitemap.xml', sitemap, {'sitemaps': sitemaps},\n name='django.contrib.sitemaps.views.sitemap')\n\nThis tells Django to build a sitemap when a client accesses :file:`/sitemap.xml`.\n\nThe name of the sitemap file is not important, but the location is. Search\nengines will only index links in your sitemap for the current URL level and\nbelow. For instance, if :file:`sitemap.xml` lives in your root directory, it may\nreference any URL in your site. However, if your sitemap lives at\n:file:`/content/sitemap.xml`, it may only reference URLs that begin with\n:file:`/content/`.\n\nThe sitemap view takes an extra, required argument: ``{'sitemaps': sitemaps}``.\n``sitemaps`` should be a dictionary that maps a short section label (e.g.,\n``blog`` or ``news``) to its :class:`~django.contrib.sitemaps.Sitemap` class\n(e.g., ``BlogSitemap`` or ``NewsSitemap``). It may also map to an *instance* of\na :class:`~django.contrib.sitemaps.Sitemap` class (e.g.,\n``BlogSitemap(some_var)``).\n\n``Sitemap`` classes\n===================\n\nA :class:`~django.contrib.sitemaps.Sitemap` class is a Python class that\nrepresents a \"section\" of entries in your sitemap. For example, one\n:class:`~django.contrib.sitemaps.Sitemap` class could represent all the entries\nof your blog, while another could represent all of the events in your events\ncalendar.\n\nIn the simplest case, all these sections get lumped together into one\n:file:`sitemap.xml`, but it's also possible to use the framework to generate a\nsitemap index that references individual sitemap files, one per section. 
(See\n`Creating a sitemap index`_ below.)\n\n:class:`~django.contrib.sitemaps.Sitemap` classes must subclass\n``django.contrib.sitemaps.Sitemap``. They can live anywhere in your codebase.\n\nAn example\n==========\n\nLet's assume you have a blog system, with an ``Entry`` model, and you want your\nsitemap to include all the links to your individual blog entries. Here's how\nyour sitemap class might look::\n\n from django.contrib.sitemaps import Sitemap\n from blog.models import Entry\n\n class BlogSitemap(Sitemap):\n changefreq = \"never\"\n priority = 0.5\n\n def items(self):\n return Entry.objects.filter(is_draft=False)\n\n def lastmod(self, obj):\n return obj.pub_date\n\nNote:\n\n* :attr:`~Sitemap.changefreq` and :attr:`~Sitemap.priority` are class\n attributes corresponding to ``<changefreq>`` and ``<priority>`` elements,\n respectively. They can be made callable as functions, as\n :attr:`~Sitemap.lastmod` was in the example.\n* :attr:`~Sitemap.items()` is a method that returns a :term:`sequence` or\n ``QuerySet`` of objects. The objects returned will get passed to any callable\n methods corresponding to a sitemap property (:attr:`~Sitemap.location`,\n :attr:`~Sitemap.lastmod`, :attr:`~Sitemap.changefreq`, and\n :attr:`~Sitemap.priority`).\n* :attr:`~Sitemap.lastmod` should return a :class:`~datetime.datetime`.\n* There is no :attr:`~Sitemap.location` method in this example, but you\n can provide it in order to specify the URL for your object. By default,\n :attr:`~Sitemap.location()` calls ``get_absolute_url()`` on each object\n and returns the result.\n\n``Sitemap`` class reference\n===========================\n\n.. class:: Sitemap\n\n A ``Sitemap`` class can define the following methods/attributes:\n\n .. attribute:: Sitemap.items\n\n **Required.** A method that returns a :term:`sequence` or ``QuerySet``\n of objects. The framework doesn't care what *type* of objects they are;\n all that matters is that these objects get passed to the\n :attr:`~Sitemap.location()`, :attr:`~Sitemap.lastmod()`,\n :attr:`~Sitemap.changefreq()` and :attr:`~Sitemap.priority()` methods.\n\n .. attribute:: Sitemap.location\n\n **Optional.** Either a method or attribute.\n\n If it's a method, it should return the absolute path for a given object\n as returned by :attr:`~Sitemap.items()`.\n\n If it's an attribute, its value should be a string representing an\n absolute path to use for *every* object returned by\n :attr:`~Sitemap.items()`.\n\n In both cases, \"absolute path\" means a URL that doesn't include the\n protocol or domain. Examples:\n\n * Good: ``'/foo/bar/'``\n * Bad: ``'example.com/foo/bar/'``\n * Bad: ``'https://example.com/foo/bar/'``\n\n If :attr:`~Sitemap.location` isn't provided, the framework will call\n the ``get_absolute_url()`` method on each object as returned by\n :attr:`~Sitemap.items()`.\n\n To specify a protocol other than ``'http'``, use\n :attr:`~Sitemap.protocol`.\n\n .. 
attribute:: Sitemap.lastmod\n\n **Optional.** Either a method or attribute.\n\n If it's a method, it should take one argument -- an object as returned\n by :attr:`~Sitemap.items()` -- and return that object's last-modified\n date/time as a :class:`~datetime.datetime`.\n\n If it's an attribute, its value should be a :class:`~datetime.datetime`\n representing the last-modified date/time for *every* object returned by\n :attr:`~Sitemap.items()`.\n\n If all items in a sitemap have a :attr:`~Sitemap.lastmod`, the sitemap\n generated by :func:`views.sitemap` will have a ``Last-Modified``\n header equal to the latest ``lastmod``. You can activate the\n :class:`~django.middleware.http.ConditionalGetMiddleware` to make\n Django respond appropriately to requests with an ``If-Modified-Since``\n header which will prevent sending the sitemap if it hasn't changed.\n\n .. attribute:: Sitemap.paginator\n\n **Optional.**\n\n This property returns a :class:`~django.core.paginator.Paginator` for\n :attr:`~Sitemap.items()`. If you generate sitemaps in a batch you may\n want to override this as a cached property in order to avoid multiple\n ``items()`` calls.\n\n .. attribute:: Sitemap.changefreq\n\n **Optional.** Either a method or attribute.\n\n If it's a method, it should take one argument -- an object as returned\n by :attr:`~Sitemap.items()` -- and return that object's change\n frequency as a string.\n\n If it's an attribute, its value should be a string representing the\n change frequency of *every* object returned by :attr:`~Sitemap.items()`.\n\n Possible values for :attr:`~Sitemap.changefreq`, whether you use a\n method or attribute, are:\n\n * ``'always'``\n * ``'hourly'``\n * ``'daily'``\n * ``'weekly'``\n * ``'monthly'``\n * ``'yearly'``\n * ``'never'``\n\n .. attribute:: Sitemap.priority\n\n **Optional.** Either a method or attribute.\n\n If it's a method, it should take one argument -- an object as returned\n by :attr:`~Sitemap.items()` -- and return that object's priority as\n either a string or float.\n\n If it's an attribute, its value should be either a string or float\n representing the priority of *every* object returned by\n :attr:`~Sitemap.items()`.\n\n Example values for :attr:`~Sitemap.priority`: ``0.4``, ``1.0``. The\n default priority of a page is ``0.5``. See the `sitemaps.org\n documentation`_ for more.\n\n .. _sitemaps.org documentation: https://www.sitemaps.org/protocol.html#prioritydef\n\n .. attribute:: Sitemap.protocol\n\n **Optional.**\n\n This attribute defines the protocol (``'http'`` or ``'https'``) of the\n URLs in the sitemap. If it isn't set, the protocol with which the\n sitemap was requested is used. If the sitemap is built outside the\n context of a request, the default is ``'http'``.\n\n .. deprecated:: 4.0\n\n The default protocol for sitemaps built outside the context of a\n request will change from ``'http'`` to ``'https'`` in Django 5.0.\n\n .. attribute:: Sitemap.limit\n\n **Optional.**\n\n This attribute defines the maximum number of URLs included on each page\n of the sitemap. Its value should not exceed the default value of\n ``50000``, which is the upper limit allowed in the `Sitemaps protocol\n <https://www.sitemaps.org/protocol.html#index>`_.\n\n .. attribute:: Sitemap.i18n\n\n **Optional.**\n\n A boolean attribute that defines if the URLs of this sitemap should\n be generated using all of your :setting:`LANGUAGES`. The default is\n ``False``.\n\n .. 
attribute:: Sitemap.languages\n\n **Optional.**\n\n A :term:`sequence` of :term:`language codes<language code>` to use for\n generating alternate links when :attr:`~Sitemap.i18n` is enabled.\n Defaults to :setting:`LANGUAGES`.\n\n .. attribute:: Sitemap.alternates\n\n **Optional.**\n\n A boolean attribute. When used in conjunction with\n :attr:`~Sitemap.i18n` generated URLs will each have a list of alternate\n links pointing to other language versions using the `hreflang\n attribute`_. The default is ``False``.\n\n .. _hreflang attribute: https://developers.google.com/search/docs/advanced/crawling/localized-versions\n\n .. attribute:: Sitemap.x_default\n\n **Optional.**\n\n A boolean attribute. When ``True`` the alternate links generated by\n :attr:`~Sitemap.alternates` will contain a ``hreflang=\"x-default\"``\n fallback entry with a value of :setting:`LANGUAGE_CODE`. The default is\n ``False``.\n\n .. method:: Sitemap.get_latest_lastmod()\n\n .. versionadded:: 4.1\n\n **Optional.** A method that returns the latest value returned by\n :attr:`~Sitemap.lastmod`. This function is used to add the ``lastmod``\n attribute to :ref:`Sitemap index context\n variables<sitemap-index-context-variables>`.\n\n By default :meth:`~Sitemap.get_latest_lastmod` returns:\n\n * If :attr:`~Sitemap.lastmod` is an attribute:\n :attr:`~Sitemap.lastmod`.\n * If :attr:`~Sitemap.lastmod` is a method:\n The latest ``lastmod`` returned by calling the method with all\n items returned by :meth:`Sitemap.items`.\n\nShortcuts\n=========\n\nThe sitemap framework provides a convenience class for a common case:\n\n.. class:: GenericSitemap(info_dict, priority=None, changefreq=None, protocol=None)\n\n The :class:`django.contrib.sitemaps.GenericSitemap` class allows you to\n create a sitemap by passing it a dictionary which has to contain at least\n a ``queryset`` entry. This queryset will be used to generate the items\n of the sitemap. It may also have a ``date_field`` entry that\n specifies a date field for objects retrieved from the ``queryset``.\n This will be used for the :attr:`~Sitemap.lastmod` attribute and\n :meth:`~Sitemap.get_latest_lastmod` methods in the in the\n generated sitemap.\n\n The :attr:`~Sitemap.priority`, :attr:`~Sitemap.changefreq`,\n and :attr:`~Sitemap.protocol` keyword arguments allow specifying these\n attributes for all URLs.\n\nExample\n-------\n\nHere's an example of a :doc:`URLconf </topics/http/urls>` using\n:class:`GenericSitemap`::\n\n from django.contrib.sitemaps import GenericSitemap\n from django.contrib.sitemaps.views import sitemap\n from django.urls import path\n from blog.models import Entry\n\n info_dict = {\n 'queryset': Entry.objects.all(),\n 'date_field': 'pub_date',\n }\n\n urlpatterns = [\n # some generic view using info_dict\n # ...\n\n # the sitemap\n path('sitemap.xml', sitemap,\n {'sitemaps': {'blog': GenericSitemap(info_dict, priority=0.6)}},\n name='django.contrib.sitemaps.views.sitemap'),\n ]\n\n.. _URLconf: ../url_dispatch/\n\nSitemap for static views\n========================\n\nOften you want the search engine crawlers to index views which are neither\nobject detail pages nor flatpages. The solution is to explicitly list URL\nnames for these views in ``items`` and call :func:`~django.urls.reverse` in\nthe ``location`` method of the sitemap. 
For example::\n\n # sitemaps.py\n from django.contrib import sitemaps\n from django.urls import reverse\n\n class StaticViewSitemap(sitemaps.Sitemap):\n priority = 0.5\n changefreq = 'daily'\n\n def items(self):\n return ['main', 'about', 'license']\n\n def location(self, item):\n return reverse(item)\n\n # urls.py\n from django.contrib.sitemaps.views import sitemap\n from django.urls import path\n\n from .sitemaps import StaticViewSitemap\n from . import views\n\n sitemaps = {\n 'static': StaticViewSitemap,\n }\n\n urlpatterns = [\n path('', views.main, name='main'),\n path('about/', views.about, name='about'),\n path('license/', views.license, name='license'),\n # ...\n path('sitemap.xml', sitemap, {'sitemaps': sitemaps},\n name='django.contrib.sitemaps.views.sitemap')\n ]\n\n\nCreating a sitemap index\n========================\n\n.. function:: views.index(request, sitemaps, template_name='sitemap_index.xml', content_type='application/xml', sitemap_url_name='django.contrib.sitemaps.views.sitemap')\n\nThe sitemap framework also has the ability to create a sitemap index that\nreferences individual sitemap files, one per each section defined in your\n``sitemaps`` dictionary. The only differences in usage are:\n\n* You use two views in your URLconf: :func:`django.contrib.sitemaps.views.index`\n and :func:`django.contrib.sitemaps.views.sitemap`.\n* The :func:`django.contrib.sitemaps.views.sitemap` view should take a\n ``section`` keyword argument.\n\nHere's what the relevant URLconf lines would look like for the example above::\n\n from django.contrib.sitemaps import views\n\n urlpatterns = [\n path('sitemap.xml', views.index, {'sitemaps': sitemaps},\n name='django.contrib.sitemaps.views.index'),\n path('sitemap-<section>.xml', views.sitemap, {'sitemaps': sitemaps},\n name='django.contrib.sitemaps.views.sitemap'),\n ]\n\nThis will automatically generate a :file:`sitemap.xml` file that references\nboth :file:`sitemap-flatpages.xml` and :file:`sitemap-blog.xml`. The\n:class:`~django.contrib.sitemaps.Sitemap` classes and the ``sitemaps``\ndict don't change at all.\n\nIf all sitemaps have a ``lastmod`` returned by\n:meth:`Sitemap.get_latest_lastmod` the sitemap index will have a\n``Last-Modified`` header equal to the latest ``lastmod``.\n\nYou should create an index file if one of your sitemaps has more than 50,000\nURLs. In this case, Django will automatically paginate the sitemap, and the\nindex will reflect that.\n\nIf you're not using the vanilla sitemap view -- for example, if it's wrapped\nwith a caching decorator -- you must name your sitemap view and pass\n``sitemap_url_name`` to the index view::\n\n from django.contrib.sitemaps import views as sitemaps_views\n from django.views.decorators.cache import cache_page\n\n urlpatterns = [\n path('sitemap.xml',\n cache_page(86400)(sitemaps_views.index),\n {'sitemaps': sitemaps, 'sitemap_url_name': 'sitemaps'}),\n path('sitemap-<section>.xml',\n cache_page(86400)(sitemaps_views.sitemap),\n {'sitemaps': sitemaps}, name='sitemaps'),\n ]\n\n.. 
versionchanged:: 4.1\n\n Use of the ``Last-Modified`` header was added.\n\nTemplate customization\n======================\n\nIf you wish to use a different template for each sitemap or sitemap index\navailable on your site, you may specify it by passing a ``template_name``\nparameter to the ``sitemap`` and ``index`` views via the URLconf::\n\n from django.contrib.sitemaps import views\n\n urlpatterns = [\n path('custom-sitemap.xml', views.index, {\n 'sitemaps': sitemaps,\n 'template_name': 'custom_sitemap.html'\n }, name='django.contrib.sitemaps.views.index'),\n path('custom-sitemap-<section>.xml', views.sitemap, {\n 'sitemaps': sitemaps,\n 'template_name': 'custom_sitemap.html'\n }, name='django.contrib.sitemaps.views.sitemap'),\n ]\n\n\nThese views return :class:`~django.template.response.TemplateResponse`\ninstances which allow you to easily customize the response data before\nrendering. For more details, see the :doc:`TemplateResponse documentation\n</ref/template-response>`.\n\nContext variables\n-----------------\n\nWhen customizing the templates for the\n:func:`~django.contrib.sitemaps.views.index` and\n:func:`~django.contrib.sitemaps.views.sitemap` views, you can rely on the\nfollowing context variables.\n\n.. _sitemap-index-context-variables:\n\nIndex\n-----\n\nThe variable ``sitemaps`` is a list of objects containing the ``location`` and\n``lastmod`` attribute for each of the sitemaps. Each URL exposes the following\nattributes:\n\n- ``location``: The location (url & page) of the sitemap.\n- ``lastmod``: Populated by the :meth:`~Sitemap.get_latest_lastmod`\n method for each sitemap.\n\n.. versionchanged:: 4.1\n\n The context was changed to a list of objects with ``location`` and optional\n ``lastmod`` attributes.\n\nSitemap\n-------\n\nThe variable ``urlset`` is a list of URLs that should appear in the\nsitemap. Each URL exposes attributes as defined in the\n:class:`~django.contrib.sitemaps.Sitemap` class:\n\n- ``alternates``\n- ``changefreq``\n- ``item``\n- ``lastmod``\n- ``location``\n- ``priority``\n\nThe ``alternates`` attribute is available when :attr:`~Sitemap.i18n` and\n:attr:`~Sitemap.alternates` are enabled. It is a list of other language\nversions, including the optional :attr:`~Sitemap.x_default` fallback, for each\nURL. Each alternate is a dictionary with ``location`` and ``lang_code`` keys.\n\nThe ``item`` attribute has been added for each URL to allow more flexible\ncustomization of the templates, such as `Google news sitemaps`_. Assuming\nSitemap's :attr:`~Sitemap.items()` would return a list of items with\n``publication_data`` and a ``tags`` field something like this would\ngenerate a Google News compatible sitemap:\n\n.. code-block:: xml+django\n\n <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n <urlset\n xmlns=\"https://www.sitemaps.org/schemas/sitemap/0.9\"\n xmlns:news=\"http://www.google.com/schemas/sitemap-news/0.9\">\n {% spaceless %}\n {% for url in urlset %}\n <url>\n <loc>{{ url.location }}</loc>\n {% if url.lastmod %}<lastmod>{{ url.lastmod|date:\"Y-m-d\" }}</lastmod>{% endif %}\n {% if url.changefreq %}<changefreq>{{ url.changefreq }}</changefreq>{% endif %}\n {% if url.priority %}<priority>{{ url.priority }}</priority>{% endif %}\n <news:news>\n {% if url.item.publication_date %}<news:publication_date>{{ url.item.publication_date|date:\"Y-m-d\" }}</news:publication_date>{% endif %}\n {% if url.item.tags %}<news:keywords>{{ url.item.tags }}</news:keywords>{% endif %}\n </news:news>\n </url>\n {% endfor %}\n {% endspaceless %}\n </urlset>\n\n.. 
_`Google news sitemaps`: https://support.google.com/news/publisher-center/answer/9606710\n\nPinging Google\n==============\n\nYou may want to \"ping\" Google when your sitemap changes, to let it know to\nreindex your site. The sitemaps framework provides a function to do just\nthat: :func:`django.contrib.sitemaps.ping_google()`.\n\n.. function:: ping_google(sitemap_url=None, ping_url=PING_URL, sitemap_uses_https=True)\n\n ``ping_google`` takes these optional arguments:\n\n * ``sitemap_url`` - The absolute path to your site's sitemap (e.g.,\n :file:`'/sitemap.xml'`).\n\n If this argument isn't provided, ``ping_google`` will perform a reverse\n lookup in your URLconf, for URLs named\n ``'django.contrib.sitemaps.views.index'`` and then\n ``'django.contrib.sitemaps.views.sitemap'`` (without further arguments) to\n automatically determine the sitemap URL.\n\n * ``ping_url`` - Defaults to Google's Ping Tool:\n https://www.google.com/webmasters/tools/ping.\n\n * ``sitemap_uses_https`` - Set to ``False`` if your site uses ``http``\n rather than ``https``.\n\n :func:`ping_google` raises the exception\n ``django.contrib.sitemaps.SitemapNotFound`` if it cannot determine your\n sitemap URL.\n\n.. admonition:: Register with Google first!\n\n The :func:`ping_google` command only works if you have registered your\n site with `Google Search Console`_.\n\n.. _`Google Search Console`: https://search.google.com/search-console/welcome\n\nOne useful way to call :func:`ping_google` is from a model's ``save()``\nmethod::\n\n from django.contrib.sitemaps import ping_google\n\n class Entry(models.Model):\n # ...\n def save(self, force_insert=False, force_update=False):\n super().save(force_insert, force_update)\n try:\n ping_google()\n except Exception:\n # Bare 'except' because we could get a variety\n # of HTTP-related exceptions.\n pass\n\nA more efficient solution, however, would be to call :func:`ping_google` from a\ncron script, or some other scheduled task. The function makes an HTTP request\nto Google's servers, so you may not want to introduce that network overhead\neach time you call ``save()``.\n\nPinging Google via ``manage.py``\n--------------------------------\n\n.. django-admin:: ping_google [sitemap_url]\n\nOnce the sitemaps application is added to your project, you may also\nping Google using the ``ping_google`` management command::\n\n python manage.py ping_google [/sitemap.xml]\n\n.. django-admin-option:: --sitemap-uses-http\n\nUse this option if your sitemap uses ``http`` rather than ``https``.\n", "docs/releases/4.2.txt": "============================================\nDjango 4.2 release notes - UNDER DEVELOPMENT\n============================================\n\n*Expected April 2023*\n\nWelcome to Django 4.2!\n\nThese release notes cover the :ref:`new features <whats-new-4.2>`, as well as\nsome :ref:`backwards incompatible changes <backwards-incompatible-4.2>` you'll\nwant to be aware of when upgrading from Django 4.1 or earlier. We've\n:ref:`begun the deprecation process for some features\n<deprecated-features-4.2>`.\n\nSee the :doc:`/howto/upgrade-version` guide if you're updating an existing\nproject.\n\nPython compatibility\n====================\n\nDjango 4.2 supports Python 3.8, 3.9, 3.10, and 3.11. We **highly recommend**\nand only officially support the latest release of each series.\n\n.. _whats-new-4.2:\n\nWhat's new in Django 4.2\n========================\n\nPsycopg 3 support\n-----------------\n\nDjango now supports `psycopg`_ version 3.1 or higher. 
To update your code,\ninstall the `psycopg library`_, you don't need to change the\n:setting:`ENGINE <DATABASE-ENGINE>` as ``django.db.backends.postgresql``\nsupports both libraries.\n\nSupport for ``psycopg2`` is likely to be deprecated and removed at some point\nin the future.\n\n.. _psycopg: https://www.psycopg.org/psycopg3/\n.. _psycopg library: https://pypi.org/project/psycopg/\n\nMitigation for the BREACH attack\n--------------------------------\n\n:class:`~django.middleware.gzip.GZipMiddleware` now includes a mitigation for\nthe BREACH attack. It will add up to 100 random bytes to gzip responses to make\nBREACH attacks harder. Read more about the mitigation technique in the `Heal\nThe Breach (HTB) paper`_.\n\n.. _Heal The Breach (HTB) paper: https://ieeexplore.ieee.org/document/9754554\n\nMinor features\n--------------\n\n:mod:`django.contrib.admin`\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n* The light or dark color theme of the admin can now be toggled in the UI, as\n well as being set to follow the system setting.\n\n* The admin's font stack now prefers system UI fonts and no longer requires\n downloading fonts. Additionally, CSS variables are available to more easily\n override the default font families.\n\n* The :source:`admin/delete_confirmation.html\n <django/contrib/admin/templates/admin/delete_confirmation.html>` template now\n has some additional blocks and scripting hooks to ease customization.\n\n* The chosen options of\n :attr:`~django.contrib.admin.ModelAdmin.filter_horizontal` and\n :attr:`~django.contrib.admin.ModelAdmin.filter_vertical` widgets are now\n filterable.\n\n* The ``admin/base.html`` template now has a new block ``nav-breadcrumbs``\n which contains the navigation landmark and the ``breadcrumbs`` block.\n\n* :attr:`.ModelAdmin.list_editable` now uses atomic transactions when making\n edits.\n\n:mod:`django.contrib.admindocs`\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n* ...\n\n:mod:`django.contrib.auth`\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n* The default iteration count for the PBKDF2 password hasher is increased from\n 390,000 to 480,000.\n\n* :class:`~django.contrib.auth.forms.UserCreationForm` now saves many-to-many\n form fields for a custom user model.\n\n:mod:`django.contrib.contenttypes`\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n* ...\n\n:mod:`django.contrib.gis`\n~~~~~~~~~~~~~~~~~~~~~~~~~\n\n* The :doc:`GeoJSON serializer </ref/contrib/gis/serializers>` now outputs the\n ``id`` key for serialized features, which defaults to the primary key of\n objects.\n\n* The :class:`~django.contrib.gis.gdal.GDALRaster` class now supports\n :class:`pathlib.Path`.\n\n* The :class:`~django.contrib.gis.geoip2.GeoIP2` class now supports ``.mmdb``\n files downloaded from DB-IP.\n\n* The OpenLayers template widget no longer includes inline CSS (which also\n removes the former ``map_css`` block) to better comply with a strict Content\n Security Policy.\n\n:mod:`django.contrib.messages`\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n* ...\n\n:mod:`django.contrib.postgres`\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n* The new :lookup:`trigram_strict_word_similar` lookup, and the\n :class:`TrigramStrictWordSimilarity()\n <django.contrib.postgres.search.TrigramStrictWordSimilarity>` and\n :class:`TrigramStrictWordDistance()\n <django.contrib.postgres.search.TrigramStrictWordDistance>` expressions allow\n using trigram strict word similarity.\n\n* The :lookup:`arrayfield.overlap` lookup now supports ``QuerySet.values()``\n and ``values_list()`` as a right-hand 
side.\n\n:mod:`django.contrib.redirects`\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n* ...\n\n:mod:`django.contrib.sessions`\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n* ...\n\n:mod:`django.contrib.sitemaps`\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n* ...\n\n:mod:`django.contrib.sites`\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n* ...\n\n:mod:`django.contrib.staticfiles`\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n* :class:`~django.contrib.staticfiles.storage.ManifestStaticFilesStorage` now\n replaces paths to JavaScript modules in ``import`` and ``export`` statements\n with their hashed counterparts.\n\n:mod:`django.contrib.syndication`\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n* ...\n\nCache\n~~~~~\n\n* ...\n\nCSRF\n~~~~\n\n* ...\n\nDecorators\n~~~~~~~~~~\n\n* ...\n\nEmail\n~~~~~\n\n* ...\n\nError Reporting\n~~~~~~~~~~~~~~~\n\n* The debug page now shows :pep:`exception notes <678>` and\n :pep:`fine-grained error locations <657>` on Python 3.11+.\n\nFile Storage\n~~~~~~~~~~~~\n\n* ...\n\nFile Uploads\n~~~~~~~~~~~~\n\n* ...\n\nForms\n~~~~~\n\n* :class:`~django.forms.ModelForm` now accepts the new ``Meta`` option\n ``formfield_callback`` to customize form fields.\n\n* :func:`~django.forms.models.modelform_factory` now respects the\n ``formfield_callback`` attribute of the ``form``’s ``Meta``.\n\n* Session cookies are now treated as credentials and therefore hidden and\n replaced with stars (``**********``) in error reports.\n\nGeneric Views\n~~~~~~~~~~~~~\n\n* ...\n\nInternationalization\n~~~~~~~~~~~~~~~~~~~~\n\n* Added support and translations for the Central Kurdish (Sorani) language.\n\n* The :class:`~django.middleware.locale.LocaleMiddleware` now respects a\n language from the request when :func:`~django.conf.urls.i18n.i18n_patterns`\n is used with the ``prefix_default_language`` argument set to ``False``.\n\nLogging\n~~~~~~~\n\n* The :ref:`django-db-logger` logger now logs transaction management queries\n (``BEGIN``, ``COMMIT``, and ``ROLLBACK``) at the ``DEBUG`` level.\n\nManagement Commands\n~~~~~~~~~~~~~~~~~~~\n\n* :djadmin:`makemessages` command now supports locales with private sub-tags\n such as ``nl_NL-x-informal``.\n\n* The new :option:`makemigrations --update` option merges model changes into\n the latest migration and optimizes the resulting operations.\n\nMigrations\n~~~~~~~~~~\n\n* Migrations now support serialization of ``enum.Flag`` objects.\n\nModels\n~~~~~~\n\n* ``QuerySet`` now extensively supports filtering against\n :ref:`window-functions` with the exception of disjunctive filter lookups\n against window functions when performing aggregation.\n\n* :meth:`~.QuerySet.prefetch_related` now supports\n :class:`~django.db.models.Prefetch` objects with sliced querysets.\n\n* :ref:`Registering lookups <lookup-registration-api>` on\n :class:`~django.db.models.Field` instances is now supported.\n\n* The new ``robust`` argument for :func:`~django.db.transaction.on_commit`\n allows performing actions that can fail after a database transaction is\n successfully committed.\n\n* The new :class:`KT() <django.db.models.fields.json.KT>` expression represents\n the text value of a key, index, or path transform of\n :class:`~django.db.models.JSONField`.\n\n* :class:`~django.db.models.functions.Now` now supports microsecond precision\n on MySQL and millisecond precision on SQLite.\n\n* :class:`F() <django.db.models.F>` expressions that output ``BooleanField``\n can now be negated using ``~F()`` (inversion operator).\n\n* ``Model`` now provides asynchronous versions of some methods that use the\n database, using an ``a`` prefix: 
:meth:`~.Model.adelete`,\n :meth:`~.Model.arefresh_from_db`, and :meth:`~.Model.asave`.\n\n* Related managers now provide asynchronous versions of methods that change a\n set of related objects, using an ``a`` prefix: :meth:`~.RelatedManager.aadd`,\n :meth:`~.RelatedManager.aclear`, :meth:`~.RelatedManager.aremove`, and\n :meth:`~.RelatedManager.aset`.\n\nRequests and Responses\n~~~~~~~~~~~~~~~~~~~~~~\n\n* ...\n\nSecurity\n~~~~~~~~\n\n* ...\n\nSerialization\n~~~~~~~~~~~~~\n\n* ...\n\nSignals\n~~~~~~~\n\n* ...\n\nTemplates\n~~~~~~~~~\n\n* ...\n\nTests\n~~~~~\n\n* The :option:`test --debug-sql` option now formats SQL queries with\n ``sqlparse``.\n\n* The :class:`~django.test.RequestFactory`,\n :class:`~django.test.AsyncRequestFactory`, :class:`~django.test.Client`, and\n :class:`~django.test.AsyncClient` classes now support the ``headers``\n parameter, which accepts a dictionary of header names and values. This allows\n a more natural syntax for declaring headers.\n\n .. code-block:: python\n\n # Before:\n self.client.get(\"/home/\", HTTP_ACCEPT_LANGUAGE=\"fr\")\n await self.async_client.get(\"/home/\", ACCEPT_LANGUAGE=\"fr\")\n\n # After:\n self.client.get(\"/home/\", headers={\"accept-language\": \"fr\"})\n await self.async_client.get(\"/home/\", headers={\"accept-language\": \"fr\"})\n\nURLs\n~~~~\n\n* ...\n\nUtilities\n~~~~~~~~~\n\n* The new ``encoder`` parameter for :meth:`django.utils.html.json_script`\n function allows customizing a JSON encoder class.\n\n* The private internal vendored copy of ``urllib.parse.urlsplit()`` now strips\n ``'\\r'``, ``'\\n'``, and ``'\\t'`` (see :cve:`2022-0391` and :bpo:`43882`).\n This is to protect projects that may be incorrectly using the internal\n ``url_has_allowed_host_and_scheme()`` function, instead of using one of the\n documented functions for handling URL redirects. The Django functions were\n not affected.\n\n* The new :func:`django.utils.http.content_disposition_header` function returns\n a ``Content-Disposition`` HTTP header value as specified by :rfc:`6266`.\n\nValidators\n~~~~~~~~~~\n\n* The list of common passwords used by ``CommonPasswordValidator`` is updated\n to the most recent version.\n\n.. _backwards-incompatible-4.2:\n\nBackwards incompatible changes in 4.2\n=====================================\n\nDatabase backend API\n--------------------\n\nThis section describes changes that may be needed in third-party database\nbackends.\n\n* ``DatabaseFeatures.allows_group_by_pk`` is removed as it only remained to\n accommodate a MySQL extension that has been supplanted by proper functional\n dependency detection in MySQL 5.7.15. Note that\n ``DatabaseFeatures.allows_group_by_selected_pks`` is still supported and\n should be enabled if your backend supports functional dependency detection in\n ``GROUP BY`` clauses as specified by the ``SQL:1999`` standard.\n\nDropped support for MariaDB 10.3\n--------------------------------\n\nUpstream support for MariaDB 10.3 ends in May 2023. Django 4.2 supports MariaDB\n10.4 and higher.\n\nDropped support for MySQL 5.7\n-----------------------------\n\nUpstream support for MySQL 5.7 ends in October 2023. Django 4.2 supports MySQL\n8 and higher.\n\nDropped support for PostgreSQL 11\n---------------------------------\n\nUpstream support for PostgreSQL 11 ends in November 2023. 
Django 4.2 supports\nPostgreSQL 12 and higher.\n\nSetting ``update_fields`` in ``Model.save()`` may now be required\n-----------------------------------------------------------------\n\nIn order to avoid updating unnecessary columns,\n:meth:`.QuerySet.update_or_create` now passes ``update_fields`` to the\n:meth:`Model.save() <django.db.models.Model.save>` calls. As a consequence, any\nfields modified in the custom ``save()`` methods should be added to the\n``update_fields`` keyword argument before calling ``super()``. See\n:ref:`overriding-model-methods` for more details.\n\nMiscellaneous\n-------------\n\n* The undocumented ``SimpleTemplateResponse.rendering_attrs`` and\n ``TemplateResponse.rendering_attrs`` are renamed to ``non_picklable_attrs``.\n\n* The undocumented ``django.http.multipartparser.parse_header()`` function is\n removed. Use ``django.utils.http.parse_header_parameters()`` instead.\n\n* :ttag:`{% blocktranslate asvar … %}<blocktranslate>` result is now marked as\n safe for (HTML) output purposes.\n\n* The ``autofocus`` HTML attribute in the admin search box is removed as it can\n be confusing for screen readers.\n\n* The :option:`makemigrations --check` option no longer creates missing\n migration files.\n\n* The ``alias`` argument for :meth:`.Expression.get_group_by_cols` is removed.\n\n* The minimum supported version of ``sqlparse`` is increased from 0.2.2 to\n 0.2.3.\n\n* The undocumented ``negated`` parameter of the\n :class:`~django.db.models.Exists` expression is removed.\n\n* The ``is_summary`` argument of the undocumented ``Query.add_annotation()``\n method is removed.\n\n* The minimum supported version of SQLite is increased from 3.9.0 to 3.21.0.\n\n* :djadmin:`inspectdb` now uses ``display_size`` from\n ``DatabaseIntrospection.get_table_description()`` rather than\n ``internal_size`` for ``CharField``.\n\n.. _deprecated-features-4.2:\n\nFeatures deprecated in 4.2\n==========================\n\n``index_together`` option is deprecated in favor of ``indexes``\n---------------------------------------------------------------\n\nThe :attr:`Meta.index_together <django.db.models.Options.index_together>`\noption is deprecated in favor of the :attr:`~django.db.models.Options.indexes`\noption.\n\nMigrating existing ``index_together`` should be handled as a migration. For\nexample::\n\n class Author(models.Model):\n rank = models.IntegerField()\n name = models.CharField(max_length=30)\n\n class Meta:\n index_together = [[\"rank\", \"name\"]]\n\nShould become::\n\n class Author(models.Model):\n rank = models.IntegerField()\n name = models.CharField(max_length=30)\n\n class Meta:\n indexes = [models.Index(fields=[\"rank\", \"name\"])]\n\nRunning the :djadmin:`makemigrations` command will generate a migration\ncontaining a :class:`~django.db.migrations.operations.RenameIndex` operation\nwhich will rename the existing index.\n\nThe ``AlterIndexTogether`` migration operation is now officially supported only\nfor pre-Django 4.2 migration files. For backward compatibility reasons, it's\nstill part of the public API, and there's no plan to deprecate or remove it,\nbut it should not be used for new migrations. 
Use\n:class:`~django.db.migrations.operations.AddIndex` and\n:class:`~django.db.migrations.operations.RemoveIndex` operations instead.\n\nPassing encoded JSON string literals to ``JSONField`` is deprecated\n-------------------------------------------------------------------\n\n``JSONField`` and its associated lookups and aggregates use to allow passing\nJSON encoded string literals which caused ambiguity on whether string literals\nwere already encoded from database backend's perspective.\n\nDuring the deprecation period string literals will be attempted to be JSON\ndecoded and a warning will be emitted on success that points at passing\nnon-encoded forms instead.\n\nCode that use to pass JSON encoded string literals::\n\n Document.objects.bulk_create(\n Document(data=Value(\"null\")),\n Document(data=Value(\"[]\")),\n Document(data=Value('\"foo-bar\"')),\n )\n Document.objects.annotate(\n JSONBAgg(\"field\", default=Value('[]')),\n )\n\nShould become::\n\n Document.objects.bulk_create(\n Document(data=Value(None, JSONField())),\n Document(data=[]),\n Document(data=\"foo-bar\"),\n )\n Document.objects.annotate(\n JSONBAgg(\"field\", default=[]),\n )\n\nFrom Django 5.1+ string literals will be implicitly interpreted as JSON string\nliterals.\n\nMiscellaneous\n-------------\n\n* The ``BaseUserManager.make_random_password()`` method is deprecated. See\n `recipes and best practices\n <https://docs.python.org/3/library/secrets.html#recipes-and-best-practices>`_\n for using Python's :py:mod:`secrets` module to generate passwords.\n\n* The ``length_is`` template filter is deprecated in favor of :tfilter:`length`\n and the ``==`` operator within an :ttag:`{% if %}<if>` tag. For example\n\n .. code-block:: html+django\n\n {% if value|length == 4 %}…{% endif %}\n {% if value|length == 4 %}True{% else %}False{% endif %}\n\n instead of:\n\n .. code-block:: html+django\n\n {% if value|length_is:4 %}…{% endif %}\n {{ value|length_is:4 }}\n\n* ``django.contrib.auth.hashers.SHA1PasswordHasher``,\n ``django.contrib.auth.hashers.UnsaltedSHA1PasswordHasher``, and\n ``django.contrib.auth.hashers.UnsaltedMD5PasswordHasher`` are deprecated.\n\n* ``django.contrib.postgres.fields.CICharField`` is deprecated in favor of\n ``CharField(db_collation=\"…\")`` with a case-insensitive non-deterministic\n collation.\n\n* ``django.contrib.postgres.fields.CIEmailField`` is deprecated in favor of\n ``EmailField(db_collation=\"…\")`` with a case-insensitive non-deterministic\n collation.\n\n* ``django.contrib.postgres.fields.CITextField`` is deprecated in favor of\n ``TextField(db_collation=\"…\")`` with a case-insensitive non-deterministic\n collation.\n\n* ``django.contrib.postgres.fields.CIText`` mixin is deprecated.\n\n* The ``map_height`` and ``map_width`` attributes of ``BaseGeometryWidget`` are\n deprecated, use CSS to size map widgets instead.\n\n* ``SimpleTestCase.assertFormsetError()`` is deprecated in favor of\n ``assertFormSetError()``.\n\n* ``TransactionTestCase.assertQuerysetEqual()`` is deprecated in favor of\n ``assertQuerySetEqual()``.\n\n* Passing positional arguments to ``Signer`` and ``TimestampSigner`` is\n deprecated in favor of keyword-only arguments.\n"}
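The sitemaps reference quoted above documents the ``i18n``, ``languages``, ``alternates``, and ``x_default`` attributes one by one but never shows them together. Below is a minimal sketch, not taken from those docs, of a sitemap class that enables all four; the ``Entry`` model is the hypothetical blog model from the reference's own example.

```python
# Sketch only: an i18n-aware sitemap combining the attributes described in the
# reference above. ``blog.models.Entry`` is the hypothetical model from the docs.
from django.contrib.sitemaps import Sitemap

from blog.models import Entry


class I18nBlogSitemap(Sitemap):
    i18n = True               # emit one URL per language code
    languages = ["en", "fr"]  # restrict generation to these language codes
    alternates = True         # add hreflang alternate links for each URL
    x_default = True          # include an hreflang="x-default" fallback entry
    changefreq = "weekly"
    priority = 0.5

    def items(self):
        return Entry.objects.filter(is_draft=False)

    def lastmod(self, obj):
        return obj.pub_date
```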
|
diff --git a/docs/ref/contrib/sitemaps.txt b/docs/ref/contrib/sitemaps.txt
index d3225405a38d..7dc3dced5183 100644
--- a/docs/ref/contrib/sitemaps.txt
+++ b/docs/ref/contrib/sitemaps.txt
@@ -311,6 +311,15 @@ Note:
The latest ``lastmod`` returned by calling the method with all
items returned by :meth:`Sitemap.items`.
+ .. method:: Sitemap.get_languages_for_item(item)
+
+ .. versionadded:: 4.2
+
+ **Optional.** A method that returns the sequence of language codes for
+ which the item is displayed. By default
+ :meth:`~Sitemap.get_languages_for_item` returns
+ :attr:`~Sitemap.languages`.
+
Shortcuts
=========
diff --git a/docs/releases/4.2.txt b/docs/releases/4.2.txt
index 682fce2a5362..7e93df0702ce 100644
--- a/docs/releases/4.2.txt
+++ b/docs/releases/4.2.txt
@@ -145,7 +145,8 @@ Minor features
:mod:`django.contrib.sitemaps`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-* ...
+* The new :meth:`.Sitemap.get_languages_for_item` method allows customizing the
+ list of languages for which the item is displayed.
:mod:`django.contrib.sites`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
{"django/contrib/sitemaps/__init__.py": [{"type": "function", "name": "Sitemap.get_languages_for_item", "lines": [95, 97], "signature": "def get_languages_for_item(self, item):", "doc": "Languages for which this item is displayed."}]}
|
4.2
|
["A i18n sitemap index in which item can be chosen to be displayed for a"]
|
["A simple sitemap index can be rendered with a custom template", "test_simple_sitemap_custom_index_warning (sitemaps_tests.test_http.DeprecatedTests)", "A i18n sitemap with alternate/hreflang links can be rendered.", "A i18n sitemap index with limited languages can be rendered.", "A i18n sitemap index with x-default can be rendered.", "A cached sitemap index can be rendered (#2713).", "All items in the sitemap have `lastmod`. The `Last-Modified` header", "test_callable_sitemod_no_items (sitemaps_tests.test_http.HTTPSitemapTests)", "Not all items have `lastmod`. Therefore the `Last-Modified` header", "test_empty_page (sitemaps_tests.test_http.HTTPSitemapTests)", "test_empty_sitemap (sitemaps_tests.test_http.HTTPSitemapTests)", "The priority value should not be localized.", "test_no_section (sitemaps_tests.test_http.HTTPSitemapTests)", "test_page_not_int (sitemaps_tests.test_http.HTTPSitemapTests)", "A sitemap may have multiple pages.", "test_requestsite_sitemap (sitemaps_tests.test_http.HTTPSitemapTests)", "A simple sitemap can be rendered with a custom template", "A simple i18n sitemap index can be rendered, without logging variable", "A simple sitemap can be rendered", "A simple sitemap index can be rendered", "A simple sitemap section can be rendered", "sitemapindex.lastmod is included when Sitemap.lastmod is", "sitemapindex.lastmod is omitted when Sitemap.lastmod is", "Check we get ImproperlyConfigured if we don't pass a site object to", "Check we get ImproperlyConfigured when we don't pass a site object to", "Check to make sure that the raw item is included with each", "Last-Modified header is set correctly", "The Last-Modified header should be support dates (without time).", "Last-Modified header is missing when sitemap has no lastmod", "Last-Modified header is omitted when lastmod not on all items", "The Last-Modified header should be converted from timezone aware dates", "lastmod datestamp shows timezones if Sitemap.get_latest_lastmod", "A sitemap may not be callable.", "test_sitemap_without_entries (sitemaps_tests.test_http.HTTPSitemapTests)", "The Last-Modified header is set to the most recent sitemap lastmod.", "The Last-Modified header is omitted when lastmod isn't found in all", "test_x_robots_sitemap (sitemaps_tests.test_http.HTTPSitemapTests)"]
|
016bead6a23989adec5c7ee6948db7ce2fc5e89b
|
{"first_commit_time": 1666366604.0, "pr_title": "Fixed #33662 -- Allowed Sitemap to customize languages for each item.", "pr_body": "[ticket #33662 : Choose which items are displayed per language in Sitemap ](https://code.djangoproject.com/ticket/33662)", "pr_timeline": [], "issues": {"33662": {"issue_title": "Choose which items are displayed per language in Sitemap", "issue_body": "Description\n\t \nThe current implementation of Sitemap is : if we use i18n, then we display a cartesian product between some items and some languages. \nThere is no way to use the provided i18n automation if we want to display some items depending on the language (for instance non-translated blog articles). \nI precise in my case, urls are translated, so given a language the url may not exist or raise an error.", "issue_timeline": [{"time": 1651026600.0, "comment": "OK, sounds reasonable to at least look at. Would you care to take on a patch? In either case could you perhaps expand the description to include a minimal reproduce setup so that someone picking it up had a few breadcrumbs to follow? Thanks!"}, {"time": 1662083167.0, "comment": "I would like to tackle this new feature, it sounds interesting. As Carlton Gibson said, could you expand please the description, so I can fully understand the idea of the new feature?. For instance, on which scenario the URLs get translated? (Not sure, but this maybe sounds more like an error than a future). It does sound as a new feature, being able to display some items depending on the language, by this do you mean like for example, only translate a menu or only translate what is inside a <div> or something similar? Please expand your idea."}, {"time": 1666429787.0, "comment": "Sorry for the delay, I did not at all see any of your comments sooner ! I have in my project a model Article which could be summed up in four fields (url_en, url_fr, content_en and content_fr). Some of these article are only written in one language. With no content, the page is empty. Even if it exists, I don't want this empty page to be in my sitemap. My idea was to add a method in Sitemap to return a boolean indicating whether the item should be displayed in the sitemap given a language. I made a first draft for a patch (\u200bhttps://github.com/django/django/compare/main...roxanebellot:django:ticket_33662). I wait for a green light on the feature before submitting it as a PR."}, {"time": 1668052318.0, "comment": "@roxanebellot Please open a PR and add it here when you've done so. Thanks."}, {"time": 1670404234.0, "comment": "\u200bPR"}, {"time": 1671432679.0, "comment": "In 289e9a7: Fixed #33662 -- Allowed Sitemap to customize languages for each item."}, {"time": 1682600055.0, "comment": "In 5c456a8: Refs #33662 -- Corrected Sitemap.get_languages_for_item() signature in docs."}, {"time": 1682600093.0, "comment": "In 88f23b6b: [4.2.x] Refs #33662 -- Corrected Sitemap.get_languages_for_item() signature in docs. Backport of 5c456a879300e5f51010d3f6aa7449302413efed from main"}]}}}
|
docker/docker-py
| 1,230
|
https://github.com/docker/docker-py/pull/1230
|
docker__docker-py-1230
|
[]
|
52c2cc845346884218f566eeaeee5a5ca3e714ab
|
diff --git a/docker/api/swarm.py b/docker/api/swarm.py
index 7481c67532..2fc877448a 100644
--- a/docker/api/swarm.py
+++ b/docker/api/swarm.py
@@ -69,6 +69,13 @@ def nodes(self, filters=None):
return self._result(self._get(url, params=params), True)
+ @utils.minimum_version('1.24')
+ def update_node(self, node_id, version, node_spec=None):
+ url = self._url('/nodes/{0}/update?version={1}', node_id, str(version))
+ res = self._post_json(url, data=node_spec)
+ self._raise_for_status(res)
+ return True
+
@utils.minimum_version('1.24')
def update_swarm(self, version, swarm_spec=None, rotate_worker_token=False,
rotate_manager_token=False):
diff --git a/docs/api.md b/docs/api.md
index 1699344a66..5cadb83081 100644
--- a/docs/api.md
+++ b/docs/api.md
@@ -1129,6 +1129,11 @@ Update resource configs of one or more containers.
**Returns** (dict): Dictionary containing a `Warnings` key.
+## update_node
+
+Update a node.
+See the [Swarm documentation](swarm.md#clientupdate_node).
+
## update_service
Update a service, similar to the `docker service update` command. See the
diff --git a/docs/swarm.md b/docs/swarm.md
index 3cc44f8741..20c3945352 100644
--- a/docs/swarm.md
+++ b/docs/swarm.md
@@ -232,6 +232,30 @@ List Swarm nodes
**Returns:** A list of dictionaries containing data about each swarm node.
+### Client.update_node
+
+Update the Node's configuration
+
+**Params:**
+
+* version (int): The version number of the node object being updated. This
+ is required to avoid conflicting writes.
+* node_spec (dict): Configuration settings to update. Any values not provided
+ will be removed. See the official [Docker API documentation](https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#/update-a-node) for more details.
+ Default: `None`.
+
+**Returns:** `True` if the request went through. Raises an `APIError` if it
+ fails.
+
+```python
+node_spec = {'Availability': 'active',
+ 'Name': 'node-name',
+ 'Role': 'manager',
+ 'Labels': {'foo': 'bar'}
+ }
+client.update_node(node_id='24ifsmvkjbyhk', version=8, node_spec=node_spec)
+```
+
### Client.update_swarm
Update the Swarm's configuration
|
diff --git a/tests/integration/swarm_test.py b/tests/integration/swarm_test.py
index 8c62f2ec06..7f02c71170 100644
--- a/tests/integration/swarm_test.py
+++ b/tests/integration/swarm_test.py
@@ -1,3 +1,4 @@
+import copy
import docker
import pytest
@@ -138,3 +139,26 @@ def test_inspect_node(self):
node_data = self.client.inspect_node(node['ID'])
assert node['ID'] == node_data['ID']
assert node['Version'] == node_data['Version']
+
+ @requires_api_version('1.24')
+ def test_update_node(self):
+ assert self.client.init_swarm('eth0')
+ nodes_list = self.client.nodes()
+ node = nodes_list[0]
+ orig_spec = node['Spec']
+
+ # add a new label
+ new_spec = copy.deepcopy(orig_spec)
+ new_spec['Labels'] = {'new.label': 'new value'}
+ self.client.update_node(node_id=node['ID'],
+ version=node['Version']['Index'],
+ node_spec=new_spec)
+ updated_node = self.client.inspect_node(node['ID'])
+ assert new_spec == updated_node['Spec']
+
+ # Revert the changes
+ self.client.update_node(node_id=node['ID'],
+ version=updated_node['Version']['Index'],
+ node_spec=orig_spec)
+ reverted_node = self.client.inspect_node(node['ID'])
+ assert orig_spec == reverted_node['Spec']
diff --git a/tests/unit/fake_api.py b/tests/unit/fake_api.py
index 1e9d318df5..cfe6ef777f 100644
--- a/tests/unit/fake_api.py
+++ b/tests/unit/fake_api.py
@@ -14,6 +14,7 @@
FAKE_URL = 'myurl'
FAKE_PATH = '/path'
FAKE_VOLUME_NAME = 'perfectcherryblossom'
+FAKE_NODE_ID = '24ifsmvkjbyhk'
# Each method is prefixed with HTTP method (get, post...)
# for clarity and readability
@@ -406,6 +407,10 @@ def post_fake_update_container():
return 200, {'Warnings': []}
+def post_fake_update_node():
+ return 200, None
+
+
# Maps real api url to fake response callback
prefix = 'http+docker://localunixsocket'
fake_responses = {
@@ -504,4 +509,8 @@ def post_fake_update_container():
CURRENT_VERSION, prefix, FAKE_VOLUME_NAME
), 'DELETE'):
fake_remove_volume,
+ ('{1}/{0}/nodes/{2}/update?version=1'.format(
+ CURRENT_VERSION, prefix, FAKE_NODE_ID
+ ), 'POST'):
+ post_fake_update_node,
}
diff --git a/tests/unit/swarm_test.py b/tests/unit/swarm_test.py
new file mode 100644
index 0000000000..5580383406
--- /dev/null
+++ b/tests/unit/swarm_test.py
@@ -0,0 +1,32 @@
+# -*- coding: utf-8 -*-
+
+import json
+
+from . import fake_api
+from ..base import requires_api_version
+from .api_test import (DockerClientTest, url_prefix, fake_request)
+
+
+class SwarmTest(DockerClientTest):
+ @requires_api_version('1.24')
+ def test_node_update(self):
+ node_spec = {
+ 'Availability': 'active',
+ 'Name': 'node-name',
+ 'Role': 'manager',
+ 'Labels': {'foo': 'bar'}
+ }
+
+ self.client.update_node(
+ node_id=fake_api.FAKE_NODE_ID, version=1, node_spec=node_spec
+ )
+ args = fake_request.call_args
+ self.assertEqual(
+ args[0][1], url_prefix + 'nodes/24ifsmvkjbyhk/update?version=1'
+ )
+ self.assertEqual(
+ json.loads(args[1]['data']), node_spec
+ )
+ self.assertEqual(
+ args[1]['headers']['Content-Type'], 'application/json'
+ )
| 2016-09-27T18:37:20
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"docker/api/swarm.py": "import logging\nfrom six.moves import http_client\nfrom .. import utils\nlog = logging.getLogger(__name__)\n\n\nclass SwarmApiMixin(object):\n\n def create_swarm_spec(self, *args, **kwargs):\n return utils.SwarmSpec(*args, **kwargs)\n\n @utils.minimum_version('1.24')\n def init_swarm(self, advertise_addr=None, listen_addr='0.0.0.0:2377',\n force_new_cluster=False, swarm_spec=None):\n url = self._url('/swarm/init')\n if swarm_spec is not None and not isinstance(swarm_spec, dict):\n raise TypeError('swarm_spec must be a dictionary')\n data = {\n 'AdvertiseAddr': advertise_addr,\n 'ListenAddr': listen_addr,\n 'ForceNewCluster': force_new_cluster,\n 'Spec': swarm_spec,\n }\n response = self._post_json(url, data=data)\n self._raise_for_status(response)\n return True\n\n @utils.minimum_version('1.24')\n def inspect_swarm(self):\n url = self._url('/swarm')\n return self._result(self._get(url), True)\n\n @utils.check_resource\n @utils.minimum_version('1.24')\n def inspect_node(self, node_id):\n url = self._url('/nodes/{0}', node_id)\n return self._result(self._get(url), True)\n\n @utils.minimum_version('1.24')\n def join_swarm(self, remote_addrs, join_token, listen_addr=None,\n advertise_addr=None):\n data = {\n \"RemoteAddrs\": remote_addrs,\n \"ListenAddr\": listen_addr,\n \"JoinToken\": join_token,\n \"AdvertiseAddr\": advertise_addr,\n }\n url = self._url('/swarm/join')\n response = self._post_json(url, data=data)\n self._raise_for_status(response)\n return True\n\n @utils.minimum_version('1.24')\n def leave_swarm(self, force=False):\n url = self._url('/swarm/leave')\n response = self._post(url, params={'force': force})\n # Ignore \"this node is not part of a swarm\" error\n if force and response.status_code == http_client.NOT_ACCEPTABLE:\n return True\n self._raise_for_status(response)\n return True\n\n @utils.minimum_version('1.24')\n def nodes(self, filters=None):\n url = self._url('/nodes')\n params = {}\n if filters:\n params['filters'] = utils.convert_filters(filters)\n\n return self._result(self._get(url, params=params), True)\n\n @utils.minimum_version('1.24')\n def update_swarm(self, version, swarm_spec=None, rotate_worker_token=False,\n rotate_manager_token=False):\n url = self._url('/swarm/update')\n response = self._post_json(url, data=swarm_spec, params={\n 'rotateWorkerToken': rotate_worker_token,\n 'rotateManagerToken': rotate_manager_token,\n 'version': version\n })\n self._raise_for_status(response)\n return True\n", "docs/api.md": "# Client API\n\nTo instantiate a `Client` class that will allow you to communicate with a\nDocker daemon, simply do:\n\n```python\n>>> from docker import Client\n>>> cli = Client(base_url='unix://var/run/docker.sock')\n```\n\n**Params**:\n\n* base_url (str): Refers to the protocol+hostname+port where the Docker server\nis hosted.\n* version (str): The version of the API the client will use. 
Specify `'auto'`\n to use the API version provided by the server.\n* timeout (int): The HTTP request timeout, in seconds.\n* tls (bool or [TLSConfig](tls.md#TLSConfig)): Equivalent CLI options: `docker --tls ...`\n* user_agent (str): Set a custom user agent for requests to the server.\n\n****\n\n## attach\n\nThe `.logs()` function is a wrapper around this method, which you can use\ninstead if you want to fetch/stream container output without first retrieving\nthe entire backlog.\n\n**Params**:\n\n* container (str): The container to attach to\n* stdout (bool): Get STDOUT\n* stderr (bool): Get STDERR\n* stream (bool): Return an iterator\n* logs (bool): Get all previous output\n\n**Returns** (generator or str): The logs or output for the image\n\n## build\n\nSimilar to the `docker build` command. Either `path` or `fileobj` needs to be\nset. `path` can be a local path (to a directory containing a Dockerfile) or a\nremote URL. `fileobj` must be a readable file-like object to a Dockerfile.\n\nIf you have a tar file for the Docker build context (including a Dockerfile)\nalready, pass a readable file-like object to `fileobj` and also pass\n`custom_context=True`. If the stream is compressed also, set `encoding` to the\ncorrect value (e.g `gzip`).\n\n**Params**:\n\n* path (str): Path to the directory containing the Dockerfile\n* tag (str): A tag to add to the final image\n* quiet (bool): Whether to return the status\n* fileobj: A file object to use as the Dockerfile. (Or a file-like object)\n* nocache (bool): Don't use the cache when set to `True`\n* rm (bool): Remove intermediate containers. The `docker build` command now\n defaults to ``--rm=true``, but we have kept the old default of `False`\n to preserve backward compatibility\n* stream (bool): *Deprecated for API version > 1.8 (always True)*.\n Return a blocking generator you can iterate over to retrieve build output as\n it happens\n* timeout (int): HTTP timeout\n* custom_context (bool): Optional if using `fileobj`\n* encoding (str): The encoding for a stream. Set to `gzip` for compressing\n* pull (bool): Downloads any updates to the FROM image in Dockerfiles\n* forcerm (bool): Always remove intermediate containers, even after unsuccessful builds\n* dockerfile (str): path within the build context to the Dockerfile\n* buildargs (dict): A dictionary of build arguments\n* container_limits (dict): A dictionary of limits applied to each container\n created by the build process. Valid keys:\n - memory (int): set memory limit for build\n - memswap (int): Total memory (memory + swap), -1 to disable swap\n - cpushares (int): CPU shares (relative weight)\n - cpusetcpus (str): CPUs in which to allow execution, e.g., `\"0-3\"`, `\"0,1\"`\n* decode (bool): If set to `True`, the returned stream will be decoded into\n dicts on the fly. Default `False`.\n\n**Returns** (generator): A generator for the build output\n\n```python\n>>> from io import BytesIO\n>>> from docker import Client\n>>> dockerfile = '''\n... # Shared Volume\n... FROM busybox:buildroot-2014.02\n... MAINTAINER first last, [email protected]\n... VOLUME /data\n... CMD [\"/bin/sh\"]\n... '''\n>>> f = BytesIO(dockerfile.encode('utf-8'))\n>>> cli = Client(base_url='tcp://127.0.0.1:2375')\n>>> response = [line for line in cli.build(\n... fileobj=f, rm=True, tag='yourname/volume'\n... 
)]\n>>> response\n['{\"stream\":\" ---\\\\u003e a9eb17255234\\\\n\"}',\n'{\"stream\":\"Step 1 : MAINTAINER first last, [email protected]\\\\n\"}',\n'{\"stream\":\" ---\\\\u003e Running in 08787d0ee8b1\\\\n\"}',\n'{\"stream\":\" ---\\\\u003e 23e5e66a4494\\\\n\"}',\n'{\"stream\":\"Removing intermediate container 08787d0ee8b1\\\\n\"}',\n'{\"stream\":\"Step 2 : VOLUME /data\\\\n\"}',\n'{\"stream\":\" ---\\\\u003e Running in abdc1e6896c6\\\\n\"}',\n'{\"stream\":\" ---\\\\u003e 713bca62012e\\\\n\"}',\n'{\"stream\":\"Removing intermediate container abdc1e6896c6\\\\n\"}',\n'{\"stream\":\"Step 3 : CMD [\\\\\"/bin/sh\\\\\"]\\\\n\"}',\n'{\"stream\":\" ---\\\\u003e Running in dba30f2a1a7e\\\\n\"}',\n'{\"stream\":\" ---\\\\u003e 032b8b2855fc\\\\n\"}',\n'{\"stream\":\"Removing intermediate container dba30f2a1a7e\\\\n\"}',\n'{\"stream\":\"Successfully built 032b8b2855fc\\\\n\"}']\n```\n\n**Raises:** [TypeError](\nhttps://docs.python.org/3.5/library/exceptions.html#TypeError) if `path` nor\n`fileobj` are specified\n\n## commit\n\nIdentical to the `docker commit` command.\n\n**Params**:\n\n* container (str): The image hash of the container\n* repository (str): The repository to push the image to\n* tag (str): The tag to push\n* message (str): A commit message\n* author (str): The name of the author\n* changes (str): Dockerfile instructions to apply while committing\n* conf (dict): The configuration for the container. See the [Docker remote api](\nhttps://docs.docker.com/reference/api/docker_remote_api/) for full details.\n\n## containers\n\nList containers. Identical to the `docker ps` command.\n\n**Params**:\n\n* quiet (bool): Only display numeric Ids\n* all (bool): Show all containers. Only running containers are shown by default\n* trunc (bool): Truncate output\n* latest (bool): Show only the latest created container, include non-running\nones.\n* since (str): Show only containers created since Id or Name, include\nnon-running ones\n* before (str): Show only container created before Id or Name, include\nnon-running ones\n* limit (int): Show `limit` last created containers, include non-running ones\n* size (bool): Display sizes\n* filters (dict): Filters to be processed on the image list. Available filters:\n - `exited` (int): Only containers with specified exit code\n - `status` (str): One of `restarting`, `running`, `paused`, `exited`\n - `label` (str): format either `\"key\"` or `\"key=value\"`\n - `id` (str): The id of the container.\n - `name` (str): The name of the container.\n - `ancestor` (str): Filter by container ancestor. Format of `<image-name>[:tag]`, `<image-id>`, or `<image@digest>`.\n - `before` (str): Only containers created before a particular container. Give the container name or id.\n - `since` (str): Only containers created after a particular container. 
Give container name or id.\n\n A comprehensive list can be found [here](https://docs.docker.com/engine/reference/commandline/ps/)\n\n**Returns** (dict): The system's containers\n\n```python\n>>> from docker import Client\n>>> cli = Client(base_url='tcp://127.0.0.1:2375')\n>>> cli.containers()\n[{'Command': '/bin/sleep 30',\n 'Created': 1412574844,\n 'Id': '6e276c9e6e5759e12a6a9214efec6439f80b4f37618e1a6547f28a3da34db07a',\n 'Image': 'busybox:buildroot-2014.02',\n 'Names': ['/grave_mayer'],\n 'Ports': [],\n 'Status': 'Up 1 seconds'}]\n```\n\n## connect_container_to_network\n\nConnect a container to a network.\n\n**Params**:\n\n* container (str): container-id/name to be connected to the network\n* net_id (str): network id\n* aliases (list): A list of aliases for this endpoint. Names in that list can\n be used within the network to reach the container. Defaults to `None`.\n* links (list): A list of links for this endpoint. Containers declared in this\n list will be [linked](https://docs.docker.com/engine/userguide/networking/work-with-networks/#linking-containers-in-user-defined-networks)\n to this container. Defaults to `None`.\n* ipv4_address (str): The IP address of this container on the network,\n using the IPv4 protocol. Defaults to `None`.\n* ipv6_address (str): The IP address of this container on the network,\n using the IPv6 protocol. Defaults to `None`.\n* link_local_ips (list): A list of link-local (IPv4/IPv6) addresses.\n\n## copy\nIdentical to the `docker cp` command. Get files/folders from the container.\n**Deprecated for API version >= 1.20** – Consider using\n[`get_archive`](#get_archive) **instead.**\n\n**Params**:\n\n* container (str): The container to copy from\n* resource (str): The path within the container\n\n**Returns** (str): The contents of the file as a string\n\n## create_container\n\nCreates a container that can then be `.start()` ed. Parameters are similar to\nthose for the `docker run` command except it doesn't support the attach\noptions (`-a`).\n\nSee [Port bindings](port-bindings.md) and [Using volumes](volumes.md) for more\ninformation on how to create port bindings and volume mappings.\n\nThe `mem_limit` variable accepts float values (which represent the memory limit\nof the created container in bytes) or a string with a units identification char\n('100000b', '1000k', '128m', '1g'). If a string is specified without a units\ncharacter, bytes are assumed as an intended unit.\n\n`volumes_from` and `dns` arguments raise [TypeError](\nhttps://docs.python.org/3.5/library/exceptions.html#TypeError) exception if\nthey are used against v1.10 and above of the Docker remote API. 
Those\narguments should be passed as part of the `host_config` dictionary.\n\n**Params**:\n\n* image (str): The image to run\n* command (str or list): The command to be run in the container\n* hostname (str): Optional hostname for the container\n* user (str or int): Username or UID\n* detach (bool): Detached mode: run container in the background and print new\ncontainer Id\n* stdin_open (bool): Keep STDIN open even if not attached\n* tty (bool): Allocate a pseudo-TTY\n* mem_limit (float or str): Memory limit (format: [number][optional unit],\nwhere unit = b, k, m, or g)\n* ports (list of ints): A list of port numbers\n* environment (dict or list): A dictionary or a list of strings in the\nfollowing format `[\"PASSWORD=xxx\"]` or `{\"PASSWORD\": \"xxx\"}`.\n* dns (list): DNS name servers\n* dns_opt (list): Additional options to be added to the container's `resolv.conf` file\n* volumes (str or list):\n* volumes_from (str or list): List of container names or Ids to get volumes\nfrom. Optionally a single string joining container id's with commas\n* network_disabled (bool): Disable networking\n* name (str): A name for the container\n* entrypoint (str or list): An entrypoint\n* working_dir (str): Path to the working directory\n* domainname (str or list): Set custom DNS search domains\n* memswap_limit (int):\n* host_config (dict): A [HostConfig](hostconfig.md) dictionary\n* mac_address (str): The Mac Address to assign the container\n* labels (dict or list): A dictionary of name-value labels (e.g. `{\"label1\": \"value1\", \"label2\": \"value2\"}`) or a list of names of labels to set with empty values (e.g. `[\"label1\", \"label2\"]`)\n* volume_driver (str): The name of a volume driver/plugin.\n* stop_signal (str): The stop signal to use to stop the container (e.g. `SIGINT`).\n* networking_config (dict): A [NetworkingConfig](networks.md) dictionary\n\n**Returns** (dict): A dictionary with an image 'Id' key and a 'Warnings' key.\n\n```python\n>>> from docker import Client\n>>> cli = Client(base_url='tcp://127.0.0.1:2375')\n>>> container = cli.create_container(image='busybox:latest', command='/bin/sleep 30')\n>>> print(container)\n{'Id': '8a61192da2b3bb2d922875585e29b74ec0dc4e0117fcbf84c962204e97564cd7',\n 'Warnings': None}\n```\n\n### docker.utils.parse_env_file\n\nA utility for parsing an environment file.\n\nThe expected format of the file is as follows:\n\n```\nUSERNAME=jdoe\nPASSWORD=secret\n```\n\nThe utility can be used as follows:\n\n```python\n>>> import docker.utils\n>>> my_envs = docker.utils.parse_env_file('/path/to/file')\n>>> client.create_container('myimage', 'command', environment=my_envs)\n```\n\n## create_network\n\nCreate a network, similar to the `docker network create` command. See the\n[networks documentation](networks.md) for details.\n\n**Params**:\n\n* name (str): Name of the network\n* driver (str): Name of the driver used to create the network\n* options (dict): Driver options as a key-value dictionary\n* ipam (dict): Optional custom IP scheme for the network\n* check_duplicate (bool): Request daemon to check for networks with same name.\n Default: `True`.\n* internal (bool): Restrict external access to the network. Default `False`.\n* labels (dict): Map of labels to set on the network. Default `None`.\n* enable_ipv6 (bool): Enable IPv6 on the network. Default `False`.\n\n**Returns** (dict): The created network reference object\n\n## create_service\n\nCreate a service, similar to the `docker service create` command. 
See the\n[services documentation](services.md#Clientcreate_service) for details.\n\n## create_volume\n\nCreate and register a named volume\n\n**Params**:\n\n* name (str): Name of the volume\n* driver (str): Name of the driver used to create the volume\n* driver_opts (dict): Driver options as a key-value dictionary\n* labels (dict): Labels to set on the volume\n\n**Returns** (dict): The created volume reference object\n\n```python\n>>> from docker import Client\n>>> cli = Client()\n>>> volume = cli.create_volume(\n name='foobar', driver='local', driver_opts={'foo': 'bar', 'baz': 'false'},\n labels={\"key\": \"value\"}\n)\n>>> print(volume)\n{\n u'Mountpoint': u'/var/lib/docker/volumes/foobar/_data',\n u'Driver': u'local',\n u'Name': u'foobar',\n u'Labels': {u'key': u'value'}\n}\n```\n\n## diff\n\nInspect changes on a container's filesystem.\n\n**Params**:\n\n* container (str): The container to diff\n\n**Returns** (str):\n\n## disconnect_container_from_network\n\n**Params**:\n\n* container (str): container-id/name to be disconnected from a network\n* net_id (str): network id\n* force (bool): Force the container to disconnect from a network.\n Default: `False`\n\n## events\n\nIdentical to the `docker events` command: get real time events from the server. The `events`\nfunction return a blocking generator you can iterate over to retrieve events as they happen.\n\n**Params**:\n\n* since (UTC datetime or int): get events from this point\n* until (UTC datetime or int): get events until this point\n* filters (dict): filter the events by event time, container or image\n* decode (bool): If set to true, stream will be decoded into dicts on the\n fly. False by default.\n\n**Returns** (generator):\n\n```python\n{u'status': u'start',\n u'from': u'image/with:tag',\n u'id': u'container-id',\n u'time': 1423339459}\n```\n\n## execute\n\nThis command is deprecated for docker-py >= 1.2.0 ; use `exec_create` and\n`exec_start` instead.\n\n## exec_create\n\nSets up an exec instance in a running container.\n\n**Params**:\n\n* container (str): Target container where exec instance will be created\n* cmd (str or list): Command to be executed\n* stdout (bool): Attach to stdout of the exec command if true. Default: True\n* stderr (bool): Attach to stderr of the exec command if true. Default: True\n* since (UTC datetime or int): Output logs from this timestamp. Default: `None` (all logs are given)\n* tty (bool): Allocate a pseudo-TTY. Default: False\n* user (str): User to execute command as. Default: root\n\n**Returns** (dict): A dictionary with an exec 'Id' key.\n\n\n## exec_inspect\n\nReturn low-level information about an exec command.\n\n**Params**:\n\n* exec_id (str): ID of the exec instance\n\n**Returns** (dict): Dictionary of values returned by the endpoint.\n\n\n## exec_resize\n\nResize the tty session used by the specified exec command.\n\n**Params**:\n\n* exec_id (str): ID of the exec instance\n* height (int): Height of tty session\n* width (int): Width of tty session\n\n## exec_start\n\nStart a previously set up exec instance.\n\n**Params**:\n\n* exec_id (str): ID of the exec instance\n* detach (bool): If true, detach from the exec command. Default: False\n* tty (bool): Allocate a pseudo-TTY. Default: False\n* stream (bool): Stream response data. Default: False\n\n**Returns** (generator or str): If `stream=True`, a generator yielding response\nchunks. 
A string containing response data otherwise.\n\n## export\n\nExport the contents of a filesystem as a tar archive to STDOUT.\n\n**Params**:\n\n* container (str): The container to export\n\n**Returns** (str): The filesystem tar archive as a str\n\n## get_archive\n\nRetrieve a file or folder from a container in the form of a tar archive.\n\n**Params**:\n\n* container (str): The container where the file is located\n* path (str): Path to the file or folder to retrieve\n\n**Returns** (tuple): First element is a raw tar data stream. Second element is\na dict containing `stat` information on the specified `path`.\n\n```python\n>>> import docker\n>>> cli = docker.Client()\n>>> ctnr = cli.create_container('busybox', 'true')\n>>> strm, stat = cli.get_archive(ctnr, '/bin/sh')\n>>> print(stat)\n{u'linkTarget': u'', u'mode': 493, u'mtime': u'2015-09-16T12:34:23-07:00', u'name': u'sh', u'size': 962860}\n```\n\n## get_image\n\nGet an image from the docker daemon. Similar to the `docker save` command.\n\n**Params**:\n\n* image (str): Image name to get\n\n**Returns** (urllib3.response.HTTPResponse object): The response from the docker daemon\n\nAn example of how to get (save) an image to a file.\n```python\n>>> from docker import Client\n>>> cli = Client(base_url='unix://var/run/docker.sock')\n>>> image = cli.get_image(“fedora:latest”)\n>>> image_tar = open(‘/tmp/fedora-latest.tar’,’w’)\n>>> image_tar.write(image.data)\n>>> image_tar.close()\n```\n\n## history\n\nShow the history of an image.\n\n**Params**:\n\n* image (str): The image to show history for\n\n**Returns** (str): The history of the image\n\n## images\n\nList images. Identical to the `docker images` command.\n\n**Params**:\n\n* name (str): Only show images belonging to the repository `name`\n* quiet (bool): Only show numeric Ids. Returns a list\n* all (bool): Show all images (by default filter out the intermediate image\nlayers)\n* filters (dict): Filters to be processed on the image list. Available filters:\n - `dangling` (bool)\n - `label` (str): format either `\"key\"` or `\"key=value\"`\n\n**Returns** (dict or list): A list if `quiet=True`, otherwise a dict.\n\n```python\n[{'Created': 1401926735,\n'Id': 'a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721',\n'ParentId': '120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16',\n'RepoTags': ['busybox:buildroot-2014.02', 'busybox:latest'],\n'Size': 0,\n'VirtualSize': 2433303},\n...\n```\n\n## import_image\n\nSimilar to the `docker import` command.\n\nIf `src` is a string or unicode string, it will first be treated as a path to\na tarball on the local system. If there is an error reading from that file,\nsrc will be treated as a URL instead to fetch the image from. 
You can also pass\nan open file handle as 'src', in which case the data will be read from that\nfile.\n\nIf `src` is unset but `image` is set, the `image` parameter will be taken as\nthe name of an existing image to import from.\n\n**Params**:\n\n* src (str or file): Path to tarfile, URL, or file-like object\n* repository (str): The repository to create\n* tag (str): The tag to apply\n* image (str): Use another image like the `FROM` Dockerfile parameter\n\n## import_image_from_data\n\nLike `.import_image()`, but allows importing in-memory bytes data.\n\n**Params**:\n\n* data (bytes collection): Bytes collection containing valid tar data\n* repository (str): The repository to create\n* tag (str): The tag to apply\n\n## import_image_from_file\n\nLike `.import_image()`, but only supports importing from a tar file on\ndisk. If the file doesn't exist it will raise `IOError`.\n\n**Params**:\n\n* filename (str): Full path to a tar file.\n* repository (str): The repository to create\n* tag (str): The tag to apply\n\n## import_image_from_url\n\nLike `.import_image()`, but only supports importing from a URL.\n\n**Params**:\n\n* url (str): A URL pointing to a tar file.\n* repository (str): The repository to create\n* tag (str): The tag to apply\n\n## import_image_from_image\n\nLike `.import_image()`, but only supports importing from another image,\nlike the `FROM` Dockerfile parameter.\n\n**Params**:\n\n* image (str): Image name to import from\n* repository (str): The repository to create\n* tag (str): The tag to apply\n\n## info\n\nDisplay system-wide information. Identical to the `docker info` command.\n\n**Returns** (dict): The info as a dict\n\n```\n>>> from docker import Client\n>>> cli = Client(base_url='tcp://127.0.0.1:2375')\n>>> cli.info()\n{'Containers': 3,\n 'Debug': 1,\n 'Driver': 'aufs',\n 'DriverStatus': [['Root Dir', '/mnt/sda1/var/lib/docker/aufs'],\n ['Dirs', '225']],\n 'ExecutionDriver': 'native-0.2',\n 'IPv4Forwarding': 1,\n 'Images': 219,\n 'IndexServerAddress': 'https://index.docker.io/v1/',\n 'InitPath': '/usr/local/bin/docker',\n 'InitSha1': '',\n 'KernelVersion': '3.16.1-tinycore64',\n 'MemoryLimit': 1,\n 'NEventsListener': 0,\n 'NFd': 11,\n 'NGoroutines': 12,\n 'OperatingSystem': 'Boot2Docker 1.2.0 (TCL 5.3);',\n 'SwapLimit': 1}\n```\n\n## init_swarm\n\nInitialize a new Swarm using the current connected engine as the first node.\nSee the [Swarm documentation](swarm.md#clientinit_swarm).\n\n## insert\n*DEPRECATED*\n\n## inspect_container\n\nIdentical to the `docker inspect` command, but only for containers.\n\n**Params**:\n\n* container (str): The container to inspect\n\n**Returns** (dict): Nearly the same output as `docker inspect`, just as a\nsingle dict\n\n## inspect_image\n\nIdentical to the `docker inspect` command, but only for images.\n\n**Params**:\n\n* image (str): The image to inspect\n\n**Returns** (dict): Nearly the same output as `docker inspect`, just as a\nsingle dict\n\n## inspect_network\n\nRetrieve network info by id.\n\n**Params**:\n\n* net_id (str): network id\n\n**Returns** (dict): Network information dictionary\n\n## inspect_node\n\nRetrieve low-level information about a Swarm node.\nSee the [Swarm documentation](swarm.md#clientinspect_node).\n\n## inspect_service\n\nCreate a service, similar to the `docker service create` command. 
See the\n[services documentation](services.md#clientinspect_service) for details.\n\n## inspect_swarm\n\nRetrieve information about the current Swarm.\nSee the [Swarm documentation](swarm.md#clientinspect_swarm).\n\n## inspect_task\n\nRetrieve information about a task.\n\n**Params**:\n\n* task (str): Task identifier\n\n**Returns** (dict): Task information dictionary\n\n## inspect_volume\n\nRetrieve volume info by name.\n\n**Params**:\n\n* name (str): volume name\n\n**Returns** (dict): Volume information dictionary\n\n```python\n>>> cli.inspect_volume('foobar')\n{u'Mountpoint': u'/var/lib/docker/volumes/foobar/_data', u'Driver': u'local', u'Name': u'foobar'}\n```\n\n## join_swarm\n\nJoin an existing Swarm.\nSee the [Swarm documentation](swarm.md#clientjoin_swarm).\n\n## kill\n\nKill a container or send a signal to a container.\n\n**Params**:\n\n* container (str): The container to kill\n* signal (str or int): The signal to send. Defaults to `SIGKILL`\n\n## leave_swarm\n\nLeave the current Swarm.\nSee the [Swarm documentation](swarm.md#clientleave_swarm).\n\n## load_image\n\nLoad an image that was previously saved using `Client.get_image`\n(or `docker save`). Similar to `docker load`.\n\n**Params**:\n\n* data (binary): Image data to be loaded\n\n## login\n\nNearly identical to the `docker login` command, but non-interactive.\n\n**Params**:\n\n* username (str): The registry username\n* password (str): The plaintext password\n* email (str): The email for the registry account\n* registry (str): URL to the registry. Ex:`https://index.docker.io/v1/`\n* reauth (bool): Whether refresh existing authentication on the docker server.\n* dockercfg_path (str): Use a custom path for the .dockercfg file\n (default `$HOME/.dockercfg`)\n\n**Returns** (dict): The response from the login request\n\n## logs\n\nIdentical to the `docker logs` command. The `stream` parameter makes the `logs`\nfunction return a blocking generator you can iterate over to retrieve log\noutput as it happens.\n\n**Params**:\n\n* container (str): The container to get logs from\n* stdout (bool): Get STDOUT\n* stderr (bool): Get STDERR\n* stream (bool): Stream the response\n* timestamps (bool): Show timestamps\n* tail (str or int): Output specified number of lines at the end of logs: `\"all\"` or `number`. Default `\"all\"`\n* since (datetime or int): Show logs since a given datetime or integer epoch (in seconds)\n* follow (bool): Follow log output\n\n**Returns** (generator or str):\n\n## networks\n\nList networks currently registered by the docker daemon. Similar to the `docker networks ls` command.\n\n**Params**\n\n* names (list): List of names to filter by\n* ids (list): List of ids to filter by\n\nThe above are combined to create a filters dict.\n\n**Returns** (dict): List of network objects.\n\n## nodes\n\nList Swarm nodes. See the [Swarm documentation](swarm.md#clientnodes).\n\n## pause\n\nPauses all processes within a container.\n\n**Params**:\n\n* container (str): The container to pause\n\n\n## ping\n\nHits the `/_ping` endpoint of the remote API and returns the result. An\nexception will be raised if the endpoint isn't responding.\n\n**Returns** (bool)\n\n## port\nLookup the public-facing port that is NAT-ed to `private_port`. 
Identical to\nthe `docker port` command.\n\n**Params**:\n\n* container (str): The container to look up\n* private_port (int): The private port to inspect\n\n**Returns** (list of dict): The mapping for the host ports\n\n```bash\n$ docker run -d -p 80:80 ubuntu:14.04 /bin/sleep 30\n7174d6347063a83f412fad6124c99cffd25ffe1a0807eb4b7f9cec76ac8cb43b\n```\n```python\n>>> cli.port('7174d6347063', 80)\n[{'HostIp': '0.0.0.0', 'HostPort': '80'}]\n```\n\n## pull\n\nIdentical to the `docker pull` command.\n\n**Params**:\n\n* repository (str): The repository to pull\n* tag (str): The tag to pull\n* stream (bool): Stream the output as a generator\n* insecure_registry (bool): Use an insecure registry\n* auth_config (dict): Override the credentials that Client.login has set for this request\n `auth_config` should contain the `username` and `password` keys to be valid.\n\n**Returns** (generator or str): The output\n\n```python\n>>> from docker import Client\n>>> cli = Client(base_url='tcp://127.0.0.1:2375')\n>>> for line in cli.pull('busybox', stream=True):\n... print(json.dumps(json.loads(line), indent=4))\n{\n \"status\": \"Pulling image (latest) from busybox\",\n \"progressDetail\": {},\n \"id\": \"e72ac664f4f0\"\n}\n{\n \"status\": \"Pulling image (latest) from busybox, endpoint: ...\",\n \"progressDetail\": {},\n \"id\": \"e72ac664f4f0\"\n}\n```\n\n## push\n\nPush an image or a repository to the registry. Identical to the `docker push`\ncommand.\n\n**Params**:\n\n* repository (str): The repository to push to\n* tag (str): An optional tag to push\n* stream (bool): Stream the output as a blocking generator\n* insecure_registry (bool): Use `http://` to connect to the registry\n* auth_config (dict): Override the credentials that Client.login has set for this request\n `auth_config` should contain the `username` and `password` keys to be valid.\n\n**Returns** (generator or str): The output of the upload\n\n```python\n>>> from docker import Client\n>>> cli = Client(base_url='tcp://127.0.0.1:2375')\n>>> response = [line for line in cli.push('yourname/app', stream=True)]\n>>> response\n['{\"status\":\"Pushing repository yourname/app (1 tags)\"}\\\\n',\n '{\"status\":\"Pushing\",\"progressDetail\":{},\"id\":\"511136ea3c5a\"}\\\\n',\n '{\"status\":\"Image already pushed, skipping\",\"progressDetail\":{},\n \"id\":\"511136ea3c5a\"}\\\\n',\n ...\n '{\"status\":\"Pushing tag for rev [918af568e6e5] on {\n https://cdn-registry-1.docker.io/v1/repositories/\n yourname/app/tags/latest}\"}\\\\n']\n```\n\n## put_archive\n\nInsert a file or folder in an existing container using a tar archive as source.\n\n**Params**:\n\n* container (str): The container where the file(s) will be extracted\n* path (str): Path inside the container where the file(s) will be extracted.\n Must exist.\n* data (bytes): tar data to be extracted\n\n**Returns** (bool): True if the call succeeds. `docker.errors.APIError` will\nbe raised if an error occurs.\n\n## remove_container\n\nRemove a container. Similar to the `docker rm` command.\n\n**Params**:\n\n* container (str): The container to remove\n* v (bool): Remove the volumes associated with the container\n* link (bool): Remove the specified link and not the underlying container\n* force (bool): Force the removal of a running container (uses SIGKILL)\n\n## remove_image\n\nRemove an image. 
Similar to the `docker rmi` command.\n\n**Params**:\n\n* image (str): The image to remove\n* force (bool): Force removal of the image\n* noprune (bool): Do not delete untagged parents\n\n## remove_network\n\nRemove a network. Similar to the `docker network rm` command.\n\n**Params**:\n\n* net_id (str): The network's id\n\nFailure to remove will raise a `docker.errors.APIError` exception.\n\n## remove_service\n\nRemove a service, similar to the `docker service rm` command. See the\n[services documentation](services.md#clientremove_service) for details.\n\n## remove_volume\n\nRemove a volume. Similar to the `docker volume rm` command.\n\n**Params**:\n\n* name (str): The volume's name\n\nFailure to remove will raise a `docker.errors.APIError` exception.\n\n## rename\n\nRename a container. Similar to the `docker rename` command.\n\n**Params**:\n\n* container (str): ID of the container to rename\n* name (str): New name for the container\n\n## resize\n\nResize the tty session.\n\n**Params**:\n\n* container (str or dict): The container to resize\n* height (int): Height of tty session\n* width (int): Width of tty session\n\n## restart\n\nRestart a container. Similar to the `docker restart` command.\n\nIf `container` a dict, the `Id` key is used.\n\n**Params**:\n\n* container (str or dict): The container to restart\n* timeout (int): Number of seconds to try to stop for before killing the\ncontainer. Once killed it will then be restarted. Default is 10 seconds.\n\n## search\nIdentical to the `docker search` command.\n\n**Params**:\n\n* term (str): A term to search for\n\n**Returns** (list of dicts): The response of the search\n\n```python\n>>> from docker import Client\n>>> cli = Client(base_url='tcp://127.0.0.1:2375')\n>>> response = cli.search('nginx')\n>>> response[:2]\n[{'description': 'Official build of Nginx.',\n 'is_official': True,\n 'is_trusted': False,\n 'name': 'nginx',\n 'star_count': 266},\n {'description': 'Trusted automated Nginx (http://nginx.org/) ...',\n 'is_official': False,\n 'is_trusted': True,\n 'name': 'dockerfile/nginx',\n 'star_count': 60},\n ...\n```\n\n## services\n\nList services, similar to the `docker service ls` command. See the\n[services documentation](services.md#clientservices) for details.\n\n## start\n\nSimilar to the `docker start` command, but doesn't support attach options. Use\n`.logs()` to recover `stdout`/`stderr`.\n\n**Params**:\n\n* container (str): The container to start\n\n**Deprecation warning:** For API version > 1.15, it is highly recommended to\n provide host config options in the\n [`host_config` parameter of `create_container`](#create_container)\n\n```python\n>>> from docker import Client\n>>> cli = Client(base_url='tcp://127.0.0.1:2375')\n>>> container = cli.create_container(\n... image='busybox:latest',\n... command='/bin/sleep 30')\n>>> response = cli.start(container=container.get('Id'))\n>>> print(response)\nNone\n```\n\n## stats\n\nThe Docker API parallel to the `docker stats` command.\nThis will stream statistics for a specific container.\n\n**Params**:\n\n* container (str): The container to stream statistics for\n* decode (bool): If set to true, stream will be decoded into dicts on the\n fly. False by default.\n* stream (bool): If set to false, only the current stats will be returned\n instead of a stream. 
True by default.\n\n```python\n>>> from docker import Client\n>>> cli = Client(base_url='tcp://127.0.0.1:2375')\n>>> stats_obj = cli.stats('elasticsearch')\n>>> for stat in stats_obj:\n>>> print(stat)\n{\"read\":\"2015-02-11T21:47:30.49388286+02:00\",\"networks\":{\"eth0\":{\"rx_bytes\":648,\"rx_packets\":8 ...\n...\n...\n...\n```\n\n## stop\n\nStops a container. Similar to the `docker stop` command.\n\n**Params**:\n\n* container (str): The container to stop\n* timeout (int): Timeout in seconds to wait for the container to stop before\nsending a `SIGKILL`. Default: 10\n\n## tag\n\nTag an image into a repository. Identical to the `docker tag` command.\n\n**Params**:\n\n* image (str): The image to tag\n* repository (str): The repository to set for the tag\n* tag (str): The tag name\n* force (bool): Force\n\n**Returns** (bool): True if successful\n\n## tasks\n\nRetrieve a list of tasks.\n\n**Params**:\n\n* filters (dict): A map of filters to process on the tasks list. Valid filters:\n `id`, `name`, `service`, `node`, `label` and `desired-state`.\n\n**Returns** (list): List of task dictionaries.\n\n## top\nDisplay the running processes of a container.\n\n**Params**:\n\n* container (str): The container to inspect\n* ps_args (str): An optional arguments passed to ps (e.g., aux)\n\n**Returns** (str): The output of the top\n\n```python\n>>> from docker import Client\n>>> cli = Client(base_url='tcp://127.0.0.1:2375')\n>>> cli.create_container('busybox:latest', '/bin/sleep 30', name='sleeper')\n>>> cli.start('sleeper')\n>>> cli.top('sleeper')\n{'Processes': [['952', 'root', '/bin/sleep 30']],\n 'Titles': ['PID', 'USER', 'COMMAND']}\n```\n\n## unpause\n\nUnpause all processes within a container.\n\n**Params**:\n\n* container (str): The container to unpause\n\n## update_container\n\nUpdate resource configs of one or more containers.\n\n**Params**:\n\n* container (str): The container to inspect\n* blkio_weight (int): Block IO (relative weight), between 10 and 1000\n* cpu_period (int): Limit CPU CFS (Completely Fair Scheduler) period\n* cpu_quota (int): Limit CPU CFS (Completely Fair Scheduler) quota\n* cpu_shares (int): CPU shares (relative weight)\n* cpuset_cpus (str): CPUs in which to allow execution\n* cpuset_mems (str): MEMs in which to allow execution\n* mem_limit (int or str): Memory limit\n* mem_reservation (int or str): Memory soft limit\n* memswap_limit (int or str): Total memory (memory + swap), -1 to disable swap\n* kernel_memory (int or str): Kernel memory limit\n\n**Returns** (dict): Dictionary containing a `Warnings` key.\n\n## update_service\n\nUpdate a service, similar to the `docker service update` command. See the\n[services documentation](services.md#clientupdate_service) for details.\n\n## update_swarm\n\nUpdate the current Swarm.\nSee the [Swarm documentation](swarm.md#clientupdate_swarm).\n\n## version\n\nNearly identical to the `docker version` command.\n\n**Returns** (dict): The server version information\n\n```python\n>>> from docker import Client\n>>> cli = Client(base_url='tcp://127.0.0.1:2375')\n>>> cli.version()\n{\n \"KernelVersion\": \"3.16.4-tinycore64\",\n \"Arch\": \"amd64\",\n \"ApiVersion\": \"1.15\",\n \"Version\": \"1.3.0\",\n \"GitCommit\": \"c78088f\",\n \"Os\": \"linux\",\n \"GoVersion\": \"go1.3.3\"\n}\n```\n\n## volumes\n\nList volumes currently registered by the docker daemon. 
Similar to the `docker volume ls` command.\n\n**Params**\n\n* filters (dict): Server-side list filtering options.\n\n**Returns** (dict): Dictionary with list of volume objects as value of the `Volumes` key.\n\n```python\n>>> cli.volumes()\n{u'Volumes': [\n {u'Mountpoint': u'/var/lib/docker/volumes/foobar/_data', u'Driver': u'local', u'Name': u'foobar'},\n {u'Mountpoint': u'/var/lib/docker/volumes/baz/_data', u'Driver': u'local', u'Name': u'baz'}\n]}\n```\n\n## wait\nIdentical to the `docker wait` command. Block until a container stops, then\nreturn its exit code. Returns the value `-1` if the API responds without a\n`StatusCode` attribute.\n\nIf `container` is a dict, the `Id` key is used.\n\nIf the timeout value is exceeded, a `requests.exceptions.ReadTimeout`\nexception will be raised.\n\n**Params**:\n\n* container (str or dict): The container to wait on\n* timeout (int): Request timeout\n\n**Returns** (int): The exit code of the container\n\n\n<!---\nTODO:\n\n* load_image\n\n-->\n", "docs/swarm.md": "# Swarm management\n\nStarting with Engine version 1.12 (API 1.24), it is possible to manage the\nengine's associated Swarm cluster using the API.\n\n## Initializing a new Swarm\n\nYou can initialize a new Swarm by calling `Client.init_swarm`. An advertising\naddress needs to be provided, usually simply by indicating which network\ninterface needs to be used. Advanced options are provided using the\n`swarm_spec` parameter, which can easily be created using\n`Client.create_swarm_spec`.\n\n```python\nspec = client.create_swarm_spec(\n snapshot_interval=5000, log_entries_for_slow_followers=1200\n)\nclient.init_swarm(\n advertise_addr='eth0', listen_addr='0.0.0.0:5000', force_new_cluster=False,\n swarm_spec=spec\n)\n```\n\n## Joining an existing Swarm\n\nIf you're looking to have the engine your client is connected to join an\nexisting Swarm, this can be accomplished by using the `Client.join_swarm`\nmethod. You will need to provide a list of at least one remote address\ncorresponding to other machines already part of the swarm as well as the\n`join_token`. In most cases, a `listen_addr` and `advertise_addr` for your\nnode are also required.\n\n```python\nclient.join_swarm(\n remote_addrs=['192.168.14.221:2377'], join_token='SWMTKN-1-redacted',\n listen_addr='0.0.0.0:5000', advertise_addr='eth0:5000'\n)\n```\n\n## Leaving the Swarm\n\nTo leave the swarm you are currently a member of, simply use\n`Client.leave_swarm`. Note that if your engine is the Swarm's manager,\nyou will need to specify `force=True` to be able to leave.\n\n```python\nclient.leave_swarm(force=False)\n```\n\n## Retrieving Swarm status\n\nYou can retrieve information about your current Swarm status by calling\n`Client.inspect_swarm`. This method takes no arguments.\n\n```python\nclient.inspect_swarm()\n```\n\n## Listing Swarm nodes\n\nList all nodes that are part of the current Swarm using `Client.nodes`.\nThe `filters` argument allows to filter the results.\n\n```python\nclient.nodes(filters={'role': 'manager'})\n```\n\n## Swarm API documentation\n\n### Client.init_swarm\n\nInitialize a new Swarm using the current connected engine as the first node.\n\n**Params:**\n\n* advertise_addr (string): Externally reachable address advertised to other\n nodes. This can either be an address/port combination in the form\n `192.168.1.1:4567`, or an interface followed by a port number, like\n `eth0:4567`. If the port number is omitted, the port number from the listen\n address is used. 
If `advertise_addr` is not specified, it will be\n automatically detected when possible. Default: None\n* listen_addr (string): Listen address used for inter-manager communication,\n as well as determining the networking interface used for the VXLAN Tunnel\n Endpoint (VTEP). This can either be an address/port combination in the form\n `192.168.1.1:4567`, or an interface followed by a port number, like\n `eth0:4567`. If the port number is omitted, the default swarm listening port\n is used. Default: '0.0.0.0:2377'\n* force_new_cluster (bool): Force creating a new Swarm, even if already part of\n one. Default: False\n* swarm_spec (dict): Configuration settings of the new Swarm. Use\n `Client.create_swarm_spec` to generate a valid configuration. Default: None\n\n**Returns:** `True` if the request went through. Raises an `APIError` if it\n fails.\n\n#### Client.create_swarm_spec\n\nCreate a `docker.types.SwarmSpec` instance that can be used as the `swarm_spec`\nargument in `Client.init_swarm`.\n\n**Params:**\n\n* task_history_retention_limit (int): Maximum number of tasks history stored.\n* snapshot_interval (int): Number of logs entries between snapshot.\n* keep_old_snapshots (int): Number of snapshots to keep beyond the current\n snapshot.\n* log_entries_for_slow_followers (int): Number of log entries to keep around\n to sync up slow followers after a snapshot is created.\n* heartbeat_tick (int): Amount of ticks (in seconds) between each heartbeat.\n* election_tick (int): Amount of ticks (in seconds) needed without a leader to\n trigger a new election.\n* dispatcher_heartbeat_period (int): The delay for an agent to send a\n heartbeat to the dispatcher.\n* node_cert_expiry (int): Automatic expiry for nodes certificates.\n* external_ca (dict): Configuration for forwarding signing requests to an\n external certificate authority. Use `docker.types.SwarmExternalCA`.\n* name (string): Swarm's name\n\n**Returns:** `docker.types.SwarmSpec` instance.\n\n#### docker.types.SwarmExternalCA\n\nCreate a configuration dictionary for the `external_ca` argument in a\n`SwarmSpec`.\n\n**Params:**\n\n* protocol (string): Protocol for communication with the external CA (currently\n only “cfssl” is supported).\n* url (string): URL where certificate signing requests should be sent.\n* options (dict): An object with key/value pairs that are interpreted as\n protocol-specific options for the external CA driver.\n\n### Client.inspect_node\n\nRetrieve low-level information about a Swarm node\n\n**Params:**\n\n* node_id (string): ID of the node to be inspected.\n\n**Returns:** A dictionary containing data about this node. 
See sample below.\n\n```python\n{u'CreatedAt': u'2016-08-11T23:28:39.695834296Z',\n u'Description': {u'Engine': {u'EngineVersion': u'1.12.0',\n u'Plugins': [{u'Name': u'bridge', u'Type': u'Network'},\n {u'Name': u'host', u'Type': u'Network'},\n {u'Name': u'null', u'Type': u'Network'},\n {u'Name': u'overlay', u'Type': u'Network'},\n {u'Name': u'local', u'Type': u'Volume'}]},\n u'Hostname': u'dockerserv-1.local.net',\n u'Platform': {u'Architecture': u'x86_64', u'OS': u'linux'},\n u'Resources': {u'MemoryBytes': 8052109312, u'NanoCPUs': 4000000000}},\n u'ID': u'1kqami616p23dz4hd7km35w63',\n u'ManagerStatus': {u'Addr': u'10.0.131.127:2377',\n u'Leader': True,\n u'Reachability': u'reachable'},\n u'Spec': {u'Availability': u'active', u'Role': u'manager'},\n u'Status': {u'State': u'ready'},\n u'UpdatedAt': u'2016-08-11T23:28:39.979829529Z',\n u'Version': {u'Index': 9}}\n ```\n\n### Client.inspect_swarm\n\nRetrieve information about the current Swarm.\n\n**Returns:** A dictionary containing information about the Swarm. See sample\n below.\n\n```python\n{u'CreatedAt': u'2016-08-04T21:26:18.779800579Z',\n u'ID': u'8hk6e9wh4iq214qtbgvbp84a9',\n u'JoinTokens': {u'Manager': u'SWMTKN-1-redacted-1',\n u'Worker': u'SWMTKN-1-redacted-2'},\n u'Spec': {u'CAConfig': {u'NodeCertExpiry': 7776000000000000},\n u'Dispatcher': {u'HeartbeatPeriod': 5000000000},\n u'Name': u'default',\n u'Orchestration': {u'TaskHistoryRetentionLimit': 10},\n u'Raft': {u'ElectionTick': 3,\n u'HeartbeatTick': 1,\n u'LogEntriesForSlowFollowers': 500,\n u'SnapshotInterval': 10000},\n u'TaskDefaults': {}},\n u'UpdatedAt': u'2016-08-04T21:26:19.391623265Z',\n u'Version': {u'Index': 11}}\n```\n\n### Client.join_swarm\n\nJoin an existing Swarm.\n\n**Params:**\n\n* remote_addrs (list): Addresses of one or more manager nodes already\n participating in the Swarm to join.\n* join_token (string): Secret token for joining this Swarm.\n* listen_addr (string): Listen address used for inter-manager communication\n if the node gets promoted to manager, as well as determining the networking\n interface used for the VXLAN Tunnel Endpoint (VTEP). Default: `None`\n* advertise_addr (string): Externally reachable address advertised to other\n nodes. This can either be an address/port combination in the form\n `192.168.1.1:4567`, or an interface followed by a port number, like\n `eth0:4567`. If the port number is omitted, the port number from the listen\n address is used. If AdvertiseAddr is not specified, it will be automatically\n detected when possible. Default: `None`\n\n**Returns:** `True` if the request went through. Raises an `APIError` if it\n fails.\n\n### Client.leave_swarm\n\nLeave a Swarm.\n\n**Params:**\n\n* force (bool): Leave the Swarm even if this node is a manager.\n Default: `False`\n\n**Returns:** `True` if the request went through. Raises an `APIError` if it\n fails.\n\n### Client.nodes\n\nList Swarm nodes\n\n**Params:**\n\n* filters (dict): Filters to process on the nodes list. Valid filters:\n `id`, `name`, `membership` and `role`. Default: `None`\n\n**Returns:** A list of dictionaries containing data about each swarm node.\n\n### Client.update_swarm\n\nUpdate the Swarm's configuration\n\n**Params:**\n\n* version (int): The version number of the swarm object being updated. This\n is required to avoid conflicting writes.\n* swarm_spec (dict): Configuration settings to update. Use\n `Client.create_swarm_spec` to generate a valid configuration.\n Default: `None`.\n* rotate_worker_token (bool): Rotate the worker join token. 
Default: `False`.\n* rotate_manager_token (bool): Rotate the manager join token. Default: `False`.\n\n**Returns:** `True` if the request went through. Raises an `APIError` if it\n fails.\n"}
|
diff --git a/docs/api.md b/docs/api.md
index 1699344a66..5cadb83081 100644
--- a/docs/api.md
+++ b/docs/api.md
@@ -1129,6 +1129,11 @@ Update resource configs of one or more containers.
**Returns** (dict): Dictionary containing a `Warnings` key.
+## update_node
+
+Update a node.
+See the [Swarm documentation](swarm.md#clientupdate_node).
+
## update_service
Update a service, similar to the `docker service update` command. See the
diff --git a/docs/swarm.md b/docs/swarm.md
index 3cc44f8741..20c3945352 100644
--- a/docs/swarm.md
+++ b/docs/swarm.md
@@ -232,6 +232,30 @@ List Swarm nodes
**Returns:** A list of dictionaries containing data about each swarm node.
+### Client.update_node
+
+Update the Node's configuration
+
+**Params:**
+
+* version (int): The version number of the node object being updated. This
+ is required to avoid conflicting writes.
+* node_spec (dict): Configuration settings to update. Any values not provided
+ will be removed. See the official [Docker API documentation](https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#/update-a-node) for more details.
+ Default: `None`.
+
+**Returns:** `True` if the request went through. Raises an `APIError` if it
+ fails.
+
+```python
+node_spec = {'Availability': 'active',
+ 'Name': 'node-name',
+ 'Role': 'manager',
+ 'Labels': {'foo': 'bar'}
+ }
+client.update_node(node_id='24ifsmvkjbyhk', version=8, node_spec=node_spec)
+```
+
### Client.update_swarm
Update the Swarm's configuration
|
{"docker/api/swarm.py": [{"type": "function", "name": "SwarmApiMixin.update_node", "lines": [73, 77], "signature": "def update_node(self, node_id, version, node_spec=None):", "doc": ""}]}
| null |
["tests/unit/swarm_test.py::SwarmTest::test_node_update"]
|
[]
|
26753c81defff28a1a38a34788e9653c8eb87c3d
|
{"first_commit_time": 1474661279.0, "pr_title": "enable setting of node labels #1225", "pr_body": "Added update_node function to enable setting labels on nodes. This\nexposes the Update a Node function from the Docker API and should\nenable promoting/demoting manager nodes inside a swarm.\n\nSigned-off-by: Nathan Shirlberg [email protected]\n", "pr_timeline": [{"time": 1475001700.0, "comment": "This is my first contribution to this project,. Please pardon my questions.\n\nThe tests all passed locally. I tried to follow the style of all existing code. I have a consumer that is functioning properly when docker-py is installed from my source. Please let me know when this is included in a release posted to pypi so that I can switch my application from using the source copy back to using pip install.\n\nIf you have any thoughts or suggestions, please let me know.\n\nThanks,\nNathan\n"}, {"time": 1478319869.0, "comment": "Hey guys, \n I am currently running off my fork, but would like to get back to running off of a version of docker-py from pypi. Is there anything I can do to get this included in a future release?\n"}, {"time": 1478544472.0, "comment": "Hi @nathannis \n\nSorry for the delay on this, other priorities have cropped up since we released 1.10 . Code and docs look good to me, and you added tests too, so this looks perfect as far as I'm concerned.\n\nI'll merge to master for the time being, and I'll update once it makes it into a release.\n"}, {"time": 1478617977.0, "comment": "Thanks!\n"}], "issues": {}}
|
embeddings-benchmark/mteb
| 1,256
|
https://github.com/embeddings-benchmark/mteb/pull/1256
|
embeddings-benchmark__mteb-1256
|
[]
|
f04279d5975f4a5c7fd5f5f284bfe14303b8f2a0
|
diff --git a/README.md b/README.md
index e35ad3bdbc..a7eb03e4f2 100644
--- a/README.md
+++ b/README.md
@@ -378,6 +378,7 @@ df = results_to_dataframe(results)
| Documentation | |
| ------------------------------ | ---------------------- |
| 📋 [Tasks] | Overview of available tasks |
+| 📐 [Benchmarks] | Overview of available benchmarks |
| 📈 [Leaderboard] | The interactive leaderboard of the benchmark |
| 🤖 [Adding a model] | Information related to how to submit a model to the leaderboard |
| 👩🔬 [Reproducible workflows] | Information related to how to reproduce and create reproducible workflows with MTEB |
@@ -387,6 +388,7 @@ df = results_to_dataframe(results)
| 🌐 [MMTEB] | An open-source effort to extend MTEB to cover a broad set of languages |
[Tasks]: docs/tasks.md
+[Benchmarks]: docs/benchmarks.md
[Contributing]: CONTRIBUTING.md
[Adding a model]: docs/adding_a_model.md
[Adding a dataset]: docs/adding_a_dataset.md
diff --git a/docs/benchmarks.md b/docs/benchmarks.md
index 9eb471d187..a5abe50215 100644
--- a/docs/benchmarks.md
+++ b/docs/benchmarks.md
@@ -1,5 +1,5 @@
## Available benchmarks
-The following tables give you an overview of the benchmarks in MTEB.
+The following table gives you an overview of the benchmarks in MTEB.
<details>
diff --git a/mteb/cli.py b/mteb/cli.py
index 24e99bd241..b891d381f4 100644
--- a/mteb/cli.py
+++ b/mteb/cli.py
@@ -30,6 +30,14 @@
mteb available_tasks --task_types Clustering # list tasks of type Clustering
```
+## Listing Available Benchmarks
+
+To list the available benchmarks within MTEB, use the `mteb available_benchmarks` command. For example:
+
+```bash
+mteb available_benchmarks # list all available benchmarks
+```
+
## Creating Model Metadata
@@ -144,6 +152,12 @@ def run(args: argparse.Namespace) -> None:
_save_model_metadata(model, Path(args.output_folder))
+def available_benchmarks(args: argparse.Namespace) -> None:
+ benchmarks = mteb.get_benchmarks()
+ eval = mteb.MTEB(tasks=benchmarks)
+ eval.mteb_benchmarks()
+
+
def available_tasks(args: argparse.Namespace) -> None:
tasks = mteb.get_tasks(
categories=args.categories,
@@ -198,6 +212,15 @@ def add_available_tasks_parser(subparsers) -> None:
parser.set_defaults(func=available_tasks)
+def add_available_benchmarks_parser(subparsers) -> None:
+ parser = subparsers.add_parser(
+ "available_benchmarks", help="List the available benchmarks within MTEB"
+ )
+ add_task_selection_args(parser)
+
+ parser.set_defaults(func=available_benchmarks)
+
+
def add_run_parser(subparsers) -> None:
parser = subparsers.add_parser("run", help="Run a model on a set of tasks")
@@ -321,6 +344,7 @@ def main():
)
add_run_parser(subparsers)
add_available_tasks_parser(subparsers)
+ add_available_benchmarks_parser(subparsers)
add_create_meta_parser(subparsers)
args = parser.parse_args()
diff --git a/mteb/evaluation/MTEB.py b/mteb/evaluation/MTEB.py
index ab25169364..70f3e21ca8 100644
--- a/mteb/evaluation/MTEB.py
+++ b/mteb/evaluation/MTEB.py
@@ -168,6 +168,12 @@ def _display_tasks(self, task_list, name=None):
console.print(f"{prefix}{name}{category}{multilingual}")
console.print("\n")
+ def mteb_benchmarks(self):
+ """Get all benchmarks available in the MTEB."""
+ for benchmark in self._tasks:
+ name = benchmark.name
+ self._display_tasks(benchmark.tasks, name=name)
+
@classmethod
def mteb_tasks(cls):
"""Get all tasks available in the MTEB."""
|
diff --git a/tests/test_cli.py b/tests/test_cli.py
index fdcd1b014a..1d0400e985 100644
--- a/tests/test_cli.py
+++ b/tests/test_cli.py
@@ -22,6 +22,15 @@ def test_available_tasks():
), "Sample task Banking77Classification task not found in available tasks"
+def test_available_benchmarks():
+ command = f"{sys.executable} -m mteb available_benchmarks"
+ result = subprocess.run(command, shell=True, capture_output=True, text=True)
+ assert result.returncode == 0, "Command failed"
+ assert (
+ "MTEB(eng)" in result.stdout
+ ), "Sample benchmark MTEB(eng) task not found in available bencmarks"
+
+
run_task_fixures = [
(
"average_word_embeddings_komninos",
| 2024-09-29T12:28:20
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
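The evaluation report shown above (`total_instances`, `resolved_instances`, and so on) can be summarized programmatically once a run completes. A minimal sketch, assuming the harness wrote its report to a local JSON file whose name below is only illustrative — use whatever `.json` report your `run_id` actually produced:

```python
import json

# Illustrative report filename -- the actual name depends on the model name
# and --run_id passed to swebench.harness.run_evaluation.
report_path = "Gold.FEABench_v1_Gold.json"

with open(report_path) as f:
    report = json.load(f)

# Keys as shown in the example report above.
total = report["total_instances"]
resolved = report["resolved_instances"]
print(f"resolved {resolved}/{total} instances ({resolved / total:.1%})")
```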
{"README.md": "<h1 align=\"center\">Massive Text Embedding Benchmark</h1>\n\n<p align=\"center\">\n <a href=\"https://github.com/embeddings-benchmark/mteb/releases\">\n <img alt=\"GitHub release\" src=\"https://img.shields.io/github/release/embeddings-benchmark/mteb.svg\">\n </a>\n <a href=\"https://arxiv.org/abs/2210.07316\">\n <img alt=\"GitHub release\" src=\"https://img.shields.io/badge/arXiv-2305.14251-b31b1b.svg\">\n </a>\n <a href=\"https://github.com/embeddings-benchmark/mteb/blob/master/LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/embeddings-benchmark/mteb.svg?color=green\">\n </a>\n <a href=\"https://pepy.tech/project/mteb\">\n <img alt=\"Downloads\" src=\"https://static.pepy.tech/personalized-badge/mteb?period=total&units=international_system&left_color=grey&right_color=orange&left_text=Downloads\">\n </a>\n</p>\n\n<h4 align=\"center\">\n <p>\n <a href=\"#installation\">Installation</a> |\n <a href=\"#usage\">Usage</a> |\n <a href=\"https://huggingface.co/spaces/mteb/leaderboard\">Leaderboard</a> |\n <a href=\"#documentation\">Documentation</a> |\n <a href=\"#citing\">Citing</a>\n <p>\n</h4>\n\n<h3 align=\"center\">\n <a href=\"https://huggingface.co/spaces/mteb/leaderboard\"><img style=\"float: middle; padding: 10px 10px 10px 10px;\" width=\"60\" height=\"55\" src=\"./docs/images/hf_logo.png\" /></a>\n</h3>\n\n\n## Installation\n\n```bash\npip install mteb\n```\n\n## Example Usage\n\n* Using a python script:\n\n```python\nimport mteb\nfrom sentence_transformers import SentenceTransformer\n\n# Define the sentence-transformers model name\nmodel_name = \"average_word_embeddings_komninos\"\n# or directly from huggingface:\n# model_name = \"sentence-transformers/all-MiniLM-L6-v2\"\n\nmodel = SentenceTransformer(model_name)\ntasks = mteb.get_tasks(tasks=[\"Banking77Classification\"])\nevaluation = mteb.MTEB(tasks=tasks)\nresults = evaluation.run(model, output_folder=f\"results/{model_name}\")\n```\n\n* Using CLI\n\n```bash\nmteb available_tasks\n\nmteb run -m sentence-transformers/all-MiniLM-L6-v2 \\\n -t Banking77Classification \\\n --verbosity 3\n\n# if nothing is specified default to saving the results in the results/{model_name} folder\n```\n\n* Using multiple GPUs in parallel can be done by just having a custom encode function that distributes the inputs to multiple GPUs like e.g. [here](https://github.com/microsoft/unilm/blob/b60c741f746877293bb85eed6806736fc8fa0ffd/e5/mteb_eval.py#L60) or [here](https://github.com/ContextualAI/gritlm/blob/09d8630f0c95ac6a456354bcb6f964d7b9b6a609/gritlm/gritlm.py#L75).\n\n\n\n## Usage Documentation\nClick on each section below to see the details.\n\n<br /> \n\n<details>\n <summary> Task selection </summary>\n\n### Task selection\n\nTasks can be selected by providing the list of datasets, but also\n\n* by their task (e.g. \"Clustering\" or \"Classification\")\n\n```python\ntasks = mteb.get_tasks(task_types=[\"Clustering\", \"Retrieval\"]) # Only select clustering and retrieval tasks\n```\n\n* by their categories e.g. 
\"s2s\" (sentence to sentence) or \"p2p\" (paragraph to paragraph)\n\n```python\ntasks = mteb.get_tasks(categories=[\"s2s\", \"p2p\"]) # Only select sentence2sentence and paragraph2paragraph datasets\n```\n\n* by their languages\n\n```python\ntasks = mteb.get_tasks(languages=[\"eng\", \"deu\"]) # Only select datasets which contain \"eng\" or \"deu\" (iso 639-3 codes)\n```\n\nYou can also specify which languages to load for multilingual/cross-lingual tasks like below:\n\n```python\nimport mteb\n\ntasks = [\n mteb.get_task(\"AmazonReviewsClassification\", languages = [\"eng\", \"fra\"]),\n mteb.get_task(\"BUCCBitextMining\", languages = [\"deu\"]), # all subsets containing \"deu\"\n]\n\n# or you can select specific huggingface subsets like this:\nfrom mteb.tasks import AmazonReviewsClassification, BUCCBitextMining\n\nevaluation = mteb.MTEB(tasks=[\n AmazonReviewsClassification(hf_subsets=[\"en\", \"fr\"]) # Only load \"en\" and \"fr\" subsets of Amazon Reviews\n BUCCBitextMining(hf_subsets=[\"de-en\"]), # Only load \"de-en\" subset of BUCC\n])\n# for an example of a HF subset see \"Subset\" in the dataset viewer at: https://huggingface.co/datasets/mteb/bucc-bitext-mining\n```\n\n</details>\n\n<details>\n <summary> Running a benchmark </summary>\n\n### Running a Benchmark\n\n`mteb` comes with a set of predefined benchmarks. These can be fetched using `get_benchmark` and run in a similar fashion to other sets of tasks. \nFor instance to select the 56 English datasets that form the \"Overall MTEB English leaderboard\":\n\n```python\nimport mteb\nbenchmark = mteb.get_benchmark(\"MTEB(eng)\")\nevaluation = mteb.MTEB(tasks=benchmark)\n```\n\nThe benchmark specified not only a list of tasks, but also what splits and language to run on. To get an overview of all available benhcmarks simply run:\n\n```python\nimport mteb\nbenchmarks = mteb.get_benchmarks()\n```\n\nGenerally we use the naming scheme for benchmarks `MTEB(*)`, where the \"*\" denotes the target of the benchmark. In case of a language we use the three letter language code. For large groups of language we use the group notation, e.g. `MTEB(Scandinavian)` for Scandinavian languages. External benchmarks implemented in MTEB like `CoIR` use their original name. When using a benchmark from MTEB please cite `mteb` along with the citations of the benchmark which you can access using:\n\n```python\nbenchmark.citation\n```\n\n</details>\n\n<details>\n <summary> Passing in `encode` arguments </summary>\n\n\n### Passing in `encode` arguments\n\nTo pass in arguments to the model's `encode` function, you can use the encode keyword arguments (`encode_kwargs`):\n\n```python\nevaluation.run(model, encode_kwargs={\"batch_size\": 32}\n```\n</details>\n\n\n<details>\n <summary> Selecting evaluation split </summary>\n\n### Selecting evaluation split\nYou can evaluate only on `test` splits of all tasks by doing the following:\n\n```python\nevaluation.run(model, eval_splits=[\"test\"])\n```\n\nNote that the public leaderboard uses the test splits for all datasets except MSMARCO, where the \"dev\" split is used.\n\n</details>\n\n<details>\n <summary> Using a custom model </summary>\n\n\n### Using a custom model\n\nModels should implement the following interface, implementing an `encode` function taking as inputs a list of sentences, and returning a list of embeddings (embeddings can be `np.array`, `torch.tensor`, etc.). 
For inspiration, you can look at the [mteb/mtebscripts repo](https://github.com/embeddings-benchmark/mtebscripts) used for running diverse models via SLURM scripts for the paper.\n\n```python\nclass MyModel():\n def encode(\n self, sentences: list[str], **kwargs: Any\n ) -> torch.Tensor | np.ndarray:\n \"\"\"Encodes the given sentences using the encoder.\n\n Args:\n sentences: The sentences to encode.\n **kwargs: Additional arguments to pass to the encoder.\n\n Returns:\n The encoded sentences.\n \"\"\"\n pass\n\nmodel = MyModel()\ntasks = mteb.get_task(\"Banking77Classification\")\nevaluation = MTEB(tasks=tasks)\nevaluation.run(model)\n```\n\nIf you'd like to use different encoding functions for query and corpus when evaluating on Retrieval or Reranking tasks, you can add separate methods for `encode_queries` and `encode_corpus`. If these methods exist, they will be automatically used for those tasks. You can refer to the `DRESModel` at `mteb/evaluation/evaluators/RetrievalEvaluator.py` for an example of these functions.\n\n```python\nclass MyModel():\n def encode_queries(self, queries: list[str], **kwargs) -> list[np.ndarray] | list[torch.Tensor]:\n \"\"\"\n Returns a list of embeddings for the given sentences.\n Args:\n queries: List of sentences to encode\n\n Returns:\n List of embeddings for the given sentences\n \"\"\"\n pass\n\n def encode_corpus(self, corpus: list[str] | list[dict[str, str]], **kwargs) -> list[np.ndarray] | list[torch.Tensor]:\n \"\"\"\n Returns a list of embeddings for the given sentences.\n Args:\n corpus: List of sentences to encode\n or list of dictionaries with keys \"title\" and \"text\"\n\n Returns:\n List of embeddings for the given sentences\n \"\"\"\n pass\n```\n\n</details>\n\n<details>\n <summary> Evaluating on a custom dataset </summary>\n\n\n### Evaluating on a custom dataset\n\nTo evaluate on a custom task, you can run the following code on your custom task. See [how to add a new task](docs/adding_a_dataset.md), for how to create a new task in MTEB.\n\n```python\nfrom mteb import MTEB\nfrom mteb.abstasks.AbsTaskReranking import AbsTaskReranking\nfrom sentence_transformers import SentenceTransformer\n\n\nclass MyCustomTask(AbsTaskReranking):\n ...\n\nmodel = SentenceTransformer(\"average_word_embeddings_komninos\")\nevaluation = MTEB(tasks=[MyCustomTask()])\nevaluation.run(model)\n```\n\n</details>\n\n<details>\n <summary> Using a cross encoder for reranking</summary>\n\n\n### Using a cross encoder for reranking\n\nTo use a cross encoder for reranking, you can directly use a CrossEncoder from SentenceTransformers. The following code shows a two-stage run with the second stage reading results saved from the first stage. 
\n\n```python\nfrom mteb import MTEB\nimport mteb\nfrom sentence_transformers import CrossEncoder, SentenceTransformer\n\ncross_encoder = CrossEncoder(\"cross-encoder/ms-marco-TinyBERT-L-2-v2\")\ndual_encoder = SentenceTransformer(\"all-MiniLM-L6-v2\")\n\ntasks = mteb.get_tasks(tasks=[\"NFCorpus\"], languages=[\"eng\"])\n\nsubset = \"default\" # subset name used in the NFCorpus dataset\neval_splits = [\"test\"]\n\nevaluation = MTEB(tasks=tasks)\nevaluation.run(\n dual_encoder,\n eval_splits=eval_splits,\n save_predictions=True,\n output_folder=\"results/stage1\",\n)\nevaluation.run(\n cross_encoder,\n eval_splits=eval_splits,\n top_k=5,\n save_predictions=True,\n output_folder=\"results/stage2\",\n previous_results=f\"results/stage1/NFCorpus_{subset}_predictions.json\",\n)\n```\n\n</details>\n\n<details>\n <summary> Saving retrieval task predictions </summary>\n\n### Saving retrieval task predictions\n\nTo save the predictions from a retrieval task, add the `--save_predictions` flag in the CLI or set `save_predictions=True` in the run method. The filename will be in the \"{task_name}_{subset}_predictions.json\" format.\n\nPython:\n```python\nfrom mteb import MTEB\nimport mteb\nfrom sentence_transformers import SentenceTransformer\n\nmodel = SentenceTransformer(\"all-MiniLM-L6-v2\")\n\ntasks = mteb.get_tasks( tasks=[\"NFCorpus\"], languages=[\"eng\"])\n\nevaluation = MTEB(tasks=tasks)\nevaluation.run(\n model,\n eval_splits=[\"test\"],\n save_predictions=True,\n output_folder=\"results\",\n)\n```\n\nCLI:\n```\nmteb run -t NFCorpus -m all-MiniLM-L6-v2 --output_folder results --save_predictions\n```\n\n</details>\n\n<details>\n <summary> Fetching result from the results repository </summary>\n\n### Fetching result from the results repository\n\nMultiple models have already been run on tasks avaiable within MTEB. These results are available results [repository](https://github.com/embeddings-benchmark/results).\n\nTo make the results more easily accessible, we have designed custom functionality for retrieving from the repository. 
For instance, you are selecting the best model for your French and English retrieval task on legal documents you could fetch the relevant tasks and create a dataframe of the results using the following code:\n\n```python\nimport mteb\nfrom mteb.task_selection import results_to_dataframe\n\ntasks = mteb.get_tasks(\n task_types=[\"Retrieval\"], languages=[\"eng\", \"fra\"], domains=[\"Legal\"]\n)\n\nmodel_names = [\n \"GritLM/GritLM-7B\",\n \"intfloat/multilingual-e5-small\",\n \"intfloat/multilingual-e5-base\",\n \"intfloat/multilingual-e5-large\",\n]\nmodels = [mteb.get_model_meta(name) for name in model_names]\n\nresults = mteb.load_results(models=models, tasks=tasks)\n\ndf = results_to_dataframe(results)\n```\n\n</details>\n\n<br /> \n\n\n\n## Documentation\n\n| Documentation | |\n| ------------------------------ | ---------------------- |\n| 📋 [Tasks] | Overview of available tasks |\n| 📈 [Leaderboard] | The interactive leaderboard of the benchmark |\n| 🤖 [Adding a model] | Information related to how to submit a model to the leaderboard |\n| 👩🔬 [Reproducible workflows] | Information related to how to reproduce and create reproducible workflows with MTEB |\n| 👩💻 [Adding a dataset] | How to add a new task/dataset to MTEB | \n| 👩💻 [Adding a leaderboard tab] | How to add a new leaderboard tab to MTEB | \n| 🤝 [Contributing] | How to contribute to MTEB and set it up for development |\n| 🌐 [MMTEB] | An open-source effort to extend MTEB to cover a broad set of languages | \n\n[Tasks]: docs/tasks.md\n[Contributing]: CONTRIBUTING.md\n[Adding a model]: docs/adding_a_model.md\n[Adding a dataset]: docs/adding_a_dataset.md\n[Adding a leaderboard tab]: docs/adding_a_leaderboard_tab.md\n[Leaderboard]: https://huggingface.co/spaces/mteb/leaderboard\n[MMTEB]: docs/mmteb/readme.md\n[Reproducible workflows]: docs/reproducible_workflow.md\n\n## Citing\n\nMTEB was introduced in \"[MTEB: Massive Text Embedding Benchmark](https://arxiv.org/abs/2210.07316)\", feel free to cite:\n\n```bibtex\n@article{muennighoff2022mteb,\n doi = {10.48550/ARXIV.2210.07316},\n url = {https://arxiv.org/abs/2210.07316},\n author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\\\"\\i}c and Reimers, Nils},\n title = {MTEB: Massive Text Embedding Benchmark},\n publisher = {arXiv},\n journal={arXiv preprint arXiv:2210.07316}, \n year = {2022}\n}\n```\n\nYou may also want to read and cite the amazing work that has extended MTEB & integrated new datasets:\n- Shitao Xiao, Zheng Liu, Peitian Zhang, Niklas Muennighoff. \"[C-Pack: Packaged Resources To Advance General Chinese Embedding](https://arxiv.org/abs/2309.07597)\" arXiv 2023\n- Michael Günther, Jackmin Ong, Isabelle Mohr, Alaeddine Abdessalem, Tanguy Abel, Mohammad Kalim Akram, Susana Guzman, Georgios Mastrapas, Saba Sturua, Bo Wang, Maximilian Werk, Nan Wang, Han Xiao. \"[Jina Embeddings 2: 8192-Token General-Purpose Text Embeddings for Long Documents](https://arxiv.org/abs/2310.19923)\" arXiv 2023\n- Silvan Wehrli, Bert Arnrich, Christopher Irrgang. \"[German Text Embedding Clustering Benchmark](https://arxiv.org/abs/2401.02709)\" arXiv 2024\n- Orion Weller, Benjamin Chang, Sean MacAvaney, Kyle Lo, Arman Cohan, Benjamin Van Durme, Dawn Lawrie, Luca Soldaini. \"[FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions](https://arxiv.org/abs/2403.15246)\" arXiv 2024\n- Dawei Zhu, Liang Wang, Nan Yang, Yifan Song, Wenhao Wu, Furu Wei, Sujian Li. 
\"[LongEmbed: Extending Embedding Models for Long Context Retrieval](https://arxiv.org/abs/2404.12096)\" arXiv 2024\n- Kenneth Enevoldsen, Márton Kardos, Niklas Muennighoff, Kristoffer Laigaard Nielbo. \"[The Scandinavian Embedding Benchmarks: Comprehensive Assessment of Multilingual and Monolingual Text Embedding](https://arxiv.org/abs/2406.02396)\" arXiv 2024\n\nFor works that have used MTEB for benchmarking, you can find them on the [leaderboard](https://huggingface.co/spaces/mteb/leaderboard).\n", "docs/benchmarks.md": "## Available benchmarks\nThe following tables give you an overview of the benchmarks in MTEB.\n\n<details>\n\n<!-- This allows the table to be autogenerated in the future: -->\n<!-- BENCHMARKS TABLE START -->\n| Name | # Tasks | Task Types | Domains | Languages |\n|------|---------|------------|---------|-----------|\n| [CoIR](https://github.com/CoIR-team/coir) | 10 | {'Retrieval': 10} | [Written, Programming] | python,c++,sql,go,eng,php,javascript,ruby,java |\n| [MINERSBitextMining](https://arxiv.org/pdf/2406.07424) | 7 | {'BitextMining': 7} | [Written, Social, Reviews] | sun,kaz,tzl,ido,abs,arq,yue,tam,nij,glg,slk,hsb,ber,xho,cbk,pol,uzb,ina,kab,swh,amh,fao,kzj,lfn,uig,sqi,deu,ang,ind,bug,pms,ibo,cym,eus,spa,ceb,tgl,ron,isl,ita,csb,cha,fin,est,pes,jpn,tel,tha,oci,cmn,min,fry,bbc,epo,lit,rus,bos,hrv,war,ara,bjn,mkd,srp,ast,nno,urd,pam,aze,eng,ace,bew,kor,dan,awa,mui,hye,ban,cor,ben,gle,swe,mad,bul,lat,cat,nob,fra,pcm,ell,mar,vie,tat,ukr,gsw,kat,arz,dsb,lvs,nld,tur,bel,max,nds,afr,khm,dtp,yor,ces,gla,zsm,mak,ile,nov,orv,bre,swg,rej,mhr,mon,mal,jav,heb,slv,bhp,kur,wuu,tuk,por,hun,hin,hau,yid |\n| [MTEB(Retrieval w/Instructions)](https://arxiv.org/abs/2403.15246) | 3 | {'InstructionRetrieval': 3} | [Written, News] | eng |\n| [MTEB(Scandinavian)](https://kennethenevoldsen.github.io/scandinavian-embedding-benchmark/) | 28 | {'BitextMining': 2, 'Classification': 13, 'Retrieval': 7, 'Clustering': 6} | [Encyclopaedic, Spoken, Non-fiction, Government, News, Fiction, Social, Blog, Reviews, Written, Web, Legal] | nob,fao,swe,isl,dan,nno |\n| MTEB(code) | 12 | {'Retrieval': 12} | [Written, Programming] | python,c++,sql,c,go,eng,shell,typescript,php,scala,rust,swift,javascript,ruby,java |\n| [MTEB(deu)](https://arxiv.org/html/2401.02709v1) | 19 | {'Classification': 6, 'Clustering': 4, 'PairClassification': 2, 'Reranking': 1, 'Retrieval': 4, 'STS': 2} | [Encyclopaedic, Spoken, News, Reviews, Written, Web] | eng,deu,pol,fra |\n| MTEB(eng) | 67 | {'Classification': 12, 'Retrieval': 26, 'Clustering': 11, 'Reranking': 4, 'STS': 10, 'PairClassification': 3, 'Summarization': 1} | [Encyclopaedic, Spoken, Non-fiction, Blog, News, Medical, Social, Programming, Written, Reviews, Web, Academic] | tur,fra,eng,cmn,pol,ita,nld,spa,deu,ara |\n| [MTEB(fra)](https://arxiv.org/abs/2405.20468) | 26 | {'Classification': 6, 'Clustering': 7, 'PairClassification': 2, 'Reranking': 2, 'Retrieval': 5, 'STS': 3, 'Summarization': 1} | [Encyclopaedic, Spoken, Non-fiction, News, Social, Reviews, Written, Web, Legal, Academic] | eng,deu,pol,fra |\n| MTEB(kor) | 6 | {'Classification': 1, 'Reranking': 1, 'Retrieval': 2, 'STS': 2} | [Encyclopaedic, Spoken, News, Reviews, Written, Web] | kor |\n| [MTEB(law)](https://aclanthology.org/2023.eacl-main.148/) | 8 | {'Retrieval': 8} | [Written, Legal] | eng,deu,zho |\n| [MTEB(pol)](https://arxiv.org/abs/2405.10138) | 18 | {'Classification': 7, 'Clustering': 3, 'PairClassification': 4, 'STS': 4} | [Spoken, Non-fiction, News, Fiction, Social, Written, Web, Legal, Academic] | 
pol,deu,eng,fra |\n| [MTEB(rus)](https://aclanthology.org/2023.eacl-main.148/) | 23 | {'Classification': 9, 'Clustering': 3, 'MultilabelClassification': 2, 'PairClassification': 1, 'Reranking': 2, 'Retrieval': 3, 'STS': 3} | [Encyclopaedic, Spoken, Blog, News, Social, Reviews, Written, Web, Academic] | rus |\n<!-- BENCHMARKS TABLE END -->", "mteb/cli.py": "\"\"\"Command line interface for various MTEB.\n\nMTEB is a benchmark for evaluating the quality of embeddings in various tasks. It supports the following commands:\n\n- mteb run: Runs a model on a set of tasks\n- mteb available_tasks: Lists the available tasks within MTEB\n- mteb create_meta: Creates the metadata for a model card from a folder of results\n\n## Running Models on Tasks\n\nTo run a model on a set of tasks, use the `mteb run` command. For example:\n \n```bash\nmteb run -m average_word_embeddings_komninos \\\n -t Banking77Classification EmotionClassification \\\n --output_folder mteb_output \\\n --verbosity 3\n```\n\nThis will create a folder `mteb_output/{model_name}/{model_revision}` containing the results of the model on the specified tasks supplied as a json\nfile: \"{task_name}.json\".\n\n\n## Listing Available Tasks\n\nTo list the available tasks within MTEB, use the `mteb available_tasks` command. For example:\n\n```bash\nmteb available_tasks # list all available tasks\nmteb available_tasks --task_types Clustering # list tasks of type Clustering\n```\n\n\n## Creating Model Metadata\n\nOnce a model is run you can create the metadata for a model card from a folder of results, use the `mteb create_meta` command. For example:\n\n```bash\nmteb create_meta --results_folder mteb_output/average_word_embeddings_komninos/{revision} \\\n --output_path model_card.md\n```\n\nThis will create a model card at `model_card.md` containing the metadata for the model on MTEB within the YAML frontmatter. This will make the model\ndiscoverable on the MTEB leaderboard. 
\n\nAn example frontmatter for a model card is shown below:\n\n```\n---\ntags:\n- mteb\nmodel-index:\n- name: SGPT-5.8B-weightedmean-msmarco-specb-bitfit\n results:\n - task:\n type: classification\n dataset:\n type: mteb/banking77\n name: MTEB Banking77\n config: default\n split: test\n revision: 44fa15921b4c889113cc5df03dd4901b49161ab7\n metrics:\n - type: accuracy\n value: 84.49350649350649\n---\n```\n\"\"\"\n\nfrom __future__ import annotations\n\nimport argparse\nimport json\nimport logging\nfrom pathlib import Path\n\nimport torch\n\nimport mteb\nfrom mteb.create_meta import generate_readme\n\nlogging.basicConfig(level=logging.WARNING)\nlogger = logging.getLogger(__name__)\n\n\ndef _save_model_metadata(model: mteb.Encoder, output_folder: Path) -> None:\n meta = model.mteb_model_meta # type: ignore\n\n revision = meta.revision if meta.revision is not None else \"no_revision_available\"\n\n save_path = output_folder / meta.model_name_as_path() / revision / \"model_meta.json\"\n\n with save_path.open(\"w\") as f:\n json.dump(meta.to_dict(), f)\n\n\ndef run(args: argparse.Namespace) -> None:\n # set logging based on verbosity level\n if args.verbosity == 0:\n logging.getLogger(\"mteb\").setLevel(logging.CRITICAL)\n elif args.verbosity == 1:\n logging.getLogger(\"mteb\").setLevel(logging.WARNING)\n elif args.verbosity == 2:\n logging.getLogger(\"mteb\").setLevel(logging.INFO)\n elif args.verbosity == 3:\n logging.getLogger(\"mteb\").setLevel(logging.DEBUG)\n\n logger.info(\"Running with parameters: %s\", args)\n\n if args.device is None:\n device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n else:\n device = args.device\n\n model = mteb.get_model(args.model, args.model_revision, device=device)\n\n tasks = mteb.get_tasks(\n categories=args.categories,\n task_types=args.task_types,\n languages=args.languages,\n tasks=args.tasks,\n )\n eval = mteb.MTEB(tasks=tasks)\n\n encode_kwargs = {}\n if args.batch_size is not None:\n encode_kwargs[\"batch_size\"] = args.batch_size\n\n save_predictions = (\n args.save_predictions if hasattr(args, \"save_predictions\") else False\n )\n\n eval.run(\n model,\n verbosity=args.verbosity,\n output_folder=args.output_folder,\n eval_splits=args.eval_splits,\n co2_tracker=args.co2_tracker,\n overwrite_results=args.overwrite,\n encode_kwargs=encode_kwargs,\n save_predictions=save_predictions,\n )\n\n _save_model_metadata(model, Path(args.output_folder))\n\n\ndef available_tasks(args: argparse.Namespace) -> None:\n tasks = mteb.get_tasks(\n categories=args.categories,\n task_types=args.task_types,\n languages=args.languages,\n tasks=args.tasks,\n )\n eval = mteb.MTEB(tasks=tasks)\n eval.mteb_tasks()\n\n\ndef add_task_selection_args(parser: argparse.ArgumentParser) -> None:\n \"\"\"Adds arguments to the parser for filtering tasks by type, category, language, and task name.\"\"\"\n parser.add_argument(\n \"--task_types\",\n nargs=\"+\",\n type=str,\n default=None,\n help=\"List of task types (Clustering, Retrieval..) to be evaluated. If None, the filter is not applied\",\n )\n parser.add_argument(\n \"--categories\",\n nargs=\"+\",\n type=str,\n default=None,\n help=\"List of task categories (s2s, p2p..) to be evaluated. If None the filter is not applied\",\n )\n parser.add_argument(\n \"-t\",\n \"--tasks\",\n nargs=\"+\",\n type=str,\n default=None,\n help=\"List of tasks to be evaluated. 
If specified, the other arguments are ignored.\",\n )\n parser.add_argument(\n \"-l\",\n \"--languages\",\n nargs=\"*\",\n type=str,\n default=None,\n help=\"List of languages to be evaluated. if not set, all languages will be evaluated. Specified as ISO 639-3 codes (e.g. eng, deu, fra).\",\n )\n\n\ndef add_available_tasks_parser(subparsers) -> None:\n parser = subparsers.add_parser(\n \"available_tasks\", help=\"List the available tasks within MTEB\"\n )\n add_task_selection_args(parser)\n\n parser.set_defaults(func=available_tasks)\n\n\ndef add_run_parser(subparsers) -> None:\n parser = subparsers.add_parser(\"run\", help=\"Run a model on a set of tasks\")\n\n parser.add_argument(\n \"-m\",\n \"--model\",\n type=str,\n help=\"Model to use. Will priotize model implementation in MTEB's model registry, but default to loading the model using sentence-transformers.\",\n )\n\n add_task_selection_args(parser)\n\n parser.add_argument(\n \"--device\", type=int, default=None, help=\"Device to use for computation\"\n )\n parser.add_argument(\n \"--output_folder\",\n type=str,\n default=\"results\",\n help=\"Output directory for results. Will default to `results` if not set.\",\n )\n parser.add_argument(\n \"-v\", \"--verbosity\", type=int, default=2, help=\"Verbosity level\"\n )\n parser.add_argument(\n \"--co2_tracker\",\n type=bool,\n default=False,\n help=\"Enable CO₂ tracker, disabled by default\",\n )\n parser.add_argument(\n \"--eval_splits\",\n nargs=\"+\",\n type=str,\n default=None,\n help=\"Evaluation splits to use (train, dev, test..). If None, all splits will be used\",\n )\n parser.add_argument(\n \"--model_revision\",\n type=str,\n default=None,\n help=\"Revision of the model to be loaded. Revisions are automatically read if the model is loaded from huggingface.\",\n )\n parser.add_argument(\n \"--batch_size\",\n type=int,\n default=None,\n help=\"Batch size of the encode. Will be passed to the MTEB as MTEB.evaluate(model, encode_kwargs = {'batch_size': value}).\",\n )\n parser.add_argument(\n \"--overwrite\",\n action=\"store_true\",\n default=False,\n help=\"Overwrite the output file if it already exists\",\n )\n parser.add_argument(\n \"--save_predictions\",\n action=\"store_true\",\n default=False,\n help=\"For retrieval tasks. 
Saves the predictions file in output_folder.\",\n )\n\n parser.set_defaults(func=run)\n\n\ndef create_meta(args: argparse.Namespace) -> None:\n results_folder = Path(args.results_folder)\n output_path = Path(args.output_path)\n overwrite = args.overwrite\n from_existing = Path(args.from_existing) if args.from_existing else None\n if output_path.exists() and overwrite:\n logger.warning(\"Output path already exists, overwriting.\")\n elif output_path.exists():\n raise FileExistsError(\n \"Output path already exists, use --overwrite to overwrite.\"\n )\n\n frontmatter = generate_readme(results_folder, from_existing)\n\n with output_path.open(\"w\") as f:\n f.write(frontmatter)\n\n\ndef add_create_meta_parser(subparsers) -> None:\n parser = subparsers.add_parser(\n \"create_meta\", help=\"Create model metadata from a folder of results\"\n )\n\n parser.add_argument(\n \"--results_folder\",\n type=str,\n help=\"Folder containing the results of a model run\",\n )\n parser.add_argument(\n \"--output_path\",\n type=str,\n default=\"model_card.md\",\n help=\"Output path for the model metadata\",\n )\n parser.add_argument(\n \"--overwrite\",\n action=\"store_true\",\n default=False,\n help=\"Overwrite the output file if it already exists\",\n )\n parser.add_argument(\n \"--from_existing\",\n type=str,\n required=False,\n help=\"Merge results with existing README.md\",\n )\n\n parser.set_defaults(func=create_meta)\n\n\ndef main():\n parser = argparse.ArgumentParser(description=\"The MTEB Command line interface.\")\n\n subparsers = parser.add_subparsers(\n title=\"subcommands\", description=\"valid subcommands\", help=\"additional help\"\n )\n add_run_parser(subparsers)\n add_available_tasks_parser(subparsers)\n add_create_meta_parser(subparsers)\n\n args = parser.parse_args()\n\n # If no subcommand is provided, default to run with a deprecation warning\n if not hasattr(args, \"func\"):\n logger.warning(\n \"Using `mteb` without a subcommand is deprecated. Use `mteb run` instead.\",\n DeprecationWarning,\n )\n # Set default arguments for 'run' if no subcommand is provided\n default_args = parser.parse_args(\n [\"run\"]\n + list(map(str, args._get_args()))\n + [\n f\"--{k}\" if v is None else f\"--{k}={v}\"\n for k, v in vars(args).items()\n if k != \"func\"\n ]\n )\n default_args.func(default_args)\n else:\n args.func(args)\n\n\nif __name__ == \"__main__\":\n main()\n", "mteb/evaluation/MTEB.py": "from __future__ import annotations\n\nimport json\nimport logging\nimport os\nimport traceback\nfrom copy import copy\nfrom datetime import datetime\nfrom pathlib import Path\nfrom time import time\nfrom typing import Any, Iterable\n\nimport datasets\nfrom sentence_transformers import SentenceTransformer\n\nfrom mteb.encoder_interface import Encoder\nfrom mteb.model_meta import ModelMeta\nfrom mteb.models import model_meta_from_sentence_transformers\n\nfrom ..abstasks import *\nfrom ..abstasks import AbsTask\nfrom ..load_results.mteb_results import MTEBResults\nfrom ..tasks import *\nfrom . 
import LangMapping\n\nlogger = logging.getLogger(__name__)\n\n\nclass MTEB:\n def __init__(\n self,\n tasks: Iterable[str | AbsTask] | None = None,\n *,\n task_types: list[str] | None = None,\n task_categories: list[str] | None = None,\n task_langs: list[str] | None = None,\n version=None,\n err_logs_path: str = \"error_logs.txt\",\n **kwargs,\n ):\n \"\"\"Create an Evaluation pipeline, based on the provided tasks.\n\n Args:\n tasks: List of tasks to be evaluated.\n task_types: Will be deprecated we recommend that you use `mteb.get_tasks()` to filter tasks. List of task types (Clustering, Retrieval..) to be\n evaluated. If None, all tasks will be evaluated\n task_categories: Will be deprecated we recommend that you use `mteb.get_tasks()` to filter tasks. List of task categories (s2s, p2p..) to be\n evaluated. If None, all tasks will be evaluated\n task_langs: Will be deprecated we recommend that you use `mteb.get_tasks()` to filter tasks. List of languages to be evaluated. if None, all\n languages will be evaluated. [\"eng-Latn\", \"deu_Latn\"] will evaluate on all tasks with these languages.\n version: Will be deprecated. Version of the benchmark to use. If None, latest is used\n err_logs_path: Path to save error logs.\n kwargs: Additional arguments to be passed to the tasks\n \"\"\"\n self.deprecation_warning(\n task_types, task_categories, task_langs, tasks, version\n )\n\n if tasks is not None:\n self._tasks = tasks\n assert (\n task_types is None and task_categories is None\n ), \"Cannot specify both `tasks` and `task_types`/`task_categories`\"\n else:\n self._task_types = task_types\n self._task_categories = task_categories\n self._tasks = None\n\n self._task_langs = task_langs if task_langs is not None else []\n if isinstance(self._task_langs, str):\n self._task_langs = [self._task_langs]\n\n self._extend_lang_code()\n self._extend_lang_pairs() # add all possible pairs\n\n self._version = version\n self.err_logs_path = err_logs_path\n\n self.select_tasks(**kwargs)\n\n def deprecation_warning(\n self, task_types, task_categories, task_langs, tasks, version\n ):\n if task_types is not None:\n logger.warning(\n \"The `task_types` argument is deprecated and will be removed in the next release. \"\n + \"Please use `tasks = mteb.get_tasks(... task_types = [...])` to filter tasks instead.\"\n )\n if task_categories is not None:\n logger.warning(\n \"The `task_categories` argument is deprecated and will be removed in the next release. \"\n + \"Please use `tasks = mteb.get_tasks(... categories = [...])` to filter tasks instead.\"\n )\n if task_langs is not None:\n logger.warning(\n \"The `task_langs` argument is deprecated and will be removed in the next release. \"\n + \"Please use `tasks = mteb.get_tasks(... languages = [...])` to filter tasks instead. \"\n + \"Note that this uses 3 letter language codes (ISO 639-3).\"\n )\n if version is not None:\n logger.warning(\n \"The `version` argument is deprecated and will be removed in the next release.\"\n )\n task_contains_strings = any(isinstance(x, str) for x in tasks or [])\n if task_contains_strings:\n logger.warning(\n \"Passing task names as strings is deprecated and will be removed in the next release. 
\"\n + \"Please use `tasks = mteb.get_tasks(tasks=[...])` method to get tasks instead.\"\n )\n\n @property\n def available_tasks(self):\n return [x.metadata_dict[\"name\"] for x in self.tasks_cls]\n\n @property\n def available_task_types(self):\n return {x.metadata_dict[\"type\"] for x in self.tasks_cls}\n\n @property\n def available_task_categories(self):\n return {x.metadata_dict[\"category\"] for x in self.tasks_cls}\n\n def _extend_lang_code(self):\n # add all possible language codes\n for lang in set(self._task_langs):\n if lang in LangMapping.LANG_MAPPING:\n self._task_langs += LangMapping.LANG_MAPPING[lang]\n\n def _extend_lang_pairs(self):\n # add all possible language pairs\n langs = set(self._task_langs)\n for x in langs:\n if \"-\" not in x:\n for y in langs:\n if \"-\" not in y:\n pair = f\"{x}-{y}\"\n if pair not in langs:\n self._task_langs.append(pair)\n return\n\n def _display_tasks(self, task_list, name=None):\n from rich.console import Console\n\n # disable logging for other ranks\n if int(os.getenv(\"RANK\", 0)) != 0:\n return\n\n console = Console()\n if name:\n console.rule(f\"[bold]{name}\\n\", style=\"grey15\")\n for task_type in self.available_task_types:\n current_type_tasks = list(\n filter(lambda x: x.metadata.type == task_type, task_list)\n )\n if len(current_type_tasks) == 0:\n continue\n else:\n console.print(f\"[bold]{task_type}[/]\")\n for task in current_type_tasks:\n prefix = \" - \"\n name = f\"{task.metadata.name}\"\n category = f\", [italic grey39]{task.metadata.category}[/]\"\n multilingual = (\n f\", [italic red]multilingual {len(task.hf_subsets)} / {len(task.metadata.eval_langs)} Subsets[/]\"\n if task.is_multilingual\n else \"\"\n )\n console.print(f\"{prefix}{name}{category}{multilingual}\")\n console.print(\"\\n\")\n\n @classmethod\n def mteb_tasks(cls):\n \"\"\"Get all tasks available in the MTEB.\"\"\"\n instance = cls()\n instance._display_tasks(instance.tasks_cls, name=\"MTEB tasks\")\n\n def print_selected_tasks(self):\n \"\"\"Print the selected tasks.\"\"\"\n self._display_tasks(self.tasks, name=\"Selected tasks\")\n\n def select_tasks(self, **kwargs):\n \"\"\"Select the tasks to be evaluated.\"\"\"\n # Get all existing tasks\n tasks_categories_cls = list(AbsTask.__subclasses__())\n self.tasks_cls = [\n cls(hf_subsets=self._task_langs, **kwargs)\n for cat_cls in tasks_categories_cls\n for cls in cat_cls.__subclasses__()\n if cat_cls.__name__.startswith(\"AbsTask\")\n ]\n\n # If `task_list` is specified, select list of tasks\n if self._tasks is not None:\n self.tasks = list(\n filter(\n lambda x: (x.metadata_dict[\"name\"] in self._tasks), self.tasks_cls\n )\n )\n if len(self.tasks) != len(self._tasks):\n tasks_known = {x.metadata_dict[\"name\"] for x in self.tasks_cls}\n tasks_unknown = {\n x for x in self._tasks if isinstance(x, str)\n } - tasks_known\n if tasks_unknown:\n unknown_str, known_str = (\n \",\".join(sorted(tasks_unknown)),\n \",\".join(sorted(tasks_known)),\n )\n logger.warning(\n f\"WARNING: Unknown tasks: {unknown_str}. 
Known tasks: {known_str}.\"\n )\n # add task if subclass of mteb.tasks\n self.tasks.extend([x for x in self._tasks if isinstance(x, AbsTask)])\n return\n\n # Otherwise use filters to select tasks\n filtered_tasks = filter(\n lambda x: (self._task_types is None)\n or (x.metadata_dict[\"type\"] in self._task_types),\n self.tasks_cls,\n )\n filtered_tasks = filter(\n lambda x: (self._task_categories is None)\n or (x.metadata_dict[\"category\"] in self._task_categories),\n filtered_tasks,\n )\n filtered_tasks = filter(\n lambda x: (self._version is None)\n or (x.metadata_dict[\"version\"] >= self._version),\n filtered_tasks,\n )\n # keep only tasks with at least one language in the filter\n filtered_tasks = filter(\n lambda x: (not (self._task_langs))\n or (len(set(x.metadata_dict[\"eval_langs\"]) & set(self._task_langs)) > 0),\n filtered_tasks,\n )\n\n # Get final list of tasks\n self.tasks = list(filtered_tasks)\n\n def load_tasks_data(self):\n \"\"\"Load datasets for the selected tasks.\"\"\"\n logger.info(f\"\\n\\n## Loading datasets for {len(self.tasks)} tasks\")\n for task in self.tasks:\n logger.info(f\"\\n# Loading dataset for {task.metadata_dict['name']}\")\n task.load_data()\n\n @staticmethod\n def _run_eval(\n task: AbsTask,\n model: Encoder,\n split,\n output_folder,\n *,\n encode_kwargs: dict[str, Any],\n **kwargs: Any,\n ):\n tick = time()\n results = task.evaluate(\n model,\n split,\n output_folder=output_folder,\n encode_kwargs=encode_kwargs,\n **kwargs,\n )\n tock = time()\n return results, tick, tock\n\n def run(\n self,\n model: SentenceTransformer | Encoder,\n verbosity: int = 1,\n output_folder: str | None = \"results\",\n eval_splits=None,\n overwrite_results: bool = False,\n raise_error: bool = True,\n co2_tracker: bool = False,\n encode_kwargs: dict[str, Any] = {},\n **kwargs,\n ) -> list[MTEBResults]:\n \"\"\"Run the evaluation pipeline on the selected tasks.\n\n Args:\n model: Model to be used for evaluation\n verbosity: Verbosity level. Default is 1.\n 0: print tasks tqdm progress bar\n 1: print tasks tqdm progress bar and scores\n 2: print everything (including datasets loading)\n output_folder: Folder where the results will be saved. Default to 'results'. Where it will save the results in the format:\n `{output_folder}/{model_name}/{model_revision}/{task_name}.json`.\n eval_splits: List of splits to evaluate on. If None, the splits are taken from the task metadata.\n overwrite_results: Whether to overwrite existing results.\n raise_error: Whether to raise an error if an exception occurs during evaluation.\n co2_tracker: Whether to enable or disable CO2 emissions tracker using codecarbon.\n encode_kwargs: Additional keyword arguments to be passed to the model.encode method.\n kwargs: Additional arguments to be passed to `_run_eval` method and task.load_data.\n\n Returns:\n A list of MTEBResults objects, one for each task evaluated.\n \"\"\"\n if \"batch_size\" in kwargs:\n logger.warning(\n \"The `batch_size` argument is deprecated and will be removed in the next release. 
\"\n + \"Please use `encode_kwargs = {'batch_size': ...}` to set the batch size instead.\"\n )\n encode_kwargs[\"batch_size\"] = kwargs[\"batch_size\"]\n\n # Set logging\n if verbosity < 2:\n datasets.logging.set_verbosity(40)\n datasets.logging.disable_progress_bar()\n\n meta = self.create_model_meta(model)\n output_path = self.create_output_folder(meta, output_folder)\n\n if output_path:\n self._save_model_metadata(meta, output_path)\n\n # Run selected tasks\n logger.info(f\"\\n\\n## Evaluating {len(self.tasks)} tasks:\")\n self.print_selected_tasks()\n evaluation_results = []\n original_tasks = (\n self.tasks.copy()\n ) # save them in case we re-use the object (e.g. for reranking)\n while len(self.tasks) > 0:\n task = self.tasks[0]\n logger.info(\n f\"\\n\\n********************** Evaluating {task.metadata.name} **********************\"\n )\n\n # skip evaluation if results folder exists and overwrite_results is False\n if output_path:\n save_path = output_path / f\"{task.metadata.name}{task.save_suffix}.json\"\n if save_path.exists() and not overwrite_results:\n logger.info(\n f\"{task.metadata.name} results already exists. Loading results from disk. Set overwrite_results=True to overwrite.\"\n )\n mteb_results = MTEBResults.from_disk(save_path)\n evaluation_results.append(mteb_results)\n del self.tasks[0] # empty memory\n continue\n try:\n task_eval_splits = (\n eval_splits if eval_splits is not None else task.eval_splits\n )\n\n # load data\n logger.info(f\"Loading dataset for {task.metadata_dict['name']}\")\n task.check_if_dataset_is_superseeded()\n task.load_data(eval_splits=task_eval_splits, **kwargs)\n\n # run evaluation\n task_results = {}\n evaluation_time = 0\n kg_co2_emissions: int | None = 0 if co2_tracker else None\n for split in task_eval_splits:\n if co2_tracker:\n try:\n from codecarbon import EmissionsTracker\n except ImportError:\n raise ImportError(\n \"To use the CO2 emissions tracker, please install codecarbon using 'pip install codecarbon'\"\n )\n\n with EmissionsTracker(\n save_to_file=False, save_to_api=False, logging_logger=logger\n ) as tracker:\n results, tick, tock = self._run_eval(\n task,\n model,\n split,\n output_folder,\n encode_kwargs=encode_kwargs,\n **kwargs,\n )\n\n kg_co2_emissions += (\n tracker.final_emissions\n ) # expressed as kilograms of CO₂-equivalents\n else:\n results, tick, tock = self._run_eval(\n task,\n model,\n split,\n output_folder,\n encode_kwargs=encode_kwargs,\n **kwargs,\n )\n\n logger.info(\n f\"Evaluation for {task.metadata_dict['name']} on {split} took {tock - tick:.2f} seconds\"\n )\n evaluation_time += tock - tick\n\n task_results[split] = results\n if verbosity >= 1:\n logger.info(f\"Scores: {results}\")\n\n mteb_task_result = MTEBResults.from_task_results(\n task,\n task_results,\n evaluation_time=evaluation_time,\n kg_co2_emissions=kg_co2_emissions,\n )\n\n # save results\n if output_path:\n with open(save_path, \"w\") as f_out:\n json.dump(\n mteb_task_result.to_dict(), f_out, indent=2, sort_keys=True\n )\n\n evaluation_results.append(mteb_task_result)\n\n except Exception as e:\n logger.error(\n f\"Error while evaluating {task.metadata_dict['name']}: {e}\"\n )\n if raise_error:\n raise e\n logger.error(\n f\"Please check all the error logs at: {self.err_logs_path}\"\n )\n with open(self.err_logs_path, \"a\") as f_out:\n f_out.write(f\"{datetime.now()} >>> {task.metadata_dict['name']}\\n\")\n f_out.write(traceback.format_exc())\n f_out.write(\"\\n\\n\")\n\n # empty memory\n del self.tasks[0]\n\n # restore original tasks\n 
self.tasks = original_tasks\n return evaluation_results\n\n @staticmethod\n def create_model_meta(model: Encoder) -> ModelMeta:\n if hasattr(model, \"mteb_model_meta\"):\n meta = model.mteb_model_meta # type: ignore\n else:\n try:\n meta = model_meta_from_sentence_transformers(model) # type: ignore\n except AttributeError:\n logger.warning(\n \"Could not find model metadata. Please set the model.mteb_model_meta attribute or if you are using \"\n + \"SentenceTransformers, please upgrade to version 3.0.0 to ensure that the model.mteb_model_meta \"\n + \"attribute is available.\"\n )\n meta = ModelMeta(\n name=None,\n revision=None,\n release_date=None,\n languages=None,\n )\n\n # create a copy of the meta to avoid modifying the original object\n meta = copy(meta)\n meta.revision = meta.revision or \"no_revision_available\"\n meta.name = meta.name or \"no_model_name_available\"\n\n return meta\n\n def create_output_folder(\n self, model_meta: ModelMeta, output_folder: str | None\n ) -> Path | None:\n \"\"\"Create output folder for the results.\"\"\"\n if output_folder is None:\n return None\n\n model_revision: str = model_meta.revision # type: ignore\n model_path_name = model_meta.model_name_as_path()\n\n output_path = Path(output_folder) / model_path_name / model_revision\n output_path.mkdir(parents=True, exist_ok=True)\n return output_path\n\n @staticmethod\n def _save_model_metadata(model_meta: ModelMeta, output_folder: Path) -> None:\n save_path = output_folder / \"model_meta.json\"\n\n with save_path.open(\"w\") as f:\n json.dump(model_meta.to_dict(), f)\n"}
|
diff --git a/README.md b/README.md
index e35ad3bdbc..a7eb03e4f2 100644
--- a/README.md
+++ b/README.md
@@ -378,6 +378,7 @@ df = results_to_dataframe(results)
| Documentation | |
| ------------------------------ | ---------------------- |
| 📋 [Tasks] | Overview of available tasks |
+| 📐 [Benchmarks] | Overview of available benchmarks |
| 📈 [Leaderboard] | The interactive leaderboard of the benchmark |
| 🤖 [Adding a model] | Information related to how to submit a model to the leaderboard |
| 👩🔬 [Reproducible workflows] | Information related to how to reproduce and create reproducible workflows with MTEB |
@@ -387,6 +388,7 @@ df = results_to_dataframe(results)
| 🌐 [MMTEB] | An open-source effort to extend MTEB to cover a broad set of languages |
[Tasks]: docs/tasks.md
+[Benchmarks]: docs/benchmarks.md
[Contributing]: CONTRIBUTING.md
[Adding a model]: docs/adding_a_model.md
[Adding a dataset]: docs/adding_a_dataset.md
diff --git a/docs/benchmarks.md b/docs/benchmarks.md
index 9eb471d187..a5abe50215 100644
--- a/docs/benchmarks.md
+++ b/docs/benchmarks.md
@@ -1,5 +1,5 @@
## Available benchmarks
-The following tables give you an overview of the benchmarks in MTEB.
+The following table gives you an overview of the benchmarks in MTEB.
<details>
|
{"mteb/cli.py": [{"type": "function", "name": "available_benchmarks", "lines": [155, 158], "signature": "def available_benchmarks(args: argparse.Namespace) -> None:", "doc": ""}, {"type": "function", "name": "add_available_benchmarks_parser", "lines": [215, 221], "signature": "def add_available_benchmarks_parser(subparsers) -> None:", "doc": ""}], "mteb/evaluation/MTEB.py": [{"type": "function", "name": "MTEB.mteb_benchmarks", "lines": [171, 175], "signature": "def mteb_benchmarks(self):", "doc": "Get all benchmarks available in the MTEB."}]}
| null |
["tests/test_cli.py::test_available_benchmarks"]
|
["tests/test_cli.py::test_available_tasks", "tests/test_cli.py::test_run_task[average_word_embeddings_komninos-BornholmBitextMining-21eec43590414cb8e3a6f654857abed0483ae36e]", "tests/test_cli.py::test_run_task[intfloat/multilingual-e5-small-BornholmBitextMining-e4ce9877abf3edfe10b0d82785e83bdcb973e22e]", "tests/test_cli.py::test_create_meta", "tests/test_cli.py::test_create_meta_from_existing[existing_readme.md-model_card_gold_existing.md]", "tests/test_cli.py::test_create_meta_from_existing[model_card_without_frontmatter.md-model_card_gold_without_frontmatter.md]", "tests/test_cli.py::test_save_predictions"]
|
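The FAIL_TO_PASS and PASS_TO_PASS identifiers above are standard pytest node IDs and can be exercised directly against a checkout of the repository at the base commit. A minimal sketch, assuming the package and its test dependencies are installed and the repository root is the working directory (only a subset of the PASS_TO_PASS tests is selected here):

```python
import pytest

# Node IDs taken from the FAIL_TO_PASS / PASS_TO_PASS lists above.
selected = [
    "tests/test_cli.py::test_available_benchmarks",  # expected to fail before the PR
    "tests/test_cli.py::test_available_tasks",       # expected to pass before and after
    "tests/test_cli.py::test_create_meta",
]

exit_code = pytest.main(["-q", *selected])
print("all selected tests passed" if exit_code == 0 else f"pytest exit code: {int(exit_code)}")
```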
b580b95fc91a7e7e675d27c3ae9a9df64ddad169
|
{"first_commit_time": 1727606088.0, "pr_title": "fix: Add listing all available benchmarks CLI option", "pr_body": "<!-- If you are submitting a dataset or a model for the model registry please use the corresponding checklists below otherwise feel free to remove them. -->\r\n\r\n<!-- add additional description, question etc. related to the new dataset -->\r\nAddresses #1250 point 1. \r\n\r\n- Add benchmarks under the \"Docs\" section of README\r\n- Add CLI option to list all available benchmarks\r\n - Headers will print the benchmark name\r\n - The tasks will be printed the same way as it is now\r\n- Add test case for the added CLI option\r\n\r\n## Example\r\nThis command should yield something like the following, starting with MTEB(eng):\r\n```\r\nmteb available_benchmarks\r\n```\r\n<details>\r\n<summary>Full printout</summary>\r\n\r\n```bash\r\n>> mteb available_benchmarks\r\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 MTEB(eng) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\r\nSummarization\r\n - SummEval, p2p\r\n\r\n\r\nPairClassification\r\n - SprintDuplicateQuestions, s2s\r\n - TwitterSemEval2015, s2s\r\n - TwitterURLCorpus, s2s\r\n\r\n\r\nClassification\r\n - AmazonCounterfactualClassification, s2s, multilingual 2 / 4 Subsets\r\n - AmazonPolarityClassification, p2p\r\n - AmazonReviewsClassification, s2s, multilingual 1 / 6 Subsets\r\n - Banking77Classification, s2s\r\n - EmotionClassification, s2s\r\n - ImdbClassification, p2p\r\n - MTOPDomainClassification, s2s, multilingual 1 / 6 Subsets\r\n - MTOPIntentClassification, s2s, multilingual 1 / 6 Subsets\r\n - MassiveIntentClassification, s2s, multilingual 1 / 51 Subsets\r\n - MassiveScenarioClassification, s2s, multilingual 1 / 51 Subsets\r\n - ToxicConversationsClassification, s2s\r\n - TweetSentimentExtractionClassification, s2s\r\n\r\n\r\nRetrieval\r\n - ArguAna, s2p\r\n - CQADupstackAndroidRetrieval, s2p\r\n - CQADupstackEnglishRetrieval, s2p\r\n - CQADupstackGamingRetrieval, s2p\r\n - CQADupstackGisRetrieval, s2p\r\n - CQADupstackMathematicaRetrieval, s2p\r\n - CQADupstackPhysicsRetrieval, s2p\r\n - CQADupstackProgrammersRetrieval, s2p\r\n - CQADupstackStatsRetrieval, s2p\r\n - CQADupstackTexRetrieval, s2p\r\n - CQADupstackUnixRetrieval, s2p\r\n - CQADupstackWebmastersRetrieval, s2p\r\n - CQADupstackWordpressRetrieval, s2p\r\n - ClimateFEVER, s2p\r\n - DBPedia, s2p\r\n - FEVER, s2p\r\n - FiQA2018, s2p\r\n - HotpotQA, s2p\r\n - MSMARCO, s2p\r\n - NFCorpus, s2p\r\n - NQ, s2p\r\n - QuoraRetrieval, s2s\r\n - SCIDOCS, s2p\r\n - SciFact, s2p\r\n - TRECCOVID, s2p\r\n - Touche2020, s2p\r\n\r\n\r\nSTS\r\n - BIOSSES, s2s\r\n - SICK-R, s2s\r\n - STS12, s2s\r\n - STS13, s2s\r\n - STS14, s2s\r\n - STS15, s2s\r\n - STS16, s2s\r\n - STS17, s2s, multilingual 8 / 11 Subsets\r\n - STS22, p2p, multilingual 5 / 18 Subsets\r\n - STSBenchmark, s2s\r\n\r\n\r\nClustering\r\n - 
ArxivClusteringP2P, p2p\r\n - ArxivClusteringS2S, s2s\r\n - BiorxivClusteringP2P, p2p\r\n - BiorxivClusteringS2S, s2s\r\n - MedrxivClusteringP2P, p2p\r\n - MedrxivClusteringS2S, s2s\r\n - RedditClustering, s2s\r\n - RedditClusteringP2P, p2p\r\n - StackExchangeClustering, s2s\r\n - StackExchangeClusteringP2P, p2p\r\n - TwentyNewsgroupsClustering, s2s\r\n\r\n\r\nReranking\r\n - AskUbuntuDupQuestions, s2s\r\n - MindSmallReranking, s2s\r\n - SciDocsRR, s2s\r\n - StackOverflowDupQuestions, s2s\r\n\r\n\r\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 MTEB(rus) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\r\nPairClassification\r\n - TERRa, s2s\r\n\r\n\r\nClassification\r\n - GeoreviewClassification, p2p\r\n - HeadlineClassification, s2s\r\n - InappropriatenessClassification, s2s\r\n - KinopoiskClassification, p2p\r\n - MassiveIntentClassification, s2s, multilingual 1 / 51 Subsets\r\n - MassiveScenarioClassification, s2s, multilingual 1 / 51 Subsets\r\n - RuReviewsClassification, p2p\r\n - RuSciBenchGRNTIClassification, p2p\r\n - RuSciBenchOECDClassification, p2p\r\n\r\n\r\nMultilabelClassification\r\n - CEDRClassification, s2s\r\n - SensitiveTopicsClassification, s2s\r\n\r\n\r\nRetrieval\r\n - MIRACLRetrieval, s2p, multilingual 1 / 18 Subsets\r\n - RiaNewsRetrieval, s2p\r\n - RuBQRetrieval, s2p\r\n\r\n\r\nSTS\r\n - RUParaPhraserSTS, s2s\r\n - RuSTSBenchmarkSTS, s2s\r\n - STS22, p2p, multilingual 1 / 18 Subsets\r\n\r\n\r\nClustering\r\n - GeoreviewClusteringP2P, p2p\r\n - RuSciBenchGRNTIClusteringP2P, p2p\r\n - RuSciBenchOECDClusteringP2P, p2p\r\n\r\n\r\nReranking\r\n - MIRACLReranking, s2s, multilingual 1 / 18 Subsets\r\n - RuBQReranking, s2p\r\n\r\n\r\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 MTEB(Retrieval w/Instructions) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\r\nInstructionRetrieval\r\n - Robust04InstructionRetrieval, s2p\r\n - News21InstructionRetrieval, s2p\r\n - Core17InstructionRetrieval, s2p\r\n\r\n\r\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 
MTEB(law) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\r\nRetrieval\r\n - AILACasedocs, p2p\r\n - AILAStatutes, p2p\r\n - LegalSummarization, s2p\r\n - GerDaLIRSmall, p2p\r\n - LeCaRDv2, p2p\r\n - LegalBenchConsumerContractsQA, s2p\r\n - LegalBenchCorporateLobbying, s2p\r\n - LegalQuAD, s2p\r\n\r\n\r\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 MINERSBitextMining \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\r\nBitextMining\r\n - BUCC, s2s, multilingual 4 / 4 Subsets\r\n - LinceMTBitextMining, s2s, multilingual 1 / 1 Subsets\r\n - NollySentiBitextMining, s2s, multilingual 4 / 4 Subsets\r\n - NusaXBitextMining, s2s, multilingual 11 / 11 Subsets\r\n - NusaTranslationBitextMining, s2s, multilingual 11 / 11 Subsets\r\n - PhincBitextMining, s2s, multilingual 1 / 1 Subsets\r\n - Tatoeba, s2s, multilingual 112 / 112 Subsets\r\n\r\n\r\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 MTEB(Scandinavian) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\r\nClassification\r\n - AngryTweetsClassification, s2s\r\n - DanishPoliticalCommentsClassification, s2s\r\n - DalajClassification, s2s\r\n - DKHateClassification, s2s\r\n - LccSentimentClassification, s2s\r\n - MassiveIntentClassification, s2s, multilingual 3 / 51 Subsets\r\n - MassiveScenarioClassification, s2s, multilingual 3 / 51 Subsets\r\n - NordicLangClassification, s2s\r\n - NoRecClassification, s2s\r\n - NorwegianParliamentClassification, s2s\r\n - ScalaClassification, s2s, multilingual 4 / 4 Subsets\r\n - SwedishSentimentClassification, s2s\r\n - SweRecClassification, s2s\r\n\r\n\r\nBitextMining\r\n - BornholmBitextMining, s2s\r\n - NorwegianCourtsBitextMining, s2s\r\n\r\n\r\nRetrieval\r\n - DanFEVER, p2p\r\n - NorQuadRetrieval, p2p\r\n - SNLRetrieval, p2p\r\n - SwednRetrieval, p2p\r\n - SweFaqRetrieval, s2s\r\n - TV2Nordretrieval, p2p\r\n - TwitterHjerneRetrieval, p2p\r\n\r\n\r\nClustering\r\n - SNLHierarchicalClusteringS2S, s2s\r\n - SNLHierarchicalClusteringP2P, p2p\r\n - SwednClusteringP2P, 
p2p\r\n - SwednClusteringS2S, s2s\r\n - VGHierarchicalClusteringS2S, p2p\r\n - VGHierarchicalClusteringP2P, p2p\r\n\r\n\r\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 CoIR \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\r\nRetrieval\r\n - AppsRetrieval, p2p\r\n - CodeFeedbackMT, p2p\r\n - CodeFeedbackST, p2p\r\n - CodeSearchNetCCRetrieval, p2p, multilingual 6 / 6 Subsets\r\n - CodeTransOceanContest, p2p\r\n - CodeTransOceanDL, p2p\r\n - CosQA, p2p\r\n - COIRCodeSearchNetRetrieval, p2p, multilingual 6 / 6 Subsets\r\n - StackOverflowQA, p2p\r\n - SyntheticText2SQL, p2p\r\n\r\n\r\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 MTEB(fra) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\r\nSummarization\r\n - SummEvalFr, p2p\r\n\r\n\r\nPairClassification\r\n - OpusparcusPC, s2s, multilingual 1 / 6 Subsets\r\n - PawsXPairClassification, s2s, multilingual 1 / 7 Subsets\r\n\r\n\r\nClassification\r\n - AmazonReviewsClassification, s2s, multilingual 1 / 6 Subsets\r\n - MasakhaNEWSClassification, s2s, multilingual 1 / 16 Subsets\r\n - MassiveIntentClassification, s2s, multilingual 1 / 51 Subsets\r\n - MassiveScenarioClassification, s2s, multilingual 1 / 51 Subsets\r\n - MTOPDomainClassification, s2s, multilingual 1 / 6 Subsets\r\n - MTOPIntentClassification, s2s, multilingual 1 / 6 Subsets\r\n\r\n\r\nRetrieval\r\n - AlloprofRetrieval, s2p\r\n - BSARDRetrieval, s2p\r\n - MintakaRetrieval, s2p, multilingual 1 / 8 Subsets\r\n - SyntecRetrieval, s2p\r\n - XPQARetrieval, s2p, multilingual 3 / 36 Subsets\r\n\r\n\r\nSTS\r\n - SICKFr, s2s\r\n - STS22, p2p, multilingual 3 / 18 Subsets\r\n - STSBenchmarkMultilingualSTS, s2s, multilingual 1 / 10 Subsets\r\n\r\n\r\nClustering\r\n - AlloProfClusteringP2P, p2p\r\n - AlloProfClusteringS2S, s2s\r\n - HALClusteringS2S, s2s\r\n - MasakhaNEWSClusteringP2P, p2p, multilingual 1 / 16 Subsets\r\n - MasakhaNEWSClusteringS2S, s2s, multilingual 1 / 16 Subsets\r\n - MLSUMClusteringP2P, p2p, multilingual 1 / 4 Subsets\r\n - MLSUMClusteringS2S, s2s, multilingual 1 / 4 Subsets\r\n\r\n\r\nReranking\r\n - AlloprofReranking, s2p\r\n - SyntecReranking, 
s2p\r\n\r\n\r\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 MTEB(deu) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\r\nPairClassification\r\n - FalseFriendsGermanEnglish, s2s\r\n - PawsXPairClassification, s2s, multilingual 1 / 7 Subsets\r\n\r\n\r\nClassification\r\n - AmazonCounterfactualClassification, s2s, multilingual 1 / 4 Subsets\r\n - AmazonReviewsClassification, s2s, multilingual 1 / 6 Subsets\r\n - MTOPDomainClassification, s2s, multilingual 1 / 6 Subsets\r\n - MTOPIntentClassification, s2s, multilingual 1 / 6 Subsets\r\n - MassiveIntentClassification, s2s, multilingual 1 / 51 Subsets\r\n - MassiveScenarioClassification, s2s, multilingual 1 / 51 Subsets\r\n\r\n\r\nRetrieval\r\n - GermanQuAD-Retrieval, s2p\r\n - GermanDPR, s2p\r\n - XMarket, s2p, multilingual 1 / 3 Subsets\r\n - GerDaLIR, s2p\r\n\r\n\r\nSTS\r\n - GermanSTSBenchmark, s2s\r\n - STS22, p2p, multilingual 4 / 18 Subsets\r\n\r\n\r\nClustering\r\n - BlurbsClusteringP2P, p2p\r\n - BlurbsClusteringS2S, s2s\r\n - TenKGnadClusteringP2P, p2p\r\n - TenKGnadClusteringS2S, s2s\r\n\r\n\r\nReranking\r\n - MIRACLReranking, s2s, multilingual 1 / 18 Subsets\r\n\r\n\r\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 MTEB(kor) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\r\nClassification\r\n - KLUE-TC, s2s\r\n\r\n\r\nRetrieval\r\n - MIRACLRetrieval, s2p, multilingual 1 / 18 Subsets\r\n - Ko-StrategyQA, s2p\r\n\r\n\r\nSTS\r\n - KLUE-STS, s2s\r\n - KorSTS, s2s\r\n\r\n\r\nReranking\r\n - MIRACLReranking, s2s, multilingual 1 / 18 Subsets\r\n\r\n\r\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 MTEB(pol) 
\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\r\nPairClassification\r\n - CDSC-E, s2s\r\n - PpcPC, s2s\r\n - PSC, s2s\r\n - SICK-E-PL, s2s\r\n\r\n\r\nClassification\r\n - AllegroReviews, s2s\r\n - CBD, s2s\r\n - MassiveIntentClassification, s2s, multilingual 1 / 51 Subsets\r\n - MassiveScenarioClassification, s2s, multilingual 1 / 51 Subsets\r\n - PolEmo2.0-IN, s2s\r\n - PolEmo2.0-OUT, s2s\r\n - PAC, p2p\r\n\r\n\r\nSTS\r\n - CDSC-R, s2s\r\n - STS22, p2p, multilingual 4 / 18 Subsets\r\n - STSBenchmarkMultilingualSTS, s2s, multilingual 1 / 10 Subsets\r\n - SICK-R-PL, s2s\r\n\r\n\r\nClustering\r\n - EightTagsClustering, s2s\r\n - PlscClusteringS2S, s2s\r\n - PlscClusteringP2P, s2s\r\n\r\n\r\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 MTEB(code) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\r\nRetrieval\r\n - AppsRetrieval, p2p\r\n - CodeEditSearchRetrieval, p2p, multilingual 13 / 13 Subsets\r\n - CodeFeedbackMT, p2p\r\n - CodeFeedbackST, p2p\r\n - CodeSearchNetCCRetrieval, p2p, multilingual 6 / 6 Subsets\r\n - CodeSearchNetRetrieval, p2p, multilingual 6 / 6 Subsets\r\n - CodeTransOceanContest, p2p\r\n - CodeTransOceanDL, p2p\r\n - CosQA, p2p\r\n - COIRCodeSearchNetRetrieval, p2p, multilingual 6 / 6 Subsets\r\n - StackOverflowQA, p2p\r\n - SyntheticText2SQL, p2p\r\n\r\n\r\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 MTEB(Multilingual) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\r\nPairClassification\r\n - CTKFactsNLI, s2s\r\n - SprintDuplicateQuestions, s2s\r\n - TwitterURLCorpus, s2s\r\n - ArmenianParaphrasePC, s2s\r\n - indonli, s2s\r\n - OpusparcusPC, s2s, multilingual 6 / 6 Subsets\r\n - PawsXPairClassification, s2s, multilingual 7 / 7 Subsets\r\n - RTE3, s2s, multilingual 4 / 4 Subsets\r\n - XNLI, s2s, multilingual 14 / 14 Subsets\r\n - PpcPC, s2s\r\n - TERRa, s2s\r\n\r\n\r\nClassification\r\n - BulgarianStoreReviewSentimentClassfication, s2s\r\n - CzechProductReviewSentimentClassification, 
s2s\r\n - GreekLegalCodeClassification, s2s\r\n - DBpediaClassification, s2s\r\n - FinancialPhrasebankClassification, s2s\r\n - PoemSentimentClassification, s2s\r\n - ToxicConversationsClassification, s2s\r\n - TweetTopicSingleClassification, s2s\r\n - EstonianValenceClassification, s2s\r\n - FilipinoShopeeReviewsClassification, s2s\r\n - GujaratiNewsClassification, s2s\r\n - SentimentAnalysisHindi, s2s\r\n - IndonesianIdClickbaitClassification, s2s\r\n - ItaCaseholdClassification, s2s\r\n - KorSarcasmClassification, s2s\r\n - KurdishSentimentClassification, s2s\r\n - MacedonianTweetSentimentClassification, s2s\r\n - AfriSentiClassification, s2s, multilingual 12 / 12 Subsets\r\n - AmazonCounterfactualClassification, s2s, multilingual 4 / 4 Subsets\r\n - CataloniaTweetClassification, s2s, multilingual 2 / 2 Subsets\r\n - CyrillicTurkicLangClassification, s2s\r\n - IndicLangClassification, s2s\r\n - MasakhaNEWSClassification, s2s, multilingual 16 / 16 Subsets\r\n - MassiveIntentClassification, s2s, multilingual 51 / 51 Subsets\r\n - MultiHateClassification, s2s, multilingual 11 / 11 Subsets\r\n - NordicLangClassification, s2s\r\n - NusaParagraphEmotionClassification, s2s, multilingual 10 / 10 Subsets\r\n - NusaX-senti, s2s, multilingual 12 / 12 Subsets\r\n - ScalaClassification, s2s, multilingual 4 / 4 Subsets\r\n - SwissJudgementClassification, s2s, multilingual 3 / 3 Subsets\r\n - NepaliNewsClassification, s2s\r\n - OdiaNewsClassification, s2s\r\n - PunjabiNewsClassification, s2s\r\n - PolEmo2.0-OUT, s2s\r\n - PAC, p2p\r\n - SinhalaNewsClassification, s2s\r\n - CSFDSKMovieReviewSentimentClassification, s2s\r\n - SiswatiNewsClassification, s2s\r\n - SlovakMovieReviewSentimentClassification, s2s\r\n - SwahiliNewsClassification, s2s\r\n - DalajClassification, s2s\r\n - TswanaNewsClassification, s2s\r\n - IsiZuluNewsClassification, s2s\r\n\r\n\r\nBitextMining\r\n - BornholmBitextMining, s2s\r\n - BibleNLPBitextMining, s2s, multilingual 1656 / 1656 Subsets\r\n - BUCC.v2, s2s, multilingual 4 / 4 Subsets\r\n - DiaBlaBitextMining, s2s, multilingual 2 / 2 Subsets\r\n - FloresBitextMining, s2s, multilingual 41412 / 41412 Subsets\r\n - IN22GenBitextMining, s2s, multilingual 506 / 506 Subsets\r\n - IndicGenBenchFloresBitextMining, s2s, multilingual 58 / 58 Subsets\r\n - NollySentiBitextMining, s2s, multilingual 4 / 4 Subsets\r\n - NorwegianCourtsBitextMining, s2s\r\n - NTREXBitextMining, s2s, multilingual 1916 / 1916 Subsets\r\n - NusaTranslationBitextMining, s2s, multilingual 11 / 11 Subsets\r\n - NusaXBitextMining, s2s, multilingual 11 / 11 Subsets\r\n - Tatoeba, s2s, multilingual 112 / 112 Subsets\r\n\r\n\r\nMultilabelClassification\r\n - KorHateSpeechMLClassification, s2s\r\n - MalteseNewsClassification, s2s\r\n - MultiEURLEXMultilabelClassification, p2p, multilingual 23 / 23 Subsets\r\n - BrazilianToxicTweetsClassification, s2s\r\n - CEDRClassification, s2s\r\n\r\n\r\nInstructionRetrieval\r\n - Core17InstructionRetrieval, s2p\r\n - News21InstructionRetrieval, s2p\r\n - Robust04InstructionRetrieval, s2p\r\n\r\n\r\nRetrieval\r\n - StackOverflowQA, p2p\r\n - TwitterHjerneRetrieval, p2p\r\n - AILAStatutes, p2p\r\n - ArguAna, s2p\r\n - HagridRetrieval, s2p\r\n - LegalBenchCorporateLobbying, s2p\r\n - LEMBPasskeyRetrieval, s2p\r\n - SCIDOCS, s2p\r\n - SpartQA, s2s\r\n - TempReasonL1, s2s\r\n - TRECCOVID, s2p\r\n - WinoGrande, s2s\r\n - BelebeleRetrieval, s2p, multilingual 376 / 376 Subsets\r\n - MLQARetrieval, s2p, multilingual 49 / 49 Subsets\r\n - StatcanDialogueDatasetRetrieval, s2p, multilingual 2 
/ 2 Subsets\r\n - WikipediaRetrievalMultilingual, s2p, multilingual 16 / 16 Subsets\r\n - CovidRetrieval, s2p\r\n\r\n\r\nSTS\r\n - GermanSTSBenchmark, s2s\r\n - SICK-R, s2s\r\n - STS12, s2s\r\n - STS13, s2s\r\n - STS14, s2s\r\n - STS15, s2s\r\n - STSBenchmark, s2s\r\n - FaroeseSTS, s2s\r\n - FinParaSTS, s2s\r\n - JSICK, s2s\r\n - IndicCrosslingualSTS, s2s, multilingual 12 / 12 Subsets\r\n - SemRel24STS, s2s, multilingual 12 / 12 Subsets\r\n - STS17, s2s, multilingual 11 / 11 Subsets\r\n - STS22.v2, p2p, multilingual 18 / 18 Subsets\r\n - STSES, s2s\r\n - STSB, s2s\r\n\r\n\r\nClustering\r\n - WikiCitiesClustering, p2p\r\n - MasakhaNEWSClusteringS2S, s2s, multilingual 16 / 16 Subsets\r\n - RomaniBibleClustering, p2p\r\n - ArXivHierarchicalClusteringP2P, p2p\r\n - ArXivHierarchicalClusteringS2S, p2p\r\n - BigPatentClustering.v2, p2p\r\n - BiorxivClusteringP2P.v2, p2p\r\n - MedrxivClusteringP2P.v2, p2p\r\n - StackExchangeClustering.v2, s2s\r\n - AlloProfClusteringS2S.v2, s2s\r\n - HALClusteringS2S.v2, s2s\r\n - SIB200ClusteringS2S, s2s, multilingual 197 / 197 Subsets\r\n - WikiClusteringP2P.v2, p2p, multilingual 14 / 14 Subsets\r\n - SNLHierarchicalClusteringP2P, p2p\r\n - PlscClusteringP2P.v2, s2s\r\n - SwednClusteringP2P, p2p\r\n - CLSClusteringP2P.v2, p2p\r\n\r\n\r\nReranking\r\n - WebLINXCandidatesReranking, p2p\r\n - AlloprofReranking, s2p\r\n - VoyageMMarcoReranking, s2s\r\n - WikipediaRerankingMultilingual, s2p, multilingual 16 / 16 Subsets\r\n - RuBQReranking, s2p\r\n - T2Reranking, s2s\r\n\r\n```\r\n\r\n</details>\r\n\r\n\r\n\r\n\r\n## Checklist\r\n<!-- Please do not delete this -->\r\n\r\n- [x] Run tests locally to make sure nothing is broken using `make test`. \r\n- [x] Run the formatter to format the code using `make lint`. \r\n", "pr_timeline": [{"time": 1727618701.0, "comment": "> It might be nice to order the benchmarks such that native (MTEB(*)) benchmarks appear first\r\n\r\nI'll add that as a \"good first issue\"."}], "issues": {}}
|
fairlearn/fairlearn
| 1,436
|
https://github.com/fairlearn/fairlearn/pull/1436
|
fairlearn__fairlearn-1436
|
[]
|
403da1fec74bdf2da28dc49487ccd72caa6f6976
|
diff --git a/fairlearn/metrics/_disaggregated_result.py b/fairlearn/metrics/_disaggregated_result.py
index cab28bf04..b78f0ca51 100644
--- a/fairlearn/metrics/_disaggregated_result.py
+++ b/fairlearn/metrics/_disaggregated_result.py
@@ -2,6 +2,8 @@
# Licensed under the MIT License.
from __future__ import annotations
+from __future__ import annotations
+
import logging
from typing import Literal
@@ -27,14 +29,6 @@
)
-def extract_unique_classes(data: pd.DataFrame, feature_list: list[str]) -> dict[str, np.ndarray]:
- """Compute unique values in a given set of columns."""
- result = dict()
- for feature in feature_list:
- result[feature] = np.unique(data[feature])
- return result
-
-
def apply_to_dataframe(
data: pd.DataFrame,
metric_functions: dict[str, AnnotatedMetricFunction],
@@ -134,48 +128,31 @@ def apply_grouping(
if not control_feature_names:
if errors == "raise":
try:
- mf = self.by_group
- if grouping_function == "min":
- vals = [mf[m].min() for m in mf.columns]
- else:
- vals = [mf[m].max() for m in mf.columns]
-
- result = pd.Series(vals, index=self.by_group.columns)
+ result = self.by_group.agg(grouping_function, axis=0)
except ValueError as ve:
raise ValueError(_MF_CONTAINS_NON_SCALAR_ERROR_MESSAGE) from ve
+
elif errors == "coerce":
- if not control_feature_names:
- mf = self.by_group
- # Fill in the possible min/max values, else np.nan
- if grouping_function == "min":
- vals = [
- mf[m].min() if np.isscalar(mf[m].values[0]) else np.nan
- for m in mf.columns
- ]
- else:
- vals = [
- mf[m].max() if np.isscalar(mf[m].values[0]) else np.nan
- for m in mf.columns
- ]
-
- result = pd.Series(vals, index=mf.columns)
+ # Fill in the possible min/max values, else np.nan
+ mf = self.by_group.apply(
+ lambda x: x.apply(lambda y: y if np.isscalar(y) else np.nan)
+ )
+ result = mf.agg(grouping_function, axis=0)
else:
if errors == "raise":
try:
- if grouping_function == "min":
- result = self.by_group.groupby(level=control_feature_names).min()
- else:
- result = self.by_group.groupby(level=control_feature_names).max()
+ result = self.by_group.groupby(level=control_feature_names).agg(
+ grouping_function
+ )
+
except ValueError as ve:
raise ValueError(_MF_CONTAINS_NON_SCALAR_ERROR_MESSAGE) from ve
elif errors == "coerce":
# Fill all impossible columns with NaN before grouping metric frame
- mf = self.by_group.copy()
- mf = mf.apply(lambda x: x.apply(lambda y: y if np.isscalar(y) else np.nan))
- if grouping_function == "min":
- result = mf.groupby(level=control_feature_names).min()
- else:
- result = mf.groupby(level=control_feature_names).max()
+ mf = self.by_group.apply(
+ lambda x: x.apply(lambda y: y if np.isscalar(y) else np.nan)
+ )
+ result = mf.groupby(level=control_feature_names).agg(grouping_function)
assert isinstance(result, pd.Series) or isinstance(result, pd.DataFrame)
@@ -227,10 +204,9 @@ def difference(
else:
raise ValueError("Unrecognised method '{0}' in difference() call".format(method))
- mf = self.by_group.copy()
# Can assume errors='coerce', else error would already have been raised in .group_min
# Fill all non-scalar values with NaN
- mf = mf.apply(lambda x: x.apply(lambda y: y if np.isscalar(y) else np.nan))
+ mf = self.by_group.apply(lambda x: x.apply(lambda y: y if np.isscalar(y) else np.nan))
if control_feature_names is None:
result = (mf - subtrahend).abs().max()
@@ -289,7 +265,6 @@ def ratio_sub_one(x):
if errors not in _VALID_ERROR_STRING:
raise ValueError(_INVALID_ERRORS_VALUE_ERROR_MESSAGE)
- result = None
if method == "between_groups":
result = self.apply_grouping(
"min", control_feature_names, errors=errors
@@ -357,49 +332,61 @@ def create(
DisaggregatedResult
Freshly constructed instance of this class
"""
- # Calculate the 'overall' values
- if control_feature_names is None:
- overall = apply_to_dataframe(data, metric_functions=annotated_functions)
- else:
- temp = data.groupby(by=control_feature_names).apply(
- apply_to_dataframe,
- metric_functions=annotated_functions,
- # See note in apply_to_dataframe about include_groups
- include_groups=False,
- )
- # If there are multiple control features, might have missing combinations
- if len(control_feature_names) > 1:
- cf_classes = extract_unique_classes(data, control_feature_names)
- all_indices = pd.MultiIndex.from_product(
- cf_classes.values(), names=cf_classes.keys()
- )
+ overall = DisaggregatedResult._apply_functions(
+ data=data,
+ annotated_functions=annotated_functions,
+ grouping_names=control_feature_names,
+ )
- overall = temp.reindex(index=all_indices)
- else:
- overall = temp
+ by_group = DisaggregatedResult._apply_functions(
+ data=data,
+ annotated_functions=annotated_functions,
+ grouping_names=(control_feature_names or []) + sensitive_feature_names,
+ )
+
+ return DisaggregatedResult(overall, by_group)
+
+ @staticmethod
+ def _apply_functions(
+ *,
+ data: pd.DataFrame,
+ annotated_functions: dict[str, AnnotatedMetricFunction],
+ grouping_names: list[str] | None,
+ ) -> pd.Series | pd.DataFrame:
+ """
+ Apply annotated metric functions to a DataFrame, optionally grouping by specified columns.
+
+ Parameters
+ ----------
+ data : pd.DataFrame
+ The input data on which the metric functions will be applied.
+ annotated_functions : dict[str, AnnotatedMetricFunction]
+ A dictionary where keys are metric names and values are the corresponding annotated metric
+ functions.
+ grouping_names : list[str] | None
+ A list of column names to group by before applying the metric functions. If None, the
+ functions are applied to the entire DataFrame.
- # Calculate the 'by_group' values
- all_grouping_names = [x for x in sensitive_feature_names]
- if control_feature_names is not None:
- # Note that we prepend the control feature names
- all_grouping_names = control_feature_names + all_grouping_names
+ Returns
+ -------
+ Series or DataFrame
+ A Series or DataFrame with the results of the metric functions applied. If grouping_names is provided,
+ the results are grouped accordingly.
+ """
+ if grouping_names is None or len(grouping_names) == 0:
+ return apply_to_dataframe(data, metric_functions=annotated_functions)
- temp = data.groupby(all_grouping_names).apply(
+ temp = data.groupby(grouping_names).apply(
apply_to_dataframe,
metric_functions=annotated_functions,
- # See note in apply_to_dataframe about include_groups
include_groups=False,
)
- if len(all_grouping_names) > 1:
- # We might have missing combinations in the input, so expand to fill
- all_classes = extract_unique_classes(data, all_grouping_names)
+
+ if len(grouping_names) > 1:
all_indices = pd.MultiIndex.from_product(
- all_classes.values(),
- names=all_classes.keys(),
+ [np.unique(data[col]) for col in grouping_names], names=grouping_names
)
- by_group = temp.reindex(index=all_indices)
- else:
- by_group = temp
+ return temp.reindex(index=all_indices)
- return DisaggregatedResult(overall, by_group)
+ return temp
|
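The patch above replaces hand-rolled per-column min/max loops with pandas' built-in aggregation. As a minimal illustration of why `DataFrame.agg` and `groupby(...).agg(...)` reproduce what the removed loops did, here is a standalone sketch; the metric values and feature names are toy examples, not taken from this row's data.

```python
import pandas as pd

# Toy "by_group" frame: one row per sensitive-feature value, one column per metric.
by_group = pd.DataFrame(
    {"recall": [0.8, 0.6, 0.7], "precision": [0.9, 0.5, 0.4]},
    index=pd.Index(["A", "B", "C"], name="sensitive_feature"),
)

# Old style removed by the patch: loop over columns and call .min()/.max() by hand.
manual_min = pd.Series({m: by_group[m].min() for m in by_group.columns})

# New style kept by the patch: a single agg call along axis 0 yields the same Series.
agg_min = by_group.agg("min", axis=0)
assert manual_min.equals(agg_min)

# With control features in the index, groupby(...).agg(...) likewise replaces
# the separate min/max branches.
cf_index = pd.MultiIndex.from_product(
    [["X", "Y"], ["A", "B"]], names=["control_feature", "sensitive_feature"]
)
by_group_cf = pd.DataFrame({"recall": [0.8, 0.6, 0.7, 0.5]}, index=cf_index)
print(by_group_cf.groupby(level="control_feature").agg("max"))
```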
diff --git a/test/unit/metrics/test_disaggregated_result.py b/test/unit/metrics/test_disaggregated_result.py
index 7c1486f72..1969f95cb 100644
--- a/test/unit/metrics/test_disaggregated_result.py
+++ b/test/unit/metrics/test_disaggregated_result.py
@@ -1,11 +1,13 @@
# Copyright (c) Microsoft Corporation and Fairlearn contributors.
# Licensed under the MIT License.
+
import pandas as pd
import pytest
import sklearn.metrics as skm
from fairlearn.metrics._annotated_metric_function import AnnotatedMetricFunction
+from fairlearn.metrics._base_metrics import selection_rate
from fairlearn.metrics._disaggregated_result import DisaggregatedResult
from .data_for_test import g_1, y_p, y_t
@@ -89,3 +91,77 @@ def test_bad_ratio_errors(self):
assert (
str(e0.value) == "Invalid error value specified. Valid values are ['raise', 'coerce']"
)
+
+
[email protected](
+ ["grouping_names", "expected"],
+ [(None, pd.Series({"selection_rate": 0.5})), ([], pd.Series({"selection_rate": 0.5}))],
+)
+def test_apply_functions_with_no_grouping(grouping_names, expected):
+ data = pd.DataFrame(
+ {
+ "y_pred": [1, 0, 1, 0, 0, 1],
+ "y_true": [1, 1, 0, 1, 0, 0],
+ "sensitive_feature": ["A", "A", "A", "B", "B", "B"],
+ }
+ )
+
+ annotated_functions = {
+ "selection_rate": AnnotatedMetricFunction(func=selection_rate, name="selection_rate")
+ }
+
+ result = DisaggregatedResult._apply_functions(
+ data=data, annotated_functions=annotated_functions, grouping_names=grouping_names
+ )
+
+ pd.testing.assert_series_equal(result, expected)
+
+
[email protected](
+ ["grouping_names", "expected"],
+ [
+ (
+ ["sensitive_feature"],
+ pd.DataFrame(
+ {"selection_rate": [2 / 3, 1 / 3]},
+ index=pd.Index(["A", "B"], name="sensitive_feature"),
+ ),
+ ),
+ (
+ ["control_feature_1"],
+ pd.DataFrame(
+ {"selection_rate": [1 / 3, 2 / 3]},
+ index=pd.Index(["X", "Y"], name="control_feature_1"),
+ ),
+ ),
+ (
+ ["control_feature_2", "sensitive_feature"],
+ pd.DataFrame(
+ {"selection_rate": [1.0, None, 0.5, 1 / 3]},
+ index=pd.MultiIndex.from_product(
+ [("W", "Z"), ("A", "B")], names=["control_feature_2", "sensitive_feature"]
+ ),
+ ),
+ ),
+ ],
+)
+def test_apply_functions_with_grouping(grouping_names, expected):
+ data = pd.DataFrame(
+ {
+ "y_pred": [1, 0, 1, 0, 0, 1],
+ "y_true": [1, 1, 0, 1, 0, 0],
+ "sensitive_feature": ["A", "A", "A", "B", "B", "B"],
+ "control_feature_1": ["X", "X", "Y", "Y", "X", "Y"],
+ "control_feature_2": ["Z", "Z", "W", "Z", "Z", "Z"],
+ }
+ )
+
+ annotated_functions = {
+ "selection_rate": AnnotatedMetricFunction(func=selection_rate, name="selection_rate")
+ }
+
+ result = DisaggregatedResult._apply_functions(
+ data=data, annotated_functions=annotated_functions, grouping_names=grouping_names
+ )
+
+ pd.testing.assert_frame_equal(result, expected)
| 2024-11-10T17:43:48
|
{}
|
{"fairlearn/metrics/_disaggregated_result.py": "# Copyright (c) Microsoft Corporation and Fairlearn contributors.\n# Licensed under the MIT License.\nfrom __future__ import annotations\n\nimport logging\nfrom typing import Literal\n\nimport numpy as np\nimport pandas as pd\n\nfrom ._annotated_metric_function import AnnotatedMetricFunction\n\nlogger = logging.getLogger(__name__)\n\n_VALID_ERROR_STRING = [\"raise\", \"coerce\"]\n_VALID_GROUPING_FUNCTION = [\"min\", \"max\"]\n\n_INVALID_ERRORS_VALUE_ERROR_MESSAGE = \"Invalid error value specified. Valid values are {0}\".format(\n _VALID_ERROR_STRING\n)\n_INVALID_GROUPING_FUNCTION_ERROR_MESSAGE = (\n \"Invalid grouping function specified. Valid values are {0}\".format(_VALID_GROUPING_FUNCTION)\n)\n_MF_CONTAINS_NON_SCALAR_ERROR_MESSAGE = (\n \"Metric frame contains non-scalar cells. Please remove non-scalar columns from your\"\n \" metric frame or use parameter errors='coerce'.\"\n)\n\n\ndef extract_unique_classes(data: pd.DataFrame, feature_list: list[str]) -> dict[str, np.ndarray]:\n \"\"\"Compute unique values in a given set of columns.\"\"\"\n result = dict()\n for feature in feature_list:\n result[feature] = np.unique(data[feature])\n return result\n\n\ndef apply_to_dataframe(\n data: pd.DataFrame,\n metric_functions: dict[str, AnnotatedMetricFunction],\n include_groups: bool = False,\n) -> pd.Series:\n \"\"\"Apply metric functions to a DataFrame.\n\n The incoming DataFrame may have been sliced via `groupby()`.\n This function applies each annotated function in turn to the\n supplied DataFrame.\n\n The include_groups argument is weird. It appears that pandas\n introduced it as an argument in v2.2, and immediately deprecated\n it (dependent on when this is being read, may need to adjust):\n https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.apply.html\n We don't use this argument, and only include it so that we can be\n compatible with pandas<2.2\n \"\"\"\n values = dict()\n for function_name, metric_function in metric_functions.items():\n values[function_name] = metric_function(data)\n # correctly handle zero provided metrics\n if len(values) == 0:\n result = pd.Series(dtype=float)\n else:\n result = pd.Series(values)\n return result\n\n\nclass DisaggregatedResult:\n \"\"\"Pickier version of MetricFrame.\n\n This holds the internal result from a disaggregated metric\n computation, and provides `apply_grouping()` (to cover min\n and max), `difference()` and `ratio()` methods.\n\n The main difference to the results computed by MetricFrame\n is that no account is made of whether the user supplied\n a bare function or a dictionary. 
Hence the results are\n always Series or DataFrame.\n\n Parameters\n ----------\n overall: Series or DataFrame\n The metric function(s) computed on the entire dataset, split by\n control features if supplied\n by_group: Series or DataFrame\n The metric function(s) computed on each subgroup identified by\n the sensitive and control features\n \"\"\"\n\n def __init__(self, overall: pd.Series | pd.DataFrame, by_group: pd.DataFrame):\n \"\"\"Construct an object.\"\"\"\n self._overall = overall\n assert isinstance(by_group, pd.DataFrame)\n self._by_group = by_group\n\n @property\n def overall(self) -> pd.Series | pd.DataFrame:\n \"\"\"Return overall metrics.\"\"\"\n return self._overall\n\n @property\n def by_group(self) -> pd.DataFrame:\n \"\"\"Return the metrics by group.\"\"\"\n return self._by_group\n\n def apply_grouping(\n self,\n grouping_function: Literal[\"min\", \"max\"],\n control_feature_names: list[str] | None = None,\n errors: Literal[\"raise\", \"coerce\"] = \"raise\",\n ) -> pd.Series | pd.DataFrame:\n \"\"\"Compute mins or maxes.\n\n Parameters\n ----------\n grouping_function: string {'min', 'max'}\n control_feature_names: list[str] | None\n Names of the control features. Must appear in the index of the `overall`\n and `by_group` properties\n errors: string {'raise', 'coerce'}, default :code:`raise`\n How to deal with any errors. Either coerce to `np.nan` or wrap the\n exception and reraise\n\n Returns\n -------\n Series or DataFrame\n Contains the desired mins or maxes\n \"\"\"\n if grouping_function not in _VALID_GROUPING_FUNCTION:\n raise ValueError(_INVALID_GROUPING_FUNCTION_ERROR_MESSAGE)\n\n if errors not in _VALID_ERROR_STRING:\n raise ValueError(_INVALID_ERRORS_VALUE_ERROR_MESSAGE)\n\n if not control_feature_names:\n if errors == \"raise\":\n try:\n mf = self.by_group\n if grouping_function == \"min\":\n vals = [mf[m].min() for m in mf.columns]\n else:\n vals = [mf[m].max() for m in mf.columns]\n\n result = pd.Series(vals, index=self.by_group.columns)\n except ValueError as ve:\n raise ValueError(_MF_CONTAINS_NON_SCALAR_ERROR_MESSAGE) from ve\n elif errors == \"coerce\":\n if not control_feature_names:\n mf = self.by_group\n # Fill in the possible min/max values, else np.nan\n if grouping_function == \"min\":\n vals = [\n mf[m].min() if np.isscalar(mf[m].values[0]) else np.nan\n for m in mf.columns\n ]\n else:\n vals = [\n mf[m].max() if np.isscalar(mf[m].values[0]) else np.nan\n for m in mf.columns\n ]\n\n result = pd.Series(vals, index=mf.columns)\n else:\n if errors == \"raise\":\n try:\n if grouping_function == \"min\":\n result = self.by_group.groupby(level=control_feature_names).min()\n else:\n result = self.by_group.groupby(level=control_feature_names).max()\n except ValueError as ve:\n raise ValueError(_MF_CONTAINS_NON_SCALAR_ERROR_MESSAGE) from ve\n elif errors == \"coerce\":\n # Fill all impossible columns with NaN before grouping metric frame\n mf = self.by_group.copy()\n mf = mf.apply(lambda x: x.apply(lambda y: y if np.isscalar(y) else np.nan))\n if grouping_function == \"min\":\n result = mf.groupby(level=control_feature_names).min()\n else:\n result = mf.groupby(level=control_feature_names).max()\n\n assert isinstance(result, pd.Series) or isinstance(result, pd.DataFrame)\n\n return result\n\n def difference(\n self,\n control_feature_names: list[str] | None = None,\n method: Literal[\"between_groups\", \"to_overall\"] = \"between_groups\",\n errors: Literal[\"raise\", \"coerce\"] = \"coerce\",\n ) -> pd.Series | pd.DataFrame:\n \"\"\"Return the maximum 
absolute difference between groups for each metric.\n\n This method calculates a scalar value for each underlying metric by\n finding the maximum absolute difference between the entries in each\n combination of sensitive features in the :attr:`.by_group` property.\n\n There are two allowed values for the ``method=`` parameter. The\n value ``between_groups`` computes the maximum difference between\n any two pairs of groups in the :attr:`.by_group` property (i.e.\n ``group_max() - group_min()``). Alternatively, ``to_overall``\n computes the difference between each subgroup and the\n corresponding value from :attr:`.overall` (if there are control\n features, then :attr:`.overall` is multivalued for each metric).\n The result is the absolute maximum of these values.\n\n Parameters\n ----------\n control_feature_names: list[str] | None\n Names of the control features. Must appear in the index of the `overall`\n and `by_group` properties\n method : {'between_groups', 'overall'}, default :code:`between_groups`\n How to compute the aggregate.\n errors: {'raise', 'coerce'}, default :code:`coerce`\n if 'raise', then invalid parsing will raise an exception\n if 'coerce', then invalid parsing will be set as NaN\n\n Returns\n -------\n pandas.Series or pandas.DataFrame\n \"\"\"\n if errors not in _VALID_ERROR_STRING:\n raise ValueError(_INVALID_ERRORS_VALUE_ERROR_MESSAGE)\n\n if method == \"between_groups\":\n subtrahend = self.apply_grouping(\"min\", control_feature_names, errors=errors)\n elif method == \"to_overall\":\n subtrahend = self.overall\n else:\n raise ValueError(\"Unrecognised method '{0}' in difference() call\".format(method))\n\n mf = self.by_group.copy()\n # Can assume errors='coerce', else error would already have been raised in .group_min\n # Fill all non-scalar values with NaN\n mf = mf.apply(lambda x: x.apply(lambda y: y if np.isscalar(y) else np.nan))\n\n if control_feature_names is None:\n result = (mf - subtrahend).abs().max()\n else:\n result = (mf - subtrahend).abs().groupby(level=control_feature_names).max()\n\n assert isinstance(result, pd.Series) or isinstance(result, pd.DataFrame)\n\n return result\n\n def ratio(\n self,\n control_feature_names: list[str] | None = None,\n method: Literal[\"between_groups\", \"to_overall\"] = \"between_groups\",\n errors: Literal[\"raise\", \"coerce\"] = \"coerce\",\n ) -> pd.Series | pd.DataFrame:\n \"\"\"Return the minimum ratio between groups for each metric.\n\n This method calculates a scalar value for each underlying metric by\n finding the minimum ratio (that is, the ratio is forced to be\n less than unity) between the entries in each\n column of the :attr:`.by_group` property.\n\n There are two allowed values for the ``method=`` parameter. The\n value ``between_groups`` computes the minimum ratio between\n any two pairs of groups in the :attr:`.by_group` property (i.e.\n ``group_min() / group_max()``). Alternatively, ``to_overall``\n computes the ratio between each subgroup and the\n corresponding value from :attr:`.overall` (if there are control\n features, then :attr:`.overall` is multivalued for each metric),\n expressing the ratio as a number less than 1.\n The result is the minimum of these values.\n\n Parameters\n ----------\n control_feature_names: list[str] | None\n Names of the control features. 
Must appear in the index of the `overall`\n and `by_group` properties\n method : {'between_groups', 'overall'}, default :code:`between_groups`\n How to compute the aggregate.\n errors: {'raise', 'coerce'}, default :code:`coerce`\n if 'raise', then invalid parsing will raise an exception\n if 'coerce', then invalid parsing will be set as NaN\n\n Returns\n -------\n typing.Any or pandas.Series or pandas.DataFrame\n \"\"\"\n\n def ratio_sub_one(x):\n if x > 1:\n return 1 / x\n else:\n return x\n\n if errors not in _VALID_ERROR_STRING:\n raise ValueError(_INVALID_ERRORS_VALUE_ERROR_MESSAGE)\n\n result = None\n if method == \"between_groups\":\n result = self.apply_grouping(\n \"min\", control_feature_names, errors=errors\n ) / self.apply_grouping(\"max\", control_feature_names, errors=errors)\n elif method == \"to_overall\":\n ratios = None\n\n if control_feature_names is not None:\n # It's easiest to give in to the DataFrame columns preference\n ratios = self.by_group.unstack(level=control_feature_names) / self.overall.unstack(\n level=control_feature_names\n )\n else:\n ratios = self.by_group / self.overall\n\n ratios = ratios.apply(lambda x: x.transform(ratio_sub_one))\n if not control_feature_names:\n result = ratios.min()\n else:\n result = ratios.min().unstack(0)\n else:\n raise ValueError(\"Unrecognised method '{0}' in ratio() call\".format(method))\n\n assert isinstance(result, pd.Series) or isinstance(result, pd.DataFrame)\n\n return result\n\n @staticmethod\n def create(\n *,\n data: pd.DataFrame,\n annotated_functions: dict[str, AnnotatedMetricFunction],\n sensitive_feature_names: list[str],\n control_feature_names: list[str] | None = None,\n ) -> \"DisaggregatedResult\":\n \"\"\"Manufacture a DisaggregatedResult.\n\n This is essentially a more restricted version of the MetricFrame\n constructor.\n\n All of the `data` have to be supplied as a DataFrame.\n The metric functions have to be supplied as a dictionary of\n AnnotatedMetricFunction.\n The latter class contains the metric function itself, and\n mappings between the metric function arguments and the columns\n of the `data` DataFrame.\n The sensitive and (optional) control features are lists of\n column names in `data`.\n\n Parameters\n ----------\n data : DataFrame\n A DataFrame containing all of the columns required to compute the metrics\n annotated_functions: dict[str, AnnotatedMetricFunction]\n A dictionary of metric functions, each of which is annotated with the\n mapping of columns in `data` to argument names in the function\n sensitive_feature_names: list[str]\n The list of columns in `data` which correspond to the sensitive feature(s)\n control_feature_names: list[str] | None\n Optional list of columns in `data` which correspond to the control features,\n if any\n\n Returns\n -------\n DisaggregatedResult\n Freshly constructed instance of this class\n \"\"\"\n # Calculate the 'overall' values\n if control_feature_names is None:\n overall = apply_to_dataframe(data, metric_functions=annotated_functions)\n else:\n temp = data.groupby(by=control_feature_names).apply(\n apply_to_dataframe,\n metric_functions=annotated_functions,\n # See note in apply_to_dataframe about include_groups\n include_groups=False,\n )\n # If there are multiple control features, might have missing combinations\n if len(control_feature_names) > 1:\n cf_classes = extract_unique_classes(data, control_feature_names)\n all_indices = pd.MultiIndex.from_product(\n cf_classes.values(), names=cf_classes.keys()\n )\n\n overall = 
temp.reindex(index=all_indices)\n else:\n overall = temp\n\n # Calculate the 'by_group' values\n all_grouping_names = [x for x in sensitive_feature_names]\n if control_feature_names is not None:\n # Note that we prepend the control feature names\n all_grouping_names = control_feature_names + all_grouping_names\n\n temp = data.groupby(all_grouping_names).apply(\n apply_to_dataframe,\n metric_functions=annotated_functions,\n # See note in apply_to_dataframe about include_groups\n include_groups=False,\n )\n if len(all_grouping_names) > 1:\n # We might have missing combinations in the input, so expand to fill\n all_classes = extract_unique_classes(data, all_grouping_names)\n all_indices = pd.MultiIndex.from_product(\n all_classes.values(),\n names=all_classes.keys(),\n )\n\n by_group = temp.reindex(index=all_indices)\n else:\n by_group = temp\n\n return DisaggregatedResult(overall, by_group)\n"}
|
{"fairlearn/metrics/_disaggregated_result.py": [{"type": "function", "name": "DisaggregatedResult._apply_functions", "lines": [350, 392], "signature": "def _apply_functions( *, data: pd.DataFrame, annotated_functions: dict[str, AnnotatedMetricFunction], grouping_names: list[str] | None, ) -> pd.Series | pd.DataFrame:", "doc": "Apply annotated metric functions to a DataFrame, optionally grouping by specified columns.\n\nParameters\n----------\ndata : pd.DataFrame\n The input data on which the metric functions will be applied.\nannotated_functions : dict[str, AnnotatedMetricFunction]\n A dictionary where keys are metric names and values are the corresponding annotated metric\n functions.\ngrouping_names : list[str] | None\n A list of column names to group by before applying the metric functions. If None, the\n functions are applied to the entire DataFrame.\n\nReturns\n-------\nSeries or DataFrame\n A Series or DataFrame with the results of the metric functions applied. If grouping_names is provided,\n the results are grouped accordingly."}]}
| null |
["test/unit/metrics/test_disaggregated_result.py::test_apply_functions_with_no_grouping[None-expected0]", "test/unit/metrics/test_disaggregated_result.py::test_apply_functions_with_no_grouping[grouping_names1-expected1]", "test/unit/metrics/test_disaggregated_result.py::test_apply_functions_with_grouping[grouping_names0-expected0]", "test/unit/metrics/test_disaggregated_result.py::test_apply_functions_with_grouping[grouping_names1-expected1]", "test/unit/metrics/test_disaggregated_result.py::test_apply_functions_with_grouping[grouping_names2-expected2]"]
|
["test/unit/metrics/test_disaggregated_result.py::TestErrorMessages::test_bad_grouping", "test/unit/metrics/test_disaggregated_result.py::TestErrorMessages::test_bad_difference_method", "test/unit/metrics/test_disaggregated_result.py::TestErrorMessages::test_bad_difference_errors", "test/unit/metrics/test_disaggregated_result.py::TestErrorMessages::test_bad_ratio_method", "test/unit/metrics/test_disaggregated_result.py::TestErrorMessages::test_bad_ratio_errors"]
|
403da1fec74bdf2da28dc49487ccd72caa6f6976
|
{"first_commit_time": 1731257114.0, "pr_title": "refactor: simplify disaggregated result", "pr_body": "## Description\r\n- Simplified code complexity by using pandas built-in methods.\r\n- Refactored the common logic in initializing the `overall` and `by_group` inside `.create`.\r\n- Added unit tests to cover the latter.\r\n\r\nThere are some linting changes to the doc that could be merged as part of this [other PR](https://github.com/fairlearn/fairlearn/pull/1434)\r\n\r\n## Tests\r\n<!--- Select all that apply by putting an x between the brackets: [x] -->\r\n- [ ] no new tests required\r\n- [x] new tests added\r\n- [ ] existing tests adjusted\r\n\r\n## Documentation\r\n<!--- Select all that apply. -->\r\n- [ ] no documentation changes needed\r\n- [x] user guide added or updated\r\n- [x] API docs added or updated\r\n- [ ] example notebook added or updated\r\n\r\n", "pr_timeline": [{"time": 1731943274.0, "comment": "> We've also been thinking of switching to polars backend while using narwhals API internally. You might be very well qualified to take on the task.\r\n\r\n@adrinjalali Indeed I might be able to help :+1: Is it tracked somewhere (e.g., github issue, discord ...) ?\r\n"}, {"time": 1732125513.0, "comment": "@taharallouche checking this first thing tomorrow again and merging thank you! I think I got confused because of the force push leaving the comments frozen so to say. because we squash and merge it you don't have to worry about how the history looks like and it doesn't matter how many commits are there, avoid to force push as it makes the review of longer PRs a bit more difficult. thanks for joining the call today too, was lovely to meet you!"}], "issues": {}}
|
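The `_apply_functions` helper documented in this row relies on `pandas.MultiIndex.from_product` plus `reindex` so that sensitive/control feature combinations absent from the data still show up (as NaN) in the by-group result. A minimal sketch of that pattern follows, using made-up toy data rather than anything from this row; it mirrors the missing ("W", "B") combination exercised by the test patch above.

```python
import numpy as np
import pandas as pd

# Toy data in which one (control, sensitive) combination never occurs: ("W", "B").
data = pd.DataFrame(
    {
        "y_pred": [1, 0, 1, 0, 0, 1],
        "sensitive_feature": ["A", "A", "A", "B", "B", "B"],
        "control_feature": ["Z", "Z", "W", "Z", "Z", "Z"],
    }
)

grouping = ["control_feature", "sensitive_feature"]
by_group = data.groupby(grouping)["y_pred"].mean().to_frame("selection_rate")

# groupby() only yields observed combinations; reindexing against the full
# cartesian product turns the absent ("W", "B") cell into an explicit NaN row.
full_index = pd.MultiIndex.from_product(
    [np.unique(data[col]) for col in grouping], names=grouping
)
print(by_group.reindex(index=full_index))
```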
|
falconry/falcon
| 1,174
|
https://github.com/falconry/falcon/pull/1174
|
falconry__falcon-1174
|
["1132"]
|
919fd3f5a3129d04f1c7d23f5eff440ec4598e35
|
diff --git a/falcon/request.py b/falcon/request.py
index b1f92e12f..5bc5d412a 100644
--- a/falcon/request.py
+++ b/falcon/request.py
@@ -26,7 +26,8 @@
import io
NativeStream = io.BufferedReader
-from wsgiref.validate import InputWrapper # NOQA: I202
+from uuid import UUID # NOQA: I202
+from wsgiref.validate import InputWrapper
import mimeparse
import six
@@ -1273,6 +1274,70 @@ def get_param_as_int(self, name,
raise errors.HTTPMissingParam(name)
+ def get_param_as_uuid(self, name, required=False, store=None):
+ """Return the value of a query string parameter as an UUID.
+
+ The value to convert must conform to the standard UUID string
+ representation per RFC 4122. For example, the following
+ strings are all valid::
+
+ # Lowercase
+ '64be949b-3433-4d36-a4a8-9f19d352fee8'
+
+ # Uppercase
+ 'BE71ECAA-F719-4D42-87FD-32613C2EEB60'
+
+ # Mixed
+ '81c8155C-D6de-443B-9495-39Fa8FB239b5'
+
+ Args:
+ name (str): Parameter name, case-sensitive (e.g., 'id').
+
+ Keyword Args:
+ required (bool): Set to ``True`` to raise
+ ``HTTPBadRequest`` instead of returning ``None`` when the
+ parameter is not found or is not a UUID (default
+ ``False``).
+ store (dict): A ``dict``-like object in which to place
+ the value of the param, but only if the param is found
+ (default ``None``).
+
+ Returns:
+ UUID: The value of the param if it is found and can be converted to
+ a ``UUID``. If the param is not found, returns ``None``, unless
+ `required` is ``True``.
+
+ Raises:
+ HTTPBadRequest: The param was not found in the request, even though
+ it was required to be there, or it was found but could not
+ be converted to a ``UUID``.
+ """
+
+ params = self._params
+
+ # PERF: Use if..in since it is a good all-around performer; we don't
+ # know how likely params are to be specified by clients.
+ if name in params:
+ val = params[name]
+ if isinstance(val, list):
+ val = val[-1]
+
+ try:
+ val = UUID(val)
+ except ValueError:
+ msg = 'The value must be a UUID string.'
+ raise errors.HTTPInvalidParam(msg, name)
+
+ if store is not None:
+ store[name] = val
+
+ return val
+
+ if not required:
+ return None
+
+ raise errors.HTTPMissingParam(name)
+
def get_param_as_bool(self, name, required=False, store=None,
blank_as_true=False):
"""Return the value of a query string parameter as a boolean
|
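The patch above adds a `get_param_as_uuid()` accessor to `Request`. A minimal usage sketch, assuming a Falcon build with this change applied; the route, resource class, and parameter names below are illustrative and not taken from the PR.

```python
import falcon


class ThingsResource(object):
    def on_get(self, req, resp):
        # Returns a uuid.UUID, or None when the parameter is absent; raises
        # HTTPBadRequest for a malformed value, or when required=True and missing.
        marker = req.get_param_as_uuid('marker')
        owner = req.get_param_as_uuid('owner', required=True)
        resp.media = {
            'owner': str(owner),
            'marker': str(marker) if marker is not None else None,
        }


app = falcon.API()
app.add_route('/things', ThingsResource())
```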
diff --git a/tests/test_query_params.py b/tests/test_query_params.py
index e90567d4b..348577333 100644
--- a/tests/test_query_params.py
+++ b/tests/test_query_params.py
@@ -1,4 +1,5 @@
from datetime import date, datetime
+from uuid import UUID
try:
import ujson as json
@@ -72,7 +73,7 @@ class TestQueryParams(object):
def test_none(self, simulate_request, client, resource):
query_string = ''
- client.app.add_route('/', resource)
+ client.app.add_route('/', resource) # TODO: DRY up this setup logic
simulate_request(client=client, path='/', query_string=query_string)
req = resource.captured_req
@@ -213,6 +214,7 @@ def test_allowed_names(self, simulate_request, client, resource):
@pytest.mark.parametrize('method_name', [
'get_param',
'get_param_as_int',
+ 'get_param_as_uuid',
'get_param_as_bool',
'get_param_as_list',
])
@@ -307,6 +309,33 @@ def test_int_neg(self, simulate_request, client, resource):
with pytest.raises(falcon.HTTPBadRequest):
req.get_param_as_int('pos', min=0, max=10)
+ def test_uuid(self, simulate_request, client, resource):
+ client.app.add_route('/', resource)
+ query_string = ('marker1=8d76b7b3-d0dd-46ca-ad6e-3989dcd66959&'
+ 'marker2=64be949b-3433-4d36-a4a8-9f19d352fee8&'
+ 'marker2=8D76B7B3-d0dd-46ca-ad6e-3989DCD66959&'
+ 'short=4be949b-3433-4d36-a4a8-9f19d352fee8')
+ simulate_request(client=client, path='/', query_string=query_string)
+
+ req = resource.captured_req
+
+ expected_uuid = UUID('8d76b7b3-d0dd-46ca-ad6e-3989dcd66959')
+ assert req.get_param_as_uuid('marker1') == expected_uuid
+ assert req.get_param_as_uuid('marker2') == expected_uuid
+ assert req.get_param_as_uuid('marker3') is None
+ assert req.get_param_as_uuid('marker3', required=False) is None
+
+ with pytest.raises(falcon.HTTPBadRequest):
+ req.get_param_as_uuid('short')
+
+ store = {}
+ with pytest.raises(falcon.HTTPBadRequest):
+ req.get_param_as_uuid('marker3', required=True, store=store)
+
+ assert not store
+ assert req.get_param_as_uuid('marker1', store=store)
+ assert store['marker1'] == expected_uuid
+
def test_boolean(self, simulate_request, client, resource):
client.app.add_route('/', resource)
query_string = ('echo=true&doit=false&bogus=bar&bogus2=foo&'
| 2017-12-18T22:06:28
|
{}
|
{"falcon/request.py": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Request class.\"\"\"\n\nfrom datetime import datetime\n\ntry:\n # NOTE(kgrifs): In Python 2.6 and 2.7, socket._fileobject is a\n # standard way of exposing a socket as a file-like object, and\n # is used by wsgiref for wsgi.input.\n import socket\n NativeStream = socket._fileobject\nexcept AttributeError:\n # NOTE(kgriffs): In Python 3.3, wsgiref implements wsgi.input\n # using _io.BufferedReader which is an alias of io.BufferedReader\n import io\n NativeStream = io.BufferedReader\n\nfrom wsgiref.validate import InputWrapper # NOQA: I202\n\nimport mimeparse\nimport six\nfrom six.moves import http_cookies\n\nfrom falcon import DEFAULT_MEDIA_TYPE\nfrom falcon import errors\nfrom falcon import request_helpers as helpers\nfrom falcon import util\nfrom falcon.media import Handlers\nfrom falcon.util import json\nfrom falcon.util.uri import parse_host, parse_query_string, unquote_string\n\n# NOTE(tbug): In some cases, http_cookies is not a module\n# but a dict-like structure. This fixes that issue.\n# See issue https://github.com/falconry/falcon/issues/556\nSimpleCookie = http_cookies.SimpleCookie\n\nDEFAULT_ERROR_LOG_FORMAT = (u'{0:%Y-%m-%d %H:%M:%S} [FALCON] [ERROR]'\n u' {1} {2}{3} => ')\n\nTRUE_STRINGS = ('true', 'True', 'yes', '1', 'on')\nFALSE_STRINGS = ('false', 'False', 'no', '0', 'off')\nWSGI_CONTENT_HEADERS = ('CONTENT_TYPE', 'CONTENT_LENGTH')\n\n# PERF(kgriffs): Avoid an extra namespace lookup when using these functions\nstrptime = datetime.strptime\nnow = datetime.now\n\n\nclass Request(object):\n \"\"\"Represents a client's HTTP request.\n\n Note:\n `Request` is not meant to be instantiated directly by responders.\n\n Args:\n env (dict): A WSGI environment dict passed in from the server. See\n also PEP-3333.\n\n Keyword Arguments:\n options (dict): Set of global options passed from the API handler.\n\n Attributes:\n env (dict): Reference to the WSGI environ ``dict`` passed in from the\n server. (See also PEP-3333.)\n context (dict): Dictionary to hold any data about the request which is\n specific to your app (e.g. session object). Falcon itself will\n not interact with this attribute after it has been initialized.\n context_type (class): Class variable that determines the factory or\n type to use for initializing the `context` attribute. By default,\n the framework will instantiate standard ``dict`` objects. However,\n you may override this behavior by creating a custom child class of\n ``falcon.Request``, and then passing that new class to\n `falcon.API()` by way of the latter's `request_type` parameter.\n\n Note:\n When overriding `context_type` with a factory function (as\n opposed to a class), the function is called like a method of\n the current Request instance. Therefore the first argument is\n the Request instance itself (self).\n scheme (str): URL scheme used for the request. 
Either 'http' or\n 'https'.\n\n Note:\n If the request was proxied, the scheme may not\n match what was originally requested by the client.\n :py:attr:`forwarded_scheme` can be used, instead,\n to handle such cases.\n\n forwarded_scheme (str): Original URL scheme requested by the\n user agent, if the request was proxied. Typical values are\n 'http' or 'https'.\n\n The following request headers are checked, in order of\n preference, to determine the forwarded scheme:\n\n - ``Forwarded``\n - ``X-Forwarded-For``\n\n If none of these headers are available, or if the\n Forwarded header is available but does not contain a\n \"proto\" parameter in the first hop, the value of\n :attr:`scheme` is returned instead.\n\n (See also: RFC 7239, Section 1)\n\n protocol (str): Deprecated alias for `scheme`. Will be removed\n in a future release.\n method (str): HTTP method requested (e.g., 'GET', 'POST', etc.)\n host (str): Host request header field\n forwarded_host (str): Original host request header as received\n by the first proxy in front of the application server.\n\n The following request headers are checked, in order of\n preference, to determine the forwarded scheme:\n\n - ``Forwarded``\n - ``X-Forwarded-Host``\n\n If none of the above headers are available, or if the\n Forwarded header is available but the \"host\"\n parameter is not included in the first hop, the value of\n :attr:`host` is returned instead.\n\n Note:\n Reverse proxies are often configured to set the Host\n header directly to the one that was originally\n requested by the user agent; in that case, using\n :attr:`host` is sufficient.\n\n (See also: RFC 7239, Section 4)\n\n port (int): Port used for the request. If the request URI does\n not specify a port, the default one for the given schema is\n returned (80 for HTTP and 443 for HTTPS).\n netloc (str): Returns the 'host:port' portion of the request\n URL. The port may be ommitted if it is the default one for\n the URL's schema (80 for HTTP and 443 for HTTPS).\n subdomain (str): Leftmost (i.e., most specific) subdomain from the\n hostname. If only a single domain name is given, `subdomain`\n will be ``None``.\n\n Note:\n If the hostname in the request is an IP address, the value\n for `subdomain` is undefined.\n\n app (str): The initial portion of the request URI's path that\n corresponds to the application object, so that the\n application knows its virtual \"location\". This may be an\n empty string, if the application corresponds to the \"root\"\n of the server.\n\n (Corresponds to the \"SCRIPT_NAME\" environ variable defined\n by PEP-3333.)\n uri (str): The fully-qualified URI for the request.\n url (str): Alias for `uri`.\n forwarded_uri (str): Original URI for proxied requests. Uses\n :attr:`forwarded_scheme` and :attr:`forwarded_host` in\n order to reconstruct the original URI requested by the user\n agent.\n relative_uri (str): The path and query string portion of the\n request URI, omitting the scheme and host.\n prefix (str): The prefix of the request URI, including scheme,\n host, and WSGI app (if any).\n forwarded_prefix (str): The prefix of the original URI for\n proxied requests. 
Uses :attr:`forwarded_scheme` and\n :attr:`forwarded_host` in order to reconstruct the\n original URI.\n path (str): Path portion of the request URI (not including query\n string).\n\n Note:\n `req.path` may be set to a new value by a `process_request()`\n middleware method in order to influence routing.\n query_string (str): Query string portion of the request URI, without\n the preceding '?' character.\n uri_template (str): The template for the route that was matched for\n this request. May be ``None`` if the request has not yet been\n routed, as would be the case for `process_request()` middleware\n methods. May also be ``None`` if your app uses a custom routing\n engine and the engine does not provide the URI template when\n resolving a route.\n remote_addr(str): IP address of the closest client or proxy to\n the WSGI server.\n\n This property is determined by the value of ``REMOTE_ADDR``\n in the WSGI environment dict. Since this address is not\n derived from an HTTP header, clients and proxies can not\n forge it.\n\n Note:\n If your application is behind one or more reverse\n proxies, you can use :py:attr:`~.access_route`\n to retrieve the real IP address of the client.\n access_route(list): IP address of the original client, as well\n as any known addresses of proxies fronting the WSGI server.\n\n The following request headers are checked, in order of\n preference, to determine the addresses:\n\n - ``Forwarded``\n - ``X-Forwarded-For``\n - ``X-Real-IP``\n\n If none of these headers are available, the value of\n :py:attr:`~.remote_addr` is used instead.\n\n Note:\n Per `RFC 7239`_, the access route may contain \"unknown\"\n and obfuscated identifiers, in addition to IPv4 and\n IPv6 addresses\n\n .. _RFC 7239: https://tools.ietf.org/html/rfc7239\n\n Warning:\n Headers can be forged by any client or proxy. Use this\n property with caution and validate all values before\n using them. Do not rely on the access route to authorize\n requests.\n\n forwarded (list): Value of the Forwarded header, as a parsed list\n of :class:`falcon.Forwarded` objects, or ``None`` if the header\n is missing.\n\n (See also: RFC 7239, Section 4)\n date (datetime): Value of the Date header, converted to a\n ``datetime`` instance. The header value is assumed to\n conform to RFC 1123.\n auth (str): Value of the Authorization header, or ``None`` if the\n header is missing.\n user_agent (str): Value of the User-Agent header, or ``None`` if the\n header is missing.\n referer (str): Value of the Referer header, or ``None`` if\n the header is missing.\n accept (str): Value of the Accept header, or '*/*' if the header is\n missing.\n client_accepts_json (bool): ``True`` if the Accept header indicates\n that the client is willing to receive JSON, otherwise ``False``.\n client_accepts_msgpack (bool): ``True`` if the Accept header indicates\n that the client is willing to receive MessagePack, otherwise\n ``False``.\n client_accepts_xml (bool): ``True`` if the Accept header indicates that\n the client is willing to receive XML, otherwise ``False``.\n cookies (dict):\n A dict of name/value cookie pairs. (See also:\n :ref:`Getting Cookies <getting-cookies>`)\n content_type (str): Value of the Content-Type header, or ``None`` if\n the header is missing.\n content_length (int): Value of the Content-Length header converted\n to an ``int``, or ``None`` if the header is missing.\n stream: File-like input object for reading the body of the\n request, if any. 
This object provides direct access to the\n server's data stream and is non-seekable. In order to\n avoid unintended side effects, and to provide maximum\n flexibility to the application, Falcon itself does not\n buffer or spool the data in any way.\n\n Since this object is provided by the WSGI\n server itself, rather than by Falcon, it may behave\n differently depending on how you host your app. For example,\n attempting to read more bytes than are expected (as\n determined by the Content-Length header) may or may not\n block indefinitely. It's a good idea to test your WSGI\n server to find out how it behaves.\n\n This can be particulary problematic when a request body is\n expected, but none is given. In this case, the following\n call blocks under certain WSGI servers::\n\n # Blocks if Content-Length is 0\n data = req.stream.read()\n\n The workaround is fairly straightforward, if verbose::\n\n # If Content-Length happens to be 0, or the header is\n # missing altogether, this will not block.\n data = req.stream.read(req.content_length or 0)\n\n Alternatively, when passing the stream directly to a\n consumer, it may be necessary to branch off the\n value of the Content-Length header::\n\n if req.content_length:\n doc = json.load(req.stream)\n\n For a slight performance cost, you may instead wish to use\n :py:attr:`bounded_stream`, which wraps the native WSGI\n input object to normalize its behavior.\n\n Note:\n If an HTML form is POSTed to the API using the\n *application/x-www-form-urlencoded* media type, and\n the :py:attr:`~.RequestOptions.auto_parse_form_urlencoded`\n option is set, the framework\n will consume `stream` in order to parse the parameters\n and merge them into the query string parameters. In this\n case, the stream will be left at EOF.\n\n bounded_stream: File-like wrapper around `stream` to normalize\n certain differences between the native input objects\n employed by different WSGI servers. In particular,\n `bounded_stream` is aware of the expected Content-Length of\n the body, and will never block on out-of-bounds reads,\n assuming the client does not stall while transmitting the\n data to the server.\n\n For example, the following will not block when\n Content-Length is 0 or the header is missing altogether::\n\n data = req.bounded_stream.read()\n\n This is also safe::\n\n doc = json.load(req.bounded_stream)\n\n expect (str): Value of the Expect header, or ``None`` if the\n header is missing.\n media (object): Returns a deserialized form of the request stream.\n When called, it will attempt to deserialize the request stream\n using the Content-Type header as well as the media-type handlers\n configured via :class:`falcon.RequestOptions`.\n\n See :ref:`media` for more information regarding media handling.\n\n Warning:\n This operation will consume the request stream the first time\n it's called and cache the results. Follow-up calls will just\n retrieve a cached version of the object.\n\n range (tuple of int): A 2-member ``tuple`` parsed from the value of the\n Range header.\n\n The two members correspond to the first and last byte\n positions of the requested resource, inclusive. 
Negative\n indices indicate offset from the end of the resource,\n where -1 is the last byte, -2 is the second-to-last byte,\n and so forth.\n\n Only continous ranges are supported (e.g., \"bytes=0-0,-1\" would\n result in an HTTPBadRequest exception when the attribute is\n accessed.)\n range_unit (str): Unit of the range parsed from the value of the\n Range header, or ``None`` if the header is missing\n if_match (str): Value of the If-Match header, or ``None`` if the\n header is missing.\n if_none_match (str): Value of the If-None-Match header, or ``None``\n if the header is missing.\n if_modified_since (datetime): Value of the If-Modified-Since header,\n or ``None`` if the header is missing.\n if_unmodified_since (datetime): Value of the If-Unmodified-Since\n header, or ``None`` if the header is missing.\n if_range (str): Value of the If-Range header, or ``None`` if the\n header is missing.\n\n headers (dict): Raw HTTP headers from the request with\n canonical dash-separated names. Parsing all the headers\n to create this dict is done the first time this attribute\n is accessed. This parsing can be costly, so unless you\n need all the headers in this format, you should use the\n `get_header` method or one of the convenience attributes\n instead, to get a value for a specific header.\n\n params (dict): The mapping of request query parameter names to their\n values. Where the parameter appears multiple times in the query\n string, the value mapped to that parameter key will be a list of\n all the values in the order seen.\n\n options (dict): Set of global options passed from the API handler.\n \"\"\"\n\n __slots__ = (\n '__dict__',\n '_bounded_stream',\n '_cached_access_route',\n '_cached_forwarded',\n '_cached_forwarded_prefix',\n '_cached_forwarded_uri',\n '_cached_headers',\n '_cached_prefix',\n '_cached_relative_uri',\n '_cached_uri',\n '_cookies',\n '_params',\n '_wsgierrors',\n 'content_type',\n 'context',\n 'env',\n 'method',\n 'options',\n 'path',\n 'query_string',\n 'stream',\n 'uri_template',\n '_media',\n )\n\n # Child classes may override this\n context_type = dict\n\n _wsgi_input_type_known = False\n _always_wrap_wsgi_input = False\n\n def __init__(self, env, options=None):\n self.env = env\n self.options = options if options else RequestOptions()\n\n self._wsgierrors = env['wsgi.errors']\n self.method = env['REQUEST_METHOD']\n\n self.uri_template = None\n self._media = None\n\n # NOTE(kgriffs): PEP 3333 specifies that PATH_INFO may be the\n # empty string, so normalize it in that case.\n path = env['PATH_INFO'] or '/'\n\n if six.PY3:\n # PEP 3333 specifies that PATH_INFO variable are always\n # \"bytes tunneled as latin-1\" and must be encoded back\n path = path.encode('latin1').decode('utf-8', 'replace')\n\n if (self.options.strip_url_path_trailing_slash and\n len(path) != 1 and path.endswith('/')):\n self.path = path[:-1]\n else:\n self.path = path\n\n # PERF(ueg1990): try/catch cheaper and faster (and more Pythonic)\n try:\n self.query_string = env['QUERY_STRING']\n except KeyError:\n self.query_string = ''\n self._params = {}\n else:\n if self.query_string:\n self._params = parse_query_string(\n self.query_string,\n keep_blank_qs_values=self.options.keep_blank_qs_values,\n parse_qs_csv=self.options.auto_parse_qs_csv,\n )\n\n else:\n self._params = {}\n\n self._cookies = None\n\n self._cached_access_route = None\n self._cached_forwarded = None\n self._cached_forwarded_prefix = None\n self._cached_forwarded_uri = None\n self._cached_headers = None\n self._cached_prefix = 
None\n self._cached_relative_uri = None\n self._cached_uri = None\n\n try:\n self.content_type = self.env['CONTENT_TYPE']\n except KeyError:\n self.content_type = None\n\n # NOTE(kgriffs): Wrap wsgi.input if needed to make read() more robust,\n # normalizing semantics between, e.g., gunicorn and wsgiref.\n #\n # PERF(kgriffs): Accessing via self when reading is faster than\n # via the class name. But we must set the variables using the\n # class name so they are picked up by all future instantiations\n # of the class.\n if not self._wsgi_input_type_known:\n Request._always_wrap_wsgi_input = isinstance(\n env['wsgi.input'],\n (NativeStream, InputWrapper)\n )\n\n Request._wsgi_input_type_known = True\n\n if self._always_wrap_wsgi_input:\n # TODO(kgriffs): In Falcon 2.0, stop wrapping stream since it is\n # less useful now that we have bounded_stream.\n self.stream = self._get_wrapped_wsgi_input()\n self._bounded_stream = self.stream\n else:\n self.stream = env['wsgi.input']\n self._bounded_stream = None # Lazy wrapping\n\n # PERF(kgriffs): Technically, we should spend a few more\n # cycles and parse the content type for real, but\n # this heuristic will work virtually all the time.\n if (\n self.options.auto_parse_form_urlencoded and\n self.content_type is not None and\n 'application/x-www-form-urlencoded' in self.content_type and\n\n # NOTE(kgriffs): Within HTTP, a payload for a GET or HEAD\n # request has no defined semantics, so we don't expect a\n # body in those cases. We would normally not expect a body\n # for OPTIONS either, but RFC 7231 does allow for it.\n self.method not in ('GET', 'HEAD')\n ):\n self._parse_form_urlencoded()\n\n self.context = self.context_type()\n\n def __repr__(self):\n return '<%s: %s %r>' % (self.__class__.__name__, self.method, self.url)\n\n # ------------------------------------------------------------------------\n # Properties\n # ------------------------------------------------------------------------\n\n user_agent = helpers.header_property('HTTP_USER_AGENT')\n auth = helpers.header_property('HTTP_AUTHORIZATION')\n\n expect = helpers.header_property('HTTP_EXPECT')\n\n if_match = helpers.header_property('HTTP_IF_MATCH')\n if_none_match = helpers.header_property('HTTP_IF_NONE_MATCH')\n if_range = helpers.header_property('HTTP_IF_RANGE')\n\n referer = helpers.header_property('HTTP_REFERER')\n\n @property\n def forwarded(self):\n # PERF(kgriffs): We could DRY up this memoization pattern using\n # a decorator, but that would incur additional overhead without\n # resorting to some trickery to rewrite the body of the method\n # itself (vs. simply wrapping it with some memoization logic).\n # At some point we might look into this but I don't think\n # it's worth it right now.\n if self._cached_forwarded is None:\n # PERF(kgriffs): If someone is calling this, they are probably\n # confident that the header exists, so most of the time we\n # expect this call to succeed. 
Therefore, we won't need to\n # pay the penalty of a raised exception in most cases, and\n # there is no need to spend extra cycles calling get() or\n # checking beforehand whether the key is in the dict.\n try:\n forwarded = self.env['HTTP_FORWARDED']\n except KeyError:\n return None\n\n parsed_elements = []\n\n for element in forwarded.split(','):\n parsed_element = Forwarded()\n\n # NOTE(kgriffs): Calling strip() is necessary here since\n # \"an HTTP list allows white spaces to occur between the\n # identifiers\".\n\n # (See also: RFC 7239, Section 7.1)\n for param in element.strip().split(';'):\n # PERF(kgriffs): partition() is faster than split().\n name, __, value = param.partition('=')\n if not value:\n # NOTE(kgriffs): The '=' separator was not found or\n # the value was missing. Ignore this malformed\n # param.\n continue\n\n # NOTE(kgriffs): According to RFC 7239, parameter\n # names are case-insensitive.\n name = name.lower()\n value = unquote_string(value)\n if name == 'by':\n parsed_element.dest = value\n elif name == 'for':\n parsed_element.src = value\n elif name == 'host':\n parsed_element.host = value\n elif name == 'proto':\n # NOTE(kgriffs): RFC 7239 only requires that\n # the \"proto\" value conform to the Host ABNF\n # described in RFC 7230. The Host ABNF, in turn,\n # does not require that the scheme be in any\n # particular case, so we normalize it here to be\n # consistent with the WSGI spec that *does*\n # require the value of 'wsgi.url_scheme' to be\n # either 'http' or 'https' (case-sensitive).\n parsed_element.scheme = value.lower()\n\n parsed_elements.append(parsed_element)\n\n self._cached_forwarded = parsed_elements\n\n return self._cached_forwarded\n\n @property\n def client_accepts_json(self):\n return self.client_accepts('application/json')\n\n @property\n def client_accepts_msgpack(self):\n return (self.client_accepts('application/x-msgpack') or\n self.client_accepts('application/msgpack'))\n\n @property\n def client_accepts_xml(self):\n return self.client_accepts('application/xml')\n\n @property\n def accept(self):\n # NOTE(kgriffs): Per RFC, a missing accept header is\n # equivalent to '*/*'\n try:\n return self.env['HTTP_ACCEPT'] or '*/*'\n except KeyError:\n return '*/*'\n\n @property\n def content_length(self):\n try:\n value = self.env['CONTENT_LENGTH']\n except KeyError:\n return None\n\n # NOTE(kgriffs): Normalize an empty value to behave as if\n # the header were not included; wsgiref, at least, inserts\n # an empty CONTENT_LENGTH value if the request does not\n # set the header. 
Gunicorn and uWSGI do not do this, but\n # others might if they are trying to match wsgiref's\n # behavior too closely.\n if not value:\n return None\n\n try:\n value_as_int = int(value)\n except ValueError:\n msg = 'The value of the header must be a number.'\n raise errors.HTTPInvalidHeader(msg, 'Content-Length')\n\n if value_as_int < 0:\n msg = 'The value of the header must be a positive number.'\n raise errors.HTTPInvalidHeader(msg, 'Content-Length')\n\n return value_as_int\n\n @property\n def bounded_stream(self):\n if self._bounded_stream is None:\n self._bounded_stream = self._get_wrapped_wsgi_input()\n\n return self._bounded_stream\n\n @property\n def date(self):\n return self.get_header_as_datetime('Date')\n\n @property\n def if_modified_since(self):\n return self.get_header_as_datetime('If-Modified-Since')\n\n @property\n def if_unmodified_since(self):\n return self.get_header_as_datetime('If-Unmodified-Since')\n\n @property\n def range(self):\n try:\n value = self.env['HTTP_RANGE']\n if '=' in value:\n unit, sep, req_range = value.partition('=')\n else:\n msg = \"The value must be prefixed with a range unit, e.g. 'bytes='\"\n raise errors.HTTPInvalidHeader(msg, 'Range')\n except KeyError:\n return None\n\n if ',' in req_range:\n msg = 'The value must be a continuous range.'\n raise errors.HTTPInvalidHeader(msg, 'Range')\n\n try:\n first, sep, last = req_range.partition('-')\n\n if not sep:\n raise ValueError()\n\n if first:\n return (int(first), int(last or -1))\n elif last:\n return (-int(last), -1)\n else:\n msg = 'The range offsets are missing.'\n raise errors.HTTPInvalidHeader(msg, 'Range')\n\n except ValueError:\n href = 'http://goo.gl/zZ6Ey'\n href_text = 'HTTP/1.1 Range Requests'\n msg = ('It must be a range formatted according to RFC 7233.')\n raise errors.HTTPInvalidHeader(msg, 'Range', href=href,\n href_text=href_text)\n\n @property\n def range_unit(self):\n try:\n value = self.env['HTTP_RANGE']\n\n if '=' in value:\n unit, sep, req_range = value.partition('=')\n return unit\n else:\n msg = \"The value must be prefixed with a range unit, e.g. 'bytes='\"\n raise errors.HTTPInvalidHeader(msg, 'Range')\n except KeyError:\n return None\n\n @property\n def app(self):\n # PERF(kgriffs): try..except is faster than get() assuming that\n # we normally expect the key to exist. Even though PEP-3333\n # allows WSGI servers to omit the key when the value is an\n # empty string, uwsgi, gunicorn, waitress, and wsgiref all\n # include it even in that case.\n try:\n return self.env['SCRIPT_NAME']\n except KeyError:\n return ''\n\n @property\n def scheme(self):\n return self.env['wsgi.url_scheme']\n\n @property\n def forwarded_scheme(self):\n # PERF(kgriffs): Since the Forwarded header is still relatively\n # new, we expect X-Forwarded-Proto to be more common, so\n # try to avoid calling self.forwarded if we can, since it uses a\n # try...catch that will usually result in a relatively expensive\n # raised exception.\n if 'HTTP_FORWARDED' in self.env:\n first_hop = self.forwarded[0]\n scheme = first_hop.scheme or self.scheme\n else:\n # PERF(kgriffs): This call should normally succeed, so\n # just go for it without wasting time checking it\n # first. 
Note also that the indexing operator is\n # slightly faster than using get().\n try:\n scheme = self.env['HTTP_X_FORWARDED_PROTO'].lower()\n except KeyError:\n scheme = self.env['wsgi.url_scheme']\n\n return scheme\n\n # TODO(kgriffs): Remove this deprecated alias in Falcon 2.0\n protocol = scheme\n\n @property\n def uri(self):\n if self._cached_uri is None:\n scheme = self.env['wsgi.url_scheme']\n\n # PERF: For small numbers of items, '+' is faster\n # than ''.join(...). Concatenation is also generally\n # faster than formatting.\n value = (scheme + '://' +\n self.netloc +\n self.relative_uri)\n\n self._cached_uri = value\n\n return self._cached_uri\n\n url = uri\n\n @property\n def forwarded_uri(self):\n if self._cached_forwarded_uri is None:\n # PERF: For small numbers of items, '+' is faster\n # than ''.join(...). Concatenation is also generally\n # faster than formatting.\n value = (self.forwarded_scheme + '://' +\n self.forwarded_host +\n self.relative_uri)\n\n self._cached_forwarded_uri = value\n\n return self._cached_forwarded_uri\n\n @property\n def relative_uri(self):\n if self._cached_relative_uri is None:\n if self.query_string:\n self._cached_relative_uri = (self.app + self.path + '?' +\n self.query_string)\n else:\n self._cached_relative_uri = self.app + self.path\n\n return self._cached_relative_uri\n\n @property\n def prefix(self):\n if self._cached_prefix is None:\n self._cached_prefix = (\n self.env['wsgi.url_scheme'] + '://' +\n self.netloc +\n self.app\n )\n\n return self._cached_prefix\n\n @property\n def forwarded_prefix(self):\n if self._cached_forwarded_prefix is None:\n self._cached_forwarded_prefix = (\n self.forwarded_scheme + '://' +\n self.forwarded_host +\n self.app\n )\n\n return self._cached_forwarded_prefix\n\n @property\n def host(self):\n try:\n # NOTE(kgriffs): Prefer the host header; the web server\n # isn't supposed to mess with it, so it should be what\n # the client actually sent.\n host_header = self.env['HTTP_HOST']\n host, port = parse_host(host_header)\n except KeyError:\n # PERF(kgriffs): According to PEP-3333, this header\n # will always be present.\n host = self.env['SERVER_NAME']\n\n return host\n\n @property\n def forwarded_host(self):\n # PERF(kgriffs): Since the Forwarded header is still relatively\n # new, we expect X-Forwarded-Host to be more common, so\n # try to avoid calling self.forwarded if we can, since it uses a\n # try...catch that will usually result in a relatively expensive\n # raised exception.\n if 'HTTP_FORWARDED' in self.env:\n first_hop = self.forwarded[0]\n host = first_hop.host or self.host\n else:\n # PERF(kgriffs): This call should normally succeed, assuming\n # that the caller is expecting a forwarded header, so\n # just go for it without wasting time checking it\n # first.\n try:\n host = self.env['HTTP_X_FORWARDED_HOST']\n except KeyError:\n host = self.host\n\n return host\n\n @property\n def subdomain(self):\n # PERF(kgriffs): .partition is slightly faster than .split\n subdomain, sep, remainder = self.host.partition('.')\n return subdomain if sep else None\n\n @property\n def headers(self):\n # NOTE(kgriffs: First time here will cache the dict so all we\n # have to do is clone it in the future.\n if self._cached_headers is None:\n headers = self._cached_headers = {}\n\n env = self.env\n for name, value in env.items():\n if name.startswith('HTTP_'):\n # NOTE(kgriffs): Don't take the time to fix the case\n # since headers are supposed to be case-insensitive\n # anyway.\n headers[name[5:].replace('_', '-')] = 
value\n\n elif name in WSGI_CONTENT_HEADERS:\n headers[name.replace('_', '-')] = value\n\n return self._cached_headers.copy()\n\n @property\n def params(self):\n return self._params\n\n @property\n def cookies(self):\n if self._cookies is None:\n # NOTE(tbug): We might want to look into parsing\n # cookies ourselves. The SimpleCookie is doing a\n # lot if stuff only required to SEND cookies.\n cookie_header = self.get_header('Cookie', default='')\n parser = SimpleCookie()\n for cookie_part in cookie_header.split('; '):\n try:\n parser.load(cookie_part)\n except http_cookies.CookieError:\n pass\n cookies = {}\n for morsel in parser.values():\n cookies[morsel.key] = morsel.value\n\n self._cookies = cookies\n\n return self._cookies.copy()\n\n @property\n def access_route(self):\n if self._cached_access_route is None:\n # NOTE(kgriffs): Try different headers in order of\n # preference; if none are found, fall back to REMOTE_ADDR.\n #\n # If one of these headers is present, but its value is\n # malformed such that we end up with an empty list, or\n # a non-empty list containing malformed values, go ahead\n # and return the results as-is. The alternative would be\n # to fall back to another header or to REMOTE_ADDR, but\n # that only masks the problem; the operator needs to be\n # aware that an upstream proxy is malfunctioning.\n\n if 'HTTP_FORWARDED' in self.env:\n self._cached_access_route = []\n for hop in self.forwarded:\n if hop.src is not None:\n host, __ = parse_host(hop.src)\n self._cached_access_route.append(host)\n elif 'HTTP_X_FORWARDED_FOR' in self.env:\n addresses = self.env['HTTP_X_FORWARDED_FOR'].split(',')\n self._cached_access_route = [ip.strip() for ip in addresses]\n elif 'HTTP_X_REAL_IP' in self.env:\n self._cached_access_route = [self.env['HTTP_X_REAL_IP']]\n elif 'REMOTE_ADDR' in self.env:\n self._cached_access_route = [self.env['REMOTE_ADDR']]\n else:\n self._cached_access_route = []\n\n return self._cached_access_route\n\n @property\n def remote_addr(self):\n return self.env.get('REMOTE_ADDR')\n\n @property\n def port(self):\n try:\n host_header = self.env['HTTP_HOST']\n\n default_port = 80 if self.env['wsgi.url_scheme'] == 'http' else 443\n host, port = parse_host(host_header, default_port=default_port)\n except KeyError:\n # NOTE(kgriffs): Normalize to an int, since that is the type\n # returned by parse_host().\n #\n # NOTE(kgriffs): In the case that SERVER_PORT was used,\n # PEP-3333 requires that the port never be an empty string.\n port = int(self.env['SERVER_PORT'])\n\n return port\n\n @property\n def netloc(self):\n env = self.env\n protocol = env['wsgi.url_scheme']\n\n # NOTE(kgriffs): According to PEP-3333 we should first\n # try to use the Host header if present.\n #\n # PERF(kgriffs): try..except is faster than get() when we\n # expect the key to be present most of the time.\n try:\n netloc_value = env['HTTP_HOST']\n except KeyError:\n netloc_value = env['SERVER_NAME']\n\n port = env['SERVER_PORT']\n if protocol == 'https':\n if port != '443':\n netloc_value += ':' + port\n else:\n if port != '80':\n netloc_value += ':' + port\n\n return netloc_value\n\n @property\n def media(self):\n if self._media:\n return self._media\n\n handler = self.options.media_handlers.find_by_media_type(\n self.content_type,\n self.options.default_media_type\n )\n\n # Consume the stream\n raw = self.bounded_stream.read()\n\n # Deserialize and Return\n self._media = handler.deserialize(raw)\n return self._media\n\n # 
------------------------------------------------------------------------\n # Methods\n # ------------------------------------------------------------------------\n\n def client_accepts(self, media_type):\n \"\"\"Determine whether or not the client accepts a given media type.\n\n Args:\n media_type (str): An Internet media type to check.\n\n Returns:\n bool: ``True`` if the client has indicated in the Accept header\n that it accepts the specified media type. Otherwise, returns\n ``False``.\n \"\"\"\n\n accept = self.accept\n\n # PERF(kgriffs): Usually the following will be true, so\n # try it first.\n if (accept == media_type) or (accept == '*/*'):\n return True\n\n # Fall back to full-blown parsing\n try:\n return mimeparse.quality(media_type, accept) != 0.0\n except ValueError:\n return False\n\n def client_prefers(self, media_types):\n \"\"\"Return the client's preferred media type, given several choices.\n\n Args:\n media_types (iterable of str): One or more Internet media types\n from which to choose the client's preferred type. This value\n **must** be an iterable collection of strings.\n\n Returns:\n str: The client's preferred media type, based on the Accept\n header. Returns ``None`` if the client does not accept any\n of the given types.\n \"\"\"\n\n try:\n # NOTE(kgriffs): best_match will return '' if no match is found\n preferred_type = mimeparse.best_match(media_types, self.accept)\n except ValueError:\n # Value for the accept header was not formatted correctly\n preferred_type = ''\n\n return (preferred_type if preferred_type else None)\n\n def get_header(self, name, required=False, default=None):\n \"\"\"Retrieve the raw string value for the given header.\n\n Args:\n name (str): Header name, case-insensitive (e.g., 'Content-Type')\n\n Keyword Args:\n required (bool): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning gracefully when the\n header is not found (default ``False``).\n default (any): Value to return if the header\n is not found (default ``None``).\n\n Returns:\n str: The value of the specified header if it exists, or\n the default value if the header is not found and is not\n required.\n\n Raises:\n HTTPBadRequest: The header was not found in the request, but\n it was required.\n\n \"\"\"\n\n wsgi_name = name.upper().replace('-', '_')\n\n # Use try..except to optimize for the header existing in most cases\n try:\n # Don't take the time to cache beforehand, using HTTP naming.\n # This will be faster, assuming that most headers are looked\n # up only once, and not all headers will be requested.\n return self.env['HTTP_' + wsgi_name]\n\n except KeyError:\n # NOTE(kgriffs): There are a couple headers that do not\n # use the HTTP prefix in the env, so try those. 
We expect\n # people to usually just use the relevant helper properties\n # to access these instead of .get_header.\n if wsgi_name in WSGI_CONTENT_HEADERS:\n try:\n return self.env[wsgi_name]\n except KeyError:\n pass\n\n if not required:\n return default\n\n raise errors.HTTPMissingHeader(name)\n\n def get_header_as_datetime(self, header, required=False, obs_date=False):\n \"\"\"Return an HTTP header with HTTP-Date values as a datetime.\n\n Args:\n name (str): Header name, case-insensitive (e.g., 'Date')\n\n Keyword Args:\n required (bool): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning gracefully when the\n header is not found (default ``False``).\n obs_date (bool): Support obs-date formats according to\n RFC 7231, e.g.: \"Sunday, 06-Nov-94 08:49:37 GMT\"\n (default ``False``).\n\n Returns:\n datetime: The value of the specified header if it exists,\n or ``None`` if the header is not found and is not required.\n\n Raises:\n HTTPBadRequest: The header was not found in the request, but\n it was required.\n HttpInvalidHeader: The header contained a malformed/invalid value.\n \"\"\"\n\n try:\n http_date = self.get_header(header, required=required)\n return util.http_date_to_dt(http_date, obs_date=obs_date)\n except TypeError:\n # When the header does not exist and isn't required\n return None\n except ValueError:\n msg = ('It must be formatted according to RFC 7231, '\n 'Section 7.1.1.1')\n raise errors.HTTPInvalidHeader(msg, header)\n\n def get_param(self, name, required=False, store=None, default=None):\n \"\"\"Return the raw value of a query string parameter as a string.\n\n Note:\n If an HTML form is POSTed to the API using the\n *application/x-www-form-urlencoded* media type, Falcon can\n automatically parse the parameters from the request body\n and merge them into the query string parameters. To enable\n this functionality, set\n :py:attr:`~.RequestOptions.auto_parse_form_urlencoded` to\n ``True`` via :any:`API.req_options`.\n\n If a key appears more than once in the form data, one of the\n values will be returned as a string, but it is undefined which\n one. Use `req.get_param_as_list()` to retrieve all the values.\n\n Note:\n Similar to the way multiple keys in form data is handled,\n if a query parameter is assigned a comma-separated list of\n values (e.g., 'foo=a,b,c'), only one of those values will be\n returned, and it is undefined which one. Use\n `req.get_param_as_list()` to retrieve all the values.\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'sort').\n\n Keyword Args:\n required (bool): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found (default ``False``).\n store (dict): A ``dict``-like object in which to place\n the value of the param, but only if the param is present.\n default (any): If the param is not found returns the\n given value instead of None\n\n Returns:\n str: The value of the param as a string, or ``None`` if param is\n not found and is not required.\n\n Raises:\n HTTPBadRequest: A required param is missing from the request.\n\n \"\"\"\n\n params = self._params\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in params:\n # NOTE(warsaw): If the key appeared multiple times, it will be\n # stored internally as a list. 
We do not define which one\n # actually gets returned, but let's pick the last one for grins.\n param = params[name]\n if isinstance(param, list):\n param = param[-1]\n\n if store is not None:\n store[name] = param\n\n return param\n\n if not required:\n return default\n\n raise errors.HTTPMissingParam(name)\n\n def get_param_as_int(self, name,\n required=False, min=None, max=None, store=None):\n \"\"\"Return the value of a query string parameter as an int.\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'limit').\n\n Keyword Args:\n required (bool): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found or is not an integer (default\n ``False``).\n min (int): Set to the minimum value allowed for this\n param. If the param is found and it is less than min, an\n ``HTTPError`` is raised.\n max (int): Set to the maximum value allowed for this\n param. If the param is found and its value is greater than\n max, an ``HTTPError`` is raised.\n store (dict): A ``dict``-like object in which to place\n the value of the param, but only if the param is found\n (default ``None``).\n\n Returns:\n int: The value of the param if it is found and can be converted to\n an integer. If the param is not found, returns ``None``, unless\n `required` is ``True``.\n\n Raises\n HTTPBadRequest: The param was not found in the request, even though\n it was required to be there. Also raised if the param's value\n falls outside the given interval, i.e., the value must be in\n the interval: min <= value <= max to avoid triggering an error.\n\n \"\"\"\n\n params = self._params\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in params:\n val = params[name]\n if isinstance(val, list):\n val = val[-1]\n\n try:\n val = int(val)\n except ValueError:\n msg = 'The value must be an integer.'\n raise errors.HTTPInvalidParam(msg, name)\n\n if min is not None and val < min:\n msg = 'The value must be at least ' + str(min)\n raise errors.HTTPInvalidParam(msg, name)\n\n if max is not None and max < val:\n msg = 'The value may not exceed ' + str(max)\n raise errors.HTTPInvalidParam(msg, name)\n\n if store is not None:\n store[name] = val\n\n return val\n\n if not required:\n return None\n\n raise errors.HTTPMissingParam(name)\n\n def get_param_as_bool(self, name, required=False, store=None,\n blank_as_true=False):\n \"\"\"Return the value of a query string parameter as a boolean\n\n The following boolean strings are supported::\n\n TRUE_STRINGS = ('true', 'True', 'yes', '1', 'on')\n FALSE_STRINGS = ('false', 'False', 'no', '0', 'off')\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'detailed').\n\n Keyword Args:\n required (bool): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found or is not a recognized boolean\n string (default ``False``).\n store (dict): A ``dict``-like object in which to place\n the value of the param, but only if the param is found (default\n ``None``).\n blank_as_true (bool): If ``True``, an empty string value will be\n treated as ``True`` (default ``False``). 
Normally empty strings\n are ignored; if you would like to recognize such parameters, you\n must set the `keep_blank_qs_values` request option to ``True``.\n Request options are set globally for each instance of\n ``falcon.API`` through the `req_options` attribute.\n\n Returns:\n bool: The value of the param if it is found and can be converted\n to a ``bool``. If the param is not found, returns ``None``\n unless required is ``True``.\n\n Raises:\n HTTPBadRequest: A required param is missing from the request.\n\n \"\"\"\n\n params = self._params\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in params:\n val = params[name]\n if isinstance(val, list):\n val = val[-1]\n\n if val in TRUE_STRINGS:\n val = True\n elif val in FALSE_STRINGS:\n val = False\n elif blank_as_true and not val:\n val = True\n else:\n msg = 'The value of the parameter must be \"true\" or \"false\".'\n raise errors.HTTPInvalidParam(msg, name)\n\n if store is not None:\n store[name] = val\n\n return val\n\n if not required:\n return None\n\n raise errors.HTTPMissingParam(name)\n\n def get_param_as_list(self, name,\n transform=None, required=False, store=None):\n \"\"\"Return the value of a query string parameter as a list.\n\n List items must be comma-separated or must be provided\n as multiple instances of the same param in the query string\n ala *application/x-www-form-urlencoded*.\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'ids').\n\n Keyword Args:\n transform (callable): An optional transform function\n that takes as input each element in the list as a ``str`` and\n outputs a transformed element for inclusion in the list that\n will be returned. For example, passing ``int`` will\n transform list items into numbers.\n required (bool): Set to ``True`` to raise ``HTTPBadRequest``\n instead of returning ``None`` when the parameter is not\n found (default ``False``).\n store (dict): A ``dict``-like object in which to place\n the value of the param, but only if the param is found (default\n ``None``).\n\n Returns:\n list: The value of the param if it is found. Otherwise, returns\n ``None`` unless required is True. Empty list elements will be\n discarded. 
For example, the following query strings would\n both result in `['1', '3']`::\n\n things=1,,3\n things=1&things=&things=3\n\n Raises:\n HTTPBadRequest: A required param is missing from the request.\n HTTPInvalidParam: A transform function raised an instance of\n ``ValueError``.\n\n \"\"\"\n\n params = self._params\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in params:\n items = params[name]\n\n # NOTE(warsaw): When a key appears multiple times in the request\n # query, it will already be represented internally as a list.\n # NOTE(kgriffs): Likewise for comma-delimited values.\n if not isinstance(items, list):\n items = [items]\n\n # PERF(kgriffs): Use if-else rather than a DRY approach\n # that sets transform to a passthrough function; avoids\n # function calling overhead.\n if transform is not None:\n try:\n items = [transform(i) for i in items]\n\n except ValueError:\n msg = 'The value is not formatted correctly.'\n raise errors.HTTPInvalidParam(msg, name)\n\n if store is not None:\n store[name] = items\n\n return items\n\n if not required:\n return None\n\n raise errors.HTTPMissingParam(name)\n\n def get_param_as_datetime(self, name, format_string='%Y-%m-%dT%H:%M:%SZ',\n required=False, store=None):\n \"\"\"Return the value of a query string parameter as a datetime.\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'ids').\n\n Keyword Args:\n format_string (str): String used to parse the param value\n into a datetime. Any format recognized by strptime() is\n supported (default ``'%Y-%m-%dT%H:%M:%SZ'``).\n required (bool): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found (default ``False``).\n store (dict): A ``dict``-like object in which to place\n the value of the param, but only if the param is found (default\n ``None``).\n Returns:\n datetime.datetime: The value of the param if it is found and can be\n converted to a ``datetime`` according to the supplied format\n string. If the param is not found, returns ``None`` unless\n required is ``True``.\n\n Raises:\n HTTPBadRequest: A required param is missing from the request.\n HTTPInvalidParam: A transform function raised an instance of\n ``ValueError``.\n \"\"\"\n\n param_value = self.get_param(name, required=required)\n\n if param_value is None:\n return None\n\n try:\n date_time = strptime(param_value, format_string)\n except ValueError:\n msg = 'The date value does not match the required format.'\n raise errors.HTTPInvalidParam(msg, name)\n\n if store is not None:\n store[name] = date_time\n\n return date_time\n\n def get_param_as_date(self, name, format_string='%Y-%m-%d',\n required=False, store=None):\n \"\"\"Return the value of a query string parameter as a date.\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'ids').\n\n Keyword Args:\n format_string (str): String used to parse the param value\n into a date. Any format recognized by strptime() is\n supported (default ``\"%Y-%m-%d\"``).\n required (bool): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found (default ``False``).\n store (dict): A ``dict``-like object in which to place\n the value of the param, but only if the param is found (default\n ``None``).\n Returns:\n datetime.date: The value of the param if it is found and can be\n converted to a ``date`` according to the supplied format\n string. 
If the param is not found, returns ``None`` unless\n required is ``True``.\n\n Raises:\n HTTPBadRequest: A required param is missing from the request.\n HTTPInvalidParam: A transform function raised an instance of\n ``ValueError``.\n \"\"\"\n\n date_time = self.get_param_as_datetime(name, format_string, required)\n if date_time:\n date = date_time.date()\n else:\n return None\n\n if store is not None:\n store[name] = date\n\n return date\n\n def get_param_as_dict(self, name, required=False, store=None):\n \"\"\"Return the value of a query string parameter as a dict.\n\n Given a JSON value, parse and return it as a dict.\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'payload').\n\n Keyword Args:\n required (bool): Set to ``True`` to raise ``HTTPBadRequest``\n instead of returning ``None`` when the parameter is not\n found (default ``False``).\n store (dict): A ``dict``-like object in which to place the\n value of the param, but only if the param is found\n (default ``None``).\n\n Returns:\n dict: The value of the param if it is found. Otherwise, returns\n ``None`` unless required is ``True``.\n\n Raises:\n HTTPBadRequest: A required param is missing from the request.\n HTTPInvalidParam: The parameter's value could not be parsed as JSON.\n \"\"\"\n\n param_value = self.get_param(name, required=required)\n\n if param_value is None:\n return None\n\n try:\n val = json.loads(param_value)\n except ValueError:\n msg = 'It could not be parsed as JSON.'\n raise errors.HTTPInvalidParam(msg, name)\n\n if store is not None:\n store[name] = val\n\n return val\n\n def log_error(self, message):\n \"\"\"Write an error message to the server's log.\n\n Prepends timestamp and request info to message, and writes the\n result out to the WSGI server's error stream (`wsgi.error`).\n\n Args:\n message (str or unicode): Description of the problem. On Python 2,\n instances of ``unicode`` will be converted to UTF-8.\n\n \"\"\"\n\n if self.query_string:\n query_string_formatted = '?' + self.query_string\n else:\n query_string_formatted = ''\n\n log_line = (\n DEFAULT_ERROR_LOG_FORMAT.\n format(now(), self.method, self.path, query_string_formatted)\n )\n\n if six.PY3:\n self._wsgierrors.write(log_line + message + '\\n')\n else:\n if isinstance(message, unicode):\n message = message.encode('utf-8')\n\n self._wsgierrors.write(log_line.encode('utf-8'))\n self._wsgierrors.write(message + '\\n')\n\n # ------------------------------------------------------------------------\n # Helpers\n # ------------------------------------------------------------------------\n\n def _get_wrapped_wsgi_input(self):\n try:\n content_length = self.content_length or 0\n\n # NOTE(kgriffs): This branch is indeed covered in test_wsgi.py\n # even though coverage isn't able to detect it.\n except errors.HTTPInvalidHeader: # pragma: no cover\n # NOTE(kgriffs): The content-length header was specified,\n # but it had an invalid value. Assume no content.\n content_length = 0\n\n return helpers.BoundedStream(self.env['wsgi.input'], content_length)\n\n def _parse_form_urlencoded(self):\n content_length = self.content_length\n if not content_length:\n return\n\n body = self.stream.read(content_length)\n\n # NOTE(kgriffs): According to http://goo.gl/6rlcux the\n # body should be US-ASCII. 
Enforcing this also helps\n # catch malicious input.\n try:\n body = body.decode('ascii')\n except UnicodeDecodeError:\n body = None\n self.log_error('Non-ASCII characters found in form body '\n 'with Content-Type of '\n 'application/x-www-form-urlencoded. Body '\n 'will be ignored.')\n\n if body:\n extra_params = parse_query_string(\n body,\n keep_blank_qs_values=self.options.keep_blank_qs_values,\n parse_qs_csv=self.options.auto_parse_qs_csv,\n )\n\n self._params.update(extra_params)\n\n\n# PERF: To avoid typos and improve storage space and speed over a dict.\nclass RequestOptions(object):\n \"\"\"Defines a set of configurable request options.\n\n An instance of this class is exposed via :any:`API.req_options` for\n configuring certain :py:class:`~.Request` behaviors.\n\n Attributes:\n keep_blank_qs_values (bool): Set to ``True`` to keep query string\n fields even if they do not have a value (default ``False``).\n For comma-separated values, this option also determines\n whether or not empty elements in the parsed list are\n retained.\n\n auto_parse_form_urlencoded: Set to ``True`` in order to\n automatically consume the request stream and merge the\n results into the request's query string params when the\n request's content type is\n *application/x-www-form-urlencoded* (default ``False``).\n\n Enabling this option makes the form parameters accessible\n via :attr:`~.params`, :meth:`~.get_param`, etc.\n\n Warning:\n When this option is enabled, the request's body\n stream will be left at EOF. The original data is\n not retained by the framework.\n\n Note:\n The character encoding for fields, before\n percent-encoding non-ASCII bytes, is assumed to be\n UTF-8. The special `_charset_` field is ignored if\n present.\n\n Falcon expects form-encoded request bodies to be\n encoded according to the standard W3C algorithm (see\n also http://goo.gl/6rlcux).\n\n auto_parse_qs_csv: Set to ``False`` to treat commas in a query\n string value as literal characters, rather than as a comma-\n separated list (default ``True``). When this option is\n enabled, the value will be split on any non-percent-encoded\n commas. Disable this option when encoding lists as multiple\n occurrences of the same parameter, and when values may be\n encoded in alternative formats in which the comma character\n is significant.\n\n strip_url_path_trailing_slash: Set to ``False`` in order to\n retain a trailing slash, if present, at the end of the URL\n path (default ``True``). When this option is enabled,\n the URL path is normalized by stripping the trailing slash\n character. This lets the application define a single route\n to a resource for a path that may or may not end in a\n forward slash. However, this behavior can be problematic in\n certain cases, such as when working with authentication\n schemes that employ URL-based signatures.\n\n default_media_type (str): The default media-type to use when\n deserializing a response. 
This value is normally set to the media\n type provided when a :class:`falcon.API` is initialized; however,\n if created independently, this will default to the\n ``DEFAULT_MEDIA_TYPE`` specified by Falcon.\n\n media_handlers (Handlers): A dict-like object that allows you to\n configure the media-types that you would like to handle.\n By default, a handler is provided for the ``application/json``\n media type.\n \"\"\"\n __slots__ = (\n 'keep_blank_qs_values',\n 'auto_parse_form_urlencoded',\n 'auto_parse_qs_csv',\n 'strip_url_path_trailing_slash',\n 'default_media_type',\n 'media_handlers',\n )\n\n def __init__(self):\n self.keep_blank_qs_values = False\n self.auto_parse_form_urlencoded = False\n self.auto_parse_qs_csv = True\n self.strip_url_path_trailing_slash = True\n self.default_media_type = DEFAULT_MEDIA_TYPE\n self.media_handlers = Handlers()\n\n\nclass Forwarded(object):\n \"\"\"Represents a parsed Forwarded header.\n\n (See also: RFC 7239, Section 4)\n\n Attributes:\n src (str): The value of the \"for\" parameter, or\n ``None`` if the parameter is absent. Identifies the\n node making the request to the proxy.\n dest (str): The value of the \"by\" parameter, or\n ``None`` if the parameter is absent. Identifies the\n client-facing interface of the proxy.\n host (str): The value of the \"host\" parameter, or\n ``None`` if the parameter is absent. Provides the host\n request header field as received by the proxy.\n scheme (str): The value of the \"proto\" parameter, or\n ``None`` if the parameter is absent. Indicates the\n protocol that was used to make the request to\n the proxy.\n \"\"\"\n\n # NOTE(kgriffs): Use \"client\" since \"for\" is a keyword, and\n # \"scheme\" instead of \"proto\" to be consistent with the\n # falcon.Request interface.\n __slots__ = ('src', 'dest', 'host', 'scheme')\n\n def __init__(self):\n self.src = None\n self.dest = None\n self.host = None\n self.scheme = None\n"}
|
{"falcon/request.py": [{"type": "function", "name": "Request.get_param_as_uuid", "lines": [1277, 1339], "signature": "def get_param_as_uuid(self, name, required=False, store=None):", "doc": "Return the value of a query string parameter as an UUID.\n\nThe value to convert must conform to the standard UUID string\nrepresentation per RFC 4122. For example, the following\nstrings are all valid::\n\n # Lowercase\n '64be949b-3433-4d36-a4a8-9f19d352fee8'\n\n # Uppercase\n 'BE71ECAA-F719-4D42-87FD-32613C2EEB60'\n\n # Mixed\n '81c8155C-D6de-443B-9495-39Fa8FB239b5'\n\nArgs:\n name (str): Parameter name, case-sensitive (e.g., 'id').\n\nKeyword Args:\n required (bool): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found or is not a UUID (default\n ``False``).\n store (dict): A ``dict``-like object in which to place\n the value of the param, but only if the param is found\n (default ``None``).\n\nReturns:\n UUID: The value of the param if it is found and can be converted to\n a ``UUID``. If the param is not found, returns ``None``, unless\n `required` is ``True``.\n\nRaises\n HTTPBadRequest: The param was not found in the request, even though\n it was required to be there, or it was found but could not\n be converted to a ``UUID``."}]}
| null |
["tests/test_query_params.py::TestQueryParams::test_required[get_param_as_uuid-simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_required[get_param_as_uuid-simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_uuid[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_uuid[simulate_request_post_query_params]"]
|
["tests/test_query_params.py::TestQueryParams::test_none[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_none[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_blank[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_blank[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_simple[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_simple[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_percent_encoded[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_percent_encoded[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_option_auto_parse_qs_csv_simple_false[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_option_auto_parse_qs_csv_simple_false[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_option_auto_parse_qs_csv_simple_true[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_option_auto_parse_qs_csv_simple_true[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_option_auto_parse_qs_csv_complex_false[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_option_auto_parse_qs_csv_complex_false[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_bad_percentage[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_bad_percentage[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_allowed_names[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_allowed_names[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_required[get_param-simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_required[get_param-simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_required[get_param_as_int-simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_required[get_param_as_int-simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_required[get_param_as_bool-simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_required[get_param_as_bool-simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_required[get_param_as_list-simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_required[get_param_as_list-simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_int[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_int[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_int_neg[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_int_neg[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_boolean[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_boolean[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_boolean_blank[simulate_request_get_query_params]", 
"tests/test_query_params.py::TestQueryParams::test_boolean_blank[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_list_type[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_list_type[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_list_type_blank[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_list_type_blank[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_list_transformer[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_list_transformer[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_param_property[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_param_property[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_multiple_form_keys[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_multiple_form_keys[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_multiple_keys_as_bool[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_multiple_keys_as_bool[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_multiple_keys_as_int[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_multiple_keys_as_int[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_multiple_form_keys_as_list[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_multiple_form_keys_as_list[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_date_valid[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_date_valid[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_date_missing_param[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_date_missing_param[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_date_valid_with_format[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_date_valid_with_format[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_date_store[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_date_store[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_date_invalid[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_date_invalid[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_datetime_valid[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_datetime_valid[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_datetime_missing_param[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_datetime_missing_param[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_datetime_valid_with_format[simulate_request_get_query_params]", 
"tests/test_query_params.py::TestQueryParams::test_get_datetime_valid_with_format[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_datetime_store[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_datetime_store[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_datetime_invalid[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_datetime_invalid[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_dict_valid[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_dict_valid[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_dict_missing_param[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_dict_missing_param[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_dict_store[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_dict_store[simulate_request_post_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_dict_invalid[simulate_request_get_query_params]", "tests/test_query_params.py::TestQueryParams::test_get_dict_invalid[simulate_request_post_query_params]", "tests/test_query_params.py::TestPostQueryParams::test_http_methods_body_expected[POST]", "tests/test_query_params.py::TestPostQueryParams::test_http_methods_body_expected[PUT]", "tests/test_query_params.py::TestPostQueryParams::test_http_methods_body_expected[PATCH]", "tests/test_query_params.py::TestPostQueryParams::test_http_methods_body_expected[DELETE]", "tests/test_query_params.py::TestPostQueryParams::test_http_methods_body_expected[OPTIONS]", "tests/test_query_params.py::TestPostQueryParams::test_http_methods_body_not_expected[GET]", "tests/test_query_params.py::TestPostQueryParams::test_http_methods_body_not_expected[HEAD]", "tests/test_query_params.py::TestPostQueryParams::test_non_ascii", "tests/test_query_params.py::TestPostQueryParams::test_empty_body", "tests/test_query_params.py::TestPostQueryParams::test_empty_body_no_content_length", "tests/test_query_params.py::TestPostQueryParams::test_explicitly_disable_auto_parse", "tests/test_query_params.py::TestPostQueryParamsDefaultBehavior::test_dont_auto_parse_by_default"]
|
77d5e6394a88ead151c9469494749f95f06b24bf
|
{"first_commit_time": 1513634618.0, "pr_title": "feat(Request): Add get_param_as_uuid()", "pr_body": "Closes #1132", "pr_timeline": [{"time": 1513635609.0, "comment": "# [Codecov](https://codecov.io/gh/falconry/falcon/pull/1174?src=pr&el=h1) Report\n> Merging [#1174](https://codecov.io/gh/falconry/falcon/pull/1174?src=pr&el=desc) into [master](https://codecov.io/gh/falconry/falcon/commit/919fd3f5a3129d04f1c7d23f5eff440ec4598e35?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/falconry/falcon/pull/1174?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1174 +/- ##\n======================================\n Coverage 100% 100% \n======================================\n Files 37 37 \n Lines 2419 2437 +18 \n Branches 350 354 +4 \n======================================\n+ Hits 2419 2437 +18\n```\n\n\n| [Impacted Files](https://codecov.io/gh/falconry/falcon/pull/1174?src=pr&el=tree) | Coverage \u0394 | |\n|---|---|---|\n| [falcon/request.py](https://codecov.io/gh/falconry/falcon/pull/1174/diff?src=pr&el=tree#diff-ZmFsY29uL3JlcXVlc3QucHk=) | `100% <100%> (\u00f8)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/falconry/falcon/pull/1174?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `\u0394 = absolute <relative> (impact)`, `\u00f8 = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/falconry/falcon/pull/1174?src=pr&el=footer). Last update [919fd3f...a5ceeec](https://codecov.io/gh/falconry/falcon/pull/1174?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"}], "issues": {"1132": {"issue_title": "Add a get_param_as_uuid method to `falcon.Request`", "issue_body": "It would be great to be able to get a parameter and to cast it to an UUID, raising a HTTPBadRequest otherwise.\r\n", "issue_timeline": []}}}
|
|
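The `Request.get_param_as_uuid()` helper documented in the `new_components` entry and `problem_statement` above lends itself to a short usage sketch. Assuming the method behaves as that quoted docstring describes, a resource might call it as follows; `ItemResource`, the `/items` route, and the parameter names are illustrative only, not part of the source row:

```python
import falcon


class ItemResource(object):

    def on_get(self, req, resp):
        # Returns a uuid.UUID instance, or None when the 'id' param is absent.
        item_id = req.get_param_as_uuid('id')

        # With required=True, a missing or malformed value raises an instance
        # of falcon.HTTPBadRequest, per the docstring quoted above.
        owner_id = req.get_param_as_uuid('owner', required=True)

        # When a store dict is given, the parsed value is placed in it,
        # but only if the param is present.
        parsed = {}
        req.get_param_as_uuid('parent', store=parsed)

        resp.media = {
            'item': str(item_id) if item_id else None,
            'owner': str(owner_id),
            'has_parent': 'parent' in parsed,
        }


app = falcon.API()
app.add_route('/items', ItemResource())
```

Under those assumptions, a request such as `GET /items?owner=64be949b-3433-4d36-a4a8-9f19d352fee8` (the lowercase example from the quoted docstring) would yield a `uuid.UUID` for `owner_id`, while omitting `owner` would produce a 400 response.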
falconry/falcon
| 1,603
|
https://github.com/falconry/falcon/pull/1603
|
falconry__falcon-1603
|
[]
|
b68c4ff5b8ddf83f84d0e9d7875fb478995a70eb
|
diff --git a/docs/_newsfragments/1603-smart-exception-handler-precedence.feature.rst b/docs/_newsfragments/1603-smart-exception-handler-precedence.feature.rst
new file mode 100644
index 000000000..ba23500ae
--- /dev/null
+++ b/docs/_newsfragments/1603-smart-exception-handler-precedence.feature.rst
@@ -0,0 +1,1 @@
+Exceptions are now handled by the registered handler for the most specific matching exception class, rather than in reverse order of registration. "Specificity" is determined by the method resolution order of the raised exception type. (See :meth:`~.App.add_error_handler` for more details.)
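The newsfragment above summarizes the new precedence rules: the handler registered for the most specific matching exception class wins, and ties between identical classes go to the most recent registration. Below is a minimal sketch of that behavior, assuming the `falcon.App` registration API shown in the `falcon/app.py` hunk that follows; the handler bodies are hypothetical stand-ins, and only the registration order and the stated outcomes restate the quoted docstring:

```python
import falcon


def custom_handle_not_found(req, resp, ex, params):
    resp.status = falcon.HTTP_404


def custom_handle_http_error(req, resp, ex, params):
    resp.status = ex.status


def custom_handle_uncaught_exception(req, resp, ex, params):
    resp.status = falcon.HTTP_500


def custom_handle_404(req, resp, ex, params):
    resp.status = falcon.HTTP_404
    resp.media = {'title': 'Not found'}


app = falcon.App()
app.add_error_handler(falcon.HTTPNotFound, custom_handle_not_found)
app.add_error_handler(falcon.HTTPError, custom_handle_http_error)
app.add_error_handler(Exception, custom_handle_uncaught_exception)
app.add_error_handler(falcon.HTTPNotFound, custom_handle_404)

# With these registrations:
#   - falcon.HTTPForbidden -> custom_handle_http_error (closest matching class
#                             in the raised exception's MRO)
#   - falcon.HTTPNotFound  -> custom_handle_404 (same class registered twice;
#                             the most recent registration wins)
#   - ValueError           -> custom_handle_uncaught_exception (only the
#                             Exception handler matches)
```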
diff --git a/falcon/app.py b/falcon/app.py
index be2ef5df8..9313cca54 100644
--- a/falcon/app.py
+++ b/falcon/app.py
@@ -207,7 +207,7 @@ def __init__(self, media_type=DEFAULT_MEDIA_TYPE,
self._request_type = request_type
self._response_type = response_type
- self._error_handlers = []
+ self._error_handlers = {}
self._serialize_error = helpers.default_serialize_error
self.req_options = RequestOptions()
@@ -538,18 +538,30 @@ def add_error_handler(self, exception, handler=None):
the client. Alternatively, a handler may modify `resp`
directly.
- Error handlers are matched in LIFO order. In other words, when
- searching for an error handler to match a raised exception, and
- more than one handler matches the exception type, the framework
- will choose the one that was most recently registered.
- Therefore, more general error handlers (e.g., for the
- standard ``Exception`` type) should be added first, to avoid
- masking more specific handlers for subclassed types. For example::
+ An error handler "matches" a raised exception if the exception is an
+ instance of the corresponding exception type. If more than one error
+ handler matches the raised exception, the framework will choose the
+ most specific one, as determined by the method resolution order of the
+ raised exception type. If multiple error handlers are registered for the
+ *same* exception class, then the most recently-registered handler is
+ used.
- app = falcon.App()
- app.add_error_handler(Exception, custom_handle_uncaught_exception)
+ For example, suppose we register error handlers as follows::
+
+ app = falcon.API()
+ app.add_error_handler(falcon.HTTPNotFound, custom_handle_not_found)
app.add_error_handler(falcon.HTTPError, custom_handle_http_error)
- app.add_error_handler(CustomException)
+ app.add_error_handler(Exception, custom_handle_uncaught_exception)
+ app.add_error_handler(falcon.HTTPNotFound, custom_handle_404)
+
+ If an instance of ``falcon.HTTPForbidden`` is raised, it will be
+ handled by ``custom_handle_http_error()``. ``falcon.HTTPError`` is a
+ superclass of ``falcon.HTTPForbidden`` and a subclass of ``Exception``,
+ so it is the most specific exception type with a registered handler.
+
+ If an instance of ``falcon.HTTPNotFound`` is raised, it will be handled
+ by ``custom_handle_404()``, not by ``custom_handle_not_found()``, because
+ ``custom_handle_404()`` was registered more recently.
.. Note::
@@ -559,11 +571,6 @@ def add_error_handler(self, exception, handler=None):
exceptions to the WSGI server. These can be overridden by adding a
custom error handler method for the exception type in question.
- Be aware that both :class:`~.HTTPError` and :class:`~.HTTPStatus`
- inherit from the standard ``Exception`` type, so overriding the
- default ``Exception`` handler will override all three default
- handlers, due to the LIFO ordering of handler-matching.
-
Args:
exception (type or iterable of types): When handling a request,
whenever an error occurs that is an instance of the specified
@@ -589,6 +596,11 @@ def handle(req, resp, ex, params):
If an iterable of exception types is specified instead of
a single type, the handler must be explicitly specified.
+ .. versionchanged:: 3.0
+ Breaking change: error handler now selected by most specific
+ matching error class, rather than most recently registered matching
+ error class.
+
"""
def wrap_old_handler(old_handler):
@wraps(old_handler)
@@ -616,18 +628,11 @@ def handler(req, resp, ex, params):
except TypeError:
exception_tuple = (exception, )
- if all(issubclass(exc, BaseException) for exc in exception_tuple):
- # Insert at the head of the list in case we get duplicate
- # adds (will cause the most recently added one to win).
- if len(exception_tuple) == 1:
- # In this case, insert only the single exception type
- # (not a tuple), to avoid unnnecessary overhead in the
- # exception handling path.
- self._error_handlers.insert(0, (exception_tuple[0], handler))
- else:
- self._error_handlers.insert(0, (exception_tuple, handler))
- else:
- raise TypeError('"exception" must be an exception type.')
+ for exc in exception_tuple:
+ if not issubclass(exc, BaseException):
+ raise TypeError('"exception" must be an exception type.')
+
+ self._error_handlers[exc] = handler
def set_error_serializer(self, serializer):
"""Override the default serializer for instances of :class:`~.HTTPError`.
@@ -799,6 +804,22 @@ def _python_error_handler(self, req, resp, error, params):
self._compose_error_response(
req, resp, falcon.HTTPInternalServerError())
+ def _find_error_handler(self, ex):
+ # NOTE(csojinb): The `__mro__` class attribute returns the method
+ # resolution order tuple, i.e. the complete linear inheritance chain
+ # ``(type(ex), ..., object)``. For a valid exception class, the last
+ # two entries in the tuple will always be ``BaseException``and
+ # ``object``, so here we iterate over the lineage of exception types,
+ # from most to least specific.
+
+ # PERF(csojinb): The expression ``type(ex).__mro__[:-1]`` here is not
+ # super readable, but we inline it to avoid function call overhead.
+ for exc in type(ex).__mro__[:-1]:
+ handler = self._error_handlers.get(exc)
+
+ if handler is not None:
+ return handler
+
def _handle_exception(self, req, resp, ex, params):
"""Handle an exception raised from mw or a responder.
@@ -815,17 +836,17 @@ def _handle_exception(self, req, resp, ex, params):
bool: ``True`` if a handler was found and called for the
exception, ``False`` otherwise.
"""
+ err_handler = self._find_error_handler(ex)
- for err_type, err_handler in self._error_handlers:
- if isinstance(ex, err_type):
- try:
- err_handler(req, resp, ex, params)
- except HTTPStatus as status:
- self._compose_status_response(req, resp, status)
- except HTTPError as error:
- self._compose_error_response(req, resp, error)
+ if err_handler is not None:
+ try:
+ err_handler(req, resp, ex, params)
+ except HTTPStatus as status:
+ self._compose_status_response(req, resp, status)
+ except HTTPError as error:
+ self._compose_error_response(req, resp, error)
- return True
+ return True
# NOTE(kgriffs): No error handlers are defined for ex
# and it is not one of (HTTPStatus, HTTPError), since it
|
diff --git a/tests/test_error_handlers.py b/tests/test_error_handlers.py
index b46d72185..35aac39fe 100644
--- a/tests/test_error_handlers.py
+++ b/tests/test_error_handlers.py
@@ -91,14 +91,14 @@ def test_subclass_error(self, client):
assert result.status_code == 723
assert result.text == 'error: CustomException'
- def test_error_order_duplicate(self, client):
+ def test_error_precedence_duplicate(self, client):
client.app.add_error_handler(Exception, capture_error)
client.app.add_error_handler(Exception, handle_error_first)
result = client.simulate_get()
assert result.text == 'first error handler'
- def test_error_order_subclass(self, client):
+ def test_error_precedence_subclass(self, client):
client.app.add_error_handler(Exception, capture_error)
client.app.add_error_handler(CustomException, handle_error_first)
@@ -110,13 +110,13 @@ def test_error_order_subclass(self, client):
assert result.status_code == 723
assert result.text == 'error: Plain Exception'
- def test_error_order_subclass_masked(self, client):
+ def test_error_precedence_subclass_order_indifference(self, client):
client.app.add_error_handler(CustomException, handle_error_first)
client.app.add_error_handler(Exception, capture_error)
result = client.simulate_delete()
- assert result.status_code == 723
- assert result.text == 'error: CustomException'
+ assert result.status_code == 200
+ assert result.text == 'first error handler'
@pytest.mark.parametrize('exceptions', [
(Exception, CustomException),
| 2019-11-02T15:26:06
|
{}
|
{"docs/_newsfragments/1603-smart-exception-handler-precedence.feature.rst": null, "falcon/app.py": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Falcon App class.\"\"\"\n\nfrom functools import wraps\nimport re\nimport traceback\n\nfrom falcon import api_helpers as helpers, DEFAULT_MEDIA_TYPE, routing\nfrom falcon.http_error import HTTPError\nfrom falcon.http_status import HTTPStatus\nfrom falcon.middlewares import CORSMiddleware\nfrom falcon.request import Request, RequestOptions\nimport falcon.responders\nfrom falcon.response import Response, ResponseOptions\nimport falcon.status_codes as status\nfrom falcon.util import misc\n\n\n# PERF(vytas): On Python 3.5+ (including cythonized modules),\n# reference via module global is faster than going via self\n_BODILESS_STATUS_CODES = frozenset([\n status.HTTP_100,\n status.HTTP_101,\n status.HTTP_204,\n status.HTTP_304,\n])\n\n_TYPELESS_STATUS_CODES = frozenset([\n status.HTTP_204,\n status.HTTP_304,\n])\n\n\nclass App:\n \"\"\"This class is the main entry point into a Falcon-based app.\n\n Each App instance provides a callable WSGI interface and a routing\n engine.\n\n Note:\n The ``API`` class was renamed to ``App`` in Falcon 3.0. The\n old class name remains available as an alias for\n backwards-compatibility, but will be removed in a future\n release.\n\n Keyword Arguments:\n media_type (str): Default media type to use as the\n value for the Content-Type header on responses (default\n 'application/json'). The ``falcon`` module provides a\n number of constants for common media types, such as\n ``falcon.MEDIA_MSGPACK``, ``falcon.MEDIA_YAML``,\n ``falcon.MEDIA_XML``, etc.\n middleware(object or list): Either a single object or a list\n of objects (instantiated classes) that implement the\n following middleware component interface::\n\n class ExampleComponent:\n def process_request(self, req, resp):\n \\\"\\\"\\\"Process the request before routing it.\n\n Note:\n Because Falcon routes each request based on\n req.path, a request can be effectively re-routed\n by setting that attribute to a new value from\n within process_request().\n\n Args:\n req: Request object that will eventually be\n routed to an on_* responder method.\n resp: Response object that will be routed to\n the on_* responder.\n \\\"\\\"\\\"\n\n def process_resource(self, req, resp, resource, params):\n \\\"\\\"\\\"Process the request and resource *after* routing.\n\n Note:\n This method is only called when the request matches\n a route to a resource.\n\n Args:\n req: Request object that will be passed to the\n routed responder.\n resp: Response object that will be passed to the\n responder.\n resource: Resource object to which the request was\n routed. 
May be None if no route was found for\n the request.\n params: A dict-like object representing any\n additional params derived from the route's URI\n template fields, that will be passed to the\n resource's responder method as keyword\n arguments.\n \\\"\\\"\\\"\n\n def process_response(self, req, resp, resource, req_succeeded)\n \\\"\\\"\\\"Post-processing of the response (after routing).\n\n Args:\n req: Request object.\n resp: Response object.\n resource: Resource object to which the request was\n routed. May be None if no route was found\n for the request.\n req_succeeded: True if no exceptions were raised\n while the framework processed and routed the\n request; otherwise False.\n \\\"\\\"\\\"\n\n (See also: :ref:`Middleware <middleware>`)\n\n request_type (Request): ``Request``-like class to use instead\n of Falcon's default class. Among other things, this feature\n affords inheriting from ``falcon.request.Request`` in order\n to override the ``context_type`` class variable.\n (default ``falcon.request.Request``)\n\n response_type (Response): ``Response``-like class to use\n instead of Falcon's default class. (default\n ``falcon.response.Response``)\n\n router (object): An instance of a custom router\n to use in lieu of the default engine.\n (See also: :ref:`Custom Routers <routing_custom>`)\n\n independent_middleware (bool): Set to ``False`` if response\n middleware should not be executed independently of whether or\n not request middleware raises an exception (default\n ``True``). When this option is set to ``False``, a middleware\n component's ``process_response()`` method will NOT be called\n when that same component's ``process_request()`` (or that of\n a component higher up in the stack) raises an exception.\n\n cors_enable (bool): Set this flag to ``True`` to enable a simple\n CORS policy for all responses, including support for preflighted\n requests (default ``False``).\n (See also: :ref:`CORS <cors>`)\n\n Attributes:\n req_options: A set of behavioral options related to incoming\n requests. (See also: :py:class:`~.RequestOptions`)\n resp_options: A set of behavioral options related to outgoing\n responses. (See also: :py:class:`~.ResponseOptions`)\n router_options: Configuration options for the router. 
If a\n custom router is in use, and it does not expose any\n configurable options, referencing this attribute will raise\n an instance of ``AttributeError``.\n\n (See also: :ref:`CompiledRouterOptions <compiled_router_options>`)\n \"\"\"\n\n _STREAM_BLOCK_SIZE = 8 * 1024 # 8 KiB\n\n __slots__ = ('_request_type', '_response_type',\n '_error_handlers', '_media_type', '_router', '_sinks',\n '_serialize_error', 'req_options', 'resp_options',\n '_middleware', '_independent_middleware', '_router_search',\n '_static_routes', '_cors_enable')\n\n def __init__(self, media_type=DEFAULT_MEDIA_TYPE,\n request_type=Request, response_type=Response,\n middleware=None, router=None,\n independent_middleware=True, cors_enable=False):\n self._sinks = []\n self._media_type = media_type\n self._static_routes = []\n\n if cors_enable:\n cm = CORSMiddleware()\n\n if middleware is None:\n middleware = [cm]\n else:\n try:\n # NOTE(kgriffs): Check to see if middleware is an\n # iterable, and if so, append the CORSMiddleware\n # instance.\n iter(middleware)\n middleware = list(middleware)\n middleware.append(cm)\n except TypeError:\n # NOTE(kgriffs): Assume the middleware kwarg references\n # a single middleware component.\n middleware = [middleware, cm]\n\n # set middleware\n self._middleware = helpers.prepare_middleware(\n middleware, independent_middleware=independent_middleware)\n self._independent_middleware = independent_middleware\n\n self._router = router or routing.DefaultRouter()\n self._router_search = self._router.find\n\n self._request_type = request_type\n self._response_type = response_type\n\n self._error_handlers = []\n self._serialize_error = helpers.default_serialize_error\n\n self.req_options = RequestOptions()\n self.resp_options = ResponseOptions()\n\n self.req_options.default_media_type = media_type\n self.resp_options.default_media_type = media_type\n\n # NOTE(kgriffs): Add default error handlers\n self.add_error_handler(Exception, self._python_error_handler)\n self.add_error_handler(falcon.HTTPError, self._http_error_handler)\n self.add_error_handler(falcon.HTTPStatus, self._http_status_handler)\n\n def __call__(self, env, start_response): # noqa: C901\n \"\"\"WSGI `app` method.\n\n Makes instances of App callable from a WSGI server. May be used to\n host an App or called directly in order to simulate requests when\n testing the App.\n\n (See also: PEP 3333)\n\n Args:\n env (dict): A WSGI environment dictionary\n start_response (callable): A WSGI helper function for setting\n status and headers on a response.\n\n \"\"\"\n req = self._request_type(env, options=self.req_options)\n resp = self._response_type(options=self.resp_options)\n resource = None\n responder = None\n params = {}\n\n dependent_mw_resp_stack = []\n mw_req_stack, mw_rsrc_stack, mw_resp_stack = self._middleware\n\n req_succeeded = False\n\n try:\n try:\n # NOTE(ealogar): The execution of request middleware\n # should be before routing. This will allow request mw\n # to modify the path.\n # NOTE: if flag set to use independent middleware, execute\n # request middleware independently. 
Otherwise, only queue\n # response middleware after request middleware succeeds.\n if self._independent_middleware:\n for process_request in mw_req_stack:\n process_request(req, resp)\n if resp.complete:\n break\n else:\n for process_request, process_response in mw_req_stack:\n if process_request and not resp.complete:\n process_request(req, resp)\n if process_response:\n dependent_mw_resp_stack.insert(0, process_response)\n\n if not resp.complete:\n # NOTE(warsaw): Moved this to inside the try except\n # because it is possible when using object-based\n # traversal for _get_responder() to fail. An example is\n # a case where an object does not have the requested\n # next-hop child resource. In that case, the object\n # being asked to dispatch to its child will raise an\n # HTTP exception signalling the problem, e.g. a 404.\n responder, params, resource, req.uri_template = self._get_responder(req)\n except Exception as ex:\n if not self._handle_exception(req, resp, ex, params):\n raise\n else:\n try:\n # NOTE(kgriffs): If the request did not match any\n # route, a default responder is returned and the\n # resource is None. In that case, we skip the\n # resource middleware methods. Resource will also be\n # None when a middleware method already set\n # resp.complete to True.\n if resource:\n # Call process_resource middleware methods.\n for process_resource in mw_rsrc_stack:\n process_resource(req, resp, resource, params)\n if resp.complete:\n break\n\n if not resp.complete:\n responder(req, resp, **params)\n\n req_succeeded = True\n except Exception as ex:\n if not self._handle_exception(req, resp, ex, params):\n raise\n finally:\n # NOTE(kgriffs): It may not be useful to still execute\n # response middleware methods in the case of an unhandled\n # exception, but this is done for the sake of backwards\n # compatibility, since it was incidentally the behavior in\n # the 1.0 release before this section of the code was\n # reworked.\n\n # Call process_response middleware methods.\n for process_response in mw_resp_stack or dependent_mw_resp_stack:\n try:\n process_response(req, resp, resource, req_succeeded)\n except Exception as ex:\n if not self._handle_exception(req, resp, ex, params):\n raise\n\n req_succeeded = False\n\n #\n # Set status and headers\n #\n\n resp_status = resp.status\n media_type = self._media_type\n\n if req.method == 'HEAD' or resp_status in _BODILESS_STATUS_CODES:\n body = []\n\n # PERF(vytas): move check for the less common and much faster path\n # of resp_status being in {204, 304} here; NB: this builds on the\n # assumption _TYPELESS_STATUS_CODES <= _BODILESS_STATUS_CODES.\n\n # NOTE(kgriffs): Based on wsgiref.validate's interpretation of\n # RFC 2616, as commented in that module's source code. The\n # presence of the Content-Length header is not similarly\n # enforced.\n if resp_status in _TYPELESS_STATUS_CODES:\n media_type = None\n\n else:\n body, length = self._get_body(resp, env.get('wsgi.file_wrapper'))\n\n # PERF(kgriffs): Böse mußt sein. Operate directly on resp._headers\n # to reduce overhead since this is a hot/critical code path.\n # NOTE(kgriffs): We always set content-length to match the\n # body bytes length, even if content-length is already set. 
The\n # reason being that web servers and LBs behave unpredictably\n # when the header doesn't match the body (sometimes choosing to\n # drop the HTTP connection prematurely, for example).\n if length is not None:\n resp._headers['content-length'] = str(length)\n\n headers = resp._wsgi_headers(media_type)\n\n # Return the response per the WSGI spec.\n start_response(resp_status, headers)\n return body\n\n @property\n def router_options(self):\n return self._router.options\n\n def add_route(self, uri_template, resource, **kwargs):\n \"\"\"Associate a templatized URI path with a resource.\n\n Falcon routes incoming requests to resources based on a set of\n URI templates. If the path requested by the client matches the\n template for a given route, the request is then passed on to the\n associated resource for processing.\n\n If no route matches the request, control then passes to a\n default responder that simply raises an instance of\n :class:`~.HTTPNotFound`.\n\n This method delegates to the configured router's ``add_route()``\n method. To override the default behavior, pass a custom router\n object to the :class:`~.App` initializer.\n\n (See also: :ref:`Routing <routing>`)\n\n Args:\n uri_template (str): A templatized URI. Care must be\n taken to ensure the template does not mask any sink\n patterns, if any are registered.\n\n (See also: :meth:`~.add_sink`)\n\n resource (instance): Object which represents a REST\n resource. Falcon will pass GET requests to ``on_get()``,\n PUT requests to ``on_put()``, etc. If any HTTP methods are not\n supported by your resource, simply don't define the\n corresponding request handlers, and Falcon will do the right\n thing.\n\n Keyword Args:\n suffix (str): Optional responder name suffix for this route. If\n a suffix is provided, Falcon will map GET requests to\n ``on_get_{suffix}()``, POST requests to ``on_post_{suffix}()``,\n etc. In this way, multiple closely-related routes can be\n mapped to the same resource. For example, a single resource\n class can use suffixed responders to distinguish requests\n for a single item vs. a collection of those same items.\n Another class might use a suffixed responder to handle\n a shortlink route in addition to the regular route for the\n resource.\n\n Note:\n Any additional keyword arguments not defined above are passed\n through to the underlying router's ``add_route()`` method. The\n default router ignores any additional keyword arguments, but\n custom routers may take advantage of this feature to receive\n additional options when setting up routes. Custom routers MUST\n accept such arguments using the variadic pattern (``**kwargs``), and\n ignore any keyword arguments that they don't support.\n \"\"\"\n\n # NOTE(richardolsson): Doing the validation here means it doesn't have\n # to be duplicated in every future router implementation.\n if not isinstance(uri_template, str):\n raise TypeError('uri_template is not a string')\n\n if not uri_template.startswith('/'):\n raise ValueError(\"uri_template must start with '/'\")\n\n if '//' in uri_template:\n raise ValueError(\"uri_template may not contain '//'\")\n\n self._router.add_route(uri_template, resource, **kwargs)\n\n def add_static_route(self, prefix, directory, downloadable=False, fallback_filename=None):\n \"\"\"Add a route to a directory of static files.\n\n Static routes provide a way to serve files directly. 
This\n feature provides an alternative to serving files at the web server\n level when you don't have that option, when authorization is\n required, or for testing purposes.\n\n Warning:\n Serving files directly from the web server,\n rather than through the Python app, will always be more efficient,\n and therefore should be preferred in production deployments.\n For security reasons, the directory and the fallback_filename (if provided)\n should be read only for the account running the application.\n\n Static routes are matched in LIFO order. Therefore, if the same\n prefix is used for two routes, the second one will override the\n first. This also means that more specific routes should be added\n *after* less specific ones. For example, the following sequence\n would result in ``'/foo/bar/thing.js'`` being mapped to the\n ``'/foo/bar'`` route, and ``'/foo/xyz/thing.js'`` being mapped to the\n ``'/foo'`` route::\n\n app.add_static_route('/foo', foo_path)\n app.add_static_route('/foo/bar', foobar_path)\n\n Args:\n prefix (str): The path prefix to match for this route. If the\n path in the requested URI starts with this string, the remainder\n of the path will be appended to the source directory to\n determine the file to serve. This is done in a secure manner\n to prevent an attacker from requesting a file outside the\n specified directory.\n\n Note that static routes are matched in LIFO order, and are only\n attempted after checking dynamic routes and sinks.\n\n directory (str): The source directory from which to serve files.\n downloadable (bool): Set to ``True`` to include a\n Content-Disposition header in the response. The \"filename\"\n directive is simply set to the name of the requested file.\n fallback_filename (str): Fallback filename used when the requested file\n is not found. Can be a relative path inside the prefix folder or any valid\n absolute path.\n\n \"\"\"\n\n self._static_routes.insert(\n 0,\n routing.StaticRoute(prefix, directory, downloadable=downloadable,\n fallback_filename=fallback_filename)\n )\n\n def add_sink(self, sink, prefix=r'/'):\n \"\"\"Register a sink method for the App.\n\n If no route matches a request, but the path in the requested URI\n matches a sink prefix, Falcon will pass control to the\n associated sink, regardless of the HTTP method requested.\n\n Using sinks, you can drain and dynamically handle a large number\n of routes, when creating static resources and responders would be\n impractical. For example, you might use a sink to create a smart\n proxy that forwards requests to one or more backend services.\n\n Args:\n sink (callable): A callable taking the form ``func(req, resp)``.\n\n prefix (str): A regex string, typically starting with '/', which\n will trigger the sink if it matches the path portion of the\n request's URI. Both strings and precompiled regex objects\n may be specified. 
Characters are matched starting at the\n beginning of the URI path.\n\n Note:\n Named groups are converted to kwargs and passed to\n the sink as such.\n\n Warning:\n If the prefix overlaps a registered route template,\n the route will take precedence and mask the sink.\n\n (See also: :meth:`~.add_route`)\n\n \"\"\"\n\n if not hasattr(prefix, 'match'):\n # Assume it is a string\n prefix = re.compile(prefix)\n\n # NOTE(kgriffs): Insert at the head of the list such that\n # in the case of a duplicate prefix, the last one added\n # is preferred.\n self._sinks.insert(0, (prefix, sink))\n\n def add_error_handler(self, exception, handler=None):\n \"\"\"Register a handler for one or more exception types.\n\n Error handlers may be registered for any exception type, including\n :class:`~.HTTPError` or :class:`~.HTTPStatus`. This feature\n provides a central location for logging and otherwise handling\n exceptions raised by responders, hooks, and middleware components.\n\n A handler can raise an instance of :class:`~.HTTPError` or\n :class:`~.HTTPStatus` to communicate information about the issue to\n the client. Alternatively, a handler may modify `resp`\n directly.\n\n Error handlers are matched in LIFO order. In other words, when\n searching for an error handler to match a raised exception, and\n more than one handler matches the exception type, the framework\n will choose the one that was most recently registered.\n Therefore, more general error handlers (e.g., for the\n standard ``Exception`` type) should be added first, to avoid\n masking more specific handlers for subclassed types. For example::\n\n app = falcon.App()\n app.add_error_handler(Exception, custom_handle_uncaught_exception)\n app.add_error_handler(falcon.HTTPError, custom_handle_http_error)\n app.add_error_handler(CustomException)\n\n .. Note::\n\n By default, the framework installs three handlers, one for\n :class:`~.HTTPError`, one for :class:`~.HTTPStatus`, and one for\n the standard ``Exception`` type, which prevents passing uncaught\n exceptions to the WSGI server. These can be overridden by adding a\n custom error handler method for the exception type in question.\n\n Be aware that both :class:`~.HTTPError` and :class:`~.HTTPStatus`\n inherit from the standard ``Exception`` type, so overriding the\n default ``Exception`` handler will override all three default\n handlers, due to the LIFO ordering of handler-matching.\n\n Args:\n exception (type or iterable of types): When handling a request,\n whenever an error occurs that is an instance of the specified\n type(s), the associated handler will be called. Either a single\n type or an iterable of types may be specified.\n handler (callable): A function or callable object taking the form\n ``func(req, resp, ex, params)``.\n\n If not specified explicitly, the handler will default to\n ``exception.handle``, where ``exception`` is the error\n type specified above, and ``handle`` is a static method\n (i.e., decorated with ``@staticmethod``) that accepts\n the same params just described. 
For example::\n\n class CustomException(CustomBaseException):\n\n @staticmethod\n def handle(req, resp, ex, params):\n # TODO: Log the error\n # Convert to an instance of falcon.HTTPError\n raise falcon.HTTPError(falcon.HTTP_792)\n\n If an iterable of exception types is specified instead of\n a single type, the handler must be explicitly specified.\n\n \"\"\"\n def wrap_old_handler(old_handler):\n @wraps(old_handler)\n def handler(req, resp, ex, params):\n old_handler(ex, req, resp, params)\n return handler\n\n if handler is None:\n try:\n handler = exception.handle\n except AttributeError:\n raise AttributeError('handler must either be specified '\n 'explicitly or defined as a static'\n 'method named \"handle\" that is a '\n 'member of the given exception class.')\n\n # TODO(vytas): Remove this shimming in a future Falcon version.\n arg_names = tuple(misc.get_argnames(handler))\n if (arg_names[0:1] in (('e',), ('err',), ('error',), ('ex',), ('exception',)) or\n arg_names[1:3] in (('req', 'resp'), ('request', 'response'))):\n handler = wrap_old_handler(handler)\n\n try:\n exception_tuple = tuple(exception)\n except TypeError:\n exception_tuple = (exception, )\n\n if all(issubclass(exc, BaseException) for exc in exception_tuple):\n # Insert at the head of the list in case we get duplicate\n # adds (will cause the most recently added one to win).\n if len(exception_tuple) == 1:\n # In this case, insert only the single exception type\n # (not a tuple), to avoid unnnecessary overhead in the\n # exception handling path.\n self._error_handlers.insert(0, (exception_tuple[0], handler))\n else:\n self._error_handlers.insert(0, (exception_tuple, handler))\n else:\n raise TypeError('\"exception\" must be an exception type.')\n\n def set_error_serializer(self, serializer):\n \"\"\"Override the default serializer for instances of :class:`~.HTTPError`.\n\n When a responder raises an instance of :class:`~.HTTPError`,\n Falcon converts it to an HTTP response automatically. The\n default serializer supports JSON and XML, but may be overridden\n by this method to use a custom serializer in order to support\n other media types.\n\n Note:\n If a custom media type is used and the type includes a\n \"+json\" or \"+xml\" suffix, the default serializer will\n convert the error to JSON or XML, respectively.\n\n Note:\n The default serializer will not render any response body for\n :class:`~.HTTPError` instances where the `has_representation`\n property evaluates to ``False`` (such as in the case of types\n that subclass :class:`falcon.http_error.NoRepresentation`).\n However a custom serializer will be called regardless of the\n property value, and it may choose to override the\n representation logic.\n\n Note:\n A custom serializer set with this method may not be called if the\n default error handler for :class:`~.HTTPError` has been overriden.\n See :meth:`~.add_error_handler` for more details.\n\n The :class:`~.HTTPError` class contains helper methods,\n such as `to_json()` and `to_dict()`, that can be used from\n within custom serializers. 
For example::\n\n def my_serializer(req, resp, exception):\n representation = None\n\n preferred = req.client_prefers(('application/x-yaml',\n 'application/json'))\n\n if exception.has_representation and preferred is not None:\n if preferred == 'application/json':\n representation = exception.to_json()\n else:\n representation = yaml.dump(exception.to_dict(),\n encoding=None)\n resp.body = representation\n resp.content_type = preferred\n\n resp.append_header('Vary', 'Accept')\n\n Args:\n serializer (callable): A function taking the form\n ``func(req, resp, exception)``, where `req` is the request\n object that was passed to the responder method, `resp` is\n the response object, and `exception` is an instance of\n ``falcon.HTTPError``.\n\n \"\"\"\n\n self._serialize_error = serializer\n\n # ------------------------------------------------------------------------\n # Helpers that require self\n # ------------------------------------------------------------------------\n\n def _get_responder(self, req):\n \"\"\"Search routes for a matching responder.\n\n Args:\n req: The request object.\n\n Returns:\n tuple: A 4-member tuple consisting of a responder callable,\n a ``dict`` containing parsed path fields (if any were specified in\n the matching route's URI template), a reference to the responder's\n resource instance, and the matching URI template.\n\n Note:\n If a responder was matched to the given URI, but the HTTP\n method was not found in the method_map for the responder,\n the responder callable element of the returned tuple will be\n `falcon.responder.bad_request`.\n\n Likewise, if no responder was matched for the given URI, then\n the responder callable element of the returned tuple will be\n `falcon.responder.path_not_found`\n \"\"\"\n\n path = req.path\n method = req.method\n uri_template = None\n\n route = self._router_search(path, req=req)\n\n if route is not None:\n try:\n resource, method_map, params, uri_template = route\n except ValueError:\n # NOTE(kgriffs): Older routers may not return the\n # template. But for performance reasons they should at\n # least return None if they don't support it.\n resource, method_map, params = route\n else:\n # NOTE(kgriffs): Older routers may indicate that no route\n # was found by returning (None, None, None). 
Therefore, we\n # normalize resource as the flag to indicate whether or not\n # a route was found, for the sake of backwards-compat.\n resource = None\n\n if resource is not None:\n try:\n responder = method_map[method]\n except KeyError:\n responder = falcon.responders.bad_request\n else:\n params = {}\n\n for pattern, sink in self._sinks:\n m = pattern.match(path)\n if m:\n params = m.groupdict()\n responder = sink\n\n break\n else:\n\n for sr in self._static_routes:\n if sr.match(path):\n responder = sr\n break\n else:\n responder = falcon.responders.path_not_found\n\n return (responder, params, resource, uri_template)\n\n def _compose_status_response(self, req, resp, http_status):\n \"\"\"Compose a response for the given HTTPStatus instance.\"\"\"\n\n # PERF(kgriffs): The code to set the status and headers is identical\n # to that used in _compose_error_response(), but refactoring in the\n # name of DRY isn't worth the extra CPU cycles.\n resp.status = http_status.status\n\n if http_status.headers is not None:\n resp.set_headers(http_status.headers)\n\n # NOTE(kgriffs): If http_status.body is None, that's OK because\n # it's acceptable to set resp.body to None (to indicate no body).\n resp.body = http_status.body\n\n def _compose_error_response(self, req, resp, error):\n \"\"\"Compose a response for the given HTTPError instance.\"\"\"\n\n resp.status = error.status\n\n if error.headers is not None:\n resp.set_headers(error.headers)\n\n self._serialize_error(req, resp, error)\n\n def _http_status_handler(self, req, resp, status, params):\n self._compose_status_response(req, resp, status)\n\n def _http_error_handler(self, req, resp, error, params):\n self._compose_error_response(req, resp, error)\n\n def _python_error_handler(self, req, resp, error, params):\n req.log_error(traceback.format_exc())\n self._compose_error_response(\n req, resp, falcon.HTTPInternalServerError())\n\n def _handle_exception(self, req, resp, ex, params):\n \"\"\"Handle an exception raised from mw or a responder.\n\n Args:\n ex: Exception to handle\n req: Current request object to pass to the handler\n registered for the given exception type\n resp: Current response object to pass to the handler\n registered for the given exception type\n params: Responder params to pass to the handler\n registered for the given exception type\n\n Returns:\n bool: ``True`` if a handler was found and called for the\n exception, ``False`` otherwise.\n \"\"\"\n\n for err_type, err_handler in self._error_handlers:\n if isinstance(ex, err_type):\n try:\n err_handler(req, resp, ex, params)\n except HTTPStatus as status:\n self._compose_status_response(req, resp, status)\n except HTTPError as error:\n self._compose_error_response(req, resp, error)\n\n return True\n\n # NOTE(kgriffs): No error handlers are defined for ex\n # and it is not one of (HTTPStatus, HTTPError), since it\n # would have matched one of the corresponding default\n # handlers.\n return False\n\n # PERF(kgriffs): Moved from api_helpers since it is slightly faster\n # to call using self, and this function is called for most\n # requests.\n def _get_body(self, resp, wsgi_file_wrapper=None):\n \"\"\"Convert resp content into an iterable as required by PEP 333\n\n Args:\n resp: Instance of falcon.Response\n wsgi_file_wrapper: Reference to wsgi.file_wrapper from the\n WSGI environ dict, if provided by the WSGI server. 
Used\n when resp.stream is a file-like object (default None).\n\n Returns:\n tuple: A two-member tuple of the form (iterable, content_length).\n The length is returned as ``None`` when unknown. The\n iterable is determined as follows:\n\n * If resp.body is not ``None``, returns\n ([resp.body], len(resp.body)),\n encoded as UTF-8 if it is a Unicode string.\n Bytestrings are returned as-is.\n * If resp.data is not ``None``, returns ([resp.data], len(resp.data))\n * If resp.stream is not ``None``, returns resp.stream\n iterable using wsgi.file_wrapper, if necessary:\n (closeable_iterator, None)\n * Otherwise, returns ([], 0)\n\n \"\"\"\n body = resp.body\n if body is not None:\n if not isinstance(body, bytes):\n body = body.encode('utf-8')\n return [body], len(body)\n\n data = resp.data\n if data is not None:\n return [data], len(data)\n\n stream = resp.stream\n if stream is not None:\n # NOTE(kgriffs): Heuristic to quickly check if stream is\n # file-like. Not perfect, but should be good enough until\n # proven otherwise.\n if hasattr(stream, 'read'):\n if wsgi_file_wrapper is not None:\n # TODO(kgriffs): Make block size configurable at the\n # global level, pending experimentation to see how\n # useful that would be. See also the discussion on\n # this GitHub PR: http://goo.gl/XGrtDz\n iterable = wsgi_file_wrapper(stream,\n self._STREAM_BLOCK_SIZE)\n else:\n iterable = helpers.CloseableStreamIterator(stream, self._STREAM_BLOCK_SIZE)\n else:\n iterable = stream\n\n return iterable, None\n\n return [], 0\n\n\n# TODO(mikeyusko): This class is a compatibility alias, and should be removed\n# in the next major release (4.0).\nclass API(App):\n \"\"\"\n This class is a compatibility alias of :class:`falcon.App`.\n\n ``API`` was renamed to :class:`App <falcon.App>` in Falcon 3.0 in order to\n reflect the breadth of applications that :class:`App <falcon.App>`, and its\n ASGI counterpart in particular, can now be used for.\n\n This compatibility alias should be considered deprecated; it will be\n removed in a future release.\n \"\"\"\n\n @misc.deprecated('API class may be removed in a future release, use falcon.App instead.')\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n"}
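The snapshot of `falcon/app.py` above documents the middleware interface (`process_request`, `process_resource`, `process_response`) in the `App` docstring. Purely as a hedged illustration of that interface — the class, header name, and timing behaviour below are assumptions, not code from this record — a minimal component could look like:

```python
import time


class ResponseTimer:
    """Illustrative middleware; attribute access on req.context assumes Falcon 2.0+."""

    def process_request(self, req, resp):
        # Record the start time before the request is routed.
        req.context.start = time.monotonic()

    def process_response(self, req, resp, resource, req_succeeded):
        # Report elapsed wall-clock time on every response.
        started = getattr(req.context, 'start', None)
        if started is not None:
            resp.set_header('X-Elapsed-Seconds',
                            '{0:.4f}'.format(time.monotonic() - started))


# Registered at construction time, e.g.:
# app = falcon.App(middleware=[ResponseTimer()])
```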
|
diff --git a/docs/_newsfragments/1603-smart-exception-handler-precedence.feature.rst b/docs/_newsfragments/1603-smart-exception-handler-precedence.feature.rst
new file mode 100644
index 000000000..ba23500ae
--- /dev/null
+++ b/docs/_newsfragments/1603-smart-exception-handler-precedence.feature.rst
@@ -0,0 +1,1 @@
+Exceptions are now handled by the registered handler for the most specific matching exception class, rather than in reverse order of registration. "Specificity" is determined by the method resolution order of the raised exception type. (See :meth:`~.App.add_error_handler` for more details.)
|
{"falcon/app.py": [{"type": "function", "name": "App._find_error_handler", "lines": [807, 821], "signature": "def _find_error_handler(self, ex):", "doc": ""}]}
| null |
["tests/test_error_handlers.py::TestErrorHandler::test_error_precedence_subclass_order_indifference"]
|
["tests/test_error_handlers.py::TestErrorHandler::test_caught_error", "tests/test_error_handlers.py::TestErrorHandler::test_uncaught_python_error[None-application/json-{\"]", "tests/test_error_handlers.py::TestErrorHandler::test_uncaught_python_error[get_headers1-application/json-{\"]", "tests/test_error_handlers.py::TestErrorHandler::test_uncaught_python_error[get_headers2-application/xml-<?xml]", "tests/test_error_handlers.py::TestErrorHandler::test_converted_error", "tests/test_error_handlers.py::TestErrorHandler::test_handle_not_defined", "tests/test_error_handlers.py::TestErrorHandler::test_subclass_error", "tests/test_error_handlers.py::TestErrorHandler::test_error_precedence_duplicate", "tests/test_error_handlers.py::TestErrorHandler::test_error_precedence_subclass", "tests/test_error_handlers.py::TestErrorHandler::test_handler_multiple_exception_iterable[exceptions0]", "tests/test_error_handlers.py::TestErrorHandler::test_handler_multiple_exception_iterable[exceptions1]", "tests/test_error_handlers.py::TestErrorHandler::test_handler_single_exception_iterable", "tests/test_error_handlers.py::TestErrorHandler::test_invalid_add_exception_handler_input[exceptions0]", "tests/test_error_handlers.py::TestErrorHandler::test_invalid_add_exception_handler_input[Hello,", "tests/test_error_handlers.py::TestErrorHandler::test_invalid_add_exception_handler_input[exceptions2]", "tests/test_error_handlers.py::TestErrorHandler::test_invalid_add_exception_handler_input[exceptions3]", "tests/test_error_handlers.py::TestErrorHandler::test_handler_signature_shim"]
|
77d5e6394a88ead151c9469494749f95f06b24bf
|
{"first_commit_time": 1572728034.0, "pr_title": "feat(API): on exception, select most specific error handler available", "pr_body": "# Summary of Changes\r\n\r\nRather than selecting error handlers in LIFO order of registration, select the error handler corresponding to the nearest direct ancestor of the exception type raised. So, for example, if an app only adds a custom error handler for the Python ``Exception`` class, and an ``HTTPForbidden`` error is raised, then we use the default handler for ``HTTPError`` rather than the more general ``Exception`` handler (which is the pre-existing behavior). \r\n\r\nThis is implemented by storing error handlers on the API object as a dict rather than a list and looking them up using the method resolution order attribute (`__mro__`) on the raised exception class. \r\n\r\n### NOTE: \r\n~~This commit only includes the actual implementation and does not address testing or documentation. I am seeking implementation feedback before completing those additional changes.~~ Tests and documentation have been added. Thanks @vytas7 for implementation notes!\r\n\r\n### BREAKING CHANGE: \r\nRegistration of a new error handler for type `E` will no longer override previously-registered error handlers for subclasses of type `E`. Registration order will no longer matter *except* when multiple error handlers are registered for the exact same exception type, in which case the most recently registered error handler overrides the previous ones. \r\n\r\n# Related Issues\r\n\r\nAddresses #1514.\r\n\r\n~~I believe it would also allow reversion of https://github.com/falconry/falcon/pull/1599.~~ Not true. I misremembered why #1599 was needed.\r\n\r\n# Pull Request Checklist\r\n\r\nThis is just a reminder about the most common mistakes. Please make sure that you tick all *appropriate* boxes. But please read our [contribution guide](https://github.com/falconry/falcon/blob/master/CONTRIBUTING.md) at least once; it will save you a few review cycles!\r\n\r\nIf an item doesn't apply to your pull request, **check it anyway** to make it apparent that there's nothing to do.\r\n\r\n- [x] Added **tests** for changed code.\r\n- [x] Prefixed code comments with GitHub nick and an appropriate prefix.\r\n- [x] Coding style is consistent with the rest of the framework.\r\n- [x] Updated **documentation** for changed code.\r\n - [x] Added docstrings for any new classes, functions, or modules.\r\n - [x] Updated docstrings for any modifications to existing code.\r\n - [x] Added references to new classes, functions, or modules to the relevant RST file under `docs/`.\r\n - [x] Updated all relevant supporting documentation files under `docs/`.\r\n - [x] A copyright notice is included at the top of any new modules (using your own name or the name of your organization).\r\n - [x] Changed/added classes/methods/functions have appropriate `versionadded`, `versionchanged`, or `deprecated` [directives](http://www.sphinx-doc.org/en/stable/usage/restructuredtext/directives.html?highlight=versionadded#directive-versionadded).\r\n- [x] Changes (and possible deprecations) have [towncrier](https://pypi.org/project/towncrier/) news fragments under `docs/_newsfragments/`. (Run `towncrier --draft` to ensure it renders correctly.)\r\n\r\nIf you have *any* questions to *any* of the points above, just **submit and ask**! 
This checklist is here to *help* you, not to deter you from contributing!\r\n\r\n*PR template inspired by the attrs project.*\r\n", "pr_timeline": [{"time": 1572730910.0, "comment": "Would this change warrant a towncrier entry? Or a `versionchanged` in the docstring? Seems like probably a `versionchanged` would be good... ~~should I just assume 3.0 for now?~~ I see that #1514 is currently marked as version 3.0, so I'll go with that."}, {"time": 1576535634.0, "comment": "# [Codecov](https://codecov.io/gh/falconry/falcon/pull/1603?src=pr&el=h1) Report\n> Merging [#1603](https://codecov.io/gh/falconry/falcon/pull/1603?src=pr&el=desc) into [master](https://codecov.io/gh/falconry/falcon/commit/b68c4ff5b8ddf83f84d0e9d7875fb478995a70eb?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/falconry/falcon/pull/1603?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1603 +/- ##\n======================================\n Coverage 100% 100% \n======================================\n Files 40 40 \n Lines 2692 2692 \n Branches 397 397 \n======================================\n Hits 2692 2692\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/falconry/falcon/pull/1603?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `\u0394 = absolute <relative> (impact)`, `\u00f8 = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/falconry/falcon/pull/1603?src=pr&el=footer). Last update [b68c4ff...2cee9c1](https://codecov.io/gh/falconry/falcon/pull/1603?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"}, {"time": 1572805968.0, "comment": "@csojinb Re Towncrier I would think, yes, we need an entry (or actually two?) since it is both a breaking change, and a new feature. "}, {"time": 1576191853.0, "comment": "Yes, I can try to do it this weekend\n\nOn Thu, Dec 12, 2019 at 13:44 Kurt Griffiths <[email protected]>\nwrote:\n\n> *@kgriffs* requested changes on this pull request.\n>\n> This looks good, do you think you can finish this up soon? I'd love to get\n> this merged for the alpha release. AFAICT the remaining TODO items include:\n>\n> - Add towncrier news fragments\n> - Rebase on master (fix conflicts)\n> - Tweak docstrings (optional)\n> - Tweak line wrapping (optional)\n>\n> ------------------------------\n>\n> In falcon/api.py\n> <https://github.com/falconry/falcon/pull/1603#discussion_r357306845>:\n>\n> > app.add_error_handler(falcon.HTTPError, custom_handle_http_error)\n> - app.add_error_handler(CustomException)\n> + app.add_error_handler(Exception, custom_handle_uncaught_exception)\n> + app.add_error_handler(falcon.HTTPNotFound, custom_handle_404)\n> +\n> + If an instance of ``falcon.HTTPForbidden`` is raised, it will be\n>\n> double backticks are fine, although I tend to add parens to the function\n> name myself to help make it obvious what it is.\n>\n> Re class names, I'm fine either way TBH, since you are referencing example\n> code here. 
But it couldn't hurt to add the :class: directive.\n>\n> \u2014\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/falconry/falcon/pull/1603?email_source=notifications&email_token=ABRVDSYBXSEUQM545YHV3I3QYKA7HA5CNFSM4JIGEACKYY3PNVWWK3TUL52HS4DFWFIHK3DMKJSXC5LFON2FEZLWNFSXPKTDN5WW2ZLOORPWSZGOCPAWL7Q#pullrequestreview-331441662>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABRVDS5R4LKUAQNI3GFPSEDQYKA7HANCNFSM4JIGEACA>\n> .\n>\n"}, {"time": 1576450895.0, "comment": "Okay, I rebased, undid the overzealous pep8-ing, tweaked the `add_error_handler` docs, and added a towncrier fragment. I think that's everything. cc @kgriffs @vytas7 "}, {"time": 1576451760.0, "comment": "Oh, so it is. I guess kdiff made that decision for me and I didn't look closely enough. I'll fix it and re-push"}, {"time": 1576452522.0, "comment": "@vytas7 fixed"}, {"time": 1576452603.0, "comment": "Moving my copied TODO list out of the PR description into this comment:\r\n\r\n### TODOs:\r\n- [x] Add towncrier news fragments\r\n- [x] Rebase on master (fix conflicts)\r\n- [x] Tweak docstrings (optional)\r\n- [x] Tweak line wrapping (optional)\r\n\r\n(Turns out you can't just check off checkboxes in someone else's comment.)"}, {"time": 1576536095.0, "comment": "\ud83c\udf89 "}], "issues": {}}
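The PR body above explains the new selection rule in prose: handlers are stored in a dict keyed by exception class, and the framework walks the raised exception's `__mro__` to find the most specific registered handler. The self-contained sketch below mirrors that idea outside of Falcon; the class names, handler signatures, and assertion are illustrative only.

```python
class HTTPError(Exception):
    pass


class HTTPForbidden(HTTPError):
    pass


_error_handlers = {}


def add_error_handler(exc_type, handler):
    # A later registration for the *same* class overwrites the earlier one,
    # matching the "most recently registered handler wins" rule.
    _error_handlers[exc_type] = handler


def find_error_handler(ex):
    # Walk the raised exception's MRO from most to least specific and
    # return the first class with a registered handler (object is skipped).
    for exc_cls in type(ex).__mro__[:-1]:
        if exc_cls in _error_handlers:
            return _error_handlers[exc_cls]
    return None


add_error_handler(Exception, lambda req, resp, ex, params: 'generic')
add_error_handler(HTTPError, lambda req, resp, ex, params: 'http error')

# HTTPForbidden has no handler of its own, so the HTTPError handler is chosen
# over the Exception handler, regardless of registration order.
assert find_error_handler(HTTPForbidden()) is _error_handlers[HTTPError]
```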
|
falconry/falcon
| 640
|
https://github.com/falconry/falcon/pull/640
|
falconry__falcon-640
|
[]
|
0fefca82a3e1f6f35f147af01c7daeebb0414d87
|
diff --git a/falcon/request.py b/falcon/request.py
index 0a2eb8d34..3312667f7 100644
--- a/falcon/request.py
+++ b/falcon/request.py
@@ -27,9 +27,9 @@
import mimeparse
import six
-from falcon.errors import *
+from falcon.errors import * # NOQA
from falcon import util
-from falcon.util.uri import parse_query_string, parse_host
+from falcon.util.uri import parse_query_string, parse_host, unquote_string
from falcon import request_helpers as helpers
# NOTE(tbug): In some cases, http_cookies is not a module
@@ -205,6 +205,7 @@ class Request(object):
'_wsgierrors',
'options',
'_cookies',
+ '_cached_access_route',
)
# Allow child classes to override this
@@ -257,6 +258,7 @@ def __init__(self, env, options=None):
self._cached_headers = None
self._cached_uri = None
self._cached_relative_uri = None
+ self._cached_access_route = None
try:
self.content_type = self.env['CONTENT_TYPE']
@@ -521,6 +523,64 @@ def cookies(self):
return self._cookies.copy()
+ @property
+ def access_route(self):
+ """A list of all addresses from client to the last proxy server.
+
+ Inspired by werkzeug's ``access_route``.
+
+ Note:
+ The list may contain string(s) other than IPv4 / IPv6 address. For
+ example the "unknown" identifier and obfuscated identifier defined
+ by `RFC 7239`_.
+
+ .. _RFC 7239: https://tools.ietf.org/html/rfc7239#section-6
+
+ Warning:
+ HTTP Forwarded headers can be forged by any client or proxy.
+ Use this property with caution and write your own verify function.
+ The best practice is always using :py:attr:`~.remote_addr` unless
+ your application is hosted behind some reverse proxy server(s).
+ Also only trust the **last N** addresses provided by those reverse
+ proxy servers.
+
+ This property will try to derive addresses sequentially from:
+
+ - ``Forwarded``
+ - ``X-Forwarded-For``
+ - ``X-Real-IP``
+ - **or** the IP address of the closest client/proxy
+
+ """
+ if self._cached_access_route is None:
+ access_route = []
+ if 'HTTP_FORWARDED' in self.env:
+ access_route = self._parse_rfc_forwarded()
+ if not access_route and 'HTTP_X_FORWARDED_FOR' in self.env:
+ access_route = [ip.strip() for ip in
+ self.env['HTTP_X_FORWARDED_FOR'].split(',')]
+ if not access_route and 'HTTP_X_REAL_IP' in self.env:
+ access_route = [self.env['HTTP_X_REAL_IP']]
+ if not access_route and 'REMOTE_ADDR' in self.env:
+ access_route = [self.env['REMOTE_ADDR']]
+ self._cached_access_route = access_route
+
+ return self._cached_access_route
+
+ @property
+ def remote_addr(self):
+ """String of the IP address of the closest client/proxy.
+
+ Address will only be derived from WSGI ``REMOTE_ADDR`` header, which
+ can not be modified by any client or proxy.
+
+ Note:
+ If your application is behind one or more reverse proxies, you may
+ need to use :py:obj:`~.access_route` to retrieve the real IP
+ address of the client.
+ """
+ return self.env.get('REMOTE_ADDR')
+
# ------------------------------------------------------------------------
# Methods
# ------------------------------------------------------------------------
@@ -626,7 +686,8 @@ def get_header_as_datetime(self, header, required=False, obs_date=False):
``HTTPBadRequest`` instead of returning gracefully when the
header is not found (default ``False``).
obs_date (bool, optional): Support obs-date formats according to
- RFC 7231, e.g.: "Sunday, 06-Nov-94 08:49:37 GMT" (default ``False``).
+ RFC 7231, e.g.: "Sunday, 06-Nov-94 08:49:37 GMT"
+ (default ``False``).
Returns:
datetime: The value of the specified header if it exists,
@@ -1035,6 +1096,26 @@ def _parse_form_urlencoded(self):
self._params.update(extra_params)
+ def _parse_rfc_forwarded(self):
+ """Parse RFC 7239 "Forwarded" header.
+
+ Returns:
+ list: addresses derived from "for" parameters.
+ """
+ addr = []
+ for forwarded in self.env['HTTP_FORWARDED'].split(','):
+ for param in forwarded.split(';'):
+ param = param.strip().split('=', 1)
+ if len(param) == 1:
+ continue
+ key, val = param
+ if key.lower() != 'for':
+ # we only want for params
+ continue
+ host, _ = parse_host(unquote_string(val))
+ addr.append(host)
+ return addr
+
# PERF: To avoid typos and improve storage space and speed over a dict.
class RequestOptions(object):
diff --git a/falcon/util/misc.py b/falcon/util/misc.py
index 7c396611e..67755661c 100644
--- a/falcon/util/misc.py
+++ b/falcon/util/misc.py
@@ -107,8 +107,9 @@ def http_date_to_dt(http_date, obs_date=False):
Args:
http_date (str): An RFC 1123 date string, e.g.:
"Tue, 15 Nov 1994 12:45:26 GMT".
- obs_date (bool, optional): Support obs-date formats according to
- RFC 7231, e.g.: "Sunday, 06-Nov-94 08:49:37 GMT" (default ``False``).
+ obs_date (bool, optional): Support obs-date formats according to
+ RFC 7231, e.g.:
+ "Sunday, 06-Nov-94 08:49:37 GMT" (default ``False``).
Returns:
datetime: A UTC datetime instance corresponding to the given
diff --git a/falcon/util/uri.py b/falcon/util/uri.py
index b3e33e844..2b9af610b 100644
--- a/falcon/util/uri.py
+++ b/falcon/util/uri.py
@@ -376,3 +376,35 @@ def parse_host(host, default_port=None):
# or a domain name plus a port
name, _, port = host.partition(':')
return (name, int(port))
+
+
+def unquote_string(quoted):
+ """Unquote an RFC 7320 "quoted-string".
+
+ Args:
+ quoted (str): Original quoted string
+
+ Returns:
+ str: unquoted string
+
+ Raises:
+ TypeError: `quoted` was not a ``str``.
+ """
+ tmp_quoted = quoted.strip()
+ if len(tmp_quoted) < 2:
+ return quoted
+ elif tmp_quoted[0] != '"' or tmp_quoted[-1] != '"':
+ # return original one, prevent side-effect
+ return quoted
+
+ tmp_quoted = tmp_quoted[1:-1]
+ # PERF(philiptzou): Most header strings don't contain "quoted-pair" which
+ # defined by RFC 7320. We use this little trick (quick string search) to
+ # speed up string parsing by preventing unnecessary processes if possible.
+ if '\\' not in tmp_quoted:
+ return tmp_quoted
+ elif r'\\' not in tmp_quoted:
+ return tmp_quoted.replace('\\', '')
+ else:
+ return '\\'.join([q.replace('\\', '')
+ for q in tmp_quoted.split(r'\\')])
|
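The `access_route` property added in the patch above derives client addresses from `Forwarded`, then `X-Forwarded-For`, then `X-Real-IP`, and finally `REMOTE_ADDR`. Before the tests that follow, here is a standard-library-only sketch of the `for=` extraction step; unlike `_parse_rfc_forwarded` it does not strip ports or handle quoted-pair escapes, so treat it as an approximation rather than equivalent code.

```python
def forwarded_addresses(header_value):
    """Collect the for= node identifiers from an RFC 7239 Forwarded header."""
    addresses = []
    for element in header_value.split(','):
        for param in element.split(';'):
            parts = param.strip().split('=', 1)
            if len(parts) != 2:
                continue
            key, value = parts
            if key.lower() != 'for':
                continue
            # Naive unquoting; the real patch goes through unquote_string()
            # and parse_host() to also remove escapes and port numbers.
            addresses.append(value.strip('"'))
    return addresses


header = 'for=192.0.2.43, for="[2001:db8:cafe::17]:555";proto=https, for=unknown'
print(forwarded_addresses(header))
# ['192.0.2.43', '[2001:db8:cafe::17]:555', 'unknown']
```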
diff --git a/tests/test_access_route.py b/tests/test_access_route.py
new file mode 100644
index 000000000..90fa871f9
--- /dev/null
+++ b/tests/test_access_route.py
@@ -0,0 +1,74 @@
+from falcon.request import Request
+import falcon.testing as testing
+
+
+class TestAccessRoute(testing.TestBase):
+
+    def test_remote_addr_only(self):
+        req = Request(testing.create_environ(
+            host='example.com',
+            path='/access_route',
+            headers={
+                'Forwarded': ('for=192.0.2.43, for="[2001:db8:cafe::17]:555",'
+                              'for="unknown", by=_hidden,for="\\"\\\\",'
+                              'for="198\\.51\\.100\\.17\\:1236";'
+                              'proto=https;host=example.com')
+            }))
+        self.assertEqual(req.remote_addr, '127.0.0.1')
+
+    def test_rfc_forwarded(self):
+        req = Request(testing.create_environ(
+            host='example.com',
+            path='/access_route',
+            headers={
+                'Forwarded': ('for=192.0.2.43,for=,'
+                              'for="[2001:db8:cafe::17]:555",'
+                              'for="unknown", by=_hidden,for="\\"\\\\",'
+                              'for="_don\\\"t_\\try_this\\\\at_home_\\42",'
+                              'for="198\\.51\\.100\\.17\\:1236";'
+                              'proto=https;host=example.com')
+            }))
+        compares = ['192.0.2.43', '', '2001:db8:cafe::17',
+                    'unknown', '"\\', '_don"t_try_this\\at_home_42',
+                    '198.51.100.17']
+        self.assertEqual(req.access_route, compares)
+        # test cached
+        self.assertEqual(req.access_route, compares)
+
+    def test_malformed_rfc_forwarded(self):
+        req = Request(testing.create_environ(
+            host='example.com',
+            path='/access_route',
+            headers={
+                'Forwarded': 'for'
+            }))
+        self.assertEqual(req.access_route, ['127.0.0.1'])
+        # test cached
+        self.assertEqual(req.access_route, ['127.0.0.1'])
+
+    def test_x_forwarded_for(self):
+        req = Request(testing.create_environ(
+            host='example.com',
+            path='/access_route',
+            headers={
+                'X-Forwarded-For': ('192.0.2.43, 2001:db8:cafe::17,'
+                                    'unknown, _hidden, 203.0.113.60')
+            }))
+        self.assertEqual(req.access_route,
+                         ['192.0.2.43', '2001:db8:cafe::17',
+                          'unknown', '_hidden', '203.0.113.60'])
+
+    def test_x_real_ip(self):
+        req = Request(testing.create_environ(
+            host='example.com',
+            path='/access_route',
+            headers={
+                'X-Real-IP': '2001:db8:cafe::17'
+            }))
+        self.assertEqual(req.access_route, ['2001:db8:cafe::17'])
+
+    def test_remote_addr(self):
+        req = Request(testing.create_environ(
+            host='example.com',
+            path='/access_route'))
+        self.assertEqual(req.access_route, ['127.0.0.1'])
| 2015-10-27T00:17:50
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"falcon/request.py": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom datetime import datetime\n\ntry:\n # NOTE(kgrifs): In Python 2.6 and 2.7, socket._fileobject is a\n # standard way of exposing a socket as a file-like object, and\n # is used by wsgiref for wsgi.input.\n import socket\n NativeStream = socket._fileobject # pylint: disable=E1101\nexcept AttributeError: # pragma nocover\n # NOTE(kgriffs): In Python 3.3, wsgiref implements wsgi.input\n # using _io.BufferedReader which is an alias of io.BufferedReader\n import io\n NativeStream = io.BufferedReader\n\nimport mimeparse\nimport six\n\nfrom falcon.errors import *\nfrom falcon import util\nfrom falcon.util.uri import parse_query_string, parse_host\nfrom falcon import request_helpers as helpers\n\n# NOTE(tbug): In some cases, http_cookies is not a module\n# but a dict-like structure. This fixes that issue.\n# See issue https://github.com/falconry/falcon/issues/556\nfrom six.moves import http_cookies\nSimpleCookie = http_cookies.SimpleCookie\n\n\nDEFAULT_ERROR_LOG_FORMAT = (u'{0:%Y-%m-%d %H:%M:%S} [FALCON] [ERROR]'\n u' {1} {2}{3} => ')\n\nTRUE_STRINGS = ('true', 'True', 'yes')\nFALSE_STRINGS = ('false', 'False', 'no')\nWSGI_CONTENT_HEADERS = ('CONTENT_TYPE', 'CONTENT_LENGTH')\n\n\n_maybe_wrap_wsgi_stream = True\n\n\n# PERF(kgriffs): Avoid an extra namespace lookup when using these functions\nstrptime = datetime.strptime\nnow = datetime.now\n\n\nclass Request(object):\n \"\"\"Represents a client's HTTP request.\n\n Note:\n `Request` is not meant to be instantiated directly by responders.\n\n Args:\n env (dict): A WSGI environment dict passed in from the server. See\n also PEP-3333.\n options (dict): Set of global options passed from the API handler.\n\n Attributes:\n protocol (str): Either 'http' or 'https'.\n method (str): HTTP method requested (e.g., 'GET', 'POST', etc.)\n host (str): Hostname requested by the client\n subdomain (str): Leftmost (i.e., most specific) subdomain from the\n hostname. If only a single domain name is given, `subdomain`\n will be ``None``.\n\n Note:\n If the hostname in the request is an IP address, the value\n for `subdomain` is undefined.\n\n user_agent (str): Value of the User-Agent header, or ``None`` if the\n header is missing.\n app (str): Name of the WSGI app (if using WSGI's notion of virtual\n hosting).\n env (dict): Reference to the WSGI environ ``dict`` passed in from the\n server. See also PEP-3333.\n context (dict): Dictionary to hold any data about the request which is\n specific to your app (e.g. session object). Falcon itself will\n not interact with this attribute after it has been initialized.\n context_type (class): Class variable that determines the\n factory or type to use for initializing the\n `context` attribute. By default, the framework will\n instantiate standard\n ``dict`` objects. 
However, You may override this behavior\n by creating a custom child class of ``falcon.Request``, and\n then passing that new class to `falcon.API()` by way of the\n latter's `request_type` parameter.\n\n Note:\n When overriding `context_type` with a factory function (as\n opposed to a class), the function is called like a method of\n the current Request instance. Therefore the first argument is\n the Request instance itself (self).\n\n uri (str): The fully-qualified URI for the request.\n url (str): alias for `uri`.\n relative_uri (str): The path + query string portion of the full URI.\n path (str): Path portion of the request URL (not including query\n string).\n query_string (str): Query string portion of the request URL, without\n the preceding '?' character.\n accept (str): Value of the Accept header, or '*/*' if the header is\n missing.\n auth (str): Value of the Authorization header, or ``None`` if the\n header is missing.\n client_accepts_json (bool): ``True`` if the Accept header indicates\n that the client is willing to receive JSON, otherwise ``False``.\n client_accepts_msgpack (bool): ``True`` if the Accept header indicates\n that the client is willing to receive MessagePack, otherwise\n ``False``.\n client_accepts_xml (bool): ``True`` if the Accept header indicates that\n the client is willing to receive XML, otherwise ``False``.\n content_type (str): Value of the Content-Type header, or ``None`` if\n the header is missing.\n content_length (int): Value of the Content-Length header converted\n to an ``int``, or ``None`` if the header is missing.\n stream: File-like object for reading the body of the request, if any.\n\n Note:\n If an HTML form is POSTed to the API using the\n *application/x-www-form-urlencoded* media type, Falcon\n will consume `stream` in order to parse the parameters\n and merge them into the query string parameters. In this\n case, the stream will be left at EOF.\n\n Note also that the character encoding for fields, before\n percent-encoding non-ASCII bytes, is assumed to be\n UTF-8. The special `_charset_` field is ignored if present.\n\n Falcon expects form-encoded request bodies to be\n encoded according to the standard W3C algorithm (see\n also http://goo.gl/6rlcux).\n\n date (datetime): Value of the Date header, converted to a\n ``datetime`` instance. The header value is assumed to\n conform to RFC 1123.\n expect (str): Value of the Expect header, or ``None`` if the\n header is missing.\n range (tuple of int): A 2-member ``tuple`` parsed from the value of the\n Range header.\n\n The two members correspond to the first and last byte\n positions of the requested resource, inclusive. Negative\n indices indicate offset from the end of the resource,\n where -1 is the last byte, -2 is the second-to-last byte,\n and so forth.\n\n Only continous ranges are supported (e.g., \"bytes=0-0,-1\" would\n result in an HTTPBadRequest exception when the attribute is\n accessed.)\n if_match (str): Value of the If-Match header, or ``None`` if the\n header is missing.\n if_none_match (str): Value of the If-None-Match header, or ``None``\n if the header is missing.\n if_modified_since (datetime): Value of the If-Modified-Since header,\n or ``None`` if the header is missing.\n if_unmodified_since (datetime): Value of the If-Unmodified-Since\n header, or ``None`` if the header is missing.\n if_range (str): Value of the If-Range header, or ``None`` if the\n header is missing.\n\n headers (dict): Raw HTTP headers from the request with\n canonical dash-separated names. 
Parsing all the headers\n to create this dict is done the first time this attribute\n is accessed. This parsing can be costly, so unless you\n need all the headers in this format, you should use the\n `get_header` method or one of the convenience attributes\n instead, to get a value for a specific header.\n\n params (dict): The mapping of request query parameter names to their\n values. Where the parameter appears multiple times in the query\n string, the value mapped to that parameter key will be a list of\n all the values in the order seen.\n\n options (dict): Set of global options passed from the API handler.\n\n cookies (dict):\n A dict of name/value cookie pairs.\n See also: :ref:`Getting Cookies <getting-cookies>`\n\n \"\"\"\n\n __slots__ = (\n '_cached_headers',\n '_cached_uri',\n '_cached_relative_uri',\n 'content_type',\n 'env',\n 'method',\n '_params',\n 'path',\n 'query_string',\n 'stream',\n 'context',\n '_wsgierrors',\n 'options',\n '_cookies',\n )\n\n # Allow child classes to override this\n context_type = None\n\n def __init__(self, env, options=None):\n global _maybe_wrap_wsgi_stream\n\n self.env = env\n self.options = options if options else RequestOptions()\n\n self._wsgierrors = env['wsgi.errors']\n self.stream = env['wsgi.input']\n self.method = env['REQUEST_METHOD']\n\n # Normalize path\n path = env['PATH_INFO']\n if path:\n if six.PY3: # pragma: no cover\n # PEP 3333 specifies that PATH_INFO variable are always\n # \"bytes tunneled as latin-1\" and must be encoded back\n path = path.encode('latin1').decode('utf-8', 'replace')\n\n if len(path) != 1 and path.endswith('/'):\n self.path = path[:-1]\n else:\n self.path = path\n else:\n self.path = '/'\n\n # PERF(kgriffs): if...in is faster than using env.get(...)\n if 'QUERY_STRING' in env:\n self.query_string = env['QUERY_STRING']\n\n if self.query_string:\n self._params = parse_query_string(\n self.query_string,\n keep_blank_qs_values=self.options.keep_blank_qs_values,\n )\n\n else:\n self._params = {}\n\n else:\n self.query_string = ''\n self._params = {}\n\n self._cookies = None\n\n self._cached_headers = None\n self._cached_uri = None\n self._cached_relative_uri = None\n\n try:\n self.content_type = self.env['CONTENT_TYPE']\n except KeyError:\n self.content_type = None\n\n # NOTE(kgriffs): Wrap wsgi.input if needed to make read() more robust,\n # normalizing semantics between, e.g., gunicorn and wsgiref.\n if _maybe_wrap_wsgi_stream:\n if isinstance(self.stream, NativeStream):\n # NOTE(kgriffs): This is covered by tests, it's just that\n # coverage can't figure this out for some reason (TBD).\n self._wrap_stream() # pragma nocover\n else:\n # PERF(kgriffs): If self.stream does not need to be wrapped\n # this time, it never needs to be wrapped since the server\n # will continue using the same type for wsgi.input.\n _maybe_wrap_wsgi_stream = False\n\n # PERF(kgriffs): Technically, we should spend a few more\n # cycles and parse the content type for real, but\n # this heuristic will work virtually all the time.\n if (self.content_type is not None and\n 'application/x-www-form-urlencoded' in self.content_type):\n self._parse_form_urlencoded()\n\n if self.context_type is None:\n # Literal syntax is more efficient than using dict()\n self.context = {}\n else:\n # pylint will detect this as not-callable because it only sees the\n # declaration of None, not whatever type a subclass may have set.\n self.context = self.context_type() # pylint: disable=not-callable\n\n # 
------------------------------------------------------------------------\n # Properties\n # ------------------------------------------------------------------------\n\n user_agent = helpers.header_property('HTTP_USER_AGENT')\n auth = helpers.header_property('HTTP_AUTHORIZATION')\n\n expect = helpers.header_property('HTTP_EXPECT')\n\n if_match = helpers.header_property('HTTP_IF_MATCH')\n if_none_match = helpers.header_property('HTTP_IF_NONE_MATCH')\n if_range = helpers.header_property('HTTP_IF_RANGE')\n\n @property\n def client_accepts_json(self):\n return self.client_accepts('application/json')\n\n @property\n def client_accepts_msgpack(self):\n return self.client_accepts('application/x-msgpack')\n\n @property\n def client_accepts_xml(self):\n return self.client_accepts('application/xml')\n\n @property\n def accept(self):\n # NOTE(kgriffs): Per RFC, a missing accept header is\n # equivalent to '*/*'\n try:\n return self.env['HTTP_ACCEPT'] or '*/*'\n except KeyError:\n return '*/*'\n\n @property\n def content_length(self):\n try:\n value = self.env['CONTENT_LENGTH']\n except KeyError:\n return None\n\n # NOTE(kgriffs): Normalize an empty value to behave as if\n # the header were not included; wsgiref, at least, inserts\n # an empty CONTENT_LENGTH value if the request does not\n # set the header. Gunicorn and uWSGI do not do this, but\n # others might if they are trying to match wsgiref's\n # behavior too closely.\n if not value:\n return None\n\n try:\n value_as_int = int(value)\n except ValueError:\n msg = 'The value of the header must be a number.'\n raise HTTPInvalidHeader(msg, 'Content-Length')\n\n if value_as_int < 0:\n msg = 'The value of the header must be a positive number.'\n raise HTTPInvalidHeader(msg, 'Content-Length')\n\n return value_as_int\n\n @property\n def date(self):\n return self.get_header_as_datetime('Date')\n\n @property\n def if_modified_since(self):\n return self.get_header_as_datetime('If-Modified-Since')\n\n @property\n def if_unmodified_since(self):\n return self.get_header_as_datetime('If-Unmodified-Since')\n\n @property\n def range(self):\n try:\n value = self.env['HTTP_RANGE']\n if value.startswith('bytes='):\n value = value[6:]\n else:\n msg = \"The value must be prefixed with 'bytes='\"\n raise HTTPInvalidHeader(msg, 'Range')\n except KeyError:\n return None\n\n if ',' in value:\n msg = 'The value must be a continuous byte range.'\n raise HTTPInvalidHeader(msg, 'Range')\n\n try:\n first, sep, last = value.partition('-')\n\n if not sep:\n raise ValueError()\n\n if first:\n return (int(first), int(last or -1))\n elif last:\n return (-int(last), -1)\n else:\n msg = 'The byte offsets are missing.'\n raise HTTPInvalidHeader(msg, 'Range')\n\n except ValueError:\n href = 'http://goo.gl/zZ6Ey'\n href_text = 'HTTP/1.1 Range Requests'\n msg = ('It must be a byte range formatted according to RFC 2616.')\n raise HTTPInvalidHeader(msg, 'Range', href=href,\n href_text=href_text)\n\n @property\n def app(self):\n return self.env.get('SCRIPT_NAME', '')\n\n @property\n def protocol(self):\n return self.env['wsgi.url_scheme']\n\n @property\n def uri(self):\n if self._cached_uri is None:\n env = self.env\n protocol = env['wsgi.url_scheme']\n\n # NOTE(kgriffs): According to PEP-3333 we should first\n # try to use the Host header if present.\n #\n # PERF(kgriffs): try..except is faster than .get\n try:\n host = env['HTTP_HOST']\n except KeyError:\n host = env['SERVER_NAME']\n port = env['SERVER_PORT']\n\n if protocol == 'https':\n if port != '443':\n host += ':' + port\n else:\n 
if port != '80':\n host += ':' + port\n\n # PERF: For small numbers of items, '+' is faster\n # than ''.join(...). Concatenation is also generally\n # faster than formatting.\n value = (protocol + '://' +\n host +\n self.app +\n self.path)\n\n if self.query_string:\n value = value + '?' + self.query_string\n\n self._cached_uri = value\n\n return self._cached_uri\n\n url = uri\n\n @property\n def host(self):\n try:\n # NOTE(kgriffs): Prefer the host header; the web server\n # isn't supposed to mess with it, so it should be what\n # the client actually sent.\n host_header = self.env['HTTP_HOST']\n host, port = parse_host(host_header)\n except KeyError:\n # PERF(kgriffs): According to PEP-3333, this header\n # will always be present.\n host = self.env['SERVER_NAME']\n\n return host\n\n @property\n def subdomain(self):\n # PERF(kgriffs): .partition is slightly faster than .split\n subdomain, sep, remainder = self.host.partition('.')\n return subdomain if sep else None\n\n @property\n def relative_uri(self):\n if self._cached_relative_uri is None:\n if self.query_string:\n self._cached_relative_uri = (self.app + self.path + '?' +\n self.query_string)\n else:\n self._cached_relative_uri = self.app + self.path\n\n return self._cached_relative_uri\n\n @property\n def headers(self):\n # NOTE(kgriffs: First time here will cache the dict so all we\n # have to do is clone it in the future.\n if self._cached_headers is None:\n headers = self._cached_headers = {}\n\n env = self.env\n for name, value in env.items():\n if name.startswith('HTTP_'):\n # NOTE(kgriffs): Don't take the time to fix the case\n # since headers are supposed to be case-insensitive\n # anyway.\n headers[name[5:].replace('_', '-')] = value\n\n elif name in WSGI_CONTENT_HEADERS:\n headers[name.replace('_', '-')] = value\n\n return self._cached_headers.copy()\n\n @property\n def params(self):\n return self._params\n\n @property\n def cookies(self):\n if self._cookies is None:\n # NOTE(tbug): We might want to look into parsing\n # cookies ourselves. The SimpleCookie is doing a\n # lot if stuff only required to SEND cookies.\n parser = SimpleCookie(self.get_header(\"Cookie\"))\n cookies = {}\n for morsel in parser.values():\n cookies[morsel.key] = morsel.value\n\n self._cookies = cookies\n\n return self._cookies.copy()\n\n # ------------------------------------------------------------------------\n # Methods\n # ------------------------------------------------------------------------\n\n def client_accepts(self, media_type):\n \"\"\"Determines whether or not the client accepts a given media type.\n\n Args:\n media_type (str): An Internet media type to check.\n\n Returns:\n bool: ``True`` if the client has indicated in the Accept header\n that it accepts the specified media type. Otherwise, returns\n ``False``.\n \"\"\"\n\n accept = self.accept\n\n # PERF(kgriffs): Usually the following will be true, so\n # try it first.\n if (accept == media_type) or (accept == '*/*'):\n return True\n\n # Fall back to full-blown parsing\n try:\n return mimeparse.quality(media_type, accept) != 0.0\n except ValueError:\n return False\n\n def client_prefers(self, media_types):\n \"\"\"Returns the client's preferred media type, given several choices.\n\n Args:\n media_types (iterable of str): One or more Internet media types\n from which to choose the client's preferred type. This value\n **must** be an iterable collection of strings.\n\n Returns:\n str: The client's preferred media type, based on the Accept\n header. 
Returns ``None`` if the client does not accept any\n of the given types.\n \"\"\"\n\n try:\n # NOTE(kgriffs): best_match will return '' if no match is found\n preferred_type = mimeparse.best_match(media_types, self.accept)\n except ValueError:\n # Value for the accept header was not formatted correctly\n preferred_type = ''\n\n return (preferred_type if preferred_type else None)\n\n def get_header(self, name, required=False):\n \"\"\"Retrieve the raw string value for the given header.\n\n Args:\n name (str): Header name, case-insensitive (e.g., 'Content-Type')\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning gracefully when the\n header is not found (default ``False``).\n\n Returns:\n str: The value of the specified header if it exists, or ``None`` if\n the header is not found and is not required.\n\n Raises:\n HTTPBadRequest: The header was not found in the request, but\n it was required.\n\n \"\"\"\n\n wsgi_name = name.upper().replace('-', '_')\n\n # Use try..except to optimize for the header existing in most cases\n try:\n # Don't take the time to cache beforehand, using HTTP naming.\n # This will be faster, assuming that most headers are looked\n # up only once, and not all headers will be requested.\n return self.env['HTTP_' + wsgi_name]\n\n except KeyError:\n # NOTE(kgriffs): There are a couple headers that do not\n # use the HTTP prefix in the env, so try those. We expect\n # people to usually just use the relevant helper properties\n # to access these instead of .get_header.\n if wsgi_name in WSGI_CONTENT_HEADERS:\n try:\n return self.env[wsgi_name]\n except KeyError:\n pass\n\n if not required:\n return None\n\n raise HTTPMissingHeader(name)\n\n def get_header_as_datetime(self, header, required=False, obs_date=False):\n \"\"\"Return an HTTP header with HTTP-Date values as a datetime.\n\n Args:\n name (str): Header name, case-insensitive (e.g., 'Date')\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning gracefully when the\n header is not found (default ``False``).\n obs_date (bool, optional): Support obs-date formats according to\n RFC 7231, e.g.: \"Sunday, 06-Nov-94 08:49:37 GMT\" (default ``False``).\n\n Returns:\n datetime: The value of the specified header if it exists,\n or ``None`` if the header is not found and is not required.\n\n Raises:\n HTTPBadRequest: The header was not found in the request, but\n it was required.\n HttpInvalidHeader: The header contained a malformed/invalid value.\n \"\"\"\n\n try:\n http_date = self.get_header(header, required=required)\n return util.http_date_to_dt(http_date, obs_date=obs_date)\n except TypeError:\n # When the header does not exist and isn't required\n return None\n except ValueError:\n msg = ('It must be formatted according to RFC 7231, '\n 'Section 7.1.1.1')\n raise HTTPInvalidHeader(msg, header)\n\n def get_param(self, name, required=False, store=None, default=None):\n \"\"\"Return the raw value of a query string parameter as a string.\n\n Note:\n If an HTML form is POSTed to the API using the\n *application/x-www-form-urlencoded* media type, the\n parameters from the request body will be merged into\n the query string parameters.\n\n If a key appears more than once in the form data, one of the\n values will be returned as a string, but it is undefined which\n one. 
Use `req.get_param_as_list()` to retrieve all the values.\n\n Note:\n Similar to the way multiple keys in form data is handled,\n if a query parameter is assigned a comma-separated list of\n values (e.g., 'foo=a,b,c'), only one of those values will be\n returned, and it is undefined which one. Use\n `req.get_param_as_list()` to retrieve all the values.\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'sort').\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found (default ``False``).\n store (dict, optional): A ``dict``-like object in which to place\n the value of the param, but only if the param is present.\n default (any, optional): If the param is not found returns the\n given value instead of None\n\n Returns:\n str: The value of the param as a string, or ``None`` if param is\n not found and is not required.\n\n Raises:\n HTTPBadRequest: A required param is missing from the request.\n\n \"\"\"\n\n params = self._params\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in params:\n # NOTE(warsaw): If the key appeared multiple times, it will be\n # stored internally as a list. We do not define which one\n # actually gets returned, but let's pick the last one for grins.\n param = params[name]\n if isinstance(param, list):\n param = param[-1]\n\n if store is not None:\n store[name] = param\n\n return param\n\n if not required:\n return default\n\n raise HTTPMissingParam(name)\n\n def get_param_as_int(self, name,\n required=False, min=None, max=None, store=None):\n \"\"\"Return the value of a query string parameter as an int.\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'limit').\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found or is not an integer (default\n ``False``).\n min (int, optional): Set to the minimum value allowed for this\n param. If the param is found and it is less than min, an\n ``HTTPError`` is raised.\n max (int, optional): Set to the maximum value allowed for this\n param. If the param is found and its value is greater than\n max, an ``HTTPError`` is raised.\n store (dict, optional): A ``dict``-like object in which to place\n the value of the param, but only if the param is found\n (default ``None``).\n\n Returns:\n int: The value of the param if it is found and can be converted to\n an integer. If the param is not found, returns ``None``, unless\n `required` is ``True``.\n\n Raises\n HTTPBadRequest: The param was not found in the request, even though\n it was required to be there. 
Also raised if the param's value\n falls outside the given interval, i.e., the value must be in\n the interval: min <= value <= max to avoid triggering an error.\n\n \"\"\"\n\n params = self._params\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in params:\n val = params[name]\n if isinstance(val, list):\n val = val[-1]\n\n try:\n val = int(val)\n except ValueError:\n msg = 'The value must be an integer.'\n raise HTTPInvalidParam(msg, name)\n\n if min is not None and val < min:\n msg = 'The value must be at least ' + str(min)\n raise HTTPInvalidParam(msg, name)\n\n if max is not None and max < val:\n msg = 'The value may not exceed ' + str(max)\n raise HTTPInvalidParam(msg, name)\n\n if store is not None:\n store[name] = val\n\n return val\n\n if not required:\n return None\n\n raise HTTPMissingParam(name)\n\n def get_param_as_bool(self, name, required=False, store=None,\n blank_as_true=False):\n \"\"\"Return the value of a query string parameter as a boolean\n\n The following boolean strings are supported::\n\n TRUE_STRINGS = ('true', 'True', 'yes')\n FALSE_STRINGS = ('false', 'False', 'no')\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'detailed').\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found or is not a recognized boolean\n string (default ``False``).\n store (dict, optional): A ``dict``-like object in which to place\n the value of the param, but only if the param is found (default\n ``None``).\n blank_as_true (bool): If ``True``, an empty string value will be\n treated as ``True``. Normally empty strings are ignored; if\n you would like to recognize such parameters, you must set the\n `keep_blank_qs_values` request option to ``True``. Request\n options are set globally for each instance of ``falcon.API``\n through the `req_options` attribute.\n\n Returns:\n bool: The value of the param if it is found and can be converted\n to a ``bool``. If the param is not found, returns ``None``\n unless required is ``True``.\n\n Raises:\n HTTPBadRequest: A required param is missing from the request.\n\n \"\"\"\n\n params = self._params\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in params:\n val = params[name]\n if isinstance(val, list):\n val = val[-1]\n\n if val in TRUE_STRINGS:\n val = True\n elif val in FALSE_STRINGS:\n val = False\n elif blank_as_true and not val:\n val = True\n else:\n msg = 'The value of the parameter must be \"true\" or \"false\".'\n raise HTTPInvalidParam(msg, name)\n\n if store is not None:\n store[name] = val\n\n return val\n\n if not required:\n return None\n\n raise HTTPMissingParam(name)\n\n def get_param_as_list(self, name,\n transform=None, required=False, store=None):\n \"\"\"Return the value of a query string parameter as a list.\n\n List items must be comma-separated or must be provided\n as multiple instances of the same param in the query string\n ala *application/x-www-form-urlencoded*.\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'ids').\n transform (callable, optional): An optional transform function\n that takes as input each element in the list as a ``str`` and\n outputs a transformed element for inclusion in the list that\n will be returned. 
For example, passing ``int`` will\n transform list items into numbers.\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found (default ``False``).\n store (dict, optional): A ``dict``-like object in which to place\n the value of the param, but only if the param is found (default\n ``None``).\n\n Returns:\n list: The value of the param if it is found. Otherwise, returns\n ``None`` unless required is True. Empty list elements will be\n discarded. For example, the following query strings would\n both result in `['1', '3']`::\n\n things=1,,3\n things=1&things=&things=3\n\n Raises:\n HTTPBadRequest: A required param is missing from the request.\n HTTPInvalidParam: A transform function raised an instance of\n ``ValueError``.\n\n \"\"\"\n\n params = self._params\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in params:\n items = params[name]\n\n # NOTE(warsaw): When a key appears multiple times in the request\n # query, it will already be represented internally as a list.\n # NOTE(kgriffs): Likewise for comma-delimited values.\n if not isinstance(items, list):\n items = [items]\n\n # PERF(kgriffs): Use if-else rather than a DRY approach\n # that sets transform to a passthrough function; avoids\n # function calling overhead.\n if transform is not None:\n try:\n items = [transform(i) for i in items]\n\n except ValueError:\n msg = 'The value is not formatted correctly.'\n raise HTTPInvalidParam(msg, name)\n\n if store is not None:\n store[name] = items\n\n return items\n\n if not required:\n return None\n\n raise HTTPMissingParam(name)\n\n def get_param_as_date(self, name, format_string='%Y-%m-%d',\n required=False, store=None):\n \"\"\"Return the value of a query string parameter as a date.\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'ids').\n format_string (str): String used to parse the param value into a\n date.\n Any format recognized by strptime() is supported.\n (default ``\"%Y-%m-%d\"``)\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found (default ``False``).\n store (dict, optional): A ``dict``-like object in which to place\n the value of the param, but only if the param is found (default\n ``None``).\n Returns:\n datetime.date: The value of the param if it is found and can be\n converted to a ``date`` according to the supplied format\n string. If the param is not found, returns ``None`` unless\n required is ``True``.\n\n Raises:\n HTTPBadRequest: A required param is missing from the request.\n HTTPInvalidParam: A transform function raised an instance of\n ``ValueError``.\n \"\"\"\n\n param_value = self.get_param(name, required=required)\n\n if param_value is None:\n return None\n\n try:\n date = strptime(param_value, format_string).date()\n except ValueError:\n msg = \"The date value does not match the required format\"\n raise HTTPInvalidParam(msg, name)\n\n if store is not None:\n store[name] = date\n\n return date\n\n # TODO(kgriffs): Use the nocover pragma only for the six.PY3 if..else\n def log_error(self, message): # pragma: no cover\n \"\"\"Write an error message to the server's log.\n\n Prepends timestamp and request info to message, and writes the\n result out to the WSGI server's error stream (`wsgi.error`).\n\n Args:\n message (str or unicode): Description of the problem. 
On Python 2,\n instances of ``unicode`` will be converted to UTF-8.\n\n \"\"\"\n\n if self.query_string:\n query_string_formatted = '?' + self.query_string\n else:\n query_string_formatted = ''\n\n log_line = (\n DEFAULT_ERROR_LOG_FORMAT.\n format(now(), self.method, self.path, query_string_formatted)\n )\n\n if six.PY3:\n self._wsgierrors.write(log_line + message + '\\n')\n else:\n if isinstance(message, unicode): # pylint: disable=E0602\n message = message.encode('utf-8')\n\n self._wsgierrors.write(log_line.encode('utf-8'))\n self._wsgierrors.write(message + '\\n')\n\n # ------------------------------------------------------------------------\n # Helpers\n # ------------------------------------------------------------------------\n\n def _wrap_stream(self): # pragma nocover\n try:\n # NOTE(kgriffs): We can only add the wrapper if the\n # content-length header was provided.\n if self.content_length is not None:\n self.stream = helpers.Body(self.stream, self.content_length)\n\n except HTTPInvalidHeader:\n # NOTE(kgriffs): The content-length header was specified,\n # but it had an invalid value.\n pass\n\n def _parse_form_urlencoded(self):\n # NOTE(kgriffs): This assumes self.stream has been patched\n # above in the case of wsgiref, so that self.content_length\n # is not needed. Normally we just avoid accessing\n # self.content_length, because it is a little expensive\n # to call. We could cache self.content_length, but the\n # overhead to do that won't usually be helpful, since\n # content length will only ever be read once per\n # request in most cases.\n body = self.stream.read()\n\n # NOTE(kgriffs): According to http://goo.gl/6rlcux the\n # body should be US-ASCII. Enforcing this also helps\n # catch malicious input.\n try:\n body = body.decode('ascii')\n except UnicodeDecodeError:\n body = None\n self.log_error('Non-ASCII characters found in form body '\n 'with Content-Type of '\n 'application/x-www-form-urlencoded. 
Body '\n 'will be ignored.')\n\n if body:\n extra_params = parse_query_string(\n body,\n keep_blank_qs_values=self.options.keep_blank_qs_values,\n )\n\n self._params.update(extra_params)\n\n\n# PERF: To avoid typos and improve storage space and speed over a dict.\nclass RequestOptions(object):\n \"\"\"This class is a container for ``Request`` options.\n\n Attributes:\n keep_blank_qs_values (bool): Set to ``True`` in order to retain\n blank values in query string parameters (default ``False``).\n\n \"\"\"\n __slots__ = (\n 'keep_blank_qs_values',\n )\n\n def __init__(self):\n self.keep_blank_qs_values = False\n", "falcon/util/misc.py": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport datetime\nimport functools\nimport inspect\nimport warnings\n\nimport six\n\n__all__ = (\n 'deprecated',\n 'http_now',\n 'dt_to_http',\n 'http_date_to_dt',\n 'to_query_str',\n 'get_bound_method',\n)\n\n\n# PERF(kgriffs): Avoid superfluous namespace lookups\nstrptime = datetime.datetime.strptime\nutcnow = datetime.datetime.utcnow\n\n\n# NOTE(kgriffs): We don't want our deprecations to be ignored by default,\n# so create our own type.\n#\n# TODO(kgriffs): Revisit this decision if users complain.\nclass DeprecatedWarning(UserWarning):\n pass\n\n\ndef deprecated(instructions):\n \"\"\"Flags a method as deprecated.\n\n This function returns a decorator which can be used to mark deprecated\n functions. Applying this decorator will result in a warning being\n emitted when the function is used.\n\n Args:\n instructions (str): Specific guidance for the developer, e.g.:\n 'Please migrate to add_proxy(...)''\n \"\"\"\n\n def decorator(func):\n @functools.wraps(func)\n def wrapper(*args, **kwargs):\n message = 'Call to deprecated function {0}(...). 
{1}'.format(\n func.__name__,\n instructions)\n\n frame = inspect.currentframe().f_back\n\n warnings.warn_explicit(message,\n category=DeprecatedWarning,\n filename=inspect.getfile(frame.f_code),\n lineno=frame.f_lineno)\n\n return func(*args, **kwargs)\n\n return wrapper\n\n return decorator\n\n\ndef http_now():\n \"\"\"Returns the current UTC time as an IMF-fixdate.\n\n Returns:\n str: The current UTC time as an IMF-fixdate,\n e.g., 'Tue, 15 Nov 1994 12:45:26 GMT'.\n \"\"\"\n\n return dt_to_http(utcnow())\n\n\ndef dt_to_http(dt):\n \"\"\"Converts a ``datetime`` instance to an HTTP date string.\n\n Args:\n dt (datetime): A ``datetime`` instance to convert, assumed to be UTC.\n\n Returns:\n str: An RFC 1123 date string, e.g.: \"Tue, 15 Nov 1994 12:45:26 GMT\".\n\n \"\"\"\n\n # Tue, 15 Nov 1994 12:45:26 GMT\n return dt.strftime('%a, %d %b %Y %H:%M:%S GMT')\n\n\ndef http_date_to_dt(http_date, obs_date=False):\n \"\"\"Converts an HTTP date string to a datetime instance.\n\n Args:\n http_date (str): An RFC 1123 date string, e.g.:\n \"Tue, 15 Nov 1994 12:45:26 GMT\".\n obs_date (bool, optional): Support obs-date formats according to\n RFC 7231, e.g.: \"Sunday, 06-Nov-94 08:49:37 GMT\" (default ``False``).\n\n Returns:\n datetime: A UTC datetime instance corresponding to the given\n HTTP date.\n\n Raises:\n ValueError: http_date doesn't match any of the available time formats\n \"\"\"\n\n if not obs_date:\n # PERF(kgriffs): This violates DRY, but we do it anyway\n # to avoid the overhead of setting up a tuple, looping\n # over it, and setting up exception handling blocks each\n # time around the loop, in the case that we don't actually\n # need to check for multiple formats.\n return strptime(http_date, '%a, %d %b %Y %H:%M:%S %Z')\n\n time_formats = (\n '%a, %d %b %Y %H:%M:%S %Z',\n '%a, %d-%b-%Y %H:%M:%S %Z',\n '%A, %d-%b-%y %H:%M:%S %Z',\n '%a %b %d %H:%M:%S %Y',\n )\n\n # Loop through the formats and return the first that matches\n for time_format in time_formats:\n try:\n return strptime(http_date, time_format)\n except ValueError:\n continue\n\n # Did not match any formats\n raise ValueError('time data %r does not match known formats' % http_date)\n\n\ndef to_query_str(params):\n \"\"\"Converts a dictionary of params to a query string.\n\n Args:\n params (dict): A dictionary of parameters, where each key is a\n parameter name, and each value is either a ``str`` or\n something that can be converted into a ``str``. If `params`\n is a ``list``, it will be converted to a comma-delimited string\n of values (e.g., 'thing=1,2,3')\n\n Returns:\n str: A URI query string including the '?' 
prefix, or an empty string\n if no params are given (the ``dict`` is empty).\n \"\"\"\n\n if not params:\n return ''\n\n # PERF: This is faster than a list comprehension and join, mainly\n # because it allows us to inline the value transform.\n query_str = '?'\n for k, v in params.items():\n if v is True:\n v = 'true'\n elif v is False:\n v = 'false'\n elif isinstance(v, list):\n v = ','.join(map(str, v))\n else:\n v = str(v)\n\n query_str += k + '=' + v + '&'\n\n return query_str[:-1]\n\n\ndef get_bound_method(obj, method_name):\n \"\"\"Get a bound method of the given object by name.\n\n Args:\n obj: Object on which to look up the method.\n method_name: Name of the method to retrieve.\n\n Returns:\n Bound method, or ``None`` if the method does not exist on\n the object.\n\n Raises:\n AttributeError: The method exists, but it isn't\n bound (most likely a class was passed, rather than\n an instance of that class).\n\n \"\"\"\n\n method = getattr(obj, method_name, None)\n if method is not None:\n # NOTE(kgriffs): Ensure it is a bound method\n if six.get_method_self(method) is None: # pragma nocover\n # NOTE(kgriffs): In Python 3 this code is unreachable\n # because the above will raise AttributeError on its\n # own.\n msg = '{0} must be a bound method'.format(method)\n raise AttributeError(msg)\n\n return method\n", "falcon/util/uri.py": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport six\n\n# NOTE(kgriffs): See also RFC 3986\n_UNRESERVED = ('ABCDEFGHIJKLMNOPQRSTUVWXYZ'\n 'abcdefghijklmnopqrstuvwxyz'\n '0123456789'\n '-._~')\n\n# NOTE(kgriffs): See also RFC 3986\n_DELIMITERS = \":/?#[]@!$&'()*+,;=\"\n_ALL_ALLOWED = _UNRESERVED + _DELIMITERS\n\n_HEX_DIGITS = '0123456789ABCDEFabcdef'\n\n\ndef _create_char_encoder(allowed_chars):\n\n lookup = {}\n\n for code_point in range(256):\n if chr(code_point) in allowed_chars:\n encoded_char = chr(code_point)\n else:\n encoded_char = '%{0:02X}'.format(code_point)\n\n # NOTE(kgriffs): PY2 returns str from uri.encode, while\n # PY3 returns a byte array.\n key = chr(code_point) if six.PY2 else code_point\n lookup[key] = encoded_char\n\n return lookup.__getitem__\n\n\ndef _create_str_encoder(is_value):\n\n allowed_chars = _UNRESERVED if is_value else _ALL_ALLOWED\n encode_char = _create_char_encoder(allowed_chars)\n\n def encoder(uri):\n # PERF(kgriffs): Very fast way to check, learned from urlib.quote\n if not uri.rstrip(allowed_chars):\n return uri\n\n # Convert to a byte array if it is not one already\n #\n # NOTE(kgriffs): Code coverage disabled since in Py3K the uri\n # is always a text type, so we get a failure for that tox env.\n if isinstance(uri, six.text_type): # pragma no cover\n uri = uri.encode('utf-8')\n\n # Use our map to encode each char and join the result into a new uri\n #\n # PERF(kgriffs): map is faster than list comp on py27, but a tiny bit\n # slower on py33. 
Since we are already much faster than urllib on\n # py33, let's optimize for py27.\n return ''.join(map(encode_char, uri))\n\n return encoder\n\n\nencode = _create_str_encoder(False)\nencode.__name__ = 'encode'\nencode.__doc__ = \"\"\"Encodes a full or relative URI according to RFC 3986.\n\nRFC 3986 defines a set of \"unreserved\" characters as well as a\nset of \"reserved\" characters used as delimiters. This function escapes\nall other \"disallowed\" characters by percent-encoding them.\n\nNote:\n This utility is faster in the average case than the similar\n `quote` function found in ``urlib``. It also strives to be easier\n to use by assuming a sensible default of allowed characters.\n\nArgs:\n uri (str): URI or part of a URI to encode. If this is a wide\n string (i.e., ``six.text_type``), it will be encoded to\n a UTF-8 byte array and any multibyte sequences will\n be percent-encoded as-is.\n\nReturns:\n str: An escaped version of `uri`, where all disallowed characters\n have been percent-encoded.\n\n\"\"\"\n\n\nencode_value = _create_str_encoder(True)\nencode_value.name = 'encode_value'\nencode_value.__doc__ = \"\"\"Encodes a value string according to RFC 3986.\n\nDisallowed characters are percent-encoded in a way that models\n``urllib.parse.quote(safe=\"~\")``. However, the Falcon function is faster\nin the average case than the similar `quote` function found in urlib.\nIt also strives to be easier to use by assuming a sensible default\nof allowed characters.\n\nAll reserved characters are lumped together into a single set of\n\"delimiters\", and everything in that set is escaped.\n\nNote:\n RFC 3986 defines a set of \"unreserved\" characters as well as a\n set of \"reserved\" characters used as delimiters.\n\nArgs:\n uri (str): URI fragment to encode. It is assumed not to cross delimiter\n boundaries, and so any reserved URI delimiter characters\n included in it will be escaped. If `value` is a wide\n string (i.e., ``six.text_type``), it will be encoded to\n a UTF-8 byte array and any multibyte sequences will\n be percent-encoded as-is.\n\nReturns:\n str: An escaped version of `uri`, where all disallowed characters\n have been percent-encoded.\n\n\"\"\"\n\n# NOTE(kgriffs): This is actually covered, but not in py33; hence the pragma\nif six.PY2: # pragma: no cover\n\n # This map construction is based on urllib\n _HEX_TO_BYTE = dict((a + b, (chr(int(a + b, 16)), int(a + b, 16)))\n for a in _HEX_DIGITS\n for b in _HEX_DIGITS)\n\n def decode(encoded_uri):\n \"\"\"Decodes percent-encoded characters in a URI or query string.\n\n This function models the behavior of `urllib.parse.unquote_plus`, but\n is faster. It is also more robust, in that it will decode escaped\n UTF-8 mutibyte sequences.\n\n Args:\n encoded_uri (str): An encoded URI (full or partial).\n\n Returns:\n str: A decoded URL. 
Will be of type ``unicode`` on Python 2 IFF the\n URL contained escaped non-ASCII characters, in which case\n UTF-8 is assumed per RFC 3986.\n\n \"\"\"\n\n decoded_uri = encoded_uri\n\n # PERF(kgriffs): Don't take the time to instantiate a new\n # string unless we have to.\n if '+' in decoded_uri:\n decoded_uri = decoded_uri.replace('+', ' ')\n\n # Short-circuit if we can\n if '%' not in decoded_uri:\n return decoded_uri\n\n # Convert to bytes because we are about to replace chars and we\n # don't want Python to mistakenly interpret any high bits.\n if not isinstance(decoded_uri, str):\n # NOTE(kgriffs): Clients should never submit a URI that has\n # unescaped non-ASCII chars in them, but just in case they\n # do, let's encode in a non-lossy format.\n decoded_uri = decoded_uri.encode('utf-8')\n\n only_ascii = True\n\n tokens = decoded_uri.split('%')\n decoded_uri = tokens[0]\n for token in tokens[1:]:\n char, byte = _HEX_TO_BYTE[token[:2]]\n decoded_uri += char + token[2:]\n\n only_ascii = only_ascii and (byte <= 127)\n\n # PERF(kgriffs): Only spend the time to do this if there\n # were non-ascii bytes found in the string.\n if not only_ascii:\n decoded_uri = decoded_uri.decode('utf-8', 'replace')\n\n return decoded_uri\n\n# NOTE(kgriffs): This is actually covered, but not in py2x; hence the pragma\n\nelse: # pragma: no cover\n\n # This map construction is based on urllib\n _HEX_TO_BYTE = dict(((a + b).encode(), bytes([int(a + b, 16)]))\n for a in _HEX_DIGITS\n for b in _HEX_DIGITS)\n\n def _unescape(matchobj):\n # NOTE(kgriffs): Strip '%' and convert the hex number\n return _HEX_TO_BYTE[matchobj.group(0)[1:]]\n\n def decode(encoded_uri):\n \"\"\"Decodes percent-encoded characters in a URI or query string.\n\n This function models the behavior of `urllib.parse.unquote_plus`,\n albeit in a faster, more straightforward manner.\n\n Args:\n encoded_uri (str): An encoded URI (full or partial).\n\n Returns:\n str: A decoded URL. If the URL contains escaped non-ASCII\n characters, UTF-8 is assumed per RFC 3986.\n\n \"\"\"\n\n decoded_uri = encoded_uri\n\n # PERF(kgriffs): Don't take the time to instantiate a new\n # string unless we have to.\n if '+' in decoded_uri:\n decoded_uri = decoded_uri.replace('+', ' ')\n\n # Short-circuit if we can\n if '%' not in decoded_uri:\n return decoded_uri\n\n # NOTE(kgriffs): Clients should never submit a URI that has\n # unescaped non-ASCII chars in them, but just in case they\n # do, let's encode into a non-lossy format.\n decoded_uri = decoded_uri.encode('utf-8')\n\n # PERF(kgriffs): This was found to be faster than using\n # a regex sub call or list comprehension with a join.\n tokens = decoded_uri.split(b'%')\n decoded_uri = tokens[0]\n for token in tokens[1:]:\n decoded_uri += _HEX_TO_BYTE[token[:2]] + token[2:]\n\n # Convert back to str\n return decoded_uri.decode('utf-8', 'replace')\n\n\ndef parse_query_string(query_string, keep_blank_qs_values=False):\n \"\"\"Parse a query string into a dict.\n\n Query string parameters are assumed to use standard form-encoding. Only\n parameters with values are parsed. 
for example, given 'foo=bar&flag',\n this function would ignore 'flag' unless the `keep_blank_qs_values` option\n is set.\n\n Note:\n In addition to the standard HTML form-based method for specifying\n lists by repeating a given param multiple times, Falcon supports\n a more compact form in which the param may be given a single time\n but set to a ``list`` of comma-separated elements (e.g., 'foo=a,b,c').\n\n When using this format, all commas uri-encoded will not be treated by\n Falcon as a delimiter. If the client wants to send a value as a list,\n it must not encode the commas with the values.\n\n The two different ways of specifying lists may not be mixed in\n a single query string for the same parameter.\n\n Args:\n query_string (str): The query string to parse.\n keep_blank_qs_values (bool): If set to ``True``, preserves boolean\n fields and fields with no content as blank strings.\n\n Returns:\n dict: A dictionary of (*name*, *value*) pairs, one per query\n parameter. Note that *value* may be a single ``str``, or a\n ``list`` of ``str``.\n\n Raises:\n TypeError: `query_string` was not a ``str``.\n\n \"\"\"\n\n params = {}\n\n # PERF(kgriffs): This was found to be faster than using a regex, for\n # both short and long query strings. Tested on both CPython 2.7 and 3.4,\n # and on PyPy 2.3.\n for field in query_string.split('&'):\n k, _, v = field.partition('=')\n if not (v or keep_blank_qs_values):\n continue\n\n # Note(steffgrez): Falcon first decode name parameter for handle\n # utf8 character.\n k = decode(k)\n\n # NOTE(steffgrez): Falcon decode value at the last moment. So query\n # parser won't mix up between percent-encoded comma (as value) and\n # comma-separated list (as reserved character for sub-delimiter)\n if k in params:\n # The key was present more than once in the POST data. Convert to\n # a list, or append the next value to the list.\n old_value = params[k]\n if isinstance(old_value, list):\n old_value.append(decode(v))\n else:\n params[k] = [old_value, decode(v)]\n\n else:\n if ',' in v:\n # NOTE(kgriffs): Falcon supports a more compact form of\n # lists, in which the elements are comma-separated and\n # assigned to a single param instance. If it turns out that\n # very few people use this, it can be deprecated at some\n # point.\n v = v.split(',')\n\n if not keep_blank_qs_values:\n # NOTE(kgriffs): Normalize the result in the case that\n # some elements are empty strings, such that the result\n # will be the same for 'foo=1,,3' as 'foo=1&foo=&foo=3'.\n params[k] = [decode(element) for element in v if element]\n else:\n params[k] = [decode(element) for element in v]\n else:\n params[k] = decode(v)\n\n return params\n\n\ndef parse_host(host, default_port=None):\n \"\"\"Parse a canonical 'host:port' string into parts.\n\n Parse a host string (which may or may not contain a port) into\n parts, taking into account that the string may contain\n either a domain name or an IP address. In the latter case,\n both IPv4 and IPv6 addresses are supported.\n\n Args:\n host (str): Host string to parse, optionally containing a\n port number.\n default_port (int, optional): Port number to return when\n the host string does not contain one (default ``None``).\n\n Returns:\n tuple: A parsed (*host*, *port*) tuple from the given\n host string, with the port converted to an ``int``.\n If the host string does not specify a port, `default_port` is\n used instead.\n\n \"\"\"\n\n # NOTE(kgriff): The value from the Host header may\n # contain a port, so check that and strip it if\n # necessary. 
This is complicated by the fact that\n # a hostname may be specified either as an IP address\n # or as a domain name, and in the case of IPv6 there\n # may be multiple colons in the string.\n\n if host.startswith('['):\n # IPv6 address with a port\n pos = host.rfind(']:')\n if pos != -1:\n return (host[1:pos], int(host[pos + 2:]))\n else:\n return (host[1:-1], default_port)\n\n pos = host.rfind(':')\n if (pos == -1) or (pos != host.find(':')):\n # Bare domain name or IP address\n return (host, default_port)\n\n # NOTE(kgriffs): At this point we know that there was\n # only a single colon, so we should have an IPv4 address\n # or a domain name plus a port\n name, _, port = host.partition(':')\n return (name, int(port))\n"}
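The query-string and host parsing rules quoted above are easiest to follow with concrete inputs. The sketch below is a condensed, standalone reimplementation of the documented behaviour (value-less fields dropped by default, repeated keys collected into lists, the compact comma-list form, and IPv6-aware host:port splitting); it leans on urllib.parse for percent-decoding rather than the quoted decode helper, so the function names here are illustrative stand-ins rather than the falcon API.

```python
# Minimal sketch of the documented parsing behaviour; not the falcon implementation.
from urllib.parse import unquote_plus


def parse_query_string(query_string, keep_blank_qs_values=False):
    """Parse 'a=1&a=2&b=x,y&flag' into a dict, mirroring the rules above."""
    params = {}
    for field in query_string.split('&'):
        k, _, v = field.partition('=')
        if not (v or keep_blank_qs_values):
            continue  # drop value-less fields such as 'flag' by default
        k = unquote_plus(k)
        if k in params:
            # Repeated key: collect the values into a list.
            old = params[k]
            if isinstance(old, list):
                old.append(unquote_plus(v))
            else:
                params[k] = [old, unquote_plus(v)]
        elif ',' in v:
            # Compact list form: 'foo=a,b,c' -> ['a', 'b', 'c'].
            params[k] = [unquote_plus(e) for e in v.split(',')
                         if e or keep_blank_qs_values]
        else:
            params[k] = unquote_plus(v)
    return params


def parse_host(host, default_port=None):
    """Split 'host:port' (domain, IPv4 or IPv6) into a (host, port) tuple."""
    if host.startswith('['):                   # bracketed IPv6 literal
        pos = host.rfind(']:')
        if pos != -1:
            return host[1:pos], int(host[pos + 2:])
        return host[1:-1], default_port
    pos = host.rfind(':')
    if pos == -1 or pos != host.find(':'):     # no port, or bare IPv6 address
        return host, default_port
    name, _, port = host.partition(':')
    return name, int(port)


assert parse_query_string('foo=bar&flag') == {'foo': 'bar'}
assert parse_query_string('foo=a,b,c') == {'foo': ['a', 'b', 'c']}
assert parse_query_string('id=1&id=2') == {'id': ['1', '2']}
assert parse_host('example.com:8080') == ('example.com', 8080)
assert parse_host('[2001:db8::1]:8080') == ('2001:db8::1', 8080)
assert parse_host('10.0.0.1', default_port=80) == ('10.0.0.1', 80)
```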
|
{"falcon/request.py": [{"type": "function", "name": "Request.access_route", "lines": [527, 568], "signature": "def access_route(self):", "doc": "A list of all addresses from client to the last proxy server.\n\nInspired by werkzeug's ``access_route``.\n\nNote:\n The list may contain string(s) other than IPv4 / IPv6 address. For\n example the \"unknown\" identifier and obfuscated identifier defined\n by `RFC 7239`_.\n\n .. _RFC 7239: https://tools.ietf.org/html/rfc7239#section-6\n\nWarning:\n HTTP Forwarded headers can be forged by any client or proxy.\n Use this property with caution and write your own verify function.\n The best practice is always using :py:attr:`~.remote_addr` unless\n your application is hosted behind some reverse proxy server(s).\n Also only trust the **last N** addresses provided by those reverse\n proxy servers.\n\nThis property will try to derive addresses sequentially from:\n\n - ``Forwarded``\n - ``X-Forwarded-For``\n - ``X-Real-IP``\n - **or** the IP address of the closest client/proxy"}, {"type": "function", "name": "Request.remote_addr", "lines": [571, 582], "signature": "def remote_addr(self):", "doc": "String of the IP address of the closest client/proxy.\n\nAddress will only be derived from WSGI ``REMOTE_ADDR`` header, which\ncan not be modified by any client or proxy.\n\nNote:\n If your application is behind one or more reverse proxies, you may\n need to use :py:obj:`~.access_route` to retrieve the real IP\n address of the client."}, {"type": "function", "name": "Request._parse_rfc_forwarded", "lines": [1099, 1117], "signature": "def _parse_rfc_forwarded(self):", "doc": "Parse RFC 7239 \"Forwarded\" header.\n\nReturns:\n list: addresses derived from \"for\" parameters."}], "falcon/util/uri.py": [{"type": "function", "name": "unquote_string", "lines": [381, 410], "signature": "def unquote_string(quoted):", "doc": "Unquote an RFC 7320 \"quoted-string\".\n\nArgs:\n quoted (str): Original quoted string\n\nReturns:\n str: unquoted string\n\nRaises:\n TypeError: `quoted` was not a ``str``."}]}
| null |
["tests/test_access_route.py::TestAccessRoute::test_malformed_rfc_forwarded", "tests/test_access_route.py::TestAccessRoute::test_remote_addr", "tests/test_access_route.py::TestAccessRoute::test_remote_addr_only", "tests/test_access_route.py::TestAccessRoute::test_rfc_forwarded", "tests/test_access_route.py::TestAccessRoute::test_x_forwarded_for", "tests/test_access_route.py::TestAccessRoute::test_x_real_ip"]
|
[]
|
77d5e6394a88ead151c9469494749f95f06b24bf
|
{"first_commit_time": 1441828951.0, "pr_title": "Add `access_route` and `remote_addr` to Request object", "pr_body": "which is similar to Werkzeug but more powerful\n\nSupported:\n1. fetch from \"Forwarded\" header (defined by RFC7239)\n2. fetch from \"X-Forwarded-For\" header\n3. fetch from \"X-Read-IP\" header\n4. fetch from wsgi \"REMOTE_ADDR\" header\n\nRelated to #539 and #598 \n\nThis pull-request is simply a duplicate of #599 but always rebased onto the latest origin/master. I don't want to lose any more comments due to the rebase. Please write your comment to #599.\n", "pr_timeline": [{"time": 1446154842.0, "comment": "@philiptzou OK, I think we are ready to merge. Just one final step: Please squash all the patches down to a single commit and format the first line with a \"feat(Request): Blah blah\" prefix, per https://github.com/falconry/falcon/blob/master/CONTRIBUTING.md#commit-message-format.\n"}, {"time": 1446157288.0, "comment": "@kgriffs Done.\n"}, {"time": 1446157954.0, "comment": "Nice work. Thanks!\n"}], "issues": {}}
|
|
google-deepmind/optax
| 1,063
|
https://github.com/google-deepmind/optax/pull/1063
|
google-deepmind__optax-1063
|
[]
|
ee63e4500fbd412d778a1ea143237007e45a5628
|
diff --git a/docs/api/utilities.rst b/docs/api/utilities.rst
index b199697f6..c792944fd 100644
--- a/docs/api/utilities.rst
+++ b/docs/api/utilities.rst
@@ -101,6 +101,7 @@ Tree
tree_mul
tree_ones_like
tree_random_like
+ tree_split_key_like
tree_scalar_mul
tree_set
tree_sub
@@ -153,6 +154,10 @@ Tree ones like
~~~~~~~~~~~~~~
.. autofunction:: tree_ones_like
+Tree with random keys
+~~~~~~~~~~~~~~~~~~~~~~~
+.. autofunction:: tree_split_key_like
+
Tree with random values
~~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: tree_random_like
diff --git a/optax/tree_utils/__init__.py b/optax/tree_utils/__init__.py
index e89aef861..44e2e195d 100644
--- a/optax/tree_utils/__init__.py
+++ b/optax/tree_utils/__init__.py
@@ -17,6 +17,7 @@
# pylint: disable=g-importing-member
from optax.tree_utils._casting import tree_cast
from optax.tree_utils._random import tree_random_like
+from optax.tree_utils._random import tree_split_key_like
from optax.tree_utils._state_utils import NamedTupleKey
from optax.tree_utils._state_utils import tree_get
from optax.tree_utils._state_utils import tree_get_all_with_path
diff --git a/optax/tree_utils/_random.py b/optax/tree_utils/_random.py
index 33783b2b1..6b4fab307 100644
--- a/optax/tree_utils/_random.py
+++ b/optax/tree_utils/_random.py
@@ -21,7 +21,7 @@
from jax import tree_util as jtu
-def _tree_rng_keys_split(
+def tree_split_key_like(
rng_key: chex.PRNGKey, target_tree: chex.ArrayTree
) -> chex.ArrayTree:
"""Split keys to match structure of target tree.
@@ -67,7 +67,7 @@ def tree_random_like(
.. versionadded:: 0.2.1
"""
- keys_tree = _tree_rng_keys_split(rng_key, target_tree)
+ keys_tree = tree_split_key_like(rng_key, target_tree)
return jtu.tree_map(
lambda l, k: sampler(k, l.shape, dtype or l.dtype),
target_tree,
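The newly exposed helper splits a single PRNG key into a pytree of keys matching the target structure, which is exactly what tree_random_like builds on. A minimal usage sketch, assuming an optax version that already exports the function added by this patch:

```python
import jax
import jax.numpy as jnp
from optax import tree_utils as otu

params = {'w': jnp.zeros((3, 2)), 'b': jnp.zeros(2)}

# One key per leaf, arranged in the same pytree structure as `params`.
keys = otu.tree_split_key_like(jax.random.PRNGKey(0), params)

# Each leaf now has its own independent key, e.g. for per-leaf noise.
noisy = jax.tree_util.tree_map(
    lambda p, k: p + jax.random.normal(k, p.shape), params, keys)
```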
|
diff --git a/optax/tree_utils/_random_test.py b/optax/tree_utils/_random_test.py
index 25ea580aa..077ca678a 100644
--- a/optax/tree_utils/_random_test.py
+++ b/optax/tree_utils/_random_test.py
@@ -22,6 +22,7 @@
import jax.numpy as jnp
import jax.random as jrd
import jax.tree_util as jtu
+import numpy as np
from optax import tree_utils as otu
# We consider samplers with varying input dtypes, we do not test all possible
@@ -48,6 +49,19 @@ def get_variable(type_var: str):
class RandomTest(chex.TestCase):
+ def test_tree_split_key_like(self):
+ rng_key = jrd.PRNGKey(0)
+ tree = {'a': jnp.zeros(2), 'b': {'c': [jnp.ones(3), jnp.zeros([4, 5])]}}
+ keys_tree = otu.tree_split_key_like(rng_key, tree)
+
+ with self.subTest('Test structure matches'):
+ self.assertEqual(jtu.tree_structure(tree), jtu.tree_structure(keys_tree))
+
+ with self.subTest('Test random key split'):
+ fst = jnp.stack(jtu.tree_flatten(keys_tree)[0])
+ snd = jrd.split(rng_key, jtu.tree_structure(tree).num_leaves)
+ np.testing.assert_array_equal(fst, snd)
+
@parameterized.product(
_SAMPLER_DTYPES,
type_var=['real_array', 'complex_array', 'pytree'],
| 2024-09-17T21:50:23
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"docs/api/utilities.rst": "Utilities\n=========\n\nGeneral\n-------\n\n.. currentmodule:: optax\n\n.. autosummary::\n scale_gradient\n value_and_grad_from_state\n\nScale gradient\n~~~~~~~~~~~~~~\n.. autofunction:: scale_gradient\n\nValue and grad from state\n~~~~~~~~~~~~~~~~~~~~~~~~~\n.. autofunction:: value_and_grad_from_state\n\n\nNumerical Stability\n-------------------\n\n.. currentmodule:: optax\n\n.. autosummary::\n safe_increment\n safe_norm\n safe_root_mean_squares\n\nSafe increment\n~~~~~~~~~~~~~~\n.. autofunction:: safe_increment\n\nSafe norm\n~~~~~~~~~\n.. autofunction:: safe_norm\n\nSafe root mean squares\n~~~~~~~~~~~~~~~~~~~~~~\n.. autofunction:: safe_root_mean_squares\n\n\nLinear Algebra Operators\n------------------------\n\n.. currentmodule:: optax\n\n.. autosummary::\n matrix_inverse_pth_root\n power_iteration\n\nMatrix inverse pth root\n~~~~~~~~~~~~~~~~~~~~~~~~\n.. autofunction:: matrix_inverse_pth_root\n\nPower iteration\n~~~~~~~~~~~~~~~\n.. autofunction:: power_iteration\n\n\nSecond Order Optimization\n-------------------------\n\n.. currentmodule:: optax.second_order\n\n.. autosummary::\n fisher_diag\n hessian_diag\n hvp\n\nFisher diagonal\n~~~~~~~~~~~~~~~\n.. autofunction:: fisher_diag\n\nHessian diagonal\n~~~~~~~~~~~~~~~~\n.. autofunction:: hessian_diag\n\nHessian vector product\n~~~~~~~~~~~~~~~~~~~~~~\n.. autofunction:: hvp\n\n\nTree\n----\n\n.. currentmodule:: optax.tree_utils\n\n.. autosummary::\n NamedTupleKey\n tree_add\n tree_add_scalar_mul\n tree_div\n tree_get\n tree_get_all_with_path\n tree_l1_norm\n tree_l2_norm\n tree_map_params\n tree_mul\n tree_ones_like\n tree_random_like\n tree_scalar_mul\n tree_set\n tree_sub\n tree_sum\n tree_vdot\n tree_where\n tree_zeros_like\n\nNamedTupleKey\n~~~~~~~~~~~~~\n.. autoclass:: NamedTupleKey\n\nTree add\n~~~~~~~~\n.. autofunction:: tree_add\n\nTree add and scalar multiply\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n.. autofunction:: tree_add_scalar_mul\n\nTree divide\n~~~~~~~~~~~\n.. autofunction:: tree_div\n\nFetch single value that match a given key\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n.. autofunction:: tree_get\n\nFetch all values that match a given key\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n.. autofunction:: tree_get_all_with_path\n\nTree l1 norm\n~~~~~~~~~~~~\n.. autofunction:: tree_l1_norm\n\nTree l2 norm\n~~~~~~~~~~~~\n.. autofunction:: tree_l2_norm\n\nTree map parameters\n~~~~~~~~~~~~~~~~~~~\n.. autofunction:: tree_map_params\n\nTree multiply\n~~~~~~~~~~~~~\n.. autofunction:: tree_mul\n\nTree ones like\n~~~~~~~~~~~~~~\n.. autofunction:: tree_ones_like\n\nTree with random values\n~~~~~~~~~~~~~~~~~~~~~~~\n.. autofunction:: tree_random_like\n\nTree scalar multiply\n~~~~~~~~~~~~~~~~~~~~\n.. autofunction:: tree_scalar_mul\n\nSet values in a tree\n~~~~~~~~~~~~~~~~~~~~\n.. autofunction:: tree_set\n\nTree subtract\n~~~~~~~~~~~~~\n.. autofunction:: tree_sub\n\nTree sum\n~~~~~~~~\n.. autofunction:: tree_sum\n\nTree inner product\n~~~~~~~~~~~~~~~~~~\n.. autofunction:: tree_vdot\n\nTree where\n~~~~~~~~~~\n.. autofunction:: tree_where\n\nTree zeros like\n~~~~~~~~~~~~~~~\n.. autofunction:: tree_zeros_like\n", "optax/tree_utils/__init__.py": "# Copyright 2019 DeepMind Technologies Limited. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"The tree_utils sub-package.\"\"\"\n\n# pylint: disable=g-importing-member\nfrom optax.tree_utils._casting import tree_cast\nfrom optax.tree_utils._random import tree_random_like\nfrom optax.tree_utils._state_utils import NamedTupleKey\nfrom optax.tree_utils._state_utils import tree_get\nfrom optax.tree_utils._state_utils import tree_get_all_with_path\nfrom optax.tree_utils._state_utils import tree_map_params\nfrom optax.tree_utils._state_utils import tree_set\nfrom optax.tree_utils._tree_math import tree_add\nfrom optax.tree_utils._tree_math import tree_add_scalar_mul\nfrom optax.tree_utils._tree_math import tree_bias_correction\nfrom optax.tree_utils._tree_math import tree_clip\nfrom optax.tree_utils._tree_math import tree_div\nfrom optax.tree_utils._tree_math import tree_full_like\nfrom optax.tree_utils._tree_math import tree_l1_norm\nfrom optax.tree_utils._tree_math import tree_l2_norm\nfrom optax.tree_utils._tree_math import tree_mul\nfrom optax.tree_utils._tree_math import tree_ones_like\nfrom optax.tree_utils._tree_math import tree_scalar_mul\nfrom optax.tree_utils._tree_math import tree_sub\nfrom optax.tree_utils._tree_math import tree_sum\nfrom optax.tree_utils._tree_math import tree_update_infinity_moment\nfrom optax.tree_utils._tree_math import tree_update_moment\nfrom optax.tree_utils._tree_math import tree_update_moment_per_elem_norm\nfrom optax.tree_utils._tree_math import tree_vdot\nfrom optax.tree_utils._tree_math import tree_where\nfrom optax.tree_utils._tree_math import tree_zeros_like\n", "optax/tree_utils/_random.py": "# Copyright 2019 DeepMind Technologies Limited. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Utilities to generate random pytrees.\"\"\"\n\nfrom typing import Callable, Optional\n\nimport chex\nimport jax\nfrom jax import tree_util as jtu\n\n\ndef _tree_rng_keys_split(\n rng_key: chex.PRNGKey, target_tree: chex.ArrayTree\n) -> chex.ArrayTree:\n \"\"\"Split keys to match structure of target tree.\n\n Args:\n rng_key: the key to split.\n target_tree: the tree whose structure to match.\n\n Returns:\n a tree of rng keys.\n \"\"\"\n tree_def = jtu.tree_structure(target_tree)\n keys = jax.random.split(rng_key, tree_def.num_leaves)\n return jtu.tree_unflatten(tree_def, keys)\n\n\ndef tree_random_like(\n rng_key: chex.PRNGKey,\n target_tree: chex.ArrayTree,\n sampler: Callable[\n [chex.PRNGKey, chex.Shape, chex.ArrayDType], chex.Array\n ] = jax.random.normal,\n dtype: Optional[chex.ArrayDType] = None,\n) -> chex.ArrayTree:\n \"\"\"Create tree with random entries of the same shape as target tree.\n\n .. warning::\n The possible dtypes may be limited by the sampler, for example\n ``jax.random.rademacher`` only supports integer dtypes and will raise an\n error if the dtype of the target tree is not an integer or if the dtype\n is not of integer type.\n\n Args:\n rng_key: the key for the random number generator.\n target_tree: the tree whose structure to match. Leaves must be arrays.\n sampler: the noise sampling function, by default ``jax.random.normal``.\n dtype: the desired dtype for the random numbers, passed to ``sampler``. If\n None, the dtype of the target tree is used if possible.\n\n Returns:\n a random tree with the same structure as ``target_tree``, whose leaves have\n distribution ``sampler``.\n\n .. versionadded:: 0.2.1\n \"\"\"\n keys_tree = _tree_rng_keys_split(rng_key, target_tree)\n return jtu.tree_map(\n lambda l, k: sampler(k, l.shape, dtype or l.dtype),\n target_tree,\n keys_tree,\n )\n"}
|
diff --git a/docs/api/utilities.rst b/docs/api/utilities.rst
index b199697f6..c792944fd 100644
--- a/docs/api/utilities.rst
+++ b/docs/api/utilities.rst
@@ -101,6 +101,7 @@ Tree
tree_mul
tree_ones_like
tree_random_like
+ tree_split_key_like
tree_scalar_mul
tree_set
tree_sub
@@ -153,6 +154,10 @@ Tree ones like
~~~~~~~~~~~~~~
.. autofunction:: tree_ones_like
+Tree with random keys
+~~~~~~~~~~~~~~~~~~~~~~~
+.. autofunction:: tree_split_key_like
+
Tree with random values
~~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: tree_random_like
|
{"optax/tree_utils/_random.py": [{"type": "function", "name": "tree_split_key_like", "lines": [24, 38], "signature": "def tree_split_key_like( rng_key: chex.PRNGKey, target_tree: chex.ArrayTree ) -> chex.ArrayTree:", "doc": "Split keys to match structure of target tree.\n\nArgs:\n rng_key: the key to split.\n target_tree: the tree whose structure to match.\n\nReturns:\n a tree of rng keys."}]}
| null |
["optax/tree_utils/_random_test.py::RandomTest::test_tree_split_key_like"]
|
["optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like0", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like1", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like10", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like11", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like12", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like13", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like14", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like2", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like3", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like4", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like5", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like6", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like7", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like8", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like9"]
|
1e08bccf195ac54e7d9d766eb5e69345bf0e3230
|
{"first_commit_time": 1726621485.0, "pr_title": "Add optax.tree_utils.tree_random_split.", "pr_body": "This exposes the formerly private function `_tree_rng_keys_split` in [`optax/tree_utils/_random.py`](https://github.com/google-deepmind/optax/blob/main/optax/tree_utils/_random.py) to the public API.\r\n\r\nI've found this to be a useful helper function for manipulation of random trees, and intend to use it for future PRs.", "pr_timeline": [{"time": 1726611196.0, "comment": "Thanks for doing that, it's a good idea, happy to know it can be useful. (sorry for the incremental review, didn't mean to do it like that)."}], "issues": {}}
|
google-deepmind/optax
| 632
|
https://github.com/google-deepmind/optax/pull/632
|
google-deepmind__optax-632
|
[]
|
35920a3712d71992429dc0bd605e75d31cbe2b21
|
diff --git a/optax/__init__.py b/optax/__init__.py
index 0b373684d..6f844478d 100644
--- a/optax/__init__.py
+++ b/optax/__init__.py
@@ -17,6 +17,7 @@
from optax import contrib
from optax import losses
from optax import monte_carlo
+from optax import projections
from optax import schedules
from optax import second_order
from optax import tree_utils
diff --git a/optax/projections/__init__.py b/optax/projections/__init__.py
new file mode 100644
index 000000000..9b38aeb6c
--- /dev/null
+++ b/optax/projections/__init__.py
@@ -0,0 +1,18 @@
+# Copyright 2021 DeepMind Technologies Limited. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""The projections sub-package."""
+
+from optax.projections._projections import projection_non_negative
diff --git a/optax/projections/_projections.py b/optax/projections/_projections.py
new file mode 100644
index 000000000..316cd21fb
--- /dev/null
+++ b/optax/projections/_projections.py
@@ -0,0 +1,39 @@
+# Copyright 2021 DeepMind Technologies Limited. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Euclidean projections."""
+
+from typing import Any
+
+import jax
+from jax import tree_util as jtu
+
+
+def projection_non_negative(pytree: Any) -> Any:
+ r"""Projection onto the non-negative orthant.
+
+ .. math::
+
+ \underset{p}{\text{argmin}} ~ ||x - p||_2^2 \quad
+ \textrm{subject to} \quad p \ge 0
+
+ where :math:`x` is the input pytree.
+
+ Args:
+ pytree: pytree to project.
+ Returns:
+ projected pytree, with the same structure as ``pytree``.
+ """
+ return jtu.tree_map(jax.nn.relu, pytree)
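The projection reduces to a leaf-wise relu, so calling it only needs a pytree. A minimal usage sketch, assuming an optax version that already ships the projections sub-package introduced by this patch:

```python
import jax.numpy as jnp
from optax import projections as proj

tree = {'w': jnp.array([-1.0, 2.0]), 'b': jnp.array([-0.5])}

# Clips every leaf to the non-negative orthant:
# argmin_p ||x - p||_2^2  subject to  p >= 0.
projected = proj.projection_non_negative(tree)
# {'w': Array([0., 2.]), 'b': Array([0.])}
```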
|
diff --git a/optax/projections/_projections_test.py b/optax/projections/_projections_test.py
new file mode 100644
index 000000000..15d4f1204
--- /dev/null
+++ b/optax/projections/_projections_test.py
@@ -0,0 +1,47 @@
+# Copyright 2021 DeepMind Technologies Limited. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for optax.projections."""
+
+from absl.testing import absltest
+from absl.testing import parameterized
+
+import chex
+import jax.numpy as jnp
+import numpy as np
+
+from optax import projections as proj
+
+
+class ProjectionsTest(parameterized.TestCase):
+
+ def test_projection_non_negative(self):
+ # test with array
+ x = jnp.array([-1.0, 2.0, 3.0])
+ expected = jnp.array([0, 2.0, 3.0])
+ np.testing.assert_array_equal(proj.projection_non_negative(x), expected)
+
+ # test with tuple
+ np.testing.assert_array_equal(proj.projection_non_negative((x, x)),
+ (expected, expected))
+
+ # test with nested pytree
+ tree_x = (-1.0, {'k1': 1.0, 'k2': (1.0, 1.0)}, 1.0)
+ tree_expected = (0.0, {'k1': 1.0, 'k2': (1.0, 1.0)}, 1.0)
+ chex.assert_trees_all_equal(proj.projection_non_negative(tree_x),
+ tree_expected)
+
+if __name__ == '__main__':
+ absltest.main()
| 2023-11-13T14:40:27
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"optax/__init__.py": "# Copyright 2019 DeepMind Technologies Limited. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Optax: composable gradient processing and optimization, in JAX.\"\"\"\n\nfrom optax import contrib\nfrom optax import losses\nfrom optax import monte_carlo\nfrom optax import schedules\nfrom optax import second_order\nfrom optax import tree_utils\nfrom optax._src.alias import adabelief\nfrom optax._src.alias import adafactor\nfrom optax._src.alias import adagrad\nfrom optax._src.alias import adam\nfrom optax._src.alias import adamax\nfrom optax._src.alias import adamaxw\nfrom optax._src.alias import adamw\nfrom optax._src.alias import amsgrad\nfrom optax._src.alias import fromage\nfrom optax._src.alias import lamb\nfrom optax._src.alias import lars\nfrom optax._src.alias import lion\nfrom optax._src.alias import MaskOrFn\nfrom optax._src.alias import noisy_sgd\nfrom optax._src.alias import novograd\nfrom optax._src.alias import optimistic_gradient_descent\nfrom optax._src.alias import radam\nfrom optax._src.alias import rmsprop\nfrom optax._src.alias import ScalarOrSchedule\nfrom optax._src.alias import sgd\nfrom optax._src.alias import sm3\nfrom optax._src.alias import yogi\nfrom optax._src.base import EmptyState\nfrom optax._src.base import GradientTransformation\nfrom optax._src.base import GradientTransformationExtraArgs\nfrom optax._src.base import identity\nfrom optax._src.base import OptState\nfrom optax._src.base import Params\nfrom optax._src.base import Schedule\nfrom optax._src.base import set_to_zero\nfrom optax._src.base import stateless\nfrom optax._src.base import stateless_with_tree_map\nfrom optax._src.base import TransformInitFn\nfrom optax._src.base import TransformUpdateExtraArgsFn\nfrom optax._src.base import TransformUpdateFn\nfrom optax._src.base import Updates\nfrom optax._src.base import with_extra_args_support\nfrom optax._src.clipping import adaptive_grad_clip\nfrom optax._src.clipping import AdaptiveGradClipState\nfrom optax._src.clipping import clip\nfrom optax._src.clipping import clip_by_block_rms\nfrom optax._src.clipping import clip_by_global_norm\nfrom optax._src.clipping import ClipByGlobalNormState\nfrom optax._src.clipping import ClipState\nfrom optax._src.clipping import per_example_global_norm_clip\nfrom optax._src.clipping import per_example_layer_norm_clip\nfrom optax._src.combine import chain\nfrom optax._src.combine import multi_transform\nfrom optax._src.combine import MultiTransformState\nfrom optax._src.combine import named_chain\nfrom optax._src.constrain import keep_params_nonnegative\nfrom optax._src.constrain import NonNegativeParamsState\nfrom optax._src.constrain import zero_nans\nfrom optax._src.constrain import ZeroNansState\nfrom optax._src.factorized import FactoredState\nfrom optax._src.factorized import scale_by_factored_rms\nfrom optax._src.linear_algebra import global_norm\nfrom optax._src.linear_algebra 
import matrix_inverse_pth_root\nfrom optax._src.linear_algebra import power_iteration\nfrom optax._src.lookahead import lookahead\nfrom optax._src.lookahead import LookaheadParams\nfrom optax._src.lookahead import LookaheadState\nfrom optax._src.numerics import safe_int32_increment\nfrom optax._src.numerics import safe_norm\nfrom optax._src.numerics import safe_root_mean_squares\nfrom optax._src.transform import add_decayed_weights\nfrom optax._src.transform import add_noise\nfrom optax._src.transform import AddDecayedWeightsState\nfrom optax._src.transform import AddNoiseState\nfrom optax._src.transform import apply_every\nfrom optax._src.transform import ApplyEvery\nfrom optax._src.transform import bias_correction\nfrom optax._src.transform import centralize\nfrom optax._src.transform import ema\nfrom optax._src.transform import EmaState\nfrom optax._src.transform import scale\nfrom optax._src.transform import scale_by_adam\nfrom optax._src.transform import scale_by_adamax\nfrom optax._src.transform import scale_by_amsgrad\nfrom optax._src.transform import scale_by_belief\nfrom optax._src.transform import scale_by_distance_over_gradients\nfrom optax._src.transform import scale_by_lion\nfrom optax._src.transform import scale_by_novograd\nfrom optax._src.transform import scale_by_optimistic_gradient\nfrom optax._src.transform import scale_by_param_block_norm\nfrom optax._src.transform import scale_by_param_block_rms\nfrom optax._src.transform import scale_by_radam\nfrom optax._src.transform import scale_by_rms\nfrom optax._src.transform import scale_by_rss\nfrom optax._src.transform import scale_by_schedule\nfrom optax._src.transform import scale_by_sm3\nfrom optax._src.transform import scale_by_stddev\nfrom optax._src.transform import scale_by_trust_ratio\nfrom optax._src.transform import scale_by_yogi\nfrom optax._src.transform import ScaleByAdamState\nfrom optax._src.transform import ScaleByAmsgradState\nfrom optax._src.transform import ScaleByBeliefState\nfrom optax._src.transform import ScaleByLionState\nfrom optax._src.transform import ScaleByNovogradState\nfrom optax._src.transform import ScaleByRmsState\nfrom optax._src.transform import ScaleByRssState\nfrom optax._src.transform import ScaleByRStdDevState\nfrom optax._src.transform import ScaleByScheduleState\nfrom optax._src.transform import ScaleBySM3State\nfrom optax._src.transform import ScaleByTrustRatioState\nfrom optax._src.transform import ScaleState\nfrom optax._src.transform import trace\nfrom optax._src.transform import TraceState\nfrom optax._src.transform import update_infinity_moment\nfrom optax._src.transform import update_moment\nfrom optax._src.transform import update_moment_per_elem_norm\nfrom optax._src.update import apply_updates\nfrom optax._src.update import incremental_update\nfrom optax._src.update import periodic_update\nfrom optax._src.utils import multi_normal\nfrom optax._src.utils import scale_gradient\nfrom optax._src.wrappers import apply_if_finite\nfrom optax._src.wrappers import ApplyIfFiniteState\nfrom optax._src.wrappers import flatten\nfrom optax._src.wrappers import masked\nfrom optax._src.wrappers import MaskedNode\nfrom optax._src.wrappers import MaskedState\nfrom optax._src.wrappers import maybe_update\nfrom optax._src.wrappers import MaybeUpdateState\nfrom optax._src.wrappers import MultiSteps\nfrom optax._src.wrappers import MultiStepsState\nfrom optax._src.wrappers import ShouldSkipUpdateFunction\nfrom optax._src.wrappers import skip_large_updates\nfrom optax._src.wrappers import 
skip_not_finite\n\n# TODO(mtthss): remove tree_utils aliases after updates.\ntree_map_params = tree_utils.tree_map_params\n\n# TODO(mtthss): remove schedules alises from flat namespaces after user updates.\nconstant_schedule = schedules.constant_schedule\ncosine_decay_schedule = schedules.cosine_decay_schedule\ncosine_onecycle_schedule = schedules.cosine_onecycle_schedule\nexponential_decay = schedules.exponential_decay\ninject_hyperparams = schedules.inject_hyperparams\nInjectHyperparamsState = schedules.InjectHyperparamsState\njoin_schedules = schedules.join_schedules\nlinear_onecycle_schedule = schedules.linear_onecycle_schedule\nlinear_schedule = schedules.linear_schedule\npiecewise_constant_schedule = schedules.piecewise_constant_schedule\npiecewise_interpolate_schedule = schedules.piecewise_interpolate_schedule\npolynomial_schedule = schedules.polynomial_schedule\nsgdr_schedule = schedules.sgdr_schedule\nwarmup_cosine_decay_schedule = schedules.warmup_cosine_decay_schedule\nwarmup_exponential_decay_schedule = schedules.warmup_exponential_decay_schedule\ninject_stateful_hyperparams = schedules.inject_stateful_hyperparams\nInjectStatefulHyperparamsState = schedules.InjectStatefulHyperparamsState\nWrappedSchedule = schedules.WrappedSchedule\n\n# TODO(mtthss): remove loss aliases from flat namespace once users have updated.\nconvex_kl_divergence = losses.convex_kl_divergence\ncosine_distance = losses.cosine_distance\ncosine_similarity = losses.cosine_similarity\nctc_loss = losses.ctc_loss\nctc_loss_with_forward_probs = losses.ctc_loss_with_forward_probs\nhinge_loss = losses.hinge_loss\nhuber_loss = losses.huber_loss\nkl_divergence = losses.kl_divergence\nl2_loss = losses.l2_loss\nlog_cosh = losses.log_cosh\nsigmoid_binary_cross_entropy = losses.sigmoid_binary_cross_entropy\nsmooth_labels = losses.smooth_labels\nsoftmax_cross_entropy = losses.softmax_cross_entropy\nsoftmax_cross_entropy_with_integer_labels = (\n losses.softmax_cross_entropy_with_integer_labels\n)\nsquared_error = losses.squared_error\n\n# TODO(mtthss): remove contrib aliases from flat namespace once users updated.\ndifferentially_private_aggregate = contrib.differentially_private_aggregate\nDifferentiallyPrivateAggregateState = (\n contrib.DifferentiallyPrivateAggregateState\n)\ndpsgd = contrib.dpsgd\n\n__version__ = \"0.1.8.dev\"\n\n__all__ = (\n \"adabelief\",\n \"adafactor\",\n \"adagrad\",\n \"adam\",\n \"adamax\",\n \"adamaxw\",\n \"adamw\",\n \"adaptive_grad_clip\",\n \"AdaptiveGradClipState\",\n \"add_decayed_weights\",\n \"add_noise\",\n \"AddDecayedWeightsState\",\n \"AddNoiseState\",\n \"amsgrad\",\n \"apply_every\",\n \"apply_if_finite\",\n \"apply_updates\",\n \"ApplyEvery\",\n \"ApplyIfFiniteState\",\n \"centralize\",\n \"chain\",\n \"clip_by_block_rms\",\n \"clip_by_global_norm\",\n \"clip\",\n \"ClipByGlobalNormState\",\n \"ClipState\",\n \"constant_schedule\",\n \"ctc_loss\",\n \"ctc_loss_with_forward_probs\",\n \"convex_kl_divergence\",\n \"cosine_decay_schedule\",\n \"cosine_distance\",\n \"cosine_onecycle_schedule\",\n \"cosine_similarity\",\n \"differentially_private_aggregate\",\n \"DifferentiallyPrivateAggregateState\",\n \"dpsgd\",\n \"ema\",\n \"EmaState\",\n \"EmptyState\",\n \"exponential_decay\",\n \"FactoredState\",\n \"flatten\",\n \"fromage\",\n \"global_norm\",\n \"GradientTransformation\",\n \"GradientTransformationExtraArgs\",\n \"hinge_loss\",\n \"huber_loss\",\n \"identity\",\n \"incremental_update\",\n \"inject_hyperparams\",\n \"InjectHyperparamsState\",\n \"join_schedules\",\n 
\"keep_params_nonnegative\",\n \"kl_divergence\",\n \"l2_loss\",\n \"lamb\",\n \"lars\",\n \"lion\",\n \"linear_onecycle_schedule\",\n \"linear_schedule\",\n \"log_cosh\",\n \"lookahead\",\n \"LookaheadParams\",\n \"LookaheadState\",\n \"masked\",\n \"MaskOrFn\",\n \"MaskedState\",\n \"matrix_inverse_pth_root\",\n \"maybe_update\",\n \"MaybeUpdateState\",\n \"multi_normal\",\n \"multi_transform\",\n \"MultiSteps\",\n \"MultiStepsState\",\n \"MultiTransformState\",\n \"noisy_sgd\",\n \"novograd\",\n \"NonNegativeParamsState\",\n \"OptState\",\n \"Params\",\n \"periodic_update\",\n \"per_example_global_norm_clip\",\n \"per_example_layer_norm_clip\",\n \"piecewise_constant_schedule\",\n \"piecewise_interpolate_schedule\",\n \"polynomial_schedule\",\n \"power_iteration\",\n \"radam\",\n \"rmsprop\",\n \"safe_int32_increment\",\n \"safe_norm\",\n \"safe_root_mean_squares\",\n \"ScalarOrSchedule\",\n \"scale_by_adam\",\n \"scale_by_adamax\",\n \"scale_by_amsgrad\",\n \"scale_by_belief\",\n \"scale_by_lion\",\n \"scale_by_factored_rms\",\n \"scale_by_novograd\",\n \"scale_by_param_block_norm\",\n \"scale_by_param_block_rms\",\n \"scale_by_radam\",\n \"scale_by_rms\",\n \"scale_by_rss\",\n \"scale_by_schedule\",\n \"scale_by_sm3\",\n \"scale_by_stddev\",\n \"scale_by_trust_ratio\",\n \"scale_by_yogi\",\n \"scale_gradient\",\n \"scale\",\n \"ScaleByAdamState\",\n \"ScaleByAmsgradState\",\n \"ScaleByBeliefState\",\n \"ScaleByLionState\",\n \"ScaleByNovogradState\",\n \"ScaleByRmsState\",\n \"ScaleByRssState\",\n \"ScaleByRStdDevState\",\n \"ScaleByScheduleState\",\n \"ScaleBySM3State\",\n \"ScaleByTrustRatioState\",\n \"ScaleState\",\n \"Schedule\",\n \"set_to_zero\",\n \"sgd\",\n \"sgdr_schedule\",\n \"ShouldSkipUpdateFunction\",\n \"sigmoid_binary_cross_entropy\",\n \"skip_large_updates\",\n \"skip_not_finite\",\n \"sm3\",\n \"smooth_labels\",\n \"softmax_cross_entropy\",\n \"softmax_cross_entropy_with_integer_labels\",\n \"stateless\",\n \"stateless_with_tree_map\",\n \"trace\",\n \"TraceState\",\n \"TransformInitFn\",\n \"TransformUpdateFn\",\n \"TransformUpdateExtraArgsFn\",\n \"Updates\",\n \"warmup_cosine_decay_schedule\",\n \"warmup_exponential_decay_schedule\",\n \"yogi\",\n \"zero_nans\",\n \"ZeroNansState\",\n)\n\n# _________________________________________\n# / Please don't use symbols in `_src` they \\\n# \\ are not part of the Optax public API. /\n# -----------------------------------------\n# \\ ^__^\n# \\ (oo)\\_______\n# (__)\\ )\\/\\\n# ||----w |\n# || ||\n#\n", "optax/projections/__init__.py": null, "optax/projections/_projections.py": null}
|
{"optax/projections/_projections.py": [{"type": "function", "name": "projection_non_negative", "lines": [24, 39], "signature": "def projection_non_negative(pytree: Any) -> Any:", "doc": "Projection onto the non-negative orthant.\n\n.. math::\n\n \\underset{p}{\\text{argmin}} ~ ||x - p||_2^2 \\quad\n \\textrm{subject to} \\quad p \\ge 0\n\nwhere :math:`x` is the input pytree.\n\nArgs:\n pytree: pytree to project.\nReturns:\n projected pytree, with the same structure as ``pytree``."}]}
| null |
["optax/projections/_projections_test.py::ProjectionsTest::test_projection_non_negative"]
|
[]
|
1e08bccf195ac54e7d9d766eb5e69345bf0e3230
|
{"first_commit_time": 1699961528.0, "pr_title": "Start optax.projections subpackage and add projection_non_negative.", "pr_body": "Start optax.projections subpackage and add projection_non_negative.\n\nIn the future, this subpackage will contain more projections (projection_simplex, projection_l1_ball, etc) as well as non-Euclidean projections.\n\nI used typing.Any as a type annotation for pytrees for the time being, as chex.ArrayTree doesn't seem to accept tuples.\n", "pr_timeline": [], "issues": {}}
|
|
google-deepmind/optax
| 897
|
https://github.com/google-deepmind/optax/pull/897
|
google-deepmind__optax-897
|
[]
|
246f002a64e21bcc8f0b1a3390eb4ee13da83ed5
|
diff --git a/docs/api/losses.rst b/docs/api/losses.rst
index 95cef9f99..41ceab134 100644
--- a/docs/api/losses.rst
+++ b/docs/api/losses.rst
@@ -14,6 +14,7 @@ Losses
kl_divergence
l2_loss
log_cosh
+ ntxent
safe_softmax_cross_entropy
sigmoid_binary_cross_entropy
sigmoid_focal_loss
@@ -61,6 +62,10 @@ Log hyperbolic cosine loss
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: log_cosh
+Normalized temperature scaled cross-entropy (NT-Xent) loss
+~~~~~~~~~~~~~~~~
+.. autofunction:: ntxent
+
Sigmoid binary cross-entropy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: sigmoid_binary_cross_entropy
diff --git a/optax/__init__.py b/optax/__init__.py
index 5edae96d1..4225d7ef2 100644
--- a/optax/__init__.py
+++ b/optax/__init__.py
@@ -203,6 +203,7 @@
kl_divergence = losses.kl_divergence
l2_loss = losses.l2_loss
log_cosh = losses.log_cosh
+ntxent = losses.ntxent
sigmoid_binary_cross_entropy = losses.sigmoid_binary_cross_entropy
smooth_labels = losses.smooth_labels
softmax_cross_entropy = losses.softmax_cross_entropy
@@ -306,6 +307,7 @@
"MultiTransformState",
"nadam",
"nadamw",
+ "ntxent",
"noisy_sgd",
"novograd",
"NonNegativeParamsState",
diff --git a/optax/losses/__init__.py b/optax/losses/__init__.py
index 8393dc310..65c171f24 100644
--- a/optax/losses/__init__.py
+++ b/optax/losses/__init__.py
@@ -35,3 +35,4 @@
from optax.losses._regression import log_cosh
from optax.losses._regression import squared_error
from optax.losses._smoothing import smooth_labels
+from optax.losses._self_supervised import ntxent
diff --git a/optax/losses/_self_supervised.py b/optax/losses/_self_supervised.py
new file mode 100644
index 000000000..8c756a591
--- /dev/null
+++ b/optax/losses/_self_supervised.py
@@ -0,0 +1,86 @@
+# Copyright 2024 DeepMind Technologies Limited. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Self supervised losses."""
+
+import chex
+from jax import lax
+import jax.numpy as jnp
+from optax.losses._regression import cosine_similarity
+
+
+def ntxent(
+ embeddings: chex.Array,
+ labels: chex.Array,
+ temperature: chex.Numeric = 0.07
+) -> chex.Numeric:
+ """Normalized temperature scaled cross entropy loss (NT-Xent).
+
+ References:
+ T. Chen et al `A Simple Framework for Contrastive Learning of Visual
+ Representations <http://arxiv.org/abs/2002.05709>`_, 2020
+ kevinmusgrave.github.io/pytorch-metric-learning/losses/#ntxentloss
+
+ Args:
+ emeddings: batch of embeddings, with shape [batch, feature_length]
+ labels: labels for groups that are positive pairs. e.g. if you have
+ a batch of 4 embeddings and the first two and last two were positive
+ pairs your `labels` should look like [0, 0, 1, 1]. labels SHOULD NOT
+ be all the same (e.g. [0, 0, 0, 0]) you will get a NaN result.
+ Shape [batch]
+ temperature: temperature scaling parameter.
+
+ Returns:
+ A scalar loss value of NT-Xent values averaged over all positive
+ pairs
+
+ .. versionadded:: 0.2.3
+ """
+ chex.assert_type([embeddings], float)
+ if labels.shape[0] != embeddings.shape[0]:
+ raise ValueError(
+ 'label dimension should match batch dimension in embeddings'
+ )
+
+ # cosine similarity matrix
+ xcs = cosine_similarity(
+ embeddings[None, :, :], embeddings[:, None, :]
+ ) / temperature
+
+ # finding positive and negative pairs
+ labels1 = jnp.expand_dims(labels, axis=1)
+ labels2 = jnp.expand_dims(labels, axis=0)
+ matches = labels1 == labels2
+ diffs = matches ^ 1
+ matches = jnp.bool_(matches - jnp.eye(matches.shape[0])) # no self cos
+
+ # replace 0 with -inf
+ xcs_diffs = jnp.where(diffs == 1, xcs, -jnp.inf)
+ xcs_matches = jnp.where(matches == 1, xcs, -jnp.inf)
+
+ # shifting for numeric stability
+ comb = jnp.concatenate((xcs_diffs, xcs_matches), axis=-1)
+ xcs_max = jnp.max(comb, axis=1, keepdims=True)
+ xcs_shift_diffs = xcs_diffs - lax.stop_gradient(xcs_max)
+ xcs_shift_matches = xcs_matches - lax.stop_gradient(xcs_max)
+
+ # calc loss
+ numer = xcs_shift_matches
+ numer_exp = jnp.exp(xcs_shift_matches)
+ denom = jnp.sum(jnp.exp(xcs_shift_diffs), axis=1, keepdims=True)
+ denom += numer_exp
+ log_softm = numer - jnp.log(denom)
+ loss = -jnp.where(matches == 1, log_softm, 0.0).sum() / matches.sum()
+
+ return loss
|
diff --git a/optax/losses/_classification_test.py b/optax/losses/_classification_test.py
index b87d520ea..b75f3a088 100644
--- a/optax/losses/_classification_test.py
+++ b/optax/losses/_classification_test.py
@@ -862,6 +862,5 @@ def test_ignore_negative(self):
assert all(ce_loss[self.ts == 0] > 0)
assert all(focal_loss[self.ts == 0] == 0)
-
if __name__ == '__main__':
absltest.main()
diff --git a/optax/losses/_self_supervised_test.py b/optax/losses/_self_supervised_test.py
new file mode 100644
index 000000000..43823f6a3
--- /dev/null
+++ b/optax/losses/_self_supervised_test.py
@@ -0,0 +1,51 @@
+# Copyright 2024 DeepMind Technologies Limited. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for optax.losses._self_supervised."""
+
+from absl.testing import parameterized
+
+import chex
+import jax.numpy as jnp
+import numpy as np
+
+from optax.losses import _self_supervised
+
+
+class NtxentTest(parameterized.TestCase):
+
+ def setUp(self):
+ super().setUp()
+ self.ys = jnp.array([
+ [-1.9540, 1.0780],
+ [ 0.2380, -0.5703],
+ [ 1.8745, -0.0195],
+ [-0.6719, -1.9210],
+ ])
+ self.ts_1 = jnp.array([0,0,1,1])
+ self.ts_2 = jnp.array([0,0,0,1])
+ # Calculated expected output
+ self.exp_1 = jnp.array(14.01032)
+ self.exp_2 = jnp.array(8.968544)
+
+ @chex.all_variants
+ def test_batched(self):
+ """Tests for a full batch."""
+ np.testing.assert_allclose(
+ self.variant(_self_supervised.ntxent)(self.ys, self.ts_1),
+ self.exp_1, atol=1e-4)
+
+ np.testing.assert_allclose(
+ self.variant(_self_supervised.ntxent)(self.ys, self.ts_2),
+ self.exp_2, atol=1e-4)
| 2024-04-01T20:30:36
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"docs/api/losses.rst": "Losses\n======\n\n.. currentmodule:: optax\n\n.. autosummary::\n convex_kl_divergence\n cosine_distance\n cosine_similarity\n ctc_loss\n ctc_loss_with_forward_probs\n hinge_loss\n huber_loss\n kl_divergence\n l2_loss\n log_cosh\n safe_softmax_cross_entropy\n sigmoid_binary_cross_entropy\n sigmoid_focal_loss\n smooth_labels\n softmax_cross_entropy\n softmax_cross_entropy_with_integer_labels\n squared_error\n\n\nConvex Kullback Leibler divergence\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n.. autofunction:: convex_kl_divergence\n\nCosine distance\n~~~~~~~~~~~~~~~\n.. autofunction:: cosine_distance\n\nCosine similarity\n~~~~~~~~~~~~~~~~~\n.. autofunction:: cosine_similarity\n\nConnectionist temporal classification loss\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n.. autofunction:: ctc_loss\n.. autofunction:: ctc_loss_with_forward_probs\n\nHinge loss\n~~~~~~~~~~\n.. autofunction:: hinge_loss\n\nHuber loss\n~~~~~~~~~~\n.. autofunction:: huber_loss\n\nKullback-Leibler divergence\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n.. autofunction:: kl_divergence\n\nL2 Squared loss\n~~~~~~~~~~~~\n.. autofunction:: squared_error\n.. autofunction:: l2_loss\n\nLog hyperbolic cosine loss\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n.. autofunction:: log_cosh\n\nSigmoid binary cross-entropy\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n.. autofunction:: sigmoid_binary_cross_entropy\n\nSigmoid focal loss\n~~~~~~~~~~~~~~~~~~\n.. autofunction:: sigmoid_focal_loss\n\nSmoothing labels\n~~~~~~~~~~~~~~~~\n.. autofunction:: smooth_labels\n\nSoft-max cross-entropy\n~~~~~~~~~~~~~~~~~~~~~~\n.. autofunction:: safe_softmax_cross_entropy\n.. autofunction:: softmax_cross_entropy\n.. autofunction:: softmax_cross_entropy_with_integer_labels\n\n\n\n", "optax/__init__.py": "# Copyright 2019 DeepMind Technologies Limited. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Optax: composable gradient processing and optimization, in JAX.\"\"\"\n\nfrom optax import contrib\nfrom optax import losses\nfrom optax import monte_carlo\nfrom optax import projections\nfrom optax import schedules\nfrom optax import second_order\nfrom optax import tree_utils\nfrom optax._src.alias import adabelief\nfrom optax._src.alias import adadelta\nfrom optax._src.alias import adafactor\nfrom optax._src.alias import adagrad\nfrom optax._src.alias import adam\nfrom optax._src.alias import adamax\nfrom optax._src.alias import adamaxw\nfrom optax._src.alias import adamw\nfrom optax._src.alias import amsgrad\nfrom optax._src.alias import fromage\nfrom optax._src.alias import lamb\nfrom optax._src.alias import lars\nfrom optax._src.alias import lion\nfrom optax._src.alias import MaskOrFn\nfrom optax._src.alias import nadam\nfrom optax._src.alias import nadamw\nfrom optax._src.alias import noisy_sgd\nfrom optax._src.alias import novograd\nfrom optax._src.alias import optimistic_gradient_descent\nfrom optax._src.alias import polyak_sgd\nfrom optax._src.alias import radam\nfrom optax._src.alias import rmsprop\nfrom optax._src.alias import rprop\nfrom optax._src.alias import sgd\nfrom optax._src.alias import sm3\nfrom optax._src.alias import yogi\nfrom optax._src.base import EmptyState\nfrom optax._src.base import GradientTransformation\nfrom optax._src.base import GradientTransformationExtraArgs\nfrom optax._src.base import identity\nfrom optax._src.base import OptState\nfrom optax._src.base import Params\nfrom optax._src.base import ScalarOrSchedule\nfrom optax._src.base import Schedule\nfrom optax._src.base import set_to_zero\nfrom optax._src.base import stateless\nfrom optax._src.base import stateless_with_tree_map\nfrom optax._src.base import TransformInitFn\nfrom optax._src.base import TransformUpdateExtraArgsFn\nfrom optax._src.base import TransformUpdateFn\nfrom optax._src.base import Updates\nfrom optax._src.base import with_extra_args_support\nfrom optax._src.clipping import adaptive_grad_clip\nfrom optax._src.clipping import AdaptiveGradClipState\nfrom optax._src.clipping import clip\nfrom optax._src.clipping import clip_by_block_rms\nfrom optax._src.clipping import clip_by_global_norm\nfrom optax._src.clipping import ClipByGlobalNormState\nfrom optax._src.clipping import ClipState\nfrom optax._src.clipping import per_example_global_norm_clip\nfrom optax._src.clipping import per_example_layer_norm_clip\nfrom optax._src.combine import chain\nfrom optax._src.combine import multi_transform\nfrom optax._src.combine import MultiTransformState\nfrom optax._src.combine import named_chain\nfrom optax._src.constrain import keep_params_nonnegative\nfrom optax._src.constrain import NonNegativeParamsState\nfrom optax._src.constrain import zero_nans\nfrom optax._src.constrain import ZeroNansState\nfrom optax._src.factorized import 
FactoredState\nfrom optax._src.factorized import scale_by_factored_rms\nfrom optax._src.linear_algebra import global_norm\nfrom optax._src.linear_algebra import matrix_inverse_pth_root\nfrom optax._src.linear_algebra import power_iteration\nfrom optax._src.linesearch import scale_by_backtracking_linesearch\nfrom optax._src.linesearch import ScaleByBacktrackingLinesearchState\nfrom optax._src.lookahead import lookahead\nfrom optax._src.lookahead import LookaheadParams\nfrom optax._src.lookahead import LookaheadState\nfrom optax._src.numerics import safe_int32_increment\nfrom optax._src.numerics import safe_norm\nfrom optax._src.numerics import safe_root_mean_squares\nfrom optax._src.transform import add_decayed_weights\nfrom optax._src.transform import add_noise\nfrom optax._src.transform import AddDecayedWeightsState\nfrom optax._src.transform import AddNoiseState\nfrom optax._src.transform import apply_every\nfrom optax._src.transform import ApplyEvery\nfrom optax._src.transform import centralize\nfrom optax._src.transform import ema\nfrom optax._src.transform import EmaState\nfrom optax._src.transform import scale\nfrom optax._src.transform import scale_by_adadelta\nfrom optax._src.transform import scale_by_adam\nfrom optax._src.transform import scale_by_adamax\nfrom optax._src.transform import scale_by_amsgrad\nfrom optax._src.transform import scale_by_belief\nfrom optax._src.transform import scale_by_distance_over_gradients\nfrom optax._src.transform import scale_by_learning_rate\nfrom optax._src.transform import scale_by_lion\nfrom optax._src.transform import scale_by_novograd\nfrom optax._src.transform import scale_by_optimistic_gradient\nfrom optax._src.transform import scale_by_param_block_norm\nfrom optax._src.transform import scale_by_param_block_rms\nfrom optax._src.transform import scale_by_polyak\nfrom optax._src.transform import scale_by_radam\nfrom optax._src.transform import scale_by_rms\nfrom optax._src.transform import scale_by_rprop\nfrom optax._src.transform import scale_by_rss\nfrom optax._src.transform import scale_by_schedule\nfrom optax._src.transform import scale_by_sm3\nfrom optax._src.transform import scale_by_stddev\nfrom optax._src.transform import scale_by_trust_ratio\nfrom optax._src.transform import scale_by_yogi\nfrom optax._src.transform import ScaleByAdaDeltaState\nfrom optax._src.transform import ScaleByAdamState\nfrom optax._src.transform import ScaleByAmsgradState\nfrom optax._src.transform import ScaleByBeliefState\nfrom optax._src.transform import ScaleByLionState\nfrom optax._src.transform import ScaleByNovogradState\nfrom optax._src.transform import ScaleByRmsState\nfrom optax._src.transform import ScaleByRpropState\nfrom optax._src.transform import ScaleByRssState\nfrom optax._src.transform import ScaleByRStdDevState\nfrom optax._src.transform import ScaleByScheduleState\nfrom optax._src.transform import ScaleBySM3State\nfrom optax._src.transform import ScaleByTrustRatioState\nfrom optax._src.transform import ScaleState\nfrom optax._src.transform import trace\nfrom optax._src.transform import TraceState\nfrom optax._src.update import apply_updates\nfrom optax._src.update import incremental_update\nfrom optax._src.update import periodic_update\nfrom optax._src.utils import multi_normal\nfrom optax._src.utils import scale_gradient\nfrom optax._src.utils import value_and_grad_from_state\nfrom optax._src.wrappers import apply_if_finite\nfrom optax._src.wrappers import ApplyIfFiniteState\nfrom optax._src.wrappers import conditionally_mask\nfrom 
optax._src.wrappers import conditionally_transform\nfrom optax._src.wrappers import ConditionallyMaskState\nfrom optax._src.wrappers import ConditionallyTransformState\nfrom optax._src.wrappers import flatten\nfrom optax._src.wrappers import masked\nfrom optax._src.wrappers import MaskedNode\nfrom optax._src.wrappers import MaskedState\nfrom optax._src.wrappers import maybe_update\nfrom optax._src.wrappers import MaybeUpdateState\nfrom optax._src.wrappers import MultiSteps\nfrom optax._src.wrappers import MultiStepsState\nfrom optax._src.wrappers import ShouldSkipUpdateFunction\nfrom optax._src.wrappers import skip_large_updates\nfrom optax._src.wrappers import skip_not_finite\n\n# TODO(mtthss): remove tree_utils aliases after updates.\ntree_map_params = tree_utils.tree_map_params\nbias_correction = tree_utils.tree_bias_correction\nupdate_infinity_moment = tree_utils.tree_update_infinity_moment\nupdate_moment = tree_utils.tree_update_moment\nupdate_moment_per_elem_norm = tree_utils.tree_update_moment_per_elem_norm\n\n# TODO(mtthss): remove schedules alises from flat namespaces after user updates.\nconstant_schedule = schedules.constant_schedule\ncosine_decay_schedule = schedules.cosine_decay_schedule\ncosine_onecycle_schedule = schedules.cosine_onecycle_schedule\nexponential_decay = schedules.exponential_decay\ninject_hyperparams = schedules.inject_hyperparams\nInjectHyperparamsState = schedules.InjectHyperparamsState\njoin_schedules = schedules.join_schedules\nlinear_onecycle_schedule = schedules.linear_onecycle_schedule\nlinear_schedule = schedules.linear_schedule\npiecewise_constant_schedule = schedules.piecewise_constant_schedule\npiecewise_interpolate_schedule = schedules.piecewise_interpolate_schedule\npolynomial_schedule = schedules.polynomial_schedule\nsgdr_schedule = schedules.sgdr_schedule\nwarmup_cosine_decay_schedule = schedules.warmup_cosine_decay_schedule\nwarmup_exponential_decay_schedule = schedules.warmup_exponential_decay_schedule\ninject_stateful_hyperparams = schedules.inject_stateful_hyperparams\nInjectStatefulHyperparamsState = schedules.InjectStatefulHyperparamsState\nWrappedSchedule = schedules.WrappedSchedule\n\n# TODO(mtthss): remove loss aliases from flat namespace once users have updated.\nconvex_kl_divergence = losses.convex_kl_divergence\ncosine_distance = losses.cosine_distance\ncosine_similarity = losses.cosine_similarity\nctc_loss = losses.ctc_loss\nctc_loss_with_forward_probs = losses.ctc_loss_with_forward_probs\nhinge_loss = losses.hinge_loss\nhuber_loss = losses.huber_loss\nkl_divergence = losses.kl_divergence\nl2_loss = losses.l2_loss\nlog_cosh = losses.log_cosh\nsigmoid_binary_cross_entropy = losses.sigmoid_binary_cross_entropy\nsmooth_labels = losses.smooth_labels\nsoftmax_cross_entropy = losses.softmax_cross_entropy\nsoftmax_cross_entropy_with_integer_labels = (\n losses.softmax_cross_entropy_with_integer_labels\n)\nsquared_error = losses.squared_error\nsigmoid_focal_loss = losses.sigmoid_focal_loss\n\n# TODO(mtthss): remove contrib aliases from flat namespace once users updated.\ndifferentially_private_aggregate = contrib.differentially_private_aggregate\nDifferentiallyPrivateAggregateState = (\n contrib.DifferentiallyPrivateAggregateState\n)\ndpsgd = contrib.dpsgd\n\n__version__ = \"0.2.3.dev\"\n\n__all__ = (\n \"adabelief\",\n \"adadelta\",\n \"adafactor\",\n \"adagrad\",\n \"adam\",\n \"adamax\",\n \"adamaxw\",\n \"adamw\",\n \"adaptive_grad_clip\",\n \"AdaptiveGradClipState\",\n \"add_decayed_weights\",\n \"add_noise\",\n 
\"AddDecayedWeightsState\",\n \"AddNoiseState\",\n \"amsgrad\",\n \"apply_every\",\n \"apply_if_finite\",\n \"apply_updates\",\n \"ApplyEvery\",\n \"ApplyIfFiniteState\",\n \"centralize\",\n \"chain\",\n \"clip_by_block_rms\",\n \"clip_by_global_norm\",\n \"clip\",\n \"ClipByGlobalNormState\",\n \"ClipState\",\n \"conditionally_mask\",\n \"ConditionallyMaskState\",\n \"conditionally_transform\",\n \"ConditionallyTransformState\",\n \"constant_schedule\",\n \"ctc_loss\",\n \"ctc_loss_with_forward_probs\",\n \"convex_kl_divergence\",\n \"cosine_decay_schedule\",\n \"cosine_distance\",\n \"cosine_onecycle_schedule\",\n \"cosine_similarity\",\n \"differentially_private_aggregate\",\n \"DifferentiallyPrivateAggregateState\",\n \"dpsgd\",\n \"ema\",\n \"EmaState\",\n \"EmptyState\",\n \"exponential_decay\",\n \"FactoredState\",\n \"flatten\",\n \"fromage\",\n \"global_norm\",\n \"GradientTransformation\",\n \"GradientTransformationExtraArgs\",\n \"hinge_loss\",\n \"huber_loss\",\n \"identity\",\n \"incremental_update\",\n \"inject_hyperparams\",\n \"InjectHyperparamsState\",\n \"join_schedules\",\n \"keep_params_nonnegative\",\n \"kl_divergence\",\n \"l2_loss\",\n \"lamb\",\n \"lars\",\n \"lion\",\n \"linear_onecycle_schedule\",\n \"linear_schedule\",\n \"log_cosh\",\n \"lookahead\",\n \"LookaheadParams\",\n \"LookaheadState\",\n \"masked\",\n \"MaskOrFn\",\n \"MaskedState\",\n \"matrix_inverse_pth_root\",\n \"maybe_update\",\n \"MaybeUpdateState\",\n \"multi_normal\",\n \"multi_transform\",\n \"MultiSteps\",\n \"MultiStepsState\",\n \"MultiTransformState\",\n \"nadam\",\n \"nadamw\",\n \"noisy_sgd\",\n \"novograd\",\n \"NonNegativeParamsState\",\n \"OptState\",\n \"Params\",\n \"periodic_update\",\n \"per_example_global_norm_clip\",\n \"per_example_layer_norm_clip\",\n \"piecewise_constant_schedule\",\n \"piecewise_interpolate_schedule\",\n \"polynomial_schedule\",\n \"power_iteration\",\n \"polyak_sgd\",\n \"radam\",\n \"rmsprop\",\n \"rprop\",\n \"safe_int32_increment\",\n \"safe_norm\",\n \"safe_root_mean_squares\",\n \"ScalarOrSchedule\",\n \"scale_by_adadelta\",\n \"scale_by_adam\",\n \"scale_by_adamax\",\n \"scale_by_amsgrad\",\n \"scale_by_backtracking_linesearch\",\n \"scale_by_belief\",\n \"scale_by_lion\",\n \"scale_by_factored_rms\",\n \"scale_by_novograd\",\n \"scale_by_param_block_norm\",\n \"scale_by_param_block_rms\",\n \"scale_by_polyak\",\n \"scale_by_radam\",\n \"scale_by_rms\",\n \"scale_by_rprop\",\n \"scale_by_rss\",\n \"scale_by_schedule\",\n \"scale_by_sm3\",\n \"scale_by_stddev\",\n \"scale_by_trust_ratio\",\n \"scale_by_yogi\",\n \"scale_gradient\",\n \"scale\",\n \"ScaleByAdaDeltaState\",\n \"ScaleByAdamState\",\n \"ScaleByAmsgradState\",\n \"ScaleByBacktrackingLinesearchState\",\n \"ScaleByBeliefState\",\n \"ScaleByLionState\",\n \"ScaleByNovogradState\",\n \"ScaleByRmsState\",\n \"ScaleByRpropState\",\n \"ScaleByRssState\",\n \"ScaleByRStdDevState\",\n \"ScaleByScheduleState\",\n \"ScaleBySM3State\",\n \"ScaleByTrustRatioState\",\n \"ScaleState\",\n \"Schedule\",\n \"set_to_zero\",\n \"sgd\",\n \"sgdr_schedule\",\n \"ShouldSkipUpdateFunction\",\n \"sigmoid_binary_cross_entropy\",\n \"skip_large_updates\",\n \"skip_not_finite\",\n \"sm3\",\n \"smooth_labels\",\n \"softmax_cross_entropy\",\n \"softmax_cross_entropy_with_integer_labels\",\n \"stateless\",\n \"stateless_with_tree_map\",\n \"trace\",\n \"TraceState\",\n \"TransformInitFn\",\n \"TransformUpdateFn\",\n \"TransformUpdateExtraArgsFn\",\n \"Updates\",\n \"value_and_grad_from_state\",\n 
\"warmup_cosine_decay_schedule\",\n \"warmup_exponential_decay_schedule\",\n \"yogi\",\n \"zero_nans\",\n \"ZeroNansState\",\n)\n\n# _________________________________________\n# / Please don't use symbols in `_src` they \\\n# \\ are not part of the Optax public API. /\n# -----------------------------------------\n# \\ ^__^\n# \\ (oo)\\_______\n# (__)\\ )\\/\\\n# ||----w |\n# || ||\n#\n", "optax/losses/__init__.py": "# Copyright 2019 DeepMind Technologies Limited. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"The losses sub-package.\"\"\"\n\nfrom optax.losses._classification import convex_kl_divergence\nfrom optax.losses._classification import ctc_loss\nfrom optax.losses._classification import ctc_loss_with_forward_probs\nfrom optax.losses._classification import hinge_loss\nfrom optax.losses._classification import kl_divergence\nfrom optax.losses._classification import kl_divergence_with_log_targets\nfrom optax.losses._classification import poly_loss_cross_entropy\nfrom optax.losses._classification import safe_softmax_cross_entropy\nfrom optax.losses._classification import sigmoid_binary_cross_entropy\nfrom optax.losses._classification import sigmoid_focal_loss\nfrom optax.losses._classification import softmax_cross_entropy\nfrom optax.losses._classification import softmax_cross_entropy_with_integer_labels\nfrom optax.losses._fenchel_young import make_fenchel_young_loss\nfrom optax.losses._ranking import ranking_softmax_loss\nfrom optax.losses._regression import cosine_distance\nfrom optax.losses._regression import cosine_similarity\nfrom optax.losses._regression import huber_loss\nfrom optax.losses._regression import l2_loss\nfrom optax.losses._regression import log_cosh\nfrom optax.losses._regression import squared_error\nfrom optax.losses._smoothing import smooth_labels\n", "optax/losses/_self_supervised.py": null}
|
diff --git a/docs/api/losses.rst b/docs/api/losses.rst
index 95cef9f99..41ceab134 100644
--- a/docs/api/losses.rst
+++ b/docs/api/losses.rst
@@ -14,6 +14,7 @@ Losses
kl_divergence
l2_loss
log_cosh
+ ntxent
safe_softmax_cross_entropy
sigmoid_binary_cross_entropy
sigmoid_focal_loss
@@ -61,6 +62,10 @@ Log hyperbolic cosine loss
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: log_cosh
+Normalized temperature scaled cross-entropy (NT-Xent) loss
+~~~~~~~~~~~~~~~~
+.. autofunction:: ntxent
+
Sigmoid binary cross-entropy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: sigmoid_binary_cross_entropy
|
{"optax/losses/_self_supervised.py": [{"type": "function", "name": "ntxent", "lines": [23, 86], "signature": "def ntxent( embeddings: chex.Array, labels: chex.Array, temperature: chex.Numeric = 0.07 ) -> chex.Numeric:", "doc": "Normalized temperature scaled cross entropy loss (NT-Xent).\n\nReferences:\n T. Chen et al `A Simple Framework for Contrastive Learning of Visual \n Representations <http://arxiv.org/abs/2002.05709>`_, 2020\n kevinmusgrave.github.io/pytorch-metric-learning/losses/#ntxentloss\n\nArgs:\n emeddings: batch of embeddings, with shape [batch, feature_length]\n labels: labels for groups that are positive pairs. e.g. if you have\n a batch of 4 embeddings and the first two and last two were positive\n pairs your `labels` should look like [0, 0, 1, 1]. labels SHOULD NOT\n be all the same (e.g. [0, 0, 0, 0]) you will get a NaN result. \n Shape [batch]\n temperature: temperature scaling parameter.\n\nReturns:\n A scalar loss value of NT-Xent values averaged over all positive\n pairs\n\n.. versionadded:: 0.2.3"}]}
| null |
["optax/losses/_classification_test.py::SoftmaxCrossEntropyTest::test_batched__with_device", "optax/losses/_classification_test.py::SoftmaxCrossEntropyTest::test_batched__with_jit", "optax/losses/_classification_test.py::SoftmaxCrossEntropyTest::test_batched__without_device", "optax/losses/_classification_test.py::SoftmaxCrossEntropyTest::test_batched__without_jit", "optax/losses/_classification_test.py::SoftmaxCrossEntropyTest::test_gradient", "optax/losses/_classification_test.py::SoftmaxCrossEntropyTest::test_scalar__with_device", "optax/losses/_classification_test.py::SoftmaxCrossEntropyTest::test_scalar__with_jit", "optax/losses/_classification_test.py::SoftmaxCrossEntropyTest::test_scalar__without_device", "optax/losses/_classification_test.py::SoftmaxCrossEntropyTest::test_scalar__without_jit", "optax/losses/_classification_test.py::SafeSoftmaxCrossEntropyTest::test_against_plain_implementation", "optax/losses/_classification_test.py::SafeSoftmaxCrossEntropyTest::test_batched__with_device", "optax/losses/_classification_test.py::SafeSoftmaxCrossEntropyTest::test_batched__with_jit", "optax/losses/_classification_test.py::SafeSoftmaxCrossEntropyTest::test_batched__without_device", "optax/losses/_classification_test.py::SafeSoftmaxCrossEntropyTest::test_batched__without_jit", "optax/losses/_classification_test.py::SafeSoftmaxCrossEntropyTest::test_gradient", "optax/losses/_classification_test.py::SafeSoftmaxCrossEntropyTest::test_scalar__with_device", "optax/losses/_classification_test.py::SafeSoftmaxCrossEntropyTest::test_scalar__with_jit", "optax/losses/_classification_test.py::SafeSoftmaxCrossEntropyTest::test_scalar__without_device", "optax/losses/_classification_test.py::SafeSoftmaxCrossEntropyTest::test_scalar__without_jit", "optax/losses/_classification_test.py::SoftmaxCrossEntropyWithIntegerLabelsTest::test_consistent_with_softmax_cross_entropy_batched__with_device", "optax/losses/_classification_test.py::SoftmaxCrossEntropyWithIntegerLabelsTest::test_consistent_with_softmax_cross_entropy_batched__with_jit", "optax/losses/_classification_test.py::SoftmaxCrossEntropyWithIntegerLabelsTest::test_consistent_with_softmax_cross_entropy_batched__without_device", "optax/losses/_classification_test.py::SoftmaxCrossEntropyWithIntegerLabelsTest::test_consistent_with_softmax_cross_entropy_batched__without_jit", "optax/losses/_classification_test.py::SoftmaxCrossEntropyWithIntegerLabelsTest::test_consistent_with_softmax_cross_entropy_scalar__with_device", "optax/losses/_classification_test.py::SoftmaxCrossEntropyWithIntegerLabelsTest::test_consistent_with_softmax_cross_entropy_scalar__with_jit", "optax/losses/_classification_test.py::SoftmaxCrossEntropyWithIntegerLabelsTest::test_consistent_with_softmax_cross_entropy_scalar__without_device", "optax/losses/_classification_test.py::SoftmaxCrossEntropyWithIntegerLabelsTest::test_consistent_with_softmax_cross_entropy_scalar__without_jit", "optax/losses/_classification_test.py::SoftmaxCrossEntropyWithIntegerLabelsTest::test_gradient", "optax/losses/_classification_test.py::SigmoidCrossEntropyTest::testSigmoidCrossEntropy0", "optax/losses/_classification_test.py::SigmoidCrossEntropyTest::testSigmoidCrossEntropy1", "optax/losses/_classification_test.py::SigmoidCrossEntropyTest::testSigmoidCrossEntropy2", "optax/losses/_classification_test.py::SigmoidCrossEntropyTest::testSigmoidCrossEntropy3", "optax/losses/_classification_test.py::SigmoidCrossEntropyTest::testSigmoidCrossEntropy4", 
"optax/losses/_classification_test.py::SigmoidCrossEntropyTest::testSigmoidCrossEntropy5", "optax/losses/_classification_test.py::SigmoidCrossEntropyTest::testSigmoidCrossEntropy6", "optax/losses/_classification_test.py::SigmoidCrossEntropyTest::testSigmoidCrossEntropy7", "optax/losses/_classification_test.py::SigmoidCrossEntropyTest::testSigmoidCrossEntropy8", "optax/losses/_classification_test.py::SigmoidCrossEntropyTest::testSigmoidCrossEntropy9", "optax/losses/_classification_test.py::PolyLossTest::test_batched_(eps=-0.5,", "optax/losses/_classification_test.py::PolyLossTest::test_batched_(eps=0,", "optax/losses/_classification_test.py::PolyLossTest::test_batched_(eps=1,", "optax/losses/_classification_test.py::PolyLossTest::test_batched_(eps=1.15,", "optax/losses/_classification_test.py::PolyLossTest::test_batched_(eps=1.214,", "optax/losses/_classification_test.py::PolyLossTest::test_batched_(eps=2,", "optax/losses/_classification_test.py::PolyLossTest::test_batched_(eps=5.45,", "optax/losses/_classification_test.py::PolyLossTest::test_equals_to_cross_entropy_when_eps0_(logits=array([0.314]),", "optax/losses/_classification_test.py::PolyLossTest::test_equals_to_cross_entropy_when_eps0_(logits=array([1.89,", "optax/losses/_classification_test.py::PolyLossTest::test_equals_to_cross_entropy_when_eps0_(logits=array([[4.", "optax/losses/_classification_test.py::PolyLossTest::test_equals_to_cross_entropy_when_eps0_(logits=array([[4.,", "optax/losses/_classification_test.py::PolyLossTest::test_scalar_(eps=-0.5,", "optax/losses/_classification_test.py::PolyLossTest::test_scalar_(eps=-1,", "optax/losses/_classification_test.py::PolyLossTest::test_scalar_(eps=0,", "optax/losses/_classification_test.py::PolyLossTest::test_scalar_(eps=1,", "optax/losses/_classification_test.py::PolyLossTest::test_scalar_(eps=1.15,", "optax/losses/_classification_test.py::PolyLossTest::test_scalar_(eps=1.214,", "optax/losses/_classification_test.py::PolyLossTest::test_scalar_(eps=2,", "optax/losses/_classification_test.py::PolyLossTest::test_scalar_(eps=5.45,", "optax/losses/_classification_test.py::HingeTest::test_batched_binary", "optax/losses/_classification_test.py::HingeTest::test_batched_multi_class", "optax/losses/_classification_test.py::HingeTest::test_binary", "optax/losses/_classification_test.py::HingeTest::test_multi_class", "optax/losses/_classification_test.py::SparsemaxTest::test_batched_binary", "optax/losses/_classification_test.py::SparsemaxTest::test_binary", "optax/losses/_classification_test.py::ConvexKLDivergenceTest::test_batched__with_device", "optax/losses/_classification_test.py::ConvexKLDivergenceTest::test_batched__with_jit", "optax/losses/_classification_test.py::ConvexKLDivergenceTest::test_batched__without_device", "optax/losses/_classification_test.py::ConvexKLDivergenceTest::test_batched__without_jit", "optax/losses/_classification_test.py::ConvexKLDivergenceTest::test_scalar__with_device", "optax/losses/_classification_test.py::ConvexKLDivergenceTest::test_scalar__with_jit", "optax/losses/_classification_test.py::ConvexKLDivergenceTest::test_scalar__without_device", "optax/losses/_classification_test.py::ConvexKLDivergenceTest::test_scalar__without_jit", "optax/losses/_classification_test.py::PerceptronTest::test_batched_binary", "optax/losses/_classification_test.py::PerceptronTest::test_batched_multi_class", "optax/losses/_classification_test.py::PerceptronTest::test_binary", "optax/losses/_classification_test.py::PerceptronTest::test_multi_class", 
"optax/losses/_classification_test.py::KLDivergenceTest::test_batched__with_device", "optax/losses/_classification_test.py::KLDivergenceTest::test_batched__with_jit", "optax/losses/_classification_test.py::KLDivergenceTest::test_batched__without_device", "optax/losses/_classification_test.py::KLDivergenceTest::test_batched__without_jit", "optax/losses/_classification_test.py::KLDivergenceTest::test_scalar__with_device", "optax/losses/_classification_test.py::KLDivergenceTest::test_scalar__with_jit", "optax/losses/_classification_test.py::KLDivergenceTest::test_scalar__without_device", "optax/losses/_classification_test.py::KLDivergenceTest::test_scalar__without_jit", "optax/losses/_classification_test.py::KLDivergenceWithLogTargetsTest::test_batched__with_device", "optax/losses/_classification_test.py::KLDivergenceWithLogTargetsTest::test_batched__with_jit", "optax/losses/_classification_test.py::KLDivergenceWithLogTargetsTest::test_batched__without_device", "optax/losses/_classification_test.py::KLDivergenceWithLogTargetsTest::test_batched__without_jit", "optax/losses/_classification_test.py::KLDivergenceWithLogTargetsTest::test_scalar__with_device", "optax/losses/_classification_test.py::KLDivergenceWithLogTargetsTest::test_scalar__with_jit", "optax/losses/_classification_test.py::KLDivergenceWithLogTargetsTest::test_scalar__without_device", "optax/losses/_classification_test.py::KLDivergenceWithLogTargetsTest::test_scalar__without_jit", "optax/losses/_classification_test.py::CTCTest::test_repeat_with_one_to_one_alignment__with_device", "optax/losses/_classification_test.py::CTCTest::test_repeat_with_one_to_one_alignment__with_jit", "optax/losses/_classification_test.py::CTCTest::test_repeat_with_one_to_one_alignment__without_device", "optax/losses/_classification_test.py::CTCTest::test_repeat_with_one_to_one_alignment__without_jit", "optax/losses/_classification_test.py::CTCTest::test_with_one_to_one_alignment__with_device", "optax/losses/_classification_test.py::CTCTest::test_with_one_to_one_alignment__with_jit", "optax/losses/_classification_test.py::CTCTest::test_with_one_to_one_alignment__without_device", "optax/losses/_classification_test.py::CTCTest::test_with_one_to_one_alignment__without_jit", "optax/losses/_classification_test.py::CTCTest::test_with_one_to_one_alignment_and_paddings__with_device", "optax/losses/_classification_test.py::CTCTest::test_with_one_to_one_alignment_and_paddings__with_jit", "optax/losses/_classification_test.py::CTCTest::test_with_one_to_one_alignment_and_paddings__without_device", "optax/losses/_classification_test.py::CTCTest::test_with_one_to_one_alignment_and_paddings__without_jit", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_alpha_one__with_device", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_alpha_one__with_jit", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_alpha_one__without_device", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_alpha_one__without_jit", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_focal_equals_ce__with_device", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_focal_equals_ce__with_jit", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_focal_equals_ce__without_device", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_focal_equals_ce__without_jit", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_ignore_negative__with_device", 
"optax/losses/_classification_test.py::SigmoidFocalLossTest::test_ignore_negative__with_jit", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_ignore_negative__without_device", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_ignore_negative__without_jit", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_ignore_positive__with_device", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_ignore_positive__with_jit", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_ignore_positive__without_device", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_ignore_positive__without_jit", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_large_logit_fl_less_than_ce__with_device", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_large_logit_fl_less_than_ce__with_jit", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_large_logit_fl_less_than_ce__without_device", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_large_logit_fl_less_than_ce__without_jit", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_scale__with_device", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_scale__with_jit", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_scale__without_device", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_scale__without_jit", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_small_logit_fl_less_than_ce__with_device", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_small_logit_fl_less_than_ce__with_jit", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_small_logit_fl_less_than_ce__without_device", "optax/losses/_classification_test.py::SigmoidFocalLossTest::test_small_logit_fl_less_than_ce__without_jit", "optax/losses/_self_supervised_test.py::NtxentTest::test_batched__with_device", "optax/losses/_self_supervised_test.py::NtxentTest::test_batched__with_jit", "optax/losses/_self_supervised_test.py::NtxentTest::test_batched__without_device", "optax/losses/_self_supervised_test.py::NtxentTest::test_batched__without_jit"]
|
[]
|
1e08bccf195ac54e7d9d766eb5e69345bf0e3230
|
{"first_commit_time": 1711497974.0, "pr_title": "Added a NTXent loss", "pr_body": "An normalized temperature scaled cross entropy (NTXent) loss for a contrastive learning objective. I am fairly new to submitting pull requests to public repos, so I didn't add a ton of tests for this outside a batched test. Let me know if there is anything else I should add!", "pr_timeline": [{"time": 1712004701.0, "comment": "Oh also, I know NTXent isn't really a classification loss, but I didn't know where else to put it."}, {"time": 1712263161.0, "comment": "Hello @GrantMcConachie!\r\n\r\nYes, it's a bug due to a new release of jax, it's already been pointed out in #904. I've upstreamed the bug internally to the jax team, I hope it can get solved. I don't see any quick hot fix (the bug shall affect many of our modules). I can ping you when it gets solved :)"}, {"time": 1712343570.0, "comment": "The tests have been fixed in #908. You should be able to proceed. Again it would be really nice if you could avoid passing through an exponential without careful tricks like the losumexptrick (see https://en.wikipedia.org/wiki/LogSumExp). Alternatively, you may use directly functions like `jax.nn.log_softmax` that implemented such a trick.\r\nThanks again for the PR!"}, {"time": 1712418167.0, "comment": "I am still working on this! It is trickier to implement than I thought due to the partitioning of positive and negative pairs. For example If there is 4 embeddings with labels like this:\r\n```\r\ne1 -> 0\r\ne2 -> 0\r\ne3 -> 0\r\ne4 -> 1\r\n```\r\n\r\nthen your cosine similarity matrix will be a 4x4 matrix like this:\r\n\r\n```\r\n[ 0 sim01+ sim02+ sim03-]\r\n[sim10+ 0 sim12+ sim13-]\r\n[sim20+ sim21+ 0 sim23-]\r\n[sim30 sim31 sim32 0 ]\r\n```\r\nWhere the pluses indicate positive pairs.\r\n\r\n\r\nSo we want the loss to be `log(exp(sim01+) / (exp(sim01+) + exp(sim03-)) + log(exp(sim02+) / (exp(sim02+) + exp(sim03-)) + log(exp(sim10+) / (exp(sim10+) + exp(sim13-)) + ...`. However a row wise log_softmax on this matrix will return values `log(exp(sim01+) / (exp(sim01+) + exp(sim02+) + exp(sim03-))` and will incorporate `exp(sim02+)` in the denominator when it shouldn't.\r\n\r\nI think there's probably a way around this using the logsumexp or log_softmax, but I am still trying to figure out how to do it."}, {"time": 1712967769.0, "comment": "Hi @vroulet! I believe that I implemented the loss function using the same trick that logsumexp and log_softmax. I did not use these functions explicitly, but I was able to \"normalize\" the cosine similarity values by subtracting the row wise maximum cosine similarity values from each cosine similarity value before exponentiating, summing, then taking the logarithm. I believe this is sufficient to avoid overflow/underflow problems, but please let me know if you see an issue with this!"}, {"time": 1712974370.0, "comment": "Thanks for all the edits! I am happy that this will get added. Please let me know if there's more I can do! "}], "issues": {}}
|
googleapis/python-aiplatform
| 4748
|
https://github.com/googleapis/python-aiplatform/pull/4748
|
googleapis__python-aiplatform-4748
|
[]
|
91d837ece0c02c381cda9fbe9c0c839f9986f182
|
diff --git a/samples/model-builder/vector_search/vector_search_find_neighbors_sample.py b/samples/model-builder/vector_search/vector_search_find_neighbors_sample.py
index f2bcb4ad3a..31417cbe58 100644
--- a/samples/model-builder/vector_search/vector_search_find_neighbors_sample.py
+++ b/samples/model-builder/vector_search/vector_search_find_neighbors_sample.py
@@ -25,7 +25,9 @@ def vector_search_find_neighbors(
deployed_index_id: str,
queries: List[List[float]],
num_neighbors: int,
-) -> None:
+) -> List[
+ List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]
+]:
"""Query the vector search index.
Args:
@@ -38,6 +40,9 @@ def vector_search_find_neighbors(
queries (List[List[float]]): Required. A list of queries. Each query is
a list of floats, representing a single embedding.
num_neighbors (int): Required. The number of neighbors to return.
+
+ Returns:
+ List[List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]] - A list of nearest neighbors for each query.
"""
# Initialize the Vertex AI client
aiplatform.init(project=project, location=location)
@@ -48,12 +53,47 @@ def vector_search_find_neighbors(
)
# Query the index endpoint for the nearest neighbors.
- resp = my_index_endpoint.find_neighbors(
+ return my_index_endpoint.find_neighbors(
deployed_index_id=deployed_index_id,
queries=queries,
num_neighbors=num_neighbors,
)
- print(resp)
+
+
+# [END aiplatform_sdk_vector_search_find_neighbors_sample]
+
+
+# [START aiplatform_sdk_vector_search_find_neighbors_hybrid_sample]
+def vector_search_find_neighbors_hybrid_queries(
+ project: str,
+ location: str,
+ index_endpoint_name: str,
+ deployed_index_id: str,
+ num_neighbors: int,
+) -> List[
+ List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]
+]:
+ """Query the vector search index using example hybrid queries.
+
+ Args:
+ project (str): Required. Project ID
+ location (str): Required. The region name
+ index_endpoint_name (str): Required. Index endpoint to run the query
+ against.
+ deployed_index_id (str): Required. The ID of the DeployedIndex to run
+ the queries against.
+ num_neighbors (int): Required. The number of neighbors to return.
+
+ Returns:
+ List[List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]] - A list of nearest neighbors for each query.
+ """
+ # Initialize the Vertex AI client
+ aiplatform.init(project=project, location=location)
+
+ # Create the index endpoint instance from an existing endpoint.
+ my_index_endpoint = aiplatform.MatchingEngineIndexEndpoint(
+ index_endpoint_name=index_endpoint_name
+ )
# Query hybrid datapoints, sparse-only datapoints, and dense-only datapoints.
hybrid_queries = [
@@ -77,13 +117,79 @@ def vector_search_find_neighbors(
),
]
- hybrid_resp = my_index_endpoint.find_neighbors(
- deployed_index_id=deployed_index_id,
- queries=hybrid_queries,
- num_neighbors=num_neighbors,)
- print(hybrid_resp)
+ return my_index_endpoint.find_neighbors(
+ deployed_index_id=deployed_index_id,
+ queries=hybrid_queries,
+ num_neighbors=num_neighbors,
+ )
-# [END aiplatform_sdk_vector_search_find_neighbors_sample]
+
+# [END aiplatform_sdk_vector_search_find_neighbors_hybrid_sample]
+
+
+# [START aiplatform_sdk_vector_search_find_neighbors_filtering_crowding_sample]
+def vector_search_find_neighbors_filtering_crowding(
+ project: str,
+ location: str,
+ index_endpoint_name: str,
+ deployed_index_id: str,
+ queries: List[List[float]],
+ num_neighbors: int,
+ filter: List[aiplatform.matching_engine.matching_engine_index_endpoint.Namespace],
+ numeric_filter: List[
+ aiplatform.matching_engine.matching_engine_index_endpoint.NumericNamespace
+ ],
+ per_crowding_attribute_neighbor_count: int,
+) -> List[
+ List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]
+]:
+ """Query the vector search index with filtering and crowding.
+
+ Args:
+ project (str): Required. Project ID
+ location (str): Required. The region name
+ index_endpoint_name (str): Required. Index endpoint to run the query
+ against.
+ deployed_index_id (str): Required. The ID of the DeployedIndex to run
+ the queries against.
+ queries (List[List[float]]): Required. A list of queries. Each query is
+ a list of floats, representing a single embedding.
+ num_neighbors (int): Required. The number of neighbors to return.
+ filter (List[Namespace]): Required. A list of Namespaces for filtering
+ the matching results. For example,
+ [Namespace("color", ["red"], []), Namespace("shape", [], ["square"])]
+ will match datapoints that satisfy "red color" but not include
+ datapoints with "square shape".
+ numeric_filter (List[NumericNamespace]): Required. A list of
+ NumericNamespaces for filtering the matching results. For example,
+ [NumericNamespace(name="cost", value_int=5, op="GREATER")] will limit
+ the matching results to datapoints with cost greater than 5.
+ per_crowding_attribute_neighbor_count (int): Required. The maximum
+ number of returned matches with the same crowding tag.
+
+ Returns:
+ List[List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]] - A list of nearest neighbors for each query.
+ """
+ # Initialize the Vertex AI client
+ aiplatform.init(project=project, location=location)
+
+ # Create the index endpoint instance from an existing endpoint.
+ my_index_endpoint = aiplatform.MatchingEngineIndexEndpoint(
+ index_endpoint_name=index_endpoint_name
+ )
+
+ # Query the index endpoint for the nearest neighbors.
+ return my_index_endpoint.find_neighbors(
+ deployed_index_id=deployed_index_id,
+ queries=queries,
+ num_neighbors=num_neighbors,
+ filter=filter,
+ numeric_filter=numeric_filter,
+ per_crowding_attribute_neighbor_count=per_crowding_attribute_neighbor_count,
+ )
+
+
+# [END aiplatform_sdk_vector_search_find_neighbors_filtering_crowding_sample]
# [START aiplatform_sdk_vector_search_find_neighbors_jwt_sample]
@@ -95,7 +201,9 @@ def vector_search_find_neighbors_jwt(
queries: List[List[float]],
num_neighbors: int,
signed_jwt: str,
-) -> List[List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]]:
+) -> List[
+ List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]
+]:
"""Query the vector search index.
Args:
@@ -132,4 +240,5 @@ def vector_search_find_neighbors_jwt(
)
return resp
+
# [END aiplatform_sdk_vector_search_find_neighbors_jwt_sample]
|
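A minimal sketch of the filtering and crowding query pattern that the sample added above demonstrates. The project, region, endpoint name, deployed index ID, and embedding values are placeholders invented for illustration; only the find_neighbors parameters (filter, numeric_filter, per_crowding_attribute_neighbor_count) and the Namespace/NumericNamespace constructors are taken from the diff itself, so treat this as a hedged illustration rather than part of the PR.

from google.cloud import aiplatform

# Placeholder project, region, and resource names -- substitute real values.
aiplatform.init(project="my-project", location="us-central1")
endpoint = aiplatform.MatchingEngineIndexEndpoint(
    index_endpoint_name="my-index-endpoint"
)

Namespace = aiplatform.matching_engine.matching_engine_index_endpoint.Namespace
NumericNamespace = (
    aiplatform.matching_engine.matching_engine_index_endpoint.NumericNamespace
)

# Keep only "red" datapoints, drop "square" ones, require cost > 5, and return
# at most 2 neighbors that share the same crowding tag.
neighbors = endpoint.find_neighbors(
    deployed_index_id="my-deployed-index",
    queries=[[0.1, 0.2, 0.3]],
    num_neighbors=10,
    filter=[Namespace("color", ["red"], []), Namespace("shape", [], ["square"])],
    numeric_filter=[NumericNamespace(name="cost", value_int=5, op="GREATER")],
    per_crowding_attribute_neighbor_count=2,
)
print(neighbors)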
diff --git a/samples/model-builder/test_constants.py b/samples/model-builder/test_constants.py
index 3a7d3ed3c3..5930d8394d 100644
--- a/samples/model-builder/test_constants.py
+++ b/samples/model-builder/test_constants.py
@@ -382,14 +382,18 @@
# Vector Search
VECTOR_SEARCH_INDEX = "123"
VECTOR_SEARCH_INDEX_DATAPOINTS = [
- aiplatform.compat.types.index_v1beta1.IndexDatapoint(datapoint_id="datapoint_id_1", feature_vector=[0.1, 0.2]),
- aiplatform.compat.types.index_v1beta1.IndexDatapoint(datapoint_id="datapoint_id_2", feature_vector=[0.3, 0.4]),
+ aiplatform.compat.types.index_v1beta1.IndexDatapoint(
+ datapoint_id="datapoint_id_1", feature_vector=[0.1, 0.2]
+ ),
+ aiplatform.compat.types.index_v1beta1.IndexDatapoint(
+ datapoint_id="datapoint_id_2", feature_vector=[0.3, 0.4]
+ ),
]
VECTOR_SEARCH_INDEX_DATAPOINT_IDS = ["datapoint_id_1", "datapoint_id_2"]
VECTOR_SEARCH_INDEX_ENDPOINT = "456"
VECTOR_SEARCH_DEPLOYED_INDEX_ID = "789"
-VECTOR_SERACH_INDEX_QUERIES = [[0.1]]
-VECTOR_SERACH_INDEX_HYBRID_QUERIES = [
+VECTOR_SEARCH_INDEX_QUERIES = [[0.1]]
+VECTOR_SEARCH_INDEX_HYBRID_QUERIES = [
aiplatform.matching_engine.matching_engine_index_endpoint.HybridQuery(
dense_embedding=[1, 2, 3],
sparse_embedding_dimensions=[10, 20, 30],
@@ -409,6 +413,20 @@
dense_embedding=[1, 2, 3]
),
]
+VECTOR_SEARCH_FILTER = [
+ aiplatform.matching_engine.matching_engine_index_endpoint.Namespace(
+ "color", ["red"], []
+ ),
+ aiplatform.matching_engine.matching_engine_index_endpoint.Namespace(
+ "shape", [], ["squared"]
+ ),
+]
+VECTOR_SEARCH_NUMERIC_FILTER = [
+ aiplatform.matching_engine.matching_engine_index_endpoint.NumericNamespace(
+ name="cost", value_int=5, op="GREATER"
+ )
+]
+VECTOR_SEARCH_PER_CROWDING_ATTRIBUTE_NEIGHBOR_COUNT = 5
VECTOR_SEARCH_INDEX_DISPLAY_NAME = "my-vector-search-index"
VECTOR_SEARCH_INDEX_DESCRIPTION = "test description"
VECTOR_SEARCH_INDEX_LABELS = {"my_key": "my_value"}
diff --git a/samples/model-builder/vector_search/vector_search_find_neighbors_sample_test.py b/samples/model-builder/vector_search/vector_search_find_neighbors_sample_test.py
index 30f1f5711d..35c6dfe217 100644
--- a/samples/model-builder/vector_search/vector_search_find_neighbors_sample_test.py
+++ b/samples/model-builder/vector_search/vector_search_find_neighbors_sample_test.py
@@ -12,8 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from unittest.mock import call
-
import test_constants as constants
from vector_search import vector_search_find_neighbors_sample
@@ -26,8 +24,8 @@ def test_vector_search_find_neighbors_sample(
location=constants.LOCATION,
index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT,
deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID,
- queries=constants.VECTOR_SERACH_INDEX_QUERIES,
- num_neighbors=10
+ queries=constants.VECTOR_SEARCH_INDEX_QUERIES,
+ num_neighbors=10,
)
# Check client initialization
@@ -37,23 +35,79 @@ def test_vector_search_find_neighbors_sample(
# Check index endpoint initialization with right index endpoint name
mock_index_endpoint_init.assert_called_with(
- index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT)
+ index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT
+ )
# Check index_endpoint.find_neighbors is called with right params.
- mock_index_endpoint_find_neighbors.assert_has_calls(
- [
- call(
- deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID,
- queries=constants.VECTOR_SERACH_INDEX_QUERIES,
- num_neighbors=10,
- ),
- call(
- deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID,
- queries=constants.VECTOR_SERACH_INDEX_HYBRID_QUERIES,
- num_neighbors=10,
- ),
- ],
- any_order=False,
+ mock_index_endpoint_find_neighbors.assert_called_with(
+ deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID,
+ queries=constants.VECTOR_SEARCH_INDEX_QUERIES,
+ num_neighbors=10,
+ )
+
+
+def test_vector_search_find_neighbors_hybrid_sample(
+ mock_sdk_init, mock_index_endpoint_init, mock_index_endpoint_find_neighbors
+):
+ vector_search_find_neighbors_sample.vector_search_find_neighbors_hybrid_queries(
+ project=constants.PROJECT,
+ location=constants.LOCATION,
+ index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT,
+ deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID,
+ num_neighbors=10,
+ )
+
+ # Check client initialization
+ mock_sdk_init.assert_called_with(
+ project=constants.PROJECT, location=constants.LOCATION
+ )
+
+ # Check index endpoint initialization with right index endpoint name
+ mock_index_endpoint_init.assert_called_with(
+ index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT
+ )
+
+ # Check index_endpoint.find_neighbors is called with right params.
+ mock_index_endpoint_find_neighbors.assert_called_with(
+ deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID,
+ queries=constants.VECTOR_SEARCH_INDEX_HYBRID_QUERIES,
+ num_neighbors=10,
+ )
+
+
+def test_vector_search_find_neighbors_filtering_crowding_sample(
+ mock_sdk_init, mock_index_endpoint_init, mock_index_endpoint_find_neighbors
+):
+ vector_search_find_neighbors_sample.vector_search_find_neighbors_filtering_crowding(
+ project=constants.PROJECT,
+ location=constants.LOCATION,
+ index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT,
+ deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID,
+ queries=constants.VECTOR_SEARCH_INDEX_QUERIES,
+ num_neighbors=10,
+ filter=constants.VECTOR_SEARCH_FILTER,
+ numeric_filter=constants.VECTOR_SEARCH_NUMERIC_FILTER,
+ per_crowding_attribute_neighbor_count=constants.VECTOR_SEARCH_PER_CROWDING_ATTRIBUTE_NEIGHBOR_COUNT,
+ )
+
+ # Check client initialization
+ mock_sdk_init.assert_called_with(
+ project=constants.PROJECT, location=constants.LOCATION
+ )
+
+ # Check index endpoint initialization with right index endpoint name
+ mock_index_endpoint_init.assert_called_with(
+ index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT
+ )
+
+ # Check index_endpoint.find_neighbors is called with right params.
+ mock_index_endpoint_find_neighbors.assert_called_with(
+ deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID,
+ queries=constants.VECTOR_SEARCH_INDEX_QUERIES,
+ num_neighbors=10,
+ filter=constants.VECTOR_SEARCH_FILTER,
+ numeric_filter=constants.VECTOR_SEARCH_NUMERIC_FILTER,
+ per_crowding_attribute_neighbor_count=constants.VECTOR_SEARCH_PER_CROWDING_ATTRIBUTE_NEIGHBOR_COUNT,
)
@@ -65,7 +119,7 @@ def test_vector_search_find_neighbors_jwt_sample(
location=constants.LOCATION,
index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT,
deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID,
- queries=constants.VECTOR_SERACH_INDEX_QUERIES,
+ queries=constants.VECTOR_SEARCH_INDEX_QUERIES,
num_neighbors=10,
signed_jwt=constants.VECTOR_SEARCH_PRIVATE_ENDPOINT_SIGNED_JWT,
)
@@ -77,12 +131,13 @@ def test_vector_search_find_neighbors_jwt_sample(
# Check index endpoint initialization with right index endpoint name
mock_index_endpoint_init.assert_called_with(
- index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT)
+ index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT
+ )
# Check index_endpoint.find_neighbors is called with right params.
mock_index_endpoint_find_neighbors.assert_called_with(
deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID,
- queries=constants.VECTOR_SERACH_INDEX_QUERIES,
+ queries=constants.VECTOR_SEARCH_INDEX_QUERIES,
num_neighbors=10,
signed_jwt=constants.VECTOR_SEARCH_PRIVATE_ENDPOINT_SIGNED_JWT,
)
diff --git a/samples/model-builder/vector_search/vector_search_match_sample_test.py b/samples/model-builder/vector_search/vector_search_match_sample_test.py
index 5d1c5faa64..081e334ef4 100644
--- a/samples/model-builder/vector_search/vector_search_match_sample_test.py
+++ b/samples/model-builder/vector_search/vector_search_match_sample_test.py
@@ -36,7 +36,8 @@ def test_vector_search_match_hybrid_queries_sample(
# Check index endpoint initialization with right index endpoint name
mock_index_endpoint_init.assert_called_with(
- index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT)
+ index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT
+ )
# Check index_endpoint.match is called with right params.
mock_index_endpoint_match.assert_called_with(
@@ -54,7 +55,7 @@ def test_vector_search_match_jwt_sample(
location=constants.LOCATION,
index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT,
deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID,
- queries=constants.VECTOR_SERACH_INDEX_QUERIES,
+ queries=constants.VECTOR_SEARCH_INDEX_QUERIES,
num_neighbors=10,
signed_jwt=constants.VECTOR_SEARCH_PRIVATE_ENDPOINT_SIGNED_JWT,
)
@@ -66,12 +67,13 @@ def test_vector_search_match_jwt_sample(
# Check index endpoint initialization with right index endpoint name
mock_index_endpoint_init.assert_called_with(
- index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT)
+ index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT
+ )
# Check index_endpoint.match is called with right params.
mock_index_endpoint_match.assert_called_with(
deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID,
- queries=constants.VECTOR_SERACH_INDEX_QUERIES,
+ queries=constants.VECTOR_SEARCH_INDEX_QUERIES,
num_neighbors=10,
signed_jwt=constants.VECTOR_SEARCH_PRIVATE_ENDPOINT_SIGNED_JWT,
)
@@ -81,14 +83,14 @@ def test_vector_search_match_psc_manual_sample(
mock_sdk_init,
mock_index_endpoint,
mock_index_endpoint_init,
- mock_index_endpoint_match
+ mock_index_endpoint_match,
):
vector_search_match_sample.vector_search_match_psc_manual(
project=constants.PROJECT,
location=constants.LOCATION,
index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT,
deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID,
- queries=constants.VECTOR_SERACH_INDEX_QUERIES,
+ queries=constants.VECTOR_SEARCH_INDEX_QUERIES,
num_neighbors=10,
ip_address=constants.VECTOR_SEARCH_PSC_MANUAL_IP_ADDRESS,
)
@@ -100,7 +102,8 @@ def test_vector_search_match_psc_manual_sample(
# Check index endpoint initialization with right index endpoint name
mock_index_endpoint_init.assert_called_with(
- index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT)
+ index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT
+ )
# Check index endpoint PSC IP address is set
assert mock_index_endpoint.private_service_connect_ip_address == (
@@ -110,7 +113,7 @@ def test_vector_search_match_psc_manual_sample(
# Check index_endpoint.match is called with right params.
mock_index_endpoint_match.assert_called_with(
deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID,
- queries=constants.VECTOR_SERACH_INDEX_QUERIES,
+ queries=constants.VECTOR_SEARCH_INDEX_QUERIES,
num_neighbors=10,
)
@@ -123,7 +126,7 @@ def test_vector_search_match_psc_automation_sample(
location=constants.LOCATION,
index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT,
deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID,
- queries=constants.VECTOR_SERACH_INDEX_QUERIES,
+ queries=constants.VECTOR_SEARCH_INDEX_QUERIES,
num_neighbors=10,
psc_network=constants.VECTOR_SEARCH_VPC_NETWORK,
)
@@ -135,12 +138,13 @@ def test_vector_search_match_psc_automation_sample(
# Check index endpoint initialization with right index endpoint name
mock_index_endpoint_init.assert_called_with(
- index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT)
+ index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT
+ )
# Check index_endpoint.match is called with right params.
mock_index_endpoint_match.assert_called_with(
deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID,
- queries=constants.VECTOR_SERACH_INDEX_QUERIES,
+ queries=constants.VECTOR_SEARCH_INDEX_QUERIES,
num_neighbors=10,
psc_network=constants.VECTOR_SEARCH_VPC_NETWORK,
)
| 2024-12-05T14:26:24
|
{}
|
{"samples/model-builder/vector_search/vector_search_find_neighbors_sample.py": "# Copyright 2023 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import List\n\nfrom google.cloud import aiplatform\n\n\n# [START aiplatform_sdk_vector_search_find_neighbors_sample]\ndef vector_search_find_neighbors(\n project: str,\n location: str,\n index_endpoint_name: str,\n deployed_index_id: str,\n queries: List[List[float]],\n num_neighbors: int,\n) -> None:\n \"\"\"Query the vector search index.\n\n Args:\n project (str): Required. Project ID\n location (str): Required. The region name\n index_endpoint_name (str): Required. Index endpoint to run the query\n against.\n deployed_index_id (str): Required. The ID of the DeployedIndex to run\n the queries against.\n queries (List[List[float]]): Required. A list of queries. Each query is\n a list of floats, representing a single embedding.\n num_neighbors (int): Required. The number of neighbors to return.\n \"\"\"\n # Initialize the Vertex AI client\n aiplatform.init(project=project, location=location)\n\n # Create the index endpoint instance from an existing endpoint.\n my_index_endpoint = aiplatform.MatchingEngineIndexEndpoint(\n index_endpoint_name=index_endpoint_name\n )\n\n # Query the index endpoint for the nearest neighbors.\n resp = my_index_endpoint.find_neighbors(\n deployed_index_id=deployed_index_id,\n queries=queries,\n num_neighbors=num_neighbors,\n )\n print(resp)\n\n # Query hybrid datapoints, sparse-only datapoints, and dense-only datapoints.\n hybrid_queries = [\n aiplatform.matching_engine.matching_engine_index_endpoint.HybridQuery(\n dense_embedding=[1, 2, 3],\n sparse_embedding_dimensions=[10, 20, 30],\n sparse_embedding_values=[1.0, 1.0, 1.0],\n rrf_ranking_alpha=0.5,\n ),\n aiplatform.matching_engine.matching_engine_index_endpoint.HybridQuery(\n dense_embedding=[1, 2, 3],\n sparse_embedding_dimensions=[10, 20, 30],\n sparse_embedding_values=[0.1, 0.2, 0.3],\n ),\n aiplatform.matching_engine.matching_engine_index_endpoint.HybridQuery(\n sparse_embedding_dimensions=[10, 20, 30],\n sparse_embedding_values=[0.1, 0.2, 0.3],\n ),\n aiplatform.matching_engine.matching_engine_index_endpoint.HybridQuery(\n dense_embedding=[1, 2, 3]\n ),\n ]\n\n hybrid_resp = my_index_endpoint.find_neighbors(\n deployed_index_id=deployed_index_id,\n queries=hybrid_queries,\n num_neighbors=num_neighbors,)\n print(hybrid_resp)\n\n# [END aiplatform_sdk_vector_search_find_neighbors_sample]\n\n\n# [START aiplatform_sdk_vector_search_find_neighbors_jwt_sample]\ndef vector_search_find_neighbors_jwt(\n project: str,\n location: str,\n index_endpoint_name: str,\n deployed_index_id: str,\n queries: List[List[float]],\n num_neighbors: int,\n signed_jwt: str,\n) -> List[List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]]:\n \"\"\"Query the vector search index.\n\n Args:\n project (str): Required. Project ID\n location (str): Required. The region name\n index_endpoint_name (str): Required. 
Index endpoint to run the query\n against.\n deployed_index_id (str): Required. The ID of the DeployedIndex to run\n the queries against.\n queries (List[List[float]]): Required. A list of queries. Each query is\n a list of floats, representing a single embedding.\n num_neighbors (int): Required. The number of neighbors to return.\n signed_jwt (str): Required. The signed JWT token for the private\n endpoint. The endpoint must be configured to accept tokens from JWT's\n issuer and encoded audience.\n\n Returns:\n List[List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]] - A list of nearest neighbors for each query.\n \"\"\"\n # Initialize the Vertex AI client\n aiplatform.init(project=project, location=location)\n\n # Create the index endpoint instance from an existing endpoint.\n my_index_endpoint = aiplatform.MatchingEngineIndexEndpoint(\n index_endpoint_name=index_endpoint_name\n )\n\n # Query the index endpoint for the nearest neighbors.\n resp = my_index_endpoint.find_neighbors(\n deployed_index_id=deployed_index_id,\n queries=queries,\n num_neighbors=num_neighbors,\n signed_jwt=signed_jwt,\n )\n return resp\n\n# [END aiplatform_sdk_vector_search_find_neighbors_jwt_sample]\n"}
|
{"samples/model-builder/vector_search/vector_search_find_neighbors_sample.py": [{"type": "function", "name": "vector_search_find_neighbors_hybrid_queries", "lines": [67, 123], "signature": "def vector_search_find_neighbors_hybrid_queries( project: str, location: str, index_endpoint_name: str, deployed_index_id: str, num_neighbors: int, ) -> List[ List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor] ]:", "doc": "Query the vector search index using example hybrid queries.\n\nArgs:\n project (str): Required. Project ID\n location (str): Required. The region name\n index_endpoint_name (str): Required. Index endpoint to run the query\n against.\n deployed_index_id (str): Required. The ID of the DeployedIndex to run\n the queries against.\n num_neighbors (int): Required. The number of neighbors to return.\n\nReturns:\n List[List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]] - A list of nearest neighbors for each query."}, {"type": "function", "name": "vector_search_find_neighbors_filtering_crowding", "lines": [131, 188], "signature": "def vector_search_find_neighbors_filtering_crowding( project: str, location: str, index_endpoint_name: str, deployed_index_id: str, queries: List[List[float]], num_neighbors: int, filter: List[aiplatform.matching_engine.matching_engine_index_endpoint.Namespace], numeric_filter: List[ aiplatform.matching_engine.matching_engine_index_endpoint.NumericNamespace ], per_crowding_attribute_neighbor_count: int, ) -> List[ List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor] ]:", "doc": "Query the vector search index with filtering and crowding.\n\nArgs:\n project (str): Required. Project ID\n location (str): Required. The region name\n index_endpoint_name (str): Required. Index endpoint to run the query\n against.\n deployed_index_id (str): Required. The ID of the DeployedIndex to run\n the queries against.\n queries (List[List[float]]): Required. A list of queries. Each query is\n a list of floats, representing a single embedding.\n num_neighbors (int): Required. The number of neighbors to return.\n filter (List[Namespace]): Required. A list of Namespaces for filtering\n the matching results. For example,\n [Namespace(\"color\", [\"red\"], []), Namespace(\"shape\", [], [\"square\"])]\n will match datapoints that satisfy \"red color\" but not include\n datapoints with \"square shape\".\n numeric_filter (List[NumericNamespace]): Required. A list of\n NumericNamespaces for filtering the matching results. For example,\n [NumericNamespace(name=\"cost\", value_int=5, op=\"GREATER\")] will limit\n the matching results to datapoints with cost greater than 5.\n per_crowding_attribute_neighbor_count (int): Required. The maximum\n number of returned matches with the same crowding tag.\n\nReturns:\n List[List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]] - A list of nearest neighbors for each query."}]}
| null |
["samples/model-builder/vector_search/vector_search_find_neighbors_sample_test.py::test_vector_search_find_neighbors_sample", "samples/model-builder/vector_search/vector_search_find_neighbors_sample_test.py::test_vector_search_find_neighbors_hybrid_sample", "samples/model-builder/vector_search/vector_search_find_neighbors_sample_test.py::test_vector_search_find_neighbors_filtering_crowding_sample"]
|
["samples/model-builder/vector_search/vector_search_find_neighbors_sample_test.py::test_vector_search_find_neighbors_jwt_sample", "samples/model-builder/vector_search/vector_search_match_sample_test.py::test_vector_search_match_hybrid_queries_sample", "samples/model-builder/vector_search/vector_search_match_sample_test.py::test_vector_search_match_jwt_sample", "samples/model-builder/vector_search/vector_search_match_sample_test.py::test_vector_search_match_psc_manual_sample", "samples/model-builder/vector_search/vector_search_match_sample_test.py::test_vector_search_match_psc_automation_sample"]
|
67358fa6a830eb842f6b52d09061af4a41b54af6
|
{"first_commit_time": 1734029128.0, "pr_title": "chore: Samples - Add vector search sample for filtering and crowding", "pr_body": "chore: Samples - Add vector search sample for filtering and crowding\n", "pr_timeline": [{"time": 1734028399.0, "comment": "<!-- probot comment [11299897]-->\nHere is the summary of changes.\n<details>\n <summary>You are about to add 2 region tags.</summary>\n\n - [samples/model-builder/vector_search/vector_search_find_neighbors_sample.py:66](https://github.com/googleapis/python-aiplatform/blob/3ea43524f7961fdb29c4f75cf529852894bf1c86/samples/model-builder/vector_search/vector_search_find_neighbors_sample.py#L66), tag `aiplatform_sdk_vector_search_find_neighbors_hybrid_sample`\n- [samples/model-builder/vector_search/vector_search_find_neighbors_sample.py:130](https://github.com/googleapis/python-aiplatform/blob/3ea43524f7961fdb29c4f75cf529852894bf1c86/samples/model-builder/vector_search/vector_search_find_neighbors_sample.py#L130), tag `aiplatform_sdk_vector_search_find_neighbors_filtering_crowding_sample`\n\n</details>\n\n---\nThis comment is generated by [snippet-bot](https://github.com/apps/snippet-bot).\nIf you find problems with this result, please file an issue at:\nhttps://github.com/googleapis/repo-automation-bots/issues.\nTo update this comment, add `snippet-bot:force-run` label or use the checkbox below:\n- [ ] Refresh this comment\n"}], "issues": {}}
|
|
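Before the next record, a brief sketch of the hybrid (dense plus sparse) query construction that the vector_search_find_neighbors_hybrid_queries sample in the patch above demonstrates. The endpoint identifiers and embedding values below are placeholders; the HybridQuery fields mirror the ones used in the patch.

from google.cloud import aiplatform

# Placeholder project, region, and endpoint -- substitute real resources.
aiplatform.init(project="my-project", location="us-central1")
endpoint = aiplatform.MatchingEngineIndexEndpoint(
    index_endpoint_name="my-index-endpoint"
)

HybridQuery = aiplatform.matching_engine.matching_engine_index_endpoint.HybridQuery

# A single query combining a dense embedding with a sparse embedding; the
# rrf_ranking_alpha value controls how the dense and sparse rankings are
# weighted when the two result lists are fused.
query = HybridQuery(
    dense_embedding=[0.1, 0.2, 0.3],
    sparse_embedding_dimensions=[10, 20, 30],
    sparse_embedding_values=[1.0, 1.0, 1.0],
    rrf_ranking_alpha=0.5,
)

neighbors = endpoint.find_neighbors(
    deployed_index_id="my-deployed-index",
    queries=[query],
    num_neighbors=10,
)
print(neighbors)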
graphql-python/graphene
| 1,506
|
https://github.com/graphql-python/graphene/pull/1506
|
graphql-python__graphene-1506
|
[]
|
8ede21e06381c096589c424960a6cfaca304badb
|
diff --git a/graphene/types/inputobjecttype.py b/graphene/types/inputobjecttype.py
index 5d2785105..fdf38ba05 100644
--- a/graphene/types/inputobjecttype.py
+++ b/graphene/types/inputobjecttype.py
@@ -14,6 +14,31 @@ class InputObjectTypeOptions(BaseOptions):
container = None # type: InputObjectTypeContainer
+# Currently in Graphene, we get a `None` whenever we access an (optional) field that was not set in an InputObjectType
+# using the InputObjectType.<attribute> dot access syntax. This is ambiguous, because in this current (Graphene
+# historical) arrangement, we cannot distinguish between a field not being set and a field being set to None.
+# At the same time, we shouldn't break existing code that expects a `None` when accessing a field that was not set.
+_INPUT_OBJECT_TYPE_DEFAULT_VALUE = None
+
+# To mitigate this, we provide the function `set_input_object_type_default_value` to allow users to change the default
+# value returned in non-specified fields in InputObjectType to another meaningful sentinel value (e.g. Undefined)
+# if they want to. This way, we can keep code that expects a `None` working while we figure out a better solution (or
+# a well-documented breaking change) for this issue.
+
+
+def set_input_object_type_default_value(default_value):
+ """
+ Change the sentinel value returned by non-specified fields in an InputObjectType
+ Useful to differentiate between a field not being set and a field being set to None by using a sentinel value
+ (e.g. Undefined is a good sentinel value for this purpose)
+
+ This function should be called at the beginning of the app or in some other place where it is guaranteed to
+ be called before any InputObjectType is defined.
+ """
+ global _INPUT_OBJECT_TYPE_DEFAULT_VALUE
+ _INPUT_OBJECT_TYPE_DEFAULT_VALUE = default_value
+
+
class InputObjectTypeContainer(dict, BaseType): # type: ignore
class Meta:
abstract = True
@@ -21,7 +46,7 @@ class Meta:
def __init__(self, *args, **kwargs):
dict.__init__(self, *args, **kwargs)
for key in self._meta.fields:
- setattr(self, key, self.get(key, None))
+ setattr(self, key, self.get(key, _INPUT_OBJECT_TYPE_DEFAULT_VALUE))
def __init_subclass__(cls, *args, **kwargs):
pass
diff --git a/graphene/validation/depth_limit.py b/graphene/validation/depth_limit.py
index b4599e660..e0f286634 100644
--- a/graphene/validation/depth_limit.py
+++ b/graphene/validation/depth_limit.py
@@ -30,7 +30,7 @@
except ImportError:
# backwards compatibility for v3.6
from typing import Pattern
-from typing import Callable, Dict, List, Optional, Union
+from typing import Callable, Dict, List, Optional, Union, Tuple
from graphql import GraphQLError
from graphql.validation import ValidationContext, ValidationRule
@@ -82,7 +82,7 @@ def __init__(self, validation_context: ValidationContext):
def get_fragments(
- definitions: List[DefinitionNode],
+ definitions: Tuple[DefinitionNode, ...],
) -> Dict[str, FragmentDefinitionNode]:
fragments = {}
for definition in definitions:
@@ -94,7 +94,7 @@ def get_fragments(
# This will actually get both queries and mutations.
# We can basically treat those the same
def get_queries_and_mutations(
- definitions: List[DefinitionNode],
+ definitions: Tuple[DefinitionNode, ...],
) -> Dict[str, OperationDefinitionNode]:
operations = {}
|
diff --git a/graphene/types/tests/conftest.py b/graphene/types/tests/conftest.py
new file mode 100644
index 000000000..43f7d7268
--- /dev/null
+++ b/graphene/types/tests/conftest.py
@@ -0,0 +1,12 @@
+import pytest
+from graphql import Undefined
+
+from graphene.types.inputobjecttype import set_input_object_type_default_value
+
+
[email protected]()
+def set_default_input_object_type_to_undefined():
+ """This fixture is used to change the default value of optional inputs in InputObjectTypes for specific tests"""
+ set_input_object_type_default_value(Undefined)
+ yield
+ set_input_object_type_default_value(None)
diff --git a/graphene/types/tests/test_inputobjecttype.py b/graphene/types/tests/test_inputobjecttype.py
index 0fb7e3945..0d7bcf80c 100644
--- a/graphene/types/tests/test_inputobjecttype.py
+++ b/graphene/types/tests/test_inputobjecttype.py
@@ -1,3 +1,5 @@
+from graphql import Undefined
+
from ..argument import Argument
from ..field import Field
from ..inputfield import InputField
@@ -6,6 +8,7 @@
from ..scalars import Boolean, String
from ..schema import Schema
from ..unmountedtype import UnmountedType
+from ... import NonNull
class MyType:
@@ -136,3 +139,31 @@ def resolve_is_child(self, info, parent):
assert not result.errors
assert result.data == {"isChild": True}
+
+
+def test_inputobjecttype_default_input_as_undefined(
+ set_default_input_object_type_to_undefined,
+):
+ class TestUndefinedInput(InputObjectType):
+ required_field = String(required=True)
+ optional_field = String()
+
+ class Query(ObjectType):
+ undefined_optionals_work = Field(NonNull(Boolean), input=TestUndefinedInput())
+
+ def resolve_undefined_optionals_work(self, info, input: TestUndefinedInput):
+ # Confirm that optional_field comes as Undefined
+ return (
+ input.required_field == "required" and input.optional_field is Undefined
+ )
+
+ schema = Schema(query=Query)
+ result = schema.execute(
+ """query basequery {
+ undefinedOptionalsWork(input: {requiredField: "required"})
+ }
+ """
+ )
+
+ assert not result.errors
+ assert result.data == {"undefinedOptionalsWork": True}
diff --git a/graphene/types/tests/test_type_map.py b/graphene/types/tests/test_type_map.py
index 55b1706e0..55665b6b8 100644
--- a/graphene/types/tests/test_type_map.py
+++ b/graphene/types/tests/test_type_map.py
@@ -20,8 +20,8 @@
from ..interface import Interface
from ..objecttype import ObjectType
from ..scalars import Int, String
-from ..structures import List, NonNull
from ..schema import Schema
+from ..structures import List, NonNull
def create_type_map(types, auto_camelcase=True):
@@ -227,6 +227,18 @@ def resolve_foo_bar(self, args, info):
assert foo_field.description == "Field description"
+def test_inputobject_undefined(set_default_input_object_type_to_undefined):
+ class OtherObjectType(InputObjectType):
+ optional_field = String()
+
+ type_map = create_type_map([OtherObjectType])
+ assert "OtherObjectType" in type_map
+ graphql_type = type_map["OtherObjectType"]
+
+ container = graphql_type.out_type({})
+ assert container.optional_field is Undefined
+
+
def test_objecttype_camelcase():
class MyObjectType(ObjectType):
"""Description"""
| 2023-05-29T18:07:14
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"graphene/types/inputobjecttype.py": "from .base import BaseOptions, BaseType\nfrom .inputfield import InputField\nfrom .unmountedtype import UnmountedType\nfrom .utils import yank_fields_from_attrs\n\n# For static type checking with Mypy\nMYPY = False\nif MYPY:\n from typing import Dict, Callable # NOQA\n\n\nclass InputObjectTypeOptions(BaseOptions):\n fields = None # type: Dict[str, InputField]\n container = None # type: InputObjectTypeContainer\n\n\nclass InputObjectTypeContainer(dict, BaseType): # type: ignore\n class Meta:\n abstract = True\n\n def __init__(self, *args, **kwargs):\n dict.__init__(self, *args, **kwargs)\n for key in self._meta.fields:\n setattr(self, key, self.get(key, None))\n\n def __init_subclass__(cls, *args, **kwargs):\n pass\n\n\nclass InputObjectType(UnmountedType, BaseType):\n \"\"\"\n Input Object Type Definition\n\n An input object defines a structured collection of fields which may be\n supplied to a field argument.\n\n Using ``graphene.NonNull`` will ensure that a input value must be provided by the query.\n\n All class attributes of ``graphene.InputObjectType`` are implicitly mounted as InputField\n using the below Meta class options.\n\n .. code:: python\n\n from graphene import InputObjectType, String, InputField\n\n class Person(InputObjectType):\n # implicitly mounted as Input Field\n first_name = String(required=True)\n # explicitly mounted as Input Field\n last_name = InputField(String, description=\"Surname\")\n\n The fields on an input object type can themselves refer to input object types, but you can't\n mix input and output types in your schema.\n\n Meta class options (optional):\n name (str): the name of the GraphQL type (must be unique in schema). Defaults to class\n name.\n description (str): the description of the GraphQL type in the schema. Defaults to class\n docstring.\n container (class): A class reference for a value object that allows for\n attribute initialization and access. Default InputObjectTypeContainer.\n fields (Dict[str, graphene.InputField]): Dictionary of field name to InputField. 
Not\n recommended to use (prefer class attributes).\n \"\"\"\n\n @classmethod\n def __init_subclass_with_meta__(cls, container=None, _meta=None, **options):\n if not _meta:\n _meta = InputObjectTypeOptions(cls)\n\n fields = {}\n for base in reversed(cls.__mro__):\n fields.update(yank_fields_from_attrs(base.__dict__, _as=InputField))\n\n if _meta.fields:\n _meta.fields.update(fields)\n else:\n _meta.fields = fields\n if container is None:\n container = type(cls.__name__, (InputObjectTypeContainer, cls), {})\n _meta.container = container\n super(InputObjectType, cls).__init_subclass_with_meta__(_meta=_meta, **options)\n\n @classmethod\n def get_type(cls):\n \"\"\"\n This function is called when the unmounted type (InputObjectType instance)\n is mounted (as a Field, InputField or Argument)\n \"\"\"\n return cls\n", "graphene/validation/depth_limit.py": "# This is a Python port of https://github.com/stems/graphql-depth-limit\n# which is licensed under the terms of the MIT license, reproduced below.\n#\n# -----------\n#\n# MIT License\n#\n# Copyright (c) 2017 Stem\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in all\n# copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\ntry:\n from re import Pattern\nexcept ImportError:\n # backwards compatibility for v3.6\n from typing import Pattern\nfrom typing import Callable, Dict, List, Optional, Union\n\nfrom graphql import GraphQLError\nfrom graphql.validation import ValidationContext, ValidationRule\nfrom graphql.language import (\n DefinitionNode,\n FieldNode,\n FragmentDefinitionNode,\n FragmentSpreadNode,\n InlineFragmentNode,\n Node,\n OperationDefinitionNode,\n)\n\nfrom ..utils.is_introspection_key import is_introspection_key\n\n\nIgnoreType = Union[Callable[[str], bool], Pattern, str]\n\n\ndef depth_limit_validator(\n max_depth: int,\n ignore: Optional[List[IgnoreType]] = None,\n callback: Optional[Callable[[Dict[str, int]], None]] = None,\n):\n class DepthLimitValidator(ValidationRule):\n def __init__(self, validation_context: ValidationContext):\n document = validation_context.document\n definitions = document.definitions\n\n fragments = get_fragments(definitions)\n queries = get_queries_and_mutations(definitions)\n query_depths = {}\n\n for name in queries:\n query_depths[name] = determine_depth(\n node=queries[name],\n fragments=fragments,\n depth_so_far=0,\n max_depth=max_depth,\n context=validation_context,\n operation_name=name,\n ignore=ignore,\n )\n if callable(callback):\n callback(query_depths)\n super().__init__(validation_context)\n\n return DepthLimitValidator\n\n\ndef get_fragments(\n definitions: List[DefinitionNode],\n) -> Dict[str, FragmentDefinitionNode]:\n fragments = {}\n for definition in definitions:\n if isinstance(definition, FragmentDefinitionNode):\n fragments[definition.name.value] = definition\n return fragments\n\n\n# This will actually get both queries and mutations.\n# We can basically treat those the same\ndef get_queries_and_mutations(\n definitions: List[DefinitionNode],\n) -> Dict[str, OperationDefinitionNode]:\n operations = {}\n\n for definition in definitions:\n if isinstance(definition, OperationDefinitionNode):\n operation = definition.name.value if definition.name else \"anonymous\"\n operations[operation] = definition\n return operations\n\n\ndef determine_depth(\n node: Node,\n fragments: Dict[str, FragmentDefinitionNode],\n depth_so_far: int,\n max_depth: int,\n context: ValidationContext,\n operation_name: str,\n ignore: Optional[List[IgnoreType]] = None,\n) -> int:\n if depth_so_far > max_depth:\n context.report_error(\n GraphQLError(\n f\"'{operation_name}' exceeds maximum operation depth of {max_depth}.\",\n [node],\n )\n )\n return depth_so_far\n if isinstance(node, FieldNode):\n should_ignore = is_introspection_key(node.name.value) or is_ignored(\n node, ignore\n )\n\n if should_ignore or not node.selection_set:\n return 0\n return 1 + max(\n map(\n lambda selection: determine_depth(\n node=selection,\n fragments=fragments,\n depth_so_far=depth_so_far + 1,\n max_depth=max_depth,\n context=context,\n operation_name=operation_name,\n ignore=ignore,\n ),\n node.selection_set.selections,\n )\n )\n elif isinstance(node, FragmentSpreadNode):\n return determine_depth(\n node=fragments[node.name.value],\n fragments=fragments,\n depth_so_far=depth_so_far,\n max_depth=max_depth,\n context=context,\n operation_name=operation_name,\n ignore=ignore,\n )\n elif isinstance(\n node, (InlineFragmentNode, 
FragmentDefinitionNode, OperationDefinitionNode)\n ):\n return max(\n map(\n lambda selection: determine_depth(\n node=selection,\n fragments=fragments,\n depth_so_far=depth_so_far,\n max_depth=max_depth,\n context=context,\n operation_name=operation_name,\n ignore=ignore,\n ),\n node.selection_set.selections,\n )\n )\n else:\n raise Exception(\n f\"Depth crawler cannot handle: {node.kind}.\"\n ) # pragma: no cover\n\n\ndef is_ignored(node: FieldNode, ignore: Optional[List[IgnoreType]] = None) -> bool:\n if ignore is None:\n return False\n for rule in ignore:\n field_name = node.name.value\n if isinstance(rule, str):\n if field_name == rule:\n return True\n elif isinstance(rule, Pattern):\n if rule.match(field_name):\n return True\n elif callable(rule):\n if rule(field_name):\n return True\n else:\n raise ValueError(f\"Invalid ignore option: {rule}.\")\n return False\n"}
|
{"graphene/types/inputobjecttype.py": [{"type": "function", "name": "set_input_object_type_default_value", "lines": [29, 39], "signature": "def set_input_object_type_default_value(default_value):", "doc": "Change the sentinel value returned by non-specified fields in an InputObjectType\nUseful to differentiate between a field not being set and a field being set to None by using a sentinel value\n(e.g. Undefined is a good sentinel value for this purpose)\n\nThis function should be called at the beginning of the app or in some other place where it is guaranteed to\nbe called before any InputObjectType is defined."}]}
| null |
["graphene/types/tests/test_inputobjecttype.py::test_generate_inputobjecttype", "graphene/types/tests/test_inputobjecttype.py::test_generate_inputobjecttype_with_meta", "graphene/types/tests/test_inputobjecttype.py::test_generate_inputobjecttype_with_fields", "graphene/types/tests/test_inputobjecttype.py::test_ordered_fields_in_inputobjecttype", "graphene/types/tests/test_inputobjecttype.py::test_generate_inputobjecttype_unmountedtype", "graphene/types/tests/test_inputobjecttype.py::test_generate_inputobjecttype_as_argument", "graphene/types/tests/test_inputobjecttype.py::test_generate_inputobjecttype_inherit_abstracttype", "graphene/types/tests/test_inputobjecttype.py::test_generate_inputobjecttype_inherit_abstracttype_reversed", "graphene/types/tests/test_inputobjecttype.py::test_inputobjecttype_of_input", "graphene/types/tests/test_inputobjecttype.py::test_inputobjecttype_default_input_as_undefined", "graphene/types/tests/test_type_map.py::test_enum", "graphene/types/tests/test_type_map.py::test_objecttype", "graphene/types/tests/test_type_map.py::test_required_argument_with_default_value", "graphene/types/tests/test_type_map.py::test_dynamic_objecttype", "graphene/types/tests/test_type_map.py::test_interface", "graphene/types/tests/test_type_map.py::test_inputobject", "graphene/types/tests/test_type_map.py::test_inputobject_undefined", "graphene/types/tests/test_type_map.py::test_objecttype_camelcase", "graphene/types/tests/test_type_map.py::test_objecttype_camelcase_disabled", "graphene/types/tests/test_type_map.py::test_objecttype_with_possible_types", "graphene/types/tests/test_type_map.py::test_interface_with_interfaces"]
|
[]
|
8ede21e06381c096589c424960a6cfaca304badb
|
{"first_commit_time": 1668102812.0, "pr_title": "Allow the user to change InputObjectType's default value on non-specified inputs to a sentinel value", "pr_body": "This PR aims to remove the ambiguity that happens with `InputObjectType`s when accessing optional fields using the `input.<attribute>` \"dot access\" syntax.\r\n\r\n### Current behavior:\r\n\r\nIf an optional input field is defined but has not been specified in an incoming `InputObjectType`, it comes back as a `None`\r\n\r\nThis is ambiguous, since the user could also have set that specific field to `None`, meaning we wouldn't be able to tell between an explicit setting to `None` versus the field not being set at all.\r\n\r\n### Proposed behavior in this PR:\r\n\r\nIntroduce a non-breaking, opt-in method to allow the users to change this default value getter behavior in InputObjectTypes to any value that they want to. Graphene already provides such concept in the form of the `graphl.Undefined` value, which is a good example of a value that the user could use to differentiate between an optional input being explicitly set to `None` versus not being set at all.\r\n\r\nThe proposed API is a function called `set_input_object_type_default_value()` that allows the user to change the sentinel value that will be returned in optional inputs (it should be called early in the code, before any `InputObjectType`s are defined). The default value remains `None`, aligning with the current behavior, so as not to break existing code that relies on the current assumption.\r\n\r\n#### _(Next steps and parting thoughts)_\r\nMoving forward, I believe it'd be best to discuss around introducing a well documented breaking change that fixes this behavior altogether, eliminating such possibility. But the current PR should allow us to keep making progress while bigger fish are being fried :fish: :cook: \r\n\r\nThanks!", "pr_timeline": [{"time": 1685384850.0, "comment": "## [Codecov](https://app.codecov.io/gh/graphql-python/graphene/pull/1506?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=graphql-python) Report\nPatch coverage: **`100.00`**% and no project coverage change.\n> Comparison is base [(`8ede21e`)](https://app.codecov.io/gh/graphql-python/graphene/commit/8ede21e06381c096589c424960a6cfaca304badb?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=graphql-python) 96.00% compared to head [(`a355083`)](https://app.codecov.io/gh/graphql-python/graphene/pull/1506?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=graphql-python) 96.00%.\n\n<details><summary>Additional details and impacted files</summary>\n\n\n```diff\n@@ Coverage Diff @@\n## master #1506 +/- ##\n=======================================\n Coverage 96.00% 96.00% \n=======================================\n Files 51 51 \n Lines 1750 1753 +3 \n=======================================\n+ Hits 1680 1683 +3 \n Misses 70 70 \n```\n\n\n| [Impacted Files](https://app.codecov.io/gh/graphql-python/graphene/pull/1506?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=graphql-python) | Coverage \u0394 | |\n|---|---|---|\n| [graphene/types/inputobjecttype.py](https://app.codecov.io/gh/graphql-python/graphene/pull/1506?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=graphql-python#diff-Z3JhcGhlbmUvdHlwZXMvaW5wdXRvYmplY3R0eXBlLnB5) | 
`97.50% <100.00%> (+0.20%)` | :arrow_up: |\n| [graphene/validation/depth\\_limit.py](https://app.codecov.io/gh/graphql-python/graphene/pull/1506?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=graphql-python#diff-Z3JhcGhlbmUvdmFsaWRhdGlvbi9kZXB0aF9saW1pdC5weQ==) | `96.96% <100.00%> (\u00f8)` | |\n\n\n</details>\n\n[:umbrella: View full report in Codecov by Sentry](https://app.codecov.io/gh/graphql-python/graphene/pull/1506?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=graphql-python). \n:loudspeaker: Do you have feedback about the report comment? [Let us know in this issue](https://about.codecov.io/codecov-pr-comment-feedback/?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=graphql-python).\n"}], "issues": {}}
|
|
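To make the opt-in API introduced by this graphene change concrete, a short sketch in the spirit of the PR's own test fixture. The schema, field names, and query below are illustrative; set_input_object_type_default_value and the graphql.Undefined sentinel are the pieces the patch adds, and the call must happen before any InputObjectType is defined.

from graphql import Undefined

from graphene import Boolean, Field, InputObjectType, NonNull, ObjectType, Schema, String
from graphene.types.inputobjecttype import set_input_object_type_default_value

# Opt in before any InputObjectType is defined: unset optional inputs will now
# come back as graphql.Undefined instead of None.
set_input_object_type_default_value(Undefined)


class PersonInput(InputObjectType):
    name = String(required=True)
    nickname = String()


class Query(ObjectType):
    has_nickname = Field(NonNull(Boolean), input=PersonInput())

    def resolve_has_nickname(self, info, input):
        # Undefined means "not supplied"; None would mean "explicitly set to null".
        return input.nickname is not Undefined


schema = Schema(query=Query)
result = schema.execute('{ hasNickname(input: {name: "Ada"}) }')
assert not result.errors
assert result.data == {"hasNickname": False}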
huggingface/accelerate
| 255
|
https://github.com/huggingface/accelerate/pull/255
|
huggingface__accelerate-255
|
[]
|
4fc586f5af650a5711dc907fb613367d2f009c9a
|
diff --git a/src/accelerate/accelerator.py b/src/accelerate/accelerator.py
index b841bc809f9..1254c19e9af 100644
--- a/src/accelerate/accelerator.py
+++ b/src/accelerate/accelerator.py
@@ -22,6 +22,7 @@
from packaging import version
+from .checkpointing import load_accelerator_state, save_accelerator_state
from .data_loader import prepare_data_loader
from .kwargs_handlers import DistributedDataParallelKwargs, GradScalerKwargs, InitProcessGroupKwargs, KwargsHandler
from .optimizer import AcceleratedOptimizer
@@ -40,6 +41,7 @@
if is_deepspeed_available():
import deepspeed
+
from .deepspeed_utils import DeepSpeedEngineWrapper, DeepSpeedOptimizerWrapper
import logging
@@ -560,6 +562,36 @@ def save(self, obj, f):
"""
save(obj, f)
+ def save_state(self, output_dir: str):
+ """
+ Saves the current states of the model, optimizer, scaler, and RNG generators.
+
+ Args:
+ output_dir (:obj:`str` or :obj:`os.PathLike`):
+ The name of the folder to save all relevant weights and states.
+ """
+ # Check if folder exists
+ output_dir = os.path.expanduser(output_dir)
+ os.makedirs(output_dir, exist_ok=True)
+ logger.info(f"Saving current state to {output_dir}")
+ weights = [self.get_state_dict(m) for m in self._models]
+ return save_accelerator_state(output_dir, weights, self._optimizers, self.state.process_index, self.scaler)
+
+ def load_state(self, input_dir: str):
+ """
+ Loads the current states of the model, optimizer, scaler, and RNG generators.
+
+ Args:
+ input_dir (:obj:`str` or :obj:`os.PathLike`):
+ The name of the folder all relevant weights and states were saved in.
+ """
+ # Check if folder exists
+ input_dir = os.path.expanduser(input_dir)
+ if not os.path.isdir(input_dir):
+ raise ValueError(f"Tried to find {input_dir} but folder does not exist")
+ logger.info(f"Loading states from {input_dir}")
+ load_accelerator_state(input_dir, self._models, self._optimizers, self.state.process_index, self.scaler)
+
def free_memory(self):
"""
Will release all references to the internal objects stored and call the garbage collector. You should call this
diff --git a/src/accelerate/utils.py b/src/accelerate/utils.py
index 3af792f1f05..abd5152ae83 100644
--- a/src/accelerate/utils.py
+++ b/src/accelerate/utils.py
@@ -43,6 +43,11 @@ def is_sagemaker_available():
if is_deepspeed_available():
from deepspeed import DeepSpeedEngine
+SCALER_NAME = "scaler.pt"
+MODEL_NAME = "pytorch_model"
+RNG_STATE_NAME = "random_states"
+OPTIMIZER_NAME = "optimizer"
+
class RNGType(Enum):
TORCH = "torch"
|
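For orientation, a minimal sketch of the checkpoint round trip that the new save_state and load_state methods above enable. The model, optimizer, and checkpoint directory are placeholders and the training loop is elided; only the two Accelerator methods come from this patch.

import torch

from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(8, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
model, optimizer = accelerator.prepare(model, optimizer)

# ... training steps ...

# Writes model weights, optimizer states, the GradScaler state (if mixed
# precision is active), and per-process RNG states into the given folder.
accelerator.save_state("checkpoints/step_100")

# Later, or in a resumed run with the same objects prepared, restore in place.
accelerator.load_state("checkpoints/step_100")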
diff --git a/src/accelerate/checkpointing.py b/src/accelerate/checkpointing.py
new file mode 100644
index 00000000000..37d68b90c25
--- /dev/null
+++ b/src/accelerate/checkpointing.py
@@ -0,0 +1,134 @@
+# Copyright 2022 The HuggingFace Team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import random
+from typing import List
+
+import numpy as np
+import torch
+from torch.cuda.amp import GradScaler
+
+from .state import is_tpu_available
+from .utils import MODEL_NAME, OPTIMIZER_NAME, RNG_STATE_NAME, SCALER_NAME, save
+
+
+if is_tpu_available():
+ import torch_xla.core.xla_model as xm
+
+import logging
+
+
+logger = logging.getLogger(__name__)
+
+
+def save_accelerator_state(
+ output_dir: str, model_states: List[dict], optimizers: list, process_index: int, scaler: GradScaler = None
+):
+ """
+ Saves the current states of the models, optimizers, scaler, and RNG generators to a given directory.
+
+ Args:
+ output_dir (:obj:`str` or :obj:`os.PathLike`):
+ The name of the folder to save all relevant weights and states.
+ model_states (:obj:`List[torch.nn.Module]`):
+ A list of model states
+ optimizers (:obj:`List[torch.optim.Optimizer]`):
+ A list of optimizer instances
+ process_index (:obj:`int`):
+ The current process index in the Accelerator state
+ scaler (:obj:`torch.cuda.amp.GradScaler`, `optional`):
+ An optional gradient scaler instance to save
+ """
+ # Model states
+ for i, state in enumerate(model_states):
+ weights_name = f"{MODEL_NAME}.bin" if i == 0 else f"{MODEL_NAME}_{i}.bin"
+ output_model_file = os.path.join(output_dir, weights_name)
+ save(state, output_model_file)
+ logger.info(f"Model weights saved in {output_model_file}")
+ # Optimizer states
+ for i, opt in enumerate(optimizers):
+ state = opt.state_dict()
+ optimizer_name = f"{OPTIMIZER_NAME}.bin" if i == 0 else f"{OPTIMIZER_NAME}_{i}.bin"
+ output_optimizer_file = os.path.join(output_dir, optimizer_name)
+ save(state, output_optimizer_file)
+ logger.info(f"Optimizer state saved in {output_optimizer_file}")
+ # GradScaler state
+ if scaler is not None:
+ state = scaler.state_dict()
+ output_scaler_file = os.path.join(output_dir, SCALER_NAME)
+ torch.save(state, output_scaler_file)
+ logger.info(f"Gradient scaler state saved in {output_scaler_file}")
+ # Random number generator states
+ states = {}
+ states_name = f"{RNG_STATE_NAME}_{process_index}.pkl"
+ states["random_state"] = random.getstate()
+ states["numpy_random_seed"] = np.random.get_state()
+ states["torch_manual_seed"] = torch.get_rng_state()
+ states["torch_cuda_manual_seed"] = torch.cuda.get_rng_state_all()
+ # ^^ safe to call this function even if cuda is not available
+ if is_tpu_available():
+ states["xm_seed"] = torch.tensor(xm.get_rng_state())
+ output_states_file = os.path.join(output_dir, states_name)
+ torch.save(states, output_states_file)
+ logger.info(f"Random states saved in {output_states_file}")
+ return output_dir
+
+
+def load_accelerator_state(input_dir, models, optimizers, process_index, scaler=None):
+ """
+ Loads states of the models, optimizers, scaler, and RNG generators from a given directory.
+
+ Args:
+ input_dir (:obj:`str` or :obj:`os.PathLike`):
+ The name of the folder to load all relevant weights and states.
+ models (:obj:`List[torch.nn.Module]`):
+ A list of model instances
+ optimizers (:obj:`List[torch.optim.Optimizer]`):
+ A list of optimizer instances
+ process_index (:obj:`int`):
+ The current process index in the Accelerator state
+ scaler (:obj:`torch.cuda.amp.GradScaler`, `optional`):
+ An optional `GradScaler` instance to load
+ """
+ # Model states
+ for i, model in enumerate(models):
+ weights_name = f"{MODEL_NAME}.bin" if i == 0 else f"{MODEL_NAME}_{i}.bin"
+ input_model_file = os.path.join(input_dir, weights_name)
+ models[i].load_state_dict(torch.load(input_model_file))
+ logger.info("All model weights loaded successfully")
+
+ # Optimizer states
+ for i, opt in enumerate(optimizers):
+ optimizer_name = f"{OPTIMIZER_NAME}.bin" if i == 0 else f"{OPTIMIZER_NAME}_{i}.bin"
+ input_optimizer_file = os.path.join(input_dir, optimizer_name)
+ optimizers[i].load_state_dict(torch.load(input_optimizer_file))
+ logger.info("All optimizer states loaded successfully")
+
+ # GradScaler state
+ if scaler is not None:
+ input_scaler_file = os.path.join(input_dir, SCALER_NAME)
+ scaler.load_state_dict(torch.load(input_scaler_file))
+ logger.info("GradScaler state loaded successfully")
+
+ # Random states
+ states = torch.load(os.path.join(input_dir, f"{RNG_STATE_NAME}_{process_index}.pkl"))
+ random.setstate(states["random_state"])
+ np.random.set_state(states["numpy_random_seed"])
+ torch.set_rng_state(states["torch_manual_seed"])
+ torch.cuda.set_rng_state_all(states["torch_cuda_manual_seed"])
+ # ^^ safe to call this function even if cuda is not available
+ if is_tpu_available():
+ xm.set_rng_state(states["xm_seed"])
+ logger.info("All random states loaded successfully")
diff --git a/tests/test_state_checkpointing.py b/tests/test_state_checkpointing.py
new file mode 100644
index 00000000000..a74dcb7247b
--- /dev/null
+++ b/tests/test_state_checkpointing.py
@@ -0,0 +1,125 @@
+# Copyright 2022 The HuggingFace Team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+import os
+import random
+import tempfile
+import unittest
+
+import torch
+from torch import nn
+from torch.utils.data import DataLoader, TensorDataset
+
+from accelerate import Accelerator
+from accelerate.utils import set_seed
+
+
+logger = logging.getLogger(__name__)
+
+
+def dummy_dataloaders(a=2, b=3, batch_size=16, n_train_batches: int = 10, n_valid_batches: int = 2):
+ "Generates a tuple of dummy DataLoaders to test with"
+
+ def get_dataset(n_batches):
+ x = torch.randn(batch_size * n_batches, 1)
+ return TensorDataset(x, a * x + b + 0.1 * torch.randn(batch_size * n_batches, 1))
+
+ train_dataset = get_dataset(n_train_batches)
+ valid_dataset = get_dataset(n_valid_batches)
+ train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size, num_workers=4)
+ valid_dataloader = DataLoader(valid_dataset, shuffle=False, batch_size=batch_size, num_workers=4)
+ return (train_dataloader, valid_dataloader)
+
+
+def train(num_epochs, model, dataloader, optimizer, accelerator):
+ "Trains for `num_epochs`"
+ rands = []
+ for epoch in range(num_epochs):
+ # Train quickly
+ model.train()
+ for step, batch in enumerate(dataloader):
+ x, y = batch
+ outputs = model(x)
+ loss = torch.nn.functional.mse_loss(outputs, y)
+ accelerator.backward(loss)
+ optimizer.step()
+ optimizer.zero_grad()
+ rands.append(random.random()) # Introduce some randomness
+ return rands
+
+
+class DummyModel(nn.Module):
+ "Simple model to do y=mx+b"
+
+ def __init__(self):
+ super().__init__()
+ self.a = nn.Parameter(torch.randn(1))
+ self.b = nn.Parameter(torch.randn(1))
+
+ def forward(self, x):
+ return x * self.a + self.b
+
+
+class CheckpointTest(unittest.TestCase):
+ def test_can_resume_training(self):
+ with tempfile.TemporaryDirectory() as tmpdir:
+ set_seed(42)
+ model = DummyModel()
+ optimizer = torch.optim.Adam(params=model.parameters(), lr=1e-3)
+ train_dataloader, valid_dataloader = dummy_dataloaders()
+ # Train baseline
+ accelerator = Accelerator()
+ model, optimizer, train_dataloader, valid_dataloader = accelerator.prepare(
+ model, optimizer, train_dataloader, valid_dataloader
+ )
+ # Save initial
+ initial = os.path.join(tmpdir, "initial")
+ accelerator.save_state(initial)
+ (a, b) = model.a.item(), model.b.item()
+ opt_state = optimizer.state_dict()
+ ground_truth_rands = train(3, model, train_dataloader, optimizer, accelerator)
+ (a1, b1) = model.a.item(), model.b.item()
+ opt_state1 = optimizer.state_dict()
+
+ # Train partially
+ set_seed(42)
+ model = DummyModel()
+ optimizer = torch.optim.Adam(params=model.parameters(), lr=1e-3)
+ train_dataloader, valid_dataloader = dummy_dataloaders()
+ accelerator = Accelerator()
+ model, optimizer, train_dataloader, valid_dataloader = accelerator.prepare(
+ model, optimizer, train_dataloader, valid_dataloader
+ )
+ accelerator.load_state(initial)
+ (a2, b2) = model.a.item(), model.b.item()
+ opt_state2 = optimizer.state_dict()
+ self.assertEqual(a, a2)
+ self.assertEqual(b, b2)
+ self.assertEqual(opt_state, opt_state2)
+
+ test_rands = train(2, model, train_dataloader, optimizer, accelerator)
+ # Save everything
+ checkpoint = os.path.join(tmpdir, "checkpoint")
+ accelerator.save_state(checkpoint)
+
+ # Load everything back in and make sure all states work
+ accelerator.load_state(checkpoint)
+ test_rands += train(1, model, train_dataloader, optimizer, accelerator)
+ (a3, b3) = model.a.item(), model.b.item()
+ opt_state3 = optimizer.state_dict()
+ self.assertEqual(a1, a3)
+ self.assertEqual(b1, b3)
+ self.assertEqual(opt_state1, opt_state3)
+ self.assertEqual(ground_truth_rands, test_rands)
|
2022-02-17T21:06:16
|
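The checkpointing module in the test patch above snapshots the Python, NumPy, and PyTorch RNG states precisely so that a resumed run reproduces the same random draws, which is what the test's `ground_truth_rands` vs `test_rands` comparison checks. A minimal standalone sketch of that RNG round-trip (function names here are illustrative, not accelerate APIs):

```python
# Standalone sketch of the RNG save/restore idea used by the checkpointing module.
import random

import numpy as np
import torch


def snapshot_rng():
    # Capture the generator states the training loop may consume.
    return {
        "python": random.getstate(),
        "numpy": np.random.get_state(),
        "torch": torch.get_rng_state(),
    }


def restore_rng(states):
    # Rewind every generator to the captured state.
    random.setstate(states["python"])
    np.random.set_state(states["numpy"])
    torch.set_rng_state(states["torch"])


states = snapshot_rng()
before = [random.random(), float(torch.rand(1))]
restore_rng(states)
after = [random.random(), float(torch.rand(1))]
assert before == after  # identical draws after restoring the RNG states
```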
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"src/accelerate/accelerator.py": "# Copyright 2021 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport gc\nimport os\nimport warnings\nfrom contextlib import contextmanager\nfrom typing import List, Optional, Union\n\nimport torch\n\nfrom packaging import version\n\nfrom .data_loader import prepare_data_loader\nfrom .kwargs_handlers import DistributedDataParallelKwargs, GradScalerKwargs, InitProcessGroupKwargs, KwargsHandler\nfrom .optimizer import AcceleratedOptimizer\nfrom .state import AcceleratorState, DistributedType, is_deepspeed_available\nfrom .utils import (\n DeepSpeedPlugin,\n RNGType,\n convert_outputs_to_fp32,\n extract_model_from_parallel,\n gather,\n pad_across_processes,\n save,\n wait_for_everyone,\n)\n\n\nif is_deepspeed_available():\n import deepspeed\n from .deepspeed_utils import DeepSpeedEngineWrapper, DeepSpeedOptimizerWrapper\n\nimport logging\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass Accelerator:\n \"\"\"\n Creates an instance of an accelerator for distributed training (on multi-GPU, TPU) or mixed precision training.\n\n Args:\n device_placement (:obj:`bool`, `optional`, defaults to :obj:`True`):\n Whether or not the accelerator should put objects on device (tensors yielded by the dataloader, model,\n etc...).\n split_batches (:obj:`bool`, `optional`, defaults to :obj:`False`):\n Whether or not the accelerator should split the batches yielded by the dataloaders across the devices. If\n :obj:`True` the actual batch size used will be the same on any kind of distributed processes, but it must\n be a round multiple of the :obj:`num_processes` you are using. If :obj:`False`, actual batch size used will\n be the one set in your script multiplied by the number of processes.\n mixed_precision (:obj:`str`, `optional`):\n Whether or not to use mixed precision training (fp16 or bfloat16). Choose from 'no','fp16','bf16'. Will\n default to the value in the environment variable :obj:`MIXED_PRECISION`, which will use the default value\n in the accelerate config of the current system or the flag passed with the :obj:`accelerate.launch`\n command. 'fp16' requires pytorch 1.6 or higher. 'bf16' requires pytorch 1.10 or higher.\n cpu (:obj:`bool`, `optional`):\n Whether or not to force the script to execute on CPU. Will ignore GPU available if set to :obj:`True` and\n force the execution on one process only.\n deepspeed_plugin (:obj:`DeepSpeedPlugin`, `optional`):\n Tweak your DeepSpeed related args using this argument. This argument is optional and can be configured\n directly using `accelerate config`\n rng_types (list of :obj:`str` or :class:`~accelerate.utils.RNGType`):\n The list of random number generators to synchronize at the beginning of each iteration in your prepared\n dataloaders. 
Should be one or several of:\n\n - :obj:`\"torch\"`: the base torch random number generator\n - :obj:`\"cuda\"`: the CUDA random number generator (GPU only)\n - :obj:`\"xla\"`: the XLA random number generator (TPU only)\n - :obj:`\"generator\"`: the :obj:`torch.Generator` of the sampler (or batch sampler if there is no sampler in\n your dataloader) or of the iterable dataset (if it exists) if the underlying dataset is of that type.\n\n Will default to :obj:`[\"torch\"]` for PyTorch versions <=1.5.1 and :obj:`[\"generator\"]` for PyTorch versions\n >= 1.6.\n dispatch_batches (:obj:`bool`, `optional`):\n If set to :obj:`True`, the dataloader prepared by the Accelerator is only iterated through on the main\n process and then the batches are split and broadcast to each process. Will default to :obj:`True` for\n :obj:`DataLoader` whose underlying dataset is an :obj:`IterableDataset`, :obj:`False` otherwise.\n kwargs_handlers (list of kwargs handlers, `optional`)\n A list of :obj:`KwargHandler` to customize how the objects related to distributed training or mixed\n precision are created. See :doc:`kwargs` for more information.\n\n Attributes\n\n - **device** (:obj:`torch.device`) -- The device to use.\n - **state** (:class:`~accelerate.AcceleratorState`) -- The distributed setup state.\n \"\"\"\n\n def __init__(\n self,\n device_placement: bool = True,\n split_batches: bool = False,\n fp16: bool = None,\n mixed_precision: str = None,\n cpu: bool = False,\n deepspeed_plugin: DeepSpeedPlugin = None,\n rng_types: Optional[List[Union[str, RNGType]]] = None,\n dispatch_batches: Optional[bool] = None,\n kwargs_handlers: Optional[List[KwargsHandler]] = None,\n ):\n\n if mixed_precision is not None:\n mixed_precision = mixed_precision.lower()\n if mixed_precision not in [\"no\", \"fp16\", \"bf16\"]:\n raise ValueError(\n f\"Unknown mixed_precision mode: {mixed_precision}. Choose between 'no', 'fp16' and 'bf16'.\"\n )\n\n if fp16:\n warnings.warn('fp16=True is deprecated. 
Use mixed_precision=\"fp16\" instead.', DeprecationWarning)\n mixed_precision = \"fp16\"\n\n if deepspeed_plugin is None: # init from env variables\n deepspeed_plugin = DeepSpeedPlugin() if os.environ.get(\"USE_DEEPSPEED\", \"false\") == \"true\" else None\n else:\n assert isinstance(\n deepspeed_plugin, DeepSpeedPlugin\n ), \"`deepspeed_plugin` must be a DeepSpeedPlugin object.\"\n\n # Kwargs handlers\n self.ddp_handler = None\n self.scaler_handler = None\n self.init_handler = None\n if kwargs_handlers is not None:\n for handler in kwargs_handlers:\n assert isinstance(handler, KwargsHandler), f\"Unsupported kwargs handler passed: {handler}.\"\n if isinstance(handler, DistributedDataParallelKwargs):\n if self.ddp_handler is not None:\n raise ValueError(\"You can only pass one `DistributedDataParallelKwargs` in `kwargs_handler`.\")\n else:\n self.ddp_handler = handler\n elif isinstance(handler, GradScalerKwargs):\n if self.scaler_handler is not None:\n raise ValueError(\"You can only pass one `GradScalerKwargs` in `kwargs_handler`.\")\n else:\n self.scaler_handler = handler\n elif isinstance(handler, InitProcessGroupKwargs):\n if self.init_handler is not None:\n raise ValueError(\"You can only pass one `InitProcessGroupKwargs` in `kwargs_handler`.\")\n else:\n self.init_handler = handler\n\n kwargs = self.init_handler.to_kwargs() if self.init_handler is not None else {}\n self.state = AcceleratorState(\n mixed_precision=mixed_precision,\n cpu=cpu,\n deepspeed_plugin=deepspeed_plugin,\n _from_accelerator=True,\n **kwargs,\n )\n\n self.device_placement = device_placement\n self.split_batches = split_batches\n self.dispatch_batches = dispatch_batches\n\n # Mixed precision attributes\n self.scaler = None\n self.native_amp = False\n if self.state.mixed_precision == \"fp16\":\n self.native_amp = version.parse(torch.__version__) >= version.parse(\"1.6\")\n if version.parse(torch.__version__) < version.parse(\"1.6\"):\n raise ValueError(\"fp16 mixed precision requires PyTorch >= 1.6\")\n\n kwargs = self.scaler_handler.to_kwargs() if self.scaler_handler is not None else {}\n self.scaler = torch.cuda.amp.GradScaler(**kwargs)\n elif self.state.mixed_precision == \"bf16\":\n self.native_amp = version.parse(torch.__version__) >= version.parse(\"1.10\")\n if mixed_precision == \"bf16\" and version.parse(torch.__version__) < version.parse(\"1.10\"):\n raise ValueError(\"bf16 mixed precision requires PyTorch >= 1.10\")\n\n kwargs = self.scaler_handler.to_kwargs() if self.scaler_handler is not None else {}\n self.scaler = torch.cuda.amp.GradScaler(**kwargs)\n\n # Internal references to the training objects\n self._optimizers = []\n self._models = []\n\n # RNG Types\n self.rng_types = rng_types\n if self.rng_types is None:\n self.rng_types = [\"torch\"] if version.parse(torch.__version__) <= version.parse(\"1.5.1\") else [\"generator\"]\n\n @property\n def distributed_type(self):\n return self.state.distributed_type\n\n @property\n def num_processes(self):\n return self.state.num_processes\n\n @property\n def process_index(self):\n return self.state.process_index\n\n @property\n def local_process_index(self):\n return self.state.local_process_index\n\n @property\n def device(self):\n return self.state.device\n\n @property\n def is_main_process(self):\n \"\"\"True for one process only.\"\"\"\n return self.process_index == 0\n\n @property\n def is_local_main_process(self):\n \"\"\"True for one process per server.\"\"\"\n return self.local_process_index == 0\n\n @property\n def use_fp16(self):\n return 
self.mixed_precision != \"no\"\n\n @property\n def mixed_precision(self):\n if self.distributed_type == DistributedType.DEEPSPEED:\n if self.state.deepspeed_plugin.deepspeed_config[\"fp16\"][\"enabled\"]:\n mixed_precision = \"fp16\"\n elif self.state.deepspeed_plugin.deepspeed_config[\"bf16\"][\"enabled\"]:\n mixed_precision = \"bf16\"\n else:\n mixed_precision = \"no\"\n else:\n mixed_precision = self.state.mixed_precision\n return mixed_precision\n\n @contextmanager\n def local_main_process_first(self):\n \"\"\"\n Lets the local main process go inside a with block.\n\n The other processes will enter the with block after the main process exits.\n \"\"\"\n yield from self._goes_first(self.is_local_main_process)\n\n @contextmanager\n def main_process_first(self):\n \"\"\"\n Lets the main process go first inside a with block.\n\n The other processes will enter the with block after the main process exits.\n \"\"\"\n yield from self._goes_first(self.is_main_process)\n\n def _goes_first(self, is_main):\n if not is_main:\n self.wait_for_everyone()\n\n yield\n\n if is_main:\n self.wait_for_everyone()\n\n def print(self, *args, **kwargs):\n \"\"\"\n Use in replacement of :obj:`print()` to only print once per server.\n \"\"\"\n if self.is_local_main_process:\n print(*args, **kwargs)\n\n def _prepare_one(self, obj):\n if isinstance(obj, torch.utils.data.DataLoader):\n return self.prepare_data_loader(obj)\n elif isinstance(obj, torch.nn.Module):\n self._models.append(obj)\n return self.prepare_model(obj)\n elif isinstance(obj, torch.optim.Optimizer):\n optimizer = self.prepare_optimizer(obj)\n self._optimizers.append(optimizer)\n return optimizer\n else:\n return obj\n\n def prepare(self, *args):\n \"\"\"\n Prepare all objects passed in :obj:`args` for distributed training and mixed precision, then return them in the\n same order.\n\n Accepts the following type of objects:\n\n - :obj:`torch.utils.data.DataLoader`: PyTorch Dataloader\n - :obj:`torch.nn.Module`: PyTorch Module\n - :obj:`torch.optim.Optimizer`: PyTorch Optimizer\n \"\"\"\n # On TPUs, putting the model on the XLA device will create new parameters, so the corresponding optimizer will\n # have parameters disconnected from the model (so no training :-( ).\n # If the model and optimizer have parameters on different devices we raise an error.\n if self.distributed_type == DistributedType.TPU:\n model_device, optimizer_device = self._get_devices()\n if model_device is not None and optimizer_device is not None and model_device != optimizer_device:\n raise ValueError(\n \"The model and the optimizer parameters are not on the same device, which probably means you \"\n \"created an optimizer around your model **before** putting on the device. Make sure the line \"\n \"model.to(device) is before the optimizer creation in your script or remove it entirely and use \"\n \"the flag default value for `devicement_placement` in your `Accelerator` to let it handle that \"\n \"part for you.\"\n )\n\n # If we're dealing with device placement, this deals with that by...\n tpu_should_fix_optimizer = self.device_placement and self.distributed_type == DistributedType.TPU\n if tpu_should_fix_optimizer:\n # 1. grabbing old model parameters\n old_named_params = self._get_named_parameters(*args)\n\n if self.distributed_type == DistributedType.DEEPSPEED:\n result = self._prepare_deepspeed(*args)\n else:\n result = tuple(self._prepare_one(obj) for obj in args)\n\n if tpu_should_fix_optimizer:\n # 2. 
grabbing new model parameters\n new_named_params = self._get_named_parameters(*result)\n # 3. building a map from the first to the second\n mapping = {p: new_named_params[n] for n, p in old_named_params.items()}\n # 4. using that map to update the parameters of the optimizer\n for obj in result:\n if isinstance(obj, torch.optim.Optimizer):\n obj._switch_parameters(mapping)\n\n return result if len(result) > 1 else result[0]\n\n def prepare_model(self, model):\n if self.device_placement:\n model = model.to(self.device)\n if self.distributed_type == DistributedType.MULTI_GPU:\n kwargs = self.ddp_handler.to_kwargs() if self.ddp_handler is not None else {}\n model = torch.nn.parallel.DistributedDataParallel(\n model, device_ids=[self.local_process_index], output_device=self.local_process_index, **kwargs\n )\n elif self.distributed_type == DistributedType.MULTI_CPU:\n kwargs = self.ddp_handler.to_kwargs() if self.ddp_handler is not None else {}\n model = torch.nn.parallel.DistributedDataParallel(model, **kwargs)\n if self.native_amp:\n if self.mixed_precision == \"fp16\" and version.parse(torch.__version__) >= version.parse(\"1.10\"):\n model.forward = torch.cuda.amp.autocast(dtype=torch.float16)(model.forward)\n elif self.mixed_precision == \"bf16\":\n model.forward = torch.cuda.amp.autocast(dtype=torch.bfloat16)(model.forward)\n else:\n model.forward = torch.cuda.amp.autocast()(model.forward)\n model.forward = convert_outputs_to_fp32(model.forward)\n return model\n\n def _prepare_deepspeed(self, *args):\n\n deepspeed_plugin = self.state.deepspeed_plugin\n self.deepspeed_config = deepspeed_plugin.deepspeed_config\n\n batch_sizes = [obj.batch_size for obj in args if hasattr(obj, \"batch_size\")]\n if len(batch_sizes) == 0:\n raise ValueError(\n \"You must specify a training or evaluation dataloader in `accelerate.prepare()` when using DeepSpeed.\"\n )\n\n batch_size_per_device = min(batch_sizes) if deepspeed_plugin.is_train_batch_min else max(batch_sizes)\n if len(batch_sizes) > 1:\n logger.info(\n f\"Since you passed both train and evaluation dataloader, `is_train_batch_min` (here \\\n {deepspeed_plugin.is_train_batch_min} will decide the `train_batch_size` ({batch_size_per_device}).\"\n )\n\n self.deepspeed_config[\"train_batch_size\"] = (\n batch_size_per_device * deepspeed_plugin.gradient_accumulation_steps * self.num_processes\n )\n\n result = [self._prepare_one(obj) if isinstance(obj, torch.utils.data.DataLoader) else obj for obj in args]\n\n model = None\n optimizer = None\n for obj in result:\n if isinstance(obj, torch.nn.Module):\n model = obj\n elif isinstance(obj, (torch.optim.Optimizer, dict)):\n optimizer = obj\n\n if deepspeed_plugin.auto_opt_mapping:\n is_adam = isinstance(optimizer, torch.optim.Adam)\n is_adamw = isinstance(optimizer, torch.optim.AdamW)\n if (is_adam or is_adamw) and deepspeed_plugin.offload_optimizer_device == \"cpu\":\n defaults = optimizer.defaults\n params = []\n for group in optimizer.param_groups:\n params.extend(group[\"params\"])\n\n optimizer = deepspeed.ops.adam.DeepSpeedCPUAdam(\n params,\n lr=defaults[\"lr\"],\n bias_correction=True,\n betas=defaults[\"betas\"],\n eps=defaults[\"eps\"],\n weight_decay=defaults[\"weight_decay\"],\n amsgrad=defaults[\"amsgrad\"],\n adamw_mode=is_adamw,\n )\n\n # useful when only eval_dataloader is given into `accelerator.prepare()`\n if model is not None:\n engine = DeepSpeedEngineWrapper(\n args=None,\n model=model,\n optimizer=optimizer,\n config_params=self.deepspeed_config,\n dist_init_required=False,\n )\n for i 
in range(len(result)):\n if isinstance(result[i], torch.nn.Module):\n result[i] = engine\n elif isinstance(result[i], torch.optim.Optimizer):\n result[i] = DeepSpeedOptimizerWrapper(engine.optimizer, engine)\n self.deepspeed_engine = engine # pointing for deepspeed_engine.backward()\n self._models.append(engine)\n self._optimizers.append(engine.optimizer)\n assert (\n len(self._models) == 1\n ), \"You can't use same `Accelerator()` instance with 2 models when using DeepSpeed\"\n\n if self.distributed_type == DistributedType.DEEPSPEED:\n assert hasattr(\n self, \"deepspeed_engine\"\n ), \"You need to pass the model along the optimizer when using Deepspeed.\"\n\n return tuple(result)\n\n def prepare_data_loader(self, data_loader):\n return prepare_data_loader(\n data_loader,\n self.device,\n num_processes=self.num_processes,\n process_index=self.process_index,\n split_batches=self.split_batches,\n put_on_device=self.device_placement,\n rng_types=self.rng_types.copy(),\n dispatch_batches=self.dispatch_batches,\n )\n\n def prepare_optimizer(self, optimizer):\n return AcceleratedOptimizer(optimizer, device_placement=self.device_placement, scaler=self.scaler)\n\n def backward(self, loss, **kwargs):\n \"\"\"\n Use :obj:`accelerator.backward(loss)` in lieu of :obj:`loss.backward()`.\n \"\"\"\n if self.distributed_type == DistributedType.DEEPSPEED:\n self.deepspeed_engine.backward(loss, **kwargs)\n elif self.scaler is not None:\n self.scaler.scale(loss).backward(**kwargs)\n else:\n loss.backward(**kwargs)\n\n def unscale_gradients(self, optimizer=None):\n \"\"\"\n Unscale the gradients in mixed precision training with AMP. This is a noop in all other settings.\n\n Args:\n optimizer (:obj:`torch.optim.Optimizer` or :obj:`List[torch.optim.Optimizer]`, `optional`):\n The optimizer(s) for which to unscale gradients. If not set, will unscale gradients on all optimizers\n that were passed to :meth:`~accelerate.Accelerator.prepare`.\n \"\"\"\n if self.state.use_fp16 and self.native_amp:\n if optimizer is None:\n # TODO: this unscales all optimizers where we should only unscale the one where parameters are.\n optimizer = self._optimizers\n elif not isinstance(optimizer, (tuple, list)):\n optimizer = [optimizer]\n for opt in optimizer:\n while isinstance(opt, AcceleratedOptimizer):\n opt = opt.optimizer\n self.scaler.unscale_(opt)\n\n def clip_grad_norm_(self, parameters, max_norm, norm_type=2):\n \"\"\"\n Should be used in place of :func:`torch.nn.utils.clip_grad_norm_`.\n \"\"\"\n self.unscale_gradients()\n torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=norm_type)\n\n def clip_grad_value_(self, parameters, clip_value):\n \"\"\"\n Should be used in place of :func:`torch.nn.utils.clip_grad_value_`.\n \"\"\"\n self.unscale_gradients()\n torch.nn.utils.clip_grad_value_(parameters, clip_value)\n\n def gather(self, tensor):\n \"\"\"\n Gather the values in `tensor` accross all processes and concatenate them on the first dimension. Useful to\n regroup the predictions from all processes when doing evaluation.\n\n Note:\n This gather happens in all processes.\n\n Args:\n tensor (:obj:`torch.Tensor`, or a nested tuple/list/dictionary of :obj:`torch.Tensor`):\n The tensors to gather across all processes.\n\n Returns:\n :obj:`torch.Tensor`, or a nested tuple/list/dictionary of :obj:`torch.Tensor`: The gathered tensor(s). 
Note\n that the first dimension of the result is `num_processes` multiplied by the first dimension of the input\n tensors.\n \"\"\"\n return gather(tensor)\n\n def pad_across_processes(self, tensor, dim=0, pad_index=0, pad_first=False):\n \"\"\"\n Recursively pad the tensors in a nested list/tuple/dictionary of tensors from all devices to the same size so\n they can safely be gathered.\n\n Args:\n tensor (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to gather.\n dim (:obj:`int`, `optional`, defaults to 0):\n The dimension on which to pad.\n pad_index (:obj:`int`, `optional`, defaults to 0):\n The value with which to pad.\n pad_first (:obj:`bool`, `optional`, defaults to :obj:`False`):\n Whether to pad at the beginning or the end.\n \"\"\"\n return pad_across_processes(tensor, dim=dim, pad_index=pad_index, pad_first=pad_first)\n\n def unwrap_model(self, model):\n \"\"\"\n Unwraps the :obj:`model` from the additional layer possible added by :meth:`~accelerate.Accelerator.prepare`.\n Useful before saving the model.\n\n Args:\n model (:obj:`torch.nn.Module`):\n The model to unwrap.\n \"\"\"\n return extract_model_from_parallel(model)\n\n def wait_for_everyone(self):\n \"\"\"\n Will stop the execution of the current process until every other process has reached that point (so this does\n nothing when the script is only run in one process). Useful to do before saving a model.\n \"\"\"\n wait_for_everyone()\n\n def save(self, obj, f):\n \"\"\"\n Save the object passed to disk once per machine. Use in place of :obj:`torch.save`.\n\n Args:\n obj: The object to save.\n f (:obj:`str` or :obj:`os.PathLike`):\n Where to save the content of :obj:`obj`.\n \"\"\"\n save(obj, f)\n\n def free_memory(self):\n \"\"\"\n Will release all references to the internal objects stored and call the garbage collector. 
You should call this\n method between two trainings with different models/optimizers.\n \"\"\"\n self._optimizers = []\n self._models = []\n self.deepspeed_engine = None\n gc.collect()\n torch.cuda.empty_cache()\n\n def _get_named_parameters(self, *args):\n named_parameters = {}\n for obj in args:\n if isinstance(obj, torch.nn.Module):\n obj = extract_model_from_parallel(obj)\n named_parameters.update({n: p for n, p in obj.named_parameters()})\n return named_parameters\n\n def _get_devices(self, *args):\n model_device = None\n optimizer_device = None\n for obj in args:\n # Loop through model parameters and stop at the first once we have its device.\n if isinstance(obj, torch.nn.Module):\n for param in obj.parameters():\n model_device = param.device\n break\n # Loop through optimizer parameters groups and stop at the first once we have its device.\n if isinstance(obj, torch.optim.Optimizer):\n for param_group in obj.param_groups:\n if len(param_group[\"params\"]) > 0:\n optimizer_device = param_group[\"params\"][0].device\n break\n return (model_device, optimizer_device)\n\n def get_state_dict(self, model):\n is_zero_3 = False\n if is_deepspeed_available():\n if isinstance(model, DeepSpeedEngineWrapper) and self.distributed_type == DistributedType.DEEPSPEED:\n is_zero_3 = self.state.deepspeed_plugin.zero_stage == 3\n\n if is_zero_3:\n state_dict = model._zero3_consolidated_fp16_state_dict()\n else:\n model = self.unwrap_model(model)\n state_dict = model.state_dict()\n\n for k in state_dict:\n if state_dict[k].dtype == torch.float16:\n state_dict[k] = state_dict[k].float()\n\n return state_dict\n\n @contextmanager\n def autocast(self):\n \"\"\"\n Will apply automatic mixed-precision inside the block inside this context manager, if it is enabled. Nothing\n different will happen otherwise.\n \"\"\"\n if self.native_amp:\n if self.mixed_precision == \"fp16\" and version.parse(torch.__version__) >= version.parse(\"1.10\"):\n autocast_context = torch.cuda.amp.autocast(dtype=torch.float16)\n elif self.mixed_precision == \"bf16\":\n autocast_context = torch.cuda.amp.autocast(dtype=torch.bfloat16)\n else:\n autocast_context = torch.cuda.amp.autocast()\n\n autocast_context.__enter__()\n yield\n autocast_context.__exit__()\n else:\n yield\n\n @property\n def optimizer_step_was_skipped(self):\n \"\"\"\n Whether or not the optimizer update was skipped (because of gradient overflow in mixed precision), in which\n case the learning rate should not be changed.\n \"\"\"\n for optimizer in self._optimizers:\n if optimizer.is_overflow:\n return True\n return False\n", "src/accelerate/utils.py": "# Copyright 2021 The HuggingFace Team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport importlib\nimport os\nimport random\nfrom collections.abc import Mapping\nfrom dataclasses import dataclass, field\nfrom enum import Enum\nfrom typing import List, Optional, Union\n\nimport numpy as np\nimport torch\n\nfrom packaging import version\n\nfrom .state import AcceleratorState, DistributedType, is_deepspeed_available, is_tpu_available\n\n\nif is_tpu_available():\n import torch_xla.core.xla_model as xm\n\n\ndef is_boto3_available():\n return importlib.util.find_spec(\"boto3\") is not None\n\n\ndef is_sagemaker_available():\n return importlib.util.find_spec(\"sagemaker\") is not None\n\n\nif is_deepspeed_available():\n from deepspeed import DeepSpeedEngine\n\n\nclass RNGType(Enum):\n TORCH = \"torch\"\n CUDA = \"cuda\"\n XLA = \"xla\"\n GENERATOR = \"generator\"\n\n\n@dataclass\nclass TensorInformation:\n shape: torch.Size\n dtype: torch.dtype\n\n\ndef set_seed(seed: int):\n \"\"\"\n Helper function for reproducible behavior to set the seed in ``random``, ``numpy``, ``torch``.\n\n Args:\n seed (:obj:`int`): The seed to set.\n \"\"\"\n random.seed(seed)\n np.random.seed(seed)\n torch.manual_seed(seed)\n torch.cuda.manual_seed_all(seed)\n # ^^ safe to call this function even if cuda is not available\n if is_tpu_available():\n xm.set_rng_state(seed)\n\n\ndef synchronize_rng_state(rng_type: Optional[RNGType] = None, generator: Optional[torch.Generator] = None):\n # Get the proper rng state\n if rng_type == RNGType.TORCH:\n rng_state = torch.get_rng_state()\n elif rng_type == RNGType.CUDA:\n rng_state = torch.cuda.get_rng_state()\n elif rng_type == RNGType.XLA:\n assert is_tpu_available(), \"Can't synchronize XLA seeds on an environment without TPUs.\"\n rng_state = torch.tensor(xm.get_rng_state())\n elif rng_type == RNGType.GENERATOR:\n assert generator is not None, \"Need a generator to synchronize its seed.\"\n rng_state = generator.get_state()\n\n # Broadcast the rng state from device 0 to other devices\n state = AcceleratorState()\n if state.distributed_type == DistributedType.TPU:\n rng_state = xm.mesh_reduce(\"random_seed\", rng_state, lambda x: x[0])\n elif state.distributed_type == DistributedType.MULTI_GPU:\n rng_state = rng_state.to(state.device)\n torch.distributed.broadcast(rng_state, 0)\n rng_state = rng_state.cpu()\n elif state.distributed_type == DistributedType.MULTI_CPU:\n torch.distributed.broadcast(rng_state, 0)\n\n # Set the broadcast rng state\n if rng_type == RNGType.TORCH:\n torch.set_rng_state(rng_state)\n elif rng_type == RNGType.CUDA:\n torch.cuda.set_rng_state(rng_state)\n elif rng_type == RNGType.XLA:\n xm.set_rng_state(rng_state.item())\n elif rng_type == RNGType.GENERATOR:\n generator.set_state(rng_state)\n\n\ndef synchronize_rng_states(rng_types: List[Union[str, RNGType]], generator: Optional[torch.Generator] = None):\n for rng_type in rng_types:\n synchronize_rng_state(RNGType(rng_type), generator=generator)\n\n\ndef honor_type(obj, generator):\n \"\"\"\n Cast a generator to 
the same type as obj (list, tuple or namedtuple)\n \"\"\"\n # There is no direct check whether an object if of type namedtuple sadly, this is a workaround.\n if isinstance(obj, tuple) and hasattr(obj, \"_fields\"):\n # Can instantiate a namedtuple from a generator directly, contrary to a tuple/list.\n return type(obj)(*list(generator))\n return type(obj)(generator)\n\n\ndef is_torch_tensor(tensor):\n return isinstance(tensor, torch.Tensor)\n\n\ndef is_tensor_information(tensor_info):\n return isinstance(tensor_info, TensorInformation)\n\n\ndef recursively_apply(func, data, *args, test_type=is_torch_tensor, error_on_other_type=False, **kwargs):\n \"\"\"\n Recursively apply a function on a data structure that is a nested list/tuple/dictionary of a given base type.\n\n Args:\n func (:obj:`callable`):\n The function to recursively apply.\n data (nested list/tuple/dictionary of :obj:`main_type`):\n The data on which to apply :obj:`func`\n *args:\n Positional arguments that will be passed to :obj:`func` when applied on the unpacked data.\n main_type (:obj:`type`, `optional`, defaults to :obj:`torch.Tensor`):\n The base type of the objects to which apply :obj:`func`.\n error_on_other_type (:obj:`bool`, `optional`, defaults to :obj:`False`):\n Whether to return an error or not if after unpacking :obj:`data`, we get on an object that is not of type\n :obj:`main_type`. If :obj:`False`, the function will leave objects of types different than :obj:`main_type`\n unchanged.\n **kwargs:\n Keyword arguments that will be passed to :obj:`func` when applied on the unpacked data.\n\n Returns:\n The same data structure as :obj:`data` with :obj:`func` applied to every object of type :obj:`main_type`.\n \"\"\"\n if isinstance(data, (tuple, list)):\n return honor_type(\n data,\n (\n recursively_apply(\n func, o, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs\n )\n for o in data\n ),\n )\n elif isinstance(data, Mapping):\n return type(data)(\n {\n k: recursively_apply(\n func, v, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs\n )\n for k, v in data.items()\n }\n )\n elif test_type(data):\n return func(data, *args, **kwargs)\n elif error_on_other_type:\n raise TypeError(\n f\"Can't apply {func.__name__} on object of type {type(data)}, only of nested list/tuple/dicts of objects \"\n f\"that satisfy {test_type.__name__}.\"\n )\n return data\n\n\ndef send_to_device(tensor, device):\n \"\"\"\n Recursively sends the elements in a nested list/tuple/dictionary of tensors to a given device.\n\n Args:\n tensor (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to send to a given device.\n device (:obj:`torch.device`):\n The device to send the data to\n\n Returns:\n The same data structure as :obj:`tensor` with all tensors sent to the proper device.\n \"\"\"\n\n def _send_to_device(t, device):\n return t.to(device)\n\n def _has_to_method(t):\n return hasattr(t, \"to\")\n\n return recursively_apply(_send_to_device, tensor, device, test_type=_has_to_method)\n\n\ndef get_data_structure(data):\n \"\"\"\n Recursively gathers the information needed to rebuild a nested list/tuple/dictionary of tensors.\n\n Args:\n data (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to send to analyze.\n\n Returns:\n The same data structure as :obj:`data` with :class:`~accelerate.utils.TensorInformation` instead of tensors.\n \"\"\"\n\n def _get_data_structure(tensor):\n return TensorInformation(shape=tensor.shape, dtype=tensor.dtype)\n\n return 
recursively_apply(_get_data_structure, data)\n\n\ndef initialize_tensors(data_structure):\n \"\"\"\n Recursively initializes tensors from a nested list/tuple/dictionary of\n :class:`~accelerate.utils.TensorInformation`.\n\n Returns:\n The same data structure as :obj:`data` with tensors instead of :class:`~accelerate.utils.TensorInformation`.\n \"\"\"\n\n def _initialize_tensor(tensor_info):\n return torch.empty(*tensor_info.shape, dtype=tensor_info.dtype)\n\n return recursively_apply(_initialize_tensor, data_structure, test_type=is_tensor_information)\n\n\ndef convert_to_fp32(tensor):\n \"\"\"\n Recursively converts the elements nested list/tuple/dictionary of tensors in FP16/BF16 precision to FP32.\n\n Args:\n tensor (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to convert from FP16/BF16 to FP32.\n\n Returns:\n The same data structure as :obj:`tensor` with all tensors that were in FP16/BF16 precision converted to FP32.\n \"\"\"\n\n def _convert_to_fp32(tensor):\n return tensor.float()\n\n def _is_fp16_bf16_tensor(tensor):\n return hasattr(tensor, \"dtype\") and (\n tensor.dtype == torch.float16\n or (version.parse(torch.__version__) >= version.parse(\"1.10\") and tensor.dtype == torch.bfloat16)\n )\n\n return recursively_apply(_convert_to_fp32, tensor, test_type=_is_fp16_bf16_tensor)\n\n\ndef convert_outputs_to_fp32(model_forward):\n \"\"\"\n Decorator to apply to a function outputing tensors (like a model forward pass) that ensures the outputs in FP16\n precision will be convert back to FP32.\n\n Args:\n model_forward (:obj:`Callable`):\n The function which outputs we want to treat.\n\n Returns:\n The same function as :obj:`model_forward` but with converted outputs.\n \"\"\"\n\n def convert_outputs(*args, **kwargs):\n outputs = model_forward(*args, **kwargs)\n return convert_to_fp32(outputs)\n\n return convert_outputs\n\n\ndef extract_model_from_parallel(model):\n \"\"\"\n Extract a model from its distributed containers.\n\n Args:\n model (:obj:`torch.nn.Module`): The model to extract.\n\n Returns:\n :obj:`torch.nn.Module`: The extracted model.\n \"\"\"\n options = (torch.nn.parallel.DistributedDataParallel, torch.nn.DataParallel)\n if is_deepspeed_available():\n options += (DeepSpeedEngine,)\n\n while isinstance(model, options):\n model = model.module\n return model\n\n\ndef _tpu_gather(tensor, name=\"gather tensor\"):\n if isinstance(tensor, (list, tuple)):\n return honor_type(tensor, (_tpu_gather(t, name=f\"{name}_{i}\") for i, t in enumerate(tensor)))\n elif isinstance(tensor, Mapping):\n return type(tensor)({k: _tpu_gather(v, name=f\"{name}_{k}\") for k, v in tensor.items()})\n elif not isinstance(tensor, torch.Tensor):\n raise TypeError(f\"Can't gather the values of type {type(tensor)}, only of nested list/tuple/dicts of tensors.\")\n if tensor.ndim == 0:\n tensor = tensor.clone()[None]\n return xm.mesh_reduce(name, tensor, torch.cat)\n\n\ndef _gpu_gather(tensor):\n def _gpu_gather_one(tensor):\n if tensor.ndim == 0:\n tensor = tensor.clone()[None]\n output_tensors = [tensor.clone() for _ in range(torch.distributed.get_world_size())]\n torch.distributed.all_gather(output_tensors, tensor)\n return torch.cat(output_tensors, dim=0)\n\n return recursively_apply(_gpu_gather_one, tensor, error_on_other_type=True)\n\n\n_cpu_gather = _gpu_gather\n\n\ndef gather(tensor):\n \"\"\"\n Recursively gather tensor in a nested list/tuple/dictionary of tensors from all devices.\n\n Args:\n tensor (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to gather.\n\n 
Returns:\n The same data structure as :obj:`tensor` with all tensors sent to the proper device.\n \"\"\"\n if AcceleratorState().distributed_type == DistributedType.TPU:\n return _tpu_gather(tensor, name=\"accelerate.utils.gather\")\n elif AcceleratorState().distributed_type == DistributedType.MULTI_GPU:\n return _gpu_gather(tensor)\n elif AcceleratorState().distributed_type == DistributedType.MULTI_CPU:\n return _cpu_gather(tensor)\n else:\n return tensor\n\n\ndef _gpu_broadcast(data, src=0):\n def _gpu_broadcast_one(tensor, src=0):\n torch.distributed.broadcast(tensor, src=src)\n return tensor\n\n return recursively_apply(_gpu_broadcast_one, data, error_on_other_type=True, src=src)\n\n\ndef _tpu_broadcast(tensor, src=0, name=\"broadcast tensor\"):\n if isinstance(tensor, (list, tuple)):\n return honor_type(tensor, (_tpu_broadcast(t, name=f\"{name}_{i}\") for i, t in enumerate(tensor)))\n elif isinstance(tensor, Mapping):\n return type(tensor)({k: _tpu_broadcast(v, name=f\"{name}_{k}\") for k, v in tensor.items()})\n return xm.mesh_reduce(name, tensor, lambda x: x[src])\n\n\ndef broadcast(tensor, from_process: int = 0):\n \"\"\"\n Recursively broadcast tensor in a nested list/tuple/dictionary of tensors to all devices.\n\n Args:\n tensor (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to gather.\n from_process (:obj:`int`, `optional`, defaults to 0):\n The process from which to send the data\n\n Returns:\n The same data structure as :obj:`tensor` with all tensors broadcasted to the proper device.\n \"\"\"\n if AcceleratorState().distributed_type == DistributedType.TPU:\n return _tpu_broadcast(tensor, src=from_process, name=\"accelerate.utils.broadcast\")\n elif AcceleratorState().distributed_type == DistributedType.MULTI_GPU:\n return _gpu_broadcast(tensor, src=from_process)\n elif AcceleratorState().distributed_type == DistributedType.MULTI_CPU:\n return _gpu_broadcast(tensor, src=from_process)\n else:\n return tensor\n\n\ndef broadcast_object_list(object_list, from_process: int = 0):\n \"\"\"\n Broadcast a list of picklable objects form one process to the others.\n\n Args:\n object_list (list of picklable objects):\n The list of objects to broadcast. 
This list will be modified inplace.\n from_process (:obj:`int`, `optional`, defaults to 0):\n The process from which to send the data.\n\n Returns:\n The same list containing the objects from process 0.\n \"\"\"\n if AcceleratorState().distributed_type == DistributedType.TPU:\n for i, obj in enumerate(object_list):\n object_list[i] = xm.mesh_reduce(\"accelerate.utils.broadcast_object_list\", obj, lambda x: x[from_process])\n elif AcceleratorState().distributed_type == DistributedType.MULTI_GPU:\n torch.distributed.broadcast_object_list(object_list, src=from_process)\n elif AcceleratorState().distributed_type == DistributedType.MULTI_CPU:\n torch.distributed.broadcast_object_list(object_list, src=from_process)\n return object_list\n\n\ndef slice_tensors(data, tensor_slice):\n \"\"\"\n Recursively takes a slice in a nested list/tuple/dictionary of tensors.\n\n Args:\n data (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to slice.\n tensor_slice (:obj:`slice`):\n The slice to take.\n\n Returns:\n The same data structure as :obj:`data` with all the tensors slices.\n \"\"\"\n\n def _slice_tensor(tensor, tensor_slice):\n return tensor[tensor_slice]\n\n return recursively_apply(_slice_tensor, data, tensor_slice)\n\n\ndef find_batch_size(data):\n \"\"\"\n Recursively finds the batch size in a nested list/tuple/dictionary of lists of tensors.\n\n Args:\n data (nested list/tuple/dictionary of :obj:`torch.Tensor`): The data from which to find the batch size.\n\n Returns:\n :obj:`int`: The batch size.\n \"\"\"\n if isinstance(data, (tuple, list)):\n return find_batch_size(data[0])\n elif isinstance(data, Mapping):\n for k in data.keys():\n return find_batch_size(data[k])\n elif not isinstance(data, torch.Tensor):\n raise TypeError(f\"Can only find the batch size of tensors but got {type(data)}.\")\n return data.shape[0]\n\n\ndef concatenate(data, dim=0):\n \"\"\"\n Recursively concatenate the tensors in a nested list/tuple/dictionary of lists of tensors with the same shape.\n\n Args:\n data (nested list/tuple/dictionary of lists of tensors :obj:`torch.Tensor`):\n The data to concatenate.\n dim (:obj:`int`, `optional`, defaults to 0):\n The dimension on which to concatenate.\n\n Returns:\n The same data structure as :obj:`data` with all the tensors concatenated.\n \"\"\"\n if isinstance(data[0], (tuple, list)):\n return honor_type(data[0], (concatenate([d[i] for d in data], dim=dim) for i in range(len(data[0]))))\n elif isinstance(data[0], Mapping):\n return type(data[0])({k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()})\n elif not isinstance(data[0], torch.Tensor):\n raise TypeError(f\"Can only concatenate tensors but got {type(data[0])}\")\n return torch.cat(data, dim=dim)\n\n\ndef pad_across_processes(tensor, dim=0, pad_index=0, pad_first=False):\n \"\"\"\n Recursively pad the tensors in a nested list/tuple/dictionary of tensors from all devices to the same size so they\n can safely be gathered.\n\n Args:\n tensor (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to gather.\n dim (:obj:`int`, `optional`, defaults to 0):\n The dimension on which to pad.\n pad_index (:obj:`int`, `optional`, defaults to 0):\n The value with which to pad.\n pad_first (:obj:`bool`, `optional`, defaults to :obj:`False`):\n Whether to pad at the beginning or the end.\n \"\"\"\n\n def _pad_across_processes(tensor, dim=0, pad_index=0, pad_first=False):\n if dim >= len(tensor.shape):\n return tensor\n\n # Gather all sizes\n size = torch.tensor(tensor.shape, 
device=tensor.device)[None]\n sizes = gather(size).cpu()\n # Then pad to the maximum size\n max_size = max(s[dim] for s in sizes)\n if max_size == tensor.shape[dim]:\n return tensor\n\n old_size = tensor.shape\n new_size = list(old_size)\n new_size[dim] = max_size\n new_tensor = tensor.new_zeros(tuple(new_size)) + pad_index\n if pad_first:\n indices = tuple(\n slice(max_size - old_size[dim], max_size) if i == dim else slice(None) for i in range(len(new_size))\n )\n else:\n indices = tuple(slice(0, old_size[dim]) if i == dim else slice(None) for i in range(len(new_size)))\n new_tensor[indices] = tensor\n return new_tensor\n\n return recursively_apply(\n _pad_across_processes, tensor, error_on_other_type=True, dim=dim, pad_index=pad_index, pad_first=pad_first\n )\n\n\ndef wait_for_everyone():\n \"\"\"\n Introduces a blocking point in the script, making sure all processes have reached this point before continuing.\n\n Warning::\n\n Make sure all processes will reach this instruction otherwise one of your processes will hang forever.\n \"\"\"\n if (\n AcceleratorState().distributed_type == DistributedType.MULTI_GPU\n or AcceleratorState().distributed_type == DistributedType.MULTI_CPU\n or AcceleratorState().distributed_type == DistributedType.DEEPSPEED\n ):\n torch.distributed.barrier()\n elif AcceleratorState().distributed_type == DistributedType.TPU:\n xm.rendezvous(\"accelerate.utils.wait_for_everyone\")\n\n\ndef save(obj, f):\n \"\"\"\n Save the data to disk. Use in place of :obj:`torch.save()`.\n\n Args:\n obj: The data to save\n f: The file (or file-like object) to use to save the data\n \"\"\"\n if AcceleratorState().distributed_type == DistributedType.TPU:\n xm.save(obj, f)\n elif AcceleratorState().local_process_index == 0:\n torch.save(obj, f)\n\n\nclass PrepareForLaunch:\n \"\"\"\n Prepare a function that will launched in a distributed setup.\n\n Args:\n launcher (:obj:`Callable`):\n The function to launch.\n distributed_type (:class:`~accelerate.state.DistributedType`):\n The distributed type to prepare for.\n \"\"\"\n\n def __init__(self, launcher, distributed_type=\"NO\"):\n self.launcher = launcher\n self.distributed_type = DistributedType(distributed_type)\n\n def __call__(self, index, *args):\n if self.distributed_type == DistributedType.MULTI_GPU or self.distributed_type == DistributedType.MULTI_CPU:\n # Prepare the environment for torch.distributed\n os.environ[\"LOCAL_RANK\"] = str(index)\n os.environ[\"RANK\"] = str(index)\n\n self.launcher(*args)\n\n\n@dataclass\nclass DeepSpeedPlugin:\n\n gradient_accumulation_steps: int = field(\n default=None, metadata={\"help\": \"Number of steps to accumulate gradients before updating optimizer states\"}\n )\n zero_stage: int = field(\n default=None,\n metadata={\"help\": \"Possible options are 0,1,2,3; Default will be taken from environment variable\"},\n )\n is_train_batch_min: str = field(\n default=True,\n metadata={\"help\": \"If both train & eval dataloaders are specified, this will decide the train_batch_size\"},\n )\n\n auto_opt_mapping: bool = field(\n default=True,\n metadata={\"help\": \"whether to map torch.adam to deepspeed optimizer version of adam based on config\"},\n )\n\n offload_optimizer_device: bool = field(default=None, metadata={\"help\": \"Possible options are none|cpu|nvme\"})\n\n def __post_init__(self):\n\n if self.gradient_accumulation_steps is None:\n self.gradient_accumulation_steps = int(os.environ.get(\"GRADIENT_ACCUMULATION_STEPS\", 1))\n\n if self.zero_stage is None:\n self.zero_stage = 
int(os.environ.get(\"DEEPSPEED_ZERO_STAGE\", 2))\n\n if self.offload_optimizer_device is None:\n self.offload_optimizer_device = os.environ.get(\"DEEPSPEED_OFFLOAD_OPTIMIZER_DEVICE\", \"none\")\n\n self.deepspeed_config = {\n \"train_batch_size\": None,\n \"gradient_accumulation_steps\": self.gradient_accumulation_steps,\n \"zero_optimization\": {\n \"stage\": self.zero_stage,\n \"offload_optimizer\": {\n \"device\": self.offload_optimizer_device,\n },\n },\n \"steps_per_print\": float(\"inf\"), # this will stop deepspeed from logging @ stdout\n \"zero_allow_untested_optimizer\": True,\n }\n"}
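The utility functions reproduced above (`gather`, `broadcast`, `broadcast_object_list`, `pad_across_processes`) can be combined as in the following minimal sketch. This is illustrative only and not part of the dataset record; it assumes a working `accelerate` install, and on a single process every collective simply returns its input.

```python
# Minimal sketch of the collective utilities shown above; launch with `accelerate launch`
# for multiple processes, or run directly on one process where the calls are no-ops.
import torch
from accelerate import Accelerator
from accelerate.utils import broadcast_object_list, gather, pad_across_processes

accelerator = Accelerator()

# Each rank produces a tensor whose first dimension may differ across processes.
local = torch.randn(2 + accelerator.process_index, 4, device=accelerator.device)

# Pad to a common length along dim 0, then gather onto every process.
padded = pad_across_processes(local, dim=0, pad_index=0)
all_ranks = gather(padded)

# Send a small picklable object from process 0 to all other processes.
meta = broadcast_object_list(["run-config"], from_process=0)
accelerator.print(all_ranks.shape, meta)
```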
|
{"src/accelerate/accelerator.py": [{"type": "function", "name": "Accelerator.save_state", "lines": [565, 578], "signature": "def save_state(self, output_dir: str):", "doc": "Saves the current states of the model, optimizer, scaler, and RNG generators.\n\nArgs:\n output_dir (:obj:`str` or :obj:`os.PathLike`):\n The name of the folder to save all relevant weights and states."}, {"type": "function", "name": "Accelerator.load_state", "lines": [580, 593], "signature": "def load_state(self, input_dir: str):", "doc": "Loads the current states of the model, optimizer, scaler, and RNG generators.\n\nArgs:\n input_dir (:obj:`str` or :obj:`os.PathLike`):\n The name of the folder all relevant weights and states were saved in."}]}
| null |
["tests/test_state_checkpointing.py::CheckpointTest::test_can_resume_training"]
|
[]
|
08101b9dde2b1a9658c2e363e3e9f5663ba06073
|
{"first_commit_time": 1645041954.0, "pr_title": "Add in checkpointing capability", "pr_body": "Closes https://github.com/huggingface/accelerate/issues/171\r\n\r\nThis PR adds in two functions, `Accelerator.save_state` and `Accelerator.load_state`, which will go through and checkpoint the current state of everything accelerator touches.\r\n\r\nI've included a testing script, but this needs to be properly turned into a test, as currently it is just a script with a bunch of asserts. I have tested successfully on:\r\n\r\n- Single GPU\r\n- Single CPU\r\n- Multi-GPU", "pr_timeline": [], "issues": {}}
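The checkpointing functions this PR introduces are described in the record above with the signatures `save_state(output_dir: str)` and `load_state(input_dir: str)`. A hedged usage sketch (the model, optimizer, and directory are placeholder assumptions, not taken from the PR):

```python
# Illustrative checkpointing flow for Accelerator.save_state / Accelerator.load_state.
import os

import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = accelerator.prepare(model, optimizer)

ckpt_dir = "checkpoints/step_0"
os.makedirs(ckpt_dir, exist_ok=True)  # create the folder up front, to be safe
accelerator.save_state(ckpt_dir)      # model, optimizer, scaler and RNG states
# ... continue training, or restart the job ...
accelerator.load_state(ckpt_dir)      # restore everything that was saved
```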
|
|
huggingface/accelerate
| 991
|
https://github.com/huggingface/accelerate/pull/991
|
huggingface__accelerate-991
|
[]
|
5858ac62b4546547fa69ffbf9a7f8bd1d1b1f43c
|
diff --git a/src/accelerate/accelerator.py b/src/accelerate/accelerator.py
index 7075c4dcdb6..ec29ac07bce 100644
--- a/src/accelerate/accelerator.py
+++ b/src/accelerate/accelerator.py
@@ -18,11 +18,13 @@
import shutil
import sys
import warnings
+from collections import OrderedDict
from contextlib import contextmanager
from functools import wraps
-from typing import List, Optional, Union
+from typing import Callable, List, Optional, Union
import torch
+import torch.utils.hooks as hooks
from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
from .data_loader import DataLoaderDispatcher, prepare_data_loader
@@ -403,6 +405,10 @@ def __init__(
self._dataloaders = []
self._custom_objects = []
+ # Hooks
+ self._load_model_state_pre_hook = OrderedDict()
+ self._save_model_state_pre_hook = OrderedDict()
+
# RNG Types
self.rng_types = rng_types
if self.rng_types is None:
@@ -1613,6 +1619,38 @@ def save(self, obj, f):
"""
save(obj, f)
+ def register_save_state_pre_hook(self, hook: Callable[..., None]) -> hooks.RemovableHandle:
+ """
+ Registers a pre hook to be run before `save_checkpoint` is called in [`Accelerator.save_state`].
+
+ Args:
+ hook (`Callable`):
+ A function to be called in [`Accelerator.save_state`] before `save_checkpoint`.
+
+ The hook should have the following signature:
+
+        `hook(models: List[torch.nn.Module], weights: List[Dict[str, torch.Tensor]], output_dir: str) -> None`
+
+        The `models` argument is the list of models saved in the accelerator state under `accelerator._models`, the
+        `weights` argument is the list of state dicts of those models, and the `output_dir` argument is the
+        `output_dir` argument passed to [`Accelerator.save_state`].
+
+ <Tip>
+
+ Should only be used in conjunction with [`Accelerator.register_load_state_pre_hook`]. Can be useful to save
+ configurations in addition to model weights. Can also be used to overwrite model saving with a customized
+        method. In this case, make sure to remove the already saved weights from the weights list.
+
+ </Tip>
+
+ Returns:
+ `torch.utils.hooks.RemovableHandle`: a handle that can be used to remove the added hook by calling
+ `handle.remove()`
+ """
+ handle = hooks.RemovableHandle(self._save_model_state_pre_hook)
+ self._save_model_state_pre_hook[handle.id] = hook
+ return handle
+
def save_state(self, output_dir: str = None, **save_model_func_kwargs):
"""
Saves the current states of the model, optimizer, scaler, RNG generators, and registered objects to a folder.
@@ -1699,6 +1737,11 @@ def save_state(self, output_dir: str = None, **save_model_func_kwargs):
elif self.distributed_type not in [DistributedType.MEGATRON_LM]:
schedulers = self._schedulers
+        # Call model saving hooks that might have been registered with
+        # accelerator.register_save_state_pre_hook
+ for hook in self._save_model_state_pre_hook.values():
+ hook(self._models, weights, output_dir)
+
save_location = save_accelerator_state(
output_dir, weights, optimizers, schedulers, self.state.process_index, self.scaler
)
@@ -1707,6 +1750,37 @@ def save_state(self, output_dir: str = None, **save_model_func_kwargs):
self.project_configuration.iteration += 1
return save_location
+ def register_load_state_pre_hook(self, hook: Callable[..., None]) -> hooks.RemovableHandle:
+ """
+ Registers a pre hook to be run before [`load_checkpoint`] is called in [`Accelerator.load_state`].
+
+ Args:
+ hook (`Callable`):
+ A function to be called in [`Accelerator.load_state`] before `load_checkpoint`.
+
+ The hook should have the following signature:
+
+ `hook(models: List[torch.nn.Module], input_dir: str) -> None`
+
+        The `models` argument is the list of models saved in the accelerator state under `accelerator._models`, and the
+ `input_dir` argument is the `input_dir` argument passed to [`Accelerator.load_state`].
+
+ <Tip>
+
+ Should only be used in conjunction with [`Accelerator.register_save_state_pre_hook`]. Can be useful to load
+ configurations in addition to model weights. Can also be used to overwrite model loading with a customized
+ method. In this case, make sure to remove already loaded models from the models list.
+
+ </Tip>
+
+ Returns:
+ `torch.utils.hooks.RemovableHandle`: a handle that can be used to remove the added hook by calling
+ `handle.remove()`
+ """
+ handle = hooks.RemovableHandle(self._load_model_state_pre_hook)
+ self._load_model_state_pre_hook[handle.id] = hook
+ return handle
+
def load_state(self, input_dir: str, **load_model_func_kwargs):
"""
Loads the current states of the model, optimizer, scaler, RNG generators, and registered objects.
@@ -1769,6 +1843,11 @@ def load_state(self, input_dir: str, **load_model_func_kwargs):
elif self.distributed_type not in [DistributedType.MEGATRON_LM]:
schedulers = self._schedulers
+ # Call model loading hooks that might have been registered with
+        # accelerator.register_load_state_pre_hook
+ for hook in self._load_model_state_pre_hook.values():
+ hook(models, input_dir)
+
load_accelerator_state(
input_dir, models, optimizers, schedulers, self.state.process_index, self.scaler, **load_model_func_kwargs
)
|
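Before the accompanying test diff, a hedged sketch of the two registration methods added in the patch above. The hook bodies and the `extra.json` file are illustrative assumptions; the tests that follow exercise the same mechanism with a config file.

```python
# Registering save/load state pre-hooks and detaching them via the returned handles.
import json
import os

from accelerate import Accelerator

accelerator = Accelerator()

def save_extra_config(models, weights, output_dir):
    # Persist extra metadata next to the checkpoint files.
    with open(os.path.join(output_dir, "extra.json"), "w") as f:
        json.dump({"n_models": len(models)}, f)

def load_extra_config(models, input_dir):
    # Read the metadata back before the model weights are loaded.
    with open(os.path.join(input_dir, "extra.json")) as f:
        print(json.load(f))

save_handle = accelerator.register_save_state_pre_hook(save_extra_config)
load_handle = accelerator.register_load_state_pre_hook(load_extra_config)
# accelerator.save_state(...) / accelerator.load_state(...) will now call the hooks first.
save_handle.remove()  # hooks can be removed again via the returned RemovableHandle
load_handle.remove()
```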
diff --git a/tests/test_accelerator.py b/tests/test_accelerator.py
index 19d6c1655b4..511d4daae62 100644
--- a/tests/test_accelerator.py
+++ b/tests/test_accelerator.py
@@ -1,3 +1,6 @@
+import json
+import os
+import tempfile
import unittest
import torch
@@ -17,6 +20,15 @@ def create_components():
return model, optimizer, scheduler, train_dl, valid_dl
+def get_signature(model):
+ return (model.weight.abs().sum() + model.bias.abs().sum()).item()
+
+
+def load_random_weights(model):
+ state = torch.nn.Linear(*tuple(model.weight.T.shape)).state_dict()
+ model.load_state_dict(state)
+
+
class AcceleratorTester(unittest.TestCase):
def test_prepared_objects_are_referenced(self):
accelerator = Accelerator()
@@ -49,3 +61,83 @@ def test_free_memory_dereferences_prepared_components(self):
self.assertTrue(len(accelerator._schedulers) == 0)
self.assertTrue(len(accelerator._dataloaders) == 0)
AcceleratorState._reset_state()
+
+ def test_save_load_model(self):
+ accelerator = Accelerator()
+ model, optimizer, scheduler, train_dl, valid_dl = create_components()
+ accelerator.prepare(model, optimizer, scheduler, train_dl, valid_dl)
+
+ model_signature = get_signature(model)
+
+ with tempfile.TemporaryDirectory() as tmpdirname:
+ accelerator.save_state(tmpdirname)
+
+ # make sure random weights don't match
+ load_random_weights(model)
+ self.assertTrue(abs(model_signature - get_signature(model)) > 1e-3)
+
+ # make sure loaded weights match
+ accelerator.load_state(tmpdirname)
+ self.assertTrue(abs(model_signature - get_signature(model)) < 1e-3)
+
+ def test_save_load_model_with_hooks(self):
+ accelerator = Accelerator()
+ model, optimizer, scheduler, train_dl, valid_dl = create_components()
+ accelerator.prepare(model, optimizer, scheduler, train_dl, valid_dl)
+
+ model_signature = get_signature(model)
+
+ # saving hook
+ def save_config(models, weights, output_dir):
+ config = {"class_name": models[0].__class__.__name__}
+
+ with open(os.path.join(output_dir, "data.json"), "w") as f:
+ json.dump(config, f)
+
+ # loading hook
+ def load_config(models, input_dir):
+ with open(os.path.join(input_dir, "data.json"), "r") as f:
+ config = json.load(f)
+
+ models[0].class_name = config["class_name"]
+
+ save_hook = accelerator.register_save_state_pre_hook(save_config)
+ load_hook = accelerator.register_load_state_pre_hook(load_config)
+
+ with tempfile.TemporaryDirectory() as tmpdirname:
+ accelerator.save_state(tmpdirname)
+
+ # make sure random weights don't match with hooks
+ load_random_weights(model)
+ self.assertTrue(abs(model_signature - get_signature(model)) > 1e-3)
+
+ # random class name to verify correct one is loaded
+ model.class_name = "random"
+
+ # make sure loaded weights match with hooks
+ accelerator.load_state(tmpdirname)
+ self.assertTrue(abs(model_signature - get_signature(model)) < 1e-3)
+
+            # model.class_name is loaded from config
+ self.assertTrue(model.class_name == model.__class__.__name__)
+
+ # remove hooks
+ save_hook.remove()
+ load_hook.remove()
+
+ with tempfile.TemporaryDirectory() as tmpdirname:
+ accelerator.save_state(tmpdirname)
+
+ # make sure random weights don't match with hooks removed
+ load_random_weights(model)
+ self.assertTrue(abs(model_signature - get_signature(model)) > 1e-3)
+
+ # random class name to verify correct one is loaded
+ model.class_name = "random"
+
+ # make sure loaded weights match with hooks removed
+ accelerator.load_state(tmpdirname)
+ self.assertTrue(abs(model_signature - get_signature(model)) < 1e-3)
+
+            # model.class_name is NOT loaded from config
+ self.assertTrue(model.class_name != model.__class__.__name__)
| 2023-01-19T20:16:40
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
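The evaluation report format shown in the README above can be post-processed with a few lines of Python. This is a sketch, not part of the README; the report file name is a hypothetical placeholder and depends on your `run_id` and predictions file.

```python
# Compute the resolve rate from an evaluation report with the keys shown above.
import json

with open("FEABench_v1_Gold.report.json") as f:  # hypothetical file name for your run
    report = json.load(f)

rate = report["resolved_instances"] / report["total_instances"]
print(f"resolved {report['resolved_instances']}/{report['total_instances']} ({rate:.1%})")
```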
|
{"src/accelerate/accelerator.py": "# Copyright 2021 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport contextlib\nimport math\nimport os\nimport shutil\nimport sys\nimport warnings\nfrom contextlib import contextmanager\nfrom functools import wraps\nfrom typing import List, Optional, Union\n\nimport torch\n\nfrom .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state\nfrom .data_loader import DataLoaderDispatcher, prepare_data_loader\nfrom .logging import get_logger\nfrom .optimizer import AcceleratedOptimizer\nfrom .scheduler import AcceleratedScheduler\nfrom .state import AcceleratorState, GradientState, parse_flag_from_env\nfrom .tracking import LOGGER_TYPE_TO_CLASS, GeneralTracker, filter_trackers\nfrom .utils import (\n MODEL_NAME,\n DeepSpeedPlugin,\n DistributedDataParallelKwargs,\n DistributedType,\n DynamoBackend,\n FullyShardedDataParallelPlugin,\n GradScalerKwargs,\n InitProcessGroupKwargs,\n KwargsHandler,\n LoggerType,\n MegatronLMPlugin,\n PrecisionType,\n ProjectConfiguration,\n RNGType,\n compare_versions,\n convert_outputs_to_fp32,\n extract_model_from_parallel,\n gather,\n get_pretty_name,\n is_bf16_available,\n is_deepspeed_available,\n is_megatron_lm_available,\n is_torch_version,\n is_tpu_available,\n pad_across_processes,\n recursively_apply,\n reduce,\n release_memory,\n save,\n wait_for_everyone,\n)\n\n\nif is_deepspeed_available():\n import deepspeed\n\n from .utils import (\n DeepSpeedEngineWrapper,\n DeepSpeedOptimizerWrapper,\n DeepSpeedSchedulerWrapper,\n DummyOptim,\n DummyScheduler,\n )\n\nif is_megatron_lm_available():\n from .utils import (\n MegatronEngine,\n MegatronLMDummyDataLoader,\n MegatronLMDummyScheduler,\n MegatronLMOptimizerWrapper,\n MegatronLMSchedulerWrapper,\n megatron_lm_initialize,\n megatron_lm_prepare_data_loader,\n megatron_lm_prepare_model,\n megatron_lm_prepare_optimizer,\n megatron_lm_prepare_scheduler,\n )\n\nif is_torch_version(\">\", \"1.10.0\"):\n from torch.distributed.algorithms.join import Join\n\n\nif is_tpu_available(check_device=False):\n import torch_xla.distributed.xla_multiprocessing as xmp\n\n\nif is_torch_version(\"<=\", \"1.13.5\"):\n from torch.optim.lr_scheduler import _LRScheduler as LRScheduler\nelse:\n from torch.optim.lr_scheduler import LRScheduler as LRScheduler\n\nlogger = get_logger(__name__)\n\n\nclass Accelerator:\n \"\"\"\n Creates an instance of an accelerator for distributed training (on multi-GPU, TPU) or mixed precision training.\n\n Args:\n device_placement (`bool`, *optional*, defaults to `True`):\n Whether or not the accelerator should put objects on device (tensors yielded by the dataloader, model,\n etc...).\n split_batches (`bool`, *optional*, defaults to `False`):\n Whether or not the accelerator should split the batches yielded by the dataloaders across the devices. 
If\n `True` the actual batch size used will be the same on any kind of distributed processes, but it must be a\n round multiple of the `num_processes` you are using. If `False`, actual batch size used will be the one set\n in your script multiplied by the number of processes.\n mixed_precision (`str`, *optional*):\n Whether or not to use mixed precision training (fp16 or bfloat16). Choose from 'no','fp16','bf16'. Will\n default to the value in the environment variable `ACCELERATE_MIXED_PRECISION`, which will use the default\n value in the accelerate config of the current system or the flag passed with the `accelerate.launch`\n command. 'fp16' requires pytorch 1.6 or higher. 'bf16' requires pytorch 1.10 or higher.\n gradient_accumulation_steps (`int`, *optional*, default to 1):\n The number of steps that should pass before gradients are accumulated. A number > 1 should be combined with\n `Accelerator.accumulate`.\n cpu (`bool`, *optional*):\n Whether or not to force the script to execute on CPU. Will ignore GPU available if set to `True` and force\n the execution on one process only.\n deepspeed_plugin (`DeepSpeedPlugin`, *optional*):\n Tweak your DeepSpeed related args using this argument. This argument is optional and can be configured\n directly using *accelerate config*\n fsdp_plugin (`FullyShardedDataParallelPlugin`, *optional*):\n Tweak your FSDP related args using this argument. This argument is optional and can be configured directly\n using *accelerate config*\n megatron_lm_plugin (`MegatronLMPlugin`, *optional*):\n Tweak your MegatronLM related args using this argument. This argument is optional and can be configured\n directly using *accelerate config*\n rng_types (list of `str` or [`~utils.RNGType`]):\n The list of random number generators to synchronize at the beginning of each iteration in your prepared\n dataloaders. Should be one or several of:\n\n - `\"torch\"`: the base torch random number generator\n - `\"cuda\"`: the CUDA random number generator (GPU only)\n - `\"xla\"`: the XLA random number generator (TPU only)\n - `\"generator\"`: the `torch.Generator` of the sampler (or batch sampler if there is no sampler in your\n dataloader) or of the iterable dataset (if it exists) if the underlying dataset is of that type.\n\n Will default to `[\"torch\"]` for PyTorch versions <=1.5.1 and `[\"generator\"]` for PyTorch versions >= 1.6.\n log_with (list of `str`, [`~utils.LoggerType`] or [`~tracking.GeneralTracker`], *optional*):\n A list of loggers to be setup for experiment tracking. Should be one or several of:\n\n - `\"all\"`\n - `\"tensorboard\"`\n - `\"wandb\"`\n - `\"comet_ml\"`\n If `\"all\"` is selected, will pick up all available trackers in the environment and initialize them. Can\n also accept implementations of `GeneralTracker` for custom trackers, and can be combined with `\"all\"`.\n project_config (`ProjectConfiguration`, *optional*):\n A configuration for how saving the state can be handled.\n project_dir (`str`, `os.PathLike`, *optional*):\n A path to a directory for storing data such as logs of locally-compatible loggers and potentially saved\n checkpoints.\n dispatch_batches (`bool`, *optional*):\n If set to `True`, the dataloader prepared by the Accelerator is only iterated through on the main process\n and then the batches are split and broadcast to each process. 
Will default to `True` for `DataLoader` whose\n underlying dataset is an `IterableDataset`, `False` otherwise.\n even_batches (`bool`, *optional*, defaults to `True`):\n If set to `True`, in cases where the total batch size across all processes does not exactly divide the\n dataset, samples at the start of the dataset will be duplicated so the batch can be divided equally among\n all workers.\n step_scheduler_with_optimizer (`bool`, *optional`, defaults to `True`):\n Set `True` if the learning rate scheduler is stepped at the same time as the optimizer, `False` if only\n done under certain circumstances (at the end of each epoch, for instance).\n kwargs_handlers (`List[KwargHandler]`, *optional*)\n A list of `KwargHandler` to customize how the objects related to distributed training or mixed precision\n are created. See [kwargs](kwargs) for more information.\n dynamo_backend (`str` or `DynamoBackend`, *optional*, defaults to `\"no\"`):\n Set to one of the possible dynamo backends to optimize your training with torch dynamo.\n\n **Available attributes:**\n\n - **device** (`torch.device`) -- The device to use.\n - **distributed_type** ([`~utils.DistributedType`]) -- The distributed training configuration.\n - **local_process_index** (`int`) -- The process index on the current machine.\n - **mixed_precision** (`str`) -- The configured mixed precision mode.\n - **num_processes** (`int`) -- The total number of processes used for training.\n - **optimizer_step_was_skipped** (`bool`) -- Whether or not the optimizer update was skipped (because of\n gradient overflow in mixed precision), in which\n case the learning rate should not be changed.\n - **process_index** (`int`) -- The overall index of the current process among all processes.\n - **state** ([`~state.AcceleratorState`]) -- The distributed setup state.\n - **sync_gradients** (`bool`) -- Whether the gradients are currently being synced across all processes.\n - **use_distributed** (`bool`) -- Whether the current configuration is for distributed training.\n \"\"\"\n\n def __init__(\n self,\n device_placement: bool = True,\n split_batches: bool = False,\n mixed_precision: Union[PrecisionType, str] = None,\n gradient_accumulation_steps: int = 1,\n cpu: bool = False,\n deepspeed_plugin: DeepSpeedPlugin = None,\n fsdp_plugin: FullyShardedDataParallelPlugin = None,\n megatron_lm_plugin: MegatronLMPlugin = None,\n rng_types: Optional[List[Union[str, RNGType]]] = None,\n log_with: Optional[List[Union[str, LoggerType, GeneralTracker]]] = None,\n project_dir: Optional[Union[str, os.PathLike]] = None,\n project_config: Optional[ProjectConfiguration] = None,\n logging_dir: Optional[Union[str, os.PathLike]] = None,\n dispatch_batches: Optional[bool] = None,\n even_batches: bool = True,\n step_scheduler_with_optimizer: bool = True,\n kwargs_handlers: Optional[List[KwargsHandler]] = None,\n dynamo_backend: Union[DynamoBackend, str] = None,\n ):\n if project_config is not None:\n self.project_configuration = project_config\n else:\n self.project_configuration = ProjectConfiguration(project_dir=project_dir)\n\n if logging_dir is not None:\n warnings.warn(\n \"`logging_dir` is deprecated and will be removed in version 0.18.0 of 🤗 Accelerate. 
Use `project_dir` instead.\",\n FutureWarning,\n )\n self.project_configuration.logging_dir = logging_dir\n if project_dir is not None and self.project_dir is None:\n self.project_configuration.project_dir = project_dir\n if mixed_precision is not None:\n mixed_precision = str(mixed_precision)\n if mixed_precision not in PrecisionType:\n raise ValueError(\n f\"Unknown mixed_precision mode: {mixed_precision}. Choose between {PrecisionType.list()}\"\n )\n\n if dynamo_backend is not None:\n dynamo_backend = DynamoBackend(dynamo_backend.upper())\n\n if deepspeed_plugin is None: # init from env variables\n deepspeed_plugin = (\n DeepSpeedPlugin() if os.environ.get(\"ACCELERATE_USE_DEEPSPEED\", \"false\") == \"true\" else None\n )\n else:\n assert isinstance(\n deepspeed_plugin, DeepSpeedPlugin\n ), \"`deepspeed_plugin` must be an `accelerate.utils.DeepSpeedPlugin` object.\"\n os.environ[\"ACCELERATE_USE_DEEPSPEED\"] = \"true\" # use DeepSpeed if plugin is provided\n if deepspeed_plugin:\n if not is_deepspeed_available():\n raise ImportError(\"DeepSpeed is not installed => run `pip install deepspeed` or build it from source.\")\n if compare_versions(\"deepspeed\", \"<\", \"0.6.5\"):\n raise ImportError(\"DeepSpeed version must be >= 0.6.5. Please update DeepSpeed.\")\n\n mixed_precision = (\n os.environ.get(\"ACCELERATE_MIXED_PRECISION\", \"no\") if mixed_precision is None else mixed_precision\n )\n deepspeed_plugin.set_mixed_precision(mixed_precision)\n deepspeed_plugin.set_deepspeed_weakref()\n\n if os.environ.get(\"ACCELERATE_USE_FSDP\", \"false\") == \"true\" or isinstance(\n fsdp_plugin, FullyShardedDataParallelPlugin\n ):\n if is_torch_version(\"<\", \"1.12.0\"):\n raise ValueError(\"FSDP requires PyTorch >= 1.12.0\")\n\n if fsdp_plugin is None: # init from env variables\n fsdp_plugin = (\n FullyShardedDataParallelPlugin() if os.environ.get(\"ACCELERATE_USE_FSDP\", \"false\") == \"true\" else None\n )\n else:\n if not isinstance(fsdp_plugin, FullyShardedDataParallelPlugin):\n raise TypeError(\"`fsdp_plugin` must be a FullyShardedDataParallelPlugin object.\")\n os.environ[\"ACCELERATE_USE_FSDP\"] = \"true\" # use FSDP if plugin is provided\n\n if megatron_lm_plugin is None: # init from env variables\n megatron_lm_plugin = (\n MegatronLMPlugin() if os.environ.get(\"ACCELERATE_USE_MEGATRON_LM\", \"false\") == \"true\" else None\n )\n else:\n if not isinstance(megatron_lm_plugin, MegatronLMPlugin):\n raise TypeError(\"`megatron_lm_plugin` must be a MegatronLMPlugin object.\")\n os.environ[\"ACCELERATE_USE_MEGATRON_LM\"] = \"true\" # use MegatronLM if plugin is provided\n\n if megatron_lm_plugin:\n if not is_megatron_lm_available():\n raise ImportError(\"Megatron is not installed. 
please build it from source.\")\n\n # Kwargs handlers\n self.ddp_handler = None\n self.scaler_handler = None\n self.init_handler = None\n if kwargs_handlers is not None:\n for handler in kwargs_handlers:\n assert isinstance(\n handler, KwargsHandler\n ), f\"Unsupported kwargs handler passed: {handler}, must be one that inherits `accelerate.utils.KwargsHandler`.\"\n if isinstance(handler, DistributedDataParallelKwargs):\n if self.ddp_handler is not None:\n raise ValueError(\"You can only pass one `DistributedDataParallelKwargs` in `kwargs_handler`.\")\n else:\n self.ddp_handler = handler\n elif isinstance(handler, GradScalerKwargs):\n if self.scaler_handler is not None:\n raise ValueError(\"You can only pass one `GradScalerKwargs` in `kwargs_handler`.\")\n else:\n self.scaler_handler = handler\n elif isinstance(handler, InitProcessGroupKwargs):\n if self.init_handler is not None:\n raise ValueError(\"You can only pass one `InitProcessGroupKwargs` in `kwargs_handler`.\")\n else:\n self.init_handler = handler\n\n kwargs = self.init_handler.to_kwargs() if self.init_handler is not None else {}\n self.state = AcceleratorState(\n mixed_precision=mixed_precision,\n cpu=cpu,\n dynamo_backend=dynamo_backend,\n deepspeed_plugin=deepspeed_plugin,\n fsdp_plugin=fsdp_plugin,\n megatron_lm_plugin=megatron_lm_plugin,\n _from_accelerator=True,\n **kwargs,\n )\n\n trackers = filter_trackers(log_with, self.logging_dir)\n if len(trackers) < 1 and log_with is not None:\n warnings.warn(f\"`log_with={log_with}` was passed but no supported trackers are currently installed.\")\n self.log_with = trackers\n\n if (\n (mixed_precision != \"bf16\")\n and getattr(self.state, \"downcast_bfloat\", False)\n and (self.state.distributedType != DistributedType.TPU)\n ):\n raise ValueError(\"Can only use `downcast_bf16` when using `mixed_precision='bf16'` and on a TPU\")\n\n if gradient_accumulation_steps > 1:\n if self.state.distributed_type == DistributedType.TPU:\n raise NotImplementedError(\n \"Gradient accumulation on TPU is not supported. Pass in `gradient_accumulation_steps=1`\"\n )\n\n self.gradient_accumulation_steps = gradient_accumulation_steps\n self.device_placement = device_placement\n self.split_batches = split_batches\n self.dispatch_batches = dispatch_batches\n if dispatch_batches is True and is_torch_version(\"<\", \"1.8.0\"):\n raise ImportError(\n \"Using `DataLoaderDispatcher` requires PyTorch 1.8.0 minimum. 
You have {torch.__version__}.\"\n )\n self.even_batches = even_batches\n self.step_scheduler_with_optimizer = step_scheduler_with_optimizer\n\n # Mixed precision attributes\n self.scaler = None\n self.native_amp = False\n err = \"{mode} mixed precision requires {requirement}\"\n if (\n self.state.mixed_precision == \"fp16\"\n and self.device.type != \"cpu\"\n and self.distributed_type not in (DistributedType.DEEPSPEED, DistributedType.MEGATRON_LM)\n ):\n self.native_amp = True\n if not torch.cuda.is_available() and not parse_flag_from_env(\"ACCELERATE_USE_MPS_DEVICE\"):\n raise ValueError(err.format(mode=\"fp16\", requirement=\"a GPU\"))\n kwargs = self.scaler_handler.to_kwargs() if self.scaler_handler is not None else {}\n if self.distributed_type == DistributedType.FSDP:\n from torch.distributed.fsdp.sharded_grad_scaler import ShardedGradScaler\n\n self.scaler = ShardedGradScaler(**kwargs)\n else:\n self.scaler = torch.cuda.amp.GradScaler(**kwargs)\n elif self.state.mixed_precision == \"bf16\" and self.distributed_type not in (\n DistributedType.DEEPSPEED,\n DistributedType.FSDP,\n DistributedType.MEGATRON_LM,\n ):\n if self.device.type == \"cpu\":\n self.native_amp = is_torch_version(\">=\", \"1.10\")\n else:\n self.native_amp = is_bf16_available(True)\n if mixed_precision == \"bf16\" and not self.native_amp and not is_tpu_available():\n raise ValueError(err.format(mode=\"bf16\", requirement=\"PyTorch >= 1.10 and a supported device.\"))\n\n # Only on the GPU do we care about scaling the gradients\n if torch.cuda.is_available() and self.device.type != \"cpu\":\n kwargs = self.scaler_handler.to_kwargs() if self.scaler_handler is not None else {}\n self.scaler = torch.cuda.amp.GradScaler(**kwargs)\n\n # Start of internal step tracking\n self.step = 0\n self.gradient_state = GradientState()\n\n # Internal references to the training objects\n self._optimizers = []\n self._models = []\n self._schedulers = []\n self._dataloaders = []\n self._custom_objects = []\n\n # RNG Types\n self.rng_types = rng_types\n if self.rng_types is None:\n self.rng_types = [\"generator\"]\n\n @property\n def use_distributed(self):\n \"\"\"\n Whether the Accelerator is configured for distributed training\n \"\"\"\n return self.distributed_type != DistributedType.NO and self.num_processes > 1\n\n @property\n def distributed_type(self):\n return self.state.distributed_type\n\n @property\n def num_processes(self):\n return self.state.num_processes\n\n @property\n def process_index(self):\n return self.state.process_index\n\n @property\n def local_process_index(self):\n return self.state.local_process_index\n\n @property\n def device(self):\n return self.state.device\n\n @property\n def project_dir(self):\n return self.project_configuration.project_dir\n\n @property\n def logging_dir(self):\n return self.project_configuration.logging_dir\n\n @property\n def save_iteration(self):\n return self.project_configuration.iteration\n\n @property\n def is_main_process(self):\n \"\"\"True for one process only.\"\"\"\n return (\n self.process_index == 0 if self.distributed_type != DistributedType.MEGATRON_LM else self.is_last_process\n )\n\n @property\n def is_local_main_process(self):\n \"\"\"True for one process per server.\"\"\"\n return (\n self.local_process_index == 0\n if self.distributed_type != DistributedType.MEGATRON_LM\n else self.is_last_process\n )\n\n @property\n def use_fp16(self):\n return self.mixed_precision != \"no\"\n\n @property\n def is_last_process(self):\n return self.process_index == self.num_processes - 
1\n\n @property\n def mixed_precision(self):\n return self.state.mixed_precision\n\n def on_main_process(func):\n \"\"\"\n A decorator that will run the decorated function on the main process only.\n \"\"\"\n\n @wraps(func)\n def wrapper(self, *args, **kwargs):\n if self.is_main_process or not self.use_distributed:\n return func(self, *args, **kwargs)\n\n return wrapper\n\n def on_local_main_process(func):\n \"\"\"\n A decorator that will run the decorated function on the local main process only.\n \"\"\"\n\n @wraps(func)\n def wrapper(self, *args, **kwargs):\n if self.is_local_main_process or not self.use_distributed:\n return func(self, *args, **kwargs)\n\n return wrapper\n\n def on_last_process(func):\n \"\"\"\n A decorator that will run the decorated function on the last process only.\n \"\"\"\n\n @wraps(func)\n def wrapper(self, *args, **kwargs):\n if self.is_last_process or not self.use_distributed:\n return func(self, *args, **kwargs)\n\n return wrapper\n\n def on_process(process_idx):\n \"\"\"\n A decorator that will run the decorated function on a given process index only.\n \"\"\"\n\n def decorator(func):\n @wraps(func)\n def wrapper(self, *args, **kwargs):\n if self.process_idx == process_idx or not self.use_distributed:\n return func(self, *args, **kwargs)\n\n return wrapper\n\n return decorator\n\n def on_local_process(local_process_idx):\n \"\"\"\n A decorator that will run the decorated function on a given local process index only.\n \"\"\"\n\n def decorator(func):\n @wraps(func)\n def wrapper(self, *args, **kwargs):\n if self.local_process_idx == local_process_idx or not self.use_distributed:\n return func(self, *args, **kwargs)\n\n return wrapper\n\n return decorator\n\n def _goes_first(self, is_main):\n if not is_main:\n self.wait_for_everyone()\n\n yield\n\n if is_main:\n self.wait_for_everyone()\n\n @contextmanager\n def main_process_first(self):\n \"\"\"\n Lets the main process go first inside a with block.\n\n The other processes will enter the with block after the main process exits.\n \"\"\"\n yield from self._goes_first(self.is_main_process)\n\n @contextmanager\n def local_main_process_first(self):\n \"\"\"\n Lets the local main process go inside a with block.\n\n The other processes will enter the with block after the main process exits.\n \"\"\"\n yield from self._goes_first(self.is_local_main_process)\n\n @contextmanager\n def no_sync(self, model):\n \"\"\"\n A context manager to disable gradient synchronizations across DDP processes by calling\n `torch.nn.parallel.DistributedDataParallel.no_sync`.\n\n If `model` is not in DDP, this context manager does nothing\n\n Args:\n model (`torch.nn.Module`):\n PyTorch Module that was prepared with `Accelerator.prepare`\n\n Example:\n\n ```python\n >>> from accelerate import Accelerator\n\n >>> accelerator = Accelerator()\n >>> dataloader, model, optimizer = accelerator.prepare(dataloader, model, optimizer)\n >>> input_a = next(iter(dataloader))\n >>> input_b = next(iter(dataloader))\n\n >>> with accelerator.no_sync():\n ... outputs = model(input_a)\n ... loss = loss_func(outputs)\n ... accelerator.backward(loss)\n ... 
# No synchronization across processes, only accumulate gradients\n >>> outputs = model(input_b)\n >>> accelerator.backward(loss)\n >>> # Synchronization across all processes\n >>> optimizer.step()\n >>> optimizer.zero_grad()\n ```\n \"\"\"\n context = contextlib.nullcontext\n if self.use_distributed:\n context = getattr(model, \"no_sync\", context)\n\n with context():\n yield\n\n def _do_sync(self):\n \"Sets the right `sync_gradients` context and either resets or increases `self.step`\"\n if self.gradient_state.end_of_dataloader:\n self.step = 0\n self.gradient_state._set_sync_gradients(True)\n else:\n self.step += 1\n self.gradient_state._set_sync_gradients((self.step % self.gradient_accumulation_steps) == 0)\n\n @property\n def sync_gradients(self):\n return self.gradient_state.sync_gradients\n\n @contextmanager\n def accumulate(self, model):\n \"\"\"\n A context manager that will lightly wrap around and perform gradient accumulation automatically\n\n Args:\n model (`torch.nn.Module`):\n PyTorch Module that was prepared with `Accelerator.prepare`\n\n Example:\n\n ```python\n >>> from accelerate import Accelerator\n\n >>> accelerator = Accelerator(gradient_accumulation_steps=2)\n >>> dataloader, model, optimizer, scheduler = accelerator.prepare(dataloader, model, optimizer, scheduler)\n\n >>> with accelerator.accumulate():\n ... for input, output in dataloader:\n ... outputs = model(input)\n ... loss = loss_func(outputs)\n ... loss.backward()\n ... optimizer.step()\n ... scheduler.step()\n ... optimizer.zero_grad()\n ```\n \"\"\"\n self._do_sync()\n if self.sync_gradients:\n context = contextlib.nullcontext\n else:\n context = self.no_sync\n\n with context(model):\n yield\n\n @contextmanager\n def join_uneven_inputs(self, joinables, even_batches=None):\n \"\"\"\n A context manager that facilitates distributed training or evaluation on uneven inputs, which acts as a wrapper\n around `torch.distributed.algorithms.join`. This is useful when the total batch size does not evenly divide the\n length of the dataset.\n\n Args:\n joinables (`List[torch.distributed.algorithms.Joinable]`):\n A list of models or optimizers that subclass `torch.distributed.algorithms.Joinable`. Most commonly, a\n PyTorch Module that was prepared with `Accelerator.prepare` for DistributedDataParallel training.\n even_batches (`bool`, *optional*)\n If set, this will override the value of `even_batches` set in the `Accelerator`. If it is not provided,\n the default `Accelerator` value wil be used.\n\n <Tip warning={true}>\n\n `join_uneven_inputs` is only supported for Distributed Data Parallel training on multiple GPUs. For any other\n configuration, this method will have no effect.\n\n </Tip>\n\n <Tip warning={true}>\n\n Overidding `even_batches` will not affect iterable-style data loaders.\n\n </Tip>\n\n Example:\n\n ```python\n >>> from accelerate import Accelerator\n\n >>> accelerator = Accelerator(even_batches=True)\n >>> ddp_model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)\n\n >>> with accelerator.join_uneven_inputs([ddp_model], even_batches=False):\n ... for input, output in dataloader:\n ... outputs = model(input)\n ... loss = loss_func(outputs)\n ... loss.backward()\n ... optimizer.step()\n ... 
optimizer.zero_grad()\n ```\n\n \"\"\"\n if is_torch_version(\"<\", \"1.10.0\"):\n raise ValueError(f\"Joining uneven inputs requires PyTorch >= 1.10.0, You have {torch.__version__}.\")\n\n if self.distributed_type == DistributedType.MULTI_GPU:\n dl_even_batches_values = []\n\n if even_batches is not None:\n iterable_dl_seen = False\n # override value in batch sampler for map-style datasets\n for dl_idx, dl in enumerate(self._dataloaders):\n if isinstance(dl, DataLoaderDispatcher):\n iterable_dl_seen = True\n continue\n dl_even_batches_values.append((dl_idx, dl.batch_sampler.even_batches))\n dl.batch_sampler.even_batches = even_batches\n\n if iterable_dl_seen:\n warnings.warn(\n \"Overridding even_batches is only supported for map-style datasets, yet some dataloaders given were iterable\"\n )\n else:\n even_batches = self.even_batches\n\n enable_join = False if even_batches else True\n try:\n with Join(joinables, enable=enable_join, throw_on_early_termination=False):\n yield\n finally:\n # reset any batch samplers that have been modified\n for dl_idx, even_batches_value in dl_even_batches_values:\n self._dataloaders[dl_idx].batch_sampler.even_batches = even_batches_value\n else:\n # Even when disabled, Join expects models to subclass Joinable, so skip entirely for single process runs\n if self.distributed_type != DistributedType.NO:\n warnings.warn(\n \"Joining uneven inputs is only supported for multi-GPU training, as a result `join_uneven_inputs` will have no effect.\"\n )\n\n with contextlib.nullcontext(joinables):\n yield\n\n def print(self, *args, **kwargs):\n \"\"\"\n Use in replacement of `print()` to only print once per server.\n \"\"\"\n if self.is_local_main_process:\n print(*args, **kwargs)\n\n def _prepare_one(self, obj, first_pass=False, device_placement=None):\n # First pass of preparation: DataLoader, model, optimizer\n if first_pass:\n if isinstance(obj, torch.utils.data.DataLoader):\n return self.prepare_data_loader(obj, device_placement=device_placement)\n elif isinstance(obj, torch.nn.Module):\n return self.prepare_model(obj, device_placement=device_placement)\n elif isinstance(obj, torch.optim.Optimizer):\n optimizer = self.prepare_optimizer(obj, device_placement=device_placement)\n return optimizer\n # Second pass of preparation: LR scheduler (which need the full list of optimizers)\n elif isinstance(obj, LRScheduler):\n scheduler = self.prepare_scheduler(obj)\n return scheduler\n # Return the unprocessed object if previous criteria was not met\n return obj\n\n def _prepare_fsdp(self, *args):\n result = []\n for obj in args:\n if isinstance(obj, torch.nn.Module):\n model = obj\n break\n optimizers = []\n\n self._schedulers = []\n self._models = []\n intermediate_result = []\n for obj in args:\n if isinstance(obj, torch.optim.Optimizer):\n if len(obj.param_groups) > 1:\n logger.warning(\n \"FSDP Warning: When using FSDP, several parameter groups will be conflated into \"\n \"a single one due to nested module wrapping and parameter flattening.\"\n )\n try:\n optimizer = obj.optimizer.__class__(model.parameters(), **obj.optimizer.defaults)\n except TypeError:\n if \"differentiable\" in obj.optimizer.defaults:\n # https://github.com/huggingface/accelerate/issues/801\n defaults = {k: v for k, v in obj.optimizer.defaults.items() if k != \"differentiable\"}\n optimizer = obj.optimizer.__class__(model.parameters(), **defaults)\n else:\n raise\n obj = self.prepare_optimizer(optimizer)\n optimizers.append(obj)\n elif isinstance(obj, torch.nn.Module):\n self._models.append(obj)\n 
intermediate_result.append(obj)\n\n for obj in intermediate_result:\n if isinstance(obj, AcceleratedScheduler):\n obj.optimizer = optimizers\n for i, opt in enumerate(self._optimizers):\n if getattr(obj.scheduler, \"optimizer\", None) == opt.optimizer:\n obj.scheduler.optimizer = optimizers[i]\n obj.optimizers = [optimizers[i]]\n break\n self._schedulers.append(obj)\n result.append(obj)\n self._optimizers = optimizers\n return tuple(result)\n\n def prepare(self, *args, device_placement=None):\n \"\"\"\n Prepare all objects passed in `args` for distributed training and mixed precision, then return them in the same\n order.\n\n Args:\n *args (list of objects):\n Any of the following type of objects:\n\n - `torch.utils.data.DataLoader`: PyTorch Dataloader\n - `torch.nn.Module`: PyTorch Module\n - `torch.optim.Optimizer`: PyTorch Optimizer\n - `torch.optim.lr_scheduler.LRScheduler`: PyTorch LR Scheduler\n\n device_placement (`List[bool]`, *optional*):\n Used to customize whether automatic device placement should be performed for each object passed. Needs\n to be a list of the same length as `args`.\n\n <Tip>\n\n You don't need to prepare a model if you only use it for inference without any kind of mixed precision\n\n </Tip>\n \"\"\"\n if device_placement is None:\n device_placement = [None for _ in args]\n elif self.distributed_type in (DistributedType.DEEPSPEED, DistributedType.MEGATRON_LM):\n raise ValueError(\"You can't customize device placements with DeepSpeed or Megatron-LM.\")\n elif len(device_placement) != len(args):\n raise ValueError(\n f\"`device_placement` should be a list with {len(args)} elements (the number of objects passed).\"\n )\n\n if self.distributed_type == DistributedType.FSDP:\n model_count = 0\n optimizer_present = False\n for obj in args:\n if isinstance(obj, torch.nn.Module):\n model_count += 1\n if isinstance(obj, torch.optim.Optimizer):\n optimizer_present = True\n if model_count > 1 and optimizer_present:\n raise ValueError(\n \"For FSDP to work with multiple models (>1), \"\n \"prepare must be called for all the models before optimizers are created. \"\n \"Then pass the optimizers to the prepare call in the same order as corresponding models.\"\n )\n elif model_count == 1 and optimizer_present:\n logger.warning(\n \"FSDP Warning: When using FSDP, \"\n \"it is efficient and recommended to call prepare for the model before creating the optimizer\"\n )\n\n # On TPUs, putting the model on the XLA device will create new parameters, so the corresponding optimizer will\n # have parameters disconnected from the model (so no training :-( ).\n # If the model and optimizer have parameters on different devices we raise an error.\n if self.distributed_type == DistributedType.TPU:\n model_device, optimizer_device = self._get_devices()\n if model_device is not None and optimizer_device is not None and model_device != optimizer_device:\n raise ValueError(\n \"The model and the optimizer parameters are not on the same device, which probably means you \"\n \"created an optimizer around your model **before** putting on the device. Make sure the line \"\n \"model.to(device) is before the optimizer creation in your script or remove it entirely and use \"\n \"the flag default value for `device_placement` in your `Accelerator` to let it handle that \"\n \"part for you.\"\n )\n\n # If we're dealing with device placement, this deals with that by...\n tpu_should_fix_optimizer = self.device_placement and self.distributed_type == DistributedType.TPU\n if tpu_should_fix_optimizer:\n # 1. 
grabbing old model parameters\n old_named_params = self._get_named_parameters(*args)\n\n if self.distributed_type == DistributedType.DEEPSPEED:\n result = self._prepare_deepspeed(*args)\n elif self.distributed_type == DistributedType.MEGATRON_LM:\n result = self._prepare_megatron_lm(*args)\n else:\n result = tuple(\n self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)\n )\n result = tuple(self._prepare_one(obj, device_placement=d) for obj, d in zip(result, device_placement))\n\n if tpu_should_fix_optimizer:\n # 2. grabbing new model parameters\n new_named_params = self._get_named_parameters(*result)\n # 3. building a map from the first to the second\n mapping = {p: new_named_params[n] for n, p in old_named_params.items()}\n # 4. using that map to update the parameters of the optimizer\n for obj in result:\n if isinstance(obj, torch.optim.Optimizer):\n obj._switch_parameters(mapping)\n\n if self.distributed_type == DistributedType.FSDP and model_count == 1 and optimizer_present:\n result = self._prepare_fsdp(*result)\n\n return result if len(result) > 1 else result[0]\n\n def prepare_model(self, model: torch.nn.Module, device_placement=None):\n \"\"\"\n Prepares a PyTorch model for training in any distributed setup. It is recommended to use\n [`Accelerator.prepare`] instead.\n\n Args:\n model (`torch.nn.Module`):\n A PyTorch model to prepare. You don't need to prepare a model if it is used only for inference without\n any kind of mixed precision\n device_placement (`bool`, *optional*):\n Whether or not to place the model on the proper device. Will default to `self.device_placement`.\n \"\"\"\n if device_placement is None:\n device_placement = self.device_placement and self.distributed_type != DistributedType.FSDP\n self._models.append(model)\n if device_placement:\n model = model.to(self.device)\n if self.state.dynamo_backend != DynamoBackend.NO:\n import torch._dynamo as dynamo\n\n model = dynamo.optimize(self.state.dynamo_backend.value.lower())(model)\n if self.distributed_type == DistributedType.MULTI_GPU:\n if any(p.requires_grad for p in model.parameters()):\n kwargs = self.ddp_handler.to_kwargs() if self.ddp_handler is not None else {}\n model = torch.nn.parallel.DistributedDataParallel(\n model, device_ids=[self.local_process_index], output_device=self.local_process_index, **kwargs\n )\n elif self.distributed_type == DistributedType.FSDP:\n from torch.distributed.fsdp.fully_sharded_data_parallel import FullyShardedDataParallel as FSDP\n\n # Check if the model is already a FSDP model due to `Manual Wrapping` and if so,\n # don't wrap it again\n if type(model) != FSDP:\n self.state.fsdp_plugin.set_auto_wrap_policy(model)\n fsdp_plugin = self.state.fsdp_plugin\n model = FSDP(\n model,\n sharding_strategy=fsdp_plugin.sharding_strategy,\n cpu_offload=fsdp_plugin.cpu_offload,\n auto_wrap_policy=fsdp_plugin.auto_wrap_policy,\n backward_prefetch=fsdp_plugin.backward_prefetch,\n mixed_precision=fsdp_plugin.mixed_precision_policy,\n ignored_modules=fsdp_plugin.ignored_modules,\n device_id=self.device,\n limit_all_gathers=fsdp_plugin.limit_all_gathers,\n )\n self._models[-1] = model\n elif self.distributed_type == DistributedType.MULTI_CPU:\n kwargs = self.ddp_handler.to_kwargs() if self.ddp_handler is not None else {}\n model = torch.nn.parallel.DistributedDataParallel(model, **kwargs)\n if self.native_amp:\n model._original_forward = model.forward\n if self.mixed_precision == \"fp16\" and is_torch_version(\">=\", \"1.10\"):\n model.forward = 
torch.cuda.amp.autocast(dtype=torch.float16)(model.forward)\n elif self.mixed_precision == \"bf16\" and self.distributed_type != DistributedType.TPU:\n model.forward = torch.autocast(device_type=self.device.type, dtype=torch.bfloat16)(model.forward)\n else:\n model.forward = torch.cuda.amp.autocast()(model.forward)\n model.forward = convert_outputs_to_fp32(model.forward)\n if self.distributed_type == DistributedType.TPU and self.state.fork_launched:\n model = xmp.MpModelWrapper(model).to(self.device)\n return model\n\n def _prepare_deepspeed(self, *args):\n\n deepspeed_plugin = self.state.deepspeed_plugin\n\n if deepspeed_plugin.deepspeed_config[\"train_micro_batch_size_per_gpu\"] == \"auto\":\n result = [\n self._prepare_one(obj, first_pass=True) if isinstance(obj, torch.utils.data.DataLoader) else obj\n for obj in args\n ]\n\n batch_sizes = [obj.batch_size for obj in args if hasattr(obj, \"batch_size\")]\n if self.split_batches:\n batch_sizes = [batch_size // self.num_processes for batch_size in batch_sizes]\n if len(batch_sizes) == 0:\n raise ValueError(\n \"When using DeepSpeed `accelerate.prepare()` requires you to pass at least one of training or evaluation dataloaders \"\n \"or alternatively set an integer value in `train_micro_batch_size_per_gpu` in the deepspeed config file\"\n \"or assign integer value to `AcceleratorState().deepspeed_plugin.deepspeed_config['train_micro_batch_size_per_gpu']`.\"\n )\n\n batch_size_per_device = min(batch_sizes) if deepspeed_plugin.is_train_batch_min else max(batch_sizes)\n if len(batch_sizes) > 1:\n logger.info(\n \"Since you passed both train and evaluation dataloader, `is_train_batch_min` (here \"\n f\"{deepspeed_plugin.is_train_batch_min} will decide the `train_batch_size` ({batch_size_per_device}).\"\n )\n else:\n batch_size_per_device = deepspeed_plugin.deepspeed_config[\"train_micro_batch_size_per_gpu\"]\n result = [obj for obj in args]\n\n if self.gradient_accumulation_steps != deepspeed_plugin.deepspeed_config[\"gradient_accumulation_steps\"]:\n logger.info(\n f\"Updating DeepSpeed's gradient accumulation steps to {self.gradient_accumulation_steps} from \"\n f\"{deepspeed_plugin.deepspeed_config['gradient_accumulation_steps']}.\"\n )\n deepspeed_plugin.deepspeed_config[\"gradient_accumulation_steps\"] = self.gradient_accumulation_steps\n config_kwargs = {\n \"train_micro_batch_size_per_gpu\": batch_size_per_device,\n \"train_batch_size\": batch_size_per_device\n * deepspeed_plugin.deepspeed_config[\"gradient_accumulation_steps\"]\n * self.num_processes,\n \"gradient_clipping\": 1.0,\n \"zero_optimization.stage3_gather_16bit_weights_on_model_save\": False,\n }\n\n model = None\n optimizer = None\n scheduler = None\n for obj in result:\n if isinstance(obj, torch.nn.Module):\n model = obj\n elif isinstance(obj, (torch.optim.Optimizer, DummyOptim)):\n optimizer = obj\n elif (isinstance(obj, (LRScheduler, DummyScheduler))) or (\n type(obj).__name__ in deepspeed.runtime.lr_schedules.VALID_LR_SCHEDULES\n ):\n scheduler = obj\n\n if optimizer is not None:\n if \"optimizer\" in deepspeed_plugin.deepspeed_config and not isinstance(optimizer, (DummyOptim)):\n raise ValueError(\n \"You cannot specify an optimizer in the config file and in the code at the same time. 
\"\n \"Please remove the optimizer from the config file or \"\n \"create `accelerate.utils.DummyOptim` in the code.\"\n )\n elif \"optimizer\" not in deepspeed_plugin.deepspeed_config and isinstance(optimizer, (DummyOptim)):\n raise ValueError(\n \"You cannot create a `DummyOptim` without specifying an optimizer in the config file.\"\n )\n\n if isinstance(optimizer, (torch.optim.Optimizer)):\n deepspeed_plugin.deepspeed_config[\"zero_allow_untested_optimizer\"] = True\n\n if scheduler is not None:\n if \"scheduler\" in deepspeed_plugin.deepspeed_config and not isinstance(scheduler, (DummyScheduler)):\n raise ValueError(\n \"You cannot specify a scheduler in the config file and in the code at the same time. \"\n \"Please remove the scheduler from the config file or \"\n \"create `accelerate.utils.DummyScheduler` in the code.\"\n )\n elif \"scheduler\" not in deepspeed_plugin.deepspeed_config and isinstance(scheduler, (DummyScheduler)):\n raise ValueError(\n \"You cannot create a `DummyScheduler` without specifying a scheduler in the config file.\"\n )\n\n if optimizer is not None and scheduler is not None:\n if isinstance(optimizer, (DummyOptim)) and not isinstance(scheduler, (DummyScheduler)):\n raise ValueError(\n \"You can only specify `accelerate.utils.DummyScheduler` in the code when using \"\n \"`accelerate.utils.DummyOptim`.\"\n )\n\n if model is not None:\n if hasattr(model, \"config\") and hasattr(model.config, \"hidden_size\"):\n hidden_size = model.config.hidden_size\n config_kwargs.update(\n {\n \"zero_optimization.reduce_bucket_size\": hidden_size * hidden_size,\n \"zero_optimization.stage3_prefetch_bucket_size\": 0.9 * hidden_size * hidden_size,\n \"zero_optimization.stage3_param_persistence_threshold\": 10 * hidden_size,\n }\n )\n\n if isinstance(optimizer, (DummyOptim)):\n config_kwargs.update(\n {\"optimizer.params.lr\": optimizer.lr, \"optimizer.params.weight_decay\": optimizer.weight_decay}\n )\n if isinstance(scheduler, (DummyScheduler)):\n config_kwargs.update(\n {\n \"scheduler.params.warmup_min_lr\": 0,\n \"scheduler.params.warmup_max_lr\": scheduler.optimizer.lr,\n \"scheduler.params.warmup_num_steps\": scheduler.warmup_num_steps,\n }\n )\n if scheduler.total_num_steps is not None:\n config_kwargs[\"scheduler.params.total_num_steps\"] = (\n math.ceil(scheduler.total_num_steps / self.num_processes)\n if not self.split_batches\n else scheduler.total_num_steps\n )\n deepspeed_plugin.deepspeed_config_process(must_match=False, **config_kwargs)\n self.deepspeed_config = deepspeed_plugin.deepspeed_config\n kwargs = dict(model=model, config_params=self.deepspeed_config)\n if optimizer is not None:\n if isinstance(optimizer, (DummyOptim)):\n kwargs[\"model_parameters\"] = optimizer.params\n else:\n kwargs[\"optimizer\"] = optimizer\n if scheduler is not None:\n if type(scheduler).__name__ in deepspeed.runtime.lr_schedules.VALID_LR_SCHEDULES:\n kwargs[\"lr_scheduler\"] = scheduler\n\n engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)\n if optimizer is not None:\n optimizer = DeepSpeedOptimizerWrapper(optimizer)\n if scheduler is not None:\n if lr_scheduler is None:\n scheduler = AcceleratedScheduler(\n scheduler,\n optimizer,\n step_with_optimizer=self.step_scheduler_with_optimizer,\n split_batches=self.split_batches,\n )\n else:\n scheduler = DeepSpeedSchedulerWrapper(lr_scheduler, optimizer)\n\n for i in range(len(result)):\n if isinstance(result[i], torch.nn.Module):\n result[i] = engine\n elif isinstance(result[i], (torch.optim.Optimizer, DummyOptim)):\n 
result[i] = optimizer\n elif (isinstance(result[i], (LRScheduler, DummyScheduler))) or (\n type(result[i]).__name__ in deepspeed.runtime.lr_schedules.VALID_LR_SCHEDULES\n ):\n result[i] = scheduler\n # pointing for deepspeed_engine_wrapped.backward()\n self.deepspeed_engine_wrapped = DeepSpeedEngineWrapper(engine)\n self._models.append(engine)\n if optimizer is not None:\n self._optimizers.append(optimizer)\n if scheduler is not None:\n self._schedulers.append(scheduler)\n if len(self._models) > 1:\n raise AssertionError(\n \"You can't use same `Accelerator()` instance with multiple models when using DeepSpeed\"\n )\n return tuple(result)\n\n def _prepare_megatron_lm(self, *args):\n megatron_lm_plugin = self.state.megatron_lm_plugin\n if not megatron_lm_plugin.megatron_dataset_flag:\n batch_sizes = [obj.batch_size for obj in args if hasattr(obj, \"batch_size\")]\n if len(batch_sizes) == 0:\n raise ValueError(\n \"You must specify a training or evaluation dataloader in `accelerate.prepare()` when using Megatron-LM.\"\n )\n\n micro_batch_size = min(batch_sizes) if megatron_lm_plugin.is_train_batch_min else max(batch_sizes)\n if len(batch_sizes) > 1:\n logger.info(\n \"Since you passed both train and evaluation dataloader, `is_train_batch_min` (here \"\n f\"{megatron_lm_plugin.is_train_batch_min} will decide the `train_batch_size` ({micro_batch_size}).\"\n )\n else:\n for obj in args:\n if isinstance(obj, MegatronLMDummyDataLoader):\n micro_batch_size = obj.dataset_args[\"micro_batch_size\"]\n break\n\n dp_degree = self.num_processes // (megatron_lm_plugin.tp_degree * megatron_lm_plugin.pp_degree)\n megatron_lm_plugin.set_training_args(micro_batch_size, dp_degree)\n\n model = None\n optimizer = None\n scheduler = None\n is_dummy_scheduler = False\n batch_data = None\n for obj in args:\n if isinstance(obj, torch.utils.data.DataLoader) and batch_data is None:\n batch_data = next(iter(obj))\n if isinstance(obj, torch.nn.Module):\n model = obj\n elif isinstance(obj, (torch.optim.Optimizer)):\n optimizer = obj\n elif isinstance(obj, (LRScheduler, MegatronLMDummyScheduler)):\n scheduler = obj\n\n if model is not None:\n megatron_lm_plugin.set_network_size_args(model, batch_data)\n if optimizer is not None:\n megatron_lm_plugin.set_optimizer_type(optimizer)\n if scheduler is not None:\n is_dummy_scheduler = isinstance(scheduler, MegatronLMDummyScheduler)\n if not is_dummy_scheduler:\n raise ValueError(\n \"You can't use a custom scheduler with Megatron-LM. 
Please use the `accelerate.utils.MegatronLMDummyScheduler` instead.\"\n )\n megatron_lm_plugin.set_scheduler_args(scheduler)\n\n # initialize megatron-lm\n megatron_lm_initialize(self, args_defaults=megatron_lm_plugin.megatron_lm_default_args)\n counter = 0\n result = []\n for obj in args:\n if isinstance(obj, torch.utils.data.DataLoader):\n result.append(megatron_lm_prepare_data_loader(self, obj))\n counter += 1\n elif isinstance(obj, MegatronLMDummyDataLoader):\n if counter == 0:\n obj.set_megatron_data_args()\n dataloaders = megatron_lm_prepare_data_loader(self, obj)\n result.append(dataloaders[counter])\n counter += 1\n else:\n result.append(obj)\n\n if model is not None:\n model = megatron_lm_prepare_model(self)\n if optimizer is not None:\n optimizer = megatron_lm_prepare_optimizer(self, model)\n if scheduler is not None:\n scheduler = megatron_lm_prepare_scheduler(self, optimizer, scheduler)\n\n if model is not None:\n model = MegatronEngine(self, model, optimizer, scheduler)\n if optimizer is not None:\n optimizer = MegatronLMOptimizerWrapper(optimizer)\n if scheduler is not None:\n scheduler = MegatronLMSchedulerWrapper(scheduler, optimizer)\n\n for i in range(len(result)):\n if isinstance(result[i], torch.nn.Module):\n result[i] = model\n elif isinstance(result[i], torch.optim.Optimizer):\n result[i] = optimizer\n elif isinstance(result[i], MegatronLMDummyScheduler):\n result[i] = scheduler\n if model is not None:\n self._models.append(model)\n if optimizer is not None:\n self._optimizers.append(optimizer)\n if scheduler is not None:\n self._schedulers.append(scheduler)\n if len(self._models) > 1:\n raise AssertionError(\n \"You can't use same `Accelerator()` instance with multiple models when using Megatron-LM\"\n )\n return tuple(result)\n\n def prepare_data_loader(self, data_loader: torch.utils.data.DataLoader, device_placement=None):\n \"\"\"\n Prepares a PyTorch DataLoader for training in any distributed setup. It is recommended to use\n [`Accelerator.prepare`] instead.\n\n Args:\n data_loader (`torch.utils.data.DataLoader`):\n A vanilla PyTorch DataLoader to prepare\n device_placement (`bool`, *optional*):\n Whether or not to place the batches on the proper device in the prepared dataloader. Will default to\n `self.device_placement`.\n \"\"\"\n if device_placement is None:\n device_placement = self.device_placement if self.distributed_type != DistributedType.TPU else False\n prepared_data_loader = prepare_data_loader(\n data_loader,\n self.device,\n num_processes=self.num_processes,\n process_index=self.process_index,\n split_batches=self.split_batches,\n put_on_device=device_placement,\n rng_types=self.rng_types.copy(),\n dispatch_batches=self.dispatch_batches,\n even_batches=self.even_batches,\n )\n self._dataloaders.append(prepared_data_loader)\n return prepared_data_loader\n\n def prepare_optimizer(self, optimizer: torch.optim.Optimizer, device_placement=None):\n \"\"\"\n Prepares a PyTorch Optimizer for training in any distributed setup. It is recommended to use\n [`Accelerator.prepare`] instead.\n\n Args:\n optimizer (`torch.optim.Optimizer`):\n A vanilla PyTorch optimizer to prepare\n device_placement (`bool`, *optional*):\n Whether or not to place the optimizer on the proper device. 
Will default to `self.device_placement`.\n \"\"\"\n if device_placement is None:\n device_placement = self.device_placement\n optimizer = AcceleratedOptimizer(optimizer, device_placement=device_placement, scaler=self.scaler)\n self._optimizers.append(optimizer)\n return optimizer\n\n def prepare_scheduler(self, scheduler: LRScheduler):\n \"\"\"\n Prepares a PyTorch Scheduler for training in any distributed setup. It is recommended to use\n [`Accelerator.prepare`] instead.\n\n Args:\n scheduler (`torch.optim.lr_scheduler.LRScheduler`):\n A vanilla PyTorch scheduler to prepare\n \"\"\"\n # We try to find the optimizer associated with `scheduler`, the default is the full list.\n optimizer = self._optimizers\n for opt in self._optimizers:\n if getattr(scheduler, \"optimizer\", None) == opt.optimizer:\n optimizer = opt\n break\n scheduler = AcceleratedScheduler(\n scheduler,\n optimizer,\n step_with_optimizer=self.step_scheduler_with_optimizer,\n split_batches=self.split_batches,\n )\n self._schedulers.append(scheduler)\n return scheduler\n\n def backward(self, loss, **kwargs):\n \"\"\"\n Scales the gradients in accordance to `Accelerator.gradient_accumulation_steps` and calls the correct\n `backward()` based on the configuration.\n\n Should be used in lieu of `loss.backward()`.\n \"\"\"\n if self.distributed_type != DistributedType.DEEPSPEED:\n # deepspeed handles loss scaling by gradient_accumulation_steps in its `backward`\n loss = loss / self.gradient_accumulation_steps\n if self.distributed_type == DistributedType.DEEPSPEED:\n self.deepspeed_engine_wrapped.backward(loss, **kwargs)\n elif self.distributed_type == DistributedType.MEGATRON_LM:\n return\n elif self.scaler is not None:\n self.scaler.scale(loss).backward(**kwargs)\n else:\n loss.backward(**kwargs)\n\n def unscale_gradients(self, optimizer=None):\n \"\"\"\n Unscale the gradients in mixed precision training with AMP. This is a noop in all other settings.\n\n Args:\n optimizer (`torch.optim.Optimizer` or `List[torch.optim.Optimizer]`, *optional*):\n The optimizer(s) for which to unscale gradients. If not set, will unscale gradients on all optimizers\n that were passed to [`~Accelerator.prepare`].\n \"\"\"\n if self.use_fp16 and self.native_amp:\n if optimizer is None:\n # TODO: this unscales all optimizers where we should only unscale the one where parameters are.\n optimizer = self._optimizers\n elif not isinstance(optimizer, (tuple, list)):\n optimizer = [optimizer]\n for opt in optimizer:\n while isinstance(opt, AcceleratedOptimizer):\n opt = opt.optimizer\n self.scaler.unscale_(opt)\n\n def clip_grad_norm_(self, parameters, max_norm, norm_type=2):\n \"\"\"\n Should be used in place of `torch.nn.utils.clip_grad_norm_`.\n\n Returns:\n `torch.Tensor`: Total norm of the parameter gradients (viewed as a single vector).\n\n Example:\n\n ```python\n >>> from accelerate import Accelerator\n\n >>> accelerator = Accelerator(gradient_accumulation_steps=2)\n >>> dataloader, model, optimizer, scheduler = accelerator.prepare(dataloader, model, optimizer, scheduler)\n\n >>> for (input, target) in dataloader:\n ... optimizer.zero_grad()\n ... output = model(input)\n ... loss = loss_func(output, target)\n ... accelerator.backward(loss)\n ... if accelerator.sync_gradients:\n ... accelerator.clip_grad_norm_(model.parameters(), max_grad_norm)\n ... 
optimizer.step()\n ```\n \"\"\"\n if self.distributed_type == DistributedType.FSDP:\n self.unscale_gradients()\n parameters = [p for p in parameters]\n for model in self._models:\n if parameters == [p for p in model.parameters()]:\n return model.clip_grad_norm_(max_norm, norm_type)\n elif self.distributed_type == DistributedType.DEEPSPEED:\n # `accelerator.backward(loss)` is doing that automatically. Therefore, its implementation is not needed\n # We cannot return the gradient norm because DeepSpeed does it.\n return None\n self.unscale_gradients()\n return torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=norm_type)\n\n def clip_grad_value_(self, parameters, clip_value):\n \"\"\"\n Should be used in place of `torch.nn.utils.clip_grad_value_`.\n\n Example:\n\n ```python\n >>> from accelerate import Accelerator\n\n >>> accelerator = Accelerator(gradient_accumulation_steps=2)\n >>> dataloader, model, optimizer, scheduler = accelerator.prepare(dataloader, model, optimizer, scheduler)\n\n >>> for (input, target) in dataloader:\n ... optimizer.zero_grad()\n ... output = model(input)\n ... loss = loss_func(output, target)\n ... accelerator.backward(loss)\n ... if accelerator.sync_gradients:\n ... accelerator.clip_grad_value_(model.parameters(), clip_value)\n ... optimizer.step()\n ```\n \"\"\"\n if self.distributed_type in [DistributedType.DEEPSPEED, DistributedType.FSDP]:\n raise Exception(\"DeepSpeed and FSDP do not support `clip_grad_value_`. Use `clip_grad_norm_` instead.\")\n self.unscale_gradients()\n torch.nn.utils.clip_grad_value_(parameters, clip_value)\n\n def gather(self, tensor):\n \"\"\"\n Gather the values in *tensor* across all processes and concatenate them on the first dimension. Useful to\n regroup the predictions from all processes when doing evaluation.\n\n Note:\n This gather happens in all processes.\n\n Args:\n tensor (`torch.Tensor`, or a nested tuple/list/dictionary of `torch.Tensor`):\n The tensors to gather across all processes.\n\n Returns:\n `torch.Tensor`, or a nested tuple/list/dictionary of `torch.Tensor`: The gathered tensor(s). Note that the\n first dimension of the result is *num_processes* multiplied by the first dimension of the input tensors.\n \"\"\"\n return gather(tensor)\n\n def gather_for_metrics(self, tensor):\n \"\"\"\n Gathers `tensor` and potentially drops duplicates in the last batch if on a distributed system. Should be used\n for gathering the inputs and targets for metric calculation.\n\n Args:\n tensor (`torch.Tensor`, or a nested tuple/list/dictionary of `torch.Tensor`):\n The tensors for calculating metrics across all processes.\n \"\"\"\n tensor = self.gather(tensor)\n if self.use_distributed:\n if self.gradient_state.remainder == -1:\n logger.info(\n \"The used dataset had no length, returning gathered tensors. 
You should drop the remainder yourself.\"\n )\n return tensor\n try:\n # Then see if we're on the last batch of our eval dataloader\n if self.gradient_state.end_of_dataloader and self.gradient_state.remainder > 0:\n # Last batch needs to be truncated on distributed systems as it contains additional samples\n def _adjust_samples(tensor):\n return tensor[: self.gradient_state.remainder]\n\n return recursively_apply(_adjust_samples, tensor)\n else:\n # Not at the end of the dataloader, no need to adjust the tensors\n return tensor\n except:\n # Dataset had no length or raised an error\n return tensor\n return tensor\n\n def reduce(self, tensor, reduction=\"sum\"):\n \"\"\"\n Reduce the values in *tensor* across all processes based on *reduction*.\n\n Note:\n All processes get the reduced value.\n\n Args:\n tensor (`torch.Tensor`, or a nested tuple/list/dictionary of `torch.Tensor`):\n The tensors to reduce across all processes.\n reduction (`str`, *optional*, defaults to \"sum\"):\n A reduction type, can be one of 'sum', 'mean', or 'none'. If 'none', will not perform any operation.\n\n Returns:\n `torch.Tensor`, or a nested tuple/list/dictionary of `torch.Tensor`: The reduced tensor(s).\n \"\"\"\n return reduce(tensor, reduction)\n\n def pad_across_processes(self, tensor, dim=0, pad_index=0, pad_first=False):\n \"\"\"\n Recursively pad the tensors in a nested list/tuple/dictionary of tensors from all devices to the same size so\n they can safely be gathered.\n\n Args:\n tensor (nested list/tuple/dictionary of `torch.Tensor`):\n The data to gather.\n dim (`int`, *optional*, defaults to 0):\n The dimension on which to pad.\n pad_index (`int`, *optional*, defaults to 0):\n The value with which to pad.\n pad_first (`bool`, *optional*, defaults to `False`):\n Whether to pad at the beginning or the end.\n \"\"\"\n return pad_across_processes(tensor, dim=dim, pad_index=pad_index, pad_first=pad_first)\n\n def unwrap_model(self, model, keep_fp32_wrapper: bool = False):\n \"\"\"\n Unwraps the `model` from the additional layer possible added by [`~Accelerator.prepare`]. Useful before saving\n the model.\n\n Args:\n model (`torch.nn.Module`):\n The model to unwrap.\n keep_fp32_wrapper (`bool`, *optional*, defaults to `False`):\n Whether to not remove the mixed precision hook if it was added.\n \"\"\"\n return extract_model_from_parallel(model, keep_fp32_wrapper)\n\n def wait_for_everyone(self):\n \"\"\"\n Will stop the execution of the current process until every other process has reached that point (so this does\n nothing when the script is only run in one process). Useful to do before saving a model.\n \"\"\"\n wait_for_everyone()\n\n @on_main_process\n def init_trackers(self, project_name: str, config: Optional[dict] = None, init_kwargs: Optional[dict] = {}):\n \"\"\"\n Initializes a run for all trackers stored in `self.log_with`, potentially with starting configurations\n\n Args:\n project_name (`str`):\n The name of the project. All trackers will save their data based on this\n config (`dict`, *optional*):\n Optional starting configuration to be logged.\n init_kwargs (`dict`, *optional*):\n A nested dictionary of kwargs to be passed to a specific tracker's `__init__` function. 
Should be\n formatted like so:\n ```python\n {\"wandb\": {\"tags\": [\"tag_a\", \"tag_b\"]}}\n ```\n \"\"\"\n self.trackers = []\n for tracker in self.log_with:\n if issubclass(type(tracker), GeneralTracker):\n # Custom trackers are already initialized\n self.trackers.append(tracker)\n else:\n tracker_init = LOGGER_TYPE_TO_CLASS[str(tracker)]\n if getattr(tracker_init, \"requires_logging_directory\"):\n # We can skip this check since it was done in `__init__`\n self.trackers.append(\n tracker_init(project_name, self.logging_dir, **init_kwargs.get(str(tracker), {}))\n )\n else:\n self.trackers.append(tracker_init(project_name, **init_kwargs.get(str(tracker), {})))\n if config is not None:\n for tracker in self.trackers:\n tracker.store_init_configuration(config)\n\n @on_main_process\n def get_tracker(self, name: str):\n \"\"\"\n Returns a `tracker` from `self.trackers` based on `name` on the main process only.\n\n Args:\n name (`str`):\n The name of a tracker, corresponding to the `.name` property.\n \"\"\"\n for tracker in self.trackers:\n if tracker.name == name:\n return tracker.tracker\n raise ValueError(f\"{name} is not an available tracker stored inside the `Accelerator`.\")\n\n @on_main_process\n def log(self, values: dict, step: Optional[int] = None, log_kwargs: Optional[dict] = {}):\n \"\"\"\n Logs `values` to all stored trackers in `self.trackers` on the main process only.\n\n Args:\n values (`dict`):\n Values should be a dictionary-like object containing only types `int`, `float`, or `str`.\n step (`int`, *optional*):\n The run step. If included, the log will be affiliated with this step.\n log_kwargs (`dict`, *optional*):\n A nested dictionary of kwargs to be passed to a specific tracker's `log` function. Should be formatted\n like so:\n ```python\n {\"wandb\": {\"tags\": [\"tag_a\", \"tag_b\"]}}\n ```\n \"\"\"\n for tracker in self.trackers:\n tracker.log(values, step=step, **log_kwargs.get(tracker.name, {}))\n\n @on_main_process\n def end_training(self):\n \"\"\"\n Runs any special end training behaviors, such as stopping trackers on the main process only. Should always be\n called at the end of your script if using experiment tracking.\n \"\"\"\n for tracker in self.trackers:\n tracker.finish()\n\n def save(self, obj, f):\n \"\"\"\n Save the object passed to disk once per machine. Use in place of `torch.save`.\n\n Args:\n obj: The object to save.\n f (`str` or `os.PathLike`):\n Where to save the content of `obj`.\n \"\"\"\n save(obj, f)\n\n def save_state(self, output_dir: str = None, **save_model_func_kwargs):\n \"\"\"\n Saves the current states of the model, optimizer, scaler, RNG generators, and registered objects to a folder.\n\n If a `ProjectConfiguration` was passed to the `Accelerator` object with `automatic_checkpoint_naming` enabled\n then checkpoints will be saved to `self.project_dir/checkpoints`. If the number of current saves is greater\n than `total_limit` then the oldest save is deleted. 
Each checkpoint is saved in seperate folders named\n `checkpoint_<iteration>`.\n\n Otherwise they are just saved to `output_dir`.\n\n <Tip>\n\n Should only be used when wanting to save a checkpoint during training and restoring the state in the same\n environment.\n\n </Tip>\n\n Args:\n output_dir (`str` or `os.PathLike`):\n The name of the folder to save all relevant weights and states.\n save_model_func_kwargs (`dict`, *optional*):\n Additional keyword arguments for saving model which can be passed to the underlying save function, such\n as optional arguments for DeepSpeed's `save_checkpoint` function.\n \"\"\"\n if self.project_configuration.automatic_checkpoint_naming:\n output_dir = os.path.join(self.project_dir, \"checkpoints\")\n os.makedirs(output_dir, exist_ok=True)\n if self.project_configuration.automatic_checkpoint_naming:\n folders = [os.path.join(output_dir, folder) for folder in os.listdir(output_dir)]\n if self.project_configuration.total_limit is not None and (\n len(folders) + 1 > self.project_configuration.total_limit\n ):\n folders.sort()\n logger.warning(\n f\"Deleting {len(folders) + 1 - self.project_configuration.total_limit} checkpoints to make room for new checkpoint.\"\n )\n for folder in folders[: len(folders) + 1 - self.project_configuration.total_limit]:\n shutil.rmtree(folder)\n output_dir = os.path.join(output_dir, f\"checkpoint_{self.save_iteration}\")\n if os.path.exists(output_dir):\n raise ValueError(\n f\"Checkpoint directory {output_dir} ({self.save_iteration}) already exists. Please manually override `self.save_iteration` with what iteration to start with.\"\n )\n os.makedirs(output_dir, exist_ok=True)\n logger.info(f\"Saving current state to {output_dir}\")\n\n # Save the models taking care of FSDP and DeepSpeed nuances\n weights = []\n for i, model in enumerate(self._models):\n if self.distributed_type == DistributedType.FSDP:\n logger.info(\"Saving FSDP model\")\n self.state.fsdp_plugin.save_model(self, model, output_dir, i)\n logger.info(f\"FSDP Model saved to output dir {output_dir}\")\n elif self.distributed_type == DistributedType.DEEPSPEED:\n logger.info(\"Saving DeepSpeed Model and Optimizer\")\n ckpt_id = f\"{MODEL_NAME}\" if i == 0 else f\"{MODEL_NAME}_{i}\"\n model.save_checkpoint(output_dir, ckpt_id, **save_model_func_kwargs)\n logger.info(f\"DeepSpeed Model and Optimizer saved to output dir {os.path.join(output_dir, ckpt_id)}\")\n elif self.distributed_type == DistributedType.MEGATRON_LM:\n logger.info(\"Saving Megatron-LM Model, Optimizer and Scheduler\")\n model.save_checkpoint(output_dir)\n logger.info(f\"Megatron-LM Model , Optimizer and Scheduler saved to output dir {output_dir}\")\n else:\n weights.append(self.get_state_dict(model, unwrap=False))\n\n # Save the optimizers taking care of FSDP and DeepSpeed nuances\n optimizers = []\n if self.distributed_type == DistributedType.FSDP:\n for opt in self._optimizers:\n logger.info(\"Saving FSDP Optimizer\")\n self.state.fsdp_plugin.save_optimizer(self, opt, self._models[i], output_dir, i)\n logger.info(f\"FSDP Optimizer saved to output dir {output_dir}\")\n elif self.distributed_type not in [DistributedType.DEEPSPEED, DistributedType.MEGATRON_LM]:\n optimizers = self._optimizers\n\n # Save the lr schedulers taking care of DeepSpeed nuances\n schedulers = []\n if self.distributed_type == DistributedType.DEEPSPEED:\n for i, scheduler in enumerate(self._schedulers):\n if isinstance(scheduler, DeepSpeedSchedulerWrapper):\n continue\n schedulers.append(scheduler)\n elif self.distributed_type 
not in [DistributedType.MEGATRON_LM]:\n schedulers = self._schedulers\n\n save_location = save_accelerator_state(\n output_dir, weights, optimizers, schedulers, self.state.process_index, self.scaler\n )\n for i, obj in enumerate(self._custom_objects):\n save_custom_state(obj, output_dir, i)\n self.project_configuration.iteration += 1\n return save_location\n\n def load_state(self, input_dir: str, **load_model_func_kwargs):\n \"\"\"\n Loads the current states of the model, optimizer, scaler, RNG generators, and registered objects.\n\n <Tip>\n\n Should only be used in conjunction with [`Accelerator.save_state`].\n\n </Tip>\n\n Args:\n input_dir (`str` or `os.PathLike`):\n The name of the folder all relevant weights and states were saved in.\n load_model_func_kwargs (`dict`, *optional*):\n Additional keyword arguments for loading model which can be passed to the underlying load function,\n such as optional arguments for DeepSpeed's `load_checkpoint` function.\n \"\"\"\n # Check if folder exists\n input_dir = os.path.expanduser(input_dir)\n if not os.path.isdir(input_dir):\n raise ValueError(f\"Tried to find {input_dir} but folder does not exist\")\n logger.info(f\"Loading states from {input_dir}\")\n\n # Load the models taking care of FSDP and DeepSpeed nuances\n models = []\n for i, model in enumerate(self._models):\n if self.distributed_type == DistributedType.FSDP:\n logger.info(\"Loading FSDP model\")\n self.state.fsdp_plugin.load_model(self, model, input_dir, i)\n logger.info(f\"FSDP Model loaded from input dir {input_dir}\")\n elif self.distributed_type == DistributedType.DEEPSPEED:\n logger.info(\"Loading DeepSpeed Model and Optimizer\")\n ckpt_id = f\"{MODEL_NAME}\" if i == 0 else f\"{MODEL_NAME}_{i}\"\n model.load_checkpoint(input_dir, ckpt_id, **load_model_func_kwargs)\n logger.info(f\"DeepSpeed Model and Optimizer loaded from input dir {os.path.join(input_dir, ckpt_id)}\")\n elif self.distributed_type == DistributedType.MEGATRON_LM:\n logger.info(\"Loading Megatron-LM Model, Optimizer and Scheduler\")\n model.load_checkpoint(input_dir)\n logger.info(f\"Megatron-LM Model , Optimizer and Scheduler loaded from input dir {input_dir}\")\n else:\n models.append(model)\n\n # Load the optimizers taking care of FSDP and DeepSpeed nuances\n optimizers = []\n if self.distributed_type == DistributedType.FSDP:\n for i, opt in enumerate(self._optimizers):\n logger.info(\"Loading FSDP Optimizer\")\n self.state.fsdp_plugin.load_optimizer(self, opt, self._models[i], input_dir, i)\n logger.info(f\"FSDP Optimizer loaded from input dir {input_dir}\")\n elif self.distributed_type not in [DistributedType.DEEPSPEED, DistributedType.MEGATRON_LM]:\n optimizers = self._optimizers\n\n # Load the lr schedulers taking care of DeepSpeed nuances\n schedulers = []\n if self.distributed_type == DistributedType.DEEPSPEED:\n for i, scheduler in enumerate(self._schedulers):\n if isinstance(scheduler, DeepSpeedSchedulerWrapper):\n continue\n schedulers.append(scheduler)\n elif self.distributed_type not in [DistributedType.MEGATRON_LM]:\n schedulers = self._schedulers\n\n load_accelerator_state(\n input_dir, models, optimizers, schedulers, self.state.process_index, self.scaler, **load_model_func_kwargs\n )\n custom_checkpoints = [f for f in os.listdir(input_dir) if \"custom_checkpoint\" in f]\n if len(custom_checkpoints) != len(self._custom_objects):\n err = \"Warning! 
Number of found checkpoints does not match the number of registered objects:\"\n err += f\"\\n\\tFound checkpoints: {len(custom_checkpoints)}\"\n err += f\"\\n\\tRegistered objects: {len(self._custom_objects)}\\nSkipping.\"\n logger.warning(err)\n else:\n logger.info(f\"Loading in {len(custom_checkpoints)} custom states\")\n for index, obj in enumerate(self._custom_objects):\n load_custom_state(obj, input_dir, index)\n\n def free_memory(self):\n \"\"\"\n Will release all references to the internal objects stored and call the garbage collector. You should call this\n method between two trainings with different models/optimizers.\n \"\"\"\n self._schedulers = []\n self._optimizers = []\n self._models = []\n self._dataloaders = []\n self.deepspeed_engine_wrapped = None\n release_memory()\n\n def clear(self):\n \"\"\"\n Alias for [`Accelerate.free_memory`], releases all references to the internal objects stored and call the\n garbage collector. You should call this method between two trainings with different models/optimizers.\n \"\"\"\n self.free_memory()\n\n def _get_named_parameters(self, *args):\n named_parameters = {}\n for obj in args:\n if isinstance(obj, torch.nn.Module):\n obj = extract_model_from_parallel(obj)\n named_parameters.update({n: p for n, p in obj.named_parameters()})\n return named_parameters\n\n def _get_devices(self, *args):\n model_device = None\n optimizer_device = None\n for obj in args:\n # Loop through model parameters and stop at the first once we have its device.\n if isinstance(obj, torch.nn.Module):\n for param in obj.parameters():\n model_device = param.device\n break\n # Loop through optimizer parameters groups and stop at the first once we have its device.\n if isinstance(obj, torch.optim.Optimizer):\n for param_group in obj.param_groups:\n if len(param_group[\"params\"]) > 0:\n optimizer_device = param_group[\"params\"][0].device\n break\n return (model_device, optimizer_device)\n\n def get_state_dict(self, model, unwrap=True):\n \"\"\"\n Returns the state dictionary of a model sent through [`Accelerator.prepare`] in full precision\n\n Args:\n model (`torch.nn.Module`):\n A PyTorch model sent through [`Accelerator.prepare`]\n unwrap (`bool`, *optional*, defaults to `True`):\n Whether to return the original underlying state_dict of `model` or to return the wrapped state_dict\n \"\"\"\n is_zero_3 = False\n if self.distributed_type == DistributedType.DEEPSPEED:\n is_zero_3 = self.deepspeed_config[\"zero_optimization\"][\"stage\"] == 3\n\n if is_zero_3:\n if model.zero_gather_16bit_weights_on_model_save():\n state_dict = model._zero3_consolidated_16bit_state_dict()\n else:\n raise ValueError(\n \"Cannot get 16bit model weights because `stage3_gather_16bit_weights_on_model_save` in DeepSpeed config is False. \"\n \"To save the model weights in 16bit, set `stage3_gather_16bit_weights_on_model_save` to True in DeepSpeed config file or \"\n \"set `zero3_save_16bit_model` to True when using `accelerate config`. 
\"\n \"To save the full checkpoint, run `model.save_checkpoint(save_dir)` and use `zero_to_fp32.py` to recover weights.\"\n )\n else:\n if unwrap:\n model = self.unwrap_model(model)\n state_dict = model.state_dict()\n\n if state_dict is not None:\n for k in state_dict:\n if state_dict[k].dtype == torch.float16:\n state_dict[k] = state_dict[k].float()\n\n return state_dict\n\n def register_for_checkpointing(self, *objects):\n \"\"\"\n Makes note of `objects` and will save or load them in during `save_state` or `load_state`.\n\n These should be utilized when the state is being loaded or saved in the same script. It is not designed to be\n used in different scripts\n\n <Tip>\n\n Every `object` must have a `load_state_dict` and `state_dict` function to be stored.\n\n </Tip>\n \"\"\"\n invalid_objects = []\n for obj in objects:\n if not hasattr(obj, \"state_dict\") or not hasattr(obj, \"load_state_dict\"):\n invalid_objects.append(obj)\n if len(invalid_objects) > 0:\n err = \"All `objects` must include a `state_dict` and `load_state_dict` function to be stored. The following inputs are invalid:\"\n for index, obj in enumerate(invalid_objects):\n err += f\"\\n\\t- Item at index {index}, `{get_pretty_name(obj)}`\"\n raise ValueError(err)\n self._custom_objects.extend(objects)\n\n @contextmanager\n def autocast(self):\n \"\"\"\n Will apply automatic mixed-precision inside the block inside this context manager, if it is enabled. Nothing\n different will happen otherwise.\n \"\"\"\n if self.native_amp:\n if self.mixed_precision == \"fp16\" and is_torch_version(\">=\", \"1.10\"):\n autocast_context = torch.cuda.amp.autocast(dtype=torch.float16)\n elif self.mixed_precision == \"bf16\":\n if self.distributed_type in [DistributedType.NO, DistributedType.MULTI_CPU, DistributedType.MULTI_GPU]:\n autocast_context = torch.autocast(dtype=torch.bfloat16, device_type=self.device.type)\n else:\n autocast_context = torch.cuda.amp.autocast()\n\n autocast_context.__enter__()\n yield\n autocast_context.__exit__(*sys.exc_info())\n else:\n yield\n\n @property\n def optimizer_step_was_skipped(self):\n \"\"\"\n Whether or not the optimizer update was skipped (because of gradient overflow in mixed precision), in which\n case the learning rate should not be changed.\n \"\"\"\n for optimizer in self._optimizers:\n if optimizer.step_was_skipped:\n return True\n return False\n"}
|
{"src/accelerate/accelerator.py": [{"type": "function", "name": "Accelerator.register_save_state_pre_hook", "lines": [1622, 1652], "signature": "def register_save_state_pre_hook(self, hook: Callable[..., None]) -> hooks.RemovableHandle:", "doc": "Registers a pre hook to be run before `save_checkpoint` is called in [`Accelerator.save_state`].\n\nArgs:\n hook (`Callable`):\n A function to be called in [`Accelerator.save_state`] before `save_checkpoint`.\n\nThe hook should have the following signature:\n\n`hook(models: List[torch.nn.Module], weights: List[Dict[str, torch.Tensor]], input_dir: str) -> None`\n\nThe `models` argument are the models as saved in the accelerator state under `accelerator._models`, `weigths`\nargument are the state dicts of the `models`, and the `input_dir` argument is the `input_dir` argument passed\nto [`Accelerator.load_state`].\n\n<Tip>\n\nShould only be used in conjunction with [`Accelerator.register_load_state_pre_hook`]. Can be useful to save\nconfigurations in addition to model weights. Can also be used to overwrite model saving with a customized\nmethod. In this case, make sure to remove already loaded weights from the weights list.\n\n</Tip>\n\nReturns:\n `torch.utils.hooks.RemovableHandle`: a handle that can be used to remove the added hook by calling\n `handle.remove()`"}, {"type": "function", "name": "Accelerator.register_load_state_pre_hook", "lines": [1753, 1782], "signature": "def register_load_state_pre_hook(self, hook: Callable[..., None]) -> hooks.RemovableHandle:", "doc": "Registers a pre hook to be run before [`load_checkpoint`] is called in [`Accelerator.load_state`].\n\nArgs:\n hook (`Callable`):\n A function to be called in [`Accelerator.load_state`] before `load_checkpoint`.\n\nThe hook should have the following signature:\n\n`hook(models: List[torch.nn.Module], input_dir: str) -> None`\n\nThe `models` argument are the models as saved in the accelerator state under `accelerator._models`, and the\n`input_dir` argument is the `input_dir` argument passed to [`Accelerator.load_state`].\n\n<Tip>\n\nShould only be used in conjunction with [`Accelerator.register_save_state_pre_hook`]. Can be useful to load\nconfigurations in addition to model weights. Can also be used to overwrite model loading with a customized\nmethod. In this case, make sure to remove already loaded models from the models list.\n\n</Tip>\n\nReturns:\n `torch.utils.hooks.RemovableHandle`: a handle that can be used to remove the added hook by calling\n `handle.remove()`"}]}
| null |
["tests/test_accelerator.py::AcceleratorTester::test_save_load_model_with_hooks"]
|
["tests/test_accelerator.py::AcceleratorTester::test_free_memory_dereferences_prepared_components", "tests/test_accelerator.py::AcceleratorTester::test_prepared_objects_are_referenced", "tests/test_accelerator.py::AcceleratorTester::test_save_load_model"]
|
08101b9dde2b1a9658c2e363e3e9f5663ba06073
|
{"first_commit_time": 1674159358.0, "pr_title": "Saving and loading state hooks", "pr_body": "With this design we would have the necessary freedom to save & load models as wished in `diffusers`. \r\n\r\nPossible solution to: https://github.com/huggingface/accelerate/issues/976\r\nThe idea would be for loading:\r\n- loop over the `models` and if a certain type of model, e.g. UNet, we call `from_pretrained(...)` and remove it from the list so it's not called after.\r\n\r\nFor saving:\r\n- loop over the passed `self._models` and passed `weights` and if `self_models` if of certain type (e.g. UNet) we call `save_pretrained(...)` and remove only from the weights list so it's not called after. \r\n\r\nDesign is heavily inspired by the hooks in: https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module\r\n\r\nIf this is ok, happy to write tests and better docs and try it out. \r\n\r\ncc @sgugger @pcuenca ", "pr_timeline": [{"time": 1674760221.0, "comment": "_The documentation is not available anymore as the PR was closed or merged._"}, {"time": 1674457060.0, "comment": "The use case would then looks like this in diffusers: https://github.com/huggingface/diffusers/pull/2048/files . Because we're doing loading and saving directly in the hooks, the weights and models have to be popped from the respective lists so that nothing is loaded / saved twice. \r\n\r\nThink there are also use cases where the model doesn't need to be popped, e.g. if someone wants to save the config for transformers in addition to the checkpoint the hooks could just save and load the config but then let accelerate do the saving / loading of the weights. \r\n\r\nThink this design is quite minimal for accelerate. Wdyt? \r\nHappy to add a test to accelerate that shows how one could use the hooks to save a config of a transformer model in addition to saving/loading the weights."}, {"time": 1674228547.0, "comment": "Aaaah! Nice trick popping the weights/models, I didn't think of that. Please do add a test then and maybe one method to remove the hooks so the API is complete? Thanks a lot!"}, {"time": 1674740220.0, "comment": "Pretty sure test failures are unrelated. Would be great to get another review here :-) "}, {"time": 1674760700.0, "comment": "@patrickvonplaten only was now able to look at this today, I can confirm that your PR does indeed break that test, it works prior to this and is only on the CPU when it fails. If you have some time please try to look into it otherwise I'll look into this later today. cc @sgugger "}], "issues": {}}
|
|
huggingface/datasets
| 214
|
https://github.com/huggingface/datasets/pull/214
|
huggingface__datasets-214
|
[]
|
f9ef1ee270848e322f06e546751aeffde67349c5
|
diff --git a/datasets/wikipedia/wikipedia.py b/datasets/wikipedia/wikipedia.py
index 5733781ccf8..6f36e2bc534 100644
--- a/datasets/wikipedia/wikipedia.py
+++ b/datasets/wikipedia/wikipedia.py
@@ -25,9 +25,9 @@
import xml.etree.cElementTree as etree
import apache_beam as beam
-import mwparserfromhell
import six
+import mwparserfromhell
import nlp
diff --git a/src/nlp/arrow_dataset.py b/src/nlp/arrow_dataset.py
index e09cd16de3c..c9845b59d00 100644
--- a/src/nlp/arrow_dataset.py
+++ b/src/nlp/arrow_dataset.py
@@ -19,6 +19,7 @@
import hashlib
import logging
import os
+from collections import defaultdict
from collections.abc import Mapping
from typing import Any, Dict, List, Optional, Union
@@ -29,7 +30,7 @@
from nlp.utils.py_utils import dumps
from .arrow_writer import ArrowWriter
-from .utils import convert_tuples_in_lists
+from .utils import convert_tuples_in_lists, map_nested
logger = logging.getLogger(__name__)
@@ -409,7 +410,7 @@ def map(
- `function(example: Dict, indices: int) -> Union[Dict, Any]` if `batched=False` and `with_indices=True`
- `function(batch: Dict[List]) -> Union[Dict, Any]` if `batched=True` and `with_indices=False`
- `function(batch: Dict[List], indices: List[int]) -> Union[Dict, Any]` if `batched=True` and `with_indices=True`
- `with_indices` (`bool`, default: `False`): Provide example indices to `function`
+ `with_indices` (`bool`, default: `False`): Provide example indices to `function`. Note that in this case the signature of `function` should be `def function(example, idx): ...`.
`batched` (`bool`, default: `False`): Provide batch of examples to `function`
`batch_size` (`Optional[int]`, default: `1000`): Number of examples per batch provided to `function` if `batched=True`
`batch_size <= 0` or `batch_size == None`: Provide the full dataset as a single batch to `function`
@@ -557,3 +558,66 @@ def apply_function_on_filtered_inputs(inputs, indices):
return Dataset.from_buffer(buf_writer.getvalue())
else:
return self
+
+ def filter(self, function, with_indices=False, **kwargs):
+ """ Apply a filter function to all the elements in the table in batches
+ and update the table so that the dataset only includes examples according to the filter function.
+
+ Args:
+ `function` (`callable`): with one of the following signature:
+ - `function(example: Dict) -> bool` if `with_indices=False`
+ - `function(example: Dict, indices: int) -> bool` if `with_indices=True`
+ `with_indices` (`bool`, default: `False`): Provide example indices to `function`. Note that in this case the signature of `function` should be `def function(example, idx): ...`.
+ `batch_size` (`Optional[int]`, default: `1000`): Number of examples per batch provided to `function` if `batched=True`
+ `batch_size <= 0` or `batch_size == None`: Provide the full dataset as a single batch to `function`
+ `remove_columns` (`Optional[List[str]]`, default: `None`): Remove a selection of columns while doing the mapping.
+ Columns will be removed before updating the examples with the output of `function`, i.e. if `function` is adding
+ columns with names in `remove_columns`, these columns will be kept.
+ `keep_in_memory` (`bool`, default: `False`): Keep the dataset in memory instead of writing it to a cache file.
+ `load_from_cache_file` (`bool`, default: `True`): If a cache file storing the current computation from `function`
+ can be identified, use it instead of recomputing.
+ `cache_file_name` (`Optional[str]`, default: `None`): Provide the name of a cache file to use to store the
+ results of the computation instead of the automatically generated cache file name.
+ `writer_batch_size` (`int`, default: `1000`): Number of rows per write operation for the cache file writer.
+ Higher value gives smaller cache files, lower value consume less temporary memory while running `.map()`.
+ `disable_nullable` (`bool`, default: `True`): Allow null values in the table.
+ """
+
+        # transform the filter function into the map function
+ def map_function(batch, *args):
+ result = defaultdict(list)
+ num_examples = len(batch[next(iter(batch.keys()))])
+
+ # create single examples
+ for i in range(num_examples):
+ example = map_nested(lambda x: x[i], batch, dict_only=True)
+
+                # check if example should be filtered or not
+ if with_indices:
+ keep_example = function(example, args[0][i])
+ else:
+ keep_example = function(example)
+
+ assert isinstance(
+ keep_example, bool
+ ), f"The filter function returns a variable of type {type(keep_example)}, but should return a variable of type `bool`."
+                # if example shall be kept, add to result
+ if keep_example:
+ for key in batch.keys():
+ result[key].append(example[key])
+
+ # if no example shall be kept, init with empty list
+ if bool(result) is False:
+ for key in batch.keys():
+ result[key] = []
+
+ return result
+
+ # to avoid errors with the arrow_schema we define it here
+ test_inputs = self[:2]
+ if "remove_columns" in kwargs:
+ test_inputs = {key: test_inputs[key] for key in (test_inputs.keys() - kwargs["remove_columns"])}
+ arrow_schema = pa.Table.from_pydict(test_inputs).schema
+
+ # return map function
+ return self.map(map_function, batched=True, with_indices=with_indices, arrow_schema=arrow_schema, **kwargs)
|
diff --git a/tests/test_arrow_dataset.py b/tests/test_arrow_dataset.py
new file mode 100644
index 00000000000..88142afd586
--- /dev/null
+++ b/tests/test_arrow_dataset.py
@@ -0,0 +1,114 @@
+import os
+import tempfile
+from unittest import TestCase
+
+import numpy as np
+import pyarrow as pa
+
+from nlp.arrow_reader import BaseReader
+from nlp.info import DatasetInfo
+from nlp.splits import SplitDict, SplitInfo
+
+
+class ReaderTester(BaseReader):
+ """
+ Build a Dataset object out of Instruction instance(s).
+ This reader is made for testing. It mocks file reads.
+ """
+
+ def _get_dataset_from_filename(self, filename_skip_take):
+ """Returns a Dataset instance from given (filename, skip, take)."""
+ filename, skip, take = (
+ filename_skip_take["filename"],
+ filename_skip_take["skip"] if "skip" in filename_skip_take else None,
+ filename_skip_take["take"] if "take" in filename_skip_take else None,
+ )
+ pa_table = pa.Table.from_pydict({"filename": [filename + "_" + str(x) for x in np.arange(100).tolist()]})
+ if skip is not None and take is not None:
+ pa_table = pa_table.slice(skip, take)
+ return pa_table
+
+
+class BaseDatasetTest(TestCase):
+ def _create_dummy_dataset(self):
+ name = "my_name"
+ train_info = SplitInfo(name="train", num_examples=30)
+ test_info = SplitInfo(name="test", num_examples=30)
+ split_infos = [train_info, test_info]
+ split_dict = SplitDict()
+ split_dict.add(train_info)
+ split_dict.add(test_info)
+ info = DatasetInfo(splits=split_dict)
+ reader = ReaderTester("", info)
+ dset = reader.read(name, "train", split_infos)
+ return dset
+
+ def test_map(self):
+ dset = self._create_dummy_dataset()
+ with tempfile.TemporaryDirectory() as tmp_dir:
+ tmp_file = os.path.join(tmp_dir, "test.arrow")
+ dset_test = dset.map(
+ lambda x: {"name": x["filename"][:-2], "id": int(x["filename"][-1])}, cache_file_name=tmp_file
+ )
+ self.assertEqual(len(dset_test), 30)
+
+ with tempfile.TemporaryDirectory() as tmp_dir:
+ tmp_file = os.path.join(tmp_dir, "test.arrow")
+ dset_test = dset.map(
+ lambda x: {"name": x["filename"][:-2], "id": int(x["filename"][-1])}, cache_file_name=tmp_file
+ )
+ dset_test_with_indices = dset.map(
+ lambda x, i: {"name": x["filename"][:-2], "id": i}, with_indices=True, cache_file_name=tmp_file
+ )
+ self.assertEqual(len(dset_test_with_indices), 30)
+
+ def test_map_batched(self):
+ dset = self._create_dummy_dataset()
+
+ def map_batched(example):
+ return {"filename_new": [x + "_extension" for x in example["filename"]]}
+
+ with tempfile.TemporaryDirectory() as tmp_dir:
+ tmp_file = os.path.join(tmp_dir, "test.arrow")
+ dset_test_batched = dset.map(map_batched, batched=True, cache_file_name=tmp_file)
+ self.assertEqual(len(dset_test_batched), 30)
+
+ def map_batched_with_indices(example, idx):
+ return {"filename_new": [x + "_extension_" + str(idx) for x in example["filename"]]}
+
+ with tempfile.TemporaryDirectory() as tmp_dir:
+ tmp_file = os.path.join(tmp_dir, "test.arrow")
+ dset_test_with_indices_batched = dset.map(
+ map_batched_with_indices, batched=True, with_indices=True, cache_file_name=tmp_file
+ )
+ self.assertEqual(len(dset_test_with_indices_batched), 30)
+
+ def test_remove_colums(self):
+ dset = self._create_dummy_dataset()
+
+ with tempfile.TemporaryDirectory() as tmp_dir:
+ tmp_file = os.path.join(tmp_dir, "test.arrow")
+ dset = dset.map(
+ lambda x, i: {"name": x["filename"][:-2], "id": i}, with_indices=True, cache_file_name=tmp_file
+ )
+ self.assertTrue("id" in dset[0])
+
+ with tempfile.TemporaryDirectory() as tmp_dir:
+ tmp_file = os.path.join(tmp_dir, "test.arrow")
+ dset = dset.map(lambda x: x, remove_columns=["id"], cache_file_name=tmp_file)
+ self.assertTrue("id" not in dset[0])
+
+ def test_filter(self):
+ dset = self._create_dummy_dataset()
+ # keep only first five examples
+
+ with tempfile.TemporaryDirectory() as tmp_dir:
+ tmp_file = os.path.join(tmp_dir, "test.arrow")
+ dset_filter_first_five = dset.filter(lambda x, i: i < 5, with_indices=True, cache_file_name=tmp_file)
+ self.assertEqual(len(dset_filter_first_five), 5)
+
+ # filter filenames with even id at the end
+ with tempfile.TemporaryDirectory() as tmp_dir:
+ tmp_file = os.path.join(tmp_dir, "test.arrow")
+ dset_filter_even_num = dset.filter(lambda x: (int(x["filename"][-1]) % 2 == 0), cache_file_name=tmp_file)
+ self.assertEqual(len(dset_filter_even_num), 15)
| 2020-05-28T16:21:40
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"datasets/wikipedia/wikipedia.py": "# coding=utf-8\n# Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace NLP Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lint as: python3\n\"\"\"Wikipedia dataset containing cleaned articles of all languages.\"\"\"\n\nfrom __future__ import absolute_import, division, print_function\n\nimport codecs\nimport json\nimport logging\nimport re\nimport xml.etree.cElementTree as etree\n\nimport apache_beam as beam\nimport mwparserfromhell\nimport six\n\nimport nlp\n\n\nif six.PY3:\n import bz2 # pylint:disable=g-import-not-at-top\nelse:\n # py2's built-in bz2 package does not support reading from file objects.\n import bz2file as bz2 # pylint:disable=g-import-not-at-top\n\n_CITATION = \"\"\"\\\n@ONLINE {wikidump,\n author = \"Wikimedia Foundation\",\n title = \"Wikimedia Downloads\",\n url = \"https://dumps.wikimedia.org\"\n}\n\"\"\"\n\n_DESCRIPTION = \"\"\"\\\nWikipedia dataset containing cleaned articles of all languages.\nThe datasets are built from the Wikipedia dump\n(https://dumps.wikimedia.org/) with one split per language. Each example\ncontains the content of one full Wikipedia article with cleaning to strip\nmarkdown and unwanted sections (references, etc.).\n\"\"\"\n\n_LICENSE = (\n \"This work is licensed under the Creative Commons Attribution-ShareAlike \"\n \"3.0 Unported License. 
To view a copy of this license, visit \"\n \"http://creativecommons.org/licenses/by-sa/3.0/ or send a letter to \"\n \"Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.\"\n)\n\n# Source: https://en.wikipedia.org/wiki/List_of_Wikipedias (accessed 3/1/2019)\n# Removed because no articles: hz.\nWIKIPEDIA_LANGUAGES = [\n \"aa\",\n \"ab\",\n \"ace\",\n \"ady\",\n \"af\",\n \"ak\",\n \"als\",\n \"am\",\n \"an\",\n \"ang\",\n \"ar\",\n \"arc\",\n \"arz\",\n \"as\",\n \"ast\",\n \"atj\",\n \"av\",\n \"ay\",\n \"az\",\n \"azb\",\n \"ba\",\n \"bar\",\n \"bat-smg\",\n \"bcl\",\n \"be\",\n \"be-x-old\",\n \"bg\",\n \"bh\",\n \"bi\",\n \"bjn\",\n \"bm\",\n \"bn\",\n \"bo\",\n \"bpy\",\n \"br\",\n \"bs\",\n \"bug\",\n \"bxr\",\n \"ca\",\n \"cbk-zam\",\n \"cdo\",\n \"ce\",\n \"ceb\",\n \"ch\",\n \"cho\",\n \"chr\",\n \"chy\",\n \"ckb\",\n \"co\",\n \"cr\",\n \"crh\",\n \"cs\",\n \"csb\",\n \"cu\",\n \"cv\",\n \"cy\",\n \"da\",\n \"de\",\n \"din\",\n \"diq\",\n \"dsb\",\n \"dty\",\n \"dv\",\n \"dz\",\n \"ee\",\n \"el\",\n \"eml\",\n \"en\",\n \"eo\",\n \"es\",\n \"et\",\n \"eu\",\n \"ext\",\n \"fa\",\n \"ff\",\n \"fi\",\n \"fiu-vro\",\n \"fj\",\n \"fo\",\n \"fr\",\n \"frp\",\n \"frr\",\n \"fur\",\n \"fy\",\n \"ga\",\n \"gag\",\n \"gan\",\n \"gd\",\n \"gl\",\n \"glk\",\n \"gn\",\n \"gom\",\n \"gor\",\n \"got\",\n \"gu\",\n \"gv\",\n \"ha\",\n \"hak\",\n \"haw\",\n \"he\",\n \"hi\",\n \"hif\",\n \"ho\",\n \"hr\",\n \"hsb\",\n \"ht\",\n \"hu\",\n \"hy\",\n \"ia\",\n \"id\",\n \"ie\",\n \"ig\",\n \"ii\",\n \"ik\",\n \"ilo\",\n \"inh\",\n \"io\",\n \"is\",\n \"it\",\n \"iu\",\n \"ja\",\n \"jam\",\n \"jbo\",\n \"jv\",\n \"ka\",\n \"kaa\",\n \"kab\",\n \"kbd\",\n \"kbp\",\n \"kg\",\n \"ki\",\n \"kj\",\n \"kk\",\n \"kl\",\n \"km\",\n \"kn\",\n \"ko\",\n \"koi\",\n \"krc\",\n \"ks\",\n \"ksh\",\n \"ku\",\n \"kv\",\n \"kw\",\n \"ky\",\n \"la\",\n \"lad\",\n \"lb\",\n \"lbe\",\n \"lez\",\n \"lfn\",\n \"lg\",\n \"li\",\n \"lij\",\n \"lmo\",\n \"ln\",\n \"lo\",\n \"lrc\",\n \"lt\",\n \"ltg\",\n \"lv\",\n \"mai\",\n \"map-bms\",\n \"mdf\",\n \"mg\",\n \"mh\",\n \"mhr\",\n \"mi\",\n \"min\",\n \"mk\",\n \"ml\",\n \"mn\",\n \"mr\",\n \"mrj\",\n \"ms\",\n \"mt\",\n \"mus\",\n \"mwl\",\n \"my\",\n \"myv\",\n \"mzn\",\n \"na\",\n \"nah\",\n \"nap\",\n \"nds\",\n \"nds-nl\",\n \"ne\",\n \"new\",\n \"ng\",\n \"nl\",\n \"nn\",\n \"no\",\n \"nov\",\n \"nrm\",\n \"nso\",\n \"nv\",\n \"ny\",\n \"oc\",\n \"olo\",\n \"om\",\n \"or\",\n \"os\",\n \"pa\",\n \"pag\",\n \"pam\",\n \"pap\",\n \"pcd\",\n \"pdc\",\n \"pfl\",\n \"pi\",\n \"pih\",\n \"pl\",\n \"pms\",\n \"pnb\",\n \"pnt\",\n \"ps\",\n \"pt\",\n \"qu\",\n \"rm\",\n \"rmy\",\n \"rn\",\n \"ro\",\n \"roa-rup\",\n \"roa-tara\",\n \"ru\",\n \"rue\",\n \"rw\",\n \"sa\",\n \"sah\",\n \"sat\",\n \"sc\",\n \"scn\",\n \"sco\",\n \"sd\",\n \"se\",\n \"sg\",\n \"sh\",\n \"si\",\n \"simple\",\n \"sk\",\n \"sl\",\n \"sm\",\n \"sn\",\n \"so\",\n \"sq\",\n \"sr\",\n \"srn\",\n \"ss\",\n \"st\",\n \"stq\",\n \"su\",\n \"sv\",\n \"sw\",\n \"szl\",\n \"ta\",\n \"tcy\",\n \"te\",\n \"tet\",\n \"tg\",\n \"th\",\n \"ti\",\n \"tk\",\n \"tl\",\n \"tn\",\n \"to\",\n \"tpi\",\n \"tr\",\n \"ts\",\n \"tt\",\n \"tum\",\n \"tw\",\n \"ty\",\n \"tyv\",\n \"udm\",\n \"ug\",\n \"uk\",\n \"ur\",\n \"uz\",\n \"ve\",\n \"vec\",\n \"vep\",\n \"vi\",\n \"vls\",\n \"vo\",\n \"wa\",\n \"war\",\n \"wo\",\n \"wuu\",\n \"xal\",\n \"xh\",\n \"xmf\",\n \"yi\",\n \"yo\",\n \"za\",\n \"zea\",\n \"zh\",\n \"zh-classical\",\n \"zh-min-nan\",\n \"zh-yue\",\n \"zu\",\n]\n\n_BASE_URL_TMPL = 
\"https://dumps.wikimedia.org/{lang}wiki/{date}/\"\n_INFO_FILE = \"dumpstatus.json\"\n\n\nclass WikipediaConfig(nlp.BuilderConfig):\n \"\"\"BuilderConfig for Wikipedia.\"\"\"\n\n def __init__(self, language=None, date=None, **kwargs):\n \"\"\"BuilderConfig for Wikipedia.\n\n Args:\n language: string, the language code for the Wikipedia dump to use.\n date: string, date of the Wikipedia dump in YYYYMMDD format. A list of\n available dates can be found at https://dumps.wikimedia.org/enwiki/.\n **kwargs: keyword arguments forwarded to super.\n \"\"\"\n super(WikipediaConfig, self).__init__(\n name=\"{0}.{1}\".format(date, language),\n description=\"Wikipedia dataset for {0}, parsed from {1} dump.\".format(language, date),\n **kwargs,\n )\n self.date = date\n self.language = language\n\n\n_VERSION = nlp.Version(\"1.0.0\", \"New split API (https://tensorflow.org/datasets/splits)\")\n\n\nclass Wikipedia(nlp.BeamBasedBuilder):\n \"\"\"Wikipedia dataset.\"\"\"\n\n # Use mirror (your.org) to avoid download caps.\n\n BUILDER_CONFIGS = [\n WikipediaConfig(version=_VERSION, language=lang, date=\"20200501\",) # pylint:disable=g-complex-comprehension\n for lang in WIKIPEDIA_LANGUAGES\n ]\n\n def _info(self):\n return nlp.DatasetInfo(\n description=_DESCRIPTION,\n features=nlp.Features({\"title\": nlp.Value(\"string\"), \"text\": nlp.Value(\"string\")}),\n # No default supervised_keys.\n supervised_keys=None,\n homepage=\"https://dumps.wikimedia.org\",\n citation=_CITATION,\n )\n\n def _split_generators(self, dl_manager, pipeline):\n def _base_url(lang):\n return _BASE_URL_TMPL.format(lang=lang.replace(\"-\", \"_\"), date=self.config.date)\n\n lang = self.config.language\n\n info_url = _base_url(lang) + _INFO_FILE\n # Use dictionary since testing mock always returns the same result.\n downloaded_files = dl_manager.download_and_extract({\"info\": info_url})\n\n xml_urls = []\n total_bytes = 0\n with open(downloaded_files[\"info\"]) as f:\n dump_info = json.load(f)\n multistream_dump_info = dump_info[\"jobs\"][\"articlesmultistreamdump\"]\n assert multistream_dump_info[\"status\"] == \"done\", (\n \"Specified dump (%s) multistream status is not 'done': %s\"\n % (_base_url(lang), multistream_dump_info[\"status\"])\n )\n\n for fname, info in multistream_dump_info[\"files\"].items():\n if \".xml\" not in fname:\n continue\n total_bytes += info[\"size\"]\n xml_urls.append(_base_url(lang) + fname)\n\n # Use dictionary since testing mock always returns the same result.\n downloaded_files = dl_manager.download({\"xml\": xml_urls})\n if not pipeline.is_local():\n downloaded_files = dl_manager.ship_files_with_pipeline(downloaded_files, pipeline)\n\n return [\n nlp.SplitGenerator( # pylint:disable=g-complex-comprehension\n name=nlp.Split.TRAIN, gen_kwargs={\"filepaths\": downloaded_files[\"xml\"], \"language\": lang}\n )\n ]\n\n def _build_pcollection(self, pipeline, filepaths, language):\n \"\"\"Build PCollection of examples in the raw (text) form.\"\"\"\n\n def _extract_content(filepath):\n \"\"\"Extracts article content from a single WikiMedia XML file.\"\"\"\n logging.info(\"generating examples from = %s\", filepath)\n with beam.io.filesystems.FileSystems.open(filepath) as f:\n f = bz2.BZ2File(filename=f)\n if six.PY3:\n # Workaround due to:\n # https://github.com/tensorflow/tensorflow/issues/33563\n utf_f = codecs.getreader(\"utf-8\")(f)\n else:\n utf_f = f\n\n # To clear root, to free-up more memory than just `elem.clear()`.\n context = etree.iterparse(utf_f, events=(\"end\",))\n context = iter(context)\n 
unused_event, root = next(context)\n for unused_event, elem in context:\n if not elem.tag.endswith(\"page\"):\n continue\n namespace = elem.tag[:-4]\n title = elem.find(\"./{0}title\".format(namespace)).text\n ns = elem.find(\"./{0}ns\".format(namespace)).text\n id_ = elem.find(\"./{0}id\".format(namespace)).text\n\n # Filter pages that are not in the \"main\" namespace.\n if ns != \"0\":\n root.clear()\n continue\n\n raw_content = elem.find(\"./{0}revision/{0}text\".format(namespace)).text\n root.clear()\n\n # Filter redirects.\n if raw_content is None or raw_content.lower().startswith(\"#redirect\"):\n beam.metrics.Metrics.counter(language, \"filtered-redirects\").inc()\n continue\n\n beam.metrics.Metrics.counter(language, \"extracted-examples\").inc()\n yield (id_, title, raw_content)\n\n def _clean_content(inputs):\n \"\"\"Cleans raw wikicode to extract text.\"\"\"\n id_, title, raw_content = inputs\n try:\n text = _parse_and_clean_wikicode(raw_content)\n except (mwparserfromhell.parser.ParserError) as e:\n beam.metrics.Metrics.counter(language, \"parser-error\").inc()\n logging.error(\"mwparserfromhell ParseError: %s\", e)\n return\n\n if not text:\n beam.metrics.Metrics.counter(language, \"empty-clean-examples\").inc()\n return\n\n beam.metrics.Metrics.counter(language, \"cleaned-examples\").inc()\n\n yield id_, {\"title\": title, \"text\": text}\n\n return (\n pipeline\n | \"Initialize\" >> beam.Create(filepaths)\n | \"Extract content\" >> beam.FlatMap(_extract_content)\n | \"Distribute\" >> beam.transforms.Reshuffle()\n | \"Clean content\" >> beam.FlatMap(_clean_content)\n )\n\n\ndef _parse_and_clean_wikicode(raw_content):\n \"\"\"Strips formatting and unwanted sections from raw page content.\"\"\"\n wikicode = mwparserfromhell.parse(raw_content)\n\n # Filters for references, tables, and file/image links.\n re_rm_wikilink = re.compile(\"^(?:File|Image|Media):\", flags=re.IGNORECASE | re.UNICODE)\n\n def rm_wikilink(obj):\n return bool(re_rm_wikilink.match(six.text_type(obj.title)))\n\n def rm_tag(obj):\n return six.text_type(obj.tag) in {\"ref\", \"table\"}\n\n def rm_template(obj):\n return obj.name.lower() in {\"reflist\", \"notelist\", \"notelist-ua\", \"notelist-lr\", \"notelist-ur\", \"notelist-lg\"}\n\n def try_remove_obj(obj, section):\n try:\n section.remove(obj)\n except ValueError:\n # For unknown reasons, objects are sometimes not found.\n pass\n\n section_text = []\n # Filter individual sections to clean.\n for section in wikicode.get_sections(flat=True, include_lead=True, include_headings=True):\n for obj in section.ifilter_wikilinks(matches=rm_wikilink, recursive=True):\n try_remove_obj(obj, section)\n for obj in section.ifilter_templates(matches=rm_template, recursive=True):\n try_remove_obj(obj, section)\n for obj in section.ifilter_tags(matches=rm_tag, recursive=True):\n try_remove_obj(obj, section)\n\n section_text.append(section.strip_code().strip())\n return \"\\n\\n\".join(section_text)\n", "src/nlp/arrow_dataset.py": "# coding=utf-8\n# Copyright 2020 The HuggingFace Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific 
language governing permissions and\n# limitations under the License.\n\n# Lint as: python3\n\"\"\" Simple Dataset wrapping an Arrow Table.\"\"\"\n\nimport hashlib\nimport logging\nimport os\nfrom collections.abc import Mapping\nfrom typing import Any, Dict, List, Optional, Union\n\nimport numpy as np\nimport pyarrow as pa\nfrom tqdm import tqdm\n\nfrom nlp.utils.py_utils import dumps\n\nfrom .arrow_writer import ArrowWriter\nfrom .utils import convert_tuples_in_lists\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass Dataset(object):\n \"\"\" A Dataset backed by an Arrow table or Record Batch.\n \"\"\"\n\n def __init__(\n self,\n arrow_table: Union[pa.Table, pa.RecordBatch],\n data_files: Optional[List[dict]] = None,\n info: Optional[Any] = None,\n ):\n self._info = info\n self._data: pa.Table = arrow_table\n self._data_files: List[dict] = data_files if data_files is not None else []\n self._format_type = None\n self._format_columns = None\n self._output_all_columns = False\n\n @classmethod\n def from_file(cls, filename: str):\n \"\"\" Instantiate a Dataset backed by an Arrow table at filename \"\"\"\n mmap = pa.memory_map(filename)\n f = pa.ipc.open_stream(mmap)\n pa_table = f.read_all()\n return cls(arrow_table=pa_table, data_files=[{\"filename\": filename}])\n\n @classmethod\n def from_buffer(cls, buffer: pa.Buffer):\n \"\"\" Instantiate a Dataset backed by an Arrow buffer \"\"\"\n mmap = pa.BufferReader(buffer)\n f = pa.ipc.open_stream(mmap)\n pa_table = f.read_all()\n return cls(pa_table)\n\n @property\n def info(self):\n return self._info\n\n @property\n def data(self):\n return self._data\n\n @property\n def cache_files(self):\n return self._data_files\n\n @property\n def columns(self):\n return self._data.columns\n\n @property\n def nbytes(self):\n return self._data.nbytes\n\n @property\n def num_columns(self):\n return self._data.num_columns\n\n @property\n def num_rows(self):\n return self._data.num_rows\n\n @property\n def column_names(self):\n return self._data.column_names\n\n @property\n def schema(self) -> pa.Schema:\n return self._data.schema\n\n @property\n def shape(self):\n return self._data.shape\n\n def drop(self, columns: Union[str, List[str]]):\n \"\"\" Drop one or more columns.\n\n Args:\n columns: list of str\n \"\"\"\n if isinstance(columns, str):\n columns = [columns]\n if any(col not in self._data.column_names for col in columns):\n raise ValueError(\n \"Columns {} not in the dataset. Current columns in the dataset: {}\".format(\n list(filter(lambda col: col not in self._data.column_names, columns)), self._data.column_names\n )\n )\n self._data = self._data.drop(columns)\n\n def unique(self, column: str):\n \"\"\" Return a list of the unque elements in a column.\n\n Args:\n columns: str\n \"\"\"\n if column not in self._data.column_names:\n raise ValueError(f\"Column ({column}) not in table columns ({self._data.column_names}).\")\n return self._data.column(column).unique().to_pylist()\n\n def dictionary_encode_column(self, column: str):\n \"\"\" Dictionary encode a column.\n Dictionnary encode can reduce the size of a column with many repetitions (e.g. string labels columns)\n by storing a dictionnary of the strings. 
This only affect the internal storage.\n\n Args:\n columns: str\n \"\"\"\n if column not in self._data.column_names:\n raise ValueError(f\"Column ({column}) not in table columns ({self._data.column_names}).\")\n casted_schema: pa.Schema = self._data.schema\n field_index = casted_schema.get_field_index(column)\n field: pa.Field = casted_schema.field(field_index)\n casted_field = pa.field(field.name, pa.dictionary(pa.int32(), field.type), nullable=False)\n casted_schema.set(field_index, casted_field)\n self._data = self._data.cast(casted_schema)\n\n def flatten(self):\n \"\"\" Flatten the Table.\n Each column with a struct type is flattened into one column per struct field.\n Other columns are left unchanged.\n \"\"\"\n self._data = self._data.flatten()\n\n def __len__(self):\n return self._data.num_rows\n\n def __iter__(self):\n format_type = self._format_type\n format_columns = self._format_columns\n output_all_columns = self._output_all_columns\n for index in range(self._data.num_rows):\n yield self._getitem(\n index, format_type=format_type, format_columns=format_columns, output_all_columns=output_all_columns,\n )\n\n def __repr__(self):\n schema_str = dict((a, str(b)) for a, b in zip(self._data.schema.names, self._data.schema.types))\n return f\"Dataset(schema: {schema_str}, num_rows: {self.num_rows})\"\n\n @property\n def format(self):\n return {\n \"type\": \"python\" if self._format_type is None else self._format_type,\n \"columns\": self.column_names if self._format_columns is None else self._format_columns,\n \"output_all_columns\": self._output_all_columns,\n }\n\n def set_format(self, type: Optional[str] = None, columns: Optional[List] = None, output_all_columns: bool = False):\n \"\"\" Set __getitem__ return format (type and columns)\n\n Args:\n type (Optional ``str``): output type selected in [None, 'numpy', 'torch', 'tensorflow', 'pandas']\n None means __getitem__ returns python objects (default)\n columns (Optional ``List[str]``): columns to format in the output\n None means __getitem__ returns all columns (default)\n output_all_columns (``bool`` default to False): keep un-formated columns as well in the output (as python objects)\n \"\"\"\n # Check return type\n if type == \"torch\":\n try:\n import torch # noqa: F401\n except ImportError:\n logger.error(\"PyTorch needs to be installed to be able to return PyTorch tensors.\")\n elif type == \"tensorflow\":\n try:\n import tensorflow # noqa: F401\n except ImportError:\n logger.error(\"Tensorflow needs to be installed to be able to return Tensorflow tensors.\")\n else:\n assert (\n type is None or type == \"numpy\" or type == \"pandas\"\n ), \"Return type should be None or selected in ['numpy', 'torch', 'tensorflow', 'pandas'].\"\n\n # Check filter column\n if isinstance(columns, str):\n columns = [columns]\n if columns is not None and any(col not in self._data.column_names for col in columns):\n raise ValueError(\n \"Columns {} not in the dataset. 
Current columns in the dataset: {}\".format(\n list(filter(lambda col: col not in self._data.column_names, columns)), self._data.column_names\n )\n )\n\n self._format_type = type\n self._format_columns = columns\n self._output_all_columns = output_all_columns\n logger.info(\n \"Set __getitem__(key) output type to %s for %s columns \"\n \" (when key is int or slice) and %s output other (un-formated) columns.\",\n \"python objects\" if type is None else type,\n \"no\" if columns is None else str(columns),\n \"do\" if output_all_columns else \"don't\",\n )\n\n def reset_format(self):\n \"\"\" Reset __getitem__ return format to python objects and all columns.\n\n Same as ``self.set_format()``\n \"\"\"\n self.set_format()\n\n def _convert_outputs(self, outputs, format_type=None, format_columns=None, output_all_columns=False):\n if format_type is None:\n if output_all_columns:\n return outputs\n if isinstance(outputs, dict) and format_columns is not None:\n return {k: v for k, v in outputs.items() if k in format_columns}\n return outputs\n\n if format_type == \"numpy\":\n import numpy\n\n command = numpy.array\n elif format_type == \"torch\":\n import torch\n\n command = torch.tensor\n elif format_type == \"tensorflow\":\n import tensorflow\n\n command = tensorflow.ragged.constant\n else:\n\n def identity(x):\n return x\n\n command = identity\n\n if isinstance(outputs, (list, tuple)):\n return command(outputs)\n else:\n output_dict = {}\n for k, v in outputs.items():\n if format_columns is not None and k not in format_columns and not output_all_columns:\n continue\n if format_columns is None or k in format_columns:\n v = command(v)\n output_dict[k] = v\n return output_dict\n\n @staticmethod\n def _unnest(py_dict):\n return dict((key, array[0]) for key, array in py_dict.items())\n\n @staticmethod\n def _nest(py_dict):\n return dict((key, [elem]) for key, elem in py_dict.items())\n\n def _getitem(\n self, key: Union[int, slice, str], format_type=None, format_columns=None, output_all_columns=False\n ) -> Union[Dict, List]:\n \"\"\" Can be used to index columns (by string names) or rows (by integer index or slices)\n \"\"\"\n if isinstance(key, int):\n if key < 0:\n key = self._data.num_rows + key\n if key >= self._data.num_rows:\n raise IndexError(f\"Index ({key}) outside of table length ({self._data.num_rows}).\")\n if format_type is not None and format_type == \"pandas\":\n outputs = self._data.slice(key, 1).to_pandas()\n else:\n outputs = self._unnest(self._data.slice(key, 1).to_pydict())\n elif isinstance(key, slice):\n key_indices = key.indices(self._data.num_rows)\n if key_indices[2] != 1 or key_indices[1] < key_indices[0]:\n raise ValueError(\"Slicing can only take contiguous and ordered slices.\")\n if format_type is not None and format_type == \"pandas\":\n outputs = self._data.slice(key_indices[0], key_indices[1] - key_indices[0]).to_pandas()\n else:\n outputs = self._data.slice(key_indices[0], key_indices[1] - key_indices[0]).to_pydict()\n elif isinstance(key, str):\n if key not in self._data.column_names:\n raise ValueError(f\"Column ({key}) not in table columns ({self._data.column_names}).\")\n if format_type is not None:\n if format_columns is None or key in format_columns:\n if format_type == \"pandas\":\n outputs = self._data[key].to_pandas()\n elif format_type == \"numpy\":\n outputs = np.concatenate([arr.to_numpy() for arr in self._data[key].chunks])\n else:\n outputs = self._convert_outputs(self._data[key].to_pylist(), format_type=format_type)\n else:\n outputs = 
self._data[key].to_pylist()\n else:\n outputs = self._data[key].to_pylist()\n else:\n raise ValueError(\"Can only get row(s) (int or slice) or columns (string).\")\n\n if (\n (format_type is not None or format_columns is not None)\n and not isinstance(key, str)\n and format_type != \"pandas\"\n ):\n outputs = self._convert_outputs(\n outputs, format_type=format_type, format_columns=format_columns, output_all_columns=output_all_columns\n )\n return outputs\n\n def __getitem__(self, key: Union[int, slice, str]) -> Union[Dict, List]:\n \"\"\" Can be used to index columns (by string names) or rows (by integer index)\n \"\"\"\n return self._getitem(\n key,\n format_type=self._format_type,\n format_columns=self._format_columns,\n output_all_columns=self._output_all_columns,\n )\n\n def cleanup_cache_files(self):\n \"\"\" Clean up all cache files in the dataset cache directory, excepted the currently used cache file if there is one.\n Be carefull when running this command that no other process is currently using other cache files.\n\n Return:\n Number of removed files\n \"\"\"\n if not self._data_files or \"filename\" not in self._data_files[0]:\n return None\n current_cache_file = os.path.abspath(self._data_files[0][\"filename\"])\n cache_directory = os.path.dirname(current_cache_file)\n logger.info(f\"Listing files in {cache_directory}\")\n files: List[str] = os.listdir(cache_directory)\n files_to_remove = []\n for f_name in files:\n full_name = os.path.abspath(os.path.join(cache_directory, f_name))\n if f_name.startswith(\"cache-\") and f_name.endswith(\".arrow\"):\n if full_name == current_cache_file:\n logger.info(f\"Keeping current cache file at {full_name}\")\n continue\n files_to_remove.append(full_name)\n for file_path in files_to_remove:\n logger.info(f\"Removing {file_path}\")\n os.remove(file_path)\n return len(files_to_remove)\n\n def _get_cache_file_path(self, function, cache_kwargs):\n \"\"\" Find a unique name from the filenames, kwargs and the function \"\"\"\n if not self._data_files or \"filename\" not in self._data_files[0]:\n return None\n previous_files_string = \"-\".join(\n \"-\".join(str(k) + \"-\" + str(v) for k, v in f.items()) for f in self._data_files\n )\n cache_kwargs_string = \"-\".join(str(k) + \"-\" + str(v) for k, v in cache_kwargs.items())\n function_bytes = dumps(function)\n output_hash = hashlib.md5(\n previous_files_string.encode(\"utf-8\") + cache_kwargs_string.encode(\"utf-8\") + function_bytes\n ).hexdigest()\n cache_file_name = \"cache-\" + output_hash + \".arrow\"\n cache_directory = os.path.dirname(self._data_files[0][\"filename\"])\n cache_file_path = os.path.join(cache_directory, cache_file_name)\n return cache_file_path\n\n def map(\n self,\n function,\n with_indices: bool = False,\n batched: bool = False,\n batch_size: Optional[int] = 1000,\n remove_columns: Optional[List[str]] = None,\n keep_in_memory: bool = False,\n load_from_cache_file: bool = True,\n cache_file_name: Optional[str] = None,\n writer_batch_size: Optional[int] = 1000,\n arrow_schema: Optional[pa.Schema] = None,\n disable_nullable: bool = True,\n ):\n \"\"\" Apply a function to all the elements in the table (individually or in batches)\n and update the table (if function does updated examples).\n\n Args:\n `function` (`callable`): with one of the following signature:\n - `function(example: Dict) -> Union[Dict, Any]` if `batched=False` and `with_indices=False`\n - `function(example: Dict, indices: int) -> Union[Dict, Any]` if `batched=False` and `with_indices=True`\n - 
`function(batch: Dict[List]) -> Union[Dict, Any]` if `batched=True` and `with_indices=False`\n - `function(batch: Dict[List], indices: List[int]) -> Union[Dict, Any]` if `batched=True` and `with_indices=True`\n `with_indices` (`bool`, default: `False`): Provide example indices to `function`\n `batched` (`bool`, default: `False`): Provide batch of examples to `function`\n `batch_size` (`Optional[int]`, default: `1000`): Number of examples per batch provided to `function` if `batched=True`\n `batch_size <= 0` or `batch_size == None`: Provide the full dataset as a single batch to `function`\n `remove_columns` (`Optional[List[str]]`, default: `None`): Remove a selection of columns while doing the mapping.\n Columns will be removed before updating the examples with the output of `function`, i.e. if `function` is adding\n columns with names in `remove_columns`, these columns will be kept.\n `keep_in_memory` (`bool`, default: `False`): Keep the dataset in memory instead of writing it to a cache file.\n `load_from_cache_file` (`bool`, default: `True`): If a cache file storing the current computation from `function`\n can be identified, use it instead of recomputing.\n `cache_file_name` (`Optional[str]`, default: `None`): Provide the name of a cache file to use to store the\n results of the computation instead of the automatically generated cache file name.\n `writer_batch_size` (`int`, default: `1000`): Number of rows per write operation for the cache file writer.\n Higher value gives smaller cache files, lower value consume less temporary memory while running `.map()`.\n `arrow_schema` (`Optional[pa.Schema]`, default: `None`): Use a specific Apache Arrow Schema to store the cache file\n instead of the automatically generated one.\n `disable_nullable` (`bool`, default: `True`): Allow null values in the table.\n \"\"\"\n # If the array is empty we do nothing\n if len(self) == 0:\n return self\n\n # Select the columns (arrow columns) to process\n if remove_columns is not None and any(col not in self._data.column_names for col in remove_columns):\n raise ValueError(\n \"Column to remove {} not in the dataset. Current columns in the dataset: {}\".format(\n list(filter(lambda col: col not in self._data.column_names, remove_columns)),\n self._data.column_names,\n )\n )\n\n # If we do batch computation but no batch sze is provided, default to the full dataset\n if batched and (batch_size is None or batch_size <= 0):\n batch_size = self._data.num_rows\n\n # Check if the function returns updated examples\n def does_function_return_dict(inputs, indices):\n \"\"\" Does the function returns a dict. \"\"\"\n processed_inputs = function(inputs, indices) if with_indices else function(inputs)\n does_return_dict = isinstance(processed_inputs, Mapping)\n\n if does_return_dict is False and processed_inputs is not None:\n raise TypeError(\n \"Provided `function` which is applied to all elements of table returns a variable of type {}. Make sure provided `function` returns a variable of type `dict` to update the dataset or `None` if you are only interested in side effects.\".format(\n type(processed_inputs)\n )\n )\n elif isinstance(test_indices, list) and does_return_dict is True:\n all_dict_values_are_lists = all(isinstance(value, list) for value in processed_inputs.values())\n if all_dict_values_are_lists is False:\n raise TypeError(\n \"Provided `function` which is applied to all elements of table returns a `dict` of types {}. 
When using `batched=True`, make sure provided `function` returns a `dict` of types `list`.\".format(\n [type(x) for x in processed_inputs.values()]\n )\n )\n\n return does_return_dict\n\n # We only update the data table (and use the cache) if the function returns a dict.\n # Test it on the first element or a small batch (0, 1) for batched inputs\n test_inputs = self[:2] if batched else self[0]\n test_indices = [0, 1] if batched else 0\n update_data = does_function_return_dict(test_inputs, test_indices)\n\n def apply_function_on_filtered_inputs(inputs, indices):\n \"\"\" Utility to apply the function on a selection of columns. \"\"\"\n processed_inputs = function(inputs, indices) if with_indices else function(inputs)\n if not update_data:\n return None # Nothing to update, let's move on\n if remove_columns is not None:\n for column in remove_columns:\n inputs.pop(column)\n if self._format_type is not None:\n inputs = self._getitem(\n key=(indices if isinstance(indices, int) else slice(indices[0], indices[-1])),\n format_type=None,\n format_columns=None,\n )\n inputs.update(processed_inputs)\n return inputs\n\n # Find the output schema if none is given\n test_inputs = self[:2] if batched else self[0]\n test_indices = [0, 1] if batched else 0\n test_output = apply_function_on_filtered_inputs(test_inputs, test_indices)\n if arrow_schema is None and update_data:\n if not batched:\n test_output = self._nest(test_output)\n test_output = convert_tuples_in_lists(test_output)\n arrow_schema = pa.Table.from_pydict(test_output).schema\n if disable_nullable:\n arrow_schema = pa.schema(pa.field(field.name, field.type, nullable=False) for field in arrow_schema)\n\n # Check if we've already cached this computation (indexed by a hash)\n if self._data_files and update_data:\n if cache_file_name is None:\n # we create a unique hash from the function, current dataset file and the mapping args\n cache_kwargs = {\n \"with_indices\": with_indices,\n \"batched\": batched,\n \"batch_size\": batch_size,\n \"remove_columns\": remove_columns,\n \"keep_in_memory\": keep_in_memory,\n \"load_from_cache_file\": load_from_cache_file,\n \"cache_file_name\": cache_file_name,\n \"writer_batch_size\": writer_batch_size,\n \"arrow_schema\": arrow_schema,\n \"disable_nullable\": disable_nullable,\n }\n cache_file_name = self._get_cache_file_path(function, cache_kwargs)\n if os.path.exists(cache_file_name) and load_from_cache_file:\n logger.info(\"Loading cached processed dataset at %s\", cache_file_name)\n return Dataset.from_file(cache_file_name)\n\n # Prepare output buffer and batched writer in memory or on file if we update the table\n if update_data:\n if keep_in_memory or not self._data_files:\n buf_writer = pa.BufferOutputStream()\n writer = ArrowWriter(schema=arrow_schema, stream=buf_writer, writer_batch_size=writer_batch_size)\n else:\n buf_writer = None\n logger.info(\"Caching processed dataset at %s\", cache_file_name)\n writer = ArrowWriter(schema=arrow_schema, path=cache_file_name, writer_batch_size=writer_batch_size)\n\n # Loop over single examples or batches and write to buffer/file if examples are to be updated\n if not batched:\n for i, example in tqdm(enumerate(self)):\n example = apply_function_on_filtered_inputs(example, i)\n if update_data:\n writer.write(example)\n else:\n for i in tqdm(range(0, len(self), batch_size)):\n batch = self[i : i + batch_size]\n indices = list(range(*(slice(i, i + batch_size).indices(self._data.num_rows)))) # Something simpler?\n batch = apply_function_on_filtered_inputs(batch, 
indices)\n if update_data:\n writer.write_batch(batch)\n\n if update_data:\n writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file\n\n # Create new Dataset from buffer or file\n if buf_writer is None:\n return Dataset.from_file(cache_file_name)\n else:\n return Dataset.from_buffer(buf_writer.getvalue())\n else:\n return self\n"}
|
{"src/nlp/arrow_dataset.py": [{"type": "function", "name": "Dataset.filter", "lines": [562, 623], "signature": "def filter(self, function, with_indices=False, **kwargs):", "doc": "Apply a filter function to all the elements in the table in batches\nand update the table so that the dataset only includes examples according to the filter function.\n\nArgs:\n `function` (`callable`): with one of the following signature:\n - `function(example: Dict) -> bool` if `with_indices=False`\n - `function(example: Dict, indices: int) -> bool` if `with_indices=True`\n `with_indices` (`bool`, default: `False`): Provide example indices to `function`. Note that in this case the signature of `function` should be `def function(example, idx): ...`.\n `batch_size` (`Optional[int]`, default: `1000`): Number of examples per batch provided to `function` if `batched=True`\n `batch_size <= 0` or `batch_size == None`: Provide the full dataset as a single batch to `function`\n `remove_columns` (`Optional[List[str]]`, default: `None`): Remove a selection of columns while doing the mapping.\n Columns will be removed before updating the examples with the output of `function`, i.e. if `function` is adding\n columns with names in `remove_columns`, these columns will be kept.\n `keep_in_memory` (`bool`, default: `False`): Keep the dataset in memory instead of writing it to a cache file.\n `load_from_cache_file` (`bool`, default: `True`): If a cache file storing the current computation from `function`\n can be identified, use it instead of recomputing.\n `cache_file_name` (`Optional[str]`, default: `None`): Provide the name of a cache file to use to store the\n results of the computation instead of the automatically generated cache file name.\n `writer_batch_size` (`int`, default: `1000`): Number of rows per write operation for the cache file writer.\n Higher value gives smaller cache files, lower value consume less temporary memory while running `.map()`.\n `disable_nullable` (`bool`, default: `True`): Allow null values in the table."}, {"type": "function", "name": "Dataset.filter.map_function", "lines": [587, 614], "signature": "def map_function(batch, *args):", "doc": ""}]}
| null |
["tests/test_arrow_dataset.py::BaseDatasetTest::test_filter"]
|
["tests/test_arrow_dataset.py::BaseDatasetTest::test_map", "tests/test_arrow_dataset.py::BaseDatasetTest::test_map_batched", "tests/test_arrow_dataset.py::BaseDatasetTest::test_remove_colums"]
|
5142a8cf61d8a4495eda3d91dc4283a6df01ea14
|
{"first_commit_time": 1590682463.0, "pr_title": "[arrow_dataset.py] add new filter function", "pr_body": "The `.map()` function is super useful, but can IMO a bit tedious when filtering certain examples.\r\nI think, filtering out examples is also a very common operation people would like to perform on datasets.\r\n\r\nThis PR is a proposal to add a `.filter()` function in the same spirit than the `.map()` function.\r\n\r\nHere is a sample code you can play around with:\r\n\r\n```python\r\nds = nlp.load_dataset(\"squad\", split=\"validation[:10%]\")\r\n\r\n\r\ndef remove_under_idx_5(example, idx):\r\n return idx < 5\r\n\r\n\r\ndef only_keep_examples_with_is_in_context(example):\r\n return \"is\" in example[\"context\"]\r\n\r\n\r\nresult_keep_only_first_5 = ds.filter(remove_under_idx_5, with_indices=True, load_from_cache_file=False)\r\nresult_keep_examples_with_is_in_context = ds.filter(only_keep_examples_with_is_in_context, load_from_cache_file=False)\r\n\r\nprint(\"Original number of examples: {}\".format(len(ds)))\r\nprint(\"First five examples number of examples: {}\".format(len(result_keep_only_first_5)))\r\nprint(\"Is in context examples number of examples: {}\".format(len(result_keep_examples_with_is_in_context)))\r\n```", "pr_timeline": [{"time": 1590685778.0, "comment": "I agree that a `.filter` method would be VERY useful and appreciated. I'm not a big fan of using `flatten_nested` as it completely breaks down the structure of the example and it may create bugs. Right now I think it may not work for nested structures. Maybe there's a simpler way that we've not figured out yet."}, {"time": 1590686270.0, "comment": "Instead of flattening everything and rebuilding the example, maybe we can try to access the examples like this:\r\n```python\r\nfor i in range(num_examples):\r\n example = map_nested(lambda x: x[i], batch)\r\n # ... then test to keep it or not\r\n```"}, {"time": 1590688138.0, "comment": "> Instead of flattening everything and rebuilding the example, maybe we can try to access the examples like this:\r\n> \r\n> ```python\r\n> for i in range(num_examples):\r\n> example = map_nested(lambda x: x[i], batch)\r\n> # ... then test to keep it or not\r\n> ```\r\n\r\nAwesome I'll check it out :-) "}, {"time": 1590689264.0, "comment": "> Instead of flattening everything and rebuilding the example, maybe we can try to access the examples like this:\r\n> \r\n> ```python\r\n> for i in range(num_examples):\r\n> example = map_nested(lambda x: x[i], batch)\r\n> # ... then test to keep it or not\r\n> ```\r\n\r\nAwesome this function is definitely much nicer!"}, {"time": 1590689557.0, "comment": "Actually I just realized that `map_nested` might not work either as it applies the function at the very last list of the structure. However we can imagine that a single example has also a list in its structure:\r\n```python\r\none_example = {\r\n \"title\": \"blabla\",\r\n \"paragraphs\": [\r\n \"p1\", \"p2\", ...\r\n ]\r\n}\r\n```"}, {"time": 1590689547.0, "comment": "We'll probably have to take into account the `dset._data.schema` to extract the examples from the batch."}, {"time": 1590690109.0, "comment": "> Actually I just realized that `map_nested` might not work either as it applies the function at the very last list of the structure. However we can imagine that a single example has also a list in its structure:\r\n> \r\n> ```python\r\n> one_example = {\r\n> \"title\": \"blabla\",\r\n> \"paragraphs\": [\r\n> \"p1\", \"p2\", ...\r\n> ]\r\n> }\r\n> ```\r\n\r\nThey both work. 
I'm using it on trivia_qa which is pretty nested. If you use the option `dict_only=True` I think it's fine."}, {"time": 1590690468.0, "comment": "> We'll probably have to take into account the `dset._data.schema` to extract the examples from the batch.\r\n\r\nWhy? "}, {"time": 1590691553.0, "comment": "Actually it's fine. I guess this is going to be yet another thing to be unit-tested just to make sure ^^"}, {"time": 1590697487.0, "comment": "Yes, I will need to add tests and documentation! \r\n@thomwolf - would a function like this be ok? It abstracts `.map()` a bit which might be hard to understand. "}, {"time": 1590745786.0, "comment": "I tried on some datasets with nested structure and it works fine ! Great work :D \r\n"}, {"time": 1590746842.0, "comment": "Awesome :-), I will add documentation and some simple unittests"}, {"time": 1590751936.0, "comment": "Ok merging!"}], "issues": {}}
|
|
huggingface/datasets
| 6,318
|
https://github.com/huggingface/datasets/pull/6318
|
huggingface__datasets-6318
|
[]
|
3aeb078ba1afd713e901df43343c160877403d07
|
diff --git a/src/datasets/utils/py_utils.py b/src/datasets/utils/py_utils.py
index 912824fa7f3..243bc0f99c9 100644
--- a/src/datasets/utils/py_utils.py
+++ b/src/datasets/utils/py_utils.py
@@ -736,6 +736,29 @@ def proxy(func):
     return proxy
 
 
+if config.DILL_VERSION < version.parse("0.3.6"):
+
+ @pklregister(set)
+ def _save_set(pickler, obj):
+ dill._dill.log.info(f"Se: {obj}")
+ from datasets.fingerprint import Hasher
+
+ args = (sorted(obj, key=Hasher.hash),)
+ pickler.save_reduce(set, args, obj=obj)
+ dill._dill.log.info("# Se")
+
+elif config.DILL_VERSION.release[:3] in [version.parse("0.3.6").release, version.parse("0.3.7").release]:
+
+ @pklregister(set)
+ def _save_set(pickler, obj):
+ dill._dill.logger.trace(pickler, "Se: %s", obj)
+ from datasets.fingerprint import Hasher
+
+ args = (sorted(obj, key=Hasher.hash),)
+ pickler.save_reduce(set, args, obj=obj)
+ dill._dill.logger.trace(pickler, "# Se")
+
+
 if config.DILL_VERSION < version.parse("0.3.6"):
 
     @pklregister(CodeType)
|
diff --git a/tests/test_fingerprint.py b/tests/test_fingerprint.py
index 5538f27554e..f4d5d65744e 100644
--- a/tests/test_fingerprint.py
+++ b/tests/test_fingerprint.py
@@ -2,6 +2,7 @@
import os
import pickle
import subprocess
+from functools import partial
from hashlib import md5
from pathlib import Path
from tempfile import gettempdir
@@ -10,6 +11,7 @@
from unittest import TestCase
 from unittest.mock import patch
 
+import numpy as np
import pytest
 from multiprocess import Pool
 
@@ -254,6 +256,22 @@ def test_hash_same_strings(self):
self.assertEqual(hash1, hash2)
         self.assertEqual(hash1, hash3)
 
+    def test_set_stable(self):
+ rng = np.random.default_rng(42)
+ set_ = {rng.random() for _ in range(10_000)}
+ expected_hash = Hasher.hash(set_)
+ assert expected_hash == Pool(1).apply_async(partial(Hasher.hash, set(set_))).get()
+
+ def test_set_doesnt_depend_on_order(self):
+ set_ = set("abc")
+ hash1 = md5(datasets.utils.py_utils.dumps(set_)).hexdigest()
+ set_ = set("def")
+ hash2 = md5(datasets.utils.py_utils.dumps(set_)).hexdigest()
+ set_ = set("cba")
+ hash3 = md5(datasets.utils.py_utils.dumps(set_)).hexdigest()
+ self.assertEqual(hash1, hash3)
+ self.assertNotEqual(hash1, hash2)
+
@require_tiktoken
def test_hash_tiktoken_encoding(self):
import tiktoken
| 2023-10-19T12:19:13
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"src/datasets/utils/py_utils.py": "# Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lint as: python3\n\"\"\"Some python utils function and classes.\n\n\"\"\"\n\nimport copy\nimport functools\nimport itertools\nimport multiprocessing.pool\nimport os\nimport queue\nimport re\nimport types\nimport warnings\nfrom contextlib import contextmanager\nfrom dataclasses import fields, is_dataclass\nfrom io import BytesIO as StringIO\nfrom multiprocessing import Manager\nfrom queue import Empty\nfrom shutil import disk_usage\nfrom types import CodeType, FunctionType\nfrom typing import Any, Callable, Dict, Iterable, List, Optional, Set, Tuple, TypeVar, Union\nfrom urllib.parse import urlparse\n\nimport dill\nimport multiprocess\nimport multiprocess.pool\nimport numpy as np\nfrom packaging import version\nfrom tqdm.auto import tqdm\n\nfrom .. import config\nfrom ..parallel import parallel_map\nfrom . import logging\n\n\ntry: # pragma: no branch\n import typing_extensions as _typing_extensions\n from typing_extensions import Final, Literal\nexcept ImportError:\n _typing_extensions = Literal = Final = None\n\n\nlogger = logging.get_logger(__name__)\n\n\n# NOTE: When used on an instance method, the cache is shared across all\n# instances and IS NOT per-instance.\n# See\n# https://stackoverflow.com/questions/14946264/python-lru-cache-decorator-per-instance\n# For @property methods, use @memoized_property below.\nmemoize = functools.lru_cache\n\n\ndef size_str(size_in_bytes):\n \"\"\"Returns a human readable size string.\n\n If size_in_bytes is None, then returns \"Unknown size\".\n\n For example `size_str(1.5 * datasets.units.GiB) == \"1.50 GiB\"`.\n\n Args:\n size_in_bytes: `int` or `None`, the size, in bytes, that we want to\n format as a human-readable size string.\n \"\"\"\n if not size_in_bytes:\n return \"Unknown size\"\n\n _NAME_LIST = [(\"PiB\", 2**50), (\"TiB\", 2**40), (\"GiB\", 2**30), (\"MiB\", 2**20), (\"KiB\", 2**10)]\n\n size_in_bytes = float(size_in_bytes)\n for name, size_bytes in _NAME_LIST:\n value = size_in_bytes / size_bytes\n if value >= 1.0:\n return f\"{value:.2f} {name}\"\n return f\"{int(size_in_bytes)} bytes\"\n\n\ndef convert_file_size_to_int(size: Union[int, str]) -> int:\n \"\"\"\n Converts a size expressed as a string with digits an unit (like `\"50MB\"`) to an integer (in bytes).\n\n Args:\n size (`int` or `str`): The size to convert. 
Will be directly returned if an `int`.\n\n Example:\n\n ```py\n >>> convert_file_size_to_int(\"1MiB\")\n 1048576\n ```\n \"\"\"\n if isinstance(size, int):\n return size\n if size.upper().endswith(\"PIB\"):\n return int(size[:-3]) * (2**50)\n if size.upper().endswith(\"TIB\"):\n return int(size[:-3]) * (2**40)\n if size.upper().endswith(\"GIB\"):\n return int(size[:-3]) * (2**30)\n if size.upper().endswith(\"MIB\"):\n return int(size[:-3]) * (2**20)\n if size.upper().endswith(\"KIB\"):\n return int(size[:-3]) * (2**10)\n if size.upper().endswith(\"PB\"):\n int_size = int(size[:-2]) * (10**15)\n return int_size // 8 if size.endswith(\"b\") else int_size\n if size.upper().endswith(\"TB\"):\n int_size = int(size[:-2]) * (10**12)\n return int_size // 8 if size.endswith(\"b\") else int_size\n if size.upper().endswith(\"GB\"):\n int_size = int(size[:-2]) * (10**9)\n return int_size // 8 if size.endswith(\"b\") else int_size\n if size.upper().endswith(\"MB\"):\n int_size = int(size[:-2]) * (10**6)\n return int_size // 8 if size.endswith(\"b\") else int_size\n if size.upper().endswith(\"KB\"):\n int_size = int(size[:-2]) * (10**3)\n return int_size // 8 if size.endswith(\"b\") else int_size\n raise ValueError(f\"`size={size}` is not in a valid format. Use an integer followed by the unit, e.g., '5GB'.\")\n\n\ndef glob_pattern_to_regex(pattern):\n # partially taken from fsspec:\n # https://github.com/fsspec/filesystem_spec/blob/697d0f8133d8a5fbc3926e4761d7ecd51337ce50/fsspec/asyn.py#L735\n return (\n pattern.replace(\"\\\\\", r\"\\\\\")\n .replace(\".\", r\"\\.\")\n .replace(\"*\", \".*\")\n .replace(\"+\", r\"\\+\")\n .replace(\"//\", \"/\")\n .replace(\"(\", r\"\\(\")\n .replace(\")\", r\"\\)\")\n .replace(\"|\", r\"\\|\")\n .replace(\"^\", r\"\\^\")\n .replace(\"$\", r\"\\$\")\n .rstrip(\"/\")\n .replace(\"?\", \".\")\n )\n\n\ndef string_to_dict(string: str, pattern: str) -> Dict[str, str]:\n \"\"\"Un-format a string using a python f-string pattern.\n From https://stackoverflow.com/a/36838374\n\n Example::\n\n >>> p = 'hello, my name is {name} and I am a {age} year old {what}'\n >>> s = p.format(name='cody', age=18, what='quarterback')\n >>> s\n 'hello, my name is cody and I am a 18 year old quarterback'\n >>> string_to_dict(s, p)\n {'age': '18', 'name': 'cody', 'what': 'quarterback'}\n\n Args:\n string (str): input string\n pattern (str): pattern formatted like a python f-string\n\n Returns:\n Dict[str, str]: dictionary of variable -> value, retrieved from the input using the pattern\n\n Raises:\n ValueError: if the string doesn't match the pattern\n \"\"\"\n regex = re.sub(r\"{(.+?)}\", r\"(?P<_\\1>.+)\", pattern)\n result = re.search(regex, string)\n if result is None:\n raise ValueError(f\"String {string} doesn't match the pattern {pattern}\")\n values = list(result.groups())\n keys = re.findall(r\"{(.+?)}\", pattern)\n _dict = dict(zip(keys, values))\n return _dict\n\n\ndef asdict(obj):\n \"\"\"Convert an object to its dictionary representation recursively.\n\n <Added version=\"2.4.0\"/>\n \"\"\"\n\n # Implementation based on https://docs.python.org/3/library/dataclasses.html#dataclasses.asdict\n\n def _is_dataclass_instance(obj):\n # https://docs.python.org/3/library/dataclasses.html#dataclasses.is_dataclass\n return is_dataclass(obj) and not isinstance(obj, type)\n\n def _asdict_inner(obj):\n if _is_dataclass_instance(obj):\n result = {}\n for f in fields(obj):\n value = _asdict_inner(getattr(obj, f.name))\n if not f.init or value != f.default or 
f.metadata.get(\"include_in_asdict_even_if_is_default\", False):\n result[f.name] = value\n return result\n elif isinstance(obj, tuple) and hasattr(obj, \"_fields\"):\n # obj is a namedtuple\n return type(obj)(*[_asdict_inner(v) for v in obj])\n elif isinstance(obj, (list, tuple)):\n # Assume we can create an object of this type by passing in a\n # generator (which is not true for namedtuples, handled\n # above).\n return type(obj)(_asdict_inner(v) for v in obj)\n elif isinstance(obj, dict):\n return {_asdict_inner(k): _asdict_inner(v) for k, v in obj.items()}\n else:\n return copy.deepcopy(obj)\n\n if not isinstance(obj, dict) and not _is_dataclass_instance(obj):\n raise TypeError(f\"{obj} is not a dict or a dataclass\")\n\n return _asdict_inner(obj)\n\n\n@contextmanager\ndef temporary_assignment(obj, attr, value):\n \"\"\"Temporarily assign obj.attr to value.\"\"\"\n original = getattr(obj, attr, None)\n setattr(obj, attr, value)\n try:\n yield\n finally:\n setattr(obj, attr, original)\n\n\n@contextmanager\ndef temp_seed(seed: int, set_pytorch=False, set_tensorflow=False):\n \"\"\"Temporarily set the random seed. This works for python numpy, pytorch and tensorflow.\"\"\"\n np_state = np.random.get_state()\n np.random.seed(seed)\n\n if set_pytorch and config.TORCH_AVAILABLE:\n import torch\n\n torch_state = torch.random.get_rng_state()\n torch.random.manual_seed(seed)\n\n if torch.cuda.is_available():\n torch_cuda_states = torch.cuda.get_rng_state_all()\n torch.cuda.manual_seed_all(seed)\n\n if set_tensorflow and config.TF_AVAILABLE:\n import tensorflow as tf\n from tensorflow.python.eager import context as tfpycontext\n\n tf_state = tf.random.get_global_generator()\n temp_gen = tf.random.Generator.from_seed(seed)\n tf.random.set_global_generator(temp_gen)\n\n if not tf.executing_eagerly():\n raise ValueError(\"Setting random seed for TensorFlow is only available in eager mode\")\n\n tf_context = tfpycontext.context() # eager mode context\n tf_seed = tf_context._seed\n tf_rng_initialized = hasattr(tf_context, \"_rng\")\n if tf_rng_initialized:\n tf_rng = tf_context._rng\n tf_context._set_global_seed(seed)\n\n try:\n yield\n finally:\n np.random.set_state(np_state)\n\n if set_pytorch and config.TORCH_AVAILABLE:\n torch.random.set_rng_state(torch_state)\n if torch.cuda.is_available():\n torch.cuda.set_rng_state_all(torch_cuda_states)\n\n if set_tensorflow and config.TF_AVAILABLE:\n tf.random.set_global_generator(tf_state)\n\n tf_context._seed = tf_seed\n if tf_rng_initialized:\n tf_context._rng = tf_rng\n else:\n delattr(tf_context, \"_rng\")\n\n\ndef unique_values(values):\n \"\"\"Iterate over iterable and return only unique values in order.\"\"\"\n seen = set()\n for value in values:\n if value not in seen:\n seen.add(value)\n yield value\n\n\ndef no_op_if_value_is_null(func):\n \"\"\"If the value is None, return None, else call `func`.\"\"\"\n\n def wrapper(value):\n return func(value) if value is not None else None\n\n return wrapper\n\n\ndef first_non_null_value(iterable):\n \"\"\"Return the index and the value of the first non-null value in the iterable. 
If all values are None, return -1 as index.\"\"\"\n for i, value in enumerate(iterable):\n if value is not None:\n return i, value\n return -1, None\n\n\ndef zip_dict(*dicts):\n \"\"\"Iterate over items of dictionaries grouped by their keys.\"\"\"\n for key in unique_values(itertools.chain(*dicts)): # set merge all keys\n # Will raise KeyError if the dict don't have the same keys\n yield key, tuple(d[key] for d in dicts)\n\n\nclass NonMutableDict(dict):\n \"\"\"Dict where keys can only be added but not modified.\n\n Will raise an error if the user try to overwrite one key. The error message\n can be customized during construction. It will be formatted using {key} for\n the overwritten key.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n self._error_msg = kwargs.pop(\n \"error_msg\",\n \"Try to overwrite existing key: {key}\",\n )\n if kwargs:\n raise ValueError(\"NonMutableDict cannot be initialized with kwargs.\")\n super().__init__(*args, **kwargs)\n\n def __setitem__(self, key, value):\n if key in self:\n raise ValueError(self._error_msg.format(key=key))\n return super().__setitem__(key, value)\n\n def update(self, other):\n if any(k in self for k in other):\n raise ValueError(self._error_msg.format(key=set(self) & set(other)))\n return super().update(other)\n\n\nclass classproperty(property): # pylint: disable=invalid-name\n \"\"\"Descriptor to be used as decorator for @classmethods.\"\"\"\n\n def __get__(self, obj, objtype=None):\n return self.fget.__get__(None, objtype)()\n\n\ndef _single_map_nested(args):\n \"\"\"Apply a function recursively to each element of a nested data struct.\"\"\"\n function, data_struct, types, rank, disable_tqdm, desc = args\n\n # Singleton first to spare some computation\n if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\n return function(data_struct)\n\n # Reduce logging to keep things readable in multiprocessing with tqdm\n if rank is not None and logging.get_verbosity() < logging.WARNING:\n logging.set_verbosity_warning()\n # Print at least one thing to fix tqdm in notebooks in multiprocessing\n # see https://github.com/tqdm/tqdm/issues/485#issuecomment-473338308\n if rank is not None and not disable_tqdm and any(\"notebook\" in tqdm_cls.__name__ for tqdm_cls in tqdm.__mro__):\n print(\" \", end=\"\", flush=True)\n\n # Loop over single examples or batches and write to buffer/file if examples are to be updated\n pbar_iterable = data_struct.items() if isinstance(data_struct, dict) else data_struct\n pbar_desc = (desc + \" \" if desc is not None else \"\") + \"#\" + str(rank) if rank is not None else desc\n with logging.tqdm(pbar_iterable, disable=disable_tqdm, position=rank, unit=\"obj\", desc=pbar_desc) as pbar:\n if isinstance(data_struct, dict):\n return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\n else:\n mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\n if isinstance(data_struct, list):\n return mapped\n elif isinstance(data_struct, tuple):\n return tuple(mapped)\n else:\n return np.array(mapped)\n\n\ndef map_nested(\n function: Callable[[Any], Any],\n data_struct: Any,\n dict_only: bool = False,\n map_list: bool = True,\n map_tuple: bool = False,\n map_numpy: bool = False,\n num_proc: Optional[int] = None,\n parallel_min_length: int = 2,\n types: Optional[tuple] = None,\n disable_tqdm: bool = True,\n desc: Optional[str] = None,\n) -> Any:\n \"\"\"Apply a function recursively to each element of a nested data struct.\n\n Use multiprocessing if 
num_proc > 1 and the length of data_struct is greater than or equal to\n `parallel_min_length`.\n\n <Changed version=\"2.5.0\">\n\n Before version 2.5.0, multiprocessing was not used if `num_proc` was greater than or equal to ``len(iterable)``.\n\n Now, if `num_proc` is greater than or equal to ``len(iterable)``, `num_proc` is set to ``len(iterable)`` and\n multiprocessing is used.\n\n </Changed>\n\n Args:\n function (`Callable`): Function to be applied to `data_struct`.\n data_struct (`Any`): Data structure to apply `function` to.\n dict_only (`bool`, default `False`): Whether only apply `function` recursively to `dict` values in\n `data_struct`.\n map_list (`bool`, default `True`): Whether also apply `function` recursively to `list` elements (besides `dict`\n values).\n map_tuple (`bool`, default `False`): Whether also apply `function` recursively to `tuple` elements (besides\n `dict` values).\n map_numpy (`bool, default `False`): Whether also apply `function` recursively to `numpy.array` elements (besides\n `dict` values).\n num_proc (`int`, *optional*): Number of processes.\n parallel_min_length (`int`, default `2`): Minimum length of `data_struct` required for parallel\n processing.\n <Added version=\"2.5.0\"/>\n types (`tuple`, *optional*): Additional types (besides `dict` values) to apply `function` recursively to their\n elements.\n disable_tqdm (`bool`, default `True`): Whether to disable the tqdm progressbar.\n desc (`str`, *optional*): Prefix for the tqdm progressbar.\n\n Returns:\n `Any`\n \"\"\"\n if types is None:\n types = []\n if not dict_only:\n if map_list:\n types.append(list)\n if map_tuple:\n types.append(tuple)\n if map_numpy:\n types.append(np.ndarray)\n types = tuple(types)\n\n # Singleton\n if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\n return function(data_struct)\n\n disable_tqdm = disable_tqdm or not logging.is_progress_bar_enabled()\n iterable = list(data_struct.values()) if isinstance(data_struct, dict) else data_struct\n\n if num_proc is None:\n num_proc = 1\n if num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length:\n mapped = [\n _single_map_nested((function, obj, types, None, True, None))\n for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\n ]\n else:\n with warnings.catch_warnings():\n warnings.filterwarnings(\n \"ignore\",\n message=\".* is experimental and might be subject to breaking changes in the future\\\\.$\",\n category=UserWarning,\n )\n mapped = parallel_map(function, iterable, num_proc, types, disable_tqdm, desc, _single_map_nested)\n\n if isinstance(data_struct, dict):\n return dict(zip(data_struct.keys(), mapped))\n else:\n if isinstance(data_struct, list):\n return mapped\n elif isinstance(data_struct, tuple):\n return tuple(mapped)\n else:\n return np.array(mapped)\n\n\nclass NestedDataStructure:\n def __init__(self, data=None):\n self.data = data if data is not None else []\n\n def flatten(self, data=None):\n data = data if data is not None else self.data\n if isinstance(data, dict):\n return self.flatten(list(data.values()))\n elif isinstance(data, (list, tuple)):\n return [flattened for item in data for flattened in self.flatten(item)]\n else:\n return [data]\n\n\ndef has_sufficient_disk_space(needed_bytes, directory=\".\"):\n try:\n free_bytes = disk_usage(os.path.abspath(directory)).free\n except OSError:\n return True\n return needed_bytes < free_bytes\n\n\ndef _convert_github_url(url_path: str) -> Tuple[str, Optional[str]]:\n \"\"\"Convert a link to a file on a github 
repo in a link to the raw github object.\"\"\"\n parsed = urlparse(url_path)\n sub_directory = None\n if parsed.scheme in (\"http\", \"https\", \"s3\") and parsed.netloc == \"github.com\":\n if \"blob\" in url_path:\n if not url_path.endswith(\".py\"):\n raise ValueError(f\"External import from github at {url_path} should point to a file ending with '.py'\")\n url_path = url_path.replace(\"blob\", \"raw\") # Point to the raw file\n else:\n # Parse github url to point to zip\n github_path = parsed.path[1:]\n repo_info, branch = github_path.split(\"/tree/\") if \"/tree/\" in github_path else (github_path, \"master\")\n repo_owner, repo_name = repo_info.split(\"/\")\n url_path = f\"https://github.com/{repo_owner}/{repo_name}/archive/{branch}.zip\"\n sub_directory = f\"{repo_name}-{branch}\"\n return url_path, sub_directory\n\n\ndef get_imports(file_path: str) -> Tuple[str, str, str, str]:\n \"\"\"Find whether we should import or clone additional files for a given processing script.\n And list the import.\n\n We allow:\n - library dependencies,\n - local dependencies and\n - external dependencies whose url is specified with a comment starting from \"# From:' followed by the raw url to a file, an archive or a github repository.\n external dependencies will be downloaded (and extracted if needed in the dataset folder).\n We also add an `__init__.py` to each sub-folder of a downloaded folder so the user can import from them in the script.\n\n Note that only direct import in the dataset processing script will be handled\n We don't recursively explore the additional import to download further files.\n\n Example::\n\n import tensorflow\n import .c4_utils\n import .clicr.dataset-code.build_json_dataset # From: https://raw.githubusercontent.com/clips/clicr/master/dataset-code/build_json_dataset\n \"\"\"\n lines = []\n with open(file_path, encoding=\"utf-8\") as f:\n lines.extend(f.readlines())\n\n logger.debug(f\"Checking {file_path} for additional imports.\")\n imports: List[Tuple[str, str, str, Optional[str]]] = []\n is_in_docstring = False\n for line in lines:\n docstr_start_match = re.findall(r'[\\s\\S]*?\"\"\"[\\s\\S]*?', line)\n\n if len(docstr_start_match) == 1:\n # flip True <=> False only if doctstring\n # starts at line without finishing\n is_in_docstring = not is_in_docstring\n\n if is_in_docstring:\n # import statements in doctstrings should\n # not be added as required dependencies\n continue\n\n match = re.match(r\"^import\\s+(\\.?)([^\\s\\.]+)[^#\\r\\n]*(?:#\\s+From:\\s+)?([^\\r\\n]*)\", line, flags=re.MULTILINE)\n if match is None:\n match = re.match(\n r\"^from\\s+(\\.?)([^\\s\\.]+)(?:[^\\s]*)\\s+import\\s+[^#\\r\\n]*(?:#\\s+From:\\s+)?([^\\r\\n]*)\",\n line,\n flags=re.MULTILINE,\n )\n if match is None:\n continue\n if match.group(1):\n # The import starts with a '.', we will download the relevant file\n if any(imp[1] == match.group(2) for imp in imports):\n # We already have this import\n continue\n if match.group(3):\n # The import has a comment with 'From:', we'll retrieve it from the given url\n url_path = match.group(3)\n url_path, sub_directory = _convert_github_url(url_path)\n imports.append((\"external\", match.group(2), url_path, sub_directory))\n elif match.group(2):\n # The import should be at the same place as the file\n imports.append((\"internal\", match.group(2), match.group(2), None))\n else:\n if match.group(3):\n # The import has a comment with `From: git+https:...`, asks user to pip install from git.\n url_path = match.group(3)\n imports.append((\"library\", 
match.group(2), url_path, None))\n else:\n imports.append((\"library\", match.group(2), match.group(2), None))\n\n return imports\n\n\nclass Pickler(dill.Pickler):\n \"\"\"Same Pickler as the one from dill, but improved for notebooks and shells\"\"\"\n\n dispatch = dill._dill.MetaCatchingDict(dill.Pickler.dispatch.copy())\n\n def save(self, obj, save_persistent_id=True):\n # lazy registration of reduction functions\n obj_type = type(obj)\n if obj_type not in Pickler.dispatch:\n if config.DILL_VERSION < version.parse(\"0.3.6\"):\n\n def dill_log(pickler, msg):\n dill._dill.log.info(msg)\n\n elif config.DILL_VERSION.release[:3] in [version.parse(\"0.3.6\").release, version.parse(\"0.3.7\").release]:\n\n def dill_log(pickler, msg):\n dill._dill.logger.trace(pickler, msg)\n\n if (obj_type.__module__, obj_type.__name__) == (\"_regex\", \"Pattern\"):\n try:\n import regex\n\n @pklregister(obj_type)\n def _save_regex(pickler, obj):\n dill_log(pickler, f\"Re: {obj}\")\n args = (\n obj.pattern,\n obj.flags,\n )\n pickler.save_reduce(regex.compile, args, obj=obj)\n dill_log(pickler, \"# Re\")\n return\n\n except ImportError:\n pass\n elif (obj_type.__module__, obj_type.__name__) == (\"torch\", \"Tensor\"):\n try:\n import torch\n\n @pklregister(obj_type)\n def _save_tensor(pickler, obj):\n # `torch.from_numpy` is not picklable in `torch>=1.11.0`\n def _create_tensor(np_array):\n return torch.from_numpy(np_array)\n\n dill_log(pickler, f\"To: {obj}\")\n args = (obj.detach().cpu().numpy(),)\n pickler.save_reduce(_create_tensor, args, obj=obj)\n dill_log(pickler, \"# To\")\n return\n\n except ImportError:\n pass\n elif (obj_type.__module__, obj_type.__name__) == (\"tiktoken.core\", \"Encoding\"):\n try:\n import tiktoken\n\n @pklregister(obj_type)\n def _save_encoding(pickler, obj):\n dill_log(pickler, f\"Enc: {obj}\")\n args = (obj.name, obj._pat_str, obj._mergeable_ranks, obj._special_tokens)\n pickler.save_reduce(tiktoken.Encoding, args, obj=obj)\n dill_log(pickler, \"# Enc\")\n return\n\n except ImportError:\n pass\n elif obj_type.__module__.startswith(\"spacy.lang\") and any(\n (cls.__module__, cls.__name__) == (\"spacy.language\", \"Language\") for cls in obj_type.__mro__\n ):\n try:\n import spacy\n\n @pklregister(obj_type)\n def _save_lang(pickler, obj):\n def _create_lang(config, bytes_data):\n lang_cls = spacy.util.get_lang_class(config[\"nlp\"][\"lang\"])\n nlp = lang_cls.from_config(config)\n return nlp.from_bytes(bytes_data)\n\n dill_log(pickler, f\"Sp: {obj}\")\n args = (obj.config, obj.to_bytes())\n pickler.save_reduce(_create_lang, args, obj=obj)\n dill_log(pickler, \"# Sp\")\n return\n\n except ImportError:\n pass\n\n dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)\n\n def memoize(self, obj):\n # don't memoize strings since two identical strings can have different python ids\n if type(obj) != str: # noqa: E721\n dill.Pickler.memoize(self, obj)\n\n\ndef dump(obj, file):\n \"\"\"pickle an object to a file\"\"\"\n Pickler(file, recurse=True).dump(obj)\n return\n\n\n@contextmanager\ndef _no_cache_fields(obj):\n try:\n if (\n \"PreTrainedTokenizerBase\" in [base_class.__name__ for base_class in type(obj).__mro__]\n and hasattr(obj, \"cache\")\n and isinstance(obj.cache, dict)\n ):\n with temporary_assignment(obj, \"cache\", {}):\n yield\n else:\n yield\n\n except ImportError:\n yield\n\n\ndef dumps(obj):\n \"\"\"pickle an object to a string\"\"\"\n file = StringIO()\n with _no_cache_fields(obj):\n dump(obj, file)\n return file.getvalue()\n\n\ndef pklregister(t):\n def 
proxy(func):\n Pickler.dispatch[t] = func\n return func\n\n return proxy\n\n\nif config.DILL_VERSION < version.parse(\"0.3.6\"):\n\n @pklregister(CodeType)\n def _save_code(pickler, obj):\n \"\"\"\n From dill._dill.save_code\n This is a modified version that removes the origin (filename + line no.)\n of functions created in notebooks or shells for example.\n \"\"\"\n dill._dill.log.info(f\"Co: {obj}\")\n # The filename of a function is the .py file where it is defined.\n # Filenames of functions created in notebooks or shells start with '<'\n # ex: <ipython-input-13-9ed2afe61d25> for ipython, and <stdin> for shell\n # Filenames of functions created in ipykernel the filename\n # look like f\"{tempdir}/ipykernel_{id1}/{id2}.py\"\n # Moreover lambda functions have a special name: '<lambda>'\n # ex: (lambda x: x).__code__.co_name == \"<lambda>\" # True\n #\n # For the hashing mechanism we ignore where the function has been defined\n # More specifically:\n # - we ignore the filename of special functions (filename starts with '<')\n # - we always ignore the line number\n # - we only use the base name of the file instead of the whole path,\n # to be robust in case a script is moved for example.\n #\n # Only those two lines are different from the original implementation:\n co_filename = (\n \"\"\n if obj.co_filename.startswith(\"<\")\n or (\n len(obj.co_filename.split(os.path.sep)) > 1\n and obj.co_filename.split(os.path.sep)[-2].startswith(\"ipykernel_\")\n )\n or obj.co_name == \"<lambda>\"\n else os.path.basename(obj.co_filename)\n )\n co_firstlineno = 1\n # The rest is the same as in the original dill implementation\n if dill._dill.PY3:\n if hasattr(obj, \"co_posonlyargcount\"):\n args = (\n obj.co_argcount,\n obj.co_posonlyargcount,\n obj.co_kwonlyargcount,\n obj.co_nlocals,\n obj.co_stacksize,\n obj.co_flags,\n obj.co_code,\n obj.co_consts,\n obj.co_names,\n obj.co_varnames,\n co_filename,\n obj.co_name,\n co_firstlineno,\n obj.co_lnotab,\n obj.co_freevars,\n obj.co_cellvars,\n )\n else:\n args = (\n obj.co_argcount,\n obj.co_kwonlyargcount,\n obj.co_nlocals,\n obj.co_stacksize,\n obj.co_flags,\n obj.co_code,\n obj.co_consts,\n obj.co_names,\n obj.co_varnames,\n co_filename,\n obj.co_name,\n co_firstlineno,\n obj.co_lnotab,\n obj.co_freevars,\n obj.co_cellvars,\n )\n else:\n args = (\n obj.co_argcount,\n obj.co_nlocals,\n obj.co_stacksize,\n obj.co_flags,\n obj.co_code,\n obj.co_consts,\n obj.co_names,\n obj.co_varnames,\n co_filename,\n obj.co_name,\n co_firstlineno,\n obj.co_lnotab,\n obj.co_freevars,\n obj.co_cellvars,\n )\n pickler.save_reduce(CodeType, args, obj=obj)\n dill._dill.log.info(\"# Co\")\n return\n\nelif config.DILL_VERSION.release[:3] in [version.parse(\"0.3.6\").release, version.parse(\"0.3.7\").release]:\n # From: https://github.com/uqfoundation/dill/blob/dill-0.3.6/dill/_dill.py#L1104\n @pklregister(CodeType)\n def save_code(pickler, obj):\n dill._dill.logger.trace(pickler, \"Co: %s\", obj)\n\n ############################################################################################################\n # Modification here for huggingface/datasets\n # The filename of a function is the .py file where it is defined.\n # Filenames of functions created in notebooks or shells start with '<'\n # ex: <ipython-input-13-9ed2afe61d25> for ipython, and <stdin> for shell\n # Filenames of functions created in ipykernel the filename\n # look like f\"{tempdir}/ipykernel_{id1}/{id2}.py\"\n # Moreover lambda functions have a special name: '<lambda>'\n # ex: (lambda x: x).__code__.co_name 
== \"<lambda>\" # True\n #\n # For the hashing mechanism we ignore where the function has been defined\n # More specifically:\n # - we ignore the filename of special functions (filename starts with '<')\n # - we always ignore the line number\n # - we only use the base name of the file instead of the whole path,\n # to be robust in case a script is moved for example.\n #\n # Only those two lines are different from the original implementation:\n co_filename = (\n \"\"\n if obj.co_filename.startswith(\"<\")\n or (\n len(obj.co_filename.split(os.path.sep)) > 1\n and obj.co_filename.split(os.path.sep)[-2].startswith(\"ipykernel_\")\n )\n or obj.co_name == \"<lambda>\"\n else os.path.basename(obj.co_filename)\n )\n co_firstlineno = 1\n # The rest is the same as in the original dill implementation, except for the replacements:\n # - obj.co_filename => co_filename\n # - obj.co_firstlineno => co_firstlineno\n ############################################################################################################\n\n if hasattr(obj, \"co_endlinetable\"): # python 3.11a (20 args)\n args = (\n obj.co_lnotab, # for < python 3.10 [not counted in args]\n obj.co_argcount,\n obj.co_posonlyargcount,\n obj.co_kwonlyargcount,\n obj.co_nlocals,\n obj.co_stacksize,\n obj.co_flags,\n obj.co_code,\n obj.co_consts,\n obj.co_names,\n obj.co_varnames,\n co_filename, # Modification for huggingface/datasets ############################################\n obj.co_name,\n obj.co_qualname,\n co_firstlineno, # Modification for huggingface/datasets #########################################\n obj.co_linetable,\n obj.co_endlinetable,\n obj.co_columntable,\n obj.co_exceptiontable,\n obj.co_freevars,\n obj.co_cellvars,\n )\n elif hasattr(obj, \"co_exceptiontable\"): # python 3.11 (18 args)\n args = (\n obj.co_lnotab, # for < python 3.10 [not counted in args]\n obj.co_argcount,\n obj.co_posonlyargcount,\n obj.co_kwonlyargcount,\n obj.co_nlocals,\n obj.co_stacksize,\n obj.co_flags,\n obj.co_code,\n obj.co_consts,\n obj.co_names,\n obj.co_varnames,\n co_filename, # Modification for huggingface/datasets ############################################\n obj.co_name,\n obj.co_qualname,\n co_firstlineno, # Modification for huggingface/datasets #########################################\n obj.co_linetable,\n obj.co_exceptiontable,\n obj.co_freevars,\n obj.co_cellvars,\n )\n elif hasattr(obj, \"co_linetable\"): # python 3.10 (16 args)\n args = (\n obj.co_lnotab, # for < python 3.10 [not counted in args]\n obj.co_argcount,\n obj.co_posonlyargcount,\n obj.co_kwonlyargcount,\n obj.co_nlocals,\n obj.co_stacksize,\n obj.co_flags,\n obj.co_code,\n obj.co_consts,\n obj.co_names,\n obj.co_varnames,\n co_filename, # Modification for huggingface/datasets ############################################\n obj.co_name,\n co_firstlineno, # Modification for huggingface/datasets #########################################\n obj.co_linetable,\n obj.co_freevars,\n obj.co_cellvars,\n )\n elif hasattr(obj, \"co_posonlyargcount\"): # python 3.8 (16 args)\n args = (\n obj.co_argcount,\n obj.co_posonlyargcount,\n obj.co_kwonlyargcount,\n obj.co_nlocals,\n obj.co_stacksize,\n obj.co_flags,\n obj.co_code,\n obj.co_consts,\n obj.co_names,\n obj.co_varnames,\n co_filename, # Modification for huggingface/datasets ############################################\n obj.co_name,\n co_firstlineno, # Modification for huggingface/datasets #########################################\n obj.co_lnotab,\n obj.co_freevars,\n obj.co_cellvars,\n )\n else: # python 3.7 (15 args)\n args = 
(\n obj.co_argcount,\n obj.co_kwonlyargcount,\n obj.co_nlocals,\n obj.co_stacksize,\n obj.co_flags,\n obj.co_code,\n obj.co_consts,\n obj.co_names,\n obj.co_varnames,\n co_filename, # Modification for huggingface/datasets ############################################\n obj.co_name,\n co_firstlineno, # Modification for huggingface/datasets #########################################\n obj.co_lnotab,\n obj.co_freevars,\n obj.co_cellvars,\n )\n\n pickler.save_reduce(dill._dill._create_code, args, obj=obj)\n dill._dill.logger.trace(pickler, \"# Co\")\n return\n\n\nif config.DILL_VERSION < version.parse(\"0.3.5\"):\n\n @pklregister(FunctionType)\n def save_function(pickler, obj):\n \"\"\"\n From dill._dill.save_function\n This is a modified version that make globs deterministic since the order of\n the keys in the output dictionary of globalvars can change.\n \"\"\"\n if not dill._dill._locate_function(obj):\n dill._dill.log.info(f\"F1: {obj}\")\n if getattr(pickler, \"_recurse\", False):\n # recurse to get all globals referred to by obj\n globalvars = dill.detect.globalvars\n globs = globalvars(obj, recurse=True, builtin=True)\n if id(obj) in dill._dill.stack:\n globs = obj.__globals__ if dill._dill.PY3 else obj.func_globals\n else:\n globs = obj.__globals__ if dill._dill.PY3 else obj.func_globals\n # globs is a dictionary with keys = var names (str) and values = python objects\n # however the dictionary is not always loaded in the same order\n # therefore we have to sort the keys to make deterministic.\n # This is important to make `dump` deterministic.\n # Only this line is different from the original implementation:\n globs = dict(sorted(globs.items()))\n # The rest is the same as in the original dill implementation\n _byref = getattr(pickler, \"_byref\", None)\n _recurse = getattr(pickler, \"_recurse\", None)\n _memo = (id(obj) in dill._dill.stack) and (_recurse is not None)\n dill._dill.stack[id(obj)] = len(dill._dill.stack), obj\n if dill._dill.PY3:\n _super = (\"super\" in getattr(obj.__code__, \"co_names\", ())) and (_byref is not None)\n if _super:\n pickler._byref = True\n if _memo:\n pickler._recurse = False\n fkwdefaults = getattr(obj, \"__kwdefaults__\", None)\n pickler.save_reduce(\n dill._dill._create_function,\n (obj.__code__, globs, obj.__name__, obj.__defaults__, obj.__closure__, obj.__dict__, fkwdefaults),\n obj=obj,\n )\n else:\n _super = (\n (\"super\" in getattr(obj.func_code, \"co_names\", ()))\n and (_byref is not None)\n and getattr(pickler, \"_recurse\", False)\n )\n if _super:\n pickler._byref = True\n if _memo:\n pickler._recurse = False\n pickler.save_reduce(\n dill._dill._create_function,\n (obj.func_code, globs, obj.func_name, obj.func_defaults, obj.func_closure, obj.__dict__),\n obj=obj,\n )\n if _super:\n pickler._byref = _byref\n if _memo:\n pickler._recurse = _recurse\n if (\n dill._dill.OLDER\n and not _byref\n and (_super or (not _super and _memo) or (not _super and not _memo and _recurse))\n ):\n pickler.clear_memo()\n dill._dill.log.info(\"# F1\")\n else:\n dill._dill.log.info(f\"F2: {obj}\")\n name = getattr(obj, \"__qualname__\", getattr(obj, \"__name__\", None))\n dill._dill.StockPickler.save_global(pickler, obj, name=name)\n dill._dill.log.info(\"# F2\")\n return\n\nelif config.DILL_VERSION.release[:3] == version.parse(\"0.3.5\").release: # 0.3.5, 0.3.5.1\n # https://github.com/uqfoundation/dill/blob/dill-0.3.5.1/dill/_dill.py\n @pklregister(FunctionType)\n def save_function(pickler, obj):\n if not dill._dill._locate_function(obj, pickler):\n 
dill._dill.log.info(\"F1: %s\" % obj)\n _recurse = getattr(pickler, \"_recurse\", None)\n _postproc = getattr(pickler, \"_postproc\", None)\n _main_modified = getattr(pickler, \"_main_modified\", None)\n _original_main = getattr(pickler, \"_original_main\", dill._dill.__builtin__) # 'None'\n postproc_list = []\n if _recurse:\n # recurse to get all globals referred to by obj\n from dill.detect import globalvars\n\n globs_copy = globalvars(obj, recurse=True, builtin=True)\n\n # Add the name of the module to the globs dictionary to prevent\n # the duplication of the dictionary. Pickle the unpopulated\n # globals dictionary and set the remaining items after the function\n # is created to correctly handle recursion.\n globs = {\"__name__\": obj.__module__}\n else:\n globs_copy = obj.__globals__ if dill._dill.PY3 else obj.func_globals\n\n # If the globals is the __dict__ from the module being saved as a\n # session, substitute it by the dictionary being actually saved.\n if _main_modified and globs_copy is _original_main.__dict__:\n globs_copy = getattr(pickler, \"_main\", _original_main).__dict__\n globs = globs_copy\n # If the globals is a module __dict__, do not save it in the pickle.\n elif (\n globs_copy is not None\n and obj.__module__ is not None\n and getattr(dill._dill._import_module(obj.__module__, True), \"__dict__\", None) is globs_copy\n ):\n globs = globs_copy\n else:\n globs = {\"__name__\": obj.__module__}\n\n # DONE: modified here for huggingface/datasets\n # - globs is a dictionary with keys = var names (str) and values = python objects\n # - globs_copy is a dictionary with keys = var names (str) and values = ids of the python objects\n # however the dictionary is not always loaded in the same order\n # therefore we have to sort the keys to make deterministic.\n # This is important to make `dump` deterministic.\n # Only these line are different from the original implementation:\n # START\n globs_is_globs_copy = globs is globs_copy\n globs = dict(sorted(globs.items()))\n if globs_is_globs_copy:\n globs_copy = globs\n elif globs_copy is not None:\n globs_copy = dict(sorted(globs_copy.items()))\n # END\n\n if globs_copy is not None and globs is not globs_copy:\n # In the case that the globals are copied, we need to ensure that\n # the globals dictionary is updated when all objects in the\n # dictionary are already created.\n if dill._dill.PY3:\n glob_ids = {id(g) for g in globs_copy.values()}\n else:\n glob_ids = {id(g) for g in globs_copy.itervalues()}\n for stack_element in _postproc:\n if stack_element in glob_ids:\n _postproc[stack_element].append((dill._dill._setitems, (globs, globs_copy)))\n break\n else:\n postproc_list.append((dill._dill._setitems, (globs, globs_copy)))\n\n if dill._dill.PY3:\n closure = obj.__closure__\n state_dict = {}\n for fattrname in (\"__doc__\", \"__kwdefaults__\", \"__annotations__\"):\n fattr = getattr(obj, fattrname, None)\n if fattr is not None:\n state_dict[fattrname] = fattr\n if obj.__qualname__ != obj.__name__:\n state_dict[\"__qualname__\"] = obj.__qualname__\n if \"__name__\" not in globs or obj.__module__ != globs[\"__name__\"]:\n state_dict[\"__module__\"] = obj.__module__\n\n state = obj.__dict__\n if type(state) is not dict: # noqa: E721\n state_dict[\"__dict__\"] = state\n state = None\n if state_dict:\n state = state, state_dict\n\n dill._dill._save_with_postproc(\n pickler,\n (\n dill._dill._create_function,\n (obj.__code__, globs, obj.__name__, obj.__defaults__, closure),\n state,\n ),\n obj=obj,\n postproc_list=postproc_list,\n 
)\n else:\n closure = obj.func_closure\n if obj.__doc__ is not None:\n postproc_list.append((setattr, (obj, \"__doc__\", obj.__doc__)))\n if \"__name__\" not in globs or obj.__module__ != globs[\"__name__\"]:\n postproc_list.append((setattr, (obj, \"__module__\", obj.__module__)))\n if obj.__dict__:\n postproc_list.append((setattr, (obj, \"__dict__\", obj.__dict__)))\n\n dill._dill._save_with_postproc(\n pickler,\n (dill._dill._create_function, (obj.func_code, globs, obj.func_name, obj.func_defaults, closure)),\n obj=obj,\n postproc_list=postproc_list,\n )\n\n # Lift closure cell update to earliest function (#458)\n if _postproc:\n topmost_postproc = next(iter(_postproc.values()), None)\n if closure and topmost_postproc:\n for cell in closure:\n possible_postproc = (setattr, (cell, \"cell_contents\", obj))\n try:\n topmost_postproc.remove(possible_postproc)\n except ValueError:\n continue\n\n # Change the value of the cell\n pickler.save_reduce(*possible_postproc)\n # pop None created by calling preprocessing step off stack\n if dill._dill.PY3:\n pickler.write(bytes(\"0\", \"UTF-8\"))\n else:\n pickler.write(\"0\")\n\n dill._dill.log.info(\"# F1\")\n else:\n dill._dill.log.info(\"F2: %s\" % obj)\n name = getattr(obj, \"__qualname__\", getattr(obj, \"__name__\", None))\n dill._dill.StockPickler.save_global(pickler, obj, name=name)\n dill._dill.log.info(\"# F2\")\n return\n\nelif config.DILL_VERSION.release[:3] in [version.parse(\"0.3.6\").release, version.parse(\"0.3.7\").release]:\n # From: https://github.com/uqfoundation/dill/blob/dill-0.3.6/dill/_dill.py#L1739\n @pklregister(FunctionType)\n def save_function(pickler, obj):\n if not dill._dill._locate_function(obj, pickler):\n if type(obj.__code__) is not CodeType:\n # Some PyPy builtin functions have no module name, and thus are not\n # able to be located\n module_name = getattr(obj, \"__module__\", None)\n if module_name is None:\n module_name = dill._dill.__builtin__.__name__\n module = dill._dill._import_module(module_name, safe=True)\n _pypy_builtin = False\n try:\n found, _ = dill._dill._getattribute(module, obj.__qualname__)\n if getattr(found, \"__func__\", None) is obj:\n _pypy_builtin = True\n except AttributeError:\n pass\n\n if _pypy_builtin:\n dill._dill.logger.trace(pickler, \"F3: %s\", obj)\n pickler.save_reduce(getattr, (found, \"__func__\"), obj=obj)\n dill._dill.logger.trace(pickler, \"# F3\")\n return\n\n dill._dill.logger.trace(pickler, \"F1: %s\", obj)\n _recurse = getattr(pickler, \"_recurse\", None)\n _postproc = getattr(pickler, \"_postproc\", None)\n _main_modified = getattr(pickler, \"_main_modified\", None)\n _original_main = getattr(pickler, \"_original_main\", dill._dill.__builtin__) # 'None'\n postproc_list = []\n if _recurse:\n # recurse to get all globals referred to by obj\n from dill.detect import globalvars\n\n globs_copy = globalvars(obj, recurse=True, builtin=True)\n\n # Add the name of the module to the globs dictionary to prevent\n # the duplication of the dictionary. 
Pickle the unpopulated\n # globals dictionary and set the remaining items after the function\n # is created to correctly handle recursion.\n globs = {\"__name__\": obj.__module__}\n else:\n globs_copy = obj.__globals__\n\n # If the globals is the __dict__ from the module being saved as a\n # session, substitute it by the dictionary being actually saved.\n if _main_modified and globs_copy is _original_main.__dict__:\n globs_copy = getattr(pickler, \"_main\", _original_main).__dict__\n globs = globs_copy\n # If the globals is a module __dict__, do not save it in the pickle.\n elif (\n globs_copy is not None\n and obj.__module__ is not None\n and getattr(dill._dill._import_module(obj.__module__, True), \"__dict__\", None) is globs_copy\n ):\n globs = globs_copy\n else:\n globs = {\"__name__\": obj.__module__}\n\n ########################################################################################################\n # Modification here for huggingface/datasets\n # - globs is a dictionary with keys = var names (str) and values = python objects\n # - globs_copy is a dictionary with keys = var names (str) and values = ids of the python objects\n # However the dictionary is not always loaded in the same order,\n # therefore we have to sort the keys to make deterministic.\n # This is important to make `dump` deterministic.\n # Only these line are different from the original implementation:\n # START\n globs_is_globs_copy = globs is globs_copy\n globs = dict(sorted(globs.items()))\n if globs_is_globs_copy:\n globs_copy = globs\n elif globs_copy is not None:\n globs_copy = dict(sorted(globs_copy.items()))\n # END\n ########################################################################################################\n\n if globs_copy is not None and globs is not globs_copy:\n # In the case that the globals are copied, we need to ensure that\n # the globals dictionary is updated when all objects in the\n # dictionary are already created.\n glob_ids = {id(g) for g in globs_copy.values()}\n for stack_element in _postproc:\n if stack_element in glob_ids:\n _postproc[stack_element].append((dill._dill._setitems, (globs, globs_copy)))\n break\n else:\n postproc_list.append((dill._dill._setitems, (globs, globs_copy)))\n\n closure = obj.__closure__\n state_dict = {}\n for fattrname in (\"__doc__\", \"__kwdefaults__\", \"__annotations__\"):\n fattr = getattr(obj, fattrname, None)\n if fattr is not None:\n state_dict[fattrname] = fattr\n if obj.__qualname__ != obj.__name__:\n state_dict[\"__qualname__\"] = obj.__qualname__\n if \"__name__\" not in globs or obj.__module__ != globs[\"__name__\"]:\n state_dict[\"__module__\"] = obj.__module__\n\n state = obj.__dict__\n if type(state) is not dict: # noqa: E721\n state_dict[\"__dict__\"] = state\n state = None\n if state_dict:\n state = state, state_dict\n\n dill._dill._save_with_postproc(\n pickler,\n (dill._dill._create_function, (obj.__code__, globs, obj.__name__, obj.__defaults__, closure), state),\n obj=obj,\n postproc_list=postproc_list,\n )\n\n # Lift closure cell update to earliest function (#458)\n if _postproc:\n topmost_postproc = next(iter(_postproc.values()), None)\n if closure and topmost_postproc:\n for cell in closure:\n possible_postproc = (setattr, (cell, \"cell_contents\", obj))\n try:\n topmost_postproc.remove(possible_postproc)\n except ValueError:\n continue\n\n # Change the value of the cell\n pickler.save_reduce(*possible_postproc)\n # pop None created by calling preprocessing step off stack\n pickler.write(bytes(\"0\", \"UTF-8\"))\n\n 
dill._dill.logger.trace(pickler, \"# F1\")\n else:\n dill._dill.logger.trace(pickler, \"F2: %s\", obj)\n name = getattr(obj, \"__qualname__\", getattr(obj, \"__name__\", None))\n dill._dill.StockPickler.save_global(pickler, obj, name=name)\n dill._dill.logger.trace(pickler, \"# F2\")\n return\n\n\ndef copyfunc(func):\n result = types.FunctionType(func.__code__, func.__globals__, func.__name__, func.__defaults__, func.__closure__)\n result.__kwdefaults__ = func.__kwdefaults__\n return result\n\n\nY = TypeVar(\"Y\")\n\n\ndef _write_generator_to_queue(queue: queue.Queue, func: Callable[..., Iterable[Y]], kwargs: dict) -> int:\n for i, result in enumerate(func(**kwargs)):\n queue.put(result)\n return i\n\n\ndef _get_pool_pid(pool: Union[multiprocessing.pool.Pool, multiprocess.pool.Pool]) -> Set[int]:\n return {f.pid for f in pool._pool}\n\n\ndef iflatmap_unordered(\n pool: Union[multiprocessing.pool.Pool, multiprocess.pool.Pool],\n func: Callable[..., Iterable[Y]],\n *,\n kwargs_iterable: Iterable[dict],\n) -> Iterable[Y]:\n initial_pool_pid = _get_pool_pid(pool)\n pool_changed = False\n manager_cls = Manager if isinstance(pool, multiprocessing.pool.Pool) else multiprocess.Manager\n with manager_cls() as manager:\n queue = manager.Queue()\n async_results = [\n pool.apply_async(_write_generator_to_queue, (queue, func, kwargs)) for kwargs in kwargs_iterable\n ]\n try:\n while True:\n try:\n yield queue.get(timeout=0.05)\n except Empty:\n if all(async_result.ready() for async_result in async_results) and queue.empty():\n break\n if _get_pool_pid(pool) != initial_pool_pid:\n pool_changed = True\n # One of the subprocesses has died. We should not wait forever.\n raise RuntimeError(\n \"One of the subprocesses has abruptly died during map operation.\"\n \"To debug the error, disable multiprocessing.\"\n )\n finally:\n if not pool_changed:\n # we get the result in case there's an error to raise\n [async_result.get(timeout=0.05) for async_result in async_results]\n"}
|
{"src/datasets/utils/py_utils.py": [{"type": "function", "name": "_save_set", "lines": [753, 759], "signature": "def _save_set(pickler, obj):", "doc": ""}]}
| null |
["tests/test_fingerprint.py::HashingTest::test_set_stable"]
|
["tests/test_fingerprint.py::RecurseDumpTest::test_dump_ignores_line_definition_of_function", "tests/test_fingerprint.py::RecurseDumpTest::test_dump_ipython_function", "tests/test_fingerprint.py::RecurseDumpTest::test_recurse_dump_for_class", "tests/test_fingerprint.py::RecurseDumpTest::test_recurse_dump_for_function", "tests/test_fingerprint.py::RecurseDumpTest::test_recurse_dump_for_function_with_shuffled_globals", "tests/test_fingerprint.py::RecurseDumpTest::test_recurse_dump_for_method", "tests/test_fingerprint.py::HashingTest::test_hash_class_instance", "tests/test_fingerprint.py::HashingTest::test_hash_same_strings", "tests/test_fingerprint.py::HashingTest::test_hash_simple", "tests/test_fingerprint.py::HashingTest::test_hash_unpicklable", "tests/test_fingerprint.py::HashingTest::test_hash_update", "tests/test_fingerprint.py::HashingTest::test_set_doesnt_depend_on_order", "tests/test_fingerprint.py::test_move_script_doesnt_change_hash", "tests/test_fingerprint.py::test_fingerprint_in_multiprocessing", "tests/test_fingerprint.py::test_fingerprint_when_transform_version_changes", "tests/test_fingerprint.py::test_dependency_on_dill"]
|
5142a8cf61d8a4495eda3d91dc4283a6df01ea14
|
{"first_commit_time": 1697717923.0, "pr_title": "Deterministic set hash", "pr_body": "Sort the items in a set according to their `datasets.fingerprint.Hasher.hash` hash to get a deterministic hash of sets.\r\n\r\nThis is useful to get deterministic hashes of tokenizers that use a trie based on python sets.\r\n\r\nreported in https://github.com/huggingface/datasets/issues/3847", "pr_timeline": [{"time": 1697732224.0, "comment": "_The documentation is not available anymore as the PR was closed or merged._"}, {"time": 1697718504.0, "comment": "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006827 / 0.011353 (-0.004526) | 0.004468 / 0.011008 (-0.006540) | 0.088687 / 0.038508 (0.050179) | 0.072560 / 0.023109 (0.049451) | 0.333421 / 0.275898 (0.057523) | 0.374977 / 0.323480 (0.051497) | 0.005829 / 0.007986 (-0.002156) | 0.003284 / 0.004328 (-0.001045) | 0.068929 / 0.004250 (0.064678) | 0.057212 / 0.037052 (0.020160) | 0.328911 / 0.258489 (0.070422) | 0.389107 / 0.293841 (0.095266) | 0.033518 / 0.128546 (-0.095029) | 0.009919 / 0.075646 (-0.065728) | 0.308100 / 0.419271 (-0.111171) | 0.059380 / 0.043533 (0.015847) | 0.345587 / 0.255139 (0.090448) | 0.353703 / 0.283200 (0.070503) | 0.026454 / 0.141683 (-0.115229) | 1.573309 / 1.452155 (0.121155) | 1.663812 / 1.492716 (0.171095) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255081 / 0.018006 (0.237075) | 0.472613 / 0.000490 (0.472123) | 0.016120 / 0.000200 (0.015920) | 0.000383 / 0.000054 (0.000328) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028219 / 0.037411 (-0.009192) | 0.086600 / 0.014526 (0.072074) | 0.099484 / 0.176557 (-0.077073) | 0.154604 / 0.737135 (-0.582531) | 0.099168 / 0.296338 (-0.197171) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 
5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421703 / 0.215209 (0.206494) | 4.188600 / 2.077655 (2.110945) | 2.037575 / 1.504120 (0.533456) | 1.843389 / 1.541195 (0.302194) | 1.912554 / 1.468490 (0.444064) | 0.517452 / 4.584777 (-4.067325) | 3.838002 / 3.745712 (0.092290) | 3.698899 / 5.269862 (-1.570963) | 2.175393 / 4.565676 (-2.390283) | 0.066059 / 0.424275 (-0.358216) | 0.008455 / 0.007607 (0.000848) | 0.506813 / 0.226044 (0.280768) | 4.826994 / 2.268929 (2.558066) | 2.544437 / 55.444624 (-52.900187) | 2.164938 / 6.876477 (-4.711539) | 2.171725 / 2.142072 (0.029652) | 0.603757 / 4.805227 (-4.201470) | 0.149113 / 6.500664 (-6.351551) | 0.065093 / 0.075469 (-0.010376) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.366887 / 1.841788 (-0.474901) | 20.508089 / 8.074308 (12.433780) | 14.836531 / 10.191392 (4.645139) | 0.167418 / 0.680424 (-0.513006) | 0.019707 / 0.534201 (-0.514494) | 0.409897 / 0.579283 (-0.169387) | 0.439412 / 0.434364 (0.005048) | 0.495784 / 0.540337 (-0.044553) | 0.685367 / 1.386936 (-0.701569) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007604 / 0.011353 (-0.003749) | 0.004368 / 0.011008 (-0.006640) | 0.072628 / 0.038508 (0.034120) | 0.084187 / 0.023109 (0.061077) | 0.461396 / 0.275898 (0.185498) | 0.481429 / 0.323480 (0.157949) | 0.005894 / 0.007986 (-0.002092) | 0.003472 / 0.004328 (-0.000857) | 0.068717 / 0.004250 (0.064466) | 0.061066 / 0.037052 (0.024014) | 0.464217 / 0.258489 (0.205728) | 0.498061 / 0.293841 (0.204220) | 0.035458 / 0.128546 (-0.093089) | 0.009474 / 0.075646 (-0.066173) | 0.079633 / 0.419271 (-0.339639) | 0.053966 / 0.043533 (0.010433) | 0.454911 / 0.255139 (0.199772) | 0.470837 / 0.283200 (0.187637) | 0.026358 / 0.141683 
(-0.115325) | 1.665131 / 1.452155 (0.212976) | 1.730365 / 1.492716 (0.237648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234810 / 0.018006 (0.216804) | 0.453672 / 0.000490 (0.453183) | 0.004620 / 0.000200 (0.004420) | 0.000119 / 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035310 / 0.037411 (-0.002101) | 0.100379 / 0.014526 (0.085853) | 0.118802 / 0.176557 (-0.057754) | 0.173853 / 0.737135 (-0.563282) | 0.115714 / 0.296338 (-0.180624) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.466797 / 0.215209 (0.251588) | 4.698324 / 2.077655 (2.620670) | 2.446897 / 1.504120 (0.942777) | 2.277346 / 1.541195 (0.736151) | 2.347211 / 1.468490 (0.878721) | 0.514377 / 4.584777 (-4.070400) | 3.931269 / 3.745712 (0.185557) | 3.573575 / 5.269862 (-1.696286) | 2.208122 / 4.565676 (-2.357554) | 0.061081 / 0.424275 (-0.363194) | 0.007803 / 0.007607 (0.000196) | 0.544376 / 0.226044 (0.318332) | 5.440003 / 2.268929 (3.171074) | 3.012559 / 55.444624 (-52.432065) | 2.617286 / 6.876477 (-4.259191) | 2.863978 / 2.142072 (0.721906) | 0.610024 / 4.805227 (-4.195203) | 0.133643 / 6.500664 (-6.367021) | 0.064766 / 0.075469 (-0.010703) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.465225 / 1.841788 (-0.376563) | 21.308351 / 8.074308 (13.234043) | 15.176634 / 10.191392 (4.985242) | 0.172701 / 0.680424 (-0.507723) | 0.020345 / 0.534201 (-0.513855) | 0.433923 / 0.579283 (-0.145360) | 0.450183 / 0.434364 (0.015819) | 0.514048 / 0.540337 (-0.026289) | 0.736302 / 1.386936 (-0.650634) |\n\n</details>\n</details>\n\n\n"}, {"time": 1697732840.0, "comment": "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | 
read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008305 / 0.011353 (-0.003048) | 0.006007 / 0.011008 (-0.005001) | 0.103521 / 0.038508 (0.065013) | 0.075776 / 0.023109 (0.052666) | 0.378888 / 0.275898 (0.102990) | 0.405245 / 0.323480 (0.081765) | 0.004596 / 0.007986 (-0.003390) | 0.003687 / 0.004328 (-0.000641) | 0.079043 / 0.004250 (0.074792) | 0.055895 / 0.037052 (0.018843) | 0.406565 / 0.258489 (0.148076) | 0.433869 / 0.293841 (0.140028) | 0.045321 / 0.128546 (-0.083226) | 0.014317 / 0.075646 (-0.061329) | 0.345312 / 0.419271 (-0.073960) | 0.064485 / 0.043533 (0.020953) | 0.381744 / 0.255139 (0.126605) | 0.401162 / 0.283200 (0.117962) | 0.035973 / 0.141683 (-0.105709) | 1.829616 / 1.452155 (0.377461) | 1.868487 / 1.492716 (0.375771) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245432 / 0.018006 (0.227426) | 0.494249 / 0.000490 (0.493759) | 0.010878 / 0.000200 (0.010678) | 0.000492 / 0.000054 (0.000437) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032778 / 0.037411 (-0.004633) | 0.103418 / 0.014526 (0.088892) | 0.108010 / 0.176557 (-0.068547) | 0.176477 / 0.737135 (-0.560658) | 0.107732 / 0.296338 (-0.188606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.572471 / 0.215209 (0.357262) | 5.647039 / 2.077655 (3.569384) | 2.385069 / 1.504120 (0.880949) | 2.048928 / 1.541195 (0.507733) | 2.108538 / 1.468490 (0.640048) | 0.861436 / 4.584777 (-3.723341) | 4.933452 / 3.745712 (1.187739) | 4.735219 / 5.269862 (-0.534642) | 2.926971 / 4.565676 (-1.638705) | 0.097687 / 0.424275 (-0.326588) | 0.008346 / 0.007607 (0.000739) | 0.677754 / 0.226044 (0.451709) | 6.798433 / 2.268929 (4.529504) | 3.129862 / 55.444624 (-52.314762) | 2.454033 / 6.876477 (-4.422444) | 2.464590 / 2.142072 (0.322517) | 1.034497 / 4.805227 (-3.770730) | 0.205753 / 6.500664 (-6.294911) | 0.076618 / 0.075469 (0.001149) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | 
map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.617569 / 1.841788 (-0.224219) | 22.091489 / 8.074308 (14.017181) | 20.406312 / 10.191392 (10.214920) | 0.222012 / 0.680424 (-0.458411) | 0.027787 / 0.534201 (-0.506414) | 0.441669 / 0.579283 (-0.137615) | 0.564773 / 0.434364 (0.130409) | 0.510389 / 0.540337 (-0.029948) | 0.753672 / 1.386936 (-0.633264) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011107 / 0.011353 (-0.000246) | 0.004973 / 0.011008 (-0.006035) | 0.078331 / 0.038508 (0.039823) | 0.083964 / 0.023109 (0.060855) | 0.518980 / 0.275898 (0.243082) | 0.528264 / 0.323480 (0.204784) | 0.007452 / 0.007986 (-0.000534) | 0.003931 / 0.004328 (-0.000397) | 0.079724 / 0.004250 (0.075474) | 0.061739 / 0.037052 (0.024686) | 0.517804 / 0.258489 (0.259315) | 0.582764 / 0.293841 (0.288923) | 0.049674 / 0.128546 (-0.078873) | 0.014540 / 0.075646 (-0.061106) | 0.093130 / 0.419271 (-0.326141) | 0.060647 / 0.043533 (0.017114) | 0.492628 / 0.255139 (0.237489) | 0.549761 / 0.283200 (0.266562) | 0.034313 / 0.141683 (-0.107369) | 1.824574 / 1.452155 (0.372419) | 2.013664 / 1.492716 (0.520947) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231335 / 0.018006 (0.213329) | 0.521477 / 0.000490 (0.520987) | 0.011314 / 0.000200 (0.011114) | 0.000397 / 0.000054 (0.000343) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033303 / 0.037411 (-0.004108) | 0.098238 / 0.014526 (0.083712) | 0.119527 / 0.176557 (-0.057030) | 0.169163 / 0.737135 (-0.567972) | 0.114536 / 0.296338 (-0.181803) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled 
read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.578401 / 0.215209 (0.363191) | 5.966438 / 2.077655 (3.888783) | 2.646370 / 1.504120 (1.142250) | 2.361833 / 1.541195 (0.820638) | 2.476573 / 1.468490 (1.008083) | 0.777411 / 4.584777 (-3.807366) | 4.811070 / 3.745712 (1.065357) | 4.314221 / 5.269862 (-0.955641) | 2.743317 / 4.565676 (-1.822359) | 0.110394 / 0.424275 (-0.313881) | 0.008333 / 0.007607 (0.000726) | 0.729588 / 0.226044 (0.503543) | 7.743226 / 2.268929 (5.474298) | 3.606294 / 55.444624 (-51.838330) | 2.838069 / 6.876477 (-4.038408) | 3.087494 / 2.142072 (0.945421) | 1.053341 / 4.805227 (-3.751886) | 0.205105 / 6.500664 (-6.295559) | 0.075204 / 0.075469 (-0.000265) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.561959 / 1.841788 (-0.279829) | 21.407849 / 8.074308 (13.333541) | 19.084263 / 10.191392 (8.892871) | 0.226129 / 0.680424 (-0.454295) | 0.029695 / 0.534201 (-0.504506) | 0.427035 / 0.579283 (-0.152248) | 0.565353 / 0.434364 (0.130989) | 0.526789 / 0.540337 (-0.013548) | 0.734820 / 1.386936 (-0.652116) |\n\n</details>\n</details>\n\n\n"}], "issues": {}}
|
|
huggingface/huggingface_hub
| 2,027
|
https://github.com/huggingface/huggingface_hub/pull/2027
|
huggingface__huggingface_hub-2027
|
[]
|
0c272d506e390f2d7b9dac68159595845c7f8e3b
|
diff --git a/src/huggingface_hub/hf_file_system.py b/src/huggingface_hub/hf_file_system.py
index a78ab0fd80..630ff64ba8 100644
--- a/src/huggingface_hub/hf_file_system.py
+++ b/src/huggingface_hub/hf_file_system.py
@@ -577,6 +577,20 @@ def isfile(self, path):
except: # noqa: E722
return False
+ def url(self, path: str) -> str:
+ """Get the HTTP URL of the given path"""
+ resolved_path = self.resolve_path(path)
+ url = hf_hub_url(
+ resolved_path.repo_id,
+ resolved_path.path_in_repo,
+ repo_type=resolved_path.repo_type,
+ revision=resolved_path.revision,
+ endpoint=self.endpoint,
+ )
+ if self.isdir(path):
+ url = url.replace("/resolve/", "/tree/", 1)
+ return url
+
@property
def transaction(self):
"""A context within which files are committed together upon exit
@@ -653,6 +667,9 @@ def _upload_chunk(self, final: bool = False) -> None:
path=self.resolved_path.unresolve(),
)
+ def url(self) -> str:
+ return self.fs.url(self.path)
+
class HfFileSystemStreamFile(fsspec.spec.AbstractBufferedFile):
def __init__(
@@ -740,6 +757,9 @@ def read(self, length: int = -1):
self.loc += len(out)
return out
+ def url(self) -> str:
+ return self.fs.url(self.path)
+
def __del__(self):
if not hasattr(self, "resolved_path"):
# Means that the constructor failed. Nothing to do.
|
diff --git a/tests/test_hf_file_system.py b/tests/test_hf_file_system.py
index 02cf913515..af9bf3b94f 100644
--- a/tests/test_hf_file_system.py
+++ b/tests/test_hf_file_system.py
@@ -131,6 +131,16 @@ def test_glob(self):
)
self.assertIsNotNone(files[keys[0]]["last_commit"])
+ def test_url(self):
+ self.assertEqual(
+ self.hffs.url(self.hf_path + "/data/text_data.txt"),
+ f"{ENDPOINT_STAGING}/datasets/{self.repo_id}/resolve/main/data/text_data.txt",
+ )
+ self.assertEqual(
+ self.hffs.url(self.hf_path + "/data"),
+ f"{ENDPOINT_STAGING}/datasets/{self.repo_id}/tree/main/data",
+ )
+
def test_file_type(self):
self.assertTrue(
self.hffs.isdir(self.hf_path + "/data") and not self.hffs.isdir(self.hf_path + "/.gitattributes")
| 2024-02-14T18:00:11
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"src/huggingface_hub/hf_file_system.py": "import copy\nimport os\nimport re\nimport tempfile\nfrom collections import deque\nfrom dataclasses import dataclass, field\nfrom datetime import datetime\nfrom itertools import chain\nfrom typing import Any, Dict, List, NoReturn, Optional, Tuple, Union\nfrom urllib.parse import quote, unquote\n\nimport fsspec\nfrom requests import Response\n\nfrom ._commit_api import CommitOperationCopy, CommitOperationDelete\nfrom .constants import DEFAULT_REVISION, ENDPOINT, REPO_TYPE_MODEL, REPO_TYPES_MAPPING, REPO_TYPES_URL_PREFIXES\nfrom .file_download import hf_hub_url\nfrom .hf_api import HfApi, LastCommitInfo, RepoFile\nfrom .utils import (\n EntryNotFoundError,\n HFValidationError,\n RepositoryNotFoundError,\n RevisionNotFoundError,\n hf_raise_for_status,\n http_backoff,\n)\n\n\n# Regex used to match special revisions with \"/\" in them (see #1710)\nSPECIAL_REFS_REVISION_REGEX = re.compile(\n r\"\"\"\n (^refs\\/convert\\/\\w+) # `refs/convert/parquet` revisions\n |\n (^refs\\/pr\\/\\d+) # PR revisions\n \"\"\",\n re.VERBOSE,\n)\n\n\n@dataclass\nclass HfFileSystemResolvedPath:\n \"\"\"Data structure containing information about a resolved Hugging Face file system path.\"\"\"\n\n repo_type: str\n repo_id: str\n revision: str\n path_in_repo: str\n # The part placed after '@' in the initial path. It can be a quoted or unquoted refs revision.\n # Used to reconstruct the unresolved path to return to the user.\n _raw_revision: Optional[str] = field(default=None, repr=False)\n\n def unresolve(self) -> str:\n repo_path = REPO_TYPES_URL_PREFIXES.get(self.repo_type, \"\") + self.repo_id\n if self._raw_revision:\n return f\"{repo_path}@{self._raw_revision}/{self.path_in_repo}\".rstrip(\"/\")\n elif self.revision != DEFAULT_REVISION:\n return f\"{repo_path}@{safe_revision(self.revision)}/{self.path_in_repo}\".rstrip(\"/\")\n else:\n return f\"{repo_path}/{self.path_in_repo}\".rstrip(\"/\")\n\n\nclass HfFileSystem(fsspec.AbstractFileSystem):\n \"\"\"\n Access a remote Hugging Face Hub repository as if were a local file system.\n\n Args:\n token (`str`, *optional*):\n Authentication token, obtained with [`HfApi.login`] method. Will default to the stored token.\n\n Usage:\n\n ```python\n >>> from huggingface_hub import HfFileSystem\n\n >>> fs = HfFileSystem()\n\n >>> # List files\n >>> fs.glob(\"my-username/my-model/*.bin\")\n ['my-username/my-model/pytorch_model.bin']\n >>> fs.ls(\"datasets/my-username/my-dataset\", detail=False)\n ['datasets/my-username/my-dataset/.gitattributes', 'datasets/my-username/my-dataset/README.md', 'datasets/my-username/my-dataset/data.json']\n\n >>> # Read/write files\n >>> with fs.open(\"my-username/my-model/pytorch_model.bin\") as f:\n ... data = f.read()\n >>> with fs.open(\"my-username/my-model/pytorch_model.bin\", \"wb\") as f:\n ... 
f.write(data)\n ```\n \"\"\"\n\n root_marker = \"\"\n protocol = \"hf\"\n\n def __init__(\n self,\n *args,\n endpoint: Optional[str] = None,\n token: Optional[str] = None,\n **storage_options,\n ):\n super().__init__(*args, **storage_options)\n self.endpoint = endpoint or ENDPOINT\n self.token = token\n self._api = HfApi(endpoint=endpoint, token=token)\n # Maps (repo_type, repo_id, revision) to a 2-tuple with:\n # * the 1st element indicating whether the repositoy and the revision exist\n # * the 2nd element being the exception raised if the repository or revision doesn't exist\n self._repo_and_revision_exists_cache: Dict[\n Tuple[str, str, Optional[str]], Tuple[bool, Optional[Exception]]\n ] = {}\n\n def _repo_and_revision_exist(\n self, repo_type: str, repo_id: str, revision: Optional[str]\n ) -> Tuple[bool, Optional[Exception]]:\n if (repo_type, repo_id, revision) not in self._repo_and_revision_exists_cache:\n try:\n self._api.repo_info(repo_id, revision=revision, repo_type=repo_type)\n except (RepositoryNotFoundError, HFValidationError) as e:\n self._repo_and_revision_exists_cache[(repo_type, repo_id, revision)] = False, e\n self._repo_and_revision_exists_cache[(repo_type, repo_id, None)] = False, e\n except RevisionNotFoundError as e:\n self._repo_and_revision_exists_cache[(repo_type, repo_id, revision)] = False, e\n self._repo_and_revision_exists_cache[(repo_type, repo_id, None)] = True, None\n else:\n self._repo_and_revision_exists_cache[(repo_type, repo_id, revision)] = True, None\n self._repo_and_revision_exists_cache[(repo_type, repo_id, None)] = True, None\n return self._repo_and_revision_exists_cache[(repo_type, repo_id, revision)]\n\n def resolve_path(self, path: str, revision: Optional[str] = None) -> HfFileSystemResolvedPath:\n def _align_revision_in_path_with_revision(\n revision_in_path: Optional[str], revision: Optional[str]\n ) -> Optional[str]:\n if revision is not None:\n if revision_in_path is not None and revision_in_path != revision:\n raise ValueError(\n f'Revision specified in path (\"{revision_in_path}\") and in `revision` argument (\"{revision}\")'\n \" are not the same.\"\n )\n else:\n revision = revision_in_path\n return revision\n\n path = self._strip_protocol(path)\n if not path:\n # can't list repositories at root\n raise NotImplementedError(\"Access to repositories lists is not implemented.\")\n elif path.split(\"/\")[0] + \"/\" in REPO_TYPES_URL_PREFIXES.values():\n if \"/\" not in path:\n # can't list repositories at the repository type level\n raise NotImplementedError(\"Access to repositories lists is not implemented.\")\n repo_type, path = path.split(\"/\", 1)\n repo_type = REPO_TYPES_MAPPING[repo_type]\n else:\n repo_type = REPO_TYPE_MODEL\n if path.count(\"/\") > 0:\n if \"@\" in path:\n repo_id, revision_in_path = path.split(\"@\", 1)\n if \"/\" in revision_in_path:\n match = SPECIAL_REFS_REVISION_REGEX.search(revision_in_path)\n if match is not None and revision in (None, match.group()):\n # Handle `refs/convert/parquet` and PR revisions separately\n path_in_repo = SPECIAL_REFS_REVISION_REGEX.sub(\"\", revision_in_path).lstrip(\"/\")\n revision_in_path = match.group()\n else:\n revision_in_path, path_in_repo = revision_in_path.split(\"/\", 1)\n else:\n path_in_repo = \"\"\n revision = _align_revision_in_path_with_revision(unquote(revision_in_path), revision)\n repo_and_revision_exist, err = self._repo_and_revision_exist(repo_type, repo_id, revision)\n if not repo_and_revision_exist:\n _raise_file_not_found(path, err)\n else:\n revision_in_path = 
None\n repo_id_with_namespace = \"/\".join(path.split(\"/\")[:2])\n path_in_repo_with_namespace = \"/\".join(path.split(\"/\")[2:])\n repo_id_without_namespace = path.split(\"/\")[0]\n path_in_repo_without_namespace = \"/\".join(path.split(\"/\")[1:])\n repo_id = repo_id_with_namespace\n path_in_repo = path_in_repo_with_namespace\n repo_and_revision_exist, err = self._repo_and_revision_exist(repo_type, repo_id, revision)\n if not repo_and_revision_exist:\n if isinstance(err, (RepositoryNotFoundError, HFValidationError)):\n repo_id = repo_id_without_namespace\n path_in_repo = path_in_repo_without_namespace\n repo_and_revision_exist, _ = self._repo_and_revision_exist(repo_type, repo_id, revision)\n if not repo_and_revision_exist:\n _raise_file_not_found(path, err)\n else:\n _raise_file_not_found(path, err)\n else:\n repo_id = path\n path_in_repo = \"\"\n if \"@\" in path:\n repo_id, revision_in_path = path.split(\"@\", 1)\n revision = _align_revision_in_path_with_revision(unquote(revision_in_path), revision)\n else:\n revision_in_path = None\n repo_and_revision_exist, _ = self._repo_and_revision_exist(repo_type, repo_id, revision)\n if not repo_and_revision_exist:\n raise NotImplementedError(\"Access to repositories lists is not implemented.\")\n\n revision = revision if revision is not None else DEFAULT_REVISION\n return HfFileSystemResolvedPath(repo_type, repo_id, revision, path_in_repo, _raw_revision=revision_in_path)\n\n def invalidate_cache(self, path: Optional[str] = None) -> None:\n if not path:\n self.dircache.clear()\n self._repo_and_revision_exists_cache.clear()\n else:\n path = self.resolve_path(path).unresolve()\n while path:\n self.dircache.pop(path, None)\n path = self._parent(path)\n\n def _open(\n self,\n path: str,\n mode: str = \"rb\",\n revision: Optional[str] = None,\n block_size: Optional[int] = None,\n **kwargs,\n ) -> \"HfFileSystemFile\":\n if \"a\" in mode:\n raise NotImplementedError(\"Appending to remote files is not yet supported.\")\n if block_size == 0:\n return HfFileSystemStreamFile(self, path, mode=mode, revision=revision, block_size=block_size, **kwargs)\n else:\n return HfFileSystemFile(self, path, mode=mode, revision=revision, block_size=block_size, **kwargs)\n\n def _rm(self, path: str, revision: Optional[str] = None, **kwargs) -> None:\n resolved_path = self.resolve_path(path, revision=revision)\n self._api.delete_file(\n path_in_repo=resolved_path.path_in_repo,\n repo_id=resolved_path.repo_id,\n token=self.token,\n repo_type=resolved_path.repo_type,\n revision=resolved_path.revision,\n commit_message=kwargs.get(\"commit_message\"),\n commit_description=kwargs.get(\"commit_description\"),\n )\n self.invalidate_cache(path=resolved_path.unresolve())\n\n def rm(\n self,\n path: str,\n recursive: bool = False,\n maxdepth: Optional[int] = None,\n revision: Optional[str] = None,\n **kwargs,\n ) -> None:\n resolved_path = self.resolve_path(path, revision=revision)\n paths = self.expand_path(path, recursive=recursive, maxdepth=maxdepth, revision=revision)\n paths_in_repo = [self.resolve_path(path).path_in_repo for path in paths if not self.isdir(path)]\n operations = [CommitOperationDelete(path_in_repo=path_in_repo) for path_in_repo in paths_in_repo]\n commit_message = f\"Delete {path} \"\n commit_message += \"recursively \" if recursive else \"\"\n commit_message += f\"up to depth {maxdepth} \" if maxdepth is not None else \"\"\n # TODO: use `commit_description` to list all the deleted paths?\n self._api.create_commit(\n repo_id=resolved_path.repo_id,\n 
repo_type=resolved_path.repo_type,\n token=self.token,\n operations=operations,\n revision=resolved_path.revision,\n commit_message=kwargs.get(\"commit_message\", commit_message),\n commit_description=kwargs.get(\"commit_description\"),\n )\n self.invalidate_cache(path=resolved_path.unresolve())\n\n def ls(\n self, path: str, detail: bool = True, refresh: bool = False, revision: Optional[str] = None, **kwargs\n ) -> List[Union[str, Dict[str, Any]]]:\n \"\"\"List the contents of a directory.\"\"\"\n resolved_path = self.resolve_path(path, revision=revision)\n path = resolved_path.unresolve()\n kwargs = {\"expand_info\": detail, **kwargs}\n try:\n out = self._ls_tree(path, refresh=refresh, revision=revision, **kwargs)\n except EntryNotFoundError:\n # Path could be a file\n if not resolved_path.path_in_repo:\n _raise_file_not_found(path, None)\n out = self._ls_tree(self._parent(path), refresh=refresh, revision=revision, **kwargs)\n out = [o for o in out if o[\"name\"] == path]\n if len(out) == 0:\n _raise_file_not_found(path, None)\n return out if detail else [o[\"name\"] for o in out]\n\n def _ls_tree(\n self,\n path: str,\n recursive: bool = False,\n refresh: bool = False,\n revision: Optional[str] = None,\n expand_info: bool = True,\n ):\n resolved_path = self.resolve_path(path, revision=revision)\n path = resolved_path.unresolve()\n root_path = HfFileSystemResolvedPath(\n resolved_path.repo_type,\n resolved_path.repo_id,\n resolved_path.revision,\n path_in_repo=\"\",\n _raw_revision=resolved_path._raw_revision,\n ).unresolve()\n\n out = []\n if path in self.dircache and not refresh:\n cached_path_infos = self.dircache[path]\n out.extend(cached_path_infos)\n dirs_not_in_dircache = []\n if recursive:\n # Use BFS to traverse the cache and build the \"recursive \"output\n # (The Hub uses a so-called \"tree first\" strategy for the tree endpoint but we sort the output to follow the spec so the result is (eventually) the same)\n dirs_to_visit = deque(\n [path_info for path_info in cached_path_infos if path_info[\"type\"] == \"directory\"]\n )\n while dirs_to_visit:\n dir_info = dirs_to_visit.popleft()\n if dir_info[\"name\"] not in self.dircache:\n dirs_not_in_dircache.append(dir_info[\"name\"])\n else:\n cached_path_infos = self.dircache[dir_info[\"name\"]]\n out.extend(cached_path_infos)\n dirs_to_visit.extend(\n [path_info for path_info in cached_path_infos if path_info[\"type\"] == \"directory\"]\n )\n\n dirs_not_expanded = []\n if expand_info:\n # Check if there are directories with non-expanded entries\n dirs_not_expanded = [self._parent(o[\"name\"]) for o in out if o[\"last_commit\"] is None]\n\n if (recursive and dirs_not_in_dircache) or (expand_info and dirs_not_expanded):\n # If the dircache is incomplete, find the common path of the missing and non-expanded entries\n # and extend the output with the result of `_ls_tree(common_path, recursive=True)`\n common_prefix = os.path.commonprefix(dirs_not_in_dircache + dirs_not_expanded)\n # Get the parent directory if the common prefix itself is not a directory\n common_path = (\n common_prefix.rstrip(\"/\")\n if common_prefix.endswith(\"/\")\n or common_prefix == root_path\n or common_prefix in chain(dirs_not_in_dircache, dirs_not_expanded)\n else self._parent(common_prefix)\n )\n out = [o for o in out if not o[\"name\"].startswith(common_path + \"/\")]\n for cached_path in self.dircache:\n if cached_path.startswith(common_path + \"/\"):\n self.dircache.pop(cached_path, None)\n self.dircache.pop(common_path, None)\n out.extend(\n 
self._ls_tree(\n common_path,\n recursive=recursive,\n refresh=True,\n revision=revision,\n expand_info=expand_info,\n )\n )\n else:\n tree = self._api.list_repo_tree(\n resolved_path.repo_id,\n resolved_path.path_in_repo,\n recursive=recursive,\n expand=expand_info,\n revision=resolved_path.revision,\n repo_type=resolved_path.repo_type,\n )\n for path_info in tree:\n if isinstance(path_info, RepoFile):\n cache_path_info = {\n \"name\": root_path + \"/\" + path_info.path,\n \"size\": path_info.size,\n \"type\": \"file\",\n \"blob_id\": path_info.blob_id,\n \"lfs\": path_info.lfs,\n \"last_commit\": path_info.last_commit,\n \"security\": path_info.security,\n }\n else:\n cache_path_info = {\n \"name\": root_path + \"/\" + path_info.path,\n \"size\": 0,\n \"type\": \"directory\",\n \"tree_id\": path_info.tree_id,\n \"last_commit\": path_info.last_commit,\n }\n parent_path = self._parent(cache_path_info[\"name\"])\n self.dircache.setdefault(parent_path, []).append(cache_path_info)\n out.append(cache_path_info)\n return copy.deepcopy(out) # copy to not let users modify the dircache\n\n def glob(self, path, **kwargs):\n # Set expand_info=False by default to get a x10 speed boost\n kwargs = {\"expand_info\": kwargs.get(\"detail\", False), **kwargs}\n path = self.resolve_path(path, revision=kwargs.get(\"revision\")).unresolve()\n return super().glob(path, **kwargs)\n\n def find(\n self,\n path: str,\n maxdepth: Optional[int] = None,\n withdirs: bool = False,\n detail: bool = False,\n refresh: bool = False,\n revision: Optional[str] = None,\n **kwargs,\n ) -> Union[List[str], Dict[str, Dict[str, Any]]]:\n if maxdepth:\n return super().find(\n path, maxdepth=maxdepth, withdirs=withdirs, detail=detail, refresh=refresh, revision=revision, **kwargs\n )\n resolved_path = self.resolve_path(path, revision=revision)\n path = resolved_path.unresolve()\n kwargs = {\"expand_info\": detail, **kwargs}\n try:\n out = self._ls_tree(path, recursive=True, refresh=refresh, revision=resolved_path.revision, **kwargs)\n except EntryNotFoundError:\n # Path could be a file\n if self.info(path, revision=revision, **kwargs)[\"type\"] == \"file\":\n out = {path: {}}\n else:\n out = {}\n else:\n if not withdirs:\n out = [o for o in out if o[\"type\"] != \"directory\"]\n else:\n # If `withdirs=True`, include the directory itself to be consistent with the spec\n path_info = self.info(path, revision=resolved_path.revision, **kwargs)\n out = [path_info] + out if path_info[\"type\"] == \"directory\" else out\n out = {o[\"name\"]: o for o in out}\n names = sorted(out)\n if not detail:\n return names\n else:\n return {name: out[name] for name in names}\n\n def cp_file(self, path1: str, path2: str, revision: Optional[str] = None, **kwargs) -> None:\n resolved_path1 = self.resolve_path(path1, revision=revision)\n resolved_path2 = self.resolve_path(path2, revision=revision)\n\n same_repo = (\n resolved_path1.repo_type == resolved_path2.repo_type and resolved_path1.repo_id == resolved_path2.repo_id\n )\n\n if same_repo and self.info(path1, revision=resolved_path1.revision)[\"lfs\"] is not None:\n commit_message = f\"Copy {path1} to {path2}\"\n self._api.create_commit(\n repo_id=resolved_path1.repo_id,\n repo_type=resolved_path1.repo_type,\n revision=resolved_path2.revision,\n commit_message=kwargs.get(\"commit_message\", commit_message),\n commit_description=kwargs.get(\"commit_description\", \"\"),\n operations=[\n CommitOperationCopy(\n src_path_in_repo=resolved_path1.path_in_repo,\n path_in_repo=resolved_path2.path_in_repo,\n 
src_revision=resolved_path1.revision,\n )\n ],\n )\n else:\n with self.open(path1, \"rb\", revision=resolved_path1.revision) as f:\n content = f.read()\n commit_message = f\"Copy {path1} to {path2}\"\n self._api.upload_file(\n path_or_fileobj=content,\n path_in_repo=resolved_path2.path_in_repo,\n repo_id=resolved_path2.repo_id,\n token=self.token,\n repo_type=resolved_path2.repo_type,\n revision=resolved_path2.revision,\n commit_message=kwargs.get(\"commit_message\", commit_message),\n commit_description=kwargs.get(\"commit_description\"),\n )\n self.invalidate_cache(path=resolved_path1.unresolve())\n self.invalidate_cache(path=resolved_path2.unresolve())\n\n def modified(self, path: str, **kwargs) -> datetime:\n info = self.info(path, **kwargs)\n return info[\"last_commit\"][\"date\"]\n\n def info(self, path: str, refresh: bool = False, revision: Optional[str] = None, **kwargs) -> Dict[str, Any]:\n resolved_path = self.resolve_path(path, revision=revision)\n path = resolved_path.unresolve()\n expand_info = kwargs.get(\n \"expand_info\", True\n ) # don't expose it as a parameter in the public API to follow the spec\n if not resolved_path.path_in_repo:\n # Path is the root directory\n out = {\n \"name\": path,\n \"size\": 0,\n \"type\": \"directory\",\n }\n if expand_info:\n last_commit = self._api.list_repo_commits(\n resolved_path.repo_id, repo_type=resolved_path.repo_type, revision=resolved_path.revision\n )[-1]\n out = {\n **out,\n \"tree_id\": None, # TODO: tree_id of the root directory?\n \"last_commit\": LastCommitInfo(\n oid=last_commit.commit_id, title=last_commit.title, date=last_commit.created_at\n ),\n }\n else:\n out = None\n parent_path = self._parent(path)\n if parent_path in self.dircache:\n # Check if the path is in the cache\n out1 = [o for o in self.dircache[parent_path] if o[\"name\"] == path]\n if not out1:\n _raise_file_not_found(path, None)\n out = out1[0]\n if refresh or out is None or (expand_info and out and out[\"last_commit\"] is None):\n paths_info = self._api.get_paths_info(\n resolved_path.repo_id,\n resolved_path.path_in_repo,\n expand=expand_info,\n revision=resolved_path.revision,\n repo_type=resolved_path.repo_type,\n )\n if not paths_info:\n _raise_file_not_found(path, None)\n path_info = paths_info[0]\n root_path = HfFileSystemResolvedPath(\n resolved_path.repo_type,\n resolved_path.repo_id,\n resolved_path.revision,\n path_in_repo=\"\",\n _raw_revision=resolved_path._raw_revision,\n ).unresolve()\n if isinstance(path_info, RepoFile):\n out = {\n \"name\": root_path + \"/\" + path_info.path,\n \"size\": path_info.size,\n \"type\": \"file\",\n \"blob_id\": path_info.blob_id,\n \"lfs\": path_info.lfs,\n \"last_commit\": path_info.last_commit,\n \"security\": path_info.security,\n }\n else:\n out = {\n \"name\": root_path + \"/\" + path_info.path,\n \"size\": 0,\n \"type\": \"directory\",\n \"tree_id\": path_info.tree_id,\n \"last_commit\": path_info.last_commit,\n }\n if not expand_info:\n out = {k: out[k] for k in [\"name\", \"size\", \"type\"]}\n assert out is not None\n return copy.deepcopy(out) # copy to not let users modify the dircache\n\n def exists(self, path, **kwargs):\n \"\"\"Is there a file at the given path\"\"\"\n try:\n self.info(path, expand_info=False, **kwargs)\n return True\n except: # noqa: E722\n # any exception allowed bar FileNotFoundError?\n return False\n\n def isdir(self, path):\n \"\"\"Is this entry directory-like?\"\"\"\n try:\n return self.info(path, expand_info=False)[\"type\"] == \"directory\"\n except OSError:\n return 
False\n\n def isfile(self, path):\n \"\"\"Is this entry file-like?\"\"\"\n try:\n return self.info(path, expand_info=False)[\"type\"] == \"file\"\n except: # noqa: E722\n return False\n\n @property\n def transaction(self):\n \"\"\"A context within which files are committed together upon exit\n\n Requires the file class to implement `.commit()` and `.discard()`\n for the normal and exception cases.\n \"\"\"\n # Taken from https://github.com/fsspec/filesystem_spec/blob/3fbb6fee33b46cccb015607630843dea049d3243/fsspec/spec.py#L231\n # See https://github.com/huggingface/huggingface_hub/issues/1733\n raise NotImplementedError(\"Transactional commits are not supported.\")\n\n def start_transaction(self):\n \"\"\"Begin write transaction for deferring files, non-context version\"\"\"\n # Taken from https://github.com/fsspec/filesystem_spec/blob/3fbb6fee33b46cccb015607630843dea049d3243/fsspec/spec.py#L241\n # See https://github.com/huggingface/huggingface_hub/issues/1733\n raise NotImplementedError(\"Transactional commits are not supported.\")\n\n\nclass HfFileSystemFile(fsspec.spec.AbstractBufferedFile):\n def __init__(self, fs: HfFileSystem, path: str, revision: Optional[str] = None, **kwargs):\n try:\n self.resolved_path = fs.resolve_path(path, revision=revision)\n except FileNotFoundError as e:\n if \"w\" in kwargs.get(\"mode\", \"\"):\n raise FileNotFoundError(\n f\"{e}.\\nMake sure the repository and revision exist before writing data.\"\n ) from e\n super().__init__(fs, self.resolved_path.unresolve(), **kwargs)\n self.fs: HfFileSystem\n\n def __del__(self):\n if not hasattr(self, \"resolved_path\"):\n # Means that the constructor failed. Nothing to do.\n return\n return super().__del__()\n\n def _fetch_range(self, start: int, end: int) -> bytes:\n headers = {\n \"range\": f\"bytes={start}-{end - 1}\",\n **self.fs._api._build_hf_headers(),\n }\n url = hf_hub_url(\n repo_id=self.resolved_path.repo_id,\n revision=self.resolved_path.revision,\n filename=self.resolved_path.path_in_repo,\n repo_type=self.resolved_path.repo_type,\n endpoint=self.fs.endpoint,\n )\n r = http_backoff(\"GET\", url, headers=headers, retry_on_status_codes=(502, 503, 504))\n hf_raise_for_status(r)\n return r.content\n\n def _initiate_upload(self) -> None:\n self.temp_file = tempfile.NamedTemporaryFile(prefix=\"hffs-\", delete=False)\n\n def _upload_chunk(self, final: bool = False) -> None:\n self.buffer.seek(0)\n block = self.buffer.read()\n self.temp_file.write(block)\n if final:\n self.temp_file.close()\n self.fs._api.upload_file(\n path_or_fileobj=self.temp_file.name,\n path_in_repo=self.resolved_path.path_in_repo,\n repo_id=self.resolved_path.repo_id,\n token=self.fs.token,\n repo_type=self.resolved_path.repo_type,\n revision=self.resolved_path.revision,\n commit_message=self.kwargs.get(\"commit_message\"),\n commit_description=self.kwargs.get(\"commit_description\"),\n )\n os.remove(self.temp_file.name)\n self.fs.invalidate_cache(\n path=self.resolved_path.unresolve(),\n )\n\n\nclass HfFileSystemStreamFile(fsspec.spec.AbstractBufferedFile):\n def __init__(\n self,\n fs: HfFileSystem,\n path: str,\n mode: str = \"rb\",\n revision: Optional[str] = None,\n block_size: int = 0,\n cache_type: str = \"none\",\n **kwargs,\n ):\n if block_size != 0:\n raise ValueError(f\"HfFileSystemStreamFile only supports block_size=0 but got {block_size}\")\n if cache_type != \"none\":\n raise ValueError(f\"HfFileSystemStreamFile only supports cache_type='none' but got {cache_type}\")\n if \"w\" in mode:\n raise 
ValueError(f\"HfFileSystemStreamFile only supports reading but got mode='{mode}'\")\n try:\n self.resolved_path = fs.resolve_path(path, revision=revision)\n except FileNotFoundError as e:\n if \"w\" in kwargs.get(\"mode\", \"\"):\n raise FileNotFoundError(\n f\"{e}.\\nMake sure the repository and revision exist before writing data.\"\n ) from e\n # avoid an unecessary .info() call to instantiate .details\n self.details = {\"name\": self.resolved_path.unresolve(), \"size\": None}\n super().__init__(\n fs, self.resolved_path.unresolve(), mode=mode, block_size=block_size, cache_type=cache_type, **kwargs\n )\n self.response: Optional[Response] = None\n self.fs: HfFileSystem\n\n def seek(self, loc: int, whence: int = 0):\n if loc == 0 and whence == 1:\n return\n if loc == self.loc and whence == 0:\n return\n raise ValueError(\"Cannot seek streaming HF file\")\n\n def read(self, length: int = -1):\n read_args = (length,) if length >= 0 else ()\n if self.response is None or self.response.raw.isclosed():\n url = hf_hub_url(\n repo_id=self.resolved_path.repo_id,\n revision=self.resolved_path.revision,\n filename=self.resolved_path.path_in_repo,\n repo_type=self.resolved_path.repo_type,\n endpoint=self.fs.endpoint,\n )\n self.response = http_backoff(\n \"GET\",\n url,\n headers=self.fs._api._build_hf_headers(),\n retry_on_status_codes=(502, 503, 504),\n stream=True,\n )\n hf_raise_for_status(self.response)\n try:\n out = self.response.raw.read(*read_args)\n except Exception:\n self.response.close()\n\n # Retry by recreating the connection\n url = hf_hub_url(\n repo_id=self.resolved_path.repo_id,\n revision=self.resolved_path.revision,\n filename=self.resolved_path.path_in_repo,\n repo_type=self.resolved_path.repo_type,\n endpoint=self.fs.endpoint,\n )\n self.response = http_backoff(\n \"GET\",\n url,\n headers={\"Range\": \"bytes=%d-\" % self.loc, **self.fs._api._build_hf_headers()},\n retry_on_status_codes=(502, 503, 504),\n stream=True,\n )\n hf_raise_for_status(self.response)\n try:\n out = self.response.raw.read(*read_args)\n except Exception:\n self.response.close()\n raise\n self.loc += len(out)\n return out\n\n def __del__(self):\n if not hasattr(self, \"resolved_path\"):\n # Means that the constructor failed. Nothing to do.\n return\n return super().__del__()\n\n def __reduce__(self):\n return reopen, (self.fs, self.path, self.mode, self.blocksize, self.cache.name)\n\n\ndef safe_revision(revision: str) -> str:\n return revision if SPECIAL_REFS_REVISION_REGEX.match(revision) else safe_quote(revision)\n\n\ndef safe_quote(s: str) -> str:\n return quote(s, safe=\"\")\n\n\ndef _raise_file_not_found(path: str, err: Optional[Exception]) -> NoReturn:\n msg = path\n if isinstance(err, RepositoryNotFoundError):\n msg = f\"{path} (repository not found)\"\n elif isinstance(err, RevisionNotFoundError):\n msg = f\"{path} (revision not found)\"\n elif isinstance(err, HFValidationError):\n msg = f\"{path} (invalid repository id)\"\n raise FileNotFoundError(msg) from err\n\n\ndef reopen(fs: HfFileSystem, path: str, mode: str, block_size: int, cache_type: str):\n return fs.open(path, mode=mode, block_size=block_size, cache_type=cache_type)\n"}
|
{"src/huggingface_hub/hf_file_system.py": [{"type": "function", "name": "HfFileSystem.url", "lines": [580, 592], "signature": "def url(self, path: str) -> str:", "doc": "Get the HTTP URL of the given path"}, {"type": "function", "name": "HfFileSystemFile.url", "lines": [670, 671], "signature": "def url(self) -> str:", "doc": ""}, {"type": "function", "name": "HfFileSystemStreamFile.url", "lines": [760, 761], "signature": "def url(self) -> str:", "doc": ""}]}
| null |
["tests/test_hf_file_system.py::HfFileSystemTests::test_url"]
|
["tests/test_hf_file_system.py::HfFileSystemTests::test_copy_file", "tests/test_hf_file_system.py::HfFileSystemTests::test_file_type", "tests/test_hf_file_system.py::HfFileSystemTests::test_find_data_file_no_revision", "tests/test_hf_file_system.py::HfFileSystemTests::test_find_root_directory_no_revision", "tests/test_hf_file_system.py::HfFileSystemTests::test_find_root_directory_no_revision_with_incomplete_cache", "tests/test_hf_file_system.py::HfFileSystemTests::test_glob", "tests/test_hf_file_system.py::HfFileSystemTests::test_info", "tests/test_hf_file_system.py::HfFileSystemTests::test_initialize_from_fsspec", "tests/test_hf_file_system.py::HfFileSystemTests::test_list_data_directory_no_revision", "tests/test_hf_file_system.py::HfFileSystemTests::test_list_data_directory_with_revision", "tests/test_hf_file_system.py::HfFileSystemTests::test_list_data_file_no_revision", "tests/test_hf_file_system.py::HfFileSystemTests::test_list_root_directory_no_revision", "tests/test_hf_file_system.py::HfFileSystemTests::test_list_root_directory_no_revision_no_detail_then_with_detail", "tests/test_hf_file_system.py::HfFileSystemTests::test_modified_time", "tests/test_hf_file_system.py::HfFileSystemTests::test_read_file", "tests/test_hf_file_system.py::HfFileSystemTests::test_read_file_with_revision", "tests/test_hf_file_system.py::HfFileSystemTests::test_remove_directory", "tests/test_hf_file_system.py::HfFileSystemTests::test_remove_file", "tests/test_hf_file_system.py::HfFileSystemTests::test_stream_file", "tests/test_hf_file_system.py::HfFileSystemTests::test_stream_file_retry", "tests/test_hf_file_system.py::HfFileSystemTests::test_write_file", "tests/test_hf_file_system.py::HfFileSystemTests::test_write_file_multiple_chunks", "tests/test_hf_file_system.py::test_resolve_path[gpt2-None-model-gpt2-main-]", "tests/test_hf_file_system.py::test_resolve_path[gpt2-None-model-gpt2-main-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[gpt2-None-model-gpt2-main-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[gpt2-dev-model-gpt2-dev-]", "tests/test_hf_file_system.py::test_resolve_path[gpt2-dev-model-gpt2-dev-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[gpt2-dev-model-gpt2-dev-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[gpt2@dev-None-model-gpt2-dev-]", "tests/test_hf_file_system.py::test_resolve_path[[email protected]]", "tests/test_hf_file_system.py::test_resolve_path[gpt2@dev-None-model-gpt2-dev-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[datasets/squad-None-dataset-squad-main-]", "tests/test_hf_file_system.py::test_resolve_path[datasets/squad-None-dataset-squad-main-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[datasets/squad-None-dataset-squad-main-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[datasets/squad-dev-dataset-squad-dev-]", "tests/test_hf_file_system.py::test_resolve_path[datasets/squad-dev-dataset-squad-dev-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[datasets/squad-dev-dataset-squad-dev-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[datasets/squad@dev-None-dataset-squad-dev-]", "tests/test_hf_file_system.py::test_resolve_path[datasets/[email protected]]", "tests/test_hf_file_system.py::test_resolve_path[datasets/squad@dev-None-dataset-squad-dev-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[username/my_model-None-model-username/my_model-main-]", 
"tests/test_hf_file_system.py::test_resolve_path[username/my_model-None-model-username/my_model-main-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[username/my_model-None-model-username/my_model-main-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[username/my_model-dev-model-username/my_model-dev-]", "tests/test_hf_file_system.py::test_resolve_path[username/my_model-dev-model-username/my_model-dev-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[username/my_model-dev-model-username/my_model-dev-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[username/my_model@dev-None-model-username/my_model-dev-]", "tests/test_hf_file_system.py::test_resolve_path[username/my_model@dev-None-model-username/my_model-dev-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[username/my_model@dev-None-model-username/my_model-dev-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[datasets/username/my_dataset-None-dataset-username/my_dataset-main-]", "tests/test_hf_file_system.py::test_resolve_path[datasets/username/my_dataset-None-dataset-username/my_dataset-main-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[datasets/username/my_dataset-None-dataset-username/my_dataset-main-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[datasets/username/my_dataset-dev-dataset-username/my_dataset-dev-]", "tests/test_hf_file_system.py::test_resolve_path[datasets/username/my_dataset-dev-dataset-username/my_dataset-dev-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[datasets/username/my_dataset-dev-dataset-username/my_dataset-dev-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[datasets/username/my_dataset@dev-None-dataset-username/my_dataset-dev-]", "tests/test_hf_file_system.py::test_resolve_path[datasets/username/my_dataset@dev-None-dataset-username/my_dataset-dev-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[datasets/username/my_dataset@dev-None-dataset-username/my_dataset-dev-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[hf://gpt2-None-model-gpt2-main-]", "tests/test_hf_file_system.py::test_resolve_path[hf://gpt2-None-model-gpt2-main-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[hf://gpt2-None-model-gpt2-main-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[hf://gpt2-dev-model-gpt2-dev-]", "tests/test_hf_file_system.py::test_resolve_path[hf://gpt2-dev-model-gpt2-dev-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[hf://gpt2-dev-model-gpt2-dev-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[hf://gpt2@dev-None-model-gpt2-dev-]", "tests/test_hf_file_system.py::test_resolve_path[hf://[email protected]]", "tests/test_hf_file_system.py::test_resolve_path[hf://gpt2@dev-None-model-gpt2-dev-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[hf://datasets/squad-None-dataset-squad-main-]", "tests/test_hf_file_system.py::test_resolve_path[hf://datasets/squad-None-dataset-squad-main-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[hf://datasets/squad-None-dataset-squad-main-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[hf://datasets/squad-dev-dataset-squad-dev-]", "tests/test_hf_file_system.py::test_resolve_path[hf://datasets/squad-dev-dataset-squad-dev-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[hf://datasets/squad-dev-dataset-squad-dev-path/to/file]", 
"tests/test_hf_file_system.py::test_resolve_path[hf://datasets/squad@dev-None-dataset-squad-dev-]", "tests/test_hf_file_system.py::test_resolve_path[hf://datasets/[email protected]]", "tests/test_hf_file_system.py::test_resolve_path[hf://datasets/squad@dev-None-dataset-squad-dev-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[datasets/squad@refs/convert/parquet-None-dataset-squad-refs/convert/parquet-]", "tests/test_hf_file_system.py::test_resolve_path[datasets/squad@refs/convert/parquet-None-dataset-squad-refs/convert/parquet-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[datasets/squad@refs/convert/parquet-None-dataset-squad-refs/convert/parquet-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[hf://datasets/username/my_dataset@refs/convert/parquet-None-dataset-username/my_dataset-refs/convert/parquet-]", "tests/test_hf_file_system.py::test_resolve_path[hf://datasets/username/my_dataset@refs/convert/parquet-None-dataset-username/my_dataset-refs/convert/parquet-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[hf://datasets/username/my_dataset@refs/convert/parquet-None-dataset-username/my_dataset-refs/convert/parquet-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[gpt2@refs/pr/2-None-model-gpt2-refs/pr/2-]", "tests/test_hf_file_system.py::test_resolve_path[gpt2@refs/pr/2-None-model-gpt2-refs/pr/2-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[gpt2@refs/pr/2-None-model-gpt2-refs/pr/2-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[gpt2@refs%2Fpr%2F2-None-model-gpt2-refs/pr/2-]", "tests/test_hf_file_system.py::test_resolve_path[gpt2@refs%2Fpr%2F2-None-model-gpt2-refs/pr/2-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[gpt2@refs%2Fpr%2F2-None-model-gpt2-refs/pr/2-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[hf://username/my_model@refs/pr/10-None-model-username/my_model-refs/pr/10-]", "tests/test_hf_file_system.py::test_resolve_path[hf://username/my_model@refs/pr/10-None-model-username/my_model-refs/pr/10-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[hf://username/my_model@refs/pr/10-None-model-username/my_model-refs/pr/10-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[hf://username/my_model@refs/pr/10-refs/pr/10-model-username/my_model-refs/pr/10-]", "tests/test_hf_file_system.py::test_resolve_path[hf://username/my_model@refs/pr/10-refs/pr/10-model-username/my_model-refs/pr/10-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[hf://username/my_model@refs/pr/10-refs/pr/10-model-username/my_model-refs/pr/10-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path[hf://username/my_model@refs%2Fpr%2F10-refs/pr/10-model-username/my_model-refs/pr/10-]", "tests/test_hf_file_system.py::test_resolve_path[hf://username/my_model@refs%2Fpr%2F10-refs/pr/10-model-username/my_model-refs/pr/10-file.txt]", "tests/test_hf_file_system.py::test_resolve_path[hf://username/my_model@refs%2Fpr%2F10-refs/pr/10-model-username/my_model-refs/pr/10-path/to/file]", "tests/test_hf_file_system.py::test_unresolve_path[hf://datasets/squad@dev-None-datasets/squad@dev-]", "tests/test_hf_file_system.py::test_unresolve_path[hf://datasets/squad@dev-None-datasets/[email protected]]", "tests/test_hf_file_system.py::test_unresolve_path[hf://datasets/squad@dev-None-datasets/squad@dev-path/to/file]", "tests/test_hf_file_system.py::test_unresolve_path[datasets/squad@refs/convert/parquet-None-datasets/squad@refs/convert/parquet-]", 
"tests/test_hf_file_system.py::test_unresolve_path[datasets/squad@refs/convert/parquet-None-datasets/squad@refs/convert/parquet-file.txt]", "tests/test_hf_file_system.py::test_unresolve_path[datasets/squad@refs/convert/parquet-None-datasets/squad@refs/convert/parquet-path/to/file]", "tests/test_hf_file_system.py::test_unresolve_path[hf://username/my_model@refs/pr/10-None-username/my_model@refs/pr/10-]", "tests/test_hf_file_system.py::test_unresolve_path[hf://username/my_model@refs/pr/10-None-username/my_model@refs/pr/10-file.txt]", "tests/test_hf_file_system.py::test_unresolve_path[hf://username/my_model@refs/pr/10-None-username/my_model@refs/pr/10-path/to/file]", "tests/test_hf_file_system.py::test_unresolve_path[username/my_model-refs/weirdo-username/my_model@refs%2Fweirdo-]", "tests/test_hf_file_system.py::test_unresolve_path[username/my_model-refs/weirdo-username/my_model@refs%2Fweirdo-file.txt]", "tests/test_hf_file_system.py::test_unresolve_path[username/my_model-refs/weirdo-username/my_model@refs%2Fweirdo-path/to/file]", "tests/test_hf_file_system.py::test_resolve_path_with_refs_revision", "tests/test_hf_file_system.py::test_resolve_path_with_non_matching_revisions", "tests/test_hf_file_system.py::test_access_repositories_lists[]", "tests/test_hf_file_system.py::test_access_repositories_lists[foo]", "tests/test_hf_file_system.py::test_access_repositories_lists[datasets]", "tests/test_hf_file_system.py::test_access_repositories_lists[datasets/foo]"]
|
4058e1f97ebe256b2f3006d4bc31be275c66df6b
|
{"first_commit_time": 1707932256.0, "pr_title": "Add `HfFileSystem.url` method", "pr_body": "Adds a `url` method to the `HfFileSystem` to simplify converting HF paths to HTTP URLs, which should be useful when working with libs that support HTTP URLs but not `fsspec` paths as input/output (e.g., `webdataset`, `polars`, etc.).\r\n\r\nPS: The `url` method is not part of the official `fsspec` specification, but popular filesystem implementations such as `gcsfs` and `s3fs` also have it", "pr_timeline": [{"time": 1707933994.0, "comment": "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/huggingface_hub/pr_2027). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."}, {"time": 1708013942.0, "comment": "## [Codecov](https://app.codecov.io/gh/huggingface/huggingface_hub/pull/2027?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=huggingface) Report\nAttention: `2 lines` in your changes are missing coverage. Please review.\n> Comparison is base [(`0c272d5`)](https://app.codecov.io/gh/huggingface/huggingface_hub/commit/0c272d506e390f2d7b9dac68159595845c7f8e3b?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=huggingface) 82.37% compared to head [(`ab935a4`)](https://app.codecov.io/gh/huggingface/huggingface_hub/pull/2027?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=huggingface) 82.37%.\n> Report is 2 commits behind head on main.\n\n> :exclamation: Current head ab935a4 differs from pull request most recent head be04be6. Consider uploading reports for the commit be04be6 to get more accurate results\n\n| [Files](https://app.codecov.io/gh/huggingface/huggingface_hub/pull/2027?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=huggingface) | Patch % | Lines |\n|---|---|---|\n| [src/huggingface\\_hub/hf\\_file\\_system.py](https://app.codecov.io/gh/huggingface/huggingface_hub/pull/2027?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=huggingface#diff-c3JjL2h1Z2dpbmdmYWNlX2h1Yi9oZl9maWxlX3N5c3RlbS5weQ==) | 77.77% | [2 Missing :warning: ](https://app.codecov.io/gh/huggingface/huggingface_hub/pull/2027?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=huggingface) |\n\n<details><summary>Additional details and impacted files</summary>\n\n\n```diff\n@@ Coverage Diff @@\n## main #2027 +/- ##\n==========================================\n- Coverage 82.37% 82.37% -0.01% \n==========================================\n Files 66 66 \n Lines 8240 8249 +9 \n==========================================\n+ Hits 6788 6795 +7 \n- Misses 1452 1454 +2 \n```\n\n\n\n</details>\n\n[:umbrella: View full report in Codecov by Sentry](https://app.codecov.io/gh/huggingface/huggingface_hub/pull/2027?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=huggingface). \n:loudspeaker: Have feedback on the report? [Share it here](https://about.codecov.io/codecov-pr-comment-feedback/?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=huggingface).\n"}], "issues": {}}
|
|
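The huggingface_hub record above adds a `url` method to `HfFileSystem` (see the patch and the `test_url` case in its test_patch). Below is a minimal, hedged sketch of how that method might be called once the patch is applied; the repo id and file paths are placeholders for illustration only and are not taken from the record.

```python
from huggingface_hub import HfFileSystem

# Default endpoint (huggingface.co); pass token=... for private repos.
fs = HfFileSystem()

# Placeholder dataset repo id, used only for illustration.
# For a file, the patched url() builds a ".../resolve/<revision>/<path>" URL.
file_url = fs.url("datasets/username/my_dataset/data/text_data.txt")
# e.g. https://huggingface.co/datasets/username/my_dataset/resolve/main/data/text_data.txt

# For a directory, the patch rewrites "/resolve/" to "/tree/".
dir_url = fs.url("datasets/username/my_dataset/data")
# e.g. https://huggingface.co/datasets/username/my_dataset/tree/main/data
```

As the pr_body in the record notes, `url` is not part of the official fsspec specification, but implementations such as `gcsfs` and `s3fs` expose a similar helper, so libraries that accept HTTP URLs but not fsspec paths can consume the result directly.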
huggingface/pytorch-image-models
| 876
|
https://github.com/huggingface/pytorch-image-models/pull/876
|
huggingface__pytorch-image-models-876
|
[]
|
b5bf4dce98a09cb25a274f360c08f23998faedb8
|
diff --git a/timm/utils/model.py b/timm/utils/model.py
index bd46e2f49c..ffe66049c2 100644
--- a/timm/utils/model.py
+++ b/timm/utils/model.py
@@ -2,9 +2,15 @@
Hacked together by / Copyright 2020 Ross Wightman
"""
-from .model_ema import ModelEma
+from logging import root
+from typing import Sequence
+
import torch
import fnmatch
+from torchvision.ops.misc import FrozenBatchNorm2d
+
+from .model_ema import ModelEma
+
def unwrap_model(model):
if isinstance(model, ModelEma):
@@ -89,4 +95,176 @@ def extract_spp_stats(model,
hook = ActivationStatsHook(model, hook_fn_locs=hook_fn_locs, hook_fns=hook_fns)
_ = model(x)
return hook.stats
+
+
+def freeze_batch_norm_2d(module):
+ """
+ Converts all `BatchNorm2d` and `SyncBatchNorm` layers of provided module into `FrozenBatchNorm2d`. If `module` is
+ itself an instance of either `BatchNorm2d` or `SyncBatchNorm`, it is converted into `FrozenBatchNorm2d` and
+ returned. Otherwise, the module is walked recursively and submodules are converted in place.
+
+ Args:
+ module (torch.nn.Module): Any PyTorch module.
+
+ Returns:
+ torch.nn.Module: Resulting module
+
+ Inspired by https://github.com/pytorch/pytorch/blob/a5895f85be0f10212791145bfedc0261d364f103/torch/nn/modules/batchnorm.py#L762
+ """
+ res = module
+ if isinstance(module, (torch.nn.modules.batchnorm.BatchNorm2d, torch.nn.modules.batchnorm.SyncBatchNorm)):
+ res = FrozenBatchNorm2d(module.num_features)
+ res.num_features = module.num_features
+ res.affine = module.affine
+ if module.affine:
+ res.weight.data = module.weight.data.clone().detach()
+ res.bias.data = module.bias.data.clone().detach()
+ res.running_mean.data = module.running_mean.data
+ res.running_var.data = module.running_var.data
+ res.eps = module.eps
+ else:
+ for name, child in module.named_children():
+ new_child = freeze_batch_norm_2d(child)
+ if new_child is not child:
+ res.add_module(name, new_child)
+ return res
+
+
+def unfreeze_batch_norm_2d(module):
+ """
+ Converts all `FrozenBatchNorm2d` layers of provided module into `BatchNorm2d`. If `module` is itself and instance
+ of `FrozenBatchNorm2d`, it is converted into `BatchNorm2d` and returned. Otherwise, the module is walked
+ recursively and submodules are converted in place.
+
+ Args:
+ module (torch.nn.Module): Any PyTorch module.
+
+ Returns:
+ torch.nn.Module: Resulting module
+
+ Inspired by https://github.com/pytorch/pytorch/blob/a5895f85be0f10212791145bfedc0261d364f103/torch/nn/modules/batchnorm.py#L762
+ """
+ res = module
+ if isinstance(module, FrozenBatchNorm2d):
+ res = torch.nn.BatchNorm2d(module.num_features)
+ if module.affine:
+ res.weight.data = module.weight.data.clone().detach()
+ res.bias.data = module.bias.data.clone().detach()
+ res.running_mean.data = module.running_mean.data
+ res.running_var.data = module.running_var.data
+ res.eps = module.eps
+ else:
+ for name, child in module.named_children():
+ new_child = unfreeze_batch_norm_2d(child)
+ if new_child is not child:
+ res.add_module(name, new_child)
+ return res
+
+
+def _freeze_unfreeze(root_module, submodules=[], include_bn_running_stats=True, mode='freeze'):
+ """
+ Freeze or unfreeze parameters of the specified modules and those of all their hierarchical descendants. This is
+ done in place.
+ Args:
+ root_module (nn.Module, optional): Root module relative to which the `submodules` are referenced.
+ submodules (list[str]): List of modules for which the parameters will be (un)frozen. They are to be provided as
+ named modules relative to the root module (accessible via `root_module.named_modules()`). An empty list
+ means that the whole root module will be (un)frozen. Defaults to []
+ include_bn_running_stats (bool): Whether to also (un)freeze the running statistics of batch norm 2d layers.
+ Defaults to `True`.
+ mode (bool): Whether to freeze ("freeze") or unfreeze ("unfreeze"). Defaults to `"freeze"`.
+ """
+ assert mode in ["freeze", "unfreeze"], '`mode` must be one of "freeze" or "unfreeze"'
+
+ if isinstance(root_module, (torch.nn.modules.batchnorm.BatchNorm2d, torch.nn.modules.batchnorm.SyncBatchNorm)):
+ # Raise assertion here because we can't convert it in place
+ raise AssertionError(
+ "You have provided a batch norm layer as the `root module`. Please use "
+ "`timm.utils.model.freeze_batch_norm_2d` or `timm.utils.model.unfreeze_batch_norm_2d` instead.")
+
+ if isinstance(submodules, str):
+ submodules = [submodules]
+
+ named_modules = submodules
+ submodules = [root_module.get_submodule(m) for m in submodules]
+
+ if not(len(submodules)):
+ named_modules, submodules = list(zip(*root_module.named_children()))
+
+ for n, m in zip(named_modules, submodules):
+ # (Un)freeze parameters
+ for p in m.parameters():
+ p.requires_grad = (False if mode == 'freeze' else True)
+ if include_bn_running_stats:
+ # Helper to add submodule specified as a named_module
+ def _add_submodule(module, name, submodule):
+ split = name.rsplit('.', 1)
+ if len(split) > 1:
+ module.get_submodule(split[0]).add_module(split[1], submodule)
+ else:
+ module.add_module(name, submodule)
+ # Freeze batch norm
+ if mode == 'freeze':
+ res = freeze_batch_norm_2d(m)
+ # It's possible that `m` is a type of BatchNorm in itself, in which case `unfreeze_batch_norm_2d` won't
+ # convert it in place, but will return the converted result. In this case `res` holds the converted
+ # result and we may try to re-assign the named module
+ if isinstance(m, (torch.nn.modules.batchnorm.BatchNorm2d, torch.nn.modules.batchnorm.SyncBatchNorm)):
+ _add_submodule(root_module, n, res)
+ # Unfreeze batch norm
+ else:
+ res = unfreeze_batch_norm_2d(m)
+ # Ditto. See note above in mode == 'freeze' branch
+ if isinstance(m, FrozenBatchNorm2d):
+ _add_submodule(root_module, n, res)
+
+
+def freeze(root_module, submodules=[], include_bn_running_stats=True):
+ """
+ Freeze parameters of the specified modules and those of all their hierarchical descendants. This is done in place.
+ Args:
+ root_module (nn.Module): Root module relative to which `submodules` are referenced.
+ submodules (list[str]): List of modules for which the parameters will be frozen. They are to be provided as
+ named modules relative to the root module (accessible via `root_module.named_modules()`). An empty list
+ means that the whole root module will be frozen. Defaults to `[]`.
+ include_bn_running_stats (bool): Whether to also freeze the running statistics of `BatchNorm2d` and
+ `SyncBatchNorm` layers. These will be converted to `FrozenBatchNorm2d` in place. Hint: During fine tuning,
+ it's good practice to freeze batch norm stats. And note that these are different to the affine parameters
+ which are just normal PyTorch parameters. Defaults to `True`.
+
+ Hint: If you want to freeze batch norm ONLY, use `timm.utils.model.freeze_batch_norm_2d`.
+
+ Examples::
+
+ >>> model = timm.create_model('resnet18')
+ >>> # Freeze up to and including layer2
+ >>> submodules = [n for n, _ in model.named_children()]
+ >>> print(submodules)
+ ['conv1', 'bn1', 'act1', 'maxpool', 'layer1', 'layer2', 'layer3', 'layer4', 'global_pool', 'fc']
+ >>> freeze(model, submodules[:submodules.index('layer2') + 1])
+ >>> # Check for yourself that it works as expected
+ >>> print(model.layer2[0].conv1.weight.requires_grad)
+ False
+ >>> print(model.layer3[0].conv1.weight.requires_grad)
+ True
+ >>> # Unfreeze
+ >>> unfreeze(model)
+ """
+ _freeze_unfreeze(root_module, submodules, include_bn_running_stats=include_bn_running_stats, mode="freeze")
+
+
+def unfreeze(root_module, submodules=[], include_bn_running_stats=True):
+ """
+ Unfreeze parameters of the specified modules and those of all their hierarchical descendants. This is done in place.
+ Args:
+ root_module (nn.Module): Root module relative to which `submodules` are referenced.
+ submodules (list[str]): List of submodules for which the parameters will be (un)frozen. They are to be provided
+ as named modules relative to the root module (accessible via `root_module.named_modules()`). An empty
+ list means that the whole root module will be unfrozen. Defaults to `[]`.
+ include_bn_running_stats (bool): Whether to also unfreeze the running statistics of `FrozenBatchNorm2d` layers.
+ These will be converted to `BatchNorm2d` in place. Defaults to `True`.
+
+ See example in docstring for `freeze`.
+ """
+ _freeze_unfreeze(root_module, submodules, include_bn_running_stats=include_bn_running_stats, mode="unfreeze")
\ No newline at end of file
|
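A minimal sketch of the batch-norm conversion helpers introduced in the patch above, assuming a timm checkout that includes this change; the toy `Sequential` model is only for illustration.

```python
import torch
from torchvision.ops.misc import FrozenBatchNorm2d
from timm.utils.model import freeze_batch_norm_2d, unfreeze_batch_norm_2d

net = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3),
    torch.nn.BatchNorm2d(8),
    torch.nn.ReLU(),
)

net = freeze_batch_norm_2d(net)      # BatchNorm2d children become FrozenBatchNorm2d
assert isinstance(net[1], FrozenBatchNorm2d)

net = unfreeze_batch_norm_2d(net)    # and can be converted back afterwards
assert isinstance(net[1], torch.nn.BatchNorm2d)
```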
diff --git a/tests/test_utils.py b/tests/test_utils.py
new file mode 100644
index 0000000000..b0f890d2fe
--- /dev/null
+++ b/tests/test_utils.py
@@ -0,0 +1,57 @@
+from torch.nn.modules.batchnorm import BatchNorm2d
+from torchvision.ops.misc import FrozenBatchNorm2d
+
+import timm
+from timm.utils.model import freeze, unfreeze
+
+
+def test_freeze_unfreeze():
+ model = timm.create_model('resnet18')
+
+ # Freeze all
+ freeze(model)
+ # Check top level module
+ assert model.fc.weight.requires_grad == False
+ # Check submodule
+ assert model.layer1[0].conv1.weight.requires_grad == False
+ # Check BN
+ assert isinstance(model.layer1[0].bn1, FrozenBatchNorm2d)
+
+ # Unfreeze all
+ unfreeze(model)
+ # Check top level module
+ assert model.fc.weight.requires_grad == True
+ # Check submodule
+ assert model.layer1[0].conv1.weight.requires_grad == True
+ # Check BN
+ assert isinstance(model.layer1[0].bn1, BatchNorm2d)
+
+ # Freeze some
+ freeze(model, ['layer1', 'layer2.0'])
+ # Check frozen
+ assert model.layer1[0].conv1.weight.requires_grad == False
+ assert isinstance(model.layer1[0].bn1, FrozenBatchNorm2d)
+ assert model.layer2[0].conv1.weight.requires_grad == False
+ # Check not frozen
+ assert model.layer3[0].conv1.weight.requires_grad == True
+ assert isinstance(model.layer3[0].bn1, BatchNorm2d)
+ assert model.layer2[1].conv1.weight.requires_grad == True
+
+ # Unfreeze some
+ unfreeze(model, ['layer1', 'layer2.0'])
+ # Check not frozen
+ assert model.layer1[0].conv1.weight.requires_grad == True
+ assert isinstance(model.layer1[0].bn1, BatchNorm2d)
+ assert model.layer2[0].conv1.weight.requires_grad == True
+
+ # Freeze/unfreeze BN
+ # From root
+ freeze(model, ['layer1.0.bn1'])
+ assert isinstance(model.layer1[0].bn1, FrozenBatchNorm2d)
+ unfreeze(model, ['layer1.0.bn1'])
+ assert isinstance(model.layer1[0].bn1, BatchNorm2d)
+ # From direct parent
+ freeze(model.layer1[0], ['bn1'])
+ assert isinstance(model.layer1[0].bn1, FrozenBatchNorm2d)
+ unfreeze(model.layer1[0], ['bn1'])
+ assert isinstance(model.layer1[0].bn1, BatchNorm2d)
\ No newline at end of file
| 2021-09-21T11:53:01
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"timm/utils/model.py": "\"\"\" Model / state_dict utils\n\nHacked together by / Copyright 2020 Ross Wightman\n\"\"\"\nfrom .model_ema import ModelEma\nimport torch \nimport fnmatch\n\ndef unwrap_model(model):\n if isinstance(model, ModelEma):\n return unwrap_model(model.ema)\n else:\n return model.module if hasattr(model, 'module') else model\n\n\ndef get_state_dict(model, unwrap_fn=unwrap_model):\n return unwrap_fn(model).state_dict()\n\n\ndef avg_sq_ch_mean(model, input, output): \n \"calculate average channel square mean of output activations\"\n return torch.mean(output.mean(axis=[0,2,3])**2).item()\n\n\ndef avg_ch_var(model, input, output): \n \"calculate average channel variance of output activations\"\n return torch.mean(output.var(axis=[0,2,3])).item()\\\n\n\ndef avg_ch_var_residual(model, input, output): \n \"calculate average channel variance of output activations\"\n return torch.mean(output.var(axis=[0,2,3])).item()\n\n\nclass ActivationStatsHook:\n \"\"\"Iterates through each of `model`'s modules and matches modules using unix pattern \n matching based on `hook_fn_locs` and registers `hook_fn` to the module if there is \n a match. \n\n Arguments:\n model (nn.Module): model from which we will extract the activation stats\n hook_fn_locs (List[str]): List of `hook_fn` locations based on Unix type string \n matching with the name of model's modules. \n hook_fns (List[Callable]): List of hook functions to be registered at every\n module in `layer_names`.\n \n Inspiration from https://docs.fast.ai/callback.hook.html.\n\n Refer to https://gist.github.com/amaarora/6e56942fcb46e67ba203f3009b30d950 for an example \n on how to plot Signal Propogation Plots using `ActivationStatsHook`.\n \"\"\"\n\n def __init__(self, model, hook_fn_locs, hook_fns):\n self.model = model\n self.hook_fn_locs = hook_fn_locs\n self.hook_fns = hook_fns\n if len(hook_fn_locs) != len(hook_fns):\n raise ValueError(\"Please provide `hook_fns` for each `hook_fn_locs`, \\\n their lengths are different.\")\n self.stats = dict((hook_fn.__name__, []) for hook_fn in hook_fns)\n for hook_fn_loc, hook_fn in zip(hook_fn_locs, hook_fns): \n self.register_hook(hook_fn_loc, hook_fn)\n\n def _create_hook(self, hook_fn):\n def append_activation_stats(module, input, output):\n out = hook_fn(module, input, output)\n self.stats[hook_fn.__name__].append(out)\n return append_activation_stats\n \n def register_hook(self, hook_fn_loc, hook_fn):\n for name, module in self.model.named_modules():\n if not fnmatch.fnmatch(name, hook_fn_loc):\n continue\n module.register_forward_hook(self._create_hook(hook_fn))\n\n\ndef extract_spp_stats(model, \n hook_fn_locs,\n hook_fns, \n input_shape=[8, 3, 224, 224]):\n \"\"\"Extract average square channel mean and variance of activations during \n forward pass to plot Signal Propogation Plots (SPP).\n \n Paper: https://arxiv.org/abs/2101.08692\n\n Example Usage: https://gist.github.com/amaarora/6e56942fcb46e67ba203f3009b30d950\n \"\"\" \n x = torch.normal(0., 1., input_shape)\n hook = ActivationStatsHook(model, hook_fn_locs=hook_fn_locs, hook_fns=hook_fns)\n _ = model(x)\n return hook.stats\n "}
|
{"timm/utils/model.py": [{"type": "function", "name": "freeze_batch_norm_2d", "lines": [100, 130], "signature": "def freeze_batch_norm_2d(module):", "doc": "Converts all `BatchNorm2d` and `SyncBatchNorm` layers of provided module into `FrozenBatchNorm2d`. If `module` is\nitself an instance of either `BatchNorm2d` or `SyncBatchNorm`, it is converted into `FrozenBatchNorm2d` and\nreturned. Otherwise, the module is walked recursively and submodules are converted in place.\n\nArgs:\n module (torch.nn.Module): Any PyTorch module.\n\nReturns:\n torch.nn.Module: Resulting module\n\nInspired by https://github.com/pytorch/pytorch/blob/a5895f85be0f10212791145bfedc0261d364f103/torch/nn/modules/batchnorm.py#L762"}, {"type": "function", "name": "unfreeze_batch_norm_2d", "lines": [133, 161], "signature": "def unfreeze_batch_norm_2d(module):", "doc": "Converts all `FrozenBatchNorm2d` layers of provided module into `BatchNorm2d`. If `module` is itself and instance\nof `FrozenBatchNorm2d`, it is converted into `BatchNorm2d` and returned. Otherwise, the module is walked\nrecursively and submodules are converted in place.\n\nArgs:\n module (torch.nn.Module): Any PyTorch module.\n\nReturns:\n torch.nn.Module: Resulting module\n\nInspired by https://github.com/pytorch/pytorch/blob/a5895f85be0f10212791145bfedc0261d364f103/torch/nn/modules/batchnorm.py#L762"}, {"type": "function", "name": "_freeze_unfreeze", "lines": [164, 219], "signature": "def _freeze_unfreeze(root_module, submodules=[], include_bn_running_stats=True, mode='freeze'):", "doc": "Freeze or unfreeze parameters of the specified modules and those of all their hierarchical descendants. This is\ndone in place.\nArgs:\n root_module (nn.Module, optional): Root module relative to which the `submodules` are referenced.\n submodules (list[str]): List of modules for which the parameters will be (un)frozen. They are to be provided as\n named modules relative to the root module (accessible via `root_module.named_modules()`). An empty list\n means that the whole root module will be (un)frozen. Defaults to []\n include_bn_running_stats (bool): Whether to also (un)freeze the running statistics of batch norm 2d layers.\n Defaults to `True`.\n mode (bool): Whether to freeze (\"freeze\") or unfreeze (\"unfreeze\"). Defaults to `\"freeze\"`."}, {"type": "function", "name": "_freeze_unfreeze._add_submodule", "lines": [200, 205], "signature": "def _add_submodule(module, name, submodule):", "doc": ""}, {"type": "function", "name": "freeze", "lines": [222, 253], "signature": "def freeze(root_module, submodules=[], include_bn_running_stats=True):", "doc": "Freeze parameters of the specified modules and those of all their hierarchical descendants. This is done in place.\nArgs:\n root_module (nn.Module): Root module relative to which `submodules` are referenced.\n submodules (list[str]): List of modules for which the parameters will be frozen. They are to be provided as\n named modules relative to the root module (accessible via `root_module.named_modules()`). An empty list\n means that the whole root module will be frozen. Defaults to `[]`.\n include_bn_running_stats (bool): Whether to also freeze the running statistics of `BatchNorm2d` and\n `SyncBatchNorm` layers. These will be converted to `FrozenBatchNorm2d` in place. Hint: During fine tuning,\n it's good practice to freeze batch norm stats. And note that these are different to the affine parameters\n which are just normal PyTorch parameters. 
Defaults to `True`.\n\nHint: If you want to freeze batch norm ONLY, use `timm.utils.model.freeze_batch_norm_2d`.\n\nExamples::\n\n >>> model = timm.create_model('resnet18')\n >>> # Freeze up to and including layer2\n >>> submodules = [n for n, _ in model.named_children()]\n >>> print(submodules)\n ['conv1', 'bn1', 'act1', 'maxpool', 'layer1', 'layer2', 'layer3', 'layer4', 'global_pool', 'fc']\n >>> freeze(model, submodules[:submodules.index('layer2') + 1])\n >>> # Check for yourself that it works as expected\n >>> print(model.layer2[0].conv1.weight.requires_grad)\n False\n >>> print(model.layer3[0].conv1.weight.requires_grad)\n True\n >>> # Unfreeze\n >>> unfreeze(model)"}, {"type": "function", "name": "unfreeze", "lines": [256, 269], "signature": "def unfreeze(root_module, submodules=[], include_bn_running_stats=True):", "doc": "Unfreeze parameters of the specified modules and those of all their hierarchical descendants. This is done in place.\nArgs:\n root_module (nn.Module): Root module relative to which `submodules` are referenced.\n submodules (list[str]): List of submodules for which the parameters will be (un)frozen. They are to be provided\n as named modules relative to the root module (accessible via `root_module.named_modules()`). An empty\n list means that the whole root module will be unfrozen. Defaults to `[]`.\n include_bn_running_stats (bool): Whether to also unfreeze the running statistics of `FrozenBatchNorm2d` layers.\n These will be converted to `BatchNorm2d` in place. Defaults to `True`.\n\nSee example in docstring for `freeze`."}]}
| null |
["tests/test_utils.py::test_freeze_unfreeze"]
|
[]
|
9260cf517d6ba597b33ad1bbf7dbff9d0ccadd75
|
{"first_commit_time": 1632224802.0, "pr_title": "Helper to freeze specific model layers", "pr_body": "Currently in draft for visibility.\r\n\r\nAdds freeze/unfreeze to timm.utils. These let you provide a list of modules or module names for which to freeze parameters.\r\n\r\n- [x] Resolve the ugliness of what to do if user provides a batch norm layer directly. Right now this won't always work because we need to do convert it in place within the scope of the parent module. But we can't do this if we aren't explicitly given the parent module.\r\n- [x] Implement unfreezing of batch norm layers.\r\n - @rwightman any idea on this one? I would consider doing in place replacement of all FrozenBatchNorm2d layers to BatchNorm2d (so the exact reverse of `FrozenBatchNorm2d.convert_frozen_batchnorm`).", "pr_timeline": [{"time": 1632225284.0, "comment": "Will hopefully pick this up again next week!"}, {"time": 1633186669.0, "comment": "@rwightman ready to review\r\n\r\nThe only thing kind of bothering me about this one is that there is a BatchNorm1d in levit and the freeze only handles BatchNorm2d. Wasn't familiar enough with levit to know if that case needs to be handled. In any case, the docstrings are pretty clear that only BatchNorm2d is handled."}, {"time": 1633580268.0, "comment": "@alexander-soare thanks! I tested witha few models today, looks good"}], "issues": {}}
|
|
joke2k/faker
| 1,350
|
https://github.com/joke2k/faker/pull/1350
|
joke2k__faker-1350
|
[]
|
d6117b28b36dd4f3ead76026654000ba9f410c3c
|
diff --git a/faker/providers/misc/__init__.py b/faker/providers/misc/__init__.py
index cb129ce5b8..68ca213d37 100644
--- a/faker/providers/misc/__init__.py
+++ b/faker/providers/misc/__init__.py
@@ -2,6 +2,7 @@
import hashlib
import io
import json
+import re
import string
import tarfile
import uuid
@@ -399,12 +400,12 @@ def json(self,
data structures it is recommended to use the dictionary format.
Data Column Dictionary format:
- {'key name': 'definition'}}
+ {'key name': 'definition'}
- The definition can simply be the 'name:argument_group' of a provider
- method, or can also be string {{ tokens }} that are passed to python
- provider pystr_format() method for complex string generation.
- Argument Groups are used to pass arguments to the provider methods.
+ The definition can be 'provider', 'provider:argument_group', tokenized
+ 'string {{ provider:argument_group }}' that is passed to the python
+ provider method pystr_format() for generation, or a fixed '@word'.
+ Using Lists, Tuples, and Dicts as a definition for structure.
Example:
fake.set_arguments('top_half', {'min_value': 50, 'max_value': 100})
@@ -426,8 +427,8 @@ def json(self,
:return: Serialized JSON data
:rtype: str
- :sample: data_columns={'ID': 'pyint', 'Details': {'Name': 'name',
- 'Address': 'address'}}, num_rows=1
+ :sample: data_columns={'Spec': '@1.0.1', 'ID': 'pyint',
+ 'Details': {'Name': 'name', 'Address': 'address'}}, num_rows=2
:sample: data_columns={'Candidates': ['name', 'name', 'name']},
num_rows=1
:sample: data_columns=[('Name', 'name'), ('Points', 'pyint',
@@ -449,7 +450,7 @@ def process_list_structure(data: list) -> dict:
raise TypeError('Invalid arguments type. Must be a dictionary')
if name is None:
- return self._format_selection(definition, **kwargs)
+ return self._value_format_selection(definition, **kwargs)
if isinstance(definition, tuple):
entry[name] = process_list_structure(definition)
@@ -457,28 +458,27 @@ def process_list_structure(data: list) -> dict:
entry[name] = [process_list_structure([item])
for item in definition]
else:
- entry[name] = self._format_selection(definition, **kwargs)
+ entry[name] = self._value_format_selection(definition, **kwargs)
return entry
def process_dict_structure(data: dict) -> dict:
entry = {}
if isinstance(data, str):
- return self._format_selection(data)
+ return self._value_format_selection(data)
- if isinstance(data, (float, int)):
- return data
+ if isinstance(data, dict):
+ for name, definition in data.items():
+ if isinstance(definition, (tuple, list, set)):
+ entry[name] = [process_dict_structure(item)
+ for item in definition]
+ elif isinstance(definition, (dict, int, float, bool)):
+ entry[name] = process_dict_structure(definition)
+ else:
+ entry[name] = self._value_format_selection(definition)
+ return entry
- for name, definition in data.items():
- if isinstance(definition, (tuple, list)):
- entry[name] = [process_dict_structure(item)
- for item in definition]
- elif isinstance(definition, (dict, int, float)):
- entry[name] = process_dict_structure(definition)
- else:
- entry[name] = self._format_selection(definition)
-
- return entry
+ return data
def create_json_structure(data_columns) -> dict:
if isinstance(data_columns, dict):
@@ -510,9 +510,11 @@ def fixed_width(self,
Data Column List format
[('field width', 'definition', {'arguments'})]
- The definition can simply be the 'name:argument_group' of a provider
- method, or can also be string tokens that are passed to python
- provider method pystr_format() for data generation.
+ The definition can be 'provider', 'provider:argument_group', tokenized
+ 'string {{ provider:argument_group }}' that is passed to the python
+ provider method pystr_format() for generation, or a fixed '@word'.
+ Using Lists, Tuples, and Dicts as a definition for structure.
+
Argument Groups can be used to pass arguments to the provider methods,
but will override the arguments supplied in the tuple record.
@@ -530,7 +532,7 @@ def fixed_width(self,
:rtype: str
:sample: data_columns=[(20, 'name'), (3, 'pyint', {'min_value': 50,
- 'max_value': 100})], align='right', num_rows=1
+ 'max_value': 100})], align='right', num_rows=2
"""
default_data_columns = [
(20, 'name'),
@@ -553,24 +555,39 @@ def fixed_width(self,
if not isinstance(kwargs, dict):
raise TypeError('Invalid arguments type. Must be a dictionary')
- result = self._format_selection(definition, **kwargs)
+ result = self._value_format_selection(definition, **kwargs)
field = "{0:%s%s}" % (align_map.get(align, '<'), width)
row.append(field.format(result)[:width])
data.append(''.join(row))
return '\n'.join(data)
- def _format_selection(self, definition, **kwargs):
+ def _value_format_selection(self, definition, **kwargs):
"""
- Formats the string with PyStr Format if special characters are found.
+ Formats the string in different ways depending on it's contents.
+
+ The return can be the '@word' itself, a '{{ token }}' passed to PyStr,
+ or a 'provider:argument_group' format field that returns potentially
+ a non-string type.
+
+ This ensures that Numbers, Boolean types that are generated in the
+ JSON structures in there proper type, and not just strings.
"""
- if '{{' in definition and '}}' in definition:
+
+ # Check for PyStr first as complex strings may start with @
+ if re.match(r'.*\{\{.*\}\}.*', definition):
return self.generator.pystr_format(definition)
- if definition.count(':') == 1:
+ # Check for fixed @words that won't be generated
+ if re.match(r'^@.*', definition):
+ return definition.lstrip('@')
+
+ # Check if a argument group has been supplied
+ if re.match(r'^[a-zA-Z0-9_-]*:\w', definition):
definition, argument_group = definition.split(':')
arguments = self.generator.get_arguments(argument_group.strip())
return self.generator.format(definition.strip(), **arguments)
+ # Assume the string is refering to a provider
return self.generator.format(definition, **kwargs)
|
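To make the selection rules in `_value_format_selection` above concrete, here is a small standalone sketch that reuses the patch's regular expressions to show how a definition string is classified; it does not call into faker itself.

```python
import re

def classify(definition: str) -> str:
    # Order matters: token strings are checked before the fixed '@' form,
    # since a complex string could itself start with '@'.
    if re.match(r".*\{\{.*\}\}.*", definition):
        return "token string -> pystr_format()"
    if re.match(r"^@.*", definition):
        return "fixed value -> '@' stripped, returned as-is"
    if re.match(r"^[a-zA-Z0-9_-]*:\w", definition):
        return "provider with argument group -> format(provider, **group args)"
    return "plain provider name -> format(provider)"

for d in ("{{ name }} <{{ email }}>", "@1.0.1", "pyint:top_half", "address"):
    print(f"{d!r:32} {classify(d)}")
```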
diff --git a/tests/providers/test_misc.py b/tests/providers/test_misc.py
index d51d80c3f5..d54dfa2b0b 100644
--- a/tests/providers/test_misc.py
+++ b/tests/providers/test_misc.py
@@ -477,11 +477,13 @@ def test_json_multiple_rows(self, faker_with_foobar):
assert isinstance(json_data, list) and len(json_data) == 2
- def test_json_passthrough_int_float(self, faker_with_foobar):
+ def test_json_passthrough_values(self, faker_with_foobar):
kwargs = {
'data_columns': {
'item1': 1,
'item2': 1.0,
+ 'item3': True,
+ 'item4': '@fixed',
},
'num_rows': 1,
}
@@ -489,6 +491,8 @@ def test_json_passthrough_int_float(self, faker_with_foobar):
assert json_data['item1'] == 1
assert json_data['item2'] == 1.0
+ assert json_data['item3'] is True
+ assert json_data['item4'] == 'fixed'
def test_json_type_integrity_int(self, faker_with_foobar):
kwargs = {
| 2020-12-17T05:00:02
|
{}
|
{"faker/providers/misc/__init__.py": "import csv\nimport hashlib\nimport io\nimport json\nimport string\nimport tarfile\nimport uuid\nimport zipfile\n\nfrom .. import BaseProvider\n\nlocalized = True\n\ncsv.register_dialect('faker-csv', csv.excel, quoting=csv.QUOTE_ALL)\n\n\nclass Provider(BaseProvider):\n\n def boolean(self, chance_of_getting_true=50):\n \"\"\"Generate a random boolean value based on ``chance_of_getting_true``.\n\n :sample size=10: chance_of_getting_true=25\n :sample size=10: chance_of_getting_true=50\n :sample size=10: chance_of_getting_true=75\n \"\"\"\n return self.generator.random.randint(1, 100) <= chance_of_getting_true\n\n def null_boolean(self):\n \"\"\"Generate ``None``, ``True``, or ``False``, each with equal probability.\n\n :sample size=15:\n \"\"\"\n return {\n 0: None,\n 1: True,\n -1: False,\n }[self.generator.random.randint(-1, 1)]\n\n def binary(self, length=(1 * 1024 * 1024)):\n \"\"\"Generate a random binary blob of ``length`` bytes.\n\n :sample: length=64\n \"\"\"\n blob = [self.generator.random.randrange(256) for _ in range(length)]\n return bytes(blob)\n\n def md5(self, raw_output=False):\n \"\"\"Generate a random MD5 hash.\n\n If ``raw_output`` is ``False`` (default), a hexadecimal string representation of the MD5 hash\n will be returned. If ``True``, a ``bytes`` object representation will be returned instead.\n\n :sample: raw_output=False\n :sample: raw_output=True\n \"\"\"\n res = hashlib.md5(str(self.generator.random.random()).encode())\n if raw_output:\n return res.digest()\n return res.hexdigest()\n\n def sha1(self, raw_output=False):\n \"\"\"Generate a random SHA1 hash.\n\n If ``raw_output`` is ``False`` (default), a hexadecimal string representation of the SHA1 hash\n will be returned. If ``True``, a ``bytes`` object representation will be returned instead.\n\n :sample: raw_output=False\n :sample: raw_output=True\n \"\"\"\n res = hashlib.sha1(str(self.generator.random.random()).encode())\n if raw_output:\n return res.digest()\n return res.hexdigest()\n\n def sha256(self, raw_output=False):\n \"\"\"Generate a random SHA256 hash.\n\n If ``raw_output`` is ``False`` (default), a hexadecimal string representation of the SHA56 hash\n will be returned. If ``True``, a ``bytes`` object representation will be returned instead.\n\n :sample: raw_output=False\n :sample: raw_output=True\n \"\"\"\n res = hashlib.sha256(\n str(self.generator.random.random()).encode())\n if raw_output:\n return res.digest()\n return res.hexdigest()\n\n def uuid4(self, cast_to=str):\n \"\"\"Generate a random UUID4 object and cast it to another type if specified using a callable ``cast_to``.\n\n By default, ``cast_to`` is set to ``str``.\n\n May be called with ``cast_to=None`` to return a full-fledged ``UUID``.\n\n :sample:\n :sample: cast_to=None\n \"\"\"\n # Based on http://stackoverflow.com/q/41186818\n generated_uuid = uuid.UUID(int=self.generator.random.getrandbits(128), version=4)\n if cast_to is not None:\n generated_uuid = cast_to(generated_uuid)\n return generated_uuid\n\n def password(\n self,\n length=10,\n special_chars=True,\n digits=True,\n upper_case=True,\n lower_case=True):\n \"\"\"Generate a random password of the specified ``length``.\n\n The arguments ``special_chars``, ``digits``, ``upper_case``, and ``lower_case`` control\n what category of characters will appear in the generated password. 
If set to ``True``\n (default), at least one character from the corresponding category is guaranteed to appear.\n Special characters are characters from ``!@#$%^&*()_+``, digits are characters from\n ``0123456789``, and uppercase and lowercase characters are characters from the ASCII set of\n letters.\n\n :sample: length=12\n :sample: length=40, special_chars=False, upper_case=False\n \"\"\"\n choices = \"\"\n required_tokens = []\n if special_chars:\n required_tokens.append(\n self.generator.random.choice(\"!@#$%^&*()_+\"))\n choices += \"!@#$%^&*()_+\"\n if digits:\n required_tokens.append(self.generator.random.choice(string.digits))\n choices += string.digits\n if upper_case:\n required_tokens.append(\n self.generator.random.choice(string.ascii_uppercase))\n choices += string.ascii_uppercase\n if lower_case:\n required_tokens.append(\n self.generator.random.choice(string.ascii_lowercase))\n choices += string.ascii_lowercase\n\n assert len(\n required_tokens) <= length, \"Required length is shorter than required characters\"\n\n # Generate a first version of the password\n chars = self.random_choices(choices, length=length)\n\n # Pick some unique locations\n random_indexes = set()\n while len(random_indexes) < len(required_tokens):\n random_indexes.add(\n self.generator.random.randint(0, len(chars) - 1))\n\n # Replace them with the required characters\n for i, index in enumerate(random_indexes):\n chars[index] = required_tokens[i]\n\n return ''.join(chars)\n\n def zip(self, uncompressed_size=65536, num_files=1, min_file_size=4096, compression=None):\n \"\"\"Generate a bytes object containing a random valid zip archive file.\n\n The number and sizes of files contained inside the resulting archive can be controlled\n using the following arguments:\n\n - ``uncompressed_size`` - the total size of files before compression, 16 KiB by default\n - ``num_files`` - the number of files archived in resulting zip file, 1 by default\n - ``min_file_size`` - the minimum size of each file before compression, 4 KiB by default\n\n No compression is used by default, but setting ``compression`` to one of the values listed\n below will use the corresponding compression type.\n\n - ``'bzip2'`` or ``'bz2'`` for BZIP2\n - ``'lzma'`` or ``'xz'`` for LZMA\n - ``'deflate'``, ``'gzip'``, or ``'gz'`` for GZIP\n\n :sample: uncompressed_size=256, num_files=4, min_file_size=32\n :sample: uncompressed_size=256, num_files=32, min_file_size=4, compression='bz2'\n \"\"\"\n if any([\n not isinstance(num_files, int) or num_files <= 0,\n not isinstance(min_file_size, int) or min_file_size <= 0,\n not isinstance(uncompressed_size, int) or uncompressed_size <= 0,\n ]):\n raise ValueError(\n '`num_files`, `min_file_size`, and `uncompressed_size` must be positive integers',\n )\n if min_file_size * num_files > uncompressed_size:\n raise AssertionError(\n '`uncompressed_size` is smaller than the calculated minimum required size',\n )\n if compression in ['bzip2', 'bz2']:\n compression = zipfile.ZIP_BZIP2\n elif compression in ['lzma', 'xz']:\n compression = zipfile.ZIP_LZMA\n elif compression in ['deflate', 'gzip', 'gz']:\n compression = zipfile.ZIP_DEFLATED\n else:\n compression = zipfile.ZIP_STORED\n\n zip_buffer = io.BytesIO()\n remaining_size = uncompressed_size\n with zipfile.ZipFile(zip_buffer, mode='w', compression=compression) as zip_handle:\n for file_number in range(1, num_files + 1):\n filename = self.generator.pystr() + str(file_number)\n\n max_allowed_size = remaining_size - (num_files - file_number) * min_file_size\n 
if file_number < num_files:\n file_size = self.generator.random.randint(min_file_size, max_allowed_size)\n remaining_size = remaining_size - file_size\n else:\n file_size = remaining_size\n\n data = self.generator.binary(file_size)\n zip_handle.writestr(filename, data)\n return zip_buffer.getvalue()\n\n def tar(self, uncompressed_size=65536, num_files=1, min_file_size=4096, compression=None):\n \"\"\"Generate a bytes object containing a random valid tar file.\n\n The number and sizes of files contained inside the resulting archive can be controlled\n using the following arguments:\n\n - ``uncompressed_size`` - the total size of files before compression, 16 KiB by default\n - ``num_files`` - the number of files archived in resulting zip file, 1 by default\n - ``min_file_size`` - the minimum size of each file before compression, 4 KiB by default\n\n No compression is used by default, but setting ``compression`` to one of the values listed\n below will use the corresponding compression type.\n\n - ``'bzip2'`` or ``'bz2'`` for BZIP2\n - ``'lzma'`` or ``'xz'`` for LZMA\n - ``'gzip'`` or ``'gz'`` for GZIP\n\n :sample: uncompressed_size=256, num_files=4, min_file_size=32\n :sample: uncompressed_size=256, num_files=32, min_file_size=4, compression='bz2'\n \"\"\"\n if any([\n not isinstance(num_files, int) or num_files <= 0,\n not isinstance(min_file_size, int) or min_file_size <= 0,\n not isinstance(uncompressed_size, int) or uncompressed_size <= 0,\n ]):\n raise ValueError(\n '`num_files`, `min_file_size`, and `uncompressed_size` must be positive integers',\n )\n if min_file_size * num_files > uncompressed_size:\n raise AssertionError(\n '`uncompressed_size` is smaller than the calculated minimum required size',\n )\n if compression in ['gzip', 'gz']:\n mode = 'w:gz'\n elif compression in ['bzip2', 'bz2']:\n mode = 'w:bz2'\n elif compression in ['lzma', 'xz']:\n mode = 'w:xz'\n else:\n mode = 'w'\n\n tar_buffer = io.BytesIO()\n remaining_size = uncompressed_size\n with tarfile.open(mode=mode, fileobj=tar_buffer) as tar_handle:\n for file_number in range(1, num_files + 1):\n file_buffer = io.BytesIO()\n filename = self.generator.pystr() + str(file_number)\n\n max_allowed_size = remaining_size - (num_files - file_number) * min_file_size\n if file_number < num_files:\n file_size = self.generator.random.randint(min_file_size, max_allowed_size)\n remaining_size = remaining_size - file_size\n else:\n file_size = remaining_size\n\n tarinfo = tarfile.TarInfo(name=filename)\n data = self.generator.binary(file_size)\n file_buffer.write(data)\n tarinfo.size = len(file_buffer.getvalue())\n file_buffer.seek(0)\n tar_handle.addfile(tarinfo, file_buffer)\n file_buffer.close()\n return tar_buffer.getvalue()\n\n def dsv(self, dialect='faker-csv', header=None,\n data_columns=('{{name}}', '{{address}}'),\n num_rows=10, include_row_ids=False, **fmtparams):\n \"\"\"Generate random delimiter-separated values.\n\n This method's behavior share some similarities with ``csv.writer``. The ``dialect`` and\n ``**fmtparams`` arguments are the same arguments expected by ``csv.writer`` to control its\n behavior, and instead of expecting a file-like object to where output will be written, the\n output is controlled by additional keyword arguments and is returned as a string.\n\n The ``dialect`` argument defaults to ``'faker-csv'`` which is the name of a ``csv.excel``\n subclass with full quoting enabled.\n\n The ``header`` argument expects a list or a tuple of strings that will serve as the header row\n if supplied. 
The ``data_columns`` argument expects a list or a tuple of string tokens, and these\n string tokens will be passed to :meth:`pystr_format() <faker.providers.python.Provider.pystr_format>`\n for data generation. Argument Groups are used to pass arguments to the provider methods.\n Both ``header`` and ``data_columns`` must be of the same length.\n\n Example:\n fake.set_arguments('top_half', {'min_value': 50, 'max_value': 100})\n fake.dsv(data_columns=('{{ name }}', '{{ pyint:top_half }}'))\n\n The ``num_rows`` argument controls how many rows of data to generate, and the ``include_row_ids``\n argument may be set to ``True`` to include a sequential row ID column.\n\n :sample: dialect='excel', data_columns=('{{name}}', '{{address}}')\n :sample: dialect='excel-tab', data_columns=('{{name}}', '{{address}}'), include_row_ids=True\n :sample: data_columns=('{{name}}', '{{address}}'), num_rows=5, delimiter='$'\n \"\"\"\n\n if not isinstance(num_rows, int) or num_rows <= 0:\n raise ValueError('`num_rows` must be a positive integer')\n if not isinstance(data_columns, (list, tuple)):\n raise TypeError('`data_columns` must be a tuple or a list')\n if header is not None:\n if not isinstance(header, (list, tuple)):\n raise TypeError('`header` must be a tuple or a list')\n if len(header) != len(data_columns):\n raise ValueError('`header` and `data_columns` must have matching lengths')\n\n dsv_buffer = io.StringIO()\n writer = csv.writer(dsv_buffer, dialect=dialect, **fmtparams)\n\n if header:\n if include_row_ids:\n header = list(header)\n header.insert(0, 'ID')\n writer.writerow(header)\n\n for row_num in range(1, num_rows + 1):\n row = [self.generator.pystr_format(column) for column in data_columns]\n if include_row_ids:\n row.insert(0, str(row_num))\n\n writer.writerow(row)\n\n return dsv_buffer.getvalue()\n\n def csv(self, header=None, data_columns=('{{name}}', '{{address}}'), num_rows=10, include_row_ids=False):\n \"\"\"Generate random comma-separated values.\n\n For more information on the different arguments of this method, please refer to\n :meth:`dsv() <faker.providers.misc.Provider.dsv>` which is used under the hood.\n\n :sample: data_columns=('{{name}}', '{{address}}'), num_rows=10, include_row_ids=False\n :sample: header=('Name', 'Address', 'Favorite Color'),\n data_columns=('{{name}}', '{{address}}', '{{safe_color_name}}'),\n num_rows=10, include_row_ids=True\n \"\"\"\n return self.dsv(\n header=header, data_columns=data_columns, num_rows=num_rows,\n include_row_ids=include_row_ids, delimiter=',',\n )\n\n def tsv(self, header=None, data_columns=('{{name}}', '{{address}}'), num_rows=10, include_row_ids=False):\n \"\"\"Generate random tab-separated values.\n\n For more information on the different arguments of this method, please refer to\n :meth:`dsv() <faker.providers.misc.Provider.dsv>` which is used under the hood.\n\n :sample: data_columns=('{{name}}', '{{address}}'), num_rows=10, include_row_ids=False\n :sample: header=('Name', 'Address', 'Favorite Color'),\n data_columns=('{{name}}', '{{address}}', '{{safe_color_name}}'),\n num_rows=10, include_row_ids=True\n \"\"\"\n return self.dsv(\n header=header, data_columns=data_columns, num_rows=num_rows,\n include_row_ids=include_row_ids, delimiter='\\t',\n )\n\n def psv(self, header=None, data_columns=('{{name}}', '{{address}}'), num_rows=10, include_row_ids=False):\n \"\"\"Generate random pipe-separated values.\n\n For more information on the different arguments of this method, please refer to\n :meth:`dsv() <faker.providers.misc.Provider.dsv>` 
which is used under the hood.\n\n :sample: data_columns=('{{name}}', '{{address}}'), num_rows=10, include_row_ids=False\n :sample: header=('Name', 'Address', 'Favorite Color'),\n data_columns=('{{name}}', '{{address}}', '{{safe_color_name}}'),\n num_rows=10, include_row_ids=True\n \"\"\"\n return self.dsv(\n header=header, data_columns=data_columns, num_rows=num_rows,\n include_row_ids=include_row_ids, delimiter='|',\n )\n\n def json(self,\n data_columns: list = None,\n num_rows: int = 10,\n indent: int = None) -> str:\n \"\"\"\n Generate random JSON structure values.\n\n Using a dictionary or list of records that is passed as ``data_columns``,\n define the structure that is used to build JSON structures. For complex\n data structures it is recommended to use the dictionary format.\n\n Data Column Dictionary format:\n {'key name': 'definition'}}\n\n The definition can simply be the 'name:argument_group' of a provider\n method, or can also be string {{ tokens }} that are passed to python\n provider pystr_format() method for complex string generation.\n Argument Groups are used to pass arguments to the provider methods.\n\n Example:\n fake.set_arguments('top_half', {'min_value': 50, 'max_value': 100})\n fake.json(data_columns={'Name': 'name', 'Score': 'pyint:top_half'})\n\n Data Column List format:\n [('key name', 'definition', {'arguments'})]\n\n With the list format the definition can be a list of records, to create\n a list within the structure data. For literal entries within the list,\n set the 'field_name' to None.\n\n :param data_columns: specification for the data structure\n :type data_columns: dict\n :param num_rows: number of rows the returned\n :type num_rows: int\n :param indent: number of spaces to indent the fields\n :type indent: int\n :return: Serialized JSON data\n :rtype: str\n\n :sample: data_columns={'ID': 'pyint', 'Details': {'Name': 'name',\n 'Address': 'address'}}, num_rows=1\n :sample: data_columns={'Candidates': ['name', 'name', 'name']},\n num_rows=1\n :sample: data_columns=[('Name', 'name'), ('Points', 'pyint',\n {'min_value': 50, 'max_value': 100})], num_rows=1\n \"\"\"\n default_data_columns = {\n 'name': '{{name}}',\n 'residency': '{{address}}',\n }\n data_columns = data_columns if data_columns else default_data_columns\n\n def process_list_structure(data: list) -> dict:\n entry = {}\n\n for name, definition, *arguments in data:\n kwargs = arguments[0] if arguments else {}\n\n if not isinstance(kwargs, dict):\n raise TypeError('Invalid arguments type. 
Must be a dictionary')\n\n if name is None:\n return self._format_selection(definition, **kwargs)\n\n if isinstance(definition, tuple):\n entry[name] = process_list_structure(definition)\n elif isinstance(definition, (list, set)):\n entry[name] = [process_list_structure([item])\n for item in definition]\n else:\n entry[name] = self._format_selection(definition, **kwargs)\n return entry\n\n def process_dict_structure(data: dict) -> dict:\n entry = {}\n\n if isinstance(data, str):\n return self._format_selection(data)\n\n if isinstance(data, (float, int)):\n return data\n\n for name, definition in data.items():\n if isinstance(definition, (tuple, list)):\n entry[name] = [process_dict_structure(item)\n for item in definition]\n elif isinstance(definition, (dict, int, float)):\n entry[name] = process_dict_structure(definition)\n else:\n entry[name] = self._format_selection(definition)\n\n return entry\n\n def create_json_structure(data_columns) -> dict:\n if isinstance(data_columns, dict):\n return process_dict_structure(data_columns)\n\n if isinstance(data_columns, list):\n return process_list_structure(data_columns)\n\n raise TypeError('Invalid data_columns type. Must be a dictionary or list')\n\n if num_rows == 1:\n return json.dumps(create_json_structure(data_columns), indent=indent)\n\n data = [create_json_structure(data_columns) for _ in range(num_rows)]\n return json.dumps(data, indent=indent)\n\n def fixed_width(self,\n data_columns: list = None,\n num_rows: int = 10,\n align: str = 'left') -> str:\n \"\"\"\n Generate random fixed width values.\n\n Using a list of tuple records that is passed as ``data_columns``, that\n defines the structure that will be generated. Arguments within the\n record are provider specific, and should be a dictionary that will be\n passed to the provider method.\n\n Data Column List format\n [('field width', 'definition', {'arguments'})]\n\n The definition can simply be the 'name:argument_group' of a provider\n method, or can also be string tokens that are passed to python\n provider method pystr_format() for data generation.\n Argument Groups can be used to pass arguments to the provider methods,\n but will override the arguments supplied in the tuple record.\n\n Example:\n fake.set_arguments('top_half', {'min_value': 50, 'max_value': 100})\n fake.fixed_width(data_columns=[(20, 'name'), (3, 'pyint:top_half')])\n\n :param data_columns: specification for the data structure\n :type data_columns: list\n :param num_rows: number of rows the generator will yield\n :type num_rows: int\n :param align: positioning of the value. (left, middle, right)\n :type align: str\n :return: Serialized Fixed Width data\n :rtype: str\n\n :sample: data_columns=[(20, 'name'), (3, 'pyint', {'min_value': 50,\n 'max_value': 100})], align='right', num_rows=1\n \"\"\"\n default_data_columns = [\n (20, 'name'),\n (3, 'pyint', {'max_value': 20}),\n ]\n data_columns = data_columns if data_columns else default_data_columns\n align_map = {\n 'left': '<',\n 'middle': '^',\n 'right': '>',\n }\n data = []\n\n for _ in range(num_rows):\n row = []\n\n for width, definition, *arguments in data_columns:\n kwargs = arguments[0] if arguments else {}\n\n if not isinstance(kwargs, dict):\n raise TypeError('Invalid arguments type. 
Must be a dictionary')\n\n result = self._format_selection(definition, **kwargs)\n field = \"{0:%s%s}\" % (align_map.get(align, '<'), width)\n row.append(field.format(result)[:width])\n\n data.append(''.join(row))\n return '\\n'.join(data)\n\n def _format_selection(self, definition, **kwargs):\n \"\"\"\n Formats the string with PyStr Format if special characters are found.\n \"\"\"\n if '{{' in definition and '}}' in definition:\n return self.generator.pystr_format(definition)\n\n if definition.count(':') == 1:\n definition, argument_group = definition.split(':')\n arguments = self.generator.get_arguments(argument_group.strip())\n\n return self.generator.format(definition.strip(), **arguments)\n\n return self.generator.format(definition, **kwargs)\n"}
|
{"faker/providers/misc/__init__.py": [{"type": "function", "name": "Provider._value_format_selection", "lines": [565, 593], "signature": "def _value_format_selection(self, definition, **kwargs):", "doc": "Formats the string in different ways depending on it's contents.\n\nThe return can be the '@word' itself, a '{{ token }}' passed to PyStr,\nor a 'provider:argument_group' format field that returns potentially\na non-string type.\n\nThis ensures that Numbers, Boolean types that are generated in the\nJSON structures in there proper type, and not just strings."}]}
| null |
["tests/providers/test_misc.py::TestMiscProvider::test_json_passthrough_values"]
|
["tests/providers/test_misc.py::TestMiscProvider::test_uuid4_str", "tests/providers/test_misc.py::TestMiscProvider::test_uuid4_int", "tests/providers/test_misc.py::TestMiscProvider::test_uuid4_uuid_object", "tests/providers/test_misc.py::TestMiscProvider::test_uuid4_seedability", "tests/providers/test_misc.py::TestMiscProvider::test_zip_invalid_file", "tests/providers/test_misc.py::TestMiscProvider::test_zip_one_byte_undersized", "tests/providers/test_misc.py::TestMiscProvider::test_zip_exact_minimum_size", "tests/providers/test_misc.py::TestMiscProvider::test_zip_over_minimum_size", "tests/providers/test_misc.py::TestMiscProvider::test_zip_compression_py3", "tests/providers/test_misc.py::TestMiscProvider::test_tar_invalid_file", "tests/providers/test_misc.py::TestMiscProvider::test_tar_one_byte_undersized", "tests/providers/test_misc.py::TestMiscProvider::test_tar_exact_minimum_size", "tests/providers/test_misc.py::TestMiscProvider::test_tar_over_minimum_size", "tests/providers/test_misc.py::TestMiscProvider::test_tar_compression_py3", "tests/providers/test_misc.py::TestMiscProvider::test_dsv_with_invalid_values", "tests/providers/test_misc.py::TestMiscProvider::test_dsv_no_header", "tests/providers/test_misc.py::TestMiscProvider::test_dsv_with_valid_header", "tests/providers/test_misc.py::TestMiscProvider::test_dsv_with_row_ids", "tests/providers/test_misc.py::TestMiscProvider::test_dsv_data_columns", "tests/providers/test_misc.py::TestMiscProvider::test_dsv_csvwriter_kwargs", "tests/providers/test_misc.py::TestMiscProvider::test_csv_helper_method", "tests/providers/test_misc.py::TestMiscProvider::test_tsv_helper_method", "tests/providers/test_misc.py::TestMiscProvider::test_psv_helper_method", "tests/providers/test_misc.py::TestMiscProvider::test_json_with_arguments", "tests/providers/test_misc.py::TestMiscProvider::test_json_multiple_rows", "tests/providers/test_misc.py::TestMiscProvider::test_json_type_integrity_int", "tests/providers/test_misc.py::TestMiscProvider::test_json_type_integrity_float", "tests/providers/test_misc.py::TestMiscProvider::test_json_invalid_data_columns", "tests/providers/test_misc.py::TestMiscProvider::test_json_list_format_invalid_arguments_type", "tests/providers/test_misc.py::TestMiscProvider::test_json_list_format_nested_list_of_values", "tests/providers/test_misc.py::TestMiscProvider::test_json_list_format_nested_list_of_objects", "tests/providers/test_misc.py::TestMiscProvider::test_json_list_format_nested_objects", "tests/providers/test_misc.py::TestMiscProvider::test_json_dict_format_nested_list_of_values", "tests/providers/test_misc.py::TestMiscProvider::test_json_dict_format_nested_list_of_objects", "tests/providers/test_misc.py::TestMiscProvider::test_json_dict_format_nested_objects", "tests/providers/test_misc.py::TestMiscProvider::test_fixed_width_with_arguments", "tests/providers/test_misc.py::TestMiscProvider::test_fixed_width_invalid_arguments_type", "tests/providers/test_misc.py::TestMiscProvider::test_md5", "tests/providers/test_misc.py::TestMiscProvider::test_sha1", "tests/providers/test_misc.py::TestMiscProvider::test_sha256"]
|
6edfdbf6ae90b0153309e3bf066aa3b2d16494a7
|
{"first_commit_time": 1608179533.0, "pr_title": "FW and JSON Improvements", "pr_body": "### What does this changes\r\nThe Format Value Selector was a bit clunky, and needed some updating. It now allows for strings to be fixed or pinned to a static value by using '@', to begin with.\r\n\r\n**Example:**\r\n```python\r\nfake.json({\"Spec\": \"@2.1b\", \"Source\": \"@host.example.com\", \"ID\": \"pyint:min-max\"})\r\n```\r\nThis will fix the values Spec, and Source, so they are not generated, but the ID will using the parameter group 'min-max'\r\n\r\n### What was wrong\r\nBecause values in JSON can be non-string types, like integer, float, and boolean, it's important that PyStr selected only when needed. This is the role of the value format selector.\r\n\r\n**Example:**\r\n```python\r\nfake.json({\"ID\": \"{{ pyint }}\"}) # This returns a string value with quotes in the JSON\r\nfake.json({\"ID\": \"pyint\"}) # This returns a integer value without quotes in the JSON.\r\n```\r\nThe down fall of this though is when you need data with fixed values in some fields for things like event specification values to a Data Stream. The JSON provider lacked an easy and obvious way to do this.", "pr_timeline": [{"time": 1609344131.0, "comment": "Thank you! \u2728 "}], "issues": {}}
|
|
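A minimal usage sketch of the '@' passthrough and argument-group behaviour described in the record above, assuming a Faker build that includes this change; the 'top_half' argument group name and the '@1.0.1' pinned value are taken directly from the provider docstring samples quoted in this row.

```python
from faker import Faker

fake = Faker()
fake.set_arguments("top_half", {"min_value": 50, "max_value": 100})

# 'Spec' and 'Source' are pinned via the '@' prefix and emitted verbatim,
# while 'ID' is generated by pyint using the 'top_half' argument group,
# so it serializes as a bare integer rather than a quoted string.
print(fake.json(
    data_columns={"Spec": "@1.0.1", "Source": "@host.example.com", "ID": "pyint:top_half"},
    num_rows=2,
))
```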
joke2k/faker
| 1,783
|
https://github.com/joke2k/faker/pull/1783
|
joke2k__faker-1783
|
[]
|
348708251a1093fb83c5b7cf83c1fb210a62125d
|
diff --git a/faker/providers/misc/__init__.py b/faker/providers/misc/__init__.py
index b37953ab9d..5fd22ea0f6 100644
--- a/faker/providers/misc/__init__.py
+++ b/faker/providers/misc/__init__.py
@@ -488,6 +488,21 @@ def psv(
delimiter="|",
)
+ def json_bytes(
+ self,
+ data_columns: Optional[List] = None,
+ num_rows: int = 10,
+ indent: Optional[int] = None,
+ cls: Optional[Type[json.JSONEncoder]] = None,
+ ) -> bytes:
+ """
+ Generate random JSON structure and return as bytes.
+
+ For more information on the different arguments of this method, refer to
+ :meth:`json() <faker.providers.misc.Provider.json>` which is used under the hood.
+ """
+ return self.json(data_columns=data_columns, num_rows=num_rows, indent=indent, cls=cls).encode()
+
def json(
self,
data_columns: Optional[List] = None,
|
diff --git a/tests/providers/test_misc.py b/tests/providers/test_misc.py
index 695defe7f2..f50822f39a 100644
--- a/tests/providers/test_misc.py
+++ b/tests/providers/test_misc.py
@@ -697,6 +697,13 @@ def test_json_type_integrity_datetime_no_encoder(self, faker_with_foobar):
with pytest.raises(TypeError):
faker_with_foobar.json(**kwargs)
+ def test_json_bytes(self, faker_with_foobar):
+ kwargs = {"data_columns": {"item1": "foo_bar"}, "num_rows": 1}
+ json_data_bytes = faker_with_foobar.json_bytes(**kwargs)
+ assert isinstance(json_data_bytes, bytes)
+ json_data = json.loads(json_data_bytes.decode())
+ assert json_data["item1"] == "FooBar"
+
def test_fixed_width_with_arguments(self, faker_with_foobar):
kwargs = {
"data_columns": [
| 2023-01-12T11:33:31
|
{}
|
{"faker/providers/misc/__init__.py": "import csv\nimport hashlib\nimport io\nimport json\nimport os\nimport re\nimport string\nimport tarfile\nimport uuid\nimport zipfile\n\nfrom typing import Any, Callable, Dict, List, Optional, Sequence, Set, Tuple, Type, Union\n\nfrom faker.exceptions import UnsupportedFeature\n\nfrom .. import BaseProvider\n\nlocalized = True\n\ncsv.register_dialect(\"faker-csv\", csv.excel, quoting=csv.QUOTE_ALL)\n\n\nclass Provider(BaseProvider):\n def boolean(self, chance_of_getting_true: int = 50) -> bool:\n \"\"\"Generate a random boolean value based on ``chance_of_getting_true``.\n\n :sample: chance_of_getting_true=25\n :sample: chance_of_getting_true=50\n :sample: chance_of_getting_true=75\n \"\"\"\n return self.generator.random.randint(1, 100) <= chance_of_getting_true\n\n def null_boolean(self) -> Optional[bool]:\n \"\"\"Generate ``None``, ``True``, or ``False``, each with equal probability.\"\"\"\n\n return {\n 0: None,\n 1: True,\n -1: False,\n }[self.generator.random.randint(-1, 1)]\n\n def binary(self, length: int = (1 * 1024 * 1024)) -> bytes:\n \"\"\"Generate a random binary blob of ``length`` bytes.\n\n If this faker instance has been seeded, performance will be signficiantly reduced, to conform\n to the seeding.\n\n :sample: length=64\n \"\"\"\n # If the generator has already been seeded, urandom can't be used\n if self.generator._is_seeded:\n blob = [self.generator.random.randrange(256) for _ in range(length)]\n return bytes(blob)\n\n # Generator is unseeded anyway, just use urandom\n return os.urandom(length)\n\n def md5(self, raw_output: bool = False) -> Union[bytes, str]:\n \"\"\"Generate a random MD5 hash.\n\n If ``raw_output`` is ``False`` (default), a hexadecimal string representation of the MD5 hash\n will be returned. If ``True``, a ``bytes`` object representation will be returned instead.\n\n :sample: raw_output=False\n :sample: raw_output=True\n \"\"\"\n res: hashlib._Hash = hashlib.md5(str(self.generator.random.random()).encode())\n if raw_output:\n return res.digest()\n return res.hexdigest()\n\n def sha1(self, raw_output: bool = False) -> Union[bytes, str]:\n \"\"\"Generate a random SHA-1 hash.\n\n If ``raw_output`` is ``False`` (default), a hexadecimal string representation of the SHA-1 hash\n will be returned. If ``True``, a ``bytes`` object representation will be returned instead.\n\n :sample: raw_output=False\n :sample: raw_output=True\n \"\"\"\n res: hashlib._Hash = hashlib.sha1(str(self.generator.random.random()).encode())\n if raw_output:\n return res.digest()\n return res.hexdigest()\n\n def sha256(self, raw_output: bool = False) -> Union[bytes, str]:\n \"\"\"Generate a random SHA-256 hash.\n\n If ``raw_output`` is ``False`` (default), a hexadecimal string representation of the SHA-256 hash\n will be returned. 
If ``True``, a ``bytes`` object representation will be returned instead.\n\n :sample: raw_output=False\n :sample: raw_output=True\n \"\"\"\n res: hashlib._Hash = hashlib.sha256(str(self.generator.random.random()).encode())\n if raw_output:\n return res.digest()\n return res.hexdigest()\n\n def uuid4(\n self,\n cast_to: Optional[Union[Callable[[uuid.UUID], str], Callable[[uuid.UUID], bytes]]] = str,\n ) -> Union[bytes, str, uuid.UUID]:\n \"\"\"Generate a random UUID4 object and cast it to another type if specified using a callable ``cast_to``.\n\n By default, ``cast_to`` is set to ``str``.\n\n May be called with ``cast_to=None`` to return a full-fledged ``UUID``.\n\n :sample:\n :sample: cast_to=None\n \"\"\"\n # Based on http://stackoverflow.com/q/41186818\n generated_uuid: uuid.UUID = uuid.UUID(int=self.generator.random.getrandbits(128), version=4)\n if cast_to is not None:\n return cast_to(generated_uuid)\n return generated_uuid\n\n def password(\n self,\n length: int = 10,\n special_chars: bool = True,\n digits: bool = True,\n upper_case: bool = True,\n lower_case: bool = True,\n ) -> str:\n \"\"\"Generate a random password of the specified ``length``.\n\n The arguments ``special_chars``, ``digits``, ``upper_case``, and ``lower_case`` control\n what category of characters will appear in the generated password. If set to ``True``\n (default), at least one character from the corresponding category is guaranteed to appear.\n Special characters are characters from ``!@#$%^&*()_+``, digits are characters from\n ``0123456789``, and uppercase and lowercase characters are characters from the ASCII set of\n letters.\n\n :sample: length=12\n :sample: length=40, special_chars=False, upper_case=False\n \"\"\"\n choices = \"\"\n required_tokens = []\n if special_chars:\n required_tokens.append(self.generator.random.choice(\"!@#$%^&*()_+\"))\n choices += \"!@#$%^&*()_+\"\n if digits:\n required_tokens.append(self.generator.random.choice(string.digits))\n choices += string.digits\n if upper_case:\n required_tokens.append(self.generator.random.choice(string.ascii_uppercase))\n choices += string.ascii_uppercase\n if lower_case:\n required_tokens.append(self.generator.random.choice(string.ascii_lowercase))\n choices += string.ascii_lowercase\n\n assert len(required_tokens) <= length, \"Required length is shorter than required characters\"\n\n # Generate a first version of the password\n chars: str = self.random_choices(choices, length=length) # type: ignore\n\n # Pick some unique locations\n random_indexes: Set[int] = set()\n while len(random_indexes) < len(required_tokens):\n random_indexes.add(self.generator.random.randint(0, len(chars) - 1))\n\n # Replace them with the required characters\n for i, index in enumerate(random_indexes):\n chars[index] = required_tokens[i] # type: ignore\n\n return \"\".join(chars)\n\n def zip(\n self,\n uncompressed_size: int = 65536,\n num_files: int = 1,\n min_file_size: int = 4096,\n compression: Optional[str] = None,\n ) -> bytes:\n \"\"\"Generate a bytes object containing a random valid zip archive file.\n\n The number and sizes of files contained inside the resulting archive can be controlled\n using the following arguments:\n\n - ``uncompressed_size`` - the total size of files before compression, 16 KiB by default\n - ``num_files`` - the number of files archived in resulting zip file, 1 by default\n - ``min_file_size`` - the minimum size of each file before compression, 4 KiB by default\n\n No compression is used by default, but setting ``compression`` to one of the 
values listed\n below will use the corresponding compression type.\n\n - ``'bzip2'`` or ``'bz2'`` for BZIP2\n - ``'lzma'`` or ``'xz'`` for LZMA\n - ``'deflate'``, ``'gzip'``, or ``'gz'`` for GZIP\n\n :sample: uncompressed_size=256, num_files=4, min_file_size=32\n :sample: uncompressed_size=256, num_files=32, min_file_size=4, compression='bz2'\n \"\"\"\n if any(\n [\n not isinstance(num_files, int) or num_files <= 0,\n not isinstance(min_file_size, int) or min_file_size <= 0,\n not isinstance(uncompressed_size, int) or uncompressed_size <= 0,\n ]\n ):\n raise ValueError(\n \"`num_files`, `min_file_size`, and `uncompressed_size` must be positive integers\",\n )\n if min_file_size * num_files > uncompressed_size:\n raise AssertionError(\n \"`uncompressed_size` is smaller than the calculated minimum required size\",\n )\n if compression in [\"bzip2\", \"bz2\"]:\n compression_ = zipfile.ZIP_BZIP2\n elif compression in [\"lzma\", \"xz\"]:\n compression_ = zipfile.ZIP_LZMA\n elif compression in [\"deflate\", \"gzip\", \"gz\"]:\n compression_ = zipfile.ZIP_DEFLATED\n else:\n compression_ = zipfile.ZIP_STORED\n\n zip_buffer = io.BytesIO()\n remaining_size = uncompressed_size\n with zipfile.ZipFile(zip_buffer, mode=\"w\", compression=compression_) as zip_handle:\n for file_number in range(1, num_files + 1):\n filename = self.generator.pystr() + str(file_number)\n\n max_allowed_size = remaining_size - (num_files - file_number) * min_file_size\n if file_number < num_files:\n file_size = self.generator.random.randint(min_file_size, max_allowed_size)\n remaining_size = remaining_size - file_size\n else:\n file_size = remaining_size\n\n data = self.generator.binary(file_size)\n zip_handle.writestr(filename, data)\n return zip_buffer.getvalue()\n\n def tar(\n self,\n uncompressed_size: int = 65536,\n num_files: int = 1,\n min_file_size: int = 4096,\n compression: Optional[str] = None,\n ) -> bytes:\n \"\"\"Generate a bytes object containing a random valid tar file.\n\n The number and sizes of files contained inside the resulting archive can be controlled\n using the following arguments:\n\n - ``uncompressed_size`` - the total size of files before compression, 16 KiB by default\n - ``num_files`` - the number of files archived in resulting zip file, 1 by default\n - ``min_file_size`` - the minimum size of each file before compression, 4 KiB by default\n\n No compression is used by default, but setting ``compression`` to one of the values listed\n below will use the corresponding compression type.\n\n - ``'bzip2'`` or ``'bz2'`` for BZIP2\n - ``'lzma'`` or ``'xz'`` for LZMA\n - ``'gzip'`` or ``'gz'`` for GZIP\n\n :sample: uncompressed_size=256, num_files=4, min_file_size=32\n :sample: uncompressed_size=256, num_files=32, min_file_size=4, compression='bz2'\n \"\"\"\n if any(\n [\n not isinstance(num_files, int) or num_files <= 0,\n not isinstance(min_file_size, int) or min_file_size <= 0,\n not isinstance(uncompressed_size, int) or uncompressed_size <= 0,\n ]\n ):\n raise ValueError(\n \"`num_files`, `min_file_size`, and `uncompressed_size` must be positive integers\",\n )\n if min_file_size * num_files > uncompressed_size:\n raise AssertionError(\n \"`uncompressed_size` is smaller than the calculated minimum required size\",\n )\n if compression in [\"gzip\", \"gz\"]:\n mode = \"w:gz\"\n elif compression in [\"bzip2\", \"bz2\"]:\n mode = \"w:bz2\"\n elif compression in [\"lzma\", \"xz\"]:\n mode = \"w:xz\"\n else:\n mode = \"w\"\n\n tar_buffer = io.BytesIO()\n remaining_size = uncompressed_size\n with 
tarfile.open(mode=mode, fileobj=tar_buffer) as tar_handle:\n for file_number in range(1, num_files + 1):\n file_buffer = io.BytesIO()\n filename = self.generator.pystr() + str(file_number)\n\n max_allowed_size = remaining_size - (num_files - file_number) * min_file_size\n if file_number < num_files:\n file_size = self.generator.random.randint(min_file_size, max_allowed_size)\n remaining_size = remaining_size - file_size\n else:\n file_size = remaining_size\n\n tarinfo = tarfile.TarInfo(name=filename)\n data = self.generator.binary(file_size)\n file_buffer.write(data)\n tarinfo.size = len(file_buffer.getvalue())\n file_buffer.seek(0)\n tar_handle.addfile(tarinfo, file_buffer)\n file_buffer.close()\n return tar_buffer.getvalue()\n\n def image(\n self,\n size: Tuple[int, int] = (256, 256),\n image_format: str = \"png\",\n hue: Optional[Union[int, Sequence[int], str]] = None,\n luminosity: Optional[str] = None,\n ) -> bytes:\n \"\"\"Generate an image and draw a random polygon on it using the Python Image Library.\n Without it installed, this provider won't be functional. Returns the bytes representing\n the image in a given format.\n\n The argument ``size`` must be a 2-tuple containing (width, height) in pixels. Defaults to 256x256.\n\n The argument ``image_format`` can be any valid format to the underlying library like ``'tiff'``,\n ``'jpeg'``, ``'pdf'`` or ``'png'`` (default). Note that some formats need present system libraries\n prior to building the Python Image Library.\n Refer to https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html for details.\n\n The arguments ``hue`` and ``luminosity`` are the same as in the color provider and are simply forwarded to\n it to generate both the background and the shape colors. Therefore, you can ask for a \"dark blue\" image, etc.\n\n :sample: size=(2, 2), hue='purple', luminosity='bright', image_format='pdf'\n :sample: size=(16, 16), hue=[90,270], image_format='ico'\n \"\"\"\n try:\n import PIL.Image\n import PIL.ImageDraw\n except ImportError:\n raise UnsupportedFeature(\"`image` requires the `Pillow` python library.\", \"image\")\n\n (width, height) = size\n image = PIL.Image.new(\"RGB\", size, self.generator.color(hue=hue, luminosity=luminosity))\n draw = PIL.ImageDraw.Draw(image)\n draw.polygon(\n [(self.random_int(0, width), self.random_int(0, height)) for _ in range(self.random_int(3, 12))],\n fill=self.generator.color(hue=hue, luminosity=luminosity),\n outline=self.generator.color(hue=hue, luminosity=luminosity),\n )\n with io.BytesIO() as fobj:\n image.save(fobj, format=image_format)\n fobj.seek(0)\n return fobj.read()\n\n def dsv(\n self,\n dialect: str = \"faker-csv\",\n header: Optional[Sequence[str]] = None,\n data_columns: Tuple[str, str] = (\"{{name}}\", \"{{address}}\"),\n num_rows: int = 10,\n include_row_ids: bool = False,\n **fmtparams: Any,\n ) -> str:\n \"\"\"Generate random delimiter-separated values.\n\n This method's behavior share some similarities with ``csv.writer``. The ``dialect`` and\n ``**fmtparams`` arguments are the same arguments expected by ``csv.writer`` to control its\n behavior, and instead of expecting a file-like object to where output will be written, the\n output is controlled by additional keyword arguments and is returned as a string.\n\n The ``dialect`` argument defaults to ``'faker-csv'`` which is the name of a ``csv.excel``\n subclass with full quoting enabled.\n\n The ``header`` argument expects a list or a tuple of strings that will serve as the header row\n if supplied. 
The ``data_columns`` argument expects a list or a tuple of string tokens, and these\n string tokens will be passed to :meth:`pystr_format() <faker.providers.python.Provider.pystr_format>`\n for data generation. Argument Groups are used to pass arguments to the provider methods.\n Both ``header`` and ``data_columns`` must be of the same length.\n\n Example:\n fake.set_arguments('top_half', {'min_value': 50, 'max_value': 100})\n fake.dsv(data_columns=('{{ name }}', '{{ pyint:top_half }}'))\n\n The ``num_rows`` argument controls how many rows of data to generate, and the ``include_row_ids``\n argument may be set to ``True`` to include a sequential row ID column.\n\n :sample: dialect='excel', data_columns=('{{name}}', '{{address}}')\n :sample: dialect='excel-tab', data_columns=('{{name}}', '{{address}}'), include_row_ids=True\n :sample: data_columns=('{{name}}', '{{address}}'), num_rows=5, delimiter='$'\n \"\"\"\n\n if not isinstance(num_rows, int) or num_rows <= 0:\n raise ValueError(\"`num_rows` must be a positive integer\")\n if not isinstance(data_columns, (list, tuple)):\n raise TypeError(\"`data_columns` must be a tuple or a list\")\n if header is not None:\n if not isinstance(header, (list, tuple)):\n raise TypeError(\"`header` must be a tuple or a list\")\n if len(header) != len(data_columns):\n raise ValueError(\"`header` and `data_columns` must have matching lengths\")\n\n dsv_buffer = io.StringIO()\n writer = csv.writer(dsv_buffer, dialect=dialect, **fmtparams)\n\n if header:\n if include_row_ids:\n header = list(header)\n header.insert(0, \"ID\")\n writer.writerow(header)\n\n for row_num in range(1, num_rows + 1):\n row = [self.generator.pystr_format(column) for column in data_columns]\n if include_row_ids:\n row.insert(0, str(row_num))\n\n writer.writerow(row)\n\n return dsv_buffer.getvalue()\n\n def csv(\n self,\n header: Optional[Sequence[str]] = None,\n data_columns: Tuple[str, str] = (\"{{name}}\", \"{{address}}\"),\n num_rows: int = 10,\n include_row_ids: bool = False,\n ) -> str:\n \"\"\"Generate random comma-separated values.\n\n For more information on the different arguments of this method, please refer to\n :meth:`dsv() <faker.providers.misc.Provider.dsv>` which is used under the hood.\n\n :sample: data_columns=('{{name}}', '{{address}}'), num_rows=10, include_row_ids=False\n :sample: header=('Name', 'Address', 'Favorite Color'),\n data_columns=('{{name}}', '{{address}}', '{{safe_color_name}}'),\n num_rows=10, include_row_ids=True\n \"\"\"\n return self.dsv(\n header=header,\n data_columns=data_columns,\n num_rows=num_rows,\n include_row_ids=include_row_ids,\n delimiter=\",\",\n )\n\n def tsv(\n self,\n header: Optional[Sequence[str]] = None,\n data_columns: Tuple[str, str] = (\"{{name}}\", \"{{address}}\"),\n num_rows: int = 10,\n include_row_ids: bool = False,\n ) -> str:\n \"\"\"Generate random tab-separated values.\n\n For more information on the different arguments of this method, please refer to\n :meth:`dsv() <faker.providers.misc.Provider.dsv>` which is used under the hood.\n\n :sample: data_columns=('{{name}}', '{{address}}'), num_rows=10, include_row_ids=False\n :sample: header=('Name', 'Address', 'Favorite Color'),\n data_columns=('{{name}}', '{{address}}', '{{safe_color_name}}'),\n num_rows=10, include_row_ids=True\n \"\"\"\n return self.dsv(\n header=header,\n data_columns=data_columns,\n num_rows=num_rows,\n include_row_ids=include_row_ids,\n delimiter=\"\\t\",\n )\n\n def psv(\n self,\n header: Optional[Sequence[str]] = None,\n data_columns: Tuple[str, 
str] = (\"{{name}}\", \"{{address}}\"),\n num_rows: int = 10,\n include_row_ids: bool = False,\n ) -> str:\n \"\"\"Generate random pipe-separated values.\n\n For more information on the different arguments of this method, please refer to\n :meth:`dsv() <faker.providers.misc.Provider.dsv>` which is used under the hood.\n\n :sample: data_columns=('{{name}}', '{{address}}'), num_rows=10, include_row_ids=False\n :sample: header=('Name', 'Address', 'Favorite Color'),\n data_columns=('{{name}}', '{{address}}', '{{safe_color_name}}'),\n num_rows=10, include_row_ids=True\n \"\"\"\n return self.dsv(\n header=header,\n data_columns=data_columns,\n num_rows=num_rows,\n include_row_ids=include_row_ids,\n delimiter=\"|\",\n )\n\n def json(\n self,\n data_columns: Optional[List] = None,\n num_rows: int = 10,\n indent: Optional[int] = None,\n cls: Optional[Type[json.JSONEncoder]] = None,\n ) -> str:\n \"\"\"\n Generate random JSON structure values.\n\n Using a dictionary or list of records that is passed as ``data_columns``,\n define the structure that is used to build JSON structures. For complex\n data structures it is recommended to use the dictionary format.\n\n Data Column Dictionary format:\n {'key name': 'definition'}\n\n The definition can be 'provider', 'provider:argument_group', tokenized\n 'string {{ provider:argument_group }}' that is passed to the python\n provider method pystr_format() for generation, or a fixed '@word'.\n Using Lists, Tuples, and Dicts as a definition for structure.\n\n Example:\n fake.set_arguments('top_half', {'min_value': 50, 'max_value': 100})\n fake.json(data_columns={'Name': 'name', 'Score': 'pyint:top_half'})\n\n Data Column List format:\n [('key name', 'definition', {'arguments'})]\n\n With the list format the definition can be a list of records, to create\n a list within the structure data. For literal entries within the list,\n set the 'field_name' to None.\n\n :param data_columns: specification for the data structure\n :type data_columns: dict\n :param num_rows: number of rows the returned\n :type num_rows: int\n :param indent: number of spaces to indent the fields\n :type indent: int\n :param cls: optional json encoder to use for non-standard objects such as datetimes\n :type cls: json.JSONEncoder\n :return: Serialized JSON data\n :rtype: str\n\n :sample: data_columns={'Spec': '@1.0.1', 'ID': 'pyint',\n 'Details': {'Name': 'name', 'Address': 'address'}}, num_rows=2\n :sample: data_columns={'Candidates': ['name', 'name', 'name']},\n num_rows=1\n :sample: data_columns=[('Name', 'name'), ('Points', 'pyint',\n {'min_value': 50, 'max_value': 100})], num_rows=1\n \"\"\"\n default_data_columns = {\n \"name\": \"{{name}}\",\n \"residency\": \"{{address}}\",\n }\n data_columns: Union[List, Dict] = data_columns if data_columns else default_data_columns\n\n def process_list_structure(data: Sequence[Any]) -> Any:\n entry: Dict[str, Any] = {}\n\n for name, definition, *arguments in data:\n kwargs = arguments[0] if arguments else {}\n\n if not isinstance(kwargs, dict):\n raise TypeError(\"Invalid arguments type. 
Must be a dictionary\")\n\n if name is None:\n return self._value_format_selection(definition, **kwargs)\n\n if isinstance(definition, tuple):\n entry[name] = process_list_structure(definition)\n elif isinstance(definition, (list, set)):\n entry[name] = [process_list_structure([item]) for item in definition]\n else:\n entry[name] = self._value_format_selection(definition, **kwargs)\n return entry\n\n def process_dict_structure(data: Union[int, float, bool, Dict[str, Any]]) -> Any:\n entry: Dict[str, Any] = {}\n\n if isinstance(data, str):\n return self._value_format_selection(data)\n\n if isinstance(data, dict):\n for name, definition in data.items():\n if isinstance(definition, (tuple, list, set)):\n entry[name] = [process_dict_structure(item) for item in definition]\n elif isinstance(definition, (dict, int, float, bool)):\n entry[name] = process_dict_structure(definition)\n else:\n entry[name] = self._value_format_selection(definition)\n return entry\n\n return data\n\n def create_json_structure(data_columns: Union[Dict, List]) -> dict:\n if isinstance(data_columns, dict):\n return process_dict_structure(data_columns)\n\n if isinstance(data_columns, list):\n return process_list_structure(data_columns)\n\n raise TypeError(\"Invalid data_columns type. Must be a dictionary or list\")\n\n if num_rows == 1:\n return json.dumps(create_json_structure(data_columns), indent=indent, cls=cls)\n\n data = [create_json_structure(data_columns) for _ in range(num_rows)]\n return json.dumps(data, indent=indent, cls=cls)\n\n def fixed_width(self, data_columns: Optional[list] = None, num_rows: int = 10, align: str = \"left\") -> str:\n \"\"\"\n Generate random fixed width values.\n\n Using a list of tuple records that is passed as ``data_columns``, that\n defines the structure that will be generated. Arguments within the\n record are provider specific, and should be a dictionary that will be\n passed to the provider method.\n\n Data Column List format\n [('field width', 'definition', {'arguments'})]\n\n The definition can be 'provider', 'provider:argument_group', tokenized\n 'string {{ provider:argument_group }}' that is passed to the python\n provider method pystr_format() for generation, or a fixed '@word'.\n Using Lists, Tuples, and Dicts as a definition for structure.\n\n Argument Groups can be used to pass arguments to the provider methods,\n but will override the arguments supplied in the tuple record.\n\n Example:\n fake.set_arguments('top_half', {'min_value': 50, 'max_value': 100})\n fake.fixed_width(data_columns=[(20, 'name'), (3, 'pyint:top_half')])\n\n :param data_columns: specification for the data structure\n :type data_columns: list\n :param num_rows: number of rows the generator will yield\n :type num_rows: int\n :param align: positioning of the value. (left, middle, right)\n :type align: str\n :return: Serialized Fixed Width data\n :rtype: str\n\n :sample: data_columns=[(20, 'name'), (3, 'pyint', {'min_value': 50,\n 'max_value': 100})], align='right', num_rows=2\n \"\"\"\n default_data_columns = [\n (20, \"name\"),\n (3, \"pyint\", {\"max_value\": 20}),\n ]\n data_columns = data_columns if data_columns else default_data_columns\n align_map = {\n \"left\": \"<\",\n \"middle\": \"^\",\n \"right\": \">\",\n }\n data = []\n\n for _ in range(num_rows):\n row = []\n\n for width, definition, *arguments in data_columns:\n kwargs = arguments[0] if arguments else {}\n\n if not isinstance(kwargs, dict):\n raise TypeError(\"Invalid arguments type. 
Must be a dictionary\")\n\n result = self._value_format_selection(definition, **kwargs)\n row.append(f'{result:{align_map.get(align, \"<\")}{width}}'[:width])\n\n data.append(\"\".join(row))\n return \"\\n\".join(data)\n\n def _value_format_selection(self, definition: str, **kwargs: Any) -> Union[int, str]:\n \"\"\"\n Formats the string in different ways depending on it's contents.\n\n The return can be the '@word' itself, a '{{ token }}' passed to PyStr,\n or a 'provider:argument_group' format field that returns potentially\n a non-string type.\n\n This ensures that Numbers, Boolean types that are generated in the\n JSON structures in there proper type, and not just strings.\n \"\"\"\n\n # Check for PyStr first as complex strings may start with @\n if re.match(r\".*\\{\\{.*\\}\\}.*\", definition):\n return self.generator.pystr_format(definition)\n\n # Check for fixed @words that won't be generated\n if re.match(r\"^@.*\", definition):\n return definition.lstrip(\"@\")\n\n # Check if a argument group has been supplied\n if re.match(r\"^[a-zA-Z0-9_-]*:\\w\", definition):\n definition, argument_group = definition.split(\":\")\n arguments = self.generator.get_arguments(argument_group.strip())\n\n return self.generator.format(definition.strip(), **arguments)\n\n # Assume the string is refering to a provider\n return self.generator.format(definition, **kwargs)\n"}
|
{"faker/providers/misc/__init__.py": [{"type": "function", "name": "Provider.json_bytes", "lines": [491, 504], "signature": "def json_bytes( self, data_columns: Optional[List] = None, num_rows: int = 10, indent: Optional[int] = None, cls: Optional[Type[json.JSONEncoder]] = None, ) -> bytes:", "doc": "Generate random JSON structure and return as bytes.\n\nFor more information on the different arguments of this method, refer to\n:meth:`json() <faker.providers.misc.Provider.json>` which is used under the hood."}]}
| null |
["tests/providers/test_misc.py::TestMiscProvider::test_json_bytes"]
|
["tests/providers/test_misc.py::TestMiscProvider::test_uuid4_str", "tests/providers/test_misc.py::TestMiscProvider::test_uuid4_int", "tests/providers/test_misc.py::TestMiscProvider::test_uuid4_uuid_object", "tests/providers/test_misc.py::TestMiscProvider::test_uuid4_seedability", "tests/providers/test_misc.py::TestMiscProvider::test_zip_invalid_file", "tests/providers/test_misc.py::TestMiscProvider::test_zip_one_byte_undersized", "tests/providers/test_misc.py::TestMiscProvider::test_zip_exact_minimum_size", "tests/providers/test_misc.py::TestMiscProvider::test_zip_over_minimum_size", "tests/providers/test_misc.py::TestMiscProvider::test_zip_compression_py3", "tests/providers/test_misc.py::TestMiscProvider::test_tar_invalid_file", "tests/providers/test_misc.py::TestMiscProvider::test_tar_one_byte_undersized", "tests/providers/test_misc.py::TestMiscProvider::test_tar_exact_minimum_size", "tests/providers/test_misc.py::TestMiscProvider::test_tar_over_minimum_size", "tests/providers/test_misc.py::TestMiscProvider::test_tar_compression_py3", "tests/providers/test_misc.py::TestMiscProvider::test_image_no_pillow", "tests/providers/test_misc.py::TestMiscProvider::test_dsv_with_invalid_values", "tests/providers/test_misc.py::TestMiscProvider::test_dsv_no_header", "tests/providers/test_misc.py::TestMiscProvider::test_dsv_with_valid_header", "tests/providers/test_misc.py::TestMiscProvider::test_dsv_with_row_ids", "tests/providers/test_misc.py::TestMiscProvider::test_dsv_data_columns", "tests/providers/test_misc.py::TestMiscProvider::test_dsv_csvwriter_kwargs", "tests/providers/test_misc.py::TestMiscProvider::test_csv_helper_method", "tests/providers/test_misc.py::TestMiscProvider::test_tsv_helper_method", "tests/providers/test_misc.py::TestMiscProvider::test_psv_helper_method", "tests/providers/test_misc.py::TestMiscProvider::test_json_with_arguments", "tests/providers/test_misc.py::TestMiscProvider::test_json_multiple_rows", "tests/providers/test_misc.py::TestMiscProvider::test_json_passthrough_values", "tests/providers/test_misc.py::TestMiscProvider::test_json_type_integrity_int", "tests/providers/test_misc.py::TestMiscProvider::test_json_type_integrity_float", "tests/providers/test_misc.py::TestMiscProvider::test_json_invalid_data_columns", "tests/providers/test_misc.py::TestMiscProvider::test_json_list_format_invalid_arguments_type", "tests/providers/test_misc.py::TestMiscProvider::test_json_list_format_nested_list_of_values", "tests/providers/test_misc.py::TestMiscProvider::test_json_list_format_nested_list_of_objects", "tests/providers/test_misc.py::TestMiscProvider::test_json_list_format_nested_objects", "tests/providers/test_misc.py::TestMiscProvider::test_json_dict_format_nested_list_of_values", "tests/providers/test_misc.py::TestMiscProvider::test_json_dict_format_nested_list_of_objects", "tests/providers/test_misc.py::TestMiscProvider::test_json_dict_format_nested_objects", "tests/providers/test_misc.py::TestMiscProvider::test_json_type_integrity_datetime_using_encoder", "tests/providers/test_misc.py::TestMiscProvider::test_json_type_integrity_datetime_no_encoder", "tests/providers/test_misc.py::TestMiscProvider::test_fixed_width_with_arguments", "tests/providers/test_misc.py::TestMiscProvider::test_fixed_width_invalid_arguments_type", "tests/providers/test_misc.py::TestMiscProvider::test_md5", "tests/providers/test_misc.py::TestMiscProvider::test_sha1", "tests/providers/test_misc.py::TestMiscProvider::test_sha256"]
|
6edfdbf6ae90b0153309e3bf066aa3b2d16494a7
|
{"first_commit_time": 1673523064.0, "pr_title": "Add method to generate JSON as bytes", "pr_body": "### What does this change\r\n\r\nAdd wrapper method around the JSON provider to generate JSON as bytes.\r\n\r\n### What was wrong\r\n\r\nI use Faker through factory boy and couldn't find a convenient way to achieve that. The change is quite small so I thought it could be a nice addition to the library.\r\n\r\n### How this fixes it\r\n\r\nWraps the existing JSON provider and encode its output to bytes.\r\n", "pr_timeline": [], "issues": {}}
|
|
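A short sketch of the json_bytes() wrapper added by the patch above, assuming a Faker release that ships it; the data_columns structure mirrors the nested-dict form shown in the json() docstring.

```python
from faker import Faker

fake = Faker()

# json_bytes() simply wraps json() and encodes its output, so it accepts the
# same data_columns / num_rows / indent / cls arguments and returns bytes.
payload = fake.json_bytes(data_columns={"ID": "pyint", "Details": {"Name": "name"}}, num_rows=1)
assert isinstance(payload, bytes)
print(payload.decode())
```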
joke2k/faker
| 1,848
|
https://github.com/joke2k/faker/pull/1848
|
joke2k__faker-1848
|
[]
|
d93d8a0d42b7a92d8b386d73c85886b2633124e7
|
diff --git a/faker/providers/phone_number/en_US/__init__.py b/faker/providers/phone_number/en_US/__init__.py
index bb0b77df84..ac1367e8bf 100644
--- a/faker/providers/phone_number/en_US/__init__.py
+++ b/faker/providers/phone_number/en_US/__init__.py
@@ -37,3 +37,14 @@ class Provider(PhoneNumberProvider):
"001-###-###-####x####",
"001-###-###-####x#####",
)
+
+ basic_formats = (
+ # basic 10-digit phone number format with no extensions
+ "##########",
+ "###-###-####",
+ "(###)###-####",
+ )
+
+ def basic_phone_number(self) -> str:
+ pattern: str = self.random_element(self.basic_formats)
+ return self.numerify(self.generator.parse(pattern))
|
diff --git a/tests/providers/test_phone_number.py b/tests/providers/test_phone_number.py
index 8fd54ec316..1bb38daf9e 100644
--- a/tests/providers/test_phone_number.py
+++ b/tests/providers/test_phone_number.py
@@ -364,3 +364,18 @@ def test_phone_number(self, faker, num_samples):
for _ in range(num_samples):
phone_number = faker.phone_number()
assert any([re.match(pattern, phone_number) for pattern in patterns])
+
+
+class TestEnUs:
+ """Test En_US phone provider methods"""
+
+ def test_basic_phone_number(self, faker, num_samples):
+ pattern_no_whitespaces: Pattern = re.compile(
+ r"\d{9}",
+ )
+ pattern_dashes: Pattern = re.compile(r"\d{3}-\d{3}-\d{4}")
+ pattern_parens: Pattern = re.compile(r"\(\d{3}\)\d{3}-\d{4}")
+ patterns = [pattern_no_whitespaces, pattern_dashes, pattern_parens]
+ for _ in range(num_samples):
+ phone_number = faker.basic_phone_number()
+ assert any([re.match(pattern, phone_number) for pattern in patterns])
| 2023-04-22T05:52:53
|
{}
|
{"faker/providers/phone_number/en_US/__init__.py": "from .. import Provider as PhoneNumberProvider\n\n\nclass Provider(PhoneNumberProvider):\n formats = (\n # Standard 10-digit phone number formats\n \"##########\",\n \"##########\",\n \"###-###-####\",\n \"###-###-####\",\n # Optional 10-digit local phone number format\n \"(###)###-####\",\n \"(###)###-####\",\n # Non-standard 10-digit phone number format\n \"###.###.####\",\n \"###.###.####\",\n # Standard 10-digit phone number format with extensions\n \"###-###-####x###\",\n \"###-###-####x####\",\n \"###-###-####x#####\",\n # Optional 10-digit local phone number format with extensions\n \"(###)###-####x###\",\n \"(###)###-####x####\",\n \"(###)###-####x#####\",\n # Non-standard 10-digit phone number format with extensions\n \"###.###.####x###\",\n \"###.###.####x####\",\n \"###.###.####x#####\",\n # Standard 11-digit phone number format\n \"+1-###-###-####\",\n \"001-###-###-####\",\n # Standard 11-digit phone number format with extensions\n \"+1-###-###-####x###\",\n \"+1-###-###-####x####\",\n \"+1-###-###-####x#####\",\n \"001-###-###-####x###\",\n \"001-###-###-####x####\",\n \"001-###-###-####x#####\",\n )\n"}
|
{"faker/providers/phone_number/en_US/__init__.py": [{"type": "function", "name": "Provider.basic_phone_number", "lines": [48, 50], "signature": "def basic_phone_number(self) -> str:", "doc": ""}]}
| null |
["tests/providers/test_phone_number.py::TestEnUs::test_basic_phone_number"]
|
["tests/providers/test_phone_number.py::TestPhoneNumber::test_country_calling_code", "tests/providers/test_phone_number.py::TestPhoneNumber::test_msisdn", "tests/providers/test_phone_number.py::TestAzAz::test_phone_number", "tests/providers/test_phone_number.py::TestAzAz::test_cellphone_number", "tests/providers/test_phone_number.py::TestAzAz::test_landline_number", "tests/providers/test_phone_number.py::TestJaJp::test_phone_number", "tests/providers/test_phone_number.py::TestPtBr::test_phone_number", "tests/providers/test_phone_number.py::TestPtBr::test_msisdn", "tests/providers/test_phone_number.py::TestPtBr::test_cellphone", "tests/providers/test_phone_number.py::TestPtBr::test_service_phone", "tests/providers/test_phone_number.py::TestHuHu::test_phone_number", "tests/providers/test_phone_number.py::TestThTh::test_phone_number", "tests/providers/test_phone_number.py::TestHyAm::test_phone_number", "tests/providers/test_phone_number.py::TestEnPh::test_globe_mobile_number", "tests/providers/test_phone_number.py::TestEnPh::test_smart_mobile_number", "tests/providers/test_phone_number.py::TestEnPh::test_sun_mobile_number", "tests/providers/test_phone_number.py::TestEnPh::test_mobile_number", "tests/providers/test_phone_number.py::TestEnPh::test_globe_area2_landline_number", "tests/providers/test_phone_number.py::TestEnPh::test_pldt_area2_landline_number", "tests/providers/test_phone_number.py::TestEnPh::test_bayantel_area2_landline_number", "tests/providers/test_phone_number.py::TestEnPh::test_misc_area2_landline_number", "tests/providers/test_phone_number.py::TestEnPh::test_area2_landline_number", "tests/providers/test_phone_number.py::TestEnPh::test_non_area2_landline_number", "tests/providers/test_phone_number.py::TestEnPh::test_landline_number", "tests/providers/test_phone_number.py::TestFilPh::test_globe_mobile_number", "tests/providers/test_phone_number.py::TestFilPh::test_smart_mobile_number", "tests/providers/test_phone_number.py::TestFilPh::test_sun_mobile_number", "tests/providers/test_phone_number.py::TestFilPh::test_mobile_number", "tests/providers/test_phone_number.py::TestFilPh::test_globe_area2_landline_number", "tests/providers/test_phone_number.py::TestFilPh::test_pldt_area2_landline_number", "tests/providers/test_phone_number.py::TestFilPh::test_bayantel_area2_landline_number", "tests/providers/test_phone_number.py::TestFilPh::test_misc_area2_landline_number", "tests/providers/test_phone_number.py::TestFilPh::test_area2_landline_number", "tests/providers/test_phone_number.py::TestFilPh::test_non_area2_landline_number", "tests/providers/test_phone_number.py::TestFilPh::test_landline_number", "tests/providers/test_phone_number.py::TestTlPh::test_globe_mobile_number", "tests/providers/test_phone_number.py::TestTlPh::test_smart_mobile_number", "tests/providers/test_phone_number.py::TestTlPh::test_sun_mobile_number", "tests/providers/test_phone_number.py::TestTlPh::test_mobile_number", "tests/providers/test_phone_number.py::TestTlPh::test_globe_area2_landline_number", "tests/providers/test_phone_number.py::TestTlPh::test_pldt_area2_landline_number", "tests/providers/test_phone_number.py::TestTlPh::test_bayantel_area2_landline_number", "tests/providers/test_phone_number.py::TestTlPh::test_misc_area2_landline_number", "tests/providers/test_phone_number.py::TestTlPh::test_area2_landline_number", "tests/providers/test_phone_number.py::TestTlPh::test_non_area2_landline_number", "tests/providers/test_phone_number.py::TestTlPh::test_landline_number", 
"tests/providers/test_phone_number.py::TestTaIn::test_phone_number", "tests/providers/test_phone_number.py::TestEsCo::test_phone_number", "tests/providers/test_phone_number.py::TestEsEs::test_phone_number", "tests/providers/test_phone_number.py::TestArAe::test_cellphone_number", "tests/providers/test_phone_number.py::TestArAe::test_telephone_number", "tests/providers/test_phone_number.py::TestArAe::test_toll_number", "tests/providers/test_phone_number.py::TestArAe::test_service_phone_number", "tests/providers/test_phone_number.py::TestArAe::test_phone_number", "tests/providers/test_phone_number.py::TestFrFr::test_phone_number"]
|
6edfdbf6ae90b0153309e3bf066aa3b2d16494a7
|
{"first_commit_time": 1682142233.0, "pr_title": "add a separate basic number format for US", "pr_body": "issue ticket: https://github.com/joke2k/faker/issues/1847\r\n\r\n### What does this change\r\n\r\nAdds a 'basic phone number' type to en_US to comply with some form requirements, as well as a unit test.\r\n\r\n### What was wrong\r\n\r\nSome forms do not allow for phone formats with extensions or country codes. As a result, the faker get_phone function for en_US doesn't work for all cases. \r\n\r\n### How this fixes it\r\n\r\nAdds a function that allows the user to only use 'basic' ten-digit phone numbers and standard formatting. \r\n", "pr_timeline": [{"time": 1682393131.0, "comment": "@fcurella thanks for the catch -- linting complete. "}], "issues": {}}
|
|
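A sketch of the basic_phone_number() helper introduced in the patch above, assuming a Faker version that includes it; the three accepted output shapes come straight from the basic_formats tuple in the diff.

```python
import re

from faker import Faker

# basic_phone_number() is defined on the en_US phone_number provider only,
# so the Faker instance needs the en_US locale (this is also the default,
# but it is spelled out here for clarity).
fake = Faker("en_US")

number = fake.basic_phone_number()
print(number)
# Plain 10 digits, dashed, or parenthesised area code; never extensions
# or country codes.
assert re.fullmatch(r"\d{10}|\d{3}-\d{3}-\d{4}|\(\d{3}\)\d{3}-\d{4}", number)
```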
joke2k/faker
| 494
|
https://github.com/joke2k/faker/pull/494
|
joke2k__faker-494
|
["493"]
|
d06d05f415e97b15f21683c991511c24e12c8304
|
diff --git a/faker/providers/file/__init__.py b/faker/providers/file/__init__.py
index b19a3c563c..4f4c14cef0 100644
--- a/faker/providers/file/__init__.py
+++ b/faker/providers/file/__init__.py
@@ -201,3 +201,16 @@ def file_extension(cls, category=None):
"""
category = category if category else cls.random_element(list(cls.file_extensions.keys()))
return cls.random_element(cls.file_extensions[category])
+
+ @classmethod
+ def file_path(cls, depth=1, category=None, extension=None):
+ """
+ :param category: audio|image|office|text|video
+ :param extension: file extension
+ :param depth: depth of the file (depth >= 0)
+ """
+ file = Provider.file_name(category, extension)
+ path = "/{0}".format(file)
+ for d in range(0, depth):
+ path = "/{0}{1}".format(WordProvider.word(), path)
+ return path
|
diff --git a/tests/providers/file.py b/tests/providers/file.py
new file mode 100644
index 0000000000..1a1617bac6
--- /dev/null
+++ b/tests/providers/file.py
@@ -0,0 +1,25 @@
+from __future__ import unicode_literals
+
+import unittest
+import re
+
+from faker import Factory
+from faker.providers.file import Provider as FileProvider
+
+
+class TestFile(unittest.TestCase):
+ """ Tests file """
+
+ def setUp(self):
+ self.factory = Factory.create()
+
+ def test_file_path(self):
+ for _ in range(100):
+ file_path = FileProvider.file_path()
+ self.assertTrue(re.search(r'\/\w+\/\w+\.\w+', file_path))
+ file_path = FileProvider.file_path(depth=3)
+ self.assertTrue(re.search(r'\/\w+\/\w+\/\w+\.\w+', file_path))
+ file_path = FileProvider.file_path(extension='pdf')
+ self.assertTrue(re.search(r'\/\w+\/\w+\.pdf', file_path))
+ file_path = FileProvider.file_path(category='image')
+ self.assertTrue(re.search(r'\/\w+\/\w+\.(bmp|gif|jpeg|jpg|png|tiff)', file_path))
| 2017-04-06T18:16:03
|
{}
|
{"faker/providers/file/__init__.py": "# coding=utf-8\nfrom __future__ import unicode_literals\nfrom collections import OrderedDict\n\nfrom .. import BaseProvider\nfrom ..lorem.la import Provider as WordProvider\n\n\nclass Provider(BaseProvider):\n application_mime_types = (\n\n \"application/atom+xml\", # Atom feeds\n \"application/ecmascript\",\n # ECMAScript/JavaScript; Defined in RFC 4329 (equivalent to application/javascript but with stricter processing rules)\n \"application/EDI-X12\", # EDI X12 data; Defined in RFC 1767\n \"application/EDIFACT\", # EDI EDIFACT data; Defined in RFC 1767\n \"application/json\", # JavaScript Object Notation JSON; Defined in RFC 4627\n \"application/javascript\", # ECMAScript/JavaScript; Defined in RFC 4329 (equivalent to application/ecmascript\n # but with looser processing rules) It is not accepted in IE 8\n # or earlier - text/javascript is accepted but it is defined as obsolete in RFC 4329.\n # The \"type\" attribute of the <script> tag in HTML5 is optional and in practice\n # omitting the media type of JavaScript programs is the most interoperable\n # solution since all browsers have always assumed the correct\n # default even before HTML5. \"application/octet-stream\", # Arbitrary binary data.[6] Generally speaking this type identifies files that are not associated with a specific application. Contrary to past assumptions by software packages such as Apache this is not a type that should be applied to unknown files. In such a case, a server or application should not indicate a content type, as it may be incorrect, but rather, should omit the type in order to allow the recipient to guess the type.[7]\n \"application/ogg\", # Ogg, a multimedia bitstream container format; Defined in RFC 5334\n \"application/pdf\", # Portable Document Format, PDF has been in use for document exchange\n # on the Internet since 1993; Defined in RFC 3778\n \"application/postscript\", # PostScript; Defined in RFC 2046\n \"application/rdf+xml\", # Resource Description Framework; Defined by RFC 3870\n \"application/rss+xml\", # RSS feeds\n \"application/soap+xml\", # SOAP; Defined by RFC 3902\n \"application/font-woff\", # Web Open Font Format; (candidate recommendation; use application/x-font-woff\n # until standard is official)\n \"application/xhtml+xml\", # XHTML; Defined by RFC 3236\n \"application/xml-dtd\", # DTD files; Defined by RFC 3023\n \"application/xop+xml\", # XOP\n \"application/zip\", # ZIP archive files; Registered[8]\n \"application/gzip\", # Gzip, Defined in RFC 6713\n )\n\n audio_mime_types = (\n \"audio/basic\", # mulaw audio at 8 kHz, 1 channel; Defined in RFC 2046\n \"audio/L24\", # 24bit Linear PCM audio at 8-48 kHz, 1-N channels; Defined in RFC 3190\n \"audio/mp4\", # MP4 audio\n \"audio/mpeg\", # MP3 or other MPEG audio; Defined in RFC 3003\n \"audio/ogg\", # Ogg Vorbis, Speex, Flac and other audio; Defined in RFC 5334\n \"audio/vorbis\", # Vorbis encoded audio; Defined in RFC 5215\n \"audio/vnd.rn-realaudio\", # RealAudio; Documented in RealPlayer Help[9]\n \"audio/vnd.wave\", # WAV audio; Defined in RFC 2361\n \"audio/webm\", # WebM open media format\n )\n\n image_mime_types = (\n \"image/gif\", # GIF image; Defined in RFC 2045 and RFC 2046\n \"image/jpeg\", # JPEG JFIF image; Defined in RFC 2045 and RFC 2046\n \"image/pjpeg\",\n # JPEG JFIF image; Associated with Internet Explorer; Listed in ms775147(v=vs.85) - Progressive JPEG, initiated before global browser support for progressive JPEGs (Microsoft and Firefox).\n \"image/png\", # Portable 
Network Graphics; Registered,[10] Defined in RFC 2083\n \"image/svg+xml\", # SVG vector image; Defined in SVG Tiny 1.2 Specification Appendix M\n \"image/tiff\", # Tag Image File Format (only for Baseline TIFF); Defined in RFC 3302\n \"image/vnd.microsoft.icon\", # ICO image; Registered[11]\n )\n\n message_mime_types = (\n \"message/http\", # Defined in RFC 2616\n \"message/imdn+xml\", # IMDN Instant Message Disposition Notification; Defined in RFC 5438\n \"message/partial\", # Email; Defined in RFC 2045 and RFC 2046\n \"message/rfc822\", # Email; EML files, MIME files, MHT files, MHTML files; Defined in RFC 2045 and RFC 2046\n )\n\n model_mime_types = (\n \"model/example\", # Defined in RFC 4735\n \"model/iges\", # IGS files, IGES files; Defined in RFC 2077\n \"model/mesh\", # MSH files, MESH files; Defined in RFC 2077, SILO files\n \"model/vrml\", # WRL files, VRML files; Defined in RFC 2077\n \"model/x3d+binary\", # X3D ISO standard for representing 3D computer graphics, X3DB binary files\n \"model/x3d+vrml\", # X3D ISO standard for representing 3D computer graphics, X3DV VRML files\n \"model/x3d+xml\", # X3D ISO standard for representing 3D computer graphics, X3D XML files\n )\n\n multipart_mime_types = (\n \"multipart/mixed\", # MIME Email; Defined in RFC 2045 and RFC 2046\n \"multipart/alternative\", # MIME Email; Defined in RFC 2045 and RFC 2046\n \"multipart/related\", # MIME Email; Defined in RFC 2387 and used by MHTML (HTML mail)\n \"multipart/form-data\", # MIME Webform; Defined in RFC 2388\n \"multipart/signed\", # Defined in RFC 1847\n \"multipart/encrypted\", # Defined in RFC 1847\n )\n\n text_mime_types = (\n \"text/cmd\", # commands; subtype resident in Gecko browsers like Firefox 3.5\n \"text/css\", # Cascading Style Sheets; Defined in RFC 2318\n \"text/csv\", # Comma-separated values; Defined in RFC 4180\n \"text/html\", # HTML; Defined in RFC 2854\n \"text/javascript\",\n # (Obsolete): JavaScript; Defined in and obsoleted by RFC 4329 in order to discourage its usage in favor of application/javascript. However, text/javascript is allowed in HTML 4 and 5 and, unlike application/javascript, has cross-browser support. 
The \"type\" attribute of the <script> tag in HTML5 is optional and there is no need to use it at all since all browsers have always assumed the correct default (even in HTML 4 where it was required by the specification).\n \"text/plain\", # Textual data; Defined in RFC 2046 and RFC 3676\n \"text/vcard\", # vCard (contact information); Defined in RFC 6350\n \"text/xml\", # Extensible Markup Language; Defined in RFC 3023\n )\n\n video_mime_types = (\n \"video/mpeg\", # MPEG-1 video with multiplexed audio; Defined in RFC 2045 and RFC 2046\n \"video/mp4\", # MP4 video; Defined in RFC 4337\n \"video/ogg\", # Ogg Theora or other video (with audio); Defined in RFC 5334\n \"video/quicktime\", # QuickTime video; Registered[12]\n \"video/webm\", # WebM Matroska-based open media format\n \"video/x-matroska\", # Matroska open media format\n \"video/x-ms-wmv\", # Windows Media Video; Documented in Microsoft KB 288102\n \"video/x-flv\", # Flash video (FLV files)\n )\n\n mime_types = OrderedDict((\n ('application', application_mime_types),\n ('audio', audio_mime_types),\n ('image', image_mime_types),\n ('message', message_mime_types),\n ('model', model_mime_types),\n ('multipart', multipart_mime_types),\n ('text', text_mime_types),\n ('video', video_mime_types),\n ))\n\n audio_file_extensions = (\n \"flac\",\n \"mp3\",\n \"wav\",\n )\n\n image_file_extensions = (\n \"bmp\",\n \"gif\",\n \"jpeg\",\n \"jpg\",\n \"png\",\n \"tiff\",\n )\n\n text_file_extensions = (\n \"css\",\n \"csv\",\n \"html\",\n \"js\",\n \"json\",\n \"txt\",\n )\n\n video_file_extensions = (\n \"mp4\",\n \"avi\",\n \"mov\",\n \"webm\",\n )\n\n office_file_extensions = (\n \"doc\", # legacy MS Word\n \"docx\", # MS Word\n \"xls\", # legacy MS Excel\n \"xlsx\", # MS Excel\n \"ppt\", # legacy MS PowerPoint\n \"pptx\", # MS PowerPoint\n \"odt\", # LibreOffice document\n \"ods\", # LibreOffice spreadsheet\n \"odp\", # LibreOffice presentation\n \"pages\", # Apple Pages\n \"numbers\", # Apple Numbers\n \"key\", # Apple Keynote\n \"pdf\", # Portable Document Format\n )\n\n file_extensions = OrderedDict((\n (\"audio\", audio_file_extensions),\n (\"image\", image_file_extensions),\n (\"office\", office_file_extensions),\n (\"text\", text_file_extensions),\n (\"video\", video_file_extensions),\n ))\n\n @classmethod\n def mime_type(cls, category=None):\n \"\"\"\n :param category: application|audio|image|message|model|multipart|text|video\n \"\"\"\n category = category if category else cls.random_element(list(cls.mime_types.keys()))\n return cls.random_element(cls.mime_types[category])\n\n @classmethod\n def file_name(cls, category=None, extension=None):\n \"\"\"\n :param category: audio|image|office|text|video\n :param extension: file extension\n \"\"\"\n extension = extension if extension else cls.file_extension(category)\n filename = WordProvider.word()\n return '{0}.{1}'.format(filename, extension)\n\n @classmethod\n def file_extension(cls, category=None):\n \"\"\"\n :param category: audio|image|office|text|video\n \"\"\"\n category = category if category else cls.random_element(list(cls.file_extensions.keys()))\n return cls.random_element(cls.file_extensions[category])\n"}
|
{"faker/providers/file/__init__.py": [{"type": "function", "name": "Provider.file_path", "lines": [206, 216], "signature": "def file_path(cls, depth=1, category=None, extension=None):", "doc": ":param category: audio|image|office|text|video\n:param extension: file extension\n:param depth: depth of the file (depth >= 0)"}]}
| null |
["tests/providers/file.py::TestFile::test_file_path"]
|
[]
|
6edfdbf6ae90b0153309e3bf066aa3b2d16494a7
|
{"first_commit_time": 1491502337.0, "pr_title": "[close #493] adding_file_path_provider", "pr_body": "Issue #493 \r\n\r\nAdding a file path provider in file provider.", "pr_timeline": [{"time": 1491728077.0, "comment": "Thank you! "}], "issues": {"493": {"issue_title": "Add a file path provider", "issue_body": "In the file providers would be nice to have a file_path provider who would return a path like ```/lorem/ipsum/lorem.pdf```.", "issue_timeline": []}}}
|
|
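A usage sketch for the file_path() provider added in the patch above, assuming a Faker version that includes it; the category and extension values are the ones exercised by the row's test patch.

```python
from faker import Faker

fake = Faker()

# depth controls how many directory components precede the file name;
# category and extension constrain the generated file name itself.
print(fake.file_path())                   # e.g. '/lorem/ipsum.pdf' (one level by default)
print(fake.file_path(depth=3))            # three directory levels deep
print(fake.file_path(category="image"))   # bmp/gif/jpeg/jpg/png/tiff extension
print(fake.file_path(extension="txt"))    # forces the .txt extension
```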
joke2k/faker
| 738
|
https://github.com/joke2k/faker/pull/738
|
joke2k__faker-738
|
[]
|
5159a00f3965fca5d3c5a73c6e352a00d16d37bd
|
diff --git a/faker/providers/file/__init__.py b/faker/providers/file/__init__.py
index a6ab2777e8..594151cd38 100644
--- a/faker/providers/file/__init__.py
+++ b/faker/providers/file/__init__.py
@@ -1,5 +1,7 @@
# coding=utf-8
from __future__ import unicode_literals
+
+import string
from collections import OrderedDict
from .. import BaseProvider
@@ -187,6 +189,7 @@ class Provider(BaseProvider):
("text", text_file_extensions),
("video", video_file_extensions),
))
+ unix_device_prefixes = ('sd', 'vd', 'xvd',)
def mime_type(self, category=None):
"""
@@ -224,3 +227,20 @@ def file_path(self, depth=1, category=None, extension=None):
for d in range(0, depth):
path = "/{0}{1}".format(self.generator.word(), path)
return path
+
+ def unix_device(self, prefix=None):
+ """
+ :param prefix: sd|vd|xvd
+ """
+ prefix = prefix or self.random_element(self.unix_device_prefixes)
+ suffix = self.random_element(string.ascii_lowercase)
+ path = '/dev/%s%s' % (prefix, suffix)
+ return path
+
+ def unix_partition(self, prefix=None):
+ """
+ :param prefix: sd|vd|xvd
+ """
+ path = self.unix_device(prefix=prefix)
+ path += str(self.random_digit())
+ return path
|
diff --git a/tests/providers/test_file.py b/tests/providers/test_file.py
index cca458d0bf..767d48ecc1 100644
--- a/tests/providers/test_file.py
+++ b/tests/providers/test_file.py
@@ -22,3 +22,27 @@ def test_file_path(self):
self.assertTrue(re.search(r'\/\w+\/\w+\.pdf', file_path))
file_path = self.factory.file_path(category='image')
self.assertTrue(re.search(r'\/\w+\/\w+\.(bmp|gif|jpeg|jpg|png|tiff)', file_path))
+
+ def test_unix_device(self):
+ reg_device = re.compile('^/dev/(vd|sd|xvd)[a-z]$')
+ # Test default
+ for _ in range(100):
+ path = self.factory.unix_device()
+ self.assertTrue(reg_device.match(path))
+ # Test with prefix
+ for _ in range(100):
+ path = self.factory.unix_device('sd')
+ self.assertTrue(reg_device.match(path))
+ self.assertTrue(path.startswith('/dev/sd'))
+
+ def test_unix_partition(self):
+ reg_part = re.compile('^/dev/(vd|sd|xvd)[a-z]\d$')
+ # Test default
+ for _ in range(100):
+ path = self.factory.unix_partition()
+ self.assertTrue(reg_part.match(path))
+ # Test with prefix
+ for _ in range(100):
+ path = self.factory.unix_partition('sd')
+ self.assertTrue(reg_part.match(path))
+ self.assertTrue(path.startswith('/dev/sd'))
| 2018-04-06T13:01:29
|
{}
|
{"faker/providers/file/__init__.py": "# coding=utf-8\nfrom __future__ import unicode_literals\nfrom collections import OrderedDict\n\nfrom .. import BaseProvider\n\n\nclass Provider(BaseProvider):\n application_mime_types = (\n\n \"application/atom+xml\", # Atom feeds\n \"application/ecmascript\",\n # ECMAScript/JavaScript; Defined in RFC 4329 (equivalent to\n # application/javascript but with stricter processing rules)\n \"application/EDI-X12\", # EDI X12 data; Defined in RFC 1767\n \"application/EDIFACT\", # EDI EDIFACT data; Defined in RFC 1767\n \"application/json\", # JavaScript Object Notation JSON; Defined in RFC 4627\n # ECMAScript/JavaScript; Defined in RFC 4329 (equivalent to\n # application/ecmascript\n \"application/javascript\",\n # but with looser processing rules) It is not accepted in IE 8\n # or earlier - text/javascript is accepted but it is defined as obsolete in RFC 4329.\n # The \"type\" attribute of the <script> tag in HTML5 is optional and in practice\n # omitting the media type of JavaScript programs is the most interoperable\n # solution since all browsers have always assumed the correct\n # default even before HTML5. \"application/octet-stream\", # Arbitrary binary data.[6] Generally speaking this type identifies files that are not associated with a specific application. Contrary to past assumptions by software packages such as Apache this is not a type that should be applied to unknown files. In such a case, a server or application should not indicate a content type, as it may be incorrect, but rather, should omit the type in order to allow the recipient to guess the type.[7]\n \"application/ogg\", # Ogg, a multimedia bitstream container format; Defined in RFC 5334\n \"application/pdf\", # Portable Document Format, PDF has been in use for document exchange\n # on the Internet since 1993; Defined in RFC 3778\n \"application/postscript\", # PostScript; Defined in RFC 2046\n \"application/rdf+xml\", # Resource Description Framework; Defined by RFC 3870\n \"application/rss+xml\", # RSS feeds\n \"application/soap+xml\", # SOAP; Defined by RFC 3902\n # Web Open Font Format; (candidate recommendation; use application/x-font-woff\n \"application/font-woff\",\n # until standard is official)\n \"application/xhtml+xml\", # XHTML; Defined by RFC 3236\n \"application/xml-dtd\", # DTD files; Defined by RFC 3023\n \"application/xop+xml\", # XOP\n \"application/zip\", # ZIP archive files; Registered[8]\n \"application/gzip\", # Gzip, Defined in RFC 6713\n )\n\n audio_mime_types = (\n \"audio/basic\", # mulaw audio at 8 kHz, 1 channel; Defined in RFC 2046\n \"audio/L24\", # 24bit Linear PCM audio at 8-48 kHz, 1-N channels; Defined in RFC 3190\n \"audio/mp4\", # MP4 audio\n \"audio/mpeg\", # MP3 or other MPEG audio; Defined in RFC 3003\n \"audio/ogg\", # Ogg Vorbis, Speex, Flac and other audio; Defined in RFC 5334\n \"audio/vorbis\", # Vorbis encoded audio; Defined in RFC 5215\n # RealAudio; Documented in RealPlayer Help[9]\n \"audio/vnd.rn-realaudio\",\n \"audio/vnd.wave\", # WAV audio; Defined in RFC 2361\n \"audio/webm\", # WebM open media format\n )\n\n image_mime_types = (\n \"image/gif\", # GIF image; Defined in RFC 2045 and RFC 2046\n \"image/jpeg\", # JPEG JFIF image; Defined in RFC 2045 and RFC 2046\n \"image/pjpeg\",\n # JPEG JFIF image; Associated with Internet Explorer; Listed in ms775147(v=vs.85) - Progressive JPEG, initiated before global browser support for progressive JPEGs (Microsoft and Firefox).\n # Portable Network Graphics; Registered,[10] Defined in RFC 
2083\n \"image/png\",\n \"image/svg+xml\", # SVG vector image; Defined in SVG Tiny 1.2 Specification Appendix M\n # Tag Image File Format (only for Baseline TIFF); Defined in RFC 3302\n \"image/tiff\",\n \"image/vnd.microsoft.icon\", # ICO image; Registered[11]\n )\n\n message_mime_types = (\n \"message/http\", # Defined in RFC 2616\n \"message/imdn+xml\", # IMDN Instant Message Disposition Notification; Defined in RFC 5438\n \"message/partial\", # Email; Defined in RFC 2045 and RFC 2046\n # Email; EML files, MIME files, MHT files, MHTML files; Defined in RFC\n # 2045 and RFC 2046\n \"message/rfc822\",\n )\n\n model_mime_types = (\n \"model/example\", # Defined in RFC 4735\n \"model/iges\", # IGS files, IGES files; Defined in RFC 2077\n \"model/mesh\", # MSH files, MESH files; Defined in RFC 2077, SILO files\n \"model/vrml\", # WRL files, VRML files; Defined in RFC 2077\n # X3D ISO standard for representing 3D computer graphics, X3DB binary\n # files\n \"model/x3d+binary\",\n \"model/x3d+vrml\", # X3D ISO standard for representing 3D computer graphics, X3DV VRML files\n \"model/x3d+xml\", # X3D ISO standard for representing 3D computer graphics, X3D XML files\n )\n\n multipart_mime_types = (\n \"multipart/mixed\", # MIME Email; Defined in RFC 2045 and RFC 2046\n \"multipart/alternative\", # MIME Email; Defined in RFC 2045 and RFC 2046\n # MIME Email; Defined in RFC 2387 and used by MHTML (HTML mail)\n \"multipart/related\",\n \"multipart/form-data\", # MIME Webform; Defined in RFC 2388\n \"multipart/signed\", # Defined in RFC 1847\n \"multipart/encrypted\", # Defined in RFC 1847\n )\n\n text_mime_types = (\n \"text/cmd\", # commands; subtype resident in Gecko browsers like Firefox 3.5\n \"text/css\", # Cascading Style Sheets; Defined in RFC 2318\n \"text/csv\", # Comma-separated values; Defined in RFC 4180\n \"text/html\", # HTML; Defined in RFC 2854\n \"text/javascript\",\n # (Obsolete): JavaScript; Defined in and obsoleted by RFC 4329 in order to discourage its usage in favor of application/javascript. However, text/javascript is allowed in HTML 4 and 5 and, unlike application/javascript, has cross-browser support. 
The \"type\" attribute of the <script> tag in HTML5 is optional and there is no need to use it at all since all browsers have always assumed the correct default (even in HTML 4 where it was required by the specification).\n \"text/plain\", # Textual data; Defined in RFC 2046 and RFC 3676\n \"text/vcard\", # vCard (contact information); Defined in RFC 6350\n \"text/xml\", # Extensible Markup Language; Defined in RFC 3023\n )\n\n video_mime_types = (\n \"video/mpeg\", # MPEG-1 video with multiplexed audio; Defined in RFC 2045 and RFC 2046\n \"video/mp4\", # MP4 video; Defined in RFC 4337\n # Ogg Theora or other video (with audio); Defined in RFC 5334\n \"video/ogg\",\n \"video/quicktime\", # QuickTime video; Registered[12]\n \"video/webm\", # WebM Matroska-based open media format\n \"video/x-matroska\", # Matroska open media format\n \"video/x-ms-wmv\", # Windows Media Video; Documented in Microsoft KB 288102\n \"video/x-flv\", # Flash video (FLV files)\n )\n\n mime_types = OrderedDict((\n ('application', application_mime_types),\n ('audio', audio_mime_types),\n ('image', image_mime_types),\n ('message', message_mime_types),\n ('model', model_mime_types),\n ('multipart', multipart_mime_types),\n ('text', text_mime_types),\n ('video', video_mime_types),\n ))\n\n audio_file_extensions = (\n \"flac\",\n \"mp3\",\n \"wav\",\n )\n\n image_file_extensions = (\n \"bmp\",\n \"gif\",\n \"jpeg\",\n \"jpg\",\n \"png\",\n \"tiff\",\n )\n\n text_file_extensions = (\n \"css\",\n \"csv\",\n \"html\",\n \"js\",\n \"json\",\n \"txt\",\n )\n\n video_file_extensions = (\n \"mp4\",\n \"avi\",\n \"mov\",\n \"webm\",\n )\n\n office_file_extensions = (\n \"doc\", # legacy MS Word\n \"docx\", # MS Word\n \"xls\", # legacy MS Excel\n \"xlsx\", # MS Excel\n \"ppt\", # legacy MS PowerPoint\n \"pptx\", # MS PowerPoint\n \"odt\", # LibreOffice document\n \"ods\", # LibreOffice spreadsheet\n \"odp\", # LibreOffice presentation\n \"pages\", # Apple Pages\n \"numbers\", # Apple Numbers\n \"key\", # Apple Keynote\n \"pdf\", # Portable Document Format\n )\n\n file_extensions = OrderedDict((\n (\"audio\", audio_file_extensions),\n (\"image\", image_file_extensions),\n (\"office\", office_file_extensions),\n (\"text\", text_file_extensions),\n (\"video\", video_file_extensions),\n ))\n\n def mime_type(self, category=None):\n \"\"\"\n :param category: application|audio|image|message|model|multipart|text|video\n \"\"\"\n category = category if category else self.random_element(\n list(self.mime_types.keys()))\n return self.random_element(self.mime_types[category])\n\n def file_name(self, category=None, extension=None):\n \"\"\"\n :param category: audio|image|office|text|video\n :param extension: file extension\n \"\"\"\n extension = extension if extension else self.file_extension(category)\n filename = self.generator.word()\n return '{0}.{1}'.format(filename, extension)\n\n def file_extension(self, category=None):\n \"\"\"\n :param category: audio|image|office|text|video\n \"\"\"\n category = category if category else self.random_element(\n list(self.file_extensions.keys()))\n return self.random_element(self.file_extensions[category])\n\n def file_path(self, depth=1, category=None, extension=None):\n \"\"\"\n :param category: audio|image|office|text|video\n :param extension: file extension\n :param depth: depth of the file (depth >= 0)\n \"\"\"\n file = self.file_name(category, extension)\n path = \"/{0}\".format(file)\n for d in range(0, depth):\n path = \"/{0}{1}\".format(self.generator.word(), path)\n return path\n"}
|
{"faker/providers/file/__init__.py": [{"type": "function", "name": "Provider.unix_device", "lines": [231, 238], "signature": "def unix_device(self, prefix=None):", "doc": ":param prefix: sd|vd|xvd"}, {"type": "function", "name": "Provider.unix_partition", "lines": [240, 246], "signature": "def unix_partition(self, prefix=None):", "doc": ":param prefix: sd|vd|xvd"}]}
| null |
["tests/providers/test_file.py::TestFile::test_unix_device", "tests/providers/test_file.py::TestFile::test_unix_partition"]
|
["tests/providers/test_file.py::TestFile::test_file_path"]
|
6edfdbf6ae90b0153309e3bf066aa3b2d16494a7
|
{"first_commit_time": 1523019530.0, "pr_title": "Added Unix device & partition", "pr_body": "### What does this changes\r\n\r\nI added:\r\n- File.unix_device : To create Unix device path. ie: /dev/sda, /dev/xvde, etc\r\n- File.unix_partition: To create Unix partition path. ie: /dev/vdc1, /dev/sdb1, etc\r\n\r\nWith tests, of course ;)", "pr_timeline": [{"time": 1523024238.0, "comment": "Thank you! \u2728 "}], "issues": {}}
|
|
joke2k/faker
| 739
|
https://github.com/joke2k/faker/pull/739
|
joke2k__faker-739
|
[]
|
5159a00f3965fca5d3c5a73c6e352a00d16d37bd
|
diff --git a/faker/providers/__init__.py b/faker/providers/__init__.py
index 417a033e7c..3a12f97eba 100644
--- a/faker/providers/__init__.py
+++ b/faker/providers/__init__.py
@@ -91,6 +91,14 @@ def random_letter(self):
return self.generator.random.choice(
getattr(string, 'letters', string.ascii_letters))
+ def random_lowercase_letter(self):
+ """Returns a random lowercase letter (between a-z)."""
+ return self.generator.random.choice(string.ascii_lowercase)
+
+ def random_uppercase_letter(self):
+ """Returns a random letter (between A-Z)."""
+ return self.generator.random.choice(string.ascii_uppercase)
+
def random_element(self, elements=('a', 'b', 'c')):
"""
Returns a random element from a passed object.
|
diff --git a/tests/test_base_provider.py b/tests/test_base_provider.py
index e45c6692f6..e604912ba0 100644
--- a/tests/test_base_provider.py
+++ b/tests/test_base_provider.py
@@ -105,3 +105,20 @@ def test_hexify(self):
for c in hexified:
self.assertIn(c, string.hexdigits[:-6].upper())
self.assertNotIn(c, string.hexdigits[-6:].lower())
+
+ def test_random_letter(self):
+ for i in range(100):
+ letter = self.provider.random_letter()
+ self.assertTrue(letter.isalpha())
+
+ def test_random_lowercase_letter(self):
+ for i in range(100):
+ letter = self.provider.random_lowercase_letter()
+ self.assertTrue(letter.isalpha())
+ self.assertEqual(letter.lower(), letter)
+
+ def test_random_uppercase_letter(self):
+ for i in range(100):
+ letter = self.provider.random_uppercase_letter()
+ self.assertTrue(letter.isalpha())
+ self.assertEqual(letter.upper(), letter)
| 2018-04-06T13:22:46
|
{}
|
{"faker/providers/__init__.py": "# coding=utf-8\n\nfrom collections import Counter\nimport re\nimport string\n\nfrom faker.utils.distribution import choice_distribution\n\n\n_re_hash = re.compile(r'#')\n_re_perc = re.compile(r'%')\n_re_excl = re.compile(r'!')\n_re_at = re.compile(r'@')\n_re_qm = re.compile(r'\\?')\n_re_cir = re.compile(r'\\^')\n\n\nclass BaseProvider(object):\n\n __provider__ = 'base'\n __lang__ = None\n\n def __init__(self, generator):\n self.generator = generator\n\n def random_int(self, min=0, max=9999):\n \"\"\"\n Returns a random integer between two values.\n\n :param min: lower bound value (inclusive; default=0)\n :param max: upper bound value (inclusive; default=9999)\n :returns: random integer between min and max\n \"\"\"\n return self.generator.random.randint(min, max)\n\n def random_digit(self):\n \"\"\"\n Returns a random digit/number\n between 0 and 9.\n \"\"\"\n return self.generator.random.randint(0, 9)\n\n def random_digit_not_null(self):\n \"\"\"\n Returns a random non-zero digit/number\n between 1 and 9.\n \"\"\"\n return self.generator.random.randint(1, 9)\n\n def random_digit_or_empty(self):\n \"\"\"\n Returns a random digit/number\n between 0 and 9 or an empty string.\n \"\"\"\n if self.generator.random.randint(0, 1):\n return self.generator.random.randint(0, 9)\n else:\n return ''\n\n def random_digit_not_null_or_empty(self):\n \"\"\"\n Returns a random non-zero digit/number\n between 1 and 9 or and empty string.\n \"\"\"\n if self.generator.random.randint(0, 1):\n return self.generator.random.randint(1, 9)\n else:\n return ''\n\n def random_number(self, digits=None, fix_len=False):\n \"\"\"\n Returns a random number with 1 digit (default, when digits==None),\n a random number with 0 to given number of digits, or a random number\n with given number to given number of digits (when ``fix_len==True``).\n\n :param digits: maximum number of digits\n :param fix_len: should the number have fixed length?\n :returns: random number with 0 to given number of digits or\n fixed length number\n \"\"\"\n if digits is None:\n digits = self.random_digit()\n if fix_len:\n return self.generator.random.randint(\n pow(10, digits - 1), pow(10, digits) - 1)\n else:\n return self.generator.random.randint(0, pow(10, digits) - 1)\n\n def random_letter(self):\n \"\"\"Returns a random letter (between a-z and A-Z).\"\"\"\n return self.generator.random.choice(\n getattr(string, 'letters', string.ascii_letters))\n\n def random_element(self, elements=('a', 'b', 'c')):\n \"\"\"\n Returns a random element from a passed object.\n\n If `elements` is a dictionary, the value will be used as\n a weighting element. 
For example::\n\n random_element({\"{{variable_1}}\": 0.5, \"{{variable_2}}\": 0.2, \"{{variable_3}}\": 0.2, \"{{variable_4}}\": 0.1})\n\n will have the following distribution:\n * `variable_1`: 50% probability\n * `variable_2`: 20% probability\n * `variable_3`: 20% probability\n * `variable_4`: 10% probability\n\n \"\"\"\n\n if isinstance(elements, dict):\n choices = elements.keys()\n probabilities = elements.values()\n return choice_distribution(\n list(choices),\n list(probabilities),\n self.generator.random)\n else:\n return self.generator.random.choice(list(elements))\n\n def random_sample(self, elements=('a', 'b', 'c'), length=None):\n if length is None:\n length = self.generator.random.randint(1, len(elements))\n\n return [self.random_element(elements) for _ in range(length)]\n\n def random_sample_unique(self, elements=('a', 'b', 'c'), length=None):\n \"\"\"\n Returns a `set` of random unique elements for the specified length.\n Multiple occurances of the same value increase its probabilty to be in the output.\n \"\"\"\n elements = Counter(elements)\n if length is None:\n length = self.generator.random.randint(1, len(elements))\n\n if length > len(elements):\n raise ValueError(\n \"Sample length cannot be longer than the number of unique elements to pick from.\")\n sample = set()\n for _ in range(length):\n element = self.random_element(elements)\n sample.add(element)\n elements.pop(element)\n return sample\n\n def randomize_nb_elements(\n self,\n number=10,\n le=False,\n ge=False,\n min=None,\n max=None):\n \"\"\"\n Returns a random value near number.\n\n :param number: value to which the result must be near\n :param le: result must be lower or equal to number\n :param ge: result must be greater or equal to number\n :returns: a random int near number\n \"\"\"\n if le and ge:\n return number\n _min = 100 if ge else 60\n _max = 100 if le else 140\n nb = int(number * self.generator.random.randint(_min, _max) / 100)\n if min is not None and nb < min:\n nb = min\n if max is not None and nb > min:\n nb = max\n return nb\n\n def numerify(self, text='###'):\n \"\"\"\n Replaces all placeholders in given text with randomized values,\n replacing: all hash sign ('#') occurrences with a random digit\n (from 0 to 9); all percentage sign ('%') occurrences with a\n random non-zero digit (from 1 to 9); all exclamation mark ('!')\n occurrences with a random digit (from 0 to 9) or an empty string;\n and all at symbol ('@') occurrences with a random non-zero digit\n (from 1 to 9) or an empty string.\n\n :param text: string to be parsed\n :returns: string with all numerical placeholders filled in\n \"\"\"\n text = _re_hash.sub(\n lambda x: str(self.random_digit()),\n text)\n text = _re_perc.sub(\n lambda x: str(self.random_digit_not_null()),\n text)\n text = _re_excl.sub(\n lambda x: str(self.random_digit_or_empty()),\n text)\n text = _re_at.sub(\n lambda x: str(self.random_digit_not_null_or_empty()),\n text)\n return text\n\n def lexify(self, text='????', letters=string.ascii_letters):\n \"\"\"\n Replaces all question mark ('?') occurrences with a random letter.\n\n :param text: string to be parsed\n :param letters: a set of letters to choose from.\n :returns: string with all letter placeholders filled in\n \"\"\"\n return _re_qm.sub(lambda x: self.random_element(letters), text)\n\n def bothify(self, text='## ??', letters=string.ascii_letters):\n \"\"\"\n Replaces all placeholders with random numbers and letters.\n\n :param text: string to be parsed\n :returns: string with all numerical and letter 
placeholders filled in\n \"\"\"\n return self.lexify(self.numerify(text), letters=letters)\n\n def hexify(self, text='^^^^', upper=False):\n \"\"\"\n Replaces all circumflex ('^') occurrences with a random\n hexadecimal character.\n\n :param text: string to be parsed\n :param upper: Format as uppercase hexadecimal\n :returns: string with all letter placeholders filled in\n \"\"\"\n letters = string.hexdigits[:-6]\n if upper:\n letters = letters.upper()\n return _re_cir.sub(lambda x: self.random_element(letters), text)\n"}
|
{"faker/providers/__init__.py": [{"type": "function", "name": "BaseProvider.random_lowercase_letter", "lines": [94, 96], "signature": "def random_lowercase_letter(self):", "doc": "Returns a random lowercase letter (between a-z)."}, {"type": "function", "name": "BaseProvider.random_uppercase_letter", "lines": [98, 100], "signature": "def random_uppercase_letter(self):", "doc": "Returns a random letter (between A-Z)."}]}
| null |
["tests/test_base_provider.py::TestBaseProvider::test_random_lowercase_letter", "tests/test_base_provider.py::TestBaseProvider::test_random_uppercase_letter"]
|
["tests/test_base_provider.py::TestBaseProvider::test_bothify_empty_text", "tests/test_base_provider.py::TestBaseProvider::test_bothify_mixed_values", "tests/test_base_provider.py::TestBaseProvider::test_bothify_only_letters", "tests/test_base_provider.py::TestBaseProvider::test_hexify", "tests/test_base_provider.py::TestBaseProvider::test_lexify_empty_text", "tests/test_base_provider.py::TestBaseProvider::test_lexify_mixed_values", "tests/test_base_provider.py::TestBaseProvider::test_lexify_only_letters", "tests/test_base_provider.py::TestBaseProvider::test_random_letter"]
|
6edfdbf6ae90b0153309e3bf066aa3b2d16494a7
|
{"first_commit_time": 1523020761.0, "pr_title": "Added random lower and uppercase letters", "pr_body": "### What does this changes\r\n\r\nAdded method to get lowercase and uppercase ASCII letters.\r\n\r\n### What was wrong\r\n\r\nIn my previous PR (#738), I wanted to have a random ASCII letter,\r\nBut have to import string or make a small trick.\r\nHere's all is made is the base provider.", "pr_timeline": [{"time": 1523024296.0, "comment": "good idea! Thank you!"}], "issues": {}}
|
|
joke2k/faker
| 874
|
https://github.com/joke2k/faker/pull/874
|
joke2k__faker-874
|
[]
|
02f678a3d15041e3aa10b226faae88e7d28d6696
|
diff --git a/faker/providers/company/ja_JP/__init__.py b/faker/providers/company/ja_JP/__init__.py
index 649fafadaa..b650fb06f7 100644
--- a/faker/providers/company/ja_JP/__init__.py
+++ b/faker/providers/company/ja_JP/__init__.py
@@ -5,10 +5,15 @@
class Provider(CompanyProvider):
formats = (
- '{{company_prefix}} {{last_name}}',
+ '{{company_prefix}}{{last_name}}{{company_category}}',
+ '{{last_name}}{{company_category}}{{company_prefix}}',
)
company_prefixes = ('株式会社', '有限会社', '合同会社')
+ company_categories = ('水産', '農林', '鉱業', '建設', '食品', '印刷', '電気', 'ガス', '情報', '通信', '運輸', '銀行', '保険')
def company_prefix(self):
return self.random_element(self.company_prefixes)
+
+ def company_category(self):
+ return self.random_element(self.company_categories)
|
diff --git a/tests/providers/test_company.py b/tests/providers/test_company.py
index ca6b220b55..0334a08719 100644
--- a/tests/providers/test_company.py
+++ b/tests/providers/test_company.py
@@ -37,15 +37,19 @@ def setUp(self):
def test_company(self):
prefixes = JaProvider.company_prefixes
-
prefix = self.factory.company_prefix()
assert isinstance(prefix, six.string_types)
assert prefix in prefixes
+ categories = JaProvider.company_categories
+ category = self.factory.company_category()
+ assert isinstance(category, six.string_types)
+ assert category in categories
+
company = self.factory.company()
assert isinstance(company, six.string_types)
- assert any(prefix in company for prefix in prefixes)
- assert any(company.startswith(prefix) for prefix in prefixes)
+ assert any(company.startswith(prefix) or company.endswith(prefix) for prefix in prefixes)
+ assert any(category in company for category in categories)
class TestPtBR(unittest.TestCase):
| 2018-12-02T08:23:21
|
{}
|
{"faker/providers/company/ja_JP/__init__.py": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\nfrom .. import Provider as CompanyProvider\n\n\nclass Provider(CompanyProvider):\n formats = (\n '{{company_prefix}} {{last_name}}',\n )\n\n company_prefixes = ('株式会社', '有限会社', '合同会社')\n\n def company_prefix(self):\n return self.random_element(self.company_prefixes)\n"}
|
{"faker/providers/company/ja_JP/__init__.py": [{"type": "function", "name": "Provider.company_category", "lines": [18, 19], "signature": "def company_category(self):", "doc": ""}]}
| null |
["tests/providers/test_company.py::TestJaJP::test_company"]
|
["tests/providers/test_company.py::TestFiFI::test_company_business_id", "tests/providers/test_company.py::TestPtBR::test_pt_BR_cnpj", "tests/providers/test_company.py::TestPtBR::test_pt_BR_company_id", "tests/providers/test_company.py::TestPtBR::test_pt_BR_company_id_checksum", "tests/providers/test_company.py::TestHuHU::test_company", "tests/providers/test_company.py::TestHuHU::test_company_suffix", "tests/providers/test_company.py::TestPlPL::test_company_prefix", "tests/providers/test_company.py::TestPlPL::test_company_suffix", "tests/providers/test_company.py::TestPlPL::test_company_vat", "tests/providers/test_company.py::TestPlPL::test_company_vat_checksum", "tests/providers/test_company.py::TestPlPL::test_local_regon", "tests/providers/test_company.py::TestPlPL::test_local_regon_checksum", "tests/providers/test_company.py::TestPlPL::test_regon", "tests/providers/test_company.py::TestPlPL::test_regon_checksum"]
|
6edfdbf6ae90b0153309e3bf066aa3b2d16494a7
|
{"first_commit_time": 1543492138.0, "pr_title": "Add categories to Japanese company provider", "pr_body": "### What does this changes\r\n\r\nAdded variety of Japanese company names.\r\n\r\nThe category list is from Ruby Faker. https://github.com/stympy/faker/blob/master/lib/locales/ja.yml#L28\r\n\r\n### What was wrong\r\n\r\nNothing was wrong.\r\n\r\n### How this fixes it\r\n\r\nAdded categories and format patterns.\r\n\r\n", "pr_timeline": [{"time": 1543915146.0, "comment": "Coveralls says coverage decreased. But this part is completely unrelated to this PR.\r\nCan someone who has WRITE access rebuild travis? It will solve coverage decreasing.\r\n\r\nThe problem is issued here https://github.com/joke2k/faker/issues/773"}, {"time": 1544199540.0, "comment": "Thank you! \u2728 "}], "issues": {}}
|
|
joke2k/faker
| 964
|
https://github.com/joke2k/faker/pull/964
|
joke2k__faker-964
|
["931"]
|
89eefd7962928b3150e72707fc8718613ab63e63
|
diff --git a/faker/providers/person/pl_PL/__init__.py b/faker/providers/person/pl_PL/__init__.py
index 49c622df8a..025b50f135 100644
--- a/faker/providers/person/pl_PL/__init__.py
+++ b/faker/providers/person/pl_PL/__init__.py
@@ -21,28 +21,6 @@ def checksum_identity_card_number(characters):
return check_digit
-def generate_pesel_checksum_value(pesel_digits):
- """
- Calculates and returns a control digit for given PESEL.
- """
- checksum_values = [9, 7, 3, 1, 9, 7, 3, 1, 9, 7]
-
- checksum = sum((int(a) * b for a, b in zip(list(pesel_digits), checksum_values)))
-
- return checksum % 10
-
-
-def checksum_pesel_number(pesel_digits):
- """
- Calculates and returns True if PESEL is valid.
- """
- checksum_values = [1, 3, 7, 9, 1, 3, 7, 9, 1, 3, 1]
-
- checksum = sum((int(a) * b for a, b in zip(list(pesel_digits), checksum_values)))
-
- return checksum % 10 == 0
-
-
class Provider(PersonProvider):
formats = (
'{{first_name}} {{last_name}}',
@@ -725,30 +703,74 @@ def identity_card_number(self):
return ''.join(str(character) for character in identity)
- def pesel(self):
+ @staticmethod
+ def pesel_compute_check_digit(pesel):
+ checksum_values = [9, 7, 3, 1, 9, 7, 3, 1, 9, 7]
+ return sum(int(a) * b for a, b in zip(pesel, checksum_values)) % 10
+
+ def pesel(self, date_of_birth=None, sex=None):
"""
Returns 11 characters of Universal Electronic System for Registration of the Population.
Polish: Powszechny Elektroniczny System Ewidencji Ludności.
PESEL has 11 digits which identifies just one person.
- Month: if person was born in 1900-2000, december is 12. If person was born > 2000, we have to add 20 to month,
- so december is 32.
- Person id: last digit identifies person's sex. Even for females, odd for males.
+ pesel_date: if person was born in 1900-2000, december is 12. If person was born > 2000, we have to add 20 to
+ month, so december is 32.
+ pesel_sex: last digit identifies person's sex. Even for females, odd for males.
https://en.wikipedia.org/wiki/PESEL
"""
+ if date_of_birth is None:
+ date_of_birth = self.generator.date_of_birth()
- birth = self.generator.date_of_birth()
+ pesel_date = '{year}{month:02d}{day:02d}'.format(
+ year=date_of_birth.year, day=date_of_birth.day,
+ month=date_of_birth.month if date_of_birth.year < 2000 else date_of_birth.month + 20)
+ pesel_date = pesel_date[2:]
- year_pesel = str(birth.year)[-2:]
- month_pesel = birth.month if birth.year < 2000 else birth.month + 20
- day_pesel = birth.day
- person_id = self.random_int(1000, 9999)
+ pesel_core = ''.join(map(str, (self.random_digit() for _ in range(3))))
+ pesel_sex = self.random_digit()
- current_pesel = '{year}{month:02d}{day:02d}{person_id:04d}'.format(year=year_pesel, month=month_pesel,
- day=day_pesel,
- person_id=person_id)
+ if (sex == 'M' and pesel_sex % 2 == 0) or (sex == 'F' and pesel_sex % 2 == 1):
+ pesel_sex = (pesel_sex + 1) % 10
+
+ pesel = '{date}{core}{sex}'.format(date=pesel_date, core=pesel_core, sex=pesel_sex)
+ pesel += str(self.pesel_compute_check_digit(pesel))
+
+ return pesel
+
+ @staticmethod
+ def pwz_doctor_compute_check_digit(x):
+ return sum((i+1)*d for i, d in enumerate(x)) % 11
+
+ def pwz_doctor(self):
+ """
+ Function generates an identification number for medical doctors
+ Polish: Prawo Wykonywania Zawodu (PWZ)
+
+ https://www.nil.org.pl/rejestry/centralny-rejestr-lekarzy/zasady-weryfikowania-nr-prawa-wykonywania-zawodu
+ """
+ core = [self.random_digit() for _ in range(6)]
+ check_digit = self.pwz_doctor_compute_check_digit(core)
+
+ if check_digit == 0:
+ core[-1] = (core[-1] + 1) % 10
+ check_digit = self.pwz_doctor_compute_check_digit(core)
+
+ return '{}{}'.format(check_digit, ''.join(map(str, core)))
+
+ def pwz_nurse(self, kind='nurse'):
+ """
+ Function generates an identification number for nurses and midwives
+ Polish: Prawo Wykonywania Zawodu (PWZ)
+
+ http://arch.nipip.pl/index.php/prawo/uchwaly/naczelnych-rad/w-roku-2015/posiedzenie-15-17-grudnia/3664-uchwala-
+ nr-381-vi-2015-w-sprawie-trybu-postepowania-dotyczacego-stwierdzania-i-przyznawania-prawa-wykonywania-zawodu-pi
+ elegniarki-i-zawodu-poloznej-oraz-sposobu-prowadzenia-rejestru-pielegniarek-i-rejestru-poloznych-przez-okregowe
+ -rady-pielegniarek-i-polo
+ """
+ region = self.random_int(1, 45)
+ core = [self.random_digit() for _ in range(5)]
+ kind_char = 'A' if kind == 'midwife' else 'P'
- checksum_value = generate_pesel_checksum_value(current_pesel)
- return '{pesel_without_checksum}{checksum_value}'.format(pesel_without_checksum=current_pesel,
- checksum_value=checksum_value)
+ return '{:02d}{}{}'.format(region, ''.join(map(str, core)), kind_char)
|
diff --git a/tests/providers/test_person.py b/tests/providers/test_person.py
index 3767ef4d89..90b8d4c2ca 100644
--- a/tests/providers/test_person.py
+++ b/tests/providers/test_person.py
@@ -4,9 +4,14 @@
import re
import unittest
-
+import datetime
import six
+try:
+ from unittest import mock
+except ImportError:
+ import mock
+
from faker import Faker
from faker.providers.person.ar_AA import Provider as ArProvider
from faker.providers.person.fi_FI import Provider as FiProvider
@@ -14,9 +19,9 @@
from faker.providers.person.ne_NP import Provider as NeProvider
from faker.providers.person.sv_SE import Provider as SvSEProvider
from faker.providers.person.cs_CZ import Provider as CsCZProvider
+from faker.providers.person.pl_PL import Provider as PlPLProvider
from faker.providers.person.pl_PL import (
checksum_identity_card_number as pl_checksum_identity_card_number,
- checksum_pesel_number as pl_checksum_pesel_number,
)
from faker.providers.person.zh_CN import Provider as ZhCNProvider
from faker.providers.person.zh_TW import Provider as ZhTWProvider
@@ -207,13 +212,43 @@ def test_identity_card_number(self):
for _ in range(100):
assert re.search(r'^[A-Z]{3}\d{6}$', self.factory.identity_card_number())
- def test_pesel_number_checksum(self):
- assert pl_checksum_pesel_number('31090655159') is True
- assert pl_checksum_pesel_number('95030853577') is True
- assert pl_checksum_pesel_number('05260953442') is True
- assert pl_checksum_pesel_number('31090655158') is False
- assert pl_checksum_pesel_number('95030853576') is False
- assert pl_checksum_pesel_number('05260953441') is False
+ @mock.patch.object(PlPLProvider, 'random_digit')
+ def test_pesel_birth_date(self, mock_random_digit):
+ mock_random_digit.side_effect = [3, 5, 8, 8, 7, 9, 9, 3]
+ assert self.factory.pesel(datetime.date(1999, 12, 31)) == '99123135885'
+ assert self.factory.pesel(datetime.date(2000, 1, 1)) == '00210179936'
+
+ @mock.patch.object(PlPLProvider, 'random_digit')
+ def test_pesel_sex_male(self, mock_random_digit):
+ mock_random_digit.side_effect = [1, 3, 4, 5, 6, 1, 7, 0]
+ assert self.factory.pesel(datetime.date(1909, 3, 3), 'M') == '09030313454'
+ assert self.factory.pesel(datetime.date(1913, 8, 16), 'M') == '13081661718'
+
+ @mock.patch.object(PlPLProvider, 'random_digit')
+ def test_pesel_sex_female(self, mock_random_digit):
+ mock_random_digit.side_effect = [4, 9, 1, 6, 6, 1, 7, 3]
+ assert self.factory.pesel(datetime.date(2007, 4, 13), 'F') == '07241349161'
+ assert self.factory.pesel(datetime.date(1933, 12, 16), 'F') == '33121661744'
+
+ @mock.patch.object(PlPLProvider, 'random_digit')
+ def test_pwz_doctor(self, mock_random_digit):
+ mock_random_digit.side_effect = [6, 9, 1, 9, 6, 5, 2, 7, 9, 9, 1, 5]
+ assert self.factory.pwz_doctor() == '2691965'
+ assert self.factory.pwz_doctor() == '4279915'
+
+ @mock.patch.object(PlPLProvider, 'random_digit')
+ def test_pwz_doctor_check_digit_zero(self, mock_random_digit):
+ mock_random_digit.side_effect = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 9, 9]
+ assert self.factory.pwz_doctor() == '6000012'
+ assert self.factory.pwz_doctor() == '1000090'
+
+ @mock.patch.object(PlPLProvider, 'random_int')
+ @mock.patch.object(PlPLProvider, 'random_digit')
+ def test_pwz_nurse(self, mock_random_digit, mock_random_int):
+ mock_random_digit.side_effect = [3, 4, 5, 6, 7, 1, 7, 5, 1, 2]
+ mock_random_int.side_effect = [45, 3]
+ assert self.factory.pwz_nurse(kind='nurse') == '4534567P'
+ assert self.factory.pwz_nurse(kind='midwife') == '0317512A'
class TestCsCZ(unittest.TestCase):
| 2019-05-27T10:39:41
|
{}
|
{"faker/providers/person/pl_PL/__init__.py": "# coding=utf-8\nfrom __future__ import unicode_literals\nfrom .. import Provider as PersonProvider\n\n\ndef checksum_identity_card_number(characters):\n \"\"\"\n Calculates and returns a control digit for given list of characters basing on Identity Card Number standards.\n \"\"\"\n weights_for_check_digit = [7, 3, 1, 0, 7, 3, 1, 7, 3]\n check_digit = 0\n\n for i in range(3):\n check_digit += weights_for_check_digit[i] * (ord(characters[i]) - 55)\n\n for i in range(4, 9):\n check_digit += weights_for_check_digit[i] * characters[i]\n\n check_digit %= 10\n\n return check_digit\n\n\ndef generate_pesel_checksum_value(pesel_digits):\n \"\"\"\n Calculates and returns a control digit for given PESEL.\n \"\"\"\n checksum_values = [9, 7, 3, 1, 9, 7, 3, 1, 9, 7]\n\n checksum = sum((int(a) * b for a, b in zip(list(pesel_digits), checksum_values)))\n\n return checksum % 10\n\n\ndef checksum_pesel_number(pesel_digits):\n \"\"\"\n Calculates and returns True if PESEL is valid.\n \"\"\"\n checksum_values = [1, 3, 7, 9, 1, 3, 7, 9, 1, 3, 1]\n\n checksum = sum((int(a) * b for a, b in zip(list(pesel_digits), checksum_values)))\n\n return checksum % 10 == 0\n\n\nclass Provider(PersonProvider):\n formats = (\n '{{first_name}} {{last_name}}',\n '{{first_name}} {{last_name}}',\n '{{first_name}} {{last_name}}',\n '{{first_name}} {{last_name}}',\n '{{first_name}} {{last_name}}',\n '{{prefix_female}} {{first_name_female}} {{last_name_female}}',\n '{{first_name}} {{last_name}}',\n '{{prefix_male}} {{first_name_male}} {{last_name_male}}',\n )\n\n first_names_male = (\n 'Jakub',\n 'Jan',\n 'Mateusz',\n 'Bartek',\n 'Kacper',\n 'Michał',\n 'Szymon',\n 'Antoni',\n 'Filip',\n 'Piotr',\n 'Maciej',\n 'Aleksander',\n 'Franciszek',\n 'Mikołaj',\n 'Adam',\n 'Stanisław',\n 'Wiktor',\n 'Krzysztof',\n 'Wojciech',\n 'Igor',\n 'Maksymilian',\n 'Karol',\n 'Dawid',\n 'Tomasz',\n 'Patryk',\n 'Oskar',\n 'Paweł',\n 'Dominik',\n 'Kamil',\n 'Oliwier',\n 'Ignacy',\n 'Marcel',\n 'Hubert',\n 'Adrian',\n 'Łukasz',\n 'Sebastian',\n 'Julian',\n 'Tymon',\n 'Krystian',\n 'Marcin',\n 'Damian',\n 'Miłosz',\n 'Leon',\n 'Alan',\n 'Tymoteusz',\n 'Kajetan',\n 'Grzegorz',\n 'Daniel',\n 'Rafał',\n 'Eryk',\n 'Konrad',\n 'Ksawery',\n 'Gabriel',\n 'Nikodem',\n 'Bruno',\n 'Przemysław',\n 'Borys',\n 'Artur',\n 'Olaf',\n 'Jerzy',\n 'Marek',\n 'Tadeusz',\n 'Andrzej',\n 'Witold',\n 'Iwo',\n 'Juliusz',\n 'Robert',\n 'Błażej',\n 'Cezary',\n 'Jeremi',\n 'Jacek',\n 'Konstanty',\n 'Ryszard',\n 'Stefan',\n 'Aleks',\n 'Gustaw',\n 'Radosław',\n 'Emil',\n 'Norbert',\n 'Fabian',\n 'Jędrzej',\n 'Alex',\n 'Kazimierz',\n 'Arkadiusz',\n 'Kornel',\n 'Józef',\n 'Natan',\n 'Cyprian',\n 'Mieszko',\n 'Nataniel',\n 'Maks',\n 'Maurycy',\n 'Olgierd',\n 'Dariusz',\n 'Leonard',\n 'Mariusz',\n 'Albert',\n 'Fryderyk',\n 'Ernest',\n 'Tobiasz')\n\n first_names_female = (\n 'Kamila',\n 'Ewa',\n 'Blanka',\n 'Olga',\n 'Kalina',\n 'Klara',\n 'Urszula',\n 'Sandra',\n 'Kaja',\n 'Marianna',\n 'Kornelia',\n 'Justyna',\n 'Monika',\n 'Sara',\n 'Adrianna',\n 'Aniela',\n 'Agnieszka',\n 'Róża',\n 'Marcelina',\n 'Roksana',\n 'Natasza',\n 'Lidia',\n 'Malwina',\n 'Karina',\n 'Ada',\n 'Marika',\n 'Anastazja',\n 'Sonia',\n 'Nela',\n 'Dorota',\n 'Apolonia',\n 'Ida',\n 'Eliza',\n 'Angelika',\n 'Anna Maria',\n 'Liwia',\n 'Ewelina',\n 'Julita',\n 'Rozalia',\n 'Inga',\n 'Krystyna',\n 'Bianka',\n 'Dagmara',\n 'Melania',\n 'Sylwia',\n 'Nicole',\n 'Anita',\n 'Aurelia',\n 'Elżbieta',\n 'Janina',\n 'Julianna',\n 'Tola',\n 'Gaja')\n\n unisex_last_names = (\n 'Wandzel', 
'Pajda', 'Dzienis', 'Borysewicz', 'Szlaga', 'Krzysiek', 'Iwańczyk', 'Cierpisz',\n 'Borczyk', 'Szymula', 'Pietrasiak', 'Minkiewicz', 'Hojka', 'Goral', 'Staś', 'Smoter',\n 'Bosek', 'Bitner', 'Kondej', 'Furgał', 'Durlik', 'Kusa', 'Pacewicz', 'Masiak', 'Kucz',\n 'Cichowlas', 'Anders', 'Wawszczak', 'Słupek', 'Pych', 'Piszcz', 'Opoka', 'Lorenz',\n 'Grochowina', 'Wicha', 'Pawliczek', 'Kus', 'Zysk', 'Sroga', 'Rychel', 'Patora', 'Maciocha',\n 'Rozmiarek', 'Pesta', 'Działak', 'Godyń', 'Chmara', 'Jakubaszek', 'Bałazy', 'Rykała',\n 'Wika', 'Kotala', 'Fikus', 'Sus', 'Kunc', 'Mateusiak', 'Kusyk', 'Romańczyk', 'Makieła',\n 'Lejman', 'Kołaczek', 'Kurzak', 'Bondyra', 'Podkowa', 'Paśnik', 'Oleszko', 'Marcol',\n 'Szybiak', 'Ruszczak', 'Zbroja', 'Stosik', 'Gruchot', 'Boś', 'Wożniak', 'Gniewek', 'Buława',\n 'Wiatrak', 'Talaśka', 'Patalas', 'Kwoka', 'Krzempek', 'Danilczuk', 'Ważny', 'Sidorczuk',\n 'Legutko', 'Kobos', 'Tylek', 'Szkoda', 'Przerwa', 'Linek', 'Galik', 'Dulewicz', 'Drozda',\n 'Nowek', 'Matulewicz', 'Karpeta', 'Jurczuk', 'Buśko', 'Słomian', 'Drywa', 'Rybus', 'Langa',\n 'Kluczek', 'Orkisz', 'Ziemkiewicz', 'Siara', 'Para', 'Kwasek', 'Januszko', 'Hejduk',\n 'Łuszczak', 'Sprawka', 'Kiełek', 'Jop', 'Faryna', 'Zimoń', 'Utrata', 'Mirga', 'Kozaczuk',\n 'Wojtyna', 'Rzońca', 'Madejczyk', 'Glapiak', 'Dziadkowiec', 'Ochnio', 'Sieja', 'Malewicz',\n 'Bachanek', 'Mirocha', 'Domżał', 'Tworzydło', 'Płaneta', 'Feret', 'Witas', 'Figat', 'Muc',\n 'Kuciel', 'Kielan', 'Hałat', 'Tecław', 'Loba', 'Klucznik', 'Bielas', 'Rajczyk', 'Myszak',\n 'Muniak', 'Michalczak', 'Kochanowicz', 'Szołtysik', 'Rychert', 'Pyda', 'Janowiak', 'Janiga',\n 'Grądziel', 'Wdowczyk', 'Pytlarz', 'Kuzia', 'Dziewa', 'Bernatowicz', 'Ostapiuk', 'Rejniak',\n 'Kotlarek', 'Gajownik', 'Brach', 'Tatarek', 'Szyc', 'Masny', 'Drop', 'Saternus',\n 'Podsiadła', 'Patyna', 'Kargol', 'Truchan', 'Pietrusiak', 'Kolbusz', 'Kalota', 'Hołubowicz',\n 'Andrzejuk', 'Zdziech', 'Szymonik', 'Sych', 'Strojna', 'Seta', 'Orman', 'Hermanowicz',\n 'Denkiewicz', 'Bulanda', 'Szwaja', 'Jankowicz', 'Pochopień', 'Kobza', 'Karwot', 'Kałek',\n 'Laszuk', 'Aleksiejuk', 'Witaszek', 'Wawryniuk', 'Jacak', 'Bugla', 'Wejman', 'Jaroch',\n 'Janiszek', 'Gorzelańczyk', 'Zieja', 'Krochmal', 'Filas', 'Wawrzynowicz', 'Szałas',\n 'Machoń', 'Labus', 'Irzyk', 'Gomuła', 'Wesoły', 'Solarek', 'Kośka', 'Myszk', 'Moryc',\n 'Lizoń', 'Lesisz', 'Kiełbowicz', 'Serwa', 'Piórek', 'Majdak', 'Bruzda', 'Bakun', 'Subocz',\n 'Stypuła', 'Gołek', 'Fik', 'Wołczyk', 'Waniek', 'Parzyszek', 'Oszust', 'Burza', 'Żbik',\n 'Misztela', 'Kurant', 'Drygas', 'Łaciak', 'Franczuk', 'Rycerz', 'Żok', 'Zeman', 'Mejer',\n 'Kanarek', 'Jędruch', 'Saj', 'Nieroda', 'Juśkiewicz', 'Surdyk', 'Paliga', 'Makaruk',\n 'Hamera', 'Łukowicz', 'Barcz', 'Witos', 'Strzelczak', 'Siedlaczek', 'Pakosz', 'Burchardt',\n 'Nurek', 'Morys', 'Korbel', 'Kokosza', 'Kijanka', 'Bobak', 'Samson', 'Jarosiewicz',\n 'Szelest', 'Stanisławek', 'Perka', 'Ciepłuch', 'Bryja', 'Świątkiewicz', 'Samul', 'Rohde',\n 'Prucnal', 'Miszkiewicz', 'Kuropatwa', 'Gajdzik', 'Mućka', 'Misiaszek', 'Fornalik',\n 'Wiszowaty', 'Thiel', 'Osiadacz', 'Miśko', 'Mielcarz', 'Drózd', 'Oleksiuk', 'Matyka',\n 'Łyczak', 'Cabała', 'Ośka', 'Bereś', 'Armatys', 'Szmajda', 'Młyńczak', 'Kupidura', 'Kijas',\n 'Chomiuk', 'Gowin', 'Dybka', 'Bródka', 'Wziątek', 'Ślęczka', 'Koj', 'Drabczyk', 'Buczko',\n 'Sawko', 'Kłysz', 'Karpiel', 'Jarczyk', 'Flaga', 'Fiedorczuk', 'Tomalak', 'Nałęcz',\n 'Choroś', 'Brańka', 'Rajchel', 'Kiedrowicz', 'Gąbka', 'Fiołek', 'Drozdowicz', 'Stypa',\n 'Kawala', 'Mazanek', 'Kwinta', 'Koczy', 
'Hyży', 'Grzejszczak', 'Wywiał', 'Sacharczuk',\n 'Jaroszuk', 'Golon', 'Chachuła', 'Malarczyk', 'Kawula', 'Bohdanowicz', 'Bartocha', 'Lewko',\n 'Igras', 'Damps', 'Tlałka', 'Niechciał', 'Łyskawa', 'Goś', 'Więckiewicz', 'Leśko', 'Konsek',\n 'Juszczuk', 'Szczudło', 'Poniedziałek', 'Palus', 'Bodziony', 'Śmieszek', 'Rej', 'Pietryga',\n 'Mieszała', 'Malcher', 'Kopij', 'Kaczan', 'Janasik', 'Watras', 'Stojak', 'Strzyż',\n 'Siemieniec', 'Kośnik', 'Kasperczak', 'Woszczyna', 'Wiech', 'Stefanik', 'Miara', 'Łodyga',\n 'Walo', 'Oleksiewicz', 'Mainka', 'Baka', 'Trybuś', 'Samol', 'Jamroży', 'Gruszczyk',\n 'Deluga', 'Trzos', 'Sinkiewicz', 'Lesik', 'Kroczak', 'Klamka', 'Grzelczyk', 'Dycha',\n 'Ciesielczyk', 'Armata', 'Wawrzyczek', 'Prokopczyk', 'Hampel', 'Grzech', 'Rzucidło', 'Rawa',\n 'Kręcisz', 'Karyś', 'Rodzeń', 'Karalus', 'Mikosz', 'Kazimierczuk', 'Hajda', 'Berg', 'Teper',\n 'Słabosz', 'Dziechciarz', 'Dmoch', 'Śleziak', 'Pietrek', 'Martyka', 'Wołk', 'Smętek',\n 'Kroll', 'Grab', 'Dziedzina', 'Noszczyk', 'Kazek', 'Jędrusiak', 'Cebo', 'Tokarek', 'Małota',\n 'Hanc', 'Uliasz', 'Pysz', 'Piłka', 'Błaszyk', 'Wyrobek', 'Trybus', 'Szlęk', 'Pindor', 'Łuc',\n 'Baszak', 'Majak', 'Łój', 'Szczypek', 'Łuczkiewicz', 'Łaszcz', 'Froń', 'Dybaś', 'Budner',\n 'Ostasz', 'Siekierka', 'Pilipczuk', 'Kandzia', 'Gieroń', 'Drost', 'Chwała', 'Malesza',\n 'Fiedler', 'Suszko', 'Kurnik', 'Bereda', 'Nalewajko', 'Duczmal', 'Sieradzan', 'Pietrasz',\n 'Cecot', 'Tomaszkiewicz', 'Rabiej', 'Staniaszek', 'Mikusek', 'Kuryłowicz', 'Herda',\n 'Brzykcy', 'Początek', 'Ochal', 'Koral', 'Kaźmierczyk', 'Kandziora', 'Sycz', 'Reich',\n 'Lindner', 'Fulara', 'Przybycień', 'Hermann', 'Forysiak', 'Strzępek', 'Sondej', 'Pyć',\n 'Piaścik', 'Grygo', 'Wita', 'Szynkiewicz', 'Piesik', 'Nasiadka', 'Murach', 'Kostro',\n 'Hinca', 'Engler', 'Tułacz', 'Przewoźny', 'Pizoń', 'Łapacz', 'Hajduga', 'Bulczak', 'Bubel',\n 'Smutek', 'Samoraj', 'Plaskota', 'Fraś', 'Becker', 'Baranowicz', 'Trznadel', 'Topa',\n 'Stanisławczyk', 'Lato', 'Kołton', 'Uryga', 'Tomaszczyk', 'Szymanik', 'Stochmal',\n 'Kiszczak', 'Dylong', 'Chruszcz', 'Byra', 'Friedrich', 'Cyganik', 'Pacocha', 'Jonczyk',\n 'Szymańczyk', 'Radko', 'Meler', 'Kuran', 'Koman', 'Błądek', 'Banachowicz', 'Babiuch',\n 'Kruszka', 'Fijoł', 'Zatoń', 'Włodarz', 'Trepka', 'Świerszcz', 'Strzała', 'Opioła', 'Kursa',\n 'Dyś', 'Broś', 'Tyka', 'Syroka', 'Grys', 'Szczepaniuk', 'Marcińczyk', 'Leks', 'Kubina',\n 'Janke', 'Dąbrowicz', 'Hulbój', 'Cieciura', 'Chochół', 'Szpila', 'Samiec', 'Rduch',\n 'Nabiałek', 'Margol', 'Kopa', 'Engel', 'Czerepak', 'Rosłon', 'Pusz', 'Matla', 'Wołoch',\n 'Pazik', 'Nazimek', 'Kuśka', 'Karczmarz', 'Gajzler', 'Sławik', 'Lalak', 'Grabias', 'Gągała',\n 'Chwedoruk', 'Wasil', 'Pachołek', 'Wichłacz', 'Walentynowicz', 'Tylus', 'Kosz', 'Iwanow',\n 'Garczarek', 'Dorociak', 'Boguta', 'Betka', 'Widuch', 'Wawrzynek', 'Szymajda', 'Stanaszek',\n 'Klama', 'Goj', 'Dzierżak', 'Walasik', 'Skwira', 'Luks', 'Kujawiak', 'Dworczak', 'Tofil',\n 'Rurarz', 'Pachla', 'Lenarcik', 'Kusztal', 'Chaber', 'Skała', 'Radzewicz', 'Kramer',\n 'Kochel', 'Dukat', 'Naglik', 'Szurek', 'Litwiniuk', 'Halama', 'Grzela', 'Wojaczek',\n 'Popielarczyk', 'Krysik', 'Dawidczyk', 'Barteczko', 'Balik', 'Warych', 'Miodek', 'Madera',\n 'Leszczyk', 'Kolanek', 'Fijak', 'Furgała', 'Faruga', 'Poleszak', 'Kusek', 'Herok', 'Golda',\n 'Rymarz', 'Pociask', 'Kowalak', 'Czupryna', 'Trzcionka', 'Sulik', 'Matulka', 'Herbut',\n 'Stosio', 'Kurtyka', 'Ciuk', 'Szczerbiak', 'Snoch', 'Budniak', 'Boruc', 'Tylka', 'Kwak',\n 'Garncarz', 'Szuta', 'Miśkowiec', 'Sykut', 'Jarosik', 'Golus', 
'Chmielak', 'Abramczuk',\n 'Skrobek', 'Patrzałek', 'Linkiewicz', 'Jereczek', 'Jarema', 'Flasza', 'Fiedoruk',\n 'Budkiewicz', 'Świgoń', 'Przewoźnik', 'Parada', 'Heller', 'Gierak', 'Ferdyn', 'Sumera',\n 'Bik', 'Kamela', 'Ciereszko', 'Świtaj', 'Pastuszko', 'Łobacz', 'Kuba', 'Krzywonos',\n 'Granat', 'Szóstak', 'Płoskonka', 'Kumorek', 'Komuda', 'Klinkosz', 'Falba', 'Szczechowicz',\n 'Rozum', 'Moroń', 'Matynia', 'Greszta', 'Łuczka', 'Dziewit', 'Mueller', 'Kapral',\n 'Hrynkiewicz', 'Gonsior', 'Forma', 'Ciesiółka', 'Bors', 'Siwa', 'Niemczuk', 'Nazar',\n 'Liśkiewicz', 'Jarczak', 'Felisiak', 'Fedorczyk', 'Wilusz', 'Pastor', 'Gierek', 'Romaniak',\n 'Oleszczak', 'Juras', 'Zachwieja', 'Szmurło', 'Smektała', 'Przewoźna', 'Nikel', 'Chlebek',\n 'Balas', 'Latuszek', 'Ambrozik', 'Janczura', 'Aleksandrzak', 'Wojtalik', 'Rok', 'Nagórka',\n 'Latoszek', 'Kubowicz', 'Domian', 'Ciemięga', 'Soliwoda', 'Komsta', 'Filus', 'Wierzchoń',\n 'Skotarczak', 'Cader', 'Trzmiel', 'Jagieło', 'Wawszczyk', 'Troć', 'Swatek', 'Bączkiewicz',\n 'Ulewicz', 'Tutka', 'Pałac', 'Mydlarz', 'Molka', 'Janiuk', 'Guziak', 'Frycz', 'Drzał',\n 'Zacharek', 'Wiencek', 'Szłapka', 'Kurach', 'Bareja', 'Pawlukiewicz', 'Moździerz', 'Mich',\n 'Lisik', 'Kałwa', 'Dadej', 'Matela', 'Lenda', 'Wolff', 'Wojnicz', 'Sendor', 'Mrózek',\n 'Łągiewka', 'Kulisz', 'Kolarz', 'Walus', 'Mikoda', 'Kral', 'Darul', 'Warczak', 'Kunysz',\n 'Kidoń', 'Ciuła', 'Chomiak', 'Rzeźniczak', 'Przeniosło', 'Chomik', 'Zimoląg', 'Wojtyś',\n 'Mędrala', 'Hennig', 'Handzel', 'Twardzik', 'Śmieja', 'Solarczyk', 'Mendak', 'Lemieszek',\n 'Kiryluk', 'Wrześniak', 'Kwarciak', 'Gasik', 'Borysiewicz', 'Sierota', 'Mysiak',\n 'Kraszkiewicz', 'Hyjek', 'Polaszek', 'Pazera', 'Kubisz', 'Kościukiewicz', 'Kopczyk',\n 'Kliber', 'Kaczmar', 'Kaczka', 'Bicz', 'Augustynek', 'Straszak', 'Sajewicz', 'Glanc',\n 'Bzymek', 'Zieniewicz', 'Pagacz', 'Gortat', 'Bubak', 'Warwas', 'Skoneczna', 'Nestorowicz',\n 'Dziopa', 'Danisz', 'Bazydło', 'Garncarek', 'Albin', 'Szeszko', 'Naczk', 'Łukowiak',\n 'Kopciuch', 'Jakoniuk', 'Węgrzynowicz', 'Walencik', 'Turlej', 'Leonowicz', 'Kierepka',\n 'Hendzel', 'Fronczek', 'Zarzeczna', 'Zagrodnik', 'Wałęsa', 'Trzepizur', 'Tereszkiewicz',\n 'Szczubełek', 'Magier', 'Działo', 'Drygała', 'Czesak', 'Majorek', 'Wlizło', 'Skutnik',\n 'Radke', 'Piątkiewicz', 'Oślizło', 'Kansy', 'Szela', 'Mol', 'Kuświk', 'Karpik', 'Janczarek',\n 'Hajdukiewicz', 'Mzyk', 'Kostera', 'Leszkiewicz', 'Hutnik', 'Glaza', 'Fydrych', 'Piegza',\n 'Matusewicz', 'Matus', 'Kluczyk', 'Drobnik', 'Połom', 'Okraska', 'Neska', 'Kozłowicz',\n 'Wołos', 'Wacławczyk', 'Ochnik', 'Maruszczak', 'Lesner', 'Kuncewicz', 'Kieszek', 'Betlej',\n 'Wałdoch', 'Szarejko', 'Smalec', 'Łosiewicz', 'Lisak', 'Walkusz', 'Owsiak', 'Kowaluk',\n 'Simon', 'Rup', 'Neubauer', 'Muskała', 'Kucharzyk', 'Gabryel', 'Zimniak', 'Warmuz', 'Opas',\n 'Michniak', 'Cieloch', 'Wójcikiewicz', 'Świech', 'Powierża', 'Olko', 'Miękus', 'Kutnik',\n 'Kustosz', 'Kochman', 'Trąbka', 'Szyja', 'Młynarz', 'Wojtak', 'Dzierwa', 'Zyguła', 'Taciak',\n 'Koziatek', 'Koss', 'Walenciak', 'Twardosz', 'Pakos', 'Mamcarz', 'Burzawa', 'Lenik',\n 'Franc', 'Sadza', 'Mądrzak', 'Mak', 'Bobel', 'Szajna', 'Proch', 'Kosela', 'Guźniczak',\n 'Radziewicz', 'Olchawa', 'Morcinek', 'Bastek', 'Ragan', 'Podeszwa', 'Mitek', 'Janoszka',\n 'Słaba', 'Rusnak', 'Płócienniczak', 'Hanke', 'Gosek', 'Wujek', 'Warchał', 'Starzak',\n 'Prochownik', 'Molak', 'Duszkiewicz', 'Sztaba', 'Piwek', 'Nowotnik', 'Kiljan', 'Dubel',\n 'Brodowicz', 'Tylec', 'Pik', 'Pastucha', 'Księżak', 'Gumieniak', 'Ufnal', 'Stawinoga',\n 'Słoń', 
'Kolarczyk', 'John', 'Fleszar', 'Lemke', 'Kurc', 'Kamieniarz', 'Jaskóła', 'Jaremko',\n 'Gogacz', 'Dudała', 'Chlipała', 'Szłapa', 'Seidel', 'Kopyt', 'Karłowicz', 'Gębura',\n 'Frączkiewicz', 'Frankowicz', 'Dybiec', 'Drobny', 'Brózda', 'Boruń', 'Pelka', 'Macias',\n 'Ruszel', 'Pabis', 'Krefta', 'Ćwierz', 'Bieleń', 'Szyca', 'Pronobis', 'Dreszer', 'Bryzek',\n 'Ambrożewicz', 'Słobodzian', 'Mrozowicz', 'Wojak', 'Szklarek', 'Paw', 'Kościelak',\n 'Kalarus', 'Wylegała', 'Powązka', 'Młot', 'Krekora', 'Bilewicz', 'Pyszka', 'Niedźwiadek',\n 'Lubera', 'Chodak', 'Breguła', 'Synak', 'Supeł', 'Suda', 'Roczniak', 'Matuszyk', 'Helak',\n 'Gubernat', 'Wojtera', 'Wiszowata', 'Świętoń', 'Deryło', 'Szałaj', 'Rzeszutko', 'Matejczuk',\n 'Żołądź', 'Suchta', 'Pokrzywa', 'Piguła', 'Litwińczuk', 'Kik', 'Gula', 'Geisler', 'Micał',\n 'Maszota', 'Kurzyna', 'Feliksiak', 'Cybul', 'Wiaderek', 'Śnieg', 'Linka', 'Fidler',\n 'Fabiszak', 'Cibor', 'Ryczko', 'Rudolf', 'Jędrzejek', 'Bekus', 'Bek', 'Wolan', 'Radzio',\n 'Kuliberda', 'Kolanko', 'Szykuła', 'Skowyra', 'Porwoł', 'Kosiak', 'Kasica', 'Jakiel',\n 'Piejko', 'Owczarczak', 'Michnik', 'Linke', 'Kutera', 'Bobryk', 'Szabla', 'Powała',\n 'Marciniszyn', 'Gorgol', 'Czerwionka', 'Ledzion', 'Dykas', 'Zygmuntowicz', 'Listwan',\n 'Bobrowicz', 'Żurawik', 'Migała', 'Merchel', 'Bogumił', 'Wojsa', 'Sadura', 'Łyjak', 'Giers',\n 'Gałat', 'Parafiniuk', 'Kryszkiewicz', 'Wyrostek', 'Wałek', 'Rembisz', 'Paściak', 'Maksym',\n 'Kusio', 'Kostek', 'Kalisiak', 'Bździuch', 'Szlufik', 'Pogorzelec', 'Pielech', 'Kafel',\n 'Gmur', 'Glazer', 'Borysiuk', 'Białk', 'Adamaszek', 'Wiesiołek', 'Wakuła', 'Rogula',\n 'Leszczuk', 'Kapciak', 'Gul', 'Buszka', 'Sklorz', 'Parda', 'Miszkiel', 'Latek', 'Kurzydło',\n 'Kucharz', 'Giec', 'Wajdzik', 'Mazik', 'Klimko', 'Kleina', 'Dorawa', 'Perczak', 'Lang',\n 'Grunt', 'Cywka', 'Batóg', 'Widłak', 'Miszta', 'Kość', 'Kosidło', 'Aleksander',\n 'Marchlewicz', 'Korkosz', 'Beśka', 'Bak', 'Stoch', 'Makles', 'Hudzik', 'Hornik', 'Bujko',\n 'Ziętal', 'Zawal', 'Sochaj', 'Podpora', 'Małyszek', 'Maćków', 'Latacz', 'Kozdra', 'Kosno',\n 'Gogół', 'Fit', 'Bienia', 'Wendt', 'Szyda', 'Suchoń', 'Sobel', 'Lesiewicz', 'Koleśnik',\n 'Kinder', 'Kasper', 'Jaszczyszyn', 'Weremczuk', 'Steinke', 'Sądej', 'Puła', 'Nowrot',\n 'Nowotny', 'Majorczyk', 'Kunert', 'Jerzyk', 'Capała', 'Bartoś', 'Wojciech', 'Stelmasiak',\n 'Portka', 'Pietrak', 'Łuksza', 'Kulma', 'Jeske', 'Góraj', 'Fyda', 'Siemion', 'Rusiniak',\n 'Flisiak', 'Cherek', 'Bryndza', 'Zioła', 'Zapaśnik', 'Raszkiewicz', 'Pszczółka', 'Pałgan',\n 'Kozar', 'Gumienny', 'Fedak', 'Erdmann', 'Matura', 'Kapera', 'Golan', 'Szczesiak',\n 'Szambelan', 'Półchłopek', 'Łuszczyk', 'Szymocha', 'Pielka', 'Macioł', 'Brudny', 'Babij',\n 'Zacharczuk', 'Pilarek', 'Owsianka', 'Harasimiuk', 'Durlak', 'Długajczyk', 'Wijata',\n 'Szyndler', 'Morka', 'Mendyka', 'Kubiaczyk', 'Kij', 'Gaudyn', 'Bok', 'Posłuszny', 'Plich',\n 'Pacyga', 'Miętus', 'Ficner', 'Świerkosz', 'Krzywoń', 'Kojder', 'Kiepura', 'Godzisz',\n 'Ciuba', 'Bukowiec', 'Wlaźlak', 'Teterycz', 'Ścibisz', 'Sobkiewicz', 'Raczkiewicz',\n 'Konrad', 'Kohut', 'Gonet', 'Frydel', 'Dyka', 'Siemek', 'Ośko', 'Gospodarek', 'Stryjek',\n 'Labudda', 'Kosiec', 'Indyk', 'Franik', 'Fiołka', 'Strycharz', 'Ostapczuk', 'Laszczyk',\n 'Lament', 'Korzekwa', 'Kędziorek', 'Dziuban', 'Biegała', 'Witoń', 'Szpara', 'Padło',\n 'Otremba', 'Mierzwiak', 'Kordus', 'Bojczuk', 'Szmelter', 'Rudzik', 'Madzia', 'Grabara',\n 'Górkiewicz', 'Bartel', 'Śliz', 'Sura', 'Skrzecz', 'Puto', 'Pułka', 'Piotrowiak', 'Mazan',\n 'Kobryń', 'Klatka', 'Januchta', 'Grubba', 
'Zaucha', 'Sularz', 'Siergiej', 'Pianka',\n 'Jędruszczak', 'Groth', 'Sobisz', 'Siejak', 'Rećko', 'Lorens', 'Cegła', 'Wochnik', 'Kuryś',\n 'Gregorowicz', 'Filek', 'Salawa', 'Piekarek', 'Pabisiak', 'Glonek', 'Butrym', 'Przewoźniak',\n 'Macek', 'Konstanty', 'Kolber', 'Jędrasiak', 'Wężyk', 'Szaj', 'Malara', 'Kłoczko',\n 'Karsznia', 'Golenia', 'Zajko', 'Wudarczyk', 'Stanuch', 'Niklewicz', 'Matejczyk', 'Kopyto',\n 'Grygorowicz', 'Szajda', 'Stachelek', 'Słyk', 'Loska', 'Job', 'Dziadura', 'Dworniczak',\n 'Skubis', 'Obst', 'Kazimierczyk', 'Cymer', 'Ciak', 'Chudoba', 'Achtelik', 'Tytko', 'Skupin',\n 'Skierka', 'Panuś', 'Pabiś', 'Folta', 'Bogaczyk', 'Basa', 'Trzpil', 'Morek', 'Kloska',\n 'Kapustka', 'Gzyl', 'Gołoś', 'Danel', 'Borkiewicz', 'Araszkiewicz', 'Miotke', 'Rezler',\n 'Potyrała', 'Pacholak', 'Herba', 'Grzenia', 'Giezek', 'Gajowiak', 'Filak', 'Fechner',\n 'Droździk', 'Cyman', 'Wieczerzak', 'Stróż', 'Staciwa', 'Ruchała', 'Rogal', 'Reszke',\n 'Kurpisz', 'Gryga', 'Stempniak', 'Matraszek', 'Kózka', 'Elsner', 'Boba', 'Barłóg',\n 'Kiliszek', 'Jessa', 'Ignatiuk', 'Gogola', 'Drobek', 'Lica', 'Larysz', 'Kalka', 'Dziczek',\n 'Czupryn', 'Żołna', 'Pytko', 'Misiarz', 'Majnusz', 'Kaszkowiak', 'Jonak', 'Basista',\n 'Potęga', 'Natanek', 'Matyszczak', 'Majerczyk', 'Łapaj', 'Korzonek', 'Jaśko', 'Futyma',\n 'Duszczyk', 'Antończak', 'Wysota', 'Dela', 'Stawowczyk', 'Milczarczyk', 'Malisz',\n 'Andrearczyk', 'Żynda', 'Swaczyna', 'Ryndak', 'Moskalik', 'Mitoraj', 'Łyś', 'Łepek',\n 'Knieć', 'Janisz', 'Gorol', 'Ciężka', 'Żyrek', 'Zmarzły', 'Wojtaszczyk', 'Szyguła',\n 'Szalast', 'Rząd', 'Nicewicz', 'Danieluk', 'Bulak', 'Wojtasiewicz', 'Pleskot', 'Materek',\n 'Kurczak', 'Dytko', 'Świstek', 'Szafarz', 'Litwa', 'Kreczmer', 'Idec', 'Grabczak',\n 'Goliszek', 'Flieger', 'Filiks', 'Dyszy', 'Błażejczak', 'Maksimowicz', 'Komisarczyk',\n 'Jewuła', 'Hallmann', 'Gabara', 'Budzyń', 'Andruszko', 'Pałyga', 'Moj', 'Koterba', 'Gruza',\n 'Gamoń', 'Pasierbek', 'Kuchciak', 'Kanik', 'Cis', 'Zegar', 'Sadlik', 'Paprotny', 'Nalazek',\n 'Mikita', 'Kucab', 'Kranc', 'Godzik', 'Sip', 'Powałka', 'Penkala', 'Pachuta', 'Nagel',\n 'Litwinowicz', 'Kukuczka', 'Knysak', 'Fojt', 'Brejnak', 'Tasarz', 'Zielke', 'Zaraś',\n 'Zaranek', 'Waleczek', 'Rubaj', 'Bazylewicz', 'Banyś', 'Balawender', 'Zmuda', 'Wojcik',\n 'Łabno', 'Gęsiarz', 'Frost', 'Bany', 'Żero', 'Rudowicz', 'Nyk', 'Milcarz', 'Lipowicz',\n 'Kycia', 'Kościołek', 'Korda', 'Berus', 'Wiese', 'Olkowicz', 'Dzieża', 'Doroszkiewicz',\n 'Cetera', 'Pazdan', 'Pacia', 'Kempka', 'Dydak', 'Ścibior', 'Szyjka', 'Pyziak', 'Pleśniak',\n 'Maszczyk', 'Ludwiniak', 'Zadora', 'Strug', 'Mokwa', 'Łasak', 'Kulczak', 'Kruszona',\n 'Zacharewicz', 'Miękina', 'Klaus', 'Glegoła', 'Wyderka', 'Maleszka', 'Malcherek', 'Lew',\n 'Kulis', 'Bodzak', 'Błaziak', 'Bartłomiejczyk', 'Toś', 'Kubasiak', 'Dorobisz', 'Cukier',\n 'Ciećko', 'Zapadka', 'Kłosowicz', 'Kasak', 'Czubaszek', 'Baumgart', 'Szemraj', 'Nogieć',\n 'Burczak', 'Pietraś', 'Ostafin', 'Noculak', 'Kukieła', 'Fogel', 'Duczek', 'Cylwik',\n 'Biernacik', 'Wydrych', 'Szajek', 'Siwczak', 'Majewicz', 'Łosiak', 'Karkut', 'Durys',\n 'Chwalisz', 'Bembenek', 'Bartkowicz', 'Piskor', 'Mikus', 'Księżyk', 'Goss', 'Drewniok',\n 'Bąkiewicz', 'Wódka', 'Wota', 'Prażmo', 'Kiwior', 'Bogdał', 'Rubacha', 'Hanus', 'Wasiewicz',\n 'Trochimiuk', 'Szwiec', 'Suszka', 'Palak', 'Ziemann', 'Maćczak', 'Kruzel', 'Kołaczyk',\n 'Kapka', 'Jodko', 'Jeszke', 'Gros', 'Gendek', 'Dubik', 'Ważna', 'Pierchała', 'Nieszporek',\n 'Kandora', 'Janasz', 'Gryszkiewicz', 'Drobik', 'Ciołczyk', 'Wołkowicz', 'Tylman', 'Pituła',\n 
'Pioch', 'Pilich', 'Marach', 'Malon', 'Lepa', 'Kaliciak', 'Joszko', 'Hejna', 'Gryta',\n 'Frelich', 'Bełz', 'Bakalarczyk', 'Nóżka', 'Holewa', 'Fierek', 'Żuchowicz', 'Wojtunik',\n 'Trzop', 'Masłoń', 'Linda', 'Kurp', 'Gryka', 'Draus', 'Rezmer', 'Mizak', 'Makurat',\n 'Kościk', 'Helman', 'Gendera', 'Dydo', 'Bondaruk', 'Bodek', 'Wujec', 'Sady', 'Przekwas',\n 'Postawa', 'Polasik', 'Plebanek', 'Lejk', 'Kacperek', 'Gołofit', 'Tomys', 'Świadek',\n 'Mizgała', 'Kubrak', 'Ernst', 'Wielgos', 'Martynowicz', 'Drela', 'Ziarnik', 'Stasica',\n 'Semik', 'Mytych', 'Melka', 'Marat', 'Dąbrówka', 'Wyroba', 'Siudek', 'Senator',\n 'Ryszkiewicz', 'Podsiedlik', 'Małys', 'Lepianka', 'Giersz', 'Zugaj', 'Procek', 'Makosz',\n 'Kunda', 'Ziółko', 'Trzyna', 'Stroka', 'Rzeszut', 'Pyza', 'Krężołek', 'Kazior', 'Fidos',\n 'Sołek', 'Gordon', 'Dubis', 'Ciochoń', 'Bieszke', 'Żołnierczyk', 'Sobstyl', 'Skalik',\n 'Namysło', 'Litewka', 'Krzysztofek', 'Grycz', 'Feluś', 'Downar', 'Szram', 'Oleksik',\n 'Milej', 'Kudela', 'Klaja', 'Giedrojć', 'Getka', 'Durma', 'Dudko', 'Dębosz', 'Browarczyk',\n 'Sąsiadek', 'Picheta', 'Peciak', 'Niećko', 'Midura', 'Maciejko', 'Gregorek', 'Wąsiewicz',\n 'Twardy', 'Szachniewicz', 'Sypek', 'Sojda', 'Saran', 'Mosiołek', 'Guściora', 'Golak',\n 'Ellwart', 'Drewicz', 'Barszczak', 'Wójt', 'Strawa', 'Sereda', 'Rejmer', 'Prostak', 'Kołak',\n 'Klekot', 'Gerlach', 'Ciepła', 'Barankiewicz', 'Welc', 'Skotarek', 'Sadłocha',\n 'Roszkiewicz', 'Połetek', 'Ofiara', 'Kiełbus', 'Kałwak', 'Jas', 'Jarkiewicz', 'Jambor',\n 'Hartman', 'Graś', 'Raźniak', 'Janc', 'Doroz', 'Baster', 'Banak', 'Spólnik', 'Poreda',\n 'Orwat', 'Matyjas', 'Laskus', 'Bajak', 'Witko', 'Ślimak', 'Sapeta', 'Sadownik', 'Roszko',\n 'Nazarewicz', 'Mrotek', 'Gnyp', 'Dziarmaga', 'Zaniewicz', 'Walusiak', 'Toborek', 'Szulim',\n 'Pawliczak', 'Nikołajuk', 'Myszor', 'Mila', 'Liedtke', 'Korpal', 'Jaźwiec', 'Groborz',\n 'Świerkot', 'Sabała', 'Kluj', 'Żach', 'Wawrzyńczyk', 'Szumiło', 'Sulich', 'Stępak',\n 'Rutowicz', 'Krzyszczak', 'Kiełbik', 'Gogol', 'Buszkiewicz', 'Basaj', 'Bartuś', 'Samulak',\n 'Ryfa', 'Potoczna', 'Panicz', 'Leśny', 'Lada', 'Kuska', 'Gleba', 'Folga', 'Barczuk',\n 'Ślebioda', 'Olma', 'Kuśnierek', 'Krzan', 'Hubert', 'Grzebyk', 'Fras', 'Durlej', 'Pielach',\n 'Klin', 'Jędrak', 'Frelek', 'Brząkała', 'Borysiak', 'Zagozda', 'Śliż', 'Szkopek', 'Raźny',\n 'Olearczyk', 'Mirończuk', 'Chyb', 'Żybura', 'Żelazo', 'Kunka', 'Kosałka', 'Gosz', 'Dulas',\n 'Żelazek', 'Terka', 'Sośniak', 'Pikor', 'Pezda', 'Hadam', 'Groń', 'Fal', 'Chalimoniuk',\n 'Karnas', 'Uziębło', 'Grochola', 'Gawliczek', 'Freitag', 'Ćmiel', 'Wacław', 'Symonowicz',\n 'Strzoda', 'Sterna', 'Spadło', 'Rajtar', 'Krzykała', 'Holc', 'Gronostaj', 'Barej',\n 'Wasilewicz', 'Podgórny', 'Łapot', 'Lepak', 'Hojda', 'Dziuda', 'Klupś', 'Brzeźniak',\n 'Bojarczuk', 'Tryka', 'Nalewajek', 'Kudłacik', 'Kubasiewicz', 'Bazyluk', 'Bartoszak',\n 'Zbylut', 'Tołoczko', 'Szaruga', 'Obuchowicz', 'Gryska', 'Bociek', 'Wowra', 'Szramka',\n 'Spychaj', 'Roj', 'Musiolik', 'Franas', 'Dłubak', 'Cholewka', 'Bobko', 'Białous', 'Osial',\n 'Nieborak', 'Minta', 'Kozica', 'Kowara', 'Gwara', 'Tekieli', 'Pancerz', 'Mleczak', 'Celuch',\n 'Zapiór', 'Graboś', 'Fidura', 'Cyrek', 'Bracha', 'Gradek', 'Noras', 'Mulawa', 'Moniuszko',\n 'Kapcia', 'Gumienna', 'Graj', 'Gilewicz', 'Żółtek', 'Wojtalewicz', 'Szumny', 'Opyrchał',\n 'Macha', 'Łuczyk', 'Hus', 'Czak', 'Borzym', 'Wojtczuk', 'Winnik', 'Kuk', 'Kubanek',\n 'Dziełak', 'Dudziec', 'Cimoch', 'Ciapa', 'Buchalik', 'Zbróg', 'Węgrzyniak', 'Wawrzkiewicz',\n 'Teodorowicz', 'Szkoła', 'Sutor', 'Kapuścik', 
'Hajdas', 'Fołta', 'Burkiewicz', 'Aleksa',\n 'Wajer', 'Siembab', 'Kozon', 'Wojewódka', 'Wenda', 'Majos', 'Huczek', 'Domoń', 'Zubel',\n 'Szymaniuk', 'Salomon', 'Mikiciuk', 'Grodek', 'Wielądek', 'Szymańczak', 'Sommer', 'Saczuk',\n 'Pastuszek', 'Mroczko', 'Łokaj', 'Deptuch', 'Wawak', 'Szczepaniec', 'Romejko', 'Rogacz',\n 'Poczta', 'Nowotka', 'Jaszcz', 'Jany', 'Hewelt', 'Stachów', 'Smykla', 'Sędek', 'Niemira',\n 'Młodzik', 'Łyczek', 'Kleban', 'Fura', 'Fudalej', 'Cyroń', 'Zagożdżon', 'Kenig',\n 'Górnisiewicz', 'Wołoszyk', 'Szatanik', 'Sajda', 'Pyrkosz', 'Misiejuk', 'Mikołajewicz',\n 'Kołsut', 'Glenc', 'Eckert', 'Dziadowicz', 'Waszczyk', 'Szyba', 'Steckiewicz', 'Kloch',\n 'Kabala', 'Zamora', 'Tabiś', 'Sobków', 'Pupek', 'Neugebauer', 'Kołtuniak', 'Galek', 'Stój',\n 'Rajda', 'Pruchnik', 'Kuza', 'Karaśkiewicz', 'Judek', 'Jędryczka', 'Grzegorzak', 'Drobniak',\n 'Chowaniak', 'Wąsek', 'Smagacz', 'Pędzik', 'Klinger', 'Klęczar', 'Wochna', 'Rejek',\n 'Krakowczyk', 'Kobak', 'Kawiak', 'Grosz', 'Czubaj', 'Chorążewicz', 'Zadka', 'Wietecha',\n 'Sass', 'Męcik', 'Gustaw', 'Furga', 'Frącz', 'Dawiec', 'Wypchło', 'Tarasek', 'Szmaj',\n 'Ornat', 'Huszcza', 'Dudczak', 'Ułanowicz', 'Rubin', 'Pich', 'Makoś', 'Krępa', 'Korek',\n 'Jonik', 'Andrejczuk', 'Wiertel', 'Soroko', 'Składanek', 'Mortka', 'Małocha', 'Majsterek',\n 'Lemanowicz', 'Lelito', 'Krystkowiak', 'Krasa', 'Kierat', 'Jędraszczyk', 'Handke',\n 'Dymarczyk', 'Doruch', 'Beker', 'Peszko', 'Osik', 'Łyp', 'Karmelita', 'Herdzik', 'Brzęk',\n 'Białczyk', 'Uss', 'Pitura', 'Łusiak', 'Knapek', 'Gumuła', 'Darłak', 'Znojek', 'Wilkos',\n 'Rut', 'Przekop', 'Kręcichwost', 'Korab', 'Józwik', 'Jagiełka', 'Chylak', 'Zbiciak',\n 'Wasążnik', 'Tłuczek', 'Syldatk', 'Parkitny', 'Juroszek', 'Wisz', 'Wiciak', 'Palonek',\n 'Kusik', 'Kocurek', 'Kacperczyk', 'Bluszcz', 'Wydmuch', 'Wereda', 'Trybała', 'Sito',\n 'Pietraszkiewicz', 'Nojek', 'Madziar', 'Kazana', 'Szulczyk', 'Rosołek', 'Roskosz', 'Proć',\n 'Mazek', 'Koniecko', 'Horbacz', 'Zastawny', 'Orszulik', 'Mesjasz', 'Margas', 'Koźlak',\n 'Dzidek', 'Damek', 'Zinkiewicz', 'Sznura', 'Sapała', 'Piaseczna', 'Osada', 'Koziarz',\n 'Korta', 'Kłosiewicz', 'Klyszcz', 'Janoszek', 'Deszcz', 'Okła', 'Matacz', 'Hankiewicz',\n 'Front', 'Daraż', 'Czura', 'Bylina', 'Bugiel', 'Anioła', 'Amanowicz', 'Zach', 'Starościak',\n 'Kliszcz', 'Hadała', 'Czopik', 'Bytner', 'Wośko', 'Wawrzyn', 'Świt', 'Sanetra', 'Pyszczek',\n 'Potaczek', 'Osman', 'Materka', 'Madura', 'Kniaź', 'Gryciuk', 'Fidor', 'Dunal', 'Dobroń',\n 'Chlebda', 'Słupik', 'Osica', 'Oleksak', 'Maraszek', 'Kręgiel', 'Kopytko', 'Gomoła',\n 'Droździel', 'Szott', 'Szkup', 'Posmyk', 'Młotek', 'Klejna', 'Jałowiec', 'Heinrich',\n 'Haraburda', 'Grupa', 'Dziadkiewicz', 'Zaczyk', 'Rapa', 'Łodej', 'Lempart', 'Lamch',\n 'Głuszko', 'Cudzich', 'Brojek', 'Ziemak', 'Tusk', 'Kieloch', 'Dziduch', 'Dudkowiak',\n 'Czerner', 'Sommerfeld', 'Migoń', 'Macheta', 'Dusik', 'Ćwirko', 'Bilik', 'Sydor', 'Swiątek',\n 'Sporek', 'Olesiejuk', 'Kutek', 'Jaszczur', 'Jarmuż', 'Gronkiewicz', 'Witan', 'Staniczek',\n 'Rząca', 'Roter', 'Pracz', 'Hnat', 'Cydzik', 'Szatko', 'Styrna', 'Podleśna', 'Oleksa',\n 'Nieścior', 'Matyjaszek', 'Łasica', 'Kwapień', 'Koronkiewicz', 'Hołota', 'Elert',\n 'Czochara', 'Toczko', 'Święs', 'Słysz', 'Salach', 'Leśna', 'Głownia', 'Galica', 'Cieniuch',\n 'Szulist', 'Pedrycz', 'Królczyk', 'Zyzik', 'Zaborek', 'Skałka', 'Sankiewicz', 'Pleban',\n 'Martin', 'Lewek', 'Jędrys', 'Guzdek', 'Dumała', 'Wszoła', 'Rębiś', 'Pośnik', 'Porzucek',\n 'Hawro', 'Dziób', 'Zwara', 'Wiraszka', 'Romankiewicz', 'Roch', 'Paleń', 'Ogonek', 
'Makar',\n 'Majdan', 'Kozdrój', 'Kozdroń', 'Jachna', 'Duniec', 'Dułak', 'Wojtanowicz', 'Waloch',\n 'Ubysz', 'Stożek', 'Małycha', 'Kmak', 'Hass', 'Frydrychowicz', 'Domka', 'Żugaj', 'Zubowicz',\n 'Wyrwał', 'Mordal', 'Kordys', 'Gozdur', 'Gabrych', 'Zbrożek', 'Zbroszczyk', 'Wojtoń',\n 'Tórz', 'Torbus', 'Letkiewicz', 'Lampart', 'Superson', 'Sopata', 'Sobiło', 'Sapa', 'Salwin',\n 'Pera', 'Organiściak', 'Matwiejczyk', 'Matejuk', 'Mały', 'Krüger', 'Dyszkiewicz', 'Basak',\n 'Ankiewicz', 'Adamiuk', 'Sykała', 'Skonieczka', 'Pawełko', 'Nojman', 'Iskierka', 'Zięcik',\n 'Trojanek', 'Sadlak', 'Nieradko', 'Behrendt', 'Wojewodzic', 'Polewka', 'Zasępa', 'Szczerek',\n 'Szałata', 'Sot', 'Mleczek', 'Kukawka', 'Kaczmarkiewicz', 'Dorobek', 'Burchard', 'Blaut',\n 'Witka', 'Sasak', 'Pasiak', 'Panasiewicz', 'Motak', 'Lizurej', 'Kuboń', 'Jędraszek',\n 'Dylik', 'Cal', 'Buszko', 'Burnat', 'Wyskiel', 'Winek', 'Wiertelak', 'Wiak', 'Roś',\n 'Orzeszek', 'Ochota', 'Mijas', 'Maculewicz', 'Kaja', 'Ciesielka', 'Bejm', 'Szmuc', 'Sygut',\n 'Siarkiewicz', 'Ryznar', 'Patoka', 'Miszkurka', 'Kudełka', 'Krzyśko', 'Galon', 'Buczma',\n 'Ziegler', 'Uroda', 'Turczyk', 'Tolak', 'Sypuła', 'Sadowy', 'Rasała', 'Kazubek', 'Han',\n 'Wasiuk', 'Stempin', 'Stawczyk', 'Prokopiak', 'Pospiech', 'Polakiewicz', 'Olas',\n 'Maruszczyk', 'Kapinos', 'Kabza', 'Szwałek', 'Smagała', 'Musiała', 'Miksza', 'Lampa',\n 'Kulon', 'Koczara', 'Drynda', 'Szczypiór', 'Pawełkiewicz', 'Myk', 'Kuczak', 'Kołata',\n 'Żywica', 'Tondera', 'Szmalec', 'Szczap', 'Sypień', 'Sołtysek', 'Mosur', 'Kościesza',\n 'Kosowicz', 'Kolendo', 'Huber', 'Giel', 'Gałęza', 'Dyja', 'Cacko', 'Apanowicz', 'Wandas',\n 'Siebert', 'Moneta', 'Ziajka', 'Sieg', 'Paluszak', 'Lichoń', 'Kastelik', 'Gwizdek', 'Drewa',\n 'Andrys', 'Zbrzeźniak', 'Wlazły', 'Wittbrodt', 'Niksa', 'Habdas', 'Fryś', 'Doktór', 'Detka',\n 'Cieplucha', 'Ciarka', 'Witkowicz', 'Wardzała', 'Stąpór', 'Pniak', 'Pierzak', 'Kryk',\n 'Kożuszek', 'Kohnke', 'Kapałka', 'Domino', 'Czuj', 'Boksa', 'Wocial', 'Stuglik', 'Steciuk',\n 'Smela', 'Plona', 'Piwowarek', 'Pernak', 'Minkina', 'Klos', 'Halik', 'Dzika', 'Dargacz',\n 'Damian', 'Adrian', 'Węgrzynek', 'Tomal', 'Świerad', 'Szkatuła', 'Sajnóg', 'Kudlak',\n 'Golczyk', 'Fronczyk', 'Czapiga', 'Błażejak', 'Bejma', 'Bartela', 'Tadeusiak', 'Nędzi',\n 'Kurcz', 'Jasionek', 'Heleniak', 'Ziarek', 'Zera', 'Sarniak', 'Różak', 'Ligas', 'Kuzior',\n 'Kuder', 'Korzeniak', 'Fac', 'Domowicz', 'Dębniak', 'Cieciora', 'Chaberek', 'Bogusiewicz',\n 'Block', 'Wardziak', 'Prawdzik', 'Niebudek', 'Jeszka', 'Szpyrka', 'Szkaradek', 'Starek',\n 'Pasich', 'Lademann', 'Jantos', 'Grzelec', 'Zapora', 'Wnuczek', 'Wąsala', 'Pompa', 'Małas',\n 'Janka', 'Gałaj', 'Dybał', 'Chromy', 'Szpyt', 'Senger', 'Prygiel', 'Pawela', 'Łakota',\n 'Jama', 'Graban', 'Fogt', 'Cebulak', 'Boryczko', 'Bojdo', 'Biesek', 'Arendarczyk',\n 'Schubert', 'Namysł', 'Milewczyk', 'Hetmańczyk', 'Dyczko', 'Dankiewicz', 'Czerniec',\n 'Staśko', 'Rochowiak', 'Misiuk', 'Markiel', 'Ksel', 'Krzyżostaniak', 'Elwart', 'Delekta',\n 'Zębik', 'Siatka', 'Niewiara', 'Miozga', 'Mętel', 'Korgul', 'Karwan', 'Franków', 'Domek',\n 'Ciepluch', 'Chojna', 'Surmiak', 'Strama', 'Stein', 'Siewiera', 'Robaszkiewicz', 'Piksa',\n 'Kociemba', 'Klyta', 'Gromala', 'Gill', 'Broszkiewicz', 'Zontek', 'Stiller', 'Rosada',\n 'Mieloch', 'Kornak', 'Goworek', 'Gadzała', 'Fitas', 'Uzar', 'Siedlarz', 'Rorat', 'Oskroba',\n 'Mitera', 'Grygorcewicz', 'Gmurczyk', 'Dylak', 'Zybura', 'Wojtaszak', 'Wisła', 'Wasyluk',\n 'Szałkiewicz', 'Krzysztoszek', 'Kościuszko', 'Kasiak', 'Wyrwich', 'Wołoszczuk', 'Śledzik',\n 
'Smorąg', 'Satora', 'Pochroń', 'Melaniuk', 'Jajko', 'Czajor', 'Bajko', 'Wojsław', 'Szumiec',\n 'Nehring', 'Naumiuk', 'Luberda', 'Kęsek', 'Jaśkowiec', 'Foit', 'Fita', 'Fedyk', 'Działa',\n 'Cygal', 'Zdancewicz', 'Walocha', 'Toma', 'Soczewka', 'Monkiewicz', 'Majtyka', 'Hynek',\n 'Dynia', 'Czuryło', 'Bernatek', 'Apostel', 'Zawiasa', 'Piersa', 'Megger', 'Kukier', 'Jarka',\n 'Glazik', 'Dyjas', 'Buś', 'Bona', 'Bandyk', 'Zięciak', 'Krajniak', 'Koperek', 'Kazberuk',\n 'Dziewior', 'Chachaj', 'Sołoducha', 'Słomiany', 'Skolik', 'Pęksa', 'Mularz', 'Kosman',\n 'Kolonko', 'Januszewicz', 'Gramza', 'Foremniak', 'Fijałek', 'Cierpka', 'Polnik', 'Drwięga',\n 'Semenowicz', 'Pieszak', 'Narożna', 'Ładniak', 'Kontny', 'Klemens', 'Jancewicz', 'Fąferek',\n 'Bisaga', 'Złotnik', 'Wosiek', 'Supernak', 'Kala', 'Giża', 'Bielat', 'Żyto', 'Rompa',\n 'Kurpanik', 'Kołpak', 'Gołas', 'Długozima', 'Bacia', 'Wincenciak', 'Styn', 'Moczko',\n 'Langier', 'Szrama', 'Szok', 'Suchenek', 'Pieczarka', 'Parus', 'Machul', 'Latko',\n 'Krzyśków', 'Galos', 'Ekert', 'Dawidek', 'Czerkies', 'Bujas', 'Andryszczyk', 'Zuziak',\n 'Węgrzyk', 'Stąpor', 'Pinda', 'Muzyk', 'Maligłówka', 'Łukasiuk', 'Kinal', 'Dobosiewicz',\n 'Waraksa', 'Szywała', 'Nastały', 'Mordak', 'Ligenza', 'Leszczak', 'Krauz', 'Kopała',\n 'Byzdra', 'Bartman', 'Wojtach', 'Wałaszek', 'Szara', 'Hapka', 'Wielgat', 'Węgier', 'Pokusa',\n 'Małż', 'Kononowicz', 'Hawrylak', 'Grund', 'Druszcz', 'Dacko', 'Sprycha', 'Pryszcz',\n 'Łachut', 'Dobrosz', 'Brygoła', 'Ryguła', 'Posłuszna', 'Mydlak', 'Bernard', 'Woroch',\n 'Uliczka', 'Tomaszuk', 'Pastuła', 'Pachnik', 'Kudra', 'Kretek', 'Keler', 'Heczko', 'Beck',\n 'Tekiela', 'Plizga', 'Piekacz', 'Ochab', 'Maziarczyk', 'Krzosek', 'Gabryelczyk', 'Stępka',\n 'Rajch', 'Owsiany', 'Kossak', 'Kocaj', 'Gierach', 'Buza', 'Berendt', 'Tabak', 'Przewłoka',\n 'Nytko', 'Kuban', 'Gebauer', 'Gajcy', 'Franaszek', 'Chwedczuk', 'Bochnak', 'Stachewicz',\n 'Sosnówka', 'Słowiak', 'Mądro', 'Malcharek', 'Łukasz', 'Kornek', 'Hanusiak',\n 'Furmankiewicz', 'Dzikiewicz', 'Duży', 'Delikat', 'Chojak', 'Zyga', 'Pyrz', 'Pietrusiewicz',\n 'Olszyna', 'Olszowa', 'Ograbek', 'Molga', 'Maron', 'Jasica', 'Frymus', 'Buszta', 'Woszczak',\n 'Woronko', 'Trawka', 'Rychcik', 'Przystupa', 'Oczko', 'Migda', 'Klebba', 'Jaje', 'Grabas',\n 'Bugno', 'Bortkiewicz', 'Wesoła', 'Sudak', 'Puc', 'Przeklasa', 'Kocoł', 'Goik',\n 'Błażejewicz', 'Tuzimek', 'Petrus', 'Pawlaczek', 'Pacholczak', 'Maciejewicz', 'Jakóbik',\n 'Frania', 'Duszczak', 'Domurad', 'Bednarowicz', 'Thomas', 'Rakus', 'Przybyś', 'Pasiut',\n 'Małyszka', 'Kurz', 'Kuczaj', 'Doktor', 'Tadla', 'Praczyk', 'Milka', 'Leszcz', 'Kryza',\n 'Kryszczuk', 'Juraszczyk', 'Durczok', 'Boduch', 'Szeja', 'Pryk', 'Pitala', 'Molek',\n 'Duchnik', 'Brachaczek', 'Wieja', 'Waloszek', 'Nawrotek', 'Nawój', 'Mironiuk', 'Matyjasek',\n 'Łachacz', 'Kubów', 'Kidawa', 'Jaremek', 'Hasiak', 'Gierat', 'Gawłowicz', 'Wichary',\n 'Sornat', 'Solich', 'Kurczab', 'Jasnoch', 'Famuła', 'Budrewicz', 'Pawliszyn', 'Kułach',\n 'Kuffel', 'Konieczek', 'Koćwin', 'Imiołczyk', 'Dyda', 'Zander', 'Stochel', 'Osojca',\n 'Mysior', 'Kuciak', 'Kłósek', 'Buchholz', 'Zegadło', 'Wiewiórka', 'Stochaj', 'Smolka',\n 'Piotrak', 'Misior', 'Leoniak', 'Karwala', 'Jasina', 'Cięciwa', 'Ciastek', 'Chadaj',\n 'Białach', 'Tabisz', 'Such', 'Sromek', 'Rysz', 'Puch', 'Plak', 'Palej', 'Och', 'Niedbał',\n 'Mytnik', 'Morgała', 'Lukas', 'Lisoń', 'Królikiewicz', 'Kamieniak', 'Jachimczyk',\n 'Grzywnowicz', 'Frukacz', 'Feliniak', 'Dzienisz', 'Drążyk', 'Żelasko', 'Waloszczyk',\n 'Strójwąs', 'Smoczyk', 'Klorek', 'Kajdan', 
'Kajak', 'Gral', 'Zawodnik', 'Ulfik',\n 'Sobieszczyk', 'Skrobot', 'Ochał', 'Leżoń', 'Krywult', 'Iciek', 'Gasek', 'Czenczek',\n 'Budzeń', 'Botor', 'Wikło', 'Tymczyszyn', 'Szpyra', 'Słonka', 'Prasek', 'Majczyna', 'Lula',\n 'Jakubiuk', 'Hanzel', 'Głowiak', 'Calik', 'Zagrajek', 'Stefankiewicz', 'Serzysko',\n 'Piechna', 'Myga', 'Maślankiewicz', 'Kuziora', 'Korniak', 'Indyka', 'Gałach', 'Gadzina',\n 'Cyba', 'Bystrek', 'Bazela', 'Wabik', 'Ragus', 'Pitek', 'Mizia', 'Łaskawiec', 'Holeksa',\n 'Hajdasz', 'Fugiel', 'Białasik', 'Woźniczko', 'Wilma', 'Rode', 'Preś', 'Komander', 'Klus',\n 'Sarosiek', 'Sadoch', 'Osipowicz', 'Lelonek', 'Korbut', 'Jarmużek', 'Włodyka', 'Józefczak',\n 'Jędra', 'Hamerla', 'Gęgotek', 'Domińczak', 'Wypiór', 'Sudnik', 'Słoboda', 'Pela', 'Kupś',\n 'Kostorz', 'Kosak', 'Kopyść', 'Jarmuła', 'Daniec', 'Blank', 'Balcewicz', 'Starostka',\n 'Siemieńczuk', 'Reiter', 'Mycek', 'Miętka', 'Łupina', 'Lipok', 'Knych', 'Drobisz', 'Cuch',\n 'Wojtarowicz', 'Wojniak', 'Piechura', 'Meissner', 'Lemiesz', 'Klęk', 'Jargieło', 'Jamroz',\n 'Huczko', 'Ceynowa', 'Trochim', 'Kremer', 'Janic', 'Gal', 'Cyrulik', 'Bejger', 'Bawoł',\n 'Szczepan', 'Plewnia', 'Pędrak', 'Niedośpiał', 'Maras', 'Klepka', 'Kawulok', 'Katana',\n 'Bronka', 'Bender', 'Bałdys', 'Wawrzonek', 'Taranek', 'Tadych', 'Szymała', 'Stebel', 'Skup',\n 'Skubała', 'Pasieczna', 'Karkocha', 'Hak', 'Gąszczak', 'Pyś', 'Prażuch', 'Politowicz',\n 'Piestrzeniewicz', 'Pajek', 'Nitek', 'Kozok', 'Kowala', 'Kalinka', 'Galuba', 'Buk', 'Breś',\n 'Bodych', 'Bittner', 'Bakiera', 'Rembacz', 'Podgórna', 'Myrcik', 'Mojsa', 'Karpiak',\n 'Kajdas', 'Gregorczuk', 'Dziurla', 'Dzienniak', 'Dyrek', 'Żołądkiewicz', 'Szumacher',\n 'Sado', 'Pyszny', 'Narożny', 'Kuszyk', 'Jakimiak', 'Dynak', 'Dejneka', 'Wiekiera',\n 'Tatarczuk', 'Rudyk', 'Nieścioruk', 'Laszkiewicz', 'Gołota', 'Golisz', 'Bąbel', 'Taczała',\n 'Świć', 'Siciarz', 'Ropiak', 'Pacura', 'Makulec', 'Krauza', 'Grzesiek', 'Gemza', 'Dering',\n 'Banek', 'Andziak', 'Wiza', 'Trojanowicz', 'Parkitna', 'Pacholik', 'Majtczak', 'Krenc',\n 'Koniec', 'Wawrzeńczyk', 'Stupak', 'Roda', 'Maciejczuk', 'Irla', 'Husak', 'Fuławka',\n 'Fabiańczyk', 'Bryda', 'Zackiewicz', 'Szoka', 'Melcer', 'Kempny', 'Dulemba', 'Duc',\n 'Ziniewicz', 'Truchel', 'Szajner', 'Petryk', 'Peda', 'Obarzanek', 'Maszkiewicz', 'Łabaj',\n 'Cymbała', 'Biesaga', 'Zdobylak', 'Wojtiuk', 'Ulrych', 'Szymków', 'Sporysz', 'Smardz',\n 'Mandrysz', 'Kulus', 'Duras', 'Dumin', 'Borejko', 'Wyłupek', 'Ufniarz', 'Stypka',\n 'Młyńczyk', 'Miros', 'Maciuk', 'Hrabia', 'Burzec', 'Buksa', 'Wygoda', 'Tomzik', 'Pindral',\n 'Nijak', 'Mszyca', 'Maciejuk', 'Kudłacz', 'Dziwak', 'Chaba', 'Borkowicz', 'Berek',\n 'Żakiewicz', 'Wykręt', 'Sztuba', 'Smykała', 'Pyc', 'Pęciak', 'Parzonka', 'Kyc', 'Klemczak',\n 'Gąsienica', 'Gabryszak', 'Częścik', 'Cisoń', 'Zmyślony', 'Komisarek', 'Ficoń', 'Citko',\n 'Bidas', 'Bas', 'Żabierek', 'Wyciszkiewicz', 'Tarach', 'Staniewicz', 'Reichel',\n 'Panasewicz', 'Kucewicz', 'Kilar', 'Hein', 'Fronia', 'Derek', 'Bruś', 'Antoń', 'Pawlos',\n 'Ochwat', 'Kurbiel', 'Gosik', 'Gierasimiuk', 'Doroba', 'Chłąd', 'Wrochna', 'Protasiuk',\n 'Opalach', 'Mućko', 'Martyn', 'Drgas', 'Ceran', 'Bryczek', 'Ziarno', 'Wołodźko', 'Wac',\n 'Szpala', 'Szlachcic', 'Rurka', 'Oczkowicz', 'Mik', 'Małysiak', 'Kubek', 'Imiela', 'Graboń',\n 'Garbacik', 'Dolega', 'Broncel', 'Baum', 'Bancerz', 'Siedlik', 'Miąsko', 'Lenc', 'Konat',\n 'Kaletka', 'Jenek', 'Honkisz', 'Droś', 'Suchojad', 'Ratka', 'Raba', 'Lulek', 'Komperda',\n 'Kołodziejak', 'Koloch', 'Kolka', 'Joniak', 'Jezior', 'Faltyn', 'Dyjach', 'Czulak', 
'Cop',\n 'Wyroślak', 'Woda', 'Stranc', 'Solis', 'Skomra', 'Sierpień', 'Rzeźniczek', 'Pajdak',\n 'Mostek', 'Machowiak', 'Janduła', 'Fitrzyk', 'Welenc', 'Tyczka', 'Skiepko', 'Potok',\n 'Olewniczak', 'Nitkiewicz', 'Myrcha', 'Krata', 'Kara', 'Hołysz', 'Hałka', 'Florian',\n 'Dziurdzia', 'Dryka', 'Sysło', 'Rolek', 'Młocek', 'Idzi', 'Haponiuk', 'Grębowiec', 'Gęca',\n 'Bochnia', 'Ślipek', 'Sieczko', 'Pierz', 'Nyc', 'Łacina', 'Ludwisiak', 'Kujda', 'Hutyra',\n 'Dziugieł', 'Białka', 'Zemanek', 'Zawartka', 'Smyl', 'Smolec', 'Słoka', 'Putek',\n 'Pietrewicz', 'Lepka', 'Krzeszowiec', 'Kowalówka', 'Jośko', 'Hamrol', 'Gapys', 'Antoszczyk',\n 'Turoń', 'Teter', 'Surdel', 'Pieczyrak', 'Mudlaff', 'Manista', 'Kolek', 'Kadela', 'Jeka',\n 'Jamrożek', 'Goliasz', 'Dywan', 'Drewnik', 'Dąbroś', 'Ciaś', 'Obiała', 'Nocek', 'Marko',\n 'Ładziak', 'Hadaś', 'Dulik', 'Dorynek', 'Wolańczyk', 'Stoltmann', 'Rozumek', 'Łudzik',\n 'Łaś', 'Leoniuk', 'Krzyk', 'Karol', 'Kamyszek', 'Filusz', 'Czermak', 'Budych', 'Żółkiewicz',\n 'Tatarczyk', 'Pietrus', 'Pachowicz', 'Niesporek', 'Kultys', 'Kornet', 'Kajstura',\n 'Grześków', 'Dub', 'Drobot', 'Urynowicz', 'Swacha', 'Prokopczuk', 'Michnowicz', 'Malka',\n 'Labocha', 'Capiga', 'Zawalich', 'Wizner', 'Startek', 'Smolorz', 'Rozynek', 'Pal',\n 'Madajczyk', 'Ławniczek', 'Haremza', 'Bejnarowicz', 'Żuberek', 'Windak', 'Sobolak',\n 'Sibiga', 'Rajczak', 'Pudełek', 'Michalkiewicz', 'Fularczyk', 'Broniarek', 'Żabka',\n 'Towarek', 'Sugier', 'Pikula', 'Pawlonka', 'Marosz', 'Kut', 'Grymuza', 'Dąbkiewicz',\n 'Ciechowicz', 'Brodawka', 'Borzych', 'Bela', 'Zaguła', 'Tyniec', 'Trepczyk', 'Stwora',\n 'Paczos', 'Olbrych', 'Ogrodowicz', 'Michel', 'Mazepa', 'Lazarek', 'Krzystek', 'Jażdżyk',\n 'Goska', 'Fraszczyk', 'Drożdżal', 'Cofała', 'Chołody', 'Wawrzyk', 'Prokurat', 'Policht',\n 'Płodzień', 'Pasztaleniec', 'Osipiuk', 'Mateńko', 'Kiciak', 'Grotek', 'Członka', 'Żal',\n 'Zimmer', 'Wosiak', 'Srokosz', 'Paździora', 'Patoła', 'Pałęga', 'Orawiec', 'Nastaj',\n 'Mirgos', 'Merda', 'Machniak', 'Łokietek', 'Fogiel', 'Elias', 'Świergiel', 'Stempel',\n 'Skocz', 'Potoczek', 'Penar', 'Miecznik', 'Kwapis', 'Jakóbiak', 'Gietka', 'Flisek',\n 'Dudzicz', 'Cich', 'Broniek', 'Wiercigroch', 'Usarek', 'Tryc', 'Szylar', 'Szczot', 'Ptok',\n 'Prystupa', 'Preuss', 'Piekara', 'Łaszczyk', 'Kurzaj', 'Kopiczko', 'Jachimczak', 'Hirsch',\n 'Dytrych', 'Dorna', 'Bystroń', 'Worach', 'Tokaj', 'Szmagaj', 'Solnica', 'Rejmak', 'Reimann',\n 'Pazoła', 'Nieradzik', 'Miechowicz', 'Langiewicz', 'Kruś', 'Kozień', 'Kielczyk', 'Jargiło',\n 'Dąbal', 'Cichos', 'Sorbian', 'Ruman', 'Piotrkowicz', 'Oziębło', 'Henke', 'Czosnyka',\n 'Choina', 'Chabior', 'Warzybok', 'Seweryniak', 'Pyzel', 'Niewola', 'Nesterowicz', 'Liss',\n 'Kiepas', 'Kalista', 'Demiańczuk', 'Cłapa', 'Błasik', 'Berdzik', 'Bełza', 'Złotek',\n 'Tonder', 'Szwaj', 'Szarzec', 'Suchora', 'Sarota', 'Palica', 'Matula', 'Malecha', 'Magryta',\n 'Łuckiewicz', 'Kuster', 'Stoltman', 'Siewert', 'Serwach', 'Schwarz', 'Kuźnia', 'Kuśmider',\n 'Kurzac', 'Klisz', 'Gwardiak', 'Gotfryd', 'Deneka', 'Ciuruś', 'Żmija', 'Tałaj', 'Sobuś',\n 'Rajman', 'Perlik', 'Kurda', 'Kosznik', 'Kaluga', 'Jaracz', 'Hanas', 'Dzwonnik', 'Ziegert',\n 'Szyma', 'Różewicz', 'Paszkowiak', 'Maślach', 'Lewicz', 'Heba', 'Godzwon', 'Drej', 'Borak',\n 'Adamów', 'Tywoniuk', 'Ścieszka', 'Smal', 'Łabuś', 'Kominiak', 'Dietrich', 'Cąkała',\n 'Budzich', 'Bąbol', 'Zgoła', 'Sładek', 'Sierżant', 'Misiurek', 'Miąsik', 'Mądrzyk',\n 'Kretowicz', 'Kasznia', 'Jeżyna', 'Humeniuk', 'Fiutak', 'Czerniakiewicz', 'Bork', 'Żymełka',\n 'Tomalik', 'Szarpak', 'Sołtan', 
'Maciuszek', 'Krysta', 'Grzeszkowiak', 'Brachman', 'Zys',\n 'Westfal', 'Waluk', 'Wacławiak', 'Sałuda', 'Sabak', 'Niedojadło', 'Nazarko', 'Murat',\n 'Majzner', 'Ludwin', 'Kubaczyk', 'Kielich', 'Doliwa', 'Dej', 'Chuchla', 'Boguś', 'Bobik',\n 'Zadworny', 'Wójs', 'Tyma', 'Sztuczka', 'Strządała', 'Sowała', 'Omiotek', 'Oleśkiewicz',\n 'Morawiak', 'Kwapisiewicz', 'Krokosz', 'Hajder', 'Garczyk', 'Burdach', 'Związek', 'Wojczuk',\n 'Stanclik', 'Piekart', 'Mielke', 'Machowicz', 'Kozieja', 'Kaziród', 'Gaś', 'Garbaciak',\n 'Chatys', 'Bzdęga', 'Bartoszczyk', 'Zdonek', 'Więcławek', 'Wielgo', 'Steuer', 'Staręga',\n 'Sakwa', 'Orpel', 'Kobel', 'Golonko', 'Stark', 'Soczówka', 'Nickel', 'Kupaj', 'Kolman',\n 'Kieca', 'Kamyk', 'Jeżyk', 'Glica', 'Gasz', 'Gamrat', 'Franiak', 'Bacik', 'Andrukiewicz',\n 'Troka', 'Siwka', 'Odrzywołek', 'Nurkiewicz', 'Kozubal', 'Kott', 'Głowienka', 'Doroszuk',\n 'Cogiel', 'Cheba', 'Baś', 'Andreasik', 'Wenzel', 'Szumna', 'Rosłoń', 'Ogłaza',\n 'Mikłaszewicz', 'Kubieniec', 'Jędral', 'Bieniak', 'Wons', 'Władyka', 'Rolak', 'Prejs',\n 'Płocharczyk', 'Ostręga', 'Łęgowik', 'Ludwik', 'Kopik', 'Kleinschmidt', 'Karczmarek',\n 'Gładka', 'Czylok', 'Wawrzynkiewicz',\n )\n male_last_names = (\n 'Kowalski', 'Wiśniewski', 'Dąbrowski', 'Lewandowski', 'Wójcik', 'Kamiński', 'Kowalczyk',\n 'Zieliński', 'Szymański', 'Woźniak', 'Kozłowski', 'Jankowski', 'Wojciechowski',\n 'Kwiatkowski', 'Kaczmarek', 'Mazur', 'Krawczyk', 'Piotrowski', 'Grabowski', 'Nowakowski',\n 'Pawłowski', 'Michalski', 'Nowicki', 'Adamczyk', 'Dudek', 'Zając', 'Wieczorek', 'Jabłoński',\n 'Król', 'Majewski', 'Olszewski', 'Jaworski', 'Wróbel', 'Malinowski', 'Pawlak', 'Witkowski',\n 'Walczak', 'Stępień', 'Górski', 'Rutkowski', 'Michalak', 'Sikora', 'Ostrowski', 'Baran',\n 'Duda', 'Szewczyk', 'Tomaszewski', 'Pietrzak', 'Marciniak', 'Wróblewski', 'Zalewski',\n 'Jakubowski', 'Jasiński', 'Zawadzki', 'Sadowski', 'Bąk', 'Chmielewski', 'Włodarczyk',\n 'Borkowski', 'Czarnecki', 'Sawicki', 'Sokołowski', 'Urbański', 'Kubiak', 'Maciejewski',\n 'Szczepański', 'Kucharski', 'Wilk', 'Kalinowski', 'Lis', 'Mazurek', 'Wysocki', 'Adamski',\n 'Kaźmierczak', 'Wasilewski', 'Sobczak', 'Czerwiński', 'Andrzejewski', 'Cieślak', 'Głowacki',\n 'Zakrzewski', 'Kołodziej', 'Sikorski', 'Krajewski', 'Gajewski', 'Szymczak', 'Szulc',\n 'Baranowski', 'Laskowski', 'Brzeziński', 'Makowski', 'Ziółkowski', 'Przybylski', 'Domański',\n 'Nowacki', 'Borowski', 'Błaszczyk', 'Chojnacki', 'Ciesielski', 'Mróz', 'Szczepaniak',\n 'Wesołowski', 'Górecki', 'Krupa', 'Kaczmarczyk', 'Leszczyński', 'Lipiński', 'Kowalewski',\n 'Urbaniak', 'Kozak', 'Kania', 'Mikołajczyk', 'Czajkowski', 'Mucha', 'Tomczak', 'Kozioł',\n 'Markowski', 'Kowalik', 'Nawrocki', 'Brzozowski', 'Janik', 'Musiał', 'Wawrzyniak',\n 'Markiewicz', 'Orłowski', 'Tomczyk', 'Jarosz', 'Kołodziejczyk', 'Kurek', 'Kopeć', 'Żak',\n 'Wolski', 'Łuczak', 'Dziedzic', 'Kot', 'Stasiak', 'Stankiewicz', 'Piątek', 'Jóźwiak',\n 'Urban', 'Dobrowolski', 'Pawlik', 'Kruk', 'Domagała', 'Piasecki', 'Wierzbicki', 'Karpiński',\n 'Jastrzębski', 'Polak', 'Zięba', 'Janicki', 'Wójtowicz', 'Stefański', 'Sosnowski',\n 'Bednarek', 'Majchrzak', 'Bielecki', 'Małecki', 'Maj', 'Sowa', 'Milewski', 'Gajda',\n 'Klimek', 'Olejniczak', 'Ratajczak', 'Romanowski', 'Matuszewski', 'Śliwiński', 'Madej',\n 'Kasprzak', 'Wilczyński', 'Grzelak', 'Socha', 'Czajka', 'Marek', 'Kowal', 'Bednarczyk',\n 'Skiba', 'Wrona', 'Owczarek', 'Marcinkowski', 'Matusiak', 'Orzechowski', 'Sobolewski',\n 'Kędzierski', 'Kurowski', 'Rogowski', 'Olejnik', 'Dębski', 'Barański', 'Skowroński',\n 
'Mazurkiewicz', 'Pająk', 'Czech', 'Janiszewski', 'Bednarski', 'Łukasik', 'Chrzanowski',\n 'Bukowski', 'Leśniak',\n )\n\n prefixes_male = ('pan',)\n prefixes_female = ('pani',)\n\n first_names = first_names_male + first_names_female\n\n def last_name(self):\n return self.random_element(self.unisex_last_names)\n\n def identity_card_number(self):\n \"\"\"\n Returns 9 character Polish Identity Card Number,\n Polish: Numer Dowodu Osobistego.\n\n The card number consists of 3 letters followed by 6 digits (for example, ABA300000),\n of which the first digit (at position 3) is the check digit.\n\n https://en.wikipedia.org/wiki/Polish_identity_card\n \"\"\"\n identity = []\n\n for _ in range(3):\n identity.append(self.random_letter().upper())\n\n # it will be overwritten by a checksum\n identity.append(0)\n\n for _ in range(5):\n identity.append(self.random_digit())\n\n identity[3] = checksum_identity_card_number(identity)\n\n return ''.join(str(character) for character in identity)\n\n def pesel(self):\n \"\"\"\n Returns 11 characters of Universal Electronic System for Registration of the Population.\n Polish: Powszechny Elektroniczny System Ewidencji Ludności.\n\n PESEL has 11 digits which identifies just one person.\n Month: if person was born in 1900-2000, december is 12. If person was born > 2000, we have to add 20 to month,\n so december is 32.\n Person id: last digit identifies person's sex. Even for females, odd for males.\n\n https://en.wikipedia.org/wiki/PESEL\n \"\"\"\n\n birth = self.generator.date_of_birth()\n\n year_pesel = str(birth.year)[-2:]\n month_pesel = birth.month if birth.year < 2000 else birth.month + 20\n day_pesel = birth.day\n person_id = self.random_int(1000, 9999)\n\n current_pesel = '{year}{month:02d}{day:02d}{person_id:04d}'.format(year=year_pesel, month=month_pesel,\n day=day_pesel,\n person_id=person_id)\n\n checksum_value = generate_pesel_checksum_value(current_pesel)\n return '{pesel_without_checksum}{checksum_value}'.format(pesel_without_checksum=current_pesel,\n checksum_value=checksum_value)\n"}
|
{"faker/providers/person/pl_PL/__init__.py": [{"type": "function", "name": "Provider.pesel_compute_check_digit", "lines": [707, 709], "signature": "def pesel_compute_check_digit(pesel):", "doc": ""}, {"type": "function", "name": "Provider.pwz_doctor_compute_check_digit", "lines": [743, 744], "signature": "def pwz_doctor_compute_check_digit(x):", "doc": ""}, {"type": "function", "name": "Provider.pwz_doctor", "lines": [746, 760], "signature": "def pwz_doctor(self):", "doc": "Function generates an identification number for medical doctors\nPolish: Prawo Wykonywania Zawodu (PWZ)\n\nhttps://www.nil.org.pl/rejestry/centralny-rejestr-lekarzy/zasady-weryfikowania-nr-prawa-wykonywania-zawodu"}, {"type": "function", "name": "Provider.pwz_nurse", "lines": [762, 776], "signature": "def pwz_nurse(self, kind='nurse'):", "doc": "Function generates an identification number for nurses and midwives\nPolish: Prawo Wykonywania Zawodu (PWZ)\n\nhttp://arch.nipip.pl/index.php/prawo/uchwaly/naczelnych-rad/w-roku-2015/posiedzenie-15-17-grudnia/3664-uchwala-\nnr-381-vi-2015-w-sprawie-trybu-postepowania-dotyczacego-stwierdzania-i-przyznawania-prawa-wykonywania-zawodu-pi\nelegniarki-i-zawodu-poloznej-oraz-sposobu-prowadzenia-rejestru-pielegniarek-i-rejestru-poloznych-przez-okregowe\n-rady-pielegniarek-i-polo"}]}
| null |
["tests/providers/test_person.py::TestPlPL::test_pesel_birth_date", "tests/providers/test_person.py::TestPlPL::test_pesel_sex_female", "tests/providers/test_person.py::TestPlPL::test_pesel_sex_male", "tests/providers/test_person.py::TestPlPL::test_pwz_doctor", "tests/providers/test_person.py::TestPlPL::test_pwz_doctor_check_digit_zero", "tests/providers/test_person.py::TestPlPL::test_pwz_nurse"]
|
["tests/providers/test_person.py::TestAr::test_first_name", "tests/providers/test_person.py::TestAr::test_last_name", "tests/providers/test_person.py::TestJaJP::test_person", "tests/providers/test_person.py::TestNeNP::test_names", "tests/providers/test_person.py::TestFiFI::test_gender_first_names", "tests/providers/test_person.py::TestFiFI::test_last_names", "tests/providers/test_person.py::TestSvSE::test_gender_first_names", "tests/providers/test_person.py::TestPlPL::test_identity_card_number", "tests/providers/test_person.py::TestPlPL::test_identity_card_number_checksum", "tests/providers/test_person.py::TestCsCZ::test_name_female", "tests/providers/test_person.py::TestCsCZ::test_name_male", "tests/providers/test_person.py::TestZhCN::test_first_name", "tests/providers/test_person.py::TestZhCN::test_last_name", "tests/providers/test_person.py::TestZhCN::test_name", "tests/providers/test_person.py::TestZhTW::test_first_name", "tests/providers/test_person.py::TestZhTW::test_last_name", "tests/providers/test_person.py::TestZhTW::test_name", "tests/providers/test_person.py::TestHyAM::test_first_name", "tests/providers/test_person.py::TestHyAM::test_last_name", "tests/providers/test_person.py::TestHyAM::test_name"]
|
6edfdbf6ae90b0153309e3bf066aa3b2d16494a7
|
{"first_commit_time": 1558951622.0, "pr_title": "add pwz generator and date_of_brith and sex asargument to pesel (`pl_PL`)", "pr_body": "What does this changes\r\n\r\n- `Person` in `pl_PL`\r\n\r\nWhat was wrong\r\n\r\n- it wasn't possible to generate pesel number for given date of birth and/or sex\r\n- generator for a `pwz` number was unavailable \r\n\r\nHow this fixes it\r\n\r\n- adds pwz_doctor and pwz_nurse generator to `Person` (pl_PL)\r\n- adds date_of_birth and sex as parameter to `Person.pesel` (pl_PL)\r\n\r\nFixes #931 \r\n", "pr_timeline": [{"time": 1559387432.0, "comment": "It can be a problem w `appveyor`. \r\nI tried to rebuild commit `45282611`but it failed with the same error. "}, {"time": 1559403852.0, "comment": "Heads up, I\u2019ll be traveling for the next few days, so I won\u2019t be able to\nlook at thins until at least Tuesday\n\nOn Sat, Jun 1, 2019 at 6:08 AM tomaszrycz <[email protected]> wrote:\n\n> It can be a problem w appveyor.\n> I tried to build commit 45282611 but it failed with the same error.\n>\n> \u2014\n> You are receiving this because you commented.\n>\n>\n> Reply to this email directly, view it on GitHub\n> <https://github.com/joke2k/faker/pull/964?email_source=notifications&email_token=AAAV4B7ZK2HN4PH677D3IITPYJKCPA5CNFSM4HP2W642YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGODWW6GYI#issuecomment-497935201>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AAAV4BZ7A54VZNYKLHB5QM3PYJKCPANCNFSM4HP2W64Q>\n> .\n>\n"}, {"time": 1562953492.0, "comment": "@tomaszrycz appveyor is fixed on master. Do you mind rebasing your branch?"}, {"time": 1562962767.0, "comment": "@fcurella done!"}, {"time": 1562962812.0, "comment": "Thank you! \u2728 "}], "issues": {"931": {"issue_title": "pl_PL tasks - a lot of things to add", "issue_body": "Hi,\r\n\r\nI'm using faker quite often, but there is some missing functionalities for pl_PL, like e.g. PESEL generator (Polish id for every person), car plates or some more. \r\n\r\nCan I contribute on my own branch (e.g. pl-PL-features or something), to add missing functionalities?", "issue_timeline": [{"time": 1553116199.0, "comment": "Hi @adwojak ,\r\n\r\nThank for reaching out!\r\n\r\nFeel free to fork the repo, create your branch and submit Pull Requests!"}, {"time": 1553116507.0, "comment": "Thanks a lot!"}]}}}
|
|
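The `pesel` generator quoted in the faker `pl_PL` instance above describes the PESEL layout: birth-date digits with a +20 month offset for people born after 2000, a four-digit serial whose last digit encodes sex, and a trailing check digit. A minimal sketch of those two rules follows. It is an illustration only: the helper names are hypothetical and not part of faker's API, and the weight table follows the public PESEL specification rather than the (unshown) `generate_pesel_checksum_value` helper. The PR itself additionally adds `date_of_birth` and `sex` arguments to `pesel` plus `pwz_doctor`/`pwz_nurse` generators, whose exact signatures are not reproduced here.

```python
# Hypothetical, self-contained illustration of the PESEL rules quoted above.
PESEL_WEIGHTS = (1, 3, 7, 9, 1, 3, 7, 9, 1, 3)

def pesel_check_digit(first_ten_digits: str) -> int:
    """Check digit over the first ten PESEL digits (public PESEL weighting)."""
    weighted = sum(int(d) * w for d, w in zip(first_ten_digits, PESEL_WEIGHTS))
    return (10 - weighted % 10) % 10

def pesel_sex(pesel: str) -> str:
    """The tenth digit is even for females and odd for males."""
    return 'female' if int(pesel[9]) % 2 == 0 else 'male'

# Someone born on 1 January 2002: the month is stored as 01 + 20 = 21.
stub = '02' + '21' + '01' + '0001'          # YYMMDD + four-digit serial
full = stub + str(pesel_check_digit(stub))  # append the 11th (check) digit
print(full)             # 02210100015
print(pesel_sex(full))  # serial ends in 1 -> odd -> 'male'
```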
lark-parser/lark
| 1,467
|
https://github.com/lark-parser/lark/pull/1467
|
lark-parser__lark-1467
|
["1466"]
|
5faea9223cc54d1dbd0985cf830d05a10a7729ec
|
diff --git a/.gitignore b/.gitignore
index 26275e4b..d4e64180 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,6 +1,7 @@
*.pyc
*.pyo
/.tox
+/lark.egg-info/**
/lark_parser.egg-info/**
tags
.vscode
diff --git a/lark/tree.py b/lark/tree.py
index 76f8738e..9dccadd7 100644
--- a/lark/tree.py
+++ b/lark/tree.py
@@ -3,8 +3,10 @@
from typing import List, Callable, Iterator, Union, Optional, Generic, TypeVar, TYPE_CHECKING
+from .lexer import Token
+
if TYPE_CHECKING:
- from .lexer import TerminalDef, Token
+ from .lexer import TerminalDef
try:
import rich
except ImportError:
@@ -171,6 +173,16 @@ def find_data(self, data: str) -> 'Iterator[Tree[_Leaf_T]]':
###}
+ def find_token(self, token_type: str) -> Iterator[_Leaf_T]:
+ """Returns all tokens whose type equals the given token_type.
+
+ This is a recursive function that will find tokens in all the subtrees.
+
+ Example:
+ >>> term_tokens = tree.find_token('TERM')
+ """
+ return self.scan_values(lambda v: isinstance(v, Token) and v.type == token_type)
+
def expand_kids_by_data(self, *data_values):
"""Expand (inline) children with any of the given data values. Returns True if anything changed"""
changed = False
|
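The patch above introduces `Tree.find_token()`, a thin wrapper over `scan_values()` that yields only `Token` leaves of a given type. A short usage sketch, mirroring the tree built in the accompanying test, is shown below; it assumes a lark build that already contains this change.

```python
from lark import Tree, Token

tree = Tree('a', [
    Tree('b', [Token('T', 'x')]),
    Tree('c', [Token('T', 'y')]),
    Tree('d', [Tree('z', [Token('T', 'zz'), Tree('zzz', 'zzz')])]),
])

# find_token() recurses into every subtree and keeps only Token leaves whose
# type matches, so the non-Token leaf 'zzz' is skipped.
print(list(tree.find_token('T')))  # [Token('T', 'x'), Token('T', 'y'), Token('T', 'zz')]
```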
diff --git a/tests/test_trees.py b/tests/test_trees.py
index 1f69869e..55fdae91 100644
--- a/tests/test_trees.py
+++ b/tests/test_trees.py
@@ -17,6 +17,11 @@
class TestTrees(TestCase):
def setUp(self):
self.tree1 = Tree('a', [Tree(x, y) for x, y in zip('bcd', 'xyz')])
+ self.tree2 = Tree('a', [
+ Tree('b', [Token('T', 'x')]),
+ Tree('c', [Token('T', 'y')]),
+ Tree('d', [Tree('z', [Token('T', 'zz'), Tree('zzz', 'zzz')])]),
+ ])
def test_eq(self):
assert self.tree1 == self.tree1
@@ -48,6 +53,11 @@ def test_iter_subtrees_topdown(self):
nodes = list(self.tree1.iter_subtrees_topdown())
self.assertEqual(nodes, expected)
+ def test_find_token(self):
+ expected = [Token('T', 'x'), Token('T', 'y'), Token('T', 'zz')]
+ tokens = list(self.tree2.find_token('T'))
+ self.assertEqual(tokens, expected)
+
def test_visitor(self):
class Visitor1(Visitor):
def __init__(self):
| 2024-09-11T20:10:22
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{".gitignore": "*.pyc\n*.pyo\n/.tox\n/lark_parser.egg-info/**\ntags\n.vscode\n.idea\n.ropeproject\n.cache\n.mypy_cache\n/dist\n/build\ndocs/_build\ndocs/examples\n", "lark/tree.py": "import sys\nfrom copy import deepcopy\n\nfrom typing import List, Callable, Iterator, Union, Optional, Generic, TypeVar, TYPE_CHECKING\n\nif TYPE_CHECKING:\n from .lexer import TerminalDef, Token\n try:\n import rich\n except ImportError:\n pass\n from typing import Literal\n\n###{standalone\n\nclass Meta:\n\n empty: bool\n line: int\n column: int\n start_pos: int\n end_line: int\n end_column: int\n end_pos: int\n orig_expansion: 'List[TerminalDef]'\n match_tree: bool\n\n def __init__(self):\n self.empty = True\n\n\n_Leaf_T = TypeVar(\"_Leaf_T\")\nBranch = Union[_Leaf_T, 'Tree[_Leaf_T]']\n\n\nclass Tree(Generic[_Leaf_T]):\n \"\"\"The main tree class.\n\n Creates a new tree, and stores \"data\" and \"children\" in attributes of the same name.\n Trees can be hashed and compared.\n\n Parameters:\n data: The name of the rule or alias\n children: List of matched sub-rules and terminals\n meta: Line & Column numbers (if ``propagate_positions`` is enabled).\n meta attributes: (line, column, end_line, end_column, start_pos, end_pos,\n container_line, container_column, container_end_line, container_end_column)\n container_* attributes consider all symbols, including those that have been inlined in the tree.\n For example, in the rule 'a: _A B _C', the regular attributes will mark the start and end of B,\n but the container_* attributes will also include _A and _C in the range. However, rules that\n contain 'a' will consider it in full, including _A and _C for all attributes.\n \"\"\"\n\n data: str\n children: 'List[Branch[_Leaf_T]]'\n\n def __init__(self, data: str, children: 'List[Branch[_Leaf_T]]', meta: Optional[Meta]=None) -> None:\n self.data = data\n self.children = children\n self._meta = meta\n\n @property\n def meta(self) -> Meta:\n if self._meta is None:\n self._meta = Meta()\n return self._meta\n\n def __repr__(self):\n return 'Tree(%r, %r)' % (self.data, self.children)\n\n def _pretty_label(self):\n return self.data\n\n def _pretty(self, level, indent_str):\n yield f'{indent_str*level}{self._pretty_label()}'\n if len(self.children) == 1 and not isinstance(self.children[0], Tree):\n yield f'\\t{self.children[0]}\\n'\n else:\n yield '\\n'\n for n in self.children:\n if isinstance(n, Tree):\n yield from n._pretty(level+1, indent_str)\n else:\n yield f'{indent_str*(level+1)}{n}\\n'\n\n def pretty(self, indent_str: str=' ') -> str:\n \"\"\"Returns an indented string representation of the tree.\n\n Great for debugging.\n \"\"\"\n return ''.join(self._pretty(0, indent_str))\n\n def __rich__(self, parent:Optional['rich.tree.Tree']=None) -> 'rich.tree.Tree':\n \"\"\"Returns a tree widget for the 'rich' library.\n\n Example:\n ::\n from rich import print\n from lark import Tree\n\n tree = Tree('root', ['node1', 'node2'])\n print(tree)\n \"\"\"\n return self._rich(parent)\n\n def _rich(self, parent):\n if parent:\n tree = parent.add(f'[bold]{self.data}[/bold]')\n else:\n import rich.tree\n tree = rich.tree.Tree(self.data)\n\n for c in self.children:\n if isinstance(c, Tree):\n c._rich(tree)\n else:\n tree.add(f'[green]{c}[/green]')\n\n return tree\n\n def __eq__(self, other):\n try:\n return self.data == other.data and self.children == other.children\n except AttributeError:\n return False\n\n def __ne__(self, other):\n return not (self == other)\n\n def __hash__(self) -> int:\n return hash((self.data, 
tuple(self.children)))\n\n def iter_subtrees(self) -> 'Iterator[Tree[_Leaf_T]]':\n \"\"\"Depth-first iteration.\n\n Iterates over all the subtrees, never returning to the same node twice (Lark's parse-tree is actually a DAG).\n \"\"\"\n queue = [self]\n subtrees = dict()\n for subtree in queue:\n subtrees[id(subtree)] = subtree\n queue += [c for c in reversed(subtree.children)\n if isinstance(c, Tree) and id(c) not in subtrees]\n\n del queue\n return reversed(list(subtrees.values()))\n\n def iter_subtrees_topdown(self):\n \"\"\"Breadth-first iteration.\n\n Iterates over all the subtrees, return nodes in order like pretty() does.\n \"\"\"\n stack = [self]\n stack_append = stack.append\n stack_pop = stack.pop\n while stack:\n node = stack_pop()\n if not isinstance(node, Tree):\n continue\n yield node\n for child in reversed(node.children):\n stack_append(child)\n\n def find_pred(self, pred: 'Callable[[Tree[_Leaf_T]], bool]') -> 'Iterator[Tree[_Leaf_T]]':\n \"\"\"Returns all nodes of the tree that evaluate pred(node) as true.\"\"\"\n return filter(pred, self.iter_subtrees())\n\n def find_data(self, data: str) -> 'Iterator[Tree[_Leaf_T]]':\n \"\"\"Returns all nodes of the tree whose data equals the given data.\"\"\"\n return self.find_pred(lambda t: t.data == data)\n\n###}\n\n def expand_kids_by_data(self, *data_values):\n \"\"\"Expand (inline) children with any of the given data values. Returns True if anything changed\"\"\"\n changed = False\n for i in range(len(self.children)-1, -1, -1):\n child = self.children[i]\n if isinstance(child, Tree) and child.data in data_values:\n self.children[i:i+1] = child.children\n changed = True\n return changed\n\n\n def scan_values(self, pred: 'Callable[[Branch[_Leaf_T]], bool]') -> Iterator[_Leaf_T]:\n \"\"\"Return all values in the tree that evaluate pred(value) as true.\n\n This can be used to find all the tokens in the tree.\n\n Example:\n >>> all_tokens = tree.scan_values(lambda v: isinstance(v, Token))\n \"\"\"\n for c in self.children:\n if isinstance(c, Tree):\n for t in c.scan_values(pred):\n yield t\n else:\n if pred(c):\n yield c\n\n def __deepcopy__(self, memo):\n return type(self)(self.data, deepcopy(self.children, memo), meta=self._meta)\n\n def copy(self) -> 'Tree[_Leaf_T]':\n return type(self)(self.data, self.children)\n\n def set(self, data: str, children: 'List[Branch[_Leaf_T]]') -> None:\n self.data = data\n self.children = children\n\n\nParseTree = Tree['Token']\n\n\nclass SlottedTree(Tree):\n __slots__ = 'data', 'children', 'rule', '_meta'\n\n\ndef pydot__tree_to_png(tree: Tree, filename: str, rankdir: 'Literal[\"TB\", \"LR\", \"BT\", \"RL\"]'=\"LR\", **kwargs) -> None:\n graph = pydot__tree_to_graph(tree, rankdir, **kwargs)\n graph.write_png(filename)\n\n\ndef pydot__tree_to_dot(tree: Tree, filename, rankdir=\"LR\", **kwargs):\n graph = pydot__tree_to_graph(tree, rankdir, **kwargs)\n graph.write(filename)\n\n\ndef pydot__tree_to_graph(tree: Tree, rankdir=\"LR\", **kwargs):\n \"\"\"Creates a colorful image that represents the tree (data+children, without meta)\n\n Possible values for `rankdir` are \"TB\", \"LR\", \"BT\", \"RL\", corresponding to\n directed graphs drawn from top to bottom, from left to right, from bottom to\n top, and from right to left, respectively.\n\n `kwargs` can be any graph attribute (e. g. `dpi=200`). 
For a list of\n possible attributes, see https://www.graphviz.org/doc/info/attrs.html.\n \"\"\"\n\n import pydot # type: ignore[import-not-found]\n graph = pydot.Dot(graph_type='digraph', rankdir=rankdir, **kwargs)\n\n i = [0]\n\n def new_leaf(leaf):\n node = pydot.Node(i[0], label=repr(leaf))\n i[0] += 1\n graph.add_node(node)\n return node\n\n def _to_pydot(subtree):\n color = hash(subtree.data) & 0xffffff\n color |= 0x808080\n\n subnodes = [_to_pydot(child) if isinstance(child, Tree) else new_leaf(child)\n for child in subtree.children]\n node = pydot.Node(i[0], style=\"filled\", fillcolor=\"#%x\" % color, label=subtree.data)\n i[0] += 1\n graph.add_node(node)\n\n for subnode in subnodes:\n graph.add_edge(pydot.Edge(node, subnode))\n\n return node\n\n _to_pydot(tree)\n return graph\n"}
|
diff --git a/.gitignore b/.gitignore
index 26275e4b..d4e64180 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,6 +1,7 @@
*.pyc
*.pyo
/.tox
+/lark.egg-info/**
/lark_parser.egg-info/**
tags
.vscode
|
{"lark/tree.py": [{"type": "function", "name": "Tree.find_token", "lines": [176, 184], "signature": "def find_token(self, token_type: str) -> Iterator[_Leaf_T]:", "doc": "Returns all tokens whose type equals the given token_type.\n\nThis is a recursive function that will find tokens in all the subtrees.\n\nExample:\n >>> term_tokens = tree.find_token('TERM')"}]}
| null |
["tests/test_trees.py::TestTrees::test_find_token"]
|
["tests/test_trees.py::TestTrees::test_copy", "tests/test_trees.py::TestTrees::test_deepcopy", "tests/test_trees.py::TestTrees::test_discard", "tests/test_trees.py::TestTrees::test_eq", "tests/test_trees.py::TestTrees::test_inline_static", "tests/test_trees.py::TestTrees::test_interp", "tests/test_trees.py::TestTrees::test_iter_subtrees", "tests/test_trees.py::TestTrees::test_iter_subtrees_topdown", "tests/test_trees.py::TestTrees::test_merge_transformers", "tests/test_trees.py::TestTrees::test_partial", "tests/test_trees.py::TestTrees::test_pickle", "tests/test_trees.py::TestTrees::test_repr_runnable", "tests/test_trees.py::TestTrees::test_smart_decorator", "tests/test_trees.py::TestTrees::test_transform_token", "tests/test_trees.py::TestTrees::test_transformer", "tests/test_trees.py::TestTrees::test_transformer_variants", "tests/test_trees.py::TestTrees::test_vargs", "tests/test_trees.py::TestTrees::test_vargs_override", "tests/test_trees.py::TestTrees::test_vargs_set_name", "tests/test_trees.py::TestTrees::test_visitor"]
|
5faea9223cc54d1dbd0985cf830d05a10a7729ec
|
{"first_commit_time": 1726085281.0, "pr_title": "Add Tree.find_token() method", "pr_body": "Resolves #1466.", "pr_timeline": [{"time": 1726085481.0, "comment": "I put this method outside of \"standalone\" scope as it uses `scan_values()` that is not included as well. Not sure if this is correct."}, {"time": 1726085550.0, "comment": "@erezsh ready for review."}, {"time": 1726090439.0, "comment": "Overall looks good. A few requests:\r\n\r\n- Add a test\r\n- Rename typ to token_type\r\n- Clarify in the docstring that it is a recursive function, i.e. it will find tokens in all the subtrees.\r\n"}, {"time": 1726118134.0, "comment": "I changed `isinstance(v, Token)` to `not isinstance(v, Tree)` (same as in `_pretty()`) to avoid importing `Token`.\r\n\r\n@erezsh ready for review."}, {"time": 1735734586.0, "comment": "Sorry it tooks me this long to reply! It slipped my memory.\r\n\r\nI think it's better to keep it as isinstance(.., Token)."}, {"time": 1735734768.0, "comment": "The logic in pretty is a little different.\r\n\r\nP.S. if you don't have time right now, let me know, and I'll make the change for you."}, {"time": 1735885050.0, "comment": "Thank you so much, @erezsh! This change is pretty simple, added it to PR."}, {"time": 1735946568.0, "comment": "Looks good, however I would also like to see a test for what happens if there is a leaf node that isn't a `Token` instance. (i.e. it doesn't get listed)."}, {"time": 1735948031.0, "comment": "@MegaIng Thank you for the hint. I added a non-token item at the end of the tree. As expected, it didn't affect the result."}, {"time": 1735985492.0, "comment": "Thanks!"}], "issues": {"1466": {"issue_title": "Syntactic sugar: Tree.find_token()", "issue_body": "**Suggestion**\r\nAdd new method `Tree.find_type(typ: str) -> Iterator[Token]` as a companion for `Tree.find_data()`:\r\n```python\r\nfor term in tree.find_type('TERM'):\r\n ...\r\n```\r\n\r\n**Describe alternatives you've considered**\r\nRight now this can be achieved by\r\n```python\r\nfor term in tree.find_pred(lambda node: isinstance(node, Token) and node.type == 'TERM'):\r\n ...\r\n```\r\nwhich doesn't fit Black's line width and will look even uglier in real cases.\r\n\r\n**Additional context**\r\nI can provide PR if you find this enhancement reasonable.\r\n\r\nThe method proposed seems to be a logical extension of existing API:\r\n```\r\nTree => find_data()\r\nToken => find_type()\r\n```", "issue_timeline": [{"time": 1726083516.0, "comment": "Sorry for bothering, just started using Lark, and see missing pieces (as of my taste). Handy API already made many packages super-popular (like click, requests etc.)"}, {"time": 1726083862.0, "comment": "Actually you need to use `scan_values()`\r\n\r\nBut, yeah, I think that's a good one to add. But let's call it find_token"}]}}}
|
lark-parser/lark
| 466
|
https://github.com/lark-parser/lark/pull/466
|
lark-parser__lark-466
|
[]
|
f07359c31683805f4004fe2d6f37dec84b7c094f
|
diff --git a/lark/visitors.py b/lark/visitors.py
index 7d40e7479..c6e4f6bed 100644
--- a/lark/visitors.py
+++ b/lark/visitors.py
@@ -186,6 +186,11 @@ def visit(self, tree):
self._call_userfunc(subtree)
return tree
+ def visit_topdown(self,tree):
+ for subtree in tree.iter_subtrees_topdown():
+ self._call_userfunc(subtree)
+ return tree
+
class Visitor_Recursive(VisitorBase):
"""Bottom-up visitor, recursive
@@ -198,8 +203,16 @@ def visit(self, tree):
if isinstance(child, Tree):
self.visit(child)
- f = getattr(self, tree.data, self.__default__)
- f(tree)
+ self._call_userfunc(tree)
+ return tree
+
+ def visit_topdown(self,tree):
+ self._call_userfunc(tree)
+
+ for child in tree.children:
+ if isinstance(child, Tree):
+ self.visit_topdown(child)
+
return tree
|
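The patch above adds `visit_topdown()` to both `Visitor` and `Visitor_Recursive`: the default `visit()` walks the tree bottom-up (children before parents), while the new method calls the user callbacks on a parent before descending into its children. A minimal sketch, modelled on the test added in the same PR, is shown below; it assumes a lark version that already includes this patch.

```python
from lark import Tree
from lark.visitors import Visitor

class CollectData(Visitor):
    def __init__(self):
        self.seen = []

    def __default__(self, tree):   # fallback callback, invoked for every subtree
        self.seen.append(tree.data)

tree = Tree('a', [Tree('b', 'x'), Tree('c', 'y'), Tree('d', 'z')])

v = CollectData()
v.visit(tree)            # bottom-up traversal
print(v.seen)            # ['b', 'c', 'd', 'a']

v.seen = []
v.visit_topdown(tree)    # top-down traversal
print(v.seen)            # ['a', 'b', 'c', 'd']
```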
diff --git a/tests/test_trees.py b/tests/test_trees.py
index 4216bd6c5..edd2a8b91 100644
--- a/tests/test_trees.py
+++ b/tests/test_trees.py
@@ -7,7 +7,7 @@
import functools
from lark.tree import Tree
-from lark.visitors import Transformer, Interpreter, visit_children_decor, v_args, Discard
+from lark.visitors import Visitor, Visitor_Recursive, Transformer, Interpreter, visit_children_decor, v_args, Discard
class TestTrees(TestCase):
@@ -34,6 +34,43 @@ def test_iter_subtrees_topdown(self):
nodes = list(self.tree1.iter_subtrees_topdown())
self.assertEqual(nodes, expected)
+ def test_visitor(self):
+ class Visitor1(Visitor):
+ def __init__(self):
+ self.nodes=[]
+
+ def __default__(self,tree):
+ self.nodes.append(tree)
+ class Visitor1_Recursive(Visitor_Recursive):
+ def __init__(self):
+ self.nodes=[]
+
+ def __default__(self,tree):
+ self.nodes.append(tree)
+
+ visitor1=Visitor1()
+ visitor1_recursive=Visitor1_Recursive()
+
+ expected_top_down = [Tree('a', [Tree('b', 'x'), Tree('c', 'y'), Tree('d', 'z')]),
+ Tree('b', 'x'), Tree('c', 'y'), Tree('d', 'z')]
+ expected_botton_up= [Tree('b', 'x'), Tree('c', 'y'), Tree('d', 'z'),
+ Tree('a', [Tree('b', 'x'), Tree('c', 'y'), Tree('d', 'z')])]
+
+ visitor1.visit(self.tree1)
+ self.assertEqual(visitor1.nodes,expected_botton_up)
+
+ visitor1_recursive.visit(self.tree1)
+ self.assertEqual(visitor1_recursive.nodes,expected_botton_up)
+
+ visitor1.nodes=[]
+ visitor1_recursive.nodes=[]
+
+ visitor1.visit_topdown(self.tree1)
+ self.assertEqual(visitor1.nodes,expected_top_down)
+
+ visitor1_recursive.visit_topdown(self.tree1)
+ self.assertEqual(visitor1_recursive.nodes,expected_top_down)
+
def test_interp(self):
t = Tree('a', [Tree('b', []), Tree('c', []), 'd'])
| 2019-10-31T12:19:33
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
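The README above already gives the exact commands for each step of the pipeline. As a small addition, here is a minimal sanity-check sketch for the datasets produced by the build step, under the assumption (not stated in the README) that the Oracle/Standard folders are written in the Hugging Face `datasets` on-disk format and keep SWE-bench-style columns such as `instance_id`:

```python
# Minimal sanity-check sketch -- not part of the FEA-Bench repository.
# Assumptions: the build step writes the splits with `datasets.save_to_disk`
# and keeps SWE-bench-style columns such as "instance_id".
from datasets import load_from_disk

oracle = load_from_disk("feabench-data/FEA-Bench-v1.0-Oracle")
print(oracle)  # shows the dataset (or DatasetDict) and its column names
```

If the folders turn out to be stored differently (for example as JSONL files), the same check can be done with `datasets.load_dataset("json", data_files=...)` instead.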
|
{"lark/visitors.py": "from functools import wraps\n\nfrom .utils import smart_decorator\nfrom .tree import Tree\nfrom .exceptions import VisitError, GrammarError\nfrom .lexer import Token\n\n###{standalone\nfrom inspect import getmembers, getmro\n\nclass Discard(Exception):\n pass\n\n# Transformers\n\nclass Transformer:\n \"\"\"Visits the tree recursively, starting with the leaves and finally the root (bottom-up)\n\n Calls its methods (provided by user via inheritance) according to tree.data\n The returned value replaces the old one in the structure.\n\n Can be used to implement map or reduce.\n \"\"\"\n\n __visit_tokens__ = False # For backwards compatibility\n def __init__(self, visit_tokens=False):\n self.__visit_tokens__ = visit_tokens\n\n def _call_userfunc(self, tree, new_children=None):\n # Assumes tree is already transformed\n children = new_children if new_children is not None else tree.children\n try:\n f = getattr(self, tree.data)\n except AttributeError:\n return self.__default__(tree.data, children, tree.meta)\n else:\n try:\n if getattr(f, 'meta', False):\n return f(children, tree.meta)\n elif getattr(f, 'inline', False):\n return f(*children)\n elif getattr(f, 'whole_tree', False):\n if new_children is not None:\n tree.children = new_children\n return f(tree)\n else:\n return f(children)\n except (GrammarError, Discard):\n raise\n except Exception as e:\n raise VisitError(tree, e)\n\n def _call_userfunc_token(self, token):\n try:\n f = getattr(self, token.type)\n except AttributeError:\n return self.__default_token__(token)\n else:\n try:\n return f(token)\n except (GrammarError, Discard):\n raise\n except Exception as e:\n raise VisitError(token, e)\n\n\n def _transform_children(self, children):\n for c in children:\n try:\n if isinstance(c, Tree):\n yield self._transform_tree(c)\n elif self.__visit_tokens__ and isinstance(c, Token):\n yield self._call_userfunc_token(c)\n else:\n yield c\n except Discard:\n pass\n\n def _transform_tree(self, tree):\n children = list(self._transform_children(tree.children))\n return self._call_userfunc(tree, children)\n\n def transform(self, tree):\n return self._transform_tree(tree)\n\n def __mul__(self, other):\n return TransformerChain(self, other)\n\n def __default__(self, data, children, meta):\n \"Default operation on tree (for override)\"\n return Tree(data, children, meta)\n\n def __default_token__(self, token):\n \"Default operation on token (for override)\"\n return token\n\n\n @classmethod\n def _apply_decorator(cls, decorator, **kwargs):\n mro = getmro(cls)\n assert mro[0] is cls\n libmembers = {name for _cls in mro[1:] for name, _ in getmembers(_cls)}\n for name, value in getmembers(cls):\n\n # Make sure the function isn't inherited (unless it's overwritten)\n if name.startswith('_') or (name in libmembers and name not in cls.__dict__):\n continue\n if not callable(cls.__dict__[name]):\n continue\n\n # Skip if v_args already applied (at the function level)\n if hasattr(cls.__dict__[name], 'vargs_applied'):\n continue\n\n static = isinstance(cls.__dict__[name], (staticmethod, classmethod))\n setattr(cls, name, decorator(value, static=static, **kwargs))\n return cls\n\n\nclass InlineTransformer(Transformer): # XXX Deprecated\n def _call_userfunc(self, tree, new_children=None):\n # Assumes tree is already transformed\n children = new_children if new_children is not None else tree.children\n try:\n f = getattr(self, tree.data)\n except AttributeError:\n return self.__default__(tree.data, children, tree.meta)\n else:\n return 
f(*children)\n\n\nclass TransformerChain(object):\n def __init__(self, *transformers):\n self.transformers = transformers\n\n def transform(self, tree):\n for t in self.transformers:\n tree = t.transform(tree)\n return tree\n\n def __mul__(self, other):\n return TransformerChain(*self.transformers + (other,))\n\n\nclass Transformer_InPlace(Transformer):\n \"Non-recursive. Changes the tree in-place instead of returning new instances\"\n def _transform_tree(self, tree): # Cancel recursion\n return self._call_userfunc(tree)\n\n def transform(self, tree):\n for subtree in tree.iter_subtrees():\n subtree.children = list(self._transform_children(subtree.children))\n\n return self._transform_tree(tree)\n\n\nclass Transformer_InPlaceRecursive(Transformer):\n \"Recursive. Changes the tree in-place instead of returning new instances\"\n def _transform_tree(self, tree):\n tree.children = list(self._transform_children(tree.children))\n return self._call_userfunc(tree)\n\n\n\n# Visitors\n\nclass VisitorBase:\n def _call_userfunc(self, tree):\n return getattr(self, tree.data, self.__default__)(tree)\n\n def __default__(self, tree):\n \"Default operation on tree (for override)\"\n return tree\n\n\nclass Visitor(VisitorBase):\n \"\"\"Bottom-up visitor, non-recursive\n\n Visits the tree, starting with the leaves and finally the root (bottom-up)\n Calls its methods (provided by user via inheritance) according to tree.data\n \"\"\"\n\n\n def visit(self, tree):\n for subtree in tree.iter_subtrees():\n self._call_userfunc(subtree)\n return tree\n\nclass Visitor_Recursive(VisitorBase):\n \"\"\"Bottom-up visitor, recursive\n\n Visits the tree, starting with the leaves and finally the root (bottom-up)\n Calls its methods (provided by user via inheritance) according to tree.data\n \"\"\"\n\n def visit(self, tree):\n for child in tree.children:\n if isinstance(child, Tree):\n self.visit(child)\n\n f = getattr(self, tree.data, self.__default__)\n f(tree)\n return tree\n\n\n\ndef visit_children_decor(func):\n \"See Interpreter\"\n @wraps(func)\n def inner(cls, tree):\n values = cls.visit_children(tree)\n return func(cls, values)\n return inner\n\n\nclass Interpreter:\n \"\"\"Top-down visitor, recursive\n\n Visits the tree, starting with the root and finally the leaves (top-down)\n Calls its methods (provided by user via inheritance) according to tree.data\n\n Unlike Transformer and Visitor, the Interpreter doesn't automatically visit its sub-branches.\n The user has to explicitly call visit_children, or use the @visit_children_decor\n \"\"\"\n def visit(self, tree):\n return getattr(self, tree.data)(tree)\n\n def visit_children(self, tree):\n return [self.visit(child) if isinstance(child, Tree) else child\n for child in tree.children]\n\n def __getattr__(self, name):\n return self.__default__\n\n def __default__(self, tree):\n return self.visit_children(tree)\n\n\n\n\n# Decorators\n\ndef _apply_decorator(obj, decorator, **kwargs):\n try:\n _apply = obj._apply_decorator\n except AttributeError:\n return decorator(obj, **kwargs)\n else:\n return _apply(decorator, **kwargs)\n\n\n\ndef _inline_args__func(func):\n @wraps(func)\n def create_decorator(_f, with_self):\n if with_self:\n def f(self, children):\n return _f(self, *children)\n else:\n def f(self, children):\n return _f(*children)\n return f\n\n return smart_decorator(func, create_decorator)\n\n\ndef inline_args(obj): # XXX Deprecated\n return _apply_decorator(obj, _inline_args__func)\n\n\n\ndef _visitor_args_func_dec(func, inline=False, meta=False, 
whole_tree=False, static=False):\n assert [whole_tree, meta, inline].count(True) <= 1\n def create_decorator(_f, with_self):\n if with_self:\n def f(self, *args, **kwargs):\n return _f(self, *args, **kwargs)\n else:\n def f(self, *args, **kwargs):\n return _f(*args, **kwargs)\n return f\n\n if static:\n f = wraps(func)(create_decorator(func, False))\n else:\n f = smart_decorator(func, create_decorator)\n f.vargs_applied = True\n f.inline = inline\n f.meta = meta\n f.whole_tree = whole_tree\n return f\n\ndef v_args(inline=False, meta=False, tree=False):\n \"A convenience decorator factory, for modifying the behavior of user-supplied visitor methods\"\n if [tree, meta, inline].count(True) > 1:\n raise ValueError(\"Visitor functions can either accept tree, or meta, or be inlined. These cannot be combined.\")\n def _visitor_args_dec(obj):\n return _apply_decorator(obj, _visitor_args_func_dec, inline=inline, meta=meta, whole_tree=tree)\n return _visitor_args_dec\n\n\n###}\n"}
|
{"lark/visitors.py": [{"type": "function", "name": "Visitor.visit_topdown", "lines": [189, 192], "signature": "def visit_topdown(self,tree):", "doc": ""}, {"type": "function", "name": "Visitor_Recursive.visit_topdown", "lines": [209, 216], "signature": "def visit_topdown(self,tree):", "doc": ""}]}
| null |
["tests/test_trees.py::TestTrees::test_visitor"]
|
["tests/test_trees.py::TestTrees::test_deepcopy", "tests/test_trees.py::TestTrees::test_discard", "tests/test_trees.py::TestTrees::test_interp", "tests/test_trees.py::TestTrees::test_iter_subtrees", "tests/test_trees.py::TestTrees::test_iter_subtrees_topdown", "tests/test_trees.py::TestTrees::test_partial", "tests/test_trees.py::TestTrees::test_pickle", "tests/test_trees.py::TestTrees::test_transformer", "tests/test_trees.py::TestTrees::test_vargs", "tests/test_trees.py::TestTrees::test_vargs_override"]
|
5faea9223cc54d1dbd0985cf830d05a10a7729ec
|
{"first_commit_time": 1572648811.0, "pr_title": "added visit_topdown methods to Visitor classes", "pr_body": "I think it would be useful have visit_top_down methods in visitors.", "pr_timeline": [{"time": 1572556797.0, "comment": "Looks good, but please include a test that shows that it works. You can reuse the test used for bottom-up visits"}, {"time": 1572570745.0, "comment": "@erezsh sure!, but where can i find it ?, i couldn't find in the tests or example folders...\r\ni created this one\r\n\r\nsetup:\r\n```python\r\nfrom lark.tree import Tree\r\nfrom lark.visitors import Visitor,Visitor_Recursive\r\n\r\nclass Printer(Visitor):\r\n\tdef __default__(self,tree):\r\n\t\tprint(tree.data)\r\nclass Printer_Recursive(Visitor_Recursive):\r\n\tdef __default__(self,tree):\r\n\t\tprint(tree.data)\r\ntree=Tree(\"a\",[\r\n\tTree(\"b\",[\r\n\t\tTree(\"c\",[]),\r\n\t\tTree(\"d\",[])\r\n\t]),\r\n\tTree(\"e\",[\r\n\t\tTree(\"f\",[]),\r\n\t\tTree(\"g\",[])\r\n\t])\r\n])\r\n```\r\n\r\niterative\r\n```python\r\n Printer().visit_top_down(tree)\r\n```\r\n> a\r\nb\r\nc\r\nd\r\ne\r\nf\r\ng\r\n\r\nrecursive\r\n```python\r\n Printer_Recursive().visit_top_down(tree)\r\n```\r\n> a\r\nb\r\nc\r\nd\r\ne\r\nf\r\ng\r\n\r\nbottom-up\r\n```python\r\nPrinter().visit(tree)\r\n```\r\n>c\r\nd\r\nb\r\nf\r\ng\r\ne\r\na\r\n"}, {"time": 1572571485.0, "comment": "I think they are around here: https://github.com/lark-parser/lark/blob/master/tests/test_trees.py#L31 but I could not find the bottom up tests."}, {"time": 1572573441.0, "comment": "added on `tests/test_trees.py`"}, {"time": 1572611404.0, "comment": "Looks great! Just fix the formatting and I'll merge it in."}, {"time": 1572663967.0, "comment": "@erezsh is it okay now ?"}, {"time": 1572694880.0, "comment": "Yes, thanks!"}], "issues": {}}
|
|
matplotlib/matplotlib
| 19646
|
https://github.com/matplotlib/matplotlib/pull/19646
|
matplotlib__matplotlib-19646
|
[]
|
bd7df784719a3393bc7d6e7b471fb2b5ed52da50
|
diff --git a/lib/matplotlib/patches.py b/lib/matplotlib/patches.py
index a5bb2019db56..e72485915ef3 100644
--- a/lib/matplotlib/patches.py
+++ b/lib/matplotlib/patches.py
@@ -799,6 +799,10 @@ def get_height(self):
"""Return the height of the rectangle."""
return self._height
+ def get_angle(self):
+ """Get the rotation angle in degrees."""
+ return self.angle
+
def set_x(self, x):
"""Set the left coordinate of the rectangle."""
self._x0 = x
@@ -809,6 +813,15 @@ def set_y(self, y):
self._y0 = y
self.stale = True
+ def set_angle(self, angle):
+ """
+ Set the rotation angle in degrees.
+
+ The rotation is performed anti-clockwise around *xy*.
+ """
+ self.angle = angle
+ self.stale = True
+
def set_xy(self, xy):
"""
Set the left and bottom coordinates of the rectangle.
|
diff --git a/lib/matplotlib/tests/test_patches.py b/lib/matplotlib/tests/test_patches.py
index 2a908098364e..c54fbe5faea1 100644
--- a/lib/matplotlib/tests/test_patches.py
+++ b/lib/matplotlib/tests/test_patches.py
@@ -76,6 +76,27 @@ def test_rotate_rect():
assert_almost_equal(rect1.get_verts(), new_verts)
+@check_figures_equal(extensions=['png'])
+def test_rotate_rect_draw(fig_test, fig_ref):
+ ax_test = fig_test.add_subplot()
+ ax_ref = fig_ref.add_subplot()
+
+ loc = (0, 0)
+ width, height = (1, 1)
+ angle = 30
+ rect_ref = Rectangle(loc, width, height, angle=angle)
+ ax_ref.add_patch(rect_ref)
+ assert rect_ref.get_angle() == angle
+
+ # Check that when the angle is updated after adding to an axes, that the
+ # patch is marked stale and redrawn in the correct location
+ rect_test = Rectangle(loc, width, height)
+ assert rect_test.get_angle() == 0
+ ax_test.add_patch(rect_test)
+ rect_test.set_angle(angle)
+ assert rect_test.get_angle() == angle
+
+
def test_negative_rect():
# These two rectangles have the same vertices, but starting from a
# different point. (We also drop the last vertex, which is a duplicate.)
| 2021-03-05T20:31:48
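The patch above adds `get_angle` and `set_angle` to `Rectangle`, and the new `check_figures_equal` test verifies that changing the angle after the patch has been added to an axes marks it stale and redraws it in the rotated position. A minimal usage sketch of the new API follows; the getter/setter calls come directly from the diff, while the figure scaffolding is illustrative and assumes a matplotlib build that already contains this change.

```python
# Minimal usage sketch of the API added in this patch: rotate an existing
# Rectangle in place via set_angle() instead of re-creating it.
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

fig, ax = plt.subplots()
rect = Rectangle((0, 0), 1, 1)   # angle defaults to 0 degrees
ax.add_patch(rect)

rect.set_angle(30)               # rotates 30 deg anti-clockwise around xy
assert rect.get_angle() == 30    # getter added in the same patch

ax.set_xlim(-1, 2)
ax.set_ylim(-1, 2)
plt.show()
```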
|
{}
|
{"lib/matplotlib/patches.py": "import contextlib\nimport functools\nimport inspect\nimport math\nfrom numbers import Number\nimport textwrap\nfrom collections import namedtuple\n\nimport numpy as np\n\nimport matplotlib as mpl\nfrom . import (_api, artist, cbook, colors, docstring, hatch as mhatch,\n lines as mlines, transforms)\nfrom .bezier import (\n NonIntersectingPathException, get_cos_sin, get_intersection,\n get_parallels, inside_circle, make_wedged_bezier2,\n split_bezier_intersecting_with_closedpath, split_path_inout)\nfrom .path import Path\nfrom ._enums import JoinStyle, CapStyle\n\n\n@cbook._define_aliases({\n \"antialiased\": [\"aa\"],\n \"edgecolor\": [\"ec\"],\n \"facecolor\": [\"fc\"],\n \"linestyle\": [\"ls\"],\n \"linewidth\": [\"lw\"],\n})\nclass Patch(artist.Artist):\n \"\"\"\n A patch is a 2D artist with a face color and an edge color.\n\n If any of *edgecolor*, *facecolor*, *linewidth*, or *antialiased*\n are *None*, they default to their rc params setting.\n \"\"\"\n zorder = 1\n\n @_api.deprecated(\"3.4\")\n @_api.classproperty\n def validCap(cls):\n with _api.suppress_matplotlib_deprecation_warning():\n return mlines.Line2D.validCap\n\n @_api.deprecated(\"3.4\")\n @_api.classproperty\n def validJoin(cls):\n with _api.suppress_matplotlib_deprecation_warning():\n return mlines.Line2D.validJoin\n\n # Whether to draw an edge by default. Set on a\n # subclass-by-subclass basis.\n _edge_default = False\n\n def __init__(self,\n edgecolor=None,\n facecolor=None,\n color=None,\n linewidth=None,\n linestyle=None,\n antialiased=None,\n hatch=None,\n fill=True,\n capstyle=None,\n joinstyle=None,\n **kwargs):\n \"\"\"\n The following kwarg properties are supported\n\n %(Patch_kwdoc)s\n \"\"\"\n super().__init__()\n\n if linewidth is None:\n linewidth = mpl.rcParams['patch.linewidth']\n if linestyle is None:\n linestyle = \"solid\"\n if capstyle is None:\n capstyle = CapStyle.butt\n if joinstyle is None:\n joinstyle = JoinStyle.miter\n if antialiased is None:\n antialiased = mpl.rcParams['patch.antialiased']\n\n self._hatch_color = colors.to_rgba(mpl.rcParams['hatch.color'])\n self._fill = True # needed for set_facecolor call\n if color is not None:\n if edgecolor is not None or facecolor is not None:\n _api.warn_external(\n \"Setting the 'color' property will override \"\n \"the edgecolor or facecolor properties.\")\n self.set_color(color)\n else:\n self.set_edgecolor(edgecolor)\n self.set_facecolor(facecolor)\n # unscaled dashes. Needed to scale dash patterns by lw\n self._us_dashes = None\n self._linewidth = 0\n\n self.set_fill(fill)\n self.set_linestyle(linestyle)\n self.set_linewidth(linewidth)\n self.set_antialiased(antialiased)\n self.set_hatch(hatch)\n self.set_capstyle(capstyle)\n self.set_joinstyle(joinstyle)\n\n if len(kwargs):\n self.update(kwargs)\n\n def get_verts(self):\n \"\"\"\n Return a copy of the vertices used in this patch.\n\n If the patch contains Bezier curves, the curves will be interpolated by\n line segments. 
To access the curves as curves, use `get_path`.\n \"\"\"\n trans = self.get_transform()\n path = self.get_path()\n polygons = path.to_polygons(trans)\n if len(polygons):\n return polygons[0]\n return []\n\n def _process_radius(self, radius):\n if radius is not None:\n return radius\n if isinstance(self._picker, Number):\n _radius = self._picker\n else:\n if self.get_edgecolor()[3] == 0:\n _radius = 0\n else:\n _radius = self.get_linewidth()\n return _radius\n\n def contains(self, mouseevent, radius=None):\n \"\"\"\n Test whether the mouse event occurred in the patch.\n\n Returns\n -------\n (bool, empty dict)\n \"\"\"\n inside, info = self._default_contains(mouseevent)\n if inside is not None:\n return inside, info\n radius = self._process_radius(radius)\n codes = self.get_path().codes\n if codes is not None:\n vertices = self.get_path().vertices\n # if the current path is concatenated by multiple sub paths.\n # get the indexes of the starting code(MOVETO) of all sub paths\n idxs, = np.where(codes == Path.MOVETO)\n # Don't split before the first MOVETO.\n idxs = idxs[1:]\n subpaths = map(\n Path, np.split(vertices, idxs), np.split(codes, idxs))\n else:\n subpaths = [self.get_path()]\n inside = any(\n subpath.contains_point(\n (mouseevent.x, mouseevent.y), self.get_transform(), radius)\n for subpath in subpaths)\n return inside, {}\n\n def contains_point(self, point, radius=None):\n \"\"\"\n Return whether the given point is inside the patch.\n\n Parameters\n ----------\n point : (float, float)\n The point (x, y) to check, in target coordinates of\n ``self.get_transform()``. These are display coordinates for patches\n that are added to a figure or axes.\n radius : float, optional\n Add an additional margin on the patch in target coordinates of\n ``self.get_transform()``. See `.Path.contains_point` for further\n details.\n\n Returns\n -------\n bool\n\n Notes\n -----\n The proper use of this method depends on the transform of the patch.\n Isolated patches do not have a transform. In this case, the patch\n creation coordinates and the point coordinates match. The following\n example checks that the center of a circle is within the circle\n\n >>> center = 0, 0\n >>> c = Circle(center, radius=1)\n >>> c.contains_point(center)\n True\n\n The convention of checking against the transformed patch stems from\n the fact that this method is predominantly used to check if display\n coordinates (e.g. from mouse events) are within the patch. If you want\n to do the above check with data coordinates, you have to properly\n transform them first:\n\n >>> center = 0, 0\n >>> c = Circle(center, radius=1)\n >>> plt.gca().add_patch(c)\n >>> transformed_center = c.get_transform().transform(center)\n >>> c.contains_point(transformed_center)\n True\n\n \"\"\"\n radius = self._process_radius(radius)\n return self.get_path().contains_point(point,\n self.get_transform(),\n radius)\n\n def contains_points(self, points, radius=None):\n \"\"\"\n Return whether the given points are inside the patch.\n\n Parameters\n ----------\n points : (N, 2) array\n The points to check, in target coordinates of\n ``self.get_transform()``. These are display coordinates for patches\n that are added to a figure or axes. Columns contain x and y values.\n radius : float, optional\n Add an additional margin on the patch in target coordinates of\n ``self.get_transform()``. 
See `.Path.contains_point` for further\n details.\n\n Returns\n -------\n length-N bool array\n\n Notes\n -----\n The proper use of this method depends on the transform of the patch.\n See the notes on `.Patch.contains_point`.\n \"\"\"\n radius = self._process_radius(radius)\n return self.get_path().contains_points(points,\n self.get_transform(),\n radius)\n\n def update_from(self, other):\n # docstring inherited.\n super().update_from(other)\n # For some properties we don't need or don't want to go through the\n # getters/setters, so we just copy them directly.\n self._edgecolor = other._edgecolor\n self._facecolor = other._facecolor\n self._original_edgecolor = other._original_edgecolor\n self._original_facecolor = other._original_facecolor\n self._fill = other._fill\n self._hatch = other._hatch\n self._hatch_color = other._hatch_color\n # copy the unscaled dash pattern\n self._us_dashes = other._us_dashes\n self.set_linewidth(other._linewidth) # also sets dash properties\n self.set_transform(other.get_data_transform())\n # If the transform of other needs further initialization, then it will\n # be the case for this artist too.\n self._transformSet = other.is_transform_set()\n\n def get_extents(self):\n \"\"\"\n Return the `Patch`'s axis-aligned extents as a `~.transforms.Bbox`.\n \"\"\"\n return self.get_path().get_extents(self.get_transform())\n\n def get_transform(self):\n \"\"\"Return the `~.transforms.Transform` applied to the `Patch`.\"\"\"\n return self.get_patch_transform() + artist.Artist.get_transform(self)\n\n def get_data_transform(self):\n \"\"\"\n Return the `~.transforms.Transform` mapping data coordinates to\n physical coordinates.\n \"\"\"\n return artist.Artist.get_transform(self)\n\n def get_patch_transform(self):\n \"\"\"\n Return the `~.transforms.Transform` instance mapping patch coordinates\n to data coordinates.\n\n For example, one may define a patch of a circle which represents a\n radius of 5 by providing coordinates for a unit circle, and a\n transform which scales the coordinates (the patch coordinate) by 5.\n \"\"\"\n return transforms.IdentityTransform()\n\n def get_antialiased(self):\n \"\"\"Return whether antialiasing is used for drawing.\"\"\"\n return self._antialiased\n\n def get_edgecolor(self):\n \"\"\"Return the edge color.\"\"\"\n return self._edgecolor\n\n def get_facecolor(self):\n \"\"\"Return the face color.\"\"\"\n return self._facecolor\n\n def get_linewidth(self):\n \"\"\"Return the line width in points.\"\"\"\n return self._linewidth\n\n def get_linestyle(self):\n \"\"\"Return the linestyle.\"\"\"\n return self._linestyle\n\n def set_antialiased(self, aa):\n \"\"\"\n Set whether to use antialiased rendering.\n\n Parameters\n ----------\n b : bool or None\n \"\"\"\n if aa is None:\n aa = mpl.rcParams['patch.antialiased']\n self._antialiased = aa\n self.stale = True\n\n def _set_edgecolor(self, color):\n set_hatch_color = True\n if color is None:\n if (mpl.rcParams['patch.force_edgecolor'] or\n not self._fill or self._edge_default):\n color = mpl.rcParams['patch.edgecolor']\n else:\n color = 'none'\n set_hatch_color = False\n\n self._edgecolor = colors.to_rgba(color, self._alpha)\n if set_hatch_color:\n self._hatch_color = self._edgecolor\n self.stale = True\n\n def set_edgecolor(self, color):\n \"\"\"\n Set the patch edge color.\n\n Parameters\n ----------\n color : color or None or 'auto'\n \"\"\"\n self._original_edgecolor = color\n self._set_edgecolor(color)\n\n def _set_facecolor(self, color):\n if color is None:\n color = 
mpl.rcParams['patch.facecolor']\n alpha = self._alpha if self._fill else 0\n self._facecolor = colors.to_rgba(color, alpha)\n self.stale = True\n\n def set_facecolor(self, color):\n \"\"\"\n Set the patch face color.\n\n Parameters\n ----------\n color : color or None\n \"\"\"\n self._original_facecolor = color\n self._set_facecolor(color)\n\n def set_color(self, c):\n \"\"\"\n Set both the edgecolor and the facecolor.\n\n Parameters\n ----------\n c : color\n\n See Also\n --------\n Patch.set_facecolor, Patch.set_edgecolor\n For setting the edge or face color individually.\n \"\"\"\n self.set_facecolor(c)\n self.set_edgecolor(c)\n\n def set_alpha(self, alpha):\n # docstring inherited\n super().set_alpha(alpha)\n self._set_facecolor(self._original_facecolor)\n self._set_edgecolor(self._original_edgecolor)\n # stale is already True\n\n def set_linewidth(self, w):\n \"\"\"\n Set the patch linewidth in points.\n\n Parameters\n ----------\n w : float or None\n \"\"\"\n if w is None:\n w = mpl.rcParams['patch.linewidth']\n if w is None:\n w = mpl.rcParams['axes.linewidth']\n\n self._linewidth = float(w)\n # scale the dash pattern by the linewidth\n offset, ls = self._us_dashes\n self._dashoffset, self._dashes = mlines._scale_dashes(\n offset, ls, self._linewidth)\n self.stale = True\n\n def set_linestyle(self, ls):\n \"\"\"\n Set the patch linestyle.\n\n =========================== =================\n linestyle description\n =========================== =================\n ``'-'`` or ``'solid'`` solid line\n ``'--'`` or ``'dashed'`` dashed line\n ``'-.'`` or ``'dashdot'`` dash-dotted line\n ``':'`` or ``'dotted'`` dotted line\n ``'None'`` draw nothing\n ``'none'`` draw nothing\n ``' '`` draw nothing\n ``''`` draw nothing\n =========================== =================\n\n Alternatively a dash tuple of the following form can be provided::\n\n (offset, onoffseq)\n\n where ``onoffseq`` is an even length tuple of on and off ink in points.\n\n Parameters\n ----------\n ls : {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}\n The line style.\n \"\"\"\n if ls is None:\n ls = \"solid\"\n if ls in [' ', '', 'none']:\n ls = 'None'\n self._linestyle = ls\n # get the unscaled dash pattern\n offset, ls = self._us_dashes = mlines._get_dash_pattern(ls)\n # scale the dash pattern by the linewidth\n self._dashoffset, self._dashes = mlines._scale_dashes(\n offset, ls, self._linewidth)\n self.stale = True\n\n def set_fill(self, b):\n \"\"\"\n Set whether to fill the patch.\n\n Parameters\n ----------\n b : bool\n \"\"\"\n self._fill = bool(b)\n self._set_facecolor(self._original_facecolor)\n self._set_edgecolor(self._original_edgecolor)\n self.stale = True\n\n def get_fill(self):\n \"\"\"Return whether the patch is filled.\"\"\"\n return self._fill\n\n # Make fill a property so as to preserve the long-standing\n # but somewhat inconsistent behavior in which fill was an\n # attribute.\n fill = property(get_fill, set_fill)\n\n @docstring.interpd\n def set_capstyle(self, s):\n \"\"\"\n Set the `.CapStyle`.\n\n Parameters\n ----------\n s : `.CapStyle` or %(CapStyle)s\n \"\"\"\n cs = CapStyle(s)\n self._capstyle = cs\n self.stale = True\n\n def get_capstyle(self):\n \"\"\"Return the capstyle.\"\"\"\n return self._capstyle\n\n @docstring.interpd\n def set_joinstyle(self, s):\n \"\"\"\n Set the `.JoinStyle`.\n\n Parameters\n ----------\n s : `.JoinStyle` or %(JoinStyle)s\n \"\"\"\n js = JoinStyle(s)\n self._joinstyle = js\n self.stale = True\n\n def get_joinstyle(self):\n \"\"\"Return the joinstyle.\"\"\"\n return 
self._joinstyle\n\n def set_hatch(self, hatch):\n r\"\"\"\n Set the hatching pattern.\n\n *hatch* can be one of::\n\n / - diagonal hatching\n \\ - back diagonal\n | - vertical\n - - horizontal\n + - crossed\n x - crossed diagonal\n o - small circle\n O - large circle\n . - dots\n * - stars\n\n Letters can be combined, in which case all the specified\n hatchings are done. If same letter repeats, it increases the\n density of hatching of that pattern.\n\n Hatching is supported in the PostScript, PDF, SVG and Agg\n backends only.\n\n Parameters\n ----------\n hatch : {'/', '\\\\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}\n \"\"\"\n # Use validate_hatch(list) after deprecation.\n mhatch._validate_hatch_pattern(hatch)\n self._hatch = hatch\n self.stale = True\n\n def get_hatch(self):\n \"\"\"Return the hatching pattern.\"\"\"\n return self._hatch\n\n @contextlib.contextmanager\n def _bind_draw_path_function(self, renderer):\n \"\"\"\n ``draw()`` helper factored out for sharing with `FancyArrowPatch`.\n\n Yields a callable ``dp`` such that calling ``dp(*args, **kwargs)`` is\n equivalent to calling ``renderer1.draw_path(gc, *args, **kwargs)``\n where ``renderer1`` and ``gc`` have been suitably set from ``renderer``\n and the artist's properties.\n \"\"\"\n\n renderer.open_group('patch', self.get_gid())\n gc = renderer.new_gc()\n\n gc.set_foreground(self._edgecolor, isRGBA=True)\n\n lw = self._linewidth\n if self._edgecolor[3] == 0 or self._linestyle == 'None':\n lw = 0\n gc.set_linewidth(lw)\n gc.set_dashes(self._dashoffset, self._dashes)\n gc.set_capstyle(self._capstyle)\n gc.set_joinstyle(self._joinstyle)\n\n gc.set_antialiased(self._antialiased)\n self._set_gc_clip(gc)\n gc.set_url(self._url)\n gc.set_snap(self.get_snap())\n\n gc.set_alpha(self._alpha)\n\n if self._hatch:\n gc.set_hatch(self._hatch)\n gc.set_hatch_color(self._hatch_color)\n\n if self.get_sketch_params() is not None:\n gc.set_sketch_params(*self.get_sketch_params())\n\n if self.get_path_effects():\n from matplotlib.patheffects import PathEffectRenderer\n renderer = PathEffectRenderer(self.get_path_effects(), renderer)\n\n # In `with _bind_draw_path_function(renderer) as draw_path: ...`\n # (in the implementations of `draw()` below), calls to `draw_path(...)`\n # will occur as if they took place here with `gc` inserted as\n # additional first argument.\n yield functools.partial(renderer.draw_path, gc)\n\n gc.restore()\n renderer.close_group('patch')\n self.stale = False\n\n @artist.allow_rasterization\n def draw(self, renderer):\n # docstring inherited\n if not self.get_visible():\n return\n # Patch has traditionally ignored the dashoffset.\n with cbook._setattr_cm(self, _dashoffset=0), \\\n self._bind_draw_path_function(renderer) as draw_path:\n path = self.get_path()\n transform = self.get_transform()\n tpath = transform.transform_path_non_affine(path)\n affine = transform.get_affine()\n draw_path(tpath, affine,\n # Work around a bug in the PDF and SVG renderers, which\n # do not draw the hatches if the facecolor is fully\n # transparent, but do if it is None.\n self._facecolor if self._facecolor[3] else None)\n\n def get_path(self):\n \"\"\"Return the path of this patch.\"\"\"\n raise NotImplementedError('Derived must override')\n\n def get_window_extent(self, renderer=None):\n return self.get_path().get_extents(self.get_transform())\n\n def _convert_xy_units(self, xy):\n \"\"\"Convert x and y units for a tuple (x, y).\"\"\"\n x = self.convert_xunits(xy[0])\n y = self.convert_yunits(xy[1])\n return x, y\n\n\n_patch_kwdoc = 
artist.kwdoc(Patch)\nfor k in ['Rectangle', 'Circle', 'RegularPolygon', 'Polygon', 'Wedge', 'Arrow',\n 'FancyArrow', 'CirclePolygon', 'Ellipse', 'Arc', 'FancyBboxPatch',\n 'Patch']:\n docstring.interpd.update({f'{k}_kwdoc': _patch_kwdoc})\n\n# define Patch.__init__ docstring after the class has been added to interpd\ndocstring.dedent_interpd(Patch.__init__)\n\n\nclass Shadow(Patch):\n def __str__(self):\n return \"Shadow(%s)\" % (str(self.patch))\n\n @_api.delete_parameter(\"3.3\", \"props\")\n @docstring.dedent_interpd\n def __init__(self, patch, ox, oy, props=None, **kwargs):\n \"\"\"\n Create a shadow of the given *patch*.\n\n By default, the shadow will have the same face color as the *patch*,\n but darkened.\n\n Parameters\n ----------\n patch : `.Patch`\n The patch to create the shadow for.\n ox, oy : float\n The shift of the shadow in data coordinates, scaled by a factor\n of dpi/72.\n props : dict\n *deprecated (use kwargs instead)* Properties of the shadow patch.\n **kwargs\n Properties of the shadow patch. Supported keys are:\n\n %(Patch_kwdoc)s\n \"\"\"\n super().__init__()\n self.patch = patch\n # Note: when removing props, we can directly pass kwargs to _update()\n # and remove self._props\n if props is None:\n color = .3 * np.asarray(colors.to_rgb(self.patch.get_facecolor()))\n props = {\n 'facecolor': color,\n 'edgecolor': color,\n 'alpha': 0.5,\n }\n self._props = {**props, **kwargs}\n self._ox, self._oy = ox, oy\n self._shadow_transform = transforms.Affine2D()\n self._update()\n\n props = _api.deprecate_privatize_attribute(\"3.3\")\n\n def _update(self):\n self.update_from(self.patch)\n\n # Place the shadow patch directly behind the inherited patch.\n self.set_zorder(np.nextafter(self.patch.zorder, -np.inf))\n\n self.update(self._props)\n\n def _update_transform(self, renderer):\n ox = renderer.points_to_pixels(self._ox)\n oy = renderer.points_to_pixels(self._oy)\n self._shadow_transform.clear().translate(ox, oy)\n\n def get_path(self):\n return self.patch.get_path()\n\n def get_patch_transform(self):\n return self.patch.get_patch_transform() + self._shadow_transform\n\n def draw(self, renderer):\n self._update_transform(renderer)\n super().draw(renderer)\n\n\nclass Rectangle(Patch):\n \"\"\"\n A rectangle defined via an anchor point *xy* and its *width* and *height*.\n\n The rectangle extends from ``xy[0]`` to ``xy[0] + width`` in x-direction\n and from ``xy[1]`` to ``xy[1] + height`` in y-direction. ::\n\n : +------------------+\n : | |\n : height |\n : | |\n : (xy)---- width -----+\n\n One may picture *xy* as the bottom left corner, but which corner *xy* is\n actually depends on the the direction of the axis and the sign of *width*\n and *height*; e.g. 
*xy* would be the bottom right corner if the x-axis\n was inverted or if *width* was negative.\n \"\"\"\n\n def __str__(self):\n pars = self._x0, self._y0, self._width, self._height, self.angle\n fmt = \"Rectangle(xy=(%g, %g), width=%g, height=%g, angle=%g)\"\n return fmt % pars\n\n @docstring.dedent_interpd\n def __init__(self, xy, width, height, angle=0.0, **kwargs):\n \"\"\"\n Parameters\n ----------\n xy : (float, float)\n The anchor point.\n width : float\n Rectangle width.\n height : float\n Rectangle height.\n angle : float, default: 0\n Rotation in degrees anti-clockwise about *xy*.\n\n Other Parameters\n ----------------\n **kwargs : `.Patch` properties\n %(Patch_kwdoc)s\n \"\"\"\n super().__init__(**kwargs)\n self._x0 = xy[0]\n self._y0 = xy[1]\n self._width = width\n self._height = height\n self.angle = float(angle)\n self._convert_units() # Validate the inputs.\n\n def get_path(self):\n \"\"\"Return the vertices of the rectangle.\"\"\"\n return Path.unit_rectangle()\n\n def _convert_units(self):\n \"\"\"Convert bounds of the rectangle.\"\"\"\n x0 = self.convert_xunits(self._x0)\n y0 = self.convert_yunits(self._y0)\n x1 = self.convert_xunits(self._x0 + self._width)\n y1 = self.convert_yunits(self._y0 + self._height)\n return x0, y0, x1, y1\n\n def get_patch_transform(self):\n # Note: This cannot be called until after this has been added to\n # an Axes, otherwise unit conversion will fail. This makes it very\n # important to call the accessor method and not directly access the\n # transformation member variable.\n bbox = self.get_bbox()\n return (transforms.BboxTransformTo(bbox)\n + transforms.Affine2D().rotate_deg_around(\n bbox.x0, bbox.y0, self.angle))\n\n def get_x(self):\n \"\"\"Return the left coordinate of the rectangle.\"\"\"\n return self._x0\n\n def get_y(self):\n \"\"\"Return the bottom coordinate of the rectangle.\"\"\"\n return self._y0\n\n def get_xy(self):\n \"\"\"Return the left and bottom coords of the rectangle as a tuple.\"\"\"\n return self._x0, self._y0\n\n def get_width(self):\n \"\"\"Return the width of the rectangle.\"\"\"\n return self._width\n\n def get_height(self):\n \"\"\"Return the height of the rectangle.\"\"\"\n return self._height\n\n def set_x(self, x):\n \"\"\"Set the left coordinate of the rectangle.\"\"\"\n self._x0 = x\n self.stale = True\n\n def set_y(self, y):\n \"\"\"Set the bottom coordinate of the rectangle.\"\"\"\n self._y0 = y\n self.stale = True\n\n def set_xy(self, xy):\n \"\"\"\n Set the left and bottom coordinates of the rectangle.\n\n Parameters\n ----------\n xy : (float, float)\n \"\"\"\n self._x0, self._y0 = xy\n self.stale = True\n\n def set_width(self, w):\n \"\"\"Set the width of the rectangle.\"\"\"\n self._width = w\n self.stale = True\n\n def set_height(self, h):\n \"\"\"Set the height of the rectangle.\"\"\"\n self._height = h\n self.stale = True\n\n def set_bounds(self, *args):\n \"\"\"\n Set the bounds of the rectangle as *left*, *bottom*, *width*, *height*.\n\n The values may be passed as separate parameters or as a tuple::\n\n set_bounds(left, bottom, width, height)\n set_bounds((left, bottom, width, height))\n\n .. 
ACCEPTS: (left, bottom, width, height)\n \"\"\"\n if len(args) == 1:\n l, b, w, h = args[0]\n else:\n l, b, w, h = args\n self._x0 = l\n self._y0 = b\n self._width = w\n self._height = h\n self.stale = True\n\n def get_bbox(self):\n \"\"\"Return the `.Bbox`.\"\"\"\n x0, y0, x1, y1 = self._convert_units()\n return transforms.Bbox.from_extents(x0, y0, x1, y1)\n\n xy = property(get_xy, set_xy)\n\n\nclass RegularPolygon(Patch):\n \"\"\"A regular polygon patch.\"\"\"\n\n def __str__(self):\n s = \"RegularPolygon((%g, %g), %d, radius=%g, orientation=%g)\"\n return s % (self.xy[0], self.xy[1], self.numvertices, self.radius,\n self.orientation)\n\n @docstring.dedent_interpd\n def __init__(self, xy, numVertices, radius=5, orientation=0,\n **kwargs):\n \"\"\"\n Parameters\n ----------\n xy : (float, float)\n The center position.\n\n numVertices : int\n The number of vertices.\n\n radius : float\n The distance from the center to each of the vertices.\n\n orientation : float\n The polygon rotation angle (in radians).\n\n **kwargs\n `Patch` properties:\n\n %(Patch_kwdoc)s\n \"\"\"\n self.xy = xy\n self.numvertices = numVertices\n self.orientation = orientation\n self.radius = radius\n self._path = Path.unit_regular_polygon(numVertices)\n self._patch_transform = transforms.Affine2D()\n super().__init__(**kwargs)\n\n def get_path(self):\n return self._path\n\n def get_patch_transform(self):\n return self._patch_transform.clear() \\\n .scale(self.radius) \\\n .rotate(self.orientation) \\\n .translate(*self.xy)\n\n\nclass PathPatch(Patch):\n \"\"\"A general polycurve path patch.\"\"\"\n\n _edge_default = True\n\n def __str__(self):\n s = \"PathPatch%d((%g, %g) ...)\"\n return s % (len(self._path.vertices), *tuple(self._path.vertices[0]))\n\n @docstring.dedent_interpd\n def __init__(self, path, **kwargs):\n \"\"\"\n *path* is a `~.path.Path` object.\n\n Valid keyword arguments are:\n\n %(Patch_kwdoc)s\n \"\"\"\n super().__init__(**kwargs)\n self._path = path\n\n def get_path(self):\n return self._path\n\n def set_path(self, path):\n self._path = path\n\n\nclass StepPatch(PathPatch):\n \"\"\"\n A path patch describing a stepwise constant function.\n\n By default the path is not closed and starts and stops at\n baseline value.\n \"\"\"\n\n _edge_default = False\n\n @docstring.dedent_interpd\n def __init__(self, values, edges, *,\n orientation='vertical', baseline=0, **kwargs):\n \"\"\"\n Parameters\n ----------\n values : array-like\n The step heights.\n\n edges : array-like\n The edge positions, with ``len(edges) == len(vals) + 1``,\n between which the curve takes on vals values.\n\n orientation : {'vertical', 'horizontal'}, default: 'vertical'\n The direction of the steps. Vertical means that *values* are\n along the y-axis, and edges are along the x-axis.\n\n baseline : float, array-like or None, default: 0\n The bottom value of the bounding edges or when\n ``fill=True``, position of lower edge. 
If *fill* is\n True or an array is passed to *baseline*, a closed\n path is drawn.\n\n Other valid keyword arguments are:\n\n %(Patch_kwdoc)s\n \"\"\"\n self.orientation = orientation\n self._edges = np.asarray(edges)\n self._values = np.asarray(values)\n self._baseline = np.asarray(baseline) if baseline is not None else None\n self._update_path()\n super().__init__(self._path, **kwargs)\n\n def _update_path(self):\n if np.isnan(np.sum(self._edges)):\n raise ValueError('Nan values in \"edges\" are disallowed')\n if self._edges.size - 1 != self._values.size:\n raise ValueError('Size mismatch between \"values\" and \"edges\". '\n \"Expected `len(values) + 1 == len(edges)`, but \"\n f\"`len(values) = {self._values.size}` and \"\n f\"`len(edges) = {self._edges.size}`.\")\n # Initializing with empty arrays allows supporting empty stairs.\n verts, codes = [np.empty((0, 2))], [np.empty(0, dtype=Path.code_type)]\n\n _nan_mask = np.isnan(self._values)\n if self._baseline is not None:\n _nan_mask |= np.isnan(self._baseline)\n for idx0, idx1 in cbook.contiguous_regions(~_nan_mask):\n x = np.repeat(self._edges[idx0:idx1+1], 2)\n y = np.repeat(self._values[idx0:idx1], 2)\n if self._baseline is None:\n y = np.concatenate([y[:1], y, y[-1:]])\n elif self._baseline.ndim == 0: # single baseline value\n y = np.concatenate([[self._baseline], y, [self._baseline]])\n elif self._baseline.ndim == 1: # baseline array\n base = np.repeat(self._baseline[idx0:idx1], 2)[::-1]\n x = np.concatenate([x, x[::-1]])\n y = np.concatenate([base[-1:], y, base[:1],\n base[:1], base, base[-1:]])\n else: # no baseline\n raise ValueError('Invalid `baseline` specified')\n if self.orientation == 'vertical':\n xy = np.column_stack([x, y])\n else:\n xy = np.column_stack([y, x])\n verts.append(xy)\n codes.append([Path.MOVETO] + [Path.LINETO]*(len(xy)-1))\n self._path = Path(np.concatenate(verts), np.concatenate(codes))\n\n def get_data(self):\n \"\"\"Get `.StepPatch` values, edges and baseline as namedtuple.\"\"\"\n StairData = namedtuple('StairData', 'values edges baseline')\n return StairData(self._values, self._edges, self._baseline)\n\n def set_data(self, values=None, edges=None, baseline=None):\n \"\"\"\n Set `.StepPatch` values, edges and baseline.\n\n Parameters\n ----------\n values : 1D array-like or None\n Will not update values, if passing None\n edges : 1D array-like, optional\n baseline : float, 1D array-like or None\n \"\"\"\n if values is None and edges is None and baseline is None:\n raise ValueError(\"Must set *values*, *edges* or *baseline*.\")\n if values is not None:\n self._values = np.asarray(values)\n if edges is not None:\n self._edges = np.asarray(edges)\n if baseline is not None:\n self._baseline = np.asarray(baseline)\n self._update_path()\n self.stale = True\n\n\nclass Polygon(Patch):\n \"\"\"A general polygon patch.\"\"\"\n\n def __str__(self):\n s = \"Polygon%d((%g, %g) ...)\"\n return s % (len(self._path.vertices), *tuple(self._path.vertices[0]))\n\n @docstring.dedent_interpd\n def __init__(self, xy, closed=True, **kwargs):\n \"\"\"\n *xy* is a numpy array with shape Nx2.\n\n If *closed* is *True*, the polygon will be closed so the\n starting and ending points are the same.\n\n Valid keyword arguments are:\n\n %(Patch_kwdoc)s\n \"\"\"\n super().__init__(**kwargs)\n self._closed = closed\n self.set_xy(xy)\n\n def get_path(self):\n \"\"\"Get the `.Path` of the polygon.\"\"\"\n return self._path\n\n def get_closed(self):\n \"\"\"Return whether the polygon is closed.\"\"\"\n return self._closed\n\n def 
set_closed(self, closed):\n \"\"\"\n Set whether the polygon is closed.\n\n Parameters\n ----------\n closed : bool\n True if the polygon is closed\n \"\"\"\n if self._closed == bool(closed):\n return\n self._closed = bool(closed)\n self.set_xy(self.get_xy())\n self.stale = True\n\n def get_xy(self):\n \"\"\"\n Get the vertices of the path.\n\n Returns\n -------\n (N, 2) numpy array\n The coordinates of the vertices.\n \"\"\"\n return self._path.vertices\n\n def set_xy(self, xy):\n \"\"\"\n Set the vertices of the polygon.\n\n Parameters\n ----------\n xy : (N, 2) array-like\n The coordinates of the vertices.\n\n Notes\n -----\n Unlike `~.path.Path`, we do not ignore the last input vertex. If the\n polygon is meant to be closed, and the last point of the polygon is not\n equal to the first, we assume that the user has not explicitly passed a\n ``CLOSEPOLY`` vertex, and add it ourselves.\n \"\"\"\n xy = np.asarray(xy)\n nverts, _ = xy.shape\n if self._closed:\n # if the first and last vertex are the \"same\", then we assume that\n # the user explicitly passed the CLOSEPOLY vertex. Otherwise, we\n # have to append one since the last vertex will be \"ignored\" by\n # Path\n if nverts == 1 or nverts > 1 and (xy[0] != xy[-1]).any():\n xy = np.concatenate([xy, [xy[0]]])\n else:\n # if we aren't closed, and the last vertex matches the first, then\n # we assume we have an unnecessary CLOSEPOLY vertex and remove it\n if nverts > 2 and (xy[0] == xy[-1]).all():\n xy = xy[:-1]\n self._path = Path(xy, closed=self._closed)\n self.stale = True\n\n xy = property(get_xy, set_xy,\n doc='The vertices of the path as (N, 2) numpy array.')\n\n\nclass Wedge(Patch):\n \"\"\"Wedge shaped patch.\"\"\"\n\n def __str__(self):\n pars = (self.center[0], self.center[1], self.r,\n self.theta1, self.theta2, self.width)\n fmt = \"Wedge(center=(%g, %g), r=%g, theta1=%g, theta2=%g, width=%s)\"\n return fmt % pars\n\n @docstring.dedent_interpd\n def __init__(self, center, r, theta1, theta2, width=None, **kwargs):\n \"\"\"\n A wedge centered at *x*, *y* center with radius *r* that\n sweeps *theta1* to *theta2* (in degrees). 
If *width* is given,\n then a partial wedge is drawn from inner radius *r* - *width*\n to outer radius *r*.\n\n Valid keyword arguments are:\n\n %(Patch_kwdoc)s\n \"\"\"\n super().__init__(**kwargs)\n self.center = center\n self.r, self.width = r, width\n self.theta1, self.theta2 = theta1, theta2\n self._patch_transform = transforms.IdentityTransform()\n self._recompute_path()\n\n def _recompute_path(self):\n # Inner and outer rings are connected unless the annulus is complete\n if abs((self.theta2 - self.theta1) - 360) <= 1e-12:\n theta1, theta2 = 0, 360\n connector = Path.MOVETO\n else:\n theta1, theta2 = self.theta1, self.theta2\n connector = Path.LINETO\n\n # Form the outer ring\n arc = Path.arc(theta1, theta2)\n\n if self.width is not None:\n # Partial annulus needs to draw the outer ring\n # followed by a reversed and scaled inner ring\n v1 = arc.vertices\n v2 = arc.vertices[::-1] * (self.r - self.width) / self.r\n v = np.concatenate([v1, v2, [v1[0, :], (0, 0)]])\n c = np.concatenate([\n arc.codes, arc.codes, [connector, Path.CLOSEPOLY]])\n c[len(arc.codes)] = connector\n else:\n # Wedge doesn't need an inner ring\n v = np.concatenate([\n arc.vertices, [(0, 0), arc.vertices[0, :], (0, 0)]])\n c = np.concatenate([\n arc.codes, [connector, connector, Path.CLOSEPOLY]])\n\n # Shift and scale the wedge to the final location.\n v *= self.r\n v += np.asarray(self.center)\n self._path = Path(v, c)\n\n def set_center(self, center):\n self._path = None\n self.center = center\n self.stale = True\n\n def set_radius(self, radius):\n self._path = None\n self.r = radius\n self.stale = True\n\n def set_theta1(self, theta1):\n self._path = None\n self.theta1 = theta1\n self.stale = True\n\n def set_theta2(self, theta2):\n self._path = None\n self.theta2 = theta2\n self.stale = True\n\n def set_width(self, width):\n self._path = None\n self.width = width\n self.stale = True\n\n def get_path(self):\n if self._path is None:\n self._recompute_path()\n return self._path\n\n\n# COVERAGE NOTE: Not used internally or from examples\nclass Arrow(Patch):\n \"\"\"An arrow patch.\"\"\"\n\n def __str__(self):\n return \"Arrow()\"\n\n _path = Path([[0.0, 0.1], [0.0, -0.1],\n [0.8, -0.1], [0.8, -0.3],\n [1.0, 0.0], [0.8, 0.3],\n [0.8, 0.1], [0.0, 0.1]],\n closed=True)\n\n @docstring.dedent_interpd\n def __init__(self, x, y, dx, dy, width=1.0, **kwargs):\n \"\"\"\n Draws an arrow from (*x*, *y*) to (*x* + *dx*, *y* + *dy*).\n The width of the arrow is scaled by *width*.\n\n Parameters\n ----------\n x : float\n x coordinate of the arrow tail.\n y : float\n y coordinate of the arrow tail.\n dx : float\n Arrow length in the x direction.\n dy : float\n Arrow length in the y direction.\n width : float, default: 1\n Scale factor for the width of the arrow. 
With a default value of 1,\n the tail width is 0.2 and head width is 0.6.\n **kwargs\n Keyword arguments control the `Patch` properties:\n\n %(Patch_kwdoc)s\n\n See Also\n --------\n FancyArrow\n Patch that allows independent control of the head and tail\n properties.\n \"\"\"\n super().__init__(**kwargs)\n self._patch_transform = (\n transforms.Affine2D()\n .scale(np.hypot(dx, dy), width)\n .rotate(np.arctan2(dy, dx))\n .translate(x, y)\n .frozen())\n\n def get_path(self):\n return self._path\n\n def get_patch_transform(self):\n return self._patch_transform\n\n\nclass FancyArrow(Polygon):\n \"\"\"\n Like Arrow, but lets you set head width and head height independently.\n \"\"\"\n\n _edge_default = True\n\n def __str__(self):\n return \"FancyArrow()\"\n\n @docstring.dedent_interpd\n def __init__(self, x, y, dx, dy, width=0.001, length_includes_head=False,\n head_width=None, head_length=None, shape='full', overhang=0,\n head_starts_at_zero=False, **kwargs):\n \"\"\"\n Parameters\n ----------\n width : float, default: 0.001\n Width of full arrow tail.\n\n length_includes_head : bool, default: False\n True if head is to be counted in calculating the length.\n\n head_width : float or None, default: 3*width\n Total width of the full arrow head.\n\n head_length : float or None, default: 1.5*head_width\n Length of arrow head.\n\n shape : {'full', 'left', 'right'}, default: 'full'\n Draw the left-half, right-half, or full arrow.\n\n overhang : float, default: 0\n Fraction that the arrow is swept back (0 overhang means\n triangular shape). Can be negative or greater than one.\n\n head_starts_at_zero : bool, default: False\n If True, the head starts being drawn at coordinate 0\n instead of ending at coordinate 0.\n\n **kwargs\n `.Patch` properties:\n\n %(Patch_kwdoc)s\n \"\"\"\n if head_width is None:\n head_width = 3 * width\n if head_length is None:\n head_length = 1.5 * head_width\n\n distance = np.hypot(dx, dy)\n\n if length_includes_head:\n length = distance\n else:\n length = distance + head_length\n if not length:\n verts = np.empty([0, 2]) # display nothing if empty\n else:\n # start by drawing horizontal arrow, point at (0, 0)\n hw, hl, hs, lw = head_width, head_length, overhang, width\n left_half_arrow = np.array([\n [0.0, 0.0], # tip\n [-hl, -hw / 2], # leftmost\n [-hl * (1 - hs), -lw / 2], # meets stem\n [-length, -lw / 2], # bottom left\n [-length, 0],\n ])\n # if we're not including the head, shift up by head length\n if not length_includes_head:\n left_half_arrow += [head_length, 0]\n # if the head starts at 0, shift up by another head length\n if head_starts_at_zero:\n left_half_arrow += [head_length / 2, 0]\n # figure out the shape, and complete accordingly\n if shape == 'left':\n coords = left_half_arrow\n else:\n right_half_arrow = left_half_arrow * [1, -1]\n if shape == 'right':\n coords = right_half_arrow\n elif shape == 'full':\n # The half-arrows contain the midpoint of the stem,\n # which we can omit from the full arrow. 
Including it\n # twice caused a problem with xpdf.\n coords = np.concatenate([left_half_arrow[:-1],\n right_half_arrow[-2::-1]])\n else:\n raise ValueError(\"Got unknown shape: %s\" % shape)\n if distance != 0:\n cx = dx / distance\n sx = dy / distance\n else:\n # Account for division by zero\n cx, sx = 0, 1\n M = [[cx, sx], [-sx, cx]]\n verts = np.dot(coords, M) + (x + dx, y + dy)\n\n super().__init__(verts, closed=True, **kwargs)\n\n\ndocstring.interpd.update(\n FancyArrow=\"\\n\".join(inspect.getdoc(FancyArrow.__init__).splitlines()[2:]))\n\n\nclass CirclePolygon(RegularPolygon):\n \"\"\"A polygon-approximation of a circle patch.\"\"\"\n\n def __str__(self):\n s = \"CirclePolygon((%g, %g), radius=%g, resolution=%d)\"\n return s % (self.xy[0], self.xy[1], self.radius, self.numvertices)\n\n @docstring.dedent_interpd\n def __init__(self, xy, radius=5,\n resolution=20, # the number of vertices\n ** kwargs):\n \"\"\"\n Create a circle at *xy* = (*x*, *y*) with given *radius*.\n\n This circle is approximated by a regular polygon with *resolution*\n sides. For a smoother circle drawn with splines, see `Circle`.\n\n Valid keyword arguments are:\n\n %(Patch_kwdoc)s\n \"\"\"\n super().__init__(xy, resolution, radius, orientation=0, **kwargs)\n\n\nclass Ellipse(Patch):\n \"\"\"A scale-free ellipse.\"\"\"\n\n def __str__(self):\n pars = (self._center[0], self._center[1],\n self.width, self.height, self.angle)\n fmt = \"Ellipse(xy=(%s, %s), width=%s, height=%s, angle=%s)\"\n return fmt % pars\n\n @docstring.dedent_interpd\n def __init__(self, xy, width, height, angle=0, **kwargs):\n \"\"\"\n Parameters\n ----------\n xy : (float, float)\n xy coordinates of ellipse centre.\n width : float\n Total length (diameter) of horizontal axis.\n height : float\n Total length (diameter) of vertical axis.\n angle : float, default: 0\n Rotation in degrees anti-clockwise.\n\n Notes\n -----\n Valid keyword arguments are:\n\n %(Patch_kwdoc)s\n \"\"\"\n super().__init__(**kwargs)\n\n self._center = xy\n self._width, self._height = width, height\n self._angle = angle\n self._path = Path.unit_circle()\n # Note: This cannot be calculated until this is added to an Axes\n self._patch_transform = transforms.IdentityTransform()\n\n def _recompute_transform(self):\n \"\"\"\n Notes\n -----\n This cannot be called until after this has been added to an Axes,\n otherwise unit conversion will fail. 
This makes it very important to\n call the accessor method and not directly access the transformation\n member variable.\n \"\"\"\n center = (self.convert_xunits(self._center[0]),\n self.convert_yunits(self._center[1]))\n width = self.convert_xunits(self._width)\n height = self.convert_yunits(self._height)\n self._patch_transform = transforms.Affine2D() \\\n .scale(width * 0.5, height * 0.5) \\\n .rotate_deg(self.angle) \\\n .translate(*center)\n\n def get_path(self):\n \"\"\"Return the path of the ellipse.\"\"\"\n return self._path\n\n def get_patch_transform(self):\n self._recompute_transform()\n return self._patch_transform\n\n def set_center(self, xy):\n \"\"\"\n Set the center of the ellipse.\n\n Parameters\n ----------\n xy : (float, float)\n \"\"\"\n self._center = xy\n self.stale = True\n\n def get_center(self):\n \"\"\"Return the center of the ellipse.\"\"\"\n return self._center\n\n center = property(get_center, set_center)\n\n def set_width(self, width):\n \"\"\"\n Set the width of the ellipse.\n\n Parameters\n ----------\n width : float\n \"\"\"\n self._width = width\n self.stale = True\n\n def get_width(self):\n \"\"\"\n Return the width of the ellipse.\n \"\"\"\n return self._width\n\n width = property(get_width, set_width)\n\n def set_height(self, height):\n \"\"\"\n Set the height of the ellipse.\n\n Parameters\n ----------\n height : float\n \"\"\"\n self._height = height\n self.stale = True\n\n def get_height(self):\n \"\"\"Return the height of the ellipse.\"\"\"\n return self._height\n\n height = property(get_height, set_height)\n\n def set_angle(self, angle):\n \"\"\"\n Set the angle of the ellipse.\n\n Parameters\n ----------\n angle : float\n \"\"\"\n self._angle = angle\n self.stale = True\n\n def get_angle(self):\n \"\"\"Return the angle of the ellipse.\"\"\"\n return self._angle\n\n angle = property(get_angle, set_angle)\n\n\nclass Circle(Ellipse):\n \"\"\"A circle patch.\"\"\"\n\n def __str__(self):\n pars = self.center[0], self.center[1], self.radius\n fmt = \"Circle(xy=(%g, %g), radius=%g)\"\n return fmt % pars\n\n @docstring.dedent_interpd\n def __init__(self, xy, radius=5, **kwargs):\n \"\"\"\n Create a true circle at center *xy* = (*x*, *y*) with given *radius*.\n\n Unlike `CirclePolygon` which is a polygonal approximation, this uses\n Bezier splines and is much closer to a scale-free circle.\n\n Valid keyword arguments are:\n\n %(Patch_kwdoc)s\n \"\"\"\n super().__init__(xy, radius * 2, radius * 2, **kwargs)\n self.radius = radius\n\n def set_radius(self, radius):\n \"\"\"\n Set the radius of the circle.\n\n Parameters\n ----------\n radius : float\n \"\"\"\n self.width = self.height = 2 * radius\n self.stale = True\n\n def get_radius(self):\n \"\"\"Return the radius of the circle.\"\"\"\n return self.width / 2.\n\n radius = property(get_radius, set_radius)\n\n\nclass Arc(Ellipse):\n \"\"\"\n An elliptical arc, i.e. a segment of an ellipse.\n\n Due to internal optimizations, there are certain restrictions on using Arc:\n\n - The arc cannot be filled.\n\n - The arc must be used in an `~.axes.Axes` instance. 
It can not be added\n directly to a `.Figure` because it is optimized to only render the\n segments that are inside the axes bounding box with high resolution.\n \"\"\"\n def __str__(self):\n pars = (self.center[0], self.center[1], self.width,\n self.height, self.angle, self.theta1, self.theta2)\n fmt = (\"Arc(xy=(%g, %g), width=%g, \"\n \"height=%g, angle=%g, theta1=%g, theta2=%g)\")\n return fmt % pars\n\n @docstring.dedent_interpd\n def __init__(self, xy, width, height, angle=0.0,\n theta1=0.0, theta2=360.0, **kwargs):\n \"\"\"\n Parameters\n ----------\n xy : (float, float)\n The center of the ellipse.\n\n width : float\n The length of the horizontal axis.\n\n height : float\n The length of the vertical axis.\n\n angle : float\n Rotation of the ellipse in degrees (counterclockwise).\n\n theta1, theta2 : float, default: 0, 360\n Starting and ending angles of the arc in degrees. These values\n are relative to *angle*, e.g. if *angle* = 45 and *theta1* = 90\n the absolute starting angle is 135.\n Default *theta1* = 0, *theta2* = 360, i.e. a complete ellipse.\n The arc is drawn in the counterclockwise direction.\n Angles greater than or equal to 360, or smaller than 0, are\n represented by an equivalent angle in the range [0, 360), by\n taking the input value mod 360.\n\n Other Parameters\n ----------------\n **kwargs : `.Patch` properties\n Most `.Patch` properties are supported as keyword arguments,\n with the exception of *fill* and *facecolor* because filling is\n not supported.\n\n %(Patch_kwdoc)s\n \"\"\"\n fill = kwargs.setdefault('fill', False)\n if fill:\n raise ValueError(\"Arc objects can not be filled\")\n\n super().__init__(xy, width, height, angle, **kwargs)\n\n self.theta1 = theta1\n self.theta2 = theta2\n\n @artist.allow_rasterization\n def draw(self, renderer):\n \"\"\"\n Draw the arc to the given *renderer*.\n\n Notes\n -----\n Ellipses are normally drawn using an approximation that uses\n eight cubic Bezier splines. The error of this approximation\n is 1.89818e-6, according to this unverified source:\n\n Lancaster, Don. *Approximating a Circle or an Ellipse Using\n Four Bezier Cubic Splines.*\n\n https://www.tinaja.com/glib/ellipse4.pdf\n\n There is a use case where very large ellipses must be drawn\n with very high accuracy, and it is too expensive to render the\n entire ellipse with enough segments (either splines or line\n segments). Therefore, in the case where either radius of the\n ellipse is large enough that the error of the spline\n approximation will be visible (greater than one pixel offset\n from the ideal), a different technique is used.\n\n In that case, only the visible parts of the ellipse are drawn,\n with each visible arc using a fixed number of spline segments\n (8). The algorithm proceeds as follows:\n\n 1. The points where the ellipse intersects the axes bounding\n box are located. (This is done be performing an inverse\n transformation on the axes bbox such that it is relative\n to the unit circle -- this makes the intersection\n calculation much easier than doing rotated ellipse\n intersection directly).\n\n This uses the \"line intersecting a circle\" algorithm from:\n\n Vince, John. *Geometry for Computer Graphics: Formulae,\n Examples & Proofs.* London: Springer-Verlag, 2005.\n\n 2. The angles of each of the intersection points are calculated.\n\n 3. 
Proceeding counterclockwise starting in the positive\n x-direction, each of the visible arc-segments between the\n pairs of vertices are drawn using the Bezier arc\n approximation technique implemented in `.Path.arc`.\n \"\"\"\n if not hasattr(self, 'axes'):\n raise RuntimeError('Arcs can only be used in Axes instances')\n if not self.get_visible():\n return\n\n self._recompute_transform()\n\n width = self.convert_xunits(self.width)\n height = self.convert_yunits(self.height)\n\n # If the width and height of ellipse are not equal, take into account\n # stretching when calculating angles to draw between\n def theta_stretch(theta, scale):\n theta = np.deg2rad(theta)\n x = np.cos(theta)\n y = np.sin(theta)\n stheta = np.rad2deg(np.arctan2(scale * y, x))\n # arctan2 has the range [-pi, pi], we expect [0, 2*pi]\n return (stheta + 360) % 360\n\n theta1 = self.theta1\n theta2 = self.theta2\n\n if (\n # if we need to stretch the angles because we are distorted\n width != height\n # and we are not doing a full circle.\n #\n # 0 and 360 do not exactly round-trip through the angle\n # stretching (due to both float precision limitations and\n # the difference between the range of arctan2 [-pi, pi] and\n # this method [0, 360]) so avoid doing it if we don't have to.\n and not (theta1 != theta2 and theta1 % 360 == theta2 % 360)\n ):\n theta1 = theta_stretch(self.theta1, width / height)\n theta2 = theta_stretch(self.theta2, width / height)\n\n # Get width and height in pixels we need to use\n # `self.get_data_transform` rather than `self.get_transform`\n # because we want the transform from dataspace to the\n # screen space to estimate how big the arc will be in physical\n # units when rendered (the transform that we get via\n # `self.get_transform()` goes from an idealized unit-radius\n # space to screen space).\n data_to_screen_trans = self.get_data_transform()\n pwidth, pheight = (data_to_screen_trans.transform((width, height)) -\n data_to_screen_trans.transform((0, 0)))\n inv_error = (1.0 / 1.89818e-6) * 0.5\n\n if pwidth < inv_error and pheight < inv_error:\n self._path = Path.arc(theta1, theta2)\n return Patch.draw(self, renderer)\n\n def line_circle_intersect(x0, y0, x1, y1):\n dx = x1 - x0\n dy = y1 - y0\n dr2 = dx * dx + dy * dy\n D = x0 * y1 - x1 * y0\n D2 = D * D\n discrim = dr2 - D2\n if discrim >= 0.0:\n sign_dy = np.copysign(1, dy) # +/-1, never 0.\n sqrt_discrim = np.sqrt(discrim)\n return np.array(\n [[(D * dy + sign_dy * dx * sqrt_discrim) / dr2,\n (-D * dx + abs(dy) * sqrt_discrim) / dr2],\n [(D * dy - sign_dy * dx * sqrt_discrim) / dr2,\n (-D * dx - abs(dy) * sqrt_discrim) / dr2]])\n else:\n return np.empty((0, 2))\n\n def segment_circle_intersect(x0, y0, x1, y1):\n epsilon = 1e-9\n if x1 < x0:\n x0e, x1e = x1, x0\n else:\n x0e, x1e = x0, x1\n if y1 < y0:\n y0e, y1e = y1, y0\n else:\n y0e, y1e = y0, y1\n xys = line_circle_intersect(x0, y0, x1, y1)\n xs, ys = xys.T\n return xys[\n (x0e - epsilon < xs) & (xs < x1e + epsilon)\n & (y0e - epsilon < ys) & (ys < y1e + epsilon)\n ]\n\n # Transforms the axes box_path so that it is relative to the unit\n # circle in the same way that it is relative to the desired ellipse.\n box_path_transform = (transforms.BboxTransformTo(self.axes.bbox)\n + self.get_transform().inverted())\n box_path = Path.unit_rectangle().transformed(box_path_transform)\n\n thetas = set()\n # For each of the point pairs, there is a line segment\n for p0, p1 in zip(box_path.vertices[:-1], box_path.vertices[1:]):\n xy = segment_circle_intersect(*p0, *p1)\n x, y = xy.T\n # 
arctan2 return [-pi, pi), the rest of our angles are in\n # [0, 360], adjust as needed.\n theta = (np.rad2deg(np.arctan2(y, x)) + 360) % 360\n thetas.update(theta[(theta1 < theta) & (theta < theta2)])\n thetas = sorted(thetas) + [theta2]\n last_theta = theta1\n theta1_rad = np.deg2rad(theta1)\n inside = box_path.contains_point(\n (np.cos(theta1_rad), np.sin(theta1_rad))\n )\n\n # save original path\n path_original = self._path\n for theta in thetas:\n if inside:\n self._path = Path.arc(last_theta, theta, 8)\n Patch.draw(self, renderer)\n inside = False\n else:\n inside = True\n last_theta = theta\n\n # restore original path\n self._path = path_original\n\n\ndef bbox_artist(artist, renderer, props=None, fill=True):\n \"\"\"\n A debug function to draw a rectangle around the bounding\n box returned by an artist's `.Artist.get_window_extent`\n to test whether the artist is returning the correct bbox.\n\n *props* is a dict of rectangle props with the additional property\n 'pad' that sets the padding around the bbox in points.\n \"\"\"\n if props is None:\n props = {}\n props = props.copy() # don't want to alter the pad externally\n pad = props.pop('pad', 4)\n pad = renderer.points_to_pixels(pad)\n bbox = artist.get_window_extent(renderer)\n r = Rectangle(\n xy=(bbox.x0 - pad / 2, bbox.y0 - pad / 2),\n width=bbox.width + pad, height=bbox.height + pad,\n fill=fill, transform=transforms.IdentityTransform(), clip_on=False)\n r.update(props)\n r.draw(renderer)\n\n\ndef draw_bbox(bbox, renderer, color='k', trans=None):\n \"\"\"\n A debug function to draw a rectangle around the bounding\n box returned by an artist's `.Artist.get_window_extent`\n to test whether the artist is returning the correct bbox.\n \"\"\"\n r = Rectangle(xy=(bbox.x0, bbox.y0), width=bbox.width, height=bbox.height,\n edgecolor=color, fill=False, clip_on=False)\n if trans is not None:\n r.set_transform(trans)\n r.draw(renderer)\n\n\ndef _simpleprint_styles(_styles):\n \"\"\"\n A helper function for the _Style class. Given the dictionary of\n {stylename: styleclass}, return a string rep of the list of keys.\n Used to update the documentation.\n \"\"\"\n return \"[{}]\".format(\"|\".join(map(\" '{}' \".format, sorted(_styles))))\n\n\nclass _Style:\n \"\"\"\n A base class for the Styles. 
It is meant to be a container class,\n where actual styles are declared as subclass of it, and it\n provides some helper functions.\n \"\"\"\n def __new__(cls, stylename, **kw):\n \"\"\"Return the instance of the subclass with the given style name.\"\"\"\n\n # The \"class\" should have the _style_list attribute, which is a mapping\n # of style names to style classes.\n\n _list = stylename.replace(\" \", \"\").split(\",\")\n _name = _list[0].lower()\n try:\n _cls = cls._style_list[_name]\n except KeyError as err:\n raise ValueError(\"Unknown style : %s\" % stylename) from err\n\n try:\n _args_pair = [cs.split(\"=\") for cs in _list[1:]]\n _args = {k: float(v) for k, v in _args_pair}\n except ValueError as err:\n raise ValueError(\"Incorrect style argument : %s\" %\n stylename) from err\n _args.update(kw)\n\n return _cls(**_args)\n\n @classmethod\n def get_styles(cls):\n \"\"\"Return a dictionary of available styles.\"\"\"\n return cls._style_list\n\n @classmethod\n def pprint_styles(cls):\n \"\"\"Return the available styles as pretty-printed string.\"\"\"\n table = [('Class', 'Name', 'Attrs'),\n *[(cls.__name__,\n # Add backquotes, as - and | have special meaning in reST.\n f'``{name}``',\n # [1:-1] drops the surrounding parentheses.\n str(inspect.signature(cls))[1:-1] or 'None')\n for name, cls in sorted(cls._style_list.items())]]\n # Convert to rst table.\n col_len = [max(len(cell) for cell in column) for column in zip(*table)]\n table_formatstr = ' '.join('=' * cl for cl in col_len)\n rst_table = '\\n'.join([\n '',\n table_formatstr,\n ' '.join(cell.ljust(cl) for cell, cl in zip(table[0], col_len)),\n table_formatstr,\n *[' '.join(cell.ljust(cl) for cell, cl in zip(row, col_len))\n for row in table[1:]],\n table_formatstr,\n '',\n ])\n return textwrap.indent(rst_table, prefix=' ' * 2)\n\n @classmethod\n def register(cls, name, style):\n \"\"\"Register a new style.\"\"\"\n if not issubclass(style, cls._Base):\n raise ValueError(\"%s must be a subclass of %s\" % (style,\n cls._Base))\n cls._style_list[name] = style\n\n\ndef _register_style(style_list, cls=None, *, name=None):\n \"\"\"Class decorator that stashes a class in a (style) dictionary.\"\"\"\n if cls is None:\n return functools.partial(_register_style, style_list, name=name)\n style_list[name or cls.__name__.lower()] = cls\n return cls\n\n\nclass BoxStyle(_Style):\n \"\"\"\n `BoxStyle` is a container class which defines several\n boxstyle classes, which are used for `FancyBboxPatch`.\n\n A style object can be created as::\n\n BoxStyle.Round(pad=0.2)\n\n or::\n\n BoxStyle(\"Round\", pad=0.2)\n\n or::\n\n BoxStyle(\"Round, pad=0.2\")\n\n The following boxstyle classes are defined.\n\n %(AvailableBoxstyles)s\n\n An instance of any boxstyle class is an callable object,\n whose call signature is::\n\n __call__(self, x0, y0, width, height, mutation_size)\n\n and returns a `.Path` instance. *x0*, *y0*, *width* and\n *height* specify the location and size of the box to be\n drawn. *mutation_scale* determines the overall size of the\n mutation (by which I mean the transformation of the rectangle to\n the fancy box).\n \"\"\"\n\n _style_list = {}\n\n @_api.deprecated(\"3.4\")\n class _Base:\n \"\"\"\n Abstract base class for styling of `.FancyBboxPatch`.\n\n This class is not an artist itself. The `__call__` method returns the\n `~matplotlib.path.Path` for outlining the fancy box. 
The actual drawing\n is handled in `.FancyBboxPatch`.\n\n Subclasses may only use parameters with default values in their\n ``__init__`` method because they must be able to be initialized\n without arguments.\n\n Subclasses must implement the `__call__` method. It receives the\n enclosing rectangle *x0, y0, width, height* as well as the\n *mutation_size*, which scales the outline properties such as padding.\n It returns the outline of the fancy box as `.path.Path`.\n \"\"\"\n\n @_api.deprecated(\"3.4\")\n def transmute(self, x0, y0, width, height, mutation_size):\n \"\"\"Return the `~.path.Path` outlining the given rectangle.\"\"\"\n return self(self, x0, y0, width, height, mutation_size, 1)\n\n # This can go away once the deprecation period elapses, leaving _Base\n # as a fully abstract base class just providing docstrings, no logic.\n def __init_subclass__(cls):\n transmute = _api.deprecate_method_override(\n __class__.transmute, cls, since=\"3.4\")\n if transmute:\n cls.__call__ = transmute\n return\n\n __call__ = cls.__call__\n\n @_api.delete_parameter(\"3.4\", \"mutation_aspect\")\n def call_wrapper(\n self, x0, y0, width, height, mutation_size,\n mutation_aspect=_api.deprecation._deprecated_parameter):\n if mutation_aspect is _api.deprecation._deprecated_parameter:\n # Don't trigger deprecation warning internally.\n return __call__(self, x0, y0, width, height, mutation_size)\n else:\n # Squeeze the given height by the aspect_ratio.\n y0, height = y0 / mutation_aspect, height / mutation_aspect\n path = self(x0, y0, width, height, mutation_size,\n mutation_aspect)\n vertices, codes = path.vertices, path.codes\n # Restore the height.\n vertices[:, 1] = vertices[:, 1] * mutation_aspect\n return Path(vertices, codes)\n\n cls.__call__ = call_wrapper\n\n def __call__(self, x0, y0, width, height, mutation_size):\n \"\"\"\n Given the location and size of the box, return the path of\n the box around it.\n\n Parameters\n ----------\n x0, y0, width, height : float\n Location and size of the box.\n mutation_size : float\n A reference scale for the mutation.\n\n Returns\n -------\n `~matplotlib.path.Path`\n \"\"\"\n raise NotImplementedError('Derived must override')\n\n @_register_style(_style_list)\n class Square(_Base):\n \"\"\"A square box.\"\"\"\n\n def __init__(self, pad=0.3):\n \"\"\"\n Parameters\n ----------\n pad : float, default: 0.3\n The amount of padding around the original box.\n \"\"\"\n self.pad = pad\n\n def __call__(self, x0, y0, width, height, mutation_size):\n pad = mutation_size * self.pad\n # width and height with padding added.\n width, height = width + 2 * pad, height + 2 * pad\n # boundary of the padded box\n x0, y0 = x0 - pad, y0 - pad\n x1, y1 = x0 + width, y0 + height\n return Path([(x0, y0), (x1, y0), (x1, y1), (x0, y1), (x0, y0)],\n closed=True)\n\n @_register_style(_style_list)\n class Circle(_Base):\n \"\"\"A circular box.\"\"\"\n\n def __init__(self, pad=0.3):\n \"\"\"\n Parameters\n ----------\n pad : float, default: 0.3\n The amount of padding around the original box.\n \"\"\"\n self.pad = pad\n\n def __call__(self, x0, y0, width, height, mutation_size):\n pad = mutation_size * self.pad\n width, height = width + 2 * pad, height + 2 * pad\n # boundary of the padded box\n x0, y0 = x0 - pad, y0 - pad\n return Path.circle((x0 + width / 2, y0 + height / 2),\n max(width, height) / 2)\n\n @_register_style(_style_list)\n class LArrow(_Base):\n \"\"\"A box in the shape of a left-pointing arrow.\"\"\"\n\n def __init__(self, pad=0.3):\n \"\"\"\n Parameters\n ----------\n 
pad : float, default: 0.3\n The amount of padding around the original box.\n \"\"\"\n self.pad = pad\n\n def __call__(self, x0, y0, width, height, mutation_size):\n # padding\n pad = mutation_size * self.pad\n # width and height with padding added.\n width, height = width + 2 * pad, height + 2 * pad\n # boundary of the padded box\n x0, y0 = x0 - pad, y0 - pad,\n x1, y1 = x0 + width, y0 + height\n\n dx = (y1 - y0) / 2\n dxx = dx / 2\n x0 = x0 + pad / 1.4 # adjust by ~sqrt(2)\n\n return Path([(x0 + dxx, y0), (x1, y0), (x1, y1), (x0 + dxx, y1),\n (x0 + dxx, y1 + dxx), (x0 - dx, y0 + dx),\n (x0 + dxx, y0 - dxx), # arrow\n (x0 + dxx, y0), (x0 + dxx, y0)],\n closed=True)\n\n @_register_style(_style_list)\n class RArrow(LArrow):\n \"\"\"A box in the shape of a right-pointing arrow.\"\"\"\n\n def __call__(self, x0, y0, width, height, mutation_size):\n p = BoxStyle.LArrow.__call__(\n self, x0, y0, width, height, mutation_size)\n p.vertices[:, 0] = 2 * x0 + width - p.vertices[:, 0]\n return p\n\n @_register_style(_style_list)\n class DArrow(_Base):\n \"\"\"A box in the shape of a two-way arrow.\"\"\"\n # Modified from LArrow to add a right arrow to the bbox.\n\n def __init__(self, pad=0.3):\n \"\"\"\n Parameters\n ----------\n pad : float, default: 0.3\n The amount of padding around the original box.\n \"\"\"\n self.pad = pad\n\n def __call__(self, x0, y0, width, height, mutation_size):\n # padding\n pad = mutation_size * self.pad\n # width and height with padding added.\n # The width is padded by the arrows, so we don't need to pad it.\n height = height + 2 * pad\n # boundary of the padded box\n x0, y0 = x0 - pad, y0 - pad\n x1, y1 = x0 + width, y0 + height\n\n dx = (y1 - y0) / 2\n dxx = dx / 2\n x0 = x0 + pad / 1.4 # adjust by ~sqrt(2)\n\n return Path([(x0 + dxx, y0), (x1, y0), # bot-segment\n (x1, y0 - dxx), (x1 + dx + dxx, y0 + dx),\n (x1, y1 + dxx), # right-arrow\n (x1, y1), (x0 + dxx, y1), # top-segment\n (x0 + dxx, y1 + dxx), (x0 - dx, y0 + dx),\n (x0 + dxx, y0 - dxx), # left-arrow\n (x0 + dxx, y0), (x0 + dxx, y0)], # close-poly\n closed=True)\n\n @_register_style(_style_list)\n class Round(_Base):\n \"\"\"A box with round corners.\"\"\"\n\n def __init__(self, pad=0.3, rounding_size=None):\n \"\"\"\n Parameters\n ----------\n pad : float, default: 0.3\n The amount of padding around the original box.\n rounding_size : float, default: *pad*\n Radius of the corners.\n \"\"\"\n self.pad = pad\n self.rounding_size = rounding_size\n\n def __call__(self, x0, y0, width, height, mutation_size):\n\n # padding\n pad = mutation_size * self.pad\n\n # size of the rounding corner\n if self.rounding_size:\n dr = mutation_size * self.rounding_size\n else:\n dr = pad\n\n width, height = width + 2 * pad, height + 2 * pad\n\n x0, y0 = x0 - pad, y0 - pad,\n x1, y1 = x0 + width, y0 + height\n\n # Round corners are implemented as quadratic Bezier, e.g.,\n # [(x0, y0-dr), (x0, y0), (x0+dr, y0)] for lower left corner.\n cp = [(x0 + dr, y0),\n (x1 - dr, y0),\n (x1, y0), (x1, y0 + dr),\n (x1, y1 - dr),\n (x1, y1), (x1 - dr, y1),\n (x0 + dr, y1),\n (x0, y1), (x0, y1 - dr),\n (x0, y0 + dr),\n (x0, y0), (x0 + dr, y0),\n (x0 + dr, y0)]\n\n com = [Path.MOVETO,\n Path.LINETO,\n Path.CURVE3, Path.CURVE3,\n Path.LINETO,\n Path.CURVE3, Path.CURVE3,\n Path.LINETO,\n Path.CURVE3, Path.CURVE3,\n Path.LINETO,\n Path.CURVE3, Path.CURVE3,\n Path.CLOSEPOLY]\n\n path = Path(cp, com)\n\n return path\n\n @_register_style(_style_list)\n class Round4(_Base):\n \"\"\"A box with rounded edges.\"\"\"\n\n def __init__(self, pad=0.3, 
rounding_size=None):\n \"\"\"\n Parameters\n ----------\n pad : float, default: 0.3\n The amount of padding around the original box.\n rounding_size : float, default: *pad*/2\n Rounding of edges.\n \"\"\"\n self.pad = pad\n self.rounding_size = rounding_size\n\n def __call__(self, x0, y0, width, height, mutation_size):\n\n # padding\n pad = mutation_size * self.pad\n\n # Rounding size; defaults to half of the padding.\n if self.rounding_size:\n dr = mutation_size * self.rounding_size\n else:\n dr = pad / 2.\n\n width = width + 2 * pad - 2 * dr\n height = height + 2 * pad - 2 * dr\n\n x0, y0 = x0 - pad + dr, y0 - pad + dr,\n x1, y1 = x0 + width, y0 + height\n\n cp = [(x0, y0),\n (x0 + dr, y0 - dr), (x1 - dr, y0 - dr), (x1, y0),\n (x1 + dr, y0 + dr), (x1 + dr, y1 - dr), (x1, y1),\n (x1 - dr, y1 + dr), (x0 + dr, y1 + dr), (x0, y1),\n (x0 - dr, y1 - dr), (x0 - dr, y0 + dr), (x0, y0),\n (x0, y0)]\n\n com = [Path.MOVETO,\n Path.CURVE4, Path.CURVE4, Path.CURVE4,\n Path.CURVE4, Path.CURVE4, Path.CURVE4,\n Path.CURVE4, Path.CURVE4, Path.CURVE4,\n Path.CURVE4, Path.CURVE4, Path.CURVE4,\n Path.CLOSEPOLY]\n\n path = Path(cp, com)\n\n return path\n\n @_register_style(_style_list)\n class Sawtooth(_Base):\n \"\"\"A box with a sawtooth outline.\"\"\"\n\n def __init__(self, pad=0.3, tooth_size=None):\n \"\"\"\n Parameters\n ----------\n pad : float, default: 0.3\n The amount of padding around the original box.\n tooth_size : float, default: *pad*/2\n Size of the sawtooth.\n \"\"\"\n self.pad = pad\n self.tooth_size = tooth_size\n\n def _get_sawtooth_vertices(self, x0, y0, width, height, mutation_size):\n\n # padding\n pad = mutation_size * self.pad\n\n # size of sawtooth\n if self.tooth_size is None:\n tooth_size = self.pad * .5 * mutation_size\n else:\n tooth_size = self.tooth_size * mutation_size\n\n tooth_size2 = tooth_size / 2\n width = width + 2 * pad - tooth_size\n height = height + 2 * pad - tooth_size\n\n # the sizes of the vertical and horizontal sawtooth are\n # separately adjusted to fit the given box size.\n dsx_n = int(round((width - tooth_size) / (tooth_size * 2))) * 2\n dsx = (width - tooth_size) / dsx_n\n dsy_n = int(round((height - tooth_size) / (tooth_size * 2))) * 2\n dsy = (height - tooth_size) / dsy_n\n\n x0, y0 = x0 - pad + tooth_size2, y0 - pad + tooth_size2\n x1, y1 = x0 + width, y0 + height\n\n bottom_saw_x = [\n x0,\n *(x0 + tooth_size2 + dsx * .5 * np.arange(dsx_n * 2)),\n x1 - tooth_size2,\n ]\n bottom_saw_y = [\n y0,\n *([y0 - tooth_size2, y0, y0 + tooth_size2, y0] * dsx_n),\n y0 - tooth_size2,\n ]\n right_saw_x = [\n x1,\n *([x1 + tooth_size2, x1, x1 - tooth_size2, x1] * dsx_n),\n x1 + tooth_size2,\n ]\n right_saw_y = [\n y0,\n *(y0 + tooth_size2 + dsy * .5 * np.arange(dsy_n * 2)),\n y1 - tooth_size2,\n ]\n top_saw_x = [\n x1,\n *(x1 - tooth_size2 - dsx * .5 * np.arange(dsx_n * 2)),\n x0 + tooth_size2,\n ]\n top_saw_y = [\n y1,\n *([y1 + tooth_size2, y1, y1 - tooth_size2, y1] * dsx_n),\n y1 + tooth_size2,\n ]\n left_saw_x = [\n x0,\n *([x0 - tooth_size2, x0, x0 + tooth_size2, x0] * dsy_n),\n x0 - tooth_size2,\n ]\n left_saw_y = [\n y1,\n *(y1 - tooth_size2 - dsy * .5 * np.arange(dsy_n * 2)),\n y0 + tooth_size2,\n ]\n\n saw_vertices = [*zip(bottom_saw_x, bottom_saw_y),\n *zip(right_saw_x, right_saw_y),\n *zip(top_saw_x, top_saw_y),\n *zip(left_saw_x, left_saw_y),\n (bottom_saw_x[0], bottom_saw_y[0])]\n\n return saw_vertices\n\n def __call__(self, x0, y0, width, height, mutation_size):\n saw_vertices = self._get_sawtooth_vertices(x0, y0, width,\n height, mutation_size)\n path = 
Path(saw_vertices, closed=True)\n return path\n\n @_register_style(_style_list)\n class Roundtooth(Sawtooth):\n \"\"\"A box with a rounded sawtooth outline.\"\"\"\n\n def __call__(self, x0, y0, width, height, mutation_size):\n saw_vertices = self._get_sawtooth_vertices(x0, y0,\n width, height,\n mutation_size)\n # Add a trailing vertex to allow us to close the polygon correctly\n saw_vertices = np.concatenate([saw_vertices, [saw_vertices[0]]])\n codes = ([Path.MOVETO] +\n [Path.CURVE3, Path.CURVE3] * ((len(saw_vertices)-1)//2) +\n [Path.CLOSEPOLY])\n return Path(saw_vertices, codes)\n\n\nclass ConnectionStyle(_Style):\n \"\"\"\n `ConnectionStyle` is a container class which defines\n several connectionstyle classes, which is used to create a path\n between two points. These are mainly used with `FancyArrowPatch`.\n\n A connectionstyle object can be either created as::\n\n ConnectionStyle.Arc3(rad=0.2)\n\n or::\n\n ConnectionStyle(\"Arc3\", rad=0.2)\n\n or::\n\n ConnectionStyle(\"Arc3, rad=0.2\")\n\n The following classes are defined\n\n %(AvailableConnectorstyles)s\n\n An instance of any connection style class is an callable object,\n whose call signature is::\n\n __call__(self, posA, posB,\n patchA=None, patchB=None,\n shrinkA=2., shrinkB=2.)\n\n and it returns a `.Path` instance. *posA* and *posB* are\n tuples of (x, y) coordinates of the two points to be\n connected. *patchA* (or *patchB*) is given, the returned path is\n clipped so that it start (or end) from the boundary of the\n patch. The path is further shrunk by *shrinkA* (or *shrinkB*)\n which is given in points.\n \"\"\"\n\n _style_list = {}\n\n class _Base:\n \"\"\"\n A base class for connectionstyle classes. The subclass needs\n to implement a *connect* method whose call signature is::\n\n connect(posA, posB)\n\n where posA and posB are tuples of x, y coordinates to be\n connected. The method needs to return a path connecting two\n points. This base class defines a __call__ method, and a few\n helper methods.\n \"\"\"\n\n class SimpleEvent:\n def __init__(self, xy):\n self.x, self.y = xy\n\n def _clip(self, path, patchA, patchB):\n \"\"\"\n Clip the path to the boundary of the patchA and patchB.\n The starting point of the path needed to be inside of the\n patchA and the end point inside the patch B. 
The *contains*\n methods of each patch object is utilized to test if the point\n is inside the path.\n \"\"\"\n\n if patchA:\n def insideA(xy_display):\n xy_event = ConnectionStyle._Base.SimpleEvent(xy_display)\n return patchA.contains(xy_event)[0]\n\n try:\n left, right = split_path_inout(path, insideA)\n except ValueError:\n right = path\n\n path = right\n\n if patchB:\n def insideB(xy_display):\n xy_event = ConnectionStyle._Base.SimpleEvent(xy_display)\n return patchB.contains(xy_event)[0]\n\n try:\n left, right = split_path_inout(path, insideB)\n except ValueError:\n left = path\n\n path = left\n\n return path\n\n def _shrink(self, path, shrinkA, shrinkB):\n \"\"\"\n Shrink the path by fixed size (in points) with shrinkA and shrinkB.\n \"\"\"\n if shrinkA:\n insideA = inside_circle(*path.vertices[0], shrinkA)\n try:\n left, path = split_path_inout(path, insideA)\n except ValueError:\n pass\n if shrinkB:\n insideB = inside_circle(*path.vertices[-1], shrinkB)\n try:\n path, right = split_path_inout(path, insideB)\n except ValueError:\n pass\n return path\n\n def __call__(self, posA, posB,\n shrinkA=2., shrinkB=2., patchA=None, patchB=None):\n \"\"\"\n Call the *connect* method to create a path between *posA* and\n *posB*; then clip and shrink the path.\n \"\"\"\n path = self.connect(posA, posB)\n clipped_path = self._clip(path, patchA, patchB)\n shrunk_path = self._shrink(clipped_path, shrinkA, shrinkB)\n return shrunk_path\n\n @_register_style(_style_list)\n class Arc3(_Base):\n \"\"\"\n Creates a simple quadratic Bezier curve between two\n points. The curve is created so that the middle control point\n (C1) is located at the same distance from the start (C0) and\n end points(C2) and the distance of the C1 to the line\n connecting C0-C2 is *rad* times the distance of C0-C2.\n \"\"\"\n\n def __init__(self, rad=0.):\n \"\"\"\n *rad*\n curvature of the curve.\n \"\"\"\n self.rad = rad\n\n def connect(self, posA, posB):\n x1, y1 = posA\n x2, y2 = posB\n x12, y12 = (x1 + x2) / 2., (y1 + y2) / 2.\n dx, dy = x2 - x1, y2 - y1\n\n f = self.rad\n\n cx, cy = x12 + f * dy, y12 - f * dx\n\n vertices = [(x1, y1),\n (cx, cy),\n (x2, y2)]\n codes = [Path.MOVETO,\n Path.CURVE3,\n Path.CURVE3]\n\n return Path(vertices, codes)\n\n @_register_style(_style_list)\n class Angle3(_Base):\n \"\"\"\n Creates a simple quadratic Bezier curve between two\n points. The middle control points is placed at the\n intersecting point of two lines which cross the start and\n end point, and have a slope of angleA and angleB, respectively.\n \"\"\"\n\n def __init__(self, angleA=90, angleB=0):\n \"\"\"\n *angleA*\n starting angle of the path\n\n *angleB*\n ending angle of the path\n \"\"\"\n\n self.angleA = angleA\n self.angleB = angleB\n\n def connect(self, posA, posB):\n x1, y1 = posA\n x2, y2 = posB\n\n cosA = math.cos(math.radians(self.angleA))\n sinA = math.sin(math.radians(self.angleA))\n cosB = math.cos(math.radians(self.angleB))\n sinB = math.sin(math.radians(self.angleB))\n\n cx, cy = get_intersection(x1, y1, cosA, sinA,\n x2, y2, cosB, sinB)\n\n vertices = [(x1, y1), (cx, cy), (x2, y2)]\n codes = [Path.MOVETO, Path.CURVE3, Path.CURVE3]\n\n return Path(vertices, codes)\n\n @_register_style(_style_list)\n class Angle(_Base):\n \"\"\"\n Creates a piecewise continuous quadratic Bezier path between\n two points. 
The path has a one passing-through point placed at\n the intersecting point of two lines which cross the start\n and end point, and have a slope of angleA and angleB, respectively.\n The connecting edges are rounded with *rad*.\n \"\"\"\n\n def __init__(self, angleA=90, angleB=0, rad=0.):\n \"\"\"\n *angleA*\n starting angle of the path\n\n *angleB*\n ending angle of the path\n\n *rad*\n rounding radius of the edge\n \"\"\"\n\n self.angleA = angleA\n self.angleB = angleB\n\n self.rad = rad\n\n def connect(self, posA, posB):\n x1, y1 = posA\n x2, y2 = posB\n\n cosA = math.cos(math.radians(self.angleA))\n sinA = math.sin(math.radians(self.angleA))\n cosB = math.cos(math.radians(self.angleB))\n sinB = math.sin(math.radians(self.angleB))\n\n cx, cy = get_intersection(x1, y1, cosA, sinA,\n x2, y2, cosB, sinB)\n\n vertices = [(x1, y1)]\n codes = [Path.MOVETO]\n\n if self.rad == 0.:\n vertices.append((cx, cy))\n codes.append(Path.LINETO)\n else:\n dx1, dy1 = x1 - cx, y1 - cy\n d1 = np.hypot(dx1, dy1)\n f1 = self.rad / d1\n dx2, dy2 = x2 - cx, y2 - cy\n d2 = np.hypot(dx2, dy2)\n f2 = self.rad / d2\n vertices.extend([(cx + dx1 * f1, cy + dy1 * f1),\n (cx, cy),\n (cx + dx2 * f2, cy + dy2 * f2)])\n codes.extend([Path.LINETO, Path.CURVE3, Path.CURVE3])\n\n vertices.append((x2, y2))\n codes.append(Path.LINETO)\n\n return Path(vertices, codes)\n\n @_register_style(_style_list)\n class Arc(_Base):\n \"\"\"\n Creates a piecewise continuous quadratic Bezier path between\n two points. The path can have two passing-through points, a\n point placed at the distance of armA and angle of angleA from\n point A, another point with respect to point B. The edges are\n rounded with *rad*.\n \"\"\"\n\n def __init__(self, angleA=0, angleB=0, armA=None, armB=None, rad=0.):\n \"\"\"\n *angleA* :\n starting angle of the path\n\n *angleB* :\n ending angle of the path\n\n *armA* :\n length of the starting arm\n\n *armB* :\n length of the ending arm\n\n *rad* :\n rounding radius of the edges\n \"\"\"\n\n self.angleA = angleA\n self.angleB = angleB\n self.armA = armA\n self.armB = armB\n\n self.rad = rad\n\n def connect(self, posA, posB):\n x1, y1 = posA\n x2, y2 = posB\n\n vertices = [(x1, y1)]\n rounded = []\n codes = [Path.MOVETO]\n\n if self.armA:\n cosA = math.cos(math.radians(self.angleA))\n sinA = math.sin(math.radians(self.angleA))\n # x_armA, y_armB\n d = self.armA - self.rad\n rounded.append((x1 + d * cosA, y1 + d * sinA))\n d = self.armA\n rounded.append((x1 + d * cosA, y1 + d * sinA))\n\n if self.armB:\n cosB = math.cos(math.radians(self.angleB))\n sinB = math.sin(math.radians(self.angleB))\n x_armB, y_armB = x2 + self.armB * cosB, y2 + self.armB * sinB\n\n if rounded:\n xp, yp = rounded[-1]\n dx, dy = x_armB - xp, y_armB - yp\n dd = (dx * dx + dy * dy) ** .5\n\n rounded.append((xp + self.rad * dx / dd,\n yp + self.rad * dy / dd))\n vertices.extend(rounded)\n codes.extend([Path.LINETO,\n Path.CURVE3,\n Path.CURVE3])\n else:\n xp, yp = vertices[-1]\n dx, dy = x_armB - xp, y_armB - yp\n dd = (dx * dx + dy * dy) ** .5\n\n d = dd - self.rad\n rounded = [(xp + d * dx / dd, yp + d * dy / dd),\n (x_armB, y_armB)]\n\n if rounded:\n xp, yp = rounded[-1]\n dx, dy = x2 - xp, y2 - yp\n dd = (dx * dx + dy * dy) ** .5\n\n rounded.append((xp + self.rad * dx / dd,\n yp + self.rad * dy / dd))\n vertices.extend(rounded)\n codes.extend([Path.LINETO,\n Path.CURVE3,\n Path.CURVE3])\n\n vertices.append((x2, y2))\n codes.append(Path.LINETO)\n\n return Path(vertices, codes)\n\n @_register_style(_style_list)\n class Bar(_Base):\n \"\"\"\n 
A line with *angle* between A and B with *armA* and\n *armB*. One of the arms is extended so that they are connected in\n a right angle. The length of armA is determined by (*armA*\n + *fraction* x AB distance). Same for armB.\n \"\"\"\n\n def __init__(self, armA=0., armB=0., fraction=0.3, angle=None):\n \"\"\"\n Parameters\n ----------\n armA : float\n minimum length of armA\n\n armB : float\n minimum length of armB\n\n fraction : float\n a fraction of the distance between two points that\n will be added to armA and armB.\n\n angle : float or None\n angle of the connecting line (if None, parallel\n to A and B)\n \"\"\"\n self.armA = armA\n self.armB = armB\n self.fraction = fraction\n self.angle = angle\n\n def connect(self, posA, posB):\n x1, y1 = posA\n x20, y20 = x2, y2 = posB\n\n theta1 = math.atan2(y2 - y1, x2 - x1)\n dx, dy = x2 - x1, y2 - y1\n dd = (dx * dx + dy * dy) ** .5\n ddx, ddy = dx / dd, dy / dd\n\n armA, armB = self.armA, self.armB\n\n if self.angle is not None:\n theta0 = np.deg2rad(self.angle)\n dtheta = theta1 - theta0\n dl = dd * math.sin(dtheta)\n dL = dd * math.cos(dtheta)\n x2, y2 = x1 + dL * math.cos(theta0), y1 + dL * math.sin(theta0)\n armB = armB - dl\n\n # update\n dx, dy = x2 - x1, y2 - y1\n dd2 = (dx * dx + dy * dy) ** .5\n ddx, ddy = dx / dd2, dy / dd2\n\n arm = max(armA, armB)\n f = self.fraction * dd + arm\n\n cx1, cy1 = x1 + f * ddy, y1 - f * ddx\n cx2, cy2 = x2 + f * ddy, y2 - f * ddx\n\n vertices = [(x1, y1),\n (cx1, cy1),\n (cx2, cy2),\n (x20, y20)]\n codes = [Path.MOVETO,\n Path.LINETO,\n Path.LINETO,\n Path.LINETO]\n\n return Path(vertices, codes)\n\n\ndef _point_along_a_line(x0, y0, x1, y1, d):\n \"\"\"\n Return the point on the line connecting (*x0*, *y0*) -- (*x1*, *y1*) whose\n distance from (*x0*, *y0*) is *d*.\n \"\"\"\n dx, dy = x0 - x1, y0 - y1\n ff = d / (dx * dx + dy * dy) ** .5\n x2, y2 = x0 - ff * dx, y0 - ff * dy\n\n return x2, y2\n\n\nclass ArrowStyle(_Style):\n \"\"\"\n `ArrowStyle` is a container class which defines several\n arrowstyle classes, which is used to create an arrow path along a\n given path. These are mainly used with `FancyArrowPatch`.\n\n A arrowstyle object can be either created as::\n\n ArrowStyle.Fancy(head_length=.4, head_width=.4, tail_width=.4)\n\n or::\n\n ArrowStyle(\"Fancy\", head_length=.4, head_width=.4, tail_width=.4)\n\n or::\n\n ArrowStyle(\"Fancy, head_length=.4, head_width=.4, tail_width=.4\")\n\n The following classes are defined\n\n %(AvailableArrowstyles)s\n\n An instance of any arrow style class is a callable object,\n whose call signature is::\n\n __call__(self, path, mutation_size, linewidth, aspect_ratio=1.)\n\n and it returns a tuple of a `.Path` instance and a boolean\n value. *path* is a `.Path` instance along which the arrow\n will be drawn. *mutation_size* and *aspect_ratio* have the same\n meaning as in `BoxStyle`. *linewidth* is a line width to be\n stroked. This is meant to be used to correct the location of the\n head so that it does not overshoot the destination point, but not all\n classes support it.\n \"\"\"\n\n _style_list = {}\n\n class _Base:\n \"\"\"\n Arrow Transmuter Base class\n\n ArrowTransmuterBase and its derivatives are used to make a fancy\n arrow around a given path. The __call__ method returns a path\n (which will be used to create a PathPatch instance) and a boolean\n value indicating the path is open therefore is not fillable. 
This\n class is not an artist and actual drawing of the fancy arrow is\n done by the FancyArrowPatch class.\n\n \"\"\"\n\n # The derived classes are required to be able to be initialized\n # w/o arguments, i.e., all its argument (except self) must have\n # the default values.\n\n @staticmethod\n def ensure_quadratic_bezier(path):\n \"\"\"\n Some ArrowStyle class only works with a simple quadratic Bezier\n curve (created with Arc3Connection or Angle3Connector). This static\n method is to check if the provided path is a simple quadratic\n Bezier curve and returns its control points if true.\n \"\"\"\n segments = list(path.iter_segments())\n if (len(segments) != 2 or segments[0][1] != Path.MOVETO or\n segments[1][1] != Path.CURVE3):\n raise ValueError(\n \"'path' is not a valid quadratic Bezier curve\")\n return [*segments[0][0], *segments[1][0]]\n\n def transmute(self, path, mutation_size, linewidth):\n \"\"\"\n The transmute method is the very core of the ArrowStyle class and\n must be overridden in the subclasses. It receives the path object\n along which the arrow will be drawn, and the mutation_size, with\n which the arrow head etc. will be scaled. The linewidth may be\n used to adjust the path so that it does not pass beyond the given\n points. It returns a tuple of a Path instance and a boolean. The\n boolean value indicate whether the path can be filled or not. The\n return value can also be a list of paths and list of booleans of a\n same length.\n \"\"\"\n raise NotImplementedError('Derived must override')\n\n def __call__(self, path, mutation_size, linewidth,\n aspect_ratio=1.):\n \"\"\"\n The __call__ method is a thin wrapper around the transmute method\n and takes care of the aspect ratio.\n \"\"\"\n\n if aspect_ratio is not None:\n # Squeeze the given height by the aspect_ratio\n vertices = path.vertices / [1, aspect_ratio]\n path_shrunk = Path(vertices, path.codes)\n # call transmute method with squeezed height.\n path_mutated, fillable = self.transmute(path_shrunk,\n mutation_size,\n linewidth)\n if np.iterable(fillable):\n path_list = []\n for p in path_mutated:\n # Restore the height\n path_list.append(\n Path(p.vertices * [1, aspect_ratio], p.codes))\n return path_list, fillable\n else:\n return path_mutated, fillable\n else:\n return self.transmute(path, mutation_size, linewidth)\n\n class _Curve(_Base):\n \"\"\"\n A simple arrow which will work with any path instance. The\n returned path is simply concatenation of the original path + at\n most two paths representing the arrow head at the begin point and the\n at the end point. The arrow heads can be either open or closed.\n \"\"\"\n\n def __init__(self, beginarrow=None, endarrow=None,\n fillbegin=False, fillend=False,\n head_length=.2, head_width=.1):\n \"\"\"\n The arrows are drawn if *beginarrow* and/or *endarrow* are\n true. *head_length* and *head_width* determines the size\n of the arrow relative to the *mutation scale*. The\n arrowhead at the begin (or end) is closed if fillbegin (or\n fillend) is True.\n \"\"\"\n self.beginarrow, self.endarrow = beginarrow, endarrow\n self.head_length, self.head_width = head_length, head_width\n self.fillbegin, self.fillend = fillbegin, fillend\n super().__init__()\n\n def _get_arrow_wedge(self, x0, y0, x1, y1,\n head_dist, cos_t, sin_t, linewidth):\n \"\"\"\n Return the paths for arrow heads. Since arrow lines are\n drawn with capstyle=projected, The arrow goes beyond the\n desired point. 
This method also returns the amount of the path\n to be shrunken so that it does not overshoot.\n \"\"\"\n\n # arrow from x0, y0 to x1, y1\n dx, dy = x0 - x1, y0 - y1\n\n cp_distance = np.hypot(dx, dy)\n\n # pad_projected : amount of pad to account the\n # overshooting of the projection of the wedge\n pad_projected = (.5 * linewidth / sin_t)\n\n # Account for division by zero\n if cp_distance == 0:\n cp_distance = 1\n\n # apply pad for projected edge\n ddx = pad_projected * dx / cp_distance\n ddy = pad_projected * dy / cp_distance\n\n # offset for arrow wedge\n dx = dx / cp_distance * head_dist\n dy = dy / cp_distance * head_dist\n\n dx1, dy1 = cos_t * dx + sin_t * dy, -sin_t * dx + cos_t * dy\n dx2, dy2 = cos_t * dx - sin_t * dy, sin_t * dx + cos_t * dy\n\n vertices_arrow = [(x1 + ddx + dx1, y1 + ddy + dy1),\n (x1 + ddx, y1 + ddy),\n (x1 + ddx + dx2, y1 + ddy + dy2)]\n codes_arrow = [Path.MOVETO,\n Path.LINETO,\n Path.LINETO]\n\n return vertices_arrow, codes_arrow, ddx, ddy\n\n def transmute(self, path, mutation_size, linewidth):\n\n head_length = self.head_length * mutation_size\n head_width = self.head_width * mutation_size\n head_dist = np.hypot(head_length, head_width)\n cos_t, sin_t = head_length / head_dist, head_width / head_dist\n\n # begin arrow\n x0, y0 = path.vertices[0]\n x1, y1 = path.vertices[1]\n\n # If there is no room for an arrow and a line, then skip the arrow\n has_begin_arrow = self.beginarrow and (x0, y0) != (x1, y1)\n verticesA, codesA, ddxA, ddyA = (\n self._get_arrow_wedge(x1, y1, x0, y0,\n head_dist, cos_t, sin_t, linewidth)\n if has_begin_arrow\n else ([], [], 0, 0)\n )\n\n # end arrow\n x2, y2 = path.vertices[-2]\n x3, y3 = path.vertices[-1]\n\n # If there is no room for an arrow and a line, then skip the arrow\n has_end_arrow = self.endarrow and (x2, y2) != (x3, y3)\n verticesB, codesB, ddxB, ddyB = (\n self._get_arrow_wedge(x2, y2, x3, y3,\n head_dist, cos_t, sin_t, linewidth)\n if has_end_arrow\n else ([], [], 0, 0)\n )\n\n # This simple code will not work if ddx, ddy is greater than the\n # separation between vertices.\n _path = [Path(np.concatenate([[(x0 + ddxA, y0 + ddyA)],\n path.vertices[1:-1],\n [(x3 + ddxB, y3 + ddyB)]]),\n path.codes)]\n _fillable = [False]\n\n if has_begin_arrow:\n if self.fillbegin:\n p = np.concatenate([verticesA, [verticesA[0],\n verticesA[0]], ])\n c = np.concatenate([codesA, [Path.LINETO, Path.CLOSEPOLY]])\n _path.append(Path(p, c))\n _fillable.append(True)\n else:\n _path.append(Path(verticesA, codesA))\n _fillable.append(False)\n\n if has_end_arrow:\n if self.fillend:\n _fillable.append(True)\n p = np.concatenate([verticesB, [verticesB[0],\n verticesB[0]], ])\n c = np.concatenate([codesB, [Path.LINETO, Path.CLOSEPOLY]])\n _path.append(Path(p, c))\n else:\n _fillable.append(False)\n _path.append(Path(verticesB, codesB))\n\n return _path, _fillable\n\n @_register_style(_style_list, name=\"-\")\n class Curve(_Curve):\n \"\"\"A simple curve without any arrow head.\"\"\"\n\n def __init__(self):\n super().__init__(beginarrow=False, endarrow=False)\n\n @_register_style(_style_list, name=\"<-\")\n class CurveA(_Curve):\n \"\"\"An arrow with a head at its begin point.\"\"\"\n\n def __init__(self, head_length=.4, head_width=.2):\n \"\"\"\n Parameters\n ----------\n head_length : float, default: 0.4\n Length of the arrow head.\n\n head_width : float, default: 0.2\n Width of the arrow head.\n \"\"\"\n super().__init__(beginarrow=True, endarrow=False,\n head_length=head_length, head_width=head_width)\n\n @_register_style(_style_list, 
name=\"->\")\n class CurveB(_Curve):\n \"\"\"An arrow with a head at its end point.\"\"\"\n\n def __init__(self, head_length=.4, head_width=.2):\n \"\"\"\n Parameters\n ----------\n head_length : float, default: 0.4\n Length of the arrow head.\n\n head_width : float, default: 0.2\n Width of the arrow head.\n \"\"\"\n super().__init__(beginarrow=False, endarrow=True,\n head_length=head_length, head_width=head_width)\n\n @_register_style(_style_list, name=\"<->\")\n class CurveAB(_Curve):\n \"\"\"An arrow with heads both at the begin and the end point.\"\"\"\n\n def __init__(self, head_length=.4, head_width=.2):\n \"\"\"\n Parameters\n ----------\n head_length : float, default: 0.4\n Length of the arrow head.\n\n head_width : float, default: 0.2\n Width of the arrow head.\n \"\"\"\n super().__init__(beginarrow=True, endarrow=True,\n head_length=head_length, head_width=head_width)\n\n @_register_style(_style_list, name=\"<|-\")\n class CurveFilledA(_Curve):\n \"\"\"An arrow with filled triangle head at the begin.\"\"\"\n\n def __init__(self, head_length=.4, head_width=.2):\n \"\"\"\n Parameters\n ----------\n head_length : float, default: 0.4\n Length of the arrow head.\n\n head_width : float, default: 0.2\n Width of the arrow head.\n \"\"\"\n super().__init__(beginarrow=True, endarrow=False,\n fillbegin=True, fillend=False,\n head_length=head_length, head_width=head_width)\n\n @_register_style(_style_list, name=\"-|>\")\n class CurveFilledB(_Curve):\n \"\"\"An arrow with filled triangle head at the end.\"\"\"\n\n def __init__(self, head_length=.4, head_width=.2):\n \"\"\"\n Parameters\n ----------\n head_length : float, default: 0.4\n Length of the arrow head.\n\n head_width : float, default: 0.2\n Width of the arrow head.\n \"\"\"\n super().__init__(beginarrow=False, endarrow=True,\n fillbegin=False, fillend=True,\n head_length=head_length, head_width=head_width)\n\n @_register_style(_style_list, name=\"<|-|>\")\n class CurveFilledAB(_Curve):\n \"\"\"An arrow with filled triangle heads at both ends.\"\"\"\n\n def __init__(self, head_length=.4, head_width=.2):\n \"\"\"\n Parameters\n ----------\n head_length : float, default: 0.4\n Length of the arrow head.\n\n head_width : float, default: 0.2\n Width of the arrow head.\n \"\"\"\n super().__init__(beginarrow=True, endarrow=True,\n fillbegin=True, fillend=True,\n head_length=head_length, head_width=head_width)\n\n class _Bracket(_Base):\n\n def __init__(self, bracketA=None, bracketB=None,\n widthA=1., widthB=1.,\n lengthA=0.2, lengthB=0.2,\n angleA=None, angleB=None,\n scaleA=None, scaleB=None):\n self.bracketA, self.bracketB = bracketA, bracketB\n self.widthA, self.widthB = widthA, widthB\n self.lengthA, self.lengthB = lengthA, lengthB\n self.angleA, self.angleB = angleA, angleB\n self.scaleA, self.scaleB = scaleA, scaleB\n\n def _get_bracket(self, x0, y0,\n cos_t, sin_t, width, length, angle):\n\n # arrow from x0, y0 to x1, y1\n from matplotlib.bezier import get_normal_points\n x1, y1, x2, y2 = get_normal_points(x0, y0, cos_t, sin_t, width)\n\n dx, dy = length * cos_t, length * sin_t\n\n vertices_arrow = [(x1 + dx, y1 + dy),\n (x1, y1),\n (x2, y2),\n (x2 + dx, y2 + dy)]\n codes_arrow = [Path.MOVETO,\n Path.LINETO,\n Path.LINETO,\n Path.LINETO]\n\n if angle is not None:\n trans = transforms.Affine2D().rotate_deg_around(x0, y0, angle)\n vertices_arrow = trans.transform(vertices_arrow)\n\n return vertices_arrow, codes_arrow\n\n def transmute(self, path, mutation_size, linewidth):\n\n if self.scaleA is None:\n scaleA = mutation_size\n else:\n 
scaleA = self.scaleA\n\n if self.scaleB is None:\n scaleB = mutation_size\n else:\n scaleB = self.scaleB\n\n vertices_list, codes_list = [], []\n\n if self.bracketA:\n x0, y0 = path.vertices[0]\n x1, y1 = path.vertices[1]\n cos_t, sin_t = get_cos_sin(x1, y1, x0, y0)\n verticesA, codesA = self._get_bracket(x0, y0, cos_t, sin_t,\n self.widthA * scaleA,\n self.lengthA * scaleA,\n self.angleA)\n vertices_list.append(verticesA)\n codes_list.append(codesA)\n\n vertices_list.append(path.vertices)\n codes_list.append(path.codes)\n\n if self.bracketB:\n x0, y0 = path.vertices[-1]\n x1, y1 = path.vertices[-2]\n cos_t, sin_t = get_cos_sin(x1, y1, x0, y0)\n verticesB, codesB = self._get_bracket(x0, y0, cos_t, sin_t,\n self.widthB * scaleB,\n self.lengthB * scaleB,\n self.angleB)\n vertices_list.append(verticesB)\n codes_list.append(codesB)\n\n vertices = np.concatenate(vertices_list)\n codes = np.concatenate(codes_list)\n\n p = Path(vertices, codes)\n\n return p, False\n\n @_register_style(_style_list, name=\"]-[\")\n class BracketAB(_Bracket):\n \"\"\"An arrow with outward square brackets at both ends.\"\"\"\n\n def __init__(self,\n widthA=1., lengthA=0.2, angleA=None,\n widthB=1., lengthB=0.2, angleB=None):\n \"\"\"\n Parameters\n ----------\n widthA : float, default: 1.0\n Width of the bracket.\n\n lengthA : float, default: 0.2\n Length of the bracket.\n\n angleA : float, default: None\n Angle, in degrees, between the bracket and the line. Zero is\n perpendicular to the line, and positive measures\n counterclockwise.\n\n widthB : float, default: 1.0\n Width of the bracket.\n\n lengthB : float, default: 0.2\n Length of the bracket.\n\n angleB : float, default: None\n Angle, in degrees, between the bracket and the line. Zero is\n perpendicular to the line, and positive measures\n counterclockwise.\n \"\"\"\n super().__init__(True, True,\n widthA=widthA, lengthA=lengthA, angleA=angleA,\n widthB=widthB, lengthB=lengthB, angleB=angleB)\n\n @_register_style(_style_list, name=\"]-\")\n class BracketA(_Bracket):\n \"\"\"An arrow with an outward square bracket at its start.\"\"\"\n\n def __init__(self, widthA=1., lengthA=0.2, angleA=None):\n \"\"\"\n Parameters\n ----------\n widthA : float, default: 1.0\n Width of the bracket.\n\n lengthA : float, default: 0.2\n Length of the bracket.\n\n angleA : float, default: None\n Angle between the bracket and the line.\n \"\"\"\n super().__init__(True, None,\n widthA=widthA, lengthA=lengthA, angleA=angleA)\n\n @_register_style(_style_list, name=\"-[\")\n class BracketB(_Bracket):\n \"\"\"An arrow with an outward square bracket at its end.\"\"\"\n\n def __init__(self, widthB=1., lengthB=0.2, angleB=None):\n \"\"\"\n Parameters\n ----------\n widthB : float, default: 1.0\n Width of the bracket.\n\n lengthB : float, default: 0.2\n Length of the bracket.\n\n angleB : float, default: None\n Angle, in degrees, between the bracket and the line. Zero is\n perpendicular to the line, and positive measures\n counterclockwise.\n \"\"\"\n super().__init__(None, True,\n widthB=widthB, lengthB=lengthB, angleB=angleB)\n\n @_register_style(_style_list, name=\"|-|\")\n class BarAB(_Bracket):\n \"\"\"An arrow with vertical bars ``|`` at both ends.\"\"\"\n\n def __init__(self,\n widthA=1., angleA=None,\n widthB=1., angleB=None):\n \"\"\"\n Parameters\n ----------\n widthA : float, default: 1.0\n Width of the bracket.\n\n angleA : float, default: None\n Angle, in degrees, between the bracket and the line. 
Zero is\n perpendicular to the line, and positive measures\n counterclockwise.\n\n widthB : float, default: 1.0\n Width of the bracket.\n\n angleB : float, default: None\n Angle, in degrees, between the bracket and the line. Zero is\n perpendicular to the line, and positive measures\n counterclockwise.\n \"\"\"\n super().__init__(True, True,\n widthA=widthA, lengthA=0, angleA=angleA,\n widthB=widthB, lengthB=0, angleB=angleB)\n\n @_register_style(_style_list)\n class Simple(_Base):\n \"\"\"A simple arrow. Only works with a quadratic Bezier curve.\"\"\"\n\n def __init__(self, head_length=.5, head_width=.5, tail_width=.2):\n \"\"\"\n Parameters\n ----------\n head_length : float, default: 0.5\n Length of the arrow head.\n\n head_width : float, default: 0.5\n Width of the arrow head.\n\n tail_width : float, default: 0.2\n Width of the arrow tail.\n \"\"\"\n self.head_length, self.head_width, self.tail_width = \\\n head_length, head_width, tail_width\n super().__init__()\n\n def transmute(self, path, mutation_size, linewidth):\n\n x0, y0, x1, y1, x2, y2 = self.ensure_quadratic_bezier(path)\n\n # divide the path into a head and a tail\n head_length = self.head_length * mutation_size\n in_f = inside_circle(x2, y2, head_length)\n arrow_path = [(x0, y0), (x1, y1), (x2, y2)]\n\n try:\n arrow_out, arrow_in = \\\n split_bezier_intersecting_with_closedpath(\n arrow_path, in_f, tolerance=0.01)\n except NonIntersectingPathException:\n # if this happens, make a straight line of the head_length\n # long.\n x0, y0 = _point_along_a_line(x2, y2, x1, y1, head_length)\n x1n, y1n = 0.5 * (x0 + x2), 0.5 * (y0 + y2)\n arrow_in = [(x0, y0), (x1n, y1n), (x2, y2)]\n arrow_out = None\n\n # head\n head_width = self.head_width * mutation_size\n head_left, head_right = make_wedged_bezier2(arrow_in,\n head_width / 2., wm=.5)\n\n # tail\n if arrow_out is not None:\n tail_width = self.tail_width * mutation_size\n tail_left, tail_right = get_parallels(arrow_out,\n tail_width / 2.)\n\n patch_path = [(Path.MOVETO, tail_right[0]),\n (Path.CURVE3, tail_right[1]),\n (Path.CURVE3, tail_right[2]),\n (Path.LINETO, head_right[0]),\n (Path.CURVE3, head_right[1]),\n (Path.CURVE3, head_right[2]),\n (Path.CURVE3, head_left[1]),\n (Path.CURVE3, head_left[0]),\n (Path.LINETO, tail_left[2]),\n (Path.CURVE3, tail_left[1]),\n (Path.CURVE3, tail_left[0]),\n (Path.LINETO, tail_right[0]),\n (Path.CLOSEPOLY, tail_right[0]),\n ]\n else:\n patch_path = [(Path.MOVETO, head_right[0]),\n (Path.CURVE3, head_right[1]),\n (Path.CURVE3, head_right[2]),\n (Path.CURVE3, head_left[1]),\n (Path.CURVE3, head_left[0]),\n (Path.CLOSEPOLY, head_left[0]),\n ]\n\n path = Path([p for c, p in patch_path], [c for c, p in patch_path])\n\n return path, True\n\n @_register_style(_style_list)\n class Fancy(_Base):\n \"\"\"A fancy arrow. 
Only works with a quadratic Bezier curve.\"\"\"\n\n def __init__(self, head_length=.4, head_width=.4, tail_width=.4):\n \"\"\"\n Parameters\n ----------\n head_length : float, default: 0.4\n Length of the arrow head.\n\n head_width : float, default: 0.4\n Width of the arrow head.\n\n tail_width : float, default: 0.4\n Width of the arrow tail.\n \"\"\"\n self.head_length, self.head_width, self.tail_width = \\\n head_length, head_width, tail_width\n super().__init__()\n\n def transmute(self, path, mutation_size, linewidth):\n\n x0, y0, x1, y1, x2, y2 = self.ensure_quadratic_bezier(path)\n\n # divide the path into a head and a tail\n head_length = self.head_length * mutation_size\n arrow_path = [(x0, y0), (x1, y1), (x2, y2)]\n\n # path for head\n in_f = inside_circle(x2, y2, head_length)\n try:\n path_out, path_in = split_bezier_intersecting_with_closedpath(\n arrow_path, in_f, tolerance=0.01)\n except NonIntersectingPathException:\n # if this happens, make a straight line of the head_length\n # long.\n x0, y0 = _point_along_a_line(x2, y2, x1, y1, head_length)\n x1n, y1n = 0.5 * (x0 + x2), 0.5 * (y0 + y2)\n arrow_path = [(x0, y0), (x1n, y1n), (x2, y2)]\n path_head = arrow_path\n else:\n path_head = path_in\n\n # path for head\n in_f = inside_circle(x2, y2, head_length * .8)\n path_out, path_in = split_bezier_intersecting_with_closedpath(\n arrow_path, in_f, tolerance=0.01)\n path_tail = path_out\n\n # head\n head_width = self.head_width * mutation_size\n head_l, head_r = make_wedged_bezier2(path_head,\n head_width / 2.,\n wm=.6)\n\n # tail\n tail_width = self.tail_width * mutation_size\n tail_left, tail_right = make_wedged_bezier2(path_tail,\n tail_width * .5,\n w1=1., wm=0.6, w2=0.3)\n\n # path for head\n in_f = inside_circle(x0, y0, tail_width * .3)\n path_in, path_out = split_bezier_intersecting_with_closedpath(\n arrow_path, in_f, tolerance=0.01)\n tail_start = path_in[-1]\n\n head_right, head_left = head_r, head_l\n patch_path = [(Path.MOVETO, tail_start),\n (Path.LINETO, tail_right[0]),\n (Path.CURVE3, tail_right[1]),\n (Path.CURVE3, tail_right[2]),\n (Path.LINETO, head_right[0]),\n (Path.CURVE3, head_right[1]),\n (Path.CURVE3, head_right[2]),\n (Path.CURVE3, head_left[1]),\n (Path.CURVE3, head_left[0]),\n (Path.LINETO, tail_left[2]),\n (Path.CURVE3, tail_left[1]),\n (Path.CURVE3, tail_left[0]),\n (Path.LINETO, tail_start),\n (Path.CLOSEPOLY, tail_start),\n ]\n path = Path([p for c, p in patch_path], [c for c, p in patch_path])\n\n return path, True\n\n @_register_style(_style_list)\n class Wedge(_Base):\n \"\"\"\n Wedge(?) shape. Only works with a quadratic Bezier curve. The\n begin point has a width of the tail_width and the end point has a\n width of 0. 
At the middle, the width is shrink_factor*tail_width.\n \"\"\"\n\n def __init__(self, tail_width=.3, shrink_factor=0.5):\n \"\"\"\n Parameters\n ----------\n tail_width : float, default: 0.3\n Width of the tail.\n\n shrink_factor : float, default: 0.5\n Fraction of the arrow width at the middle point.\n \"\"\"\n self.tail_width = tail_width\n self.shrink_factor = shrink_factor\n super().__init__()\n\n def transmute(self, path, mutation_size, linewidth):\n\n x0, y0, x1, y1, x2, y2 = self.ensure_quadratic_bezier(path)\n\n arrow_path = [(x0, y0), (x1, y1), (x2, y2)]\n b_plus, b_minus = make_wedged_bezier2(\n arrow_path,\n self.tail_width * mutation_size / 2.,\n wm=self.shrink_factor)\n\n patch_path = [(Path.MOVETO, b_plus[0]),\n (Path.CURVE3, b_plus[1]),\n (Path.CURVE3, b_plus[2]),\n (Path.LINETO, b_minus[2]),\n (Path.CURVE3, b_minus[1]),\n (Path.CURVE3, b_minus[0]),\n (Path.CLOSEPOLY, b_minus[0]),\n ]\n path = Path([p for c, p in patch_path], [c for c, p in patch_path])\n\n return path, True\n\n\ndocstring.interpd.update(\n AvailableBoxstyles=BoxStyle.pprint_styles(),\n ListBoxstyles=_simpleprint_styles(BoxStyle._style_list),\n AvailableArrowstyles=ArrowStyle.pprint_styles(),\n AvailableConnectorstyles=ConnectionStyle.pprint_styles(),\n)\ndocstring.dedent_interpd(BoxStyle)\ndocstring.dedent_interpd(ArrowStyle)\ndocstring.dedent_interpd(ConnectionStyle)\n\n\nclass FancyBboxPatch(Patch):\n \"\"\"\n A fancy box around a rectangle with lower left at *xy* = (*x*, *y*)\n with specified width and height.\n\n `.FancyBboxPatch` is similar to `.Rectangle`, but it draws a fancy box\n around the rectangle. The transformation of the rectangle box to the\n fancy box is delegated to the style classes defined in `.BoxStyle`.\n \"\"\"\n\n _edge_default = True\n\n def __str__(self):\n s = self.__class__.__name__ + \"((%g, %g), width=%g, height=%g)\"\n return s % (self._x, self._y, self._width, self._height)\n\n @docstring.dedent_interpd\n @_api.delete_parameter(\"3.4\", \"bbox_transmuter\", alternative=\"boxstyle\")\n def __init__(self, xy, width, height,\n boxstyle=\"round\", bbox_transmuter=None,\n mutation_scale=1, mutation_aspect=1,\n **kwargs):\n \"\"\"\n Parameters\n ----------\n xy : float, float\n The lower left corner of the box.\n\n width : float\n The width of the box.\n\n height : float\n The height of the box.\n\n boxstyle : str or `matplotlib.patches.BoxStyle`\n The style of the fancy box. This can either be a `.BoxStyle`\n instance or a string of the style name and optionally comma\n seprarated attributes (e.g. \"Round, pad=0.2\"). This string is\n passed to `.BoxStyle` to construct a `.BoxStyle` object. See\n there for a full documentation.\n\n The following box styles are available:\n\n %(AvailableBoxstyles)s\n\n mutation_scale : float, default: 1\n Scaling factor applied to the attributes of the box style\n (e.g. pad or rounding_size).\n\n mutation_aspect : float, default: 1\n The height of the rectangle will be squeezed by this value before\n the mutation and the mutated box will be stretched by the inverse\n of it. 
For example, this allows different horizontal and vertical\n padding.\n\n Other Parameters\n ----------------\n **kwargs : `.Patch` properties\n\n %(Patch_kwdoc)s\n \"\"\"\n\n super().__init__(**kwargs)\n\n self._x = xy[0]\n self._y = xy[1]\n self._width = width\n self._height = height\n\n if boxstyle == \"custom\":\n _api.warn_deprecated(\n \"3.4\", message=\"Support for boxstyle='custom' is deprecated \"\n \"since %(since)s and will be removed %(removal)s; directly \"\n \"pass a boxstyle instance as the boxstyle parameter instead.\")\n if bbox_transmuter is None:\n raise ValueError(\"bbox_transmuter argument is needed with \"\n \"custom boxstyle\")\n self._bbox_transmuter = bbox_transmuter\n else:\n self.set_boxstyle(boxstyle)\n\n self._mutation_scale = mutation_scale\n self._mutation_aspect = mutation_aspect\n\n self.stale = True\n\n @docstring.dedent_interpd\n def set_boxstyle(self, boxstyle=None, **kwargs):\n \"\"\"\n Set the box style.\n\n Most box styles can be further configured using attributes.\n Attributes from the previous box style are not reused.\n\n Without argument (or with ``boxstyle=None``), the available box styles\n are returned as a human-readable string.\n\n Parameters\n ----------\n boxstyle : str or `matplotlib.patches.BoxStyle`\n The style of the fancy box. This can either be a `.BoxStyle`\n instance or a string of the style name and optionally comma\n seprarated attributes (e.g. \"Round, pad=0.2\"). This string is\n passed to `.BoxStyle` to construct a `.BoxStyle` object. See\n there for a full documentation.\n\n The following box styles are available:\n\n %(AvailableBoxstyles)s\n\n .. ACCEPTS: %(ListBoxstyles)s\n\n **kwargs\n Additional attributes for the box style. See the table above for\n supported parameters.\n\n Examples\n --------\n ::\n\n set_boxstyle(\"round,pad=0.2\")\n set_boxstyle(\"round\", pad=0.2)\n\n \"\"\"\n if boxstyle is None:\n return BoxStyle.pprint_styles()\n\n if isinstance(boxstyle, BoxStyle._Base) or callable(boxstyle):\n self._bbox_transmuter = boxstyle\n else:\n self._bbox_transmuter = BoxStyle(boxstyle, **kwargs)\n self.stale = True\n\n def set_mutation_scale(self, scale):\n \"\"\"\n Set the mutation scale.\n\n Parameters\n ----------\n scale : float\n \"\"\"\n self._mutation_scale = scale\n self.stale = True\n\n def get_mutation_scale(self):\n \"\"\"Return the mutation scale.\"\"\"\n return self._mutation_scale\n\n def set_mutation_aspect(self, aspect):\n \"\"\"\n Set the aspect ratio of the bbox mutation.\n\n Parameters\n ----------\n aspect : float\n \"\"\"\n self._mutation_aspect = aspect\n self.stale = True\n\n def get_mutation_aspect(self):\n \"\"\"Return the aspect ratio of the bbox mutation.\"\"\"\n return (self._mutation_aspect if self._mutation_aspect is not None\n else 1) # backcompat.\n\n def get_boxstyle(self):\n \"\"\"Return the boxstyle object.\"\"\"\n return self._bbox_transmuter\n\n def get_path(self):\n \"\"\"Return the mutated path of the rectangle.\"\"\"\n boxstyle = self.get_boxstyle()\n x = self._x\n y = self._y\n width = self._width\n height = self._height\n m_scale = self.get_mutation_scale()\n m_aspect = self.get_mutation_aspect()\n # Squeeze the given height by the aspect_ratio.\n y, height = y / m_aspect, height / m_aspect\n # Call boxstyle with squeezed height.\n try:\n inspect.signature(boxstyle).bind(x, y, width, height, m_scale)\n except TypeError:\n # Don't apply aspect twice.\n path = boxstyle(x, y, width, height, m_scale, 1)\n _api.warn_deprecated(\n \"3.4\", message=\"boxstyles must be callable without 
the \"\n \"'mutation_aspect' parameter since %(since)s; support for the \"\n \"old call signature will be removed %(removal)s.\")\n else:\n path = boxstyle(x, y, width, height, m_scale)\n vertices, codes = path.vertices, path.codes\n # Restore the height.\n vertices[:, 1] = vertices[:, 1] * m_aspect\n return Path(vertices, codes)\n\n # Following methods are borrowed from the Rectangle class.\n\n def get_x(self):\n \"\"\"Return the left coord of the rectangle.\"\"\"\n return self._x\n\n def get_y(self):\n \"\"\"Return the bottom coord of the rectangle.\"\"\"\n return self._y\n\n def get_width(self):\n \"\"\"Return the width of the rectangle.\"\"\"\n return self._width\n\n def get_height(self):\n \"\"\"Return the height of the rectangle.\"\"\"\n return self._height\n\n def set_x(self, x):\n \"\"\"\n Set the left coord of the rectangle.\n\n Parameters\n ----------\n x : float\n \"\"\"\n self._x = x\n self.stale = True\n\n def set_y(self, y):\n \"\"\"\n Set the bottom coord of the rectangle.\n\n Parameters\n ----------\n y : float\n \"\"\"\n self._y = y\n self.stale = True\n\n def set_width(self, w):\n \"\"\"\n Set the rectangle width.\n\n Parameters\n ----------\n w : float\n \"\"\"\n self._width = w\n self.stale = True\n\n def set_height(self, h):\n \"\"\"\n Set the rectangle height.\n\n Parameters\n ----------\n h : float\n \"\"\"\n self._height = h\n self.stale = True\n\n def set_bounds(self, *args):\n \"\"\"\n Set the bounds of the rectangle.\n\n Call signatures::\n\n set_bounds(left, bottom, width, height)\n set_bounds((left, bottom, width, height))\n\n Parameters\n ----------\n left, bottom : float\n The coordinates of the bottom left corner of the rectangle.\n width, height : float\n The width/height of the rectangle.\n \"\"\"\n if len(args) == 1:\n l, b, w, h = args[0]\n else:\n l, b, w, h = args\n self._x = l\n self._y = b\n self._width = w\n self._height = h\n self.stale = True\n\n def get_bbox(self):\n \"\"\"Return the `.Bbox`.\"\"\"\n return transforms.Bbox.from_bounds(self._x, self._y,\n self._width, self._height)\n\n\nclass FancyArrowPatch(Patch):\n \"\"\"\n A fancy arrow patch. It draws an arrow using the `ArrowStyle`.\n\n The head and tail positions are fixed at the specified start and end points\n of the arrow, but the size and shape (in display coordinates) of the arrow\n does not change when the axis is moved or zoomed.\n \"\"\"\n _edge_default = True\n\n def __str__(self):\n if self._posA_posB is not None:\n (x1, y1), (x2, y2) = self._posA_posB\n return f\"{type(self).__name__}(({x1:g}, {y1:g})->({x2:g}, {y2:g}))\"\n else:\n return f\"{type(self).__name__}({self._path_original})\"\n\n @docstring.dedent_interpd\n @_api.delete_parameter(\"3.4\", \"dpi_cor\")\n def __init__(self, posA=None, posB=None, path=None,\n arrowstyle=\"simple\", connectionstyle=\"arc3\",\n patchA=None, patchB=None,\n shrinkA=2, shrinkB=2,\n mutation_scale=1, mutation_aspect=1,\n dpi_cor=1,\n **kwargs):\n \"\"\"\n There are two ways for defining an arrow:\n\n - If *posA* and *posB* are given, a path connecting two points is\n created according to *connectionstyle*. The path will be\n clipped with *patchA* and *patchB* and further shrunken by\n *shrinkA* and *shrinkB*. 
An arrow is drawn along this\n resulting path using the *arrowstyle* parameter.\n\n - Alternatively if *path* is provided, an arrow is drawn along this\n path and *patchA*, *patchB*, *shrinkA*, and *shrinkB* are ignored.\n\n Parameters\n ----------\n posA, posB : (float, float), default: None\n (x, y) coordinates of arrow tail and arrow head respectively.\n\n path : `~matplotlib.path.Path`, default: None\n If provided, an arrow is drawn along this path and *patchA*,\n *patchB*, *shrinkA*, and *shrinkB* are ignored.\n\n arrowstyle : str or `.ArrowStyle`, default: 'simple'\n The `.ArrowStyle` with which the fancy arrow is drawn. If a\n string, it should be one of the available arrowstyle names, with\n optional comma-separated attributes. The optional attributes are\n meant to be scaled with the *mutation_scale*. The following arrow\n styles are available:\n\n %(AvailableArrowstyles)s\n\n connectionstyle : str or `.ConnectionStyle` or None, optional, \\\ndefault: 'arc3'\n The `.ConnectionStyle` with which *posA* and *posB* are connected.\n If a string, it should be one of the available connectionstyle\n names, with optional comma-separated attributes. The following\n connection styles are available:\n\n %(AvailableConnectorstyles)s\n\n patchA, patchB : `.Patch`, default: None\n Head and tail patches, respectively.\n\n shrinkA, shrinkB : float, default: 2\n Shrinking factor of the tail and head of the arrow respectively.\n\n mutation_scale : float, default: 1\n Value with which attributes of *arrowstyle* (e.g., *head_length*)\n will be scaled.\n\n mutation_aspect : None or float, default: None\n The height of the rectangle will be squeezed by this value before\n the mutation and the mutated box will be stretched by the inverse\n of it.\n\n dpi_cor : float, default: 1\n dpi_cor is currently used for linewidth-related things and shrink\n factor. Mutation scale is affected by this. Deprecated.\n\n Other Parameters\n ----------------\n **kwargs : `.Patch` properties, optional\n Here is a list of available `.Patch` properties:\n\n %(Patch_kwdoc)s\n\n In contrast to other patches, the default ``capstyle`` and\n ``joinstyle`` for `FancyArrowPatch` are set to ``\"round\"``.\n \"\"\"\n # Traditionally, the cap- and joinstyle for FancyArrowPatch are round\n kwargs.setdefault(\"joinstyle\", JoinStyle.round)\n kwargs.setdefault(\"capstyle\", CapStyle.round)\n\n super().__init__(**kwargs)\n\n if posA is not None and posB is not None and path is None:\n self._posA_posB = [posA, posB]\n\n if connectionstyle is None:\n connectionstyle = \"arc3\"\n self.set_connectionstyle(connectionstyle)\n\n elif posA is None and posB is None and path is not None:\n self._posA_posB = None\n else:\n raise ValueError(\"Either posA and posB, or path need to provided\")\n\n self.patchA = patchA\n self.patchB = patchB\n self.shrinkA = shrinkA\n self.shrinkB = shrinkB\n\n self._path_original = path\n\n self.set_arrowstyle(arrowstyle)\n\n self._mutation_scale = mutation_scale\n self._mutation_aspect = mutation_aspect\n\n self._dpi_cor = dpi_cor\n\n @_api.deprecated(\"3.4\")\n def set_dpi_cor(self, dpi_cor):\n \"\"\"\n dpi_cor is currently used for linewidth-related things and\n shrink factor. Mutation scale is affected by this.\n\n Parameters\n ----------\n dpi_cor : float\n \"\"\"\n self._dpi_cor = dpi_cor\n self.stale = True\n\n @_api.deprecated(\"3.4\")\n def get_dpi_cor(self):\n \"\"\"\n dpi_cor is currently used for linewidth-related things and\n shrink factor. 
Mutation scale is affected by this.\n\n Returns\n -------\n scalar\n \"\"\"\n return self._dpi_cor\n\n def set_positions(self, posA, posB):\n \"\"\"\n Set the begin and end positions of the connecting path.\n\n Parameters\n ----------\n posA, posB : None, tuple\n (x, y) coordinates of arrow tail and arrow head respectively. If\n `None` use current value.\n \"\"\"\n if posA is not None:\n self._posA_posB[0] = posA\n if posB is not None:\n self._posA_posB[1] = posB\n self.stale = True\n\n def set_patchA(self, patchA):\n \"\"\"\n Set the tail patch.\n\n Parameters\n ----------\n patchA : `.patches.Patch`\n \"\"\"\n self.patchA = patchA\n self.stale = True\n\n def set_patchB(self, patchB):\n \"\"\"\n Set the head patch.\n\n Parameters\n ----------\n patchB : `.patches.Patch`\n \"\"\"\n self.patchB = patchB\n self.stale = True\n\n def set_connectionstyle(self, connectionstyle, **kw):\n \"\"\"\n Set the connection style. Old attributes are forgotten.\n\n Parameters\n ----------\n connectionstyle : str or `.ConnectionStyle` or None, optional\n Can be a string with connectionstyle name with\n optional comma-separated attributes, e.g.::\n\n set_connectionstyle(\"arc,angleA=0,armA=30,rad=10\")\n\n Alternatively, the attributes can be provided as keywords, e.g.::\n\n set_connectionstyle(\"arc\", angleA=0,armA=30,rad=10)\n\n Without any arguments (or with ``connectionstyle=None``), return\n available styles as a list of strings.\n \"\"\"\n\n if connectionstyle is None:\n return ConnectionStyle.pprint_styles()\n\n if (isinstance(connectionstyle, ConnectionStyle._Base) or\n callable(connectionstyle)):\n self._connector = connectionstyle\n else:\n self._connector = ConnectionStyle(connectionstyle, **kw)\n self.stale = True\n\n def get_connectionstyle(self):\n \"\"\"Return the `ConnectionStyle` used.\"\"\"\n return self._connector\n\n def set_arrowstyle(self, arrowstyle=None, **kw):\n \"\"\"\n Set the arrow style. Old attributes are forgotten. Without arguments\n (or with ``arrowstyle=None``) returns available box styles as a list of\n strings.\n\n Parameters\n ----------\n arrowstyle : None or ArrowStyle or str, default: None\n Can be a string with arrowstyle name with optional comma-separated\n attributes, e.g.::\n\n set_arrowstyle(\"Fancy,head_length=0.2\")\n\n Alternatively attributes can be provided as keywords, e.g.::\n\n set_arrowstyle(\"fancy\", head_length=0.2)\n\n \"\"\"\n\n if arrowstyle is None:\n return ArrowStyle.pprint_styles()\n\n if isinstance(arrowstyle, ArrowStyle._Base):\n self._arrow_transmuter = arrowstyle\n else:\n self._arrow_transmuter = ArrowStyle(arrowstyle, **kw)\n self.stale = True\n\n def get_arrowstyle(self):\n \"\"\"Return the arrowstyle object.\"\"\"\n return self._arrow_transmuter\n\n def set_mutation_scale(self, scale):\n \"\"\"\n Set the mutation scale.\n\n Parameters\n ----------\n scale : float\n \"\"\"\n self._mutation_scale = scale\n self.stale = True\n\n def get_mutation_scale(self):\n \"\"\"\n Return the mutation scale.\n\n Returns\n -------\n scalar\n \"\"\"\n return self._mutation_scale\n\n def set_mutation_aspect(self, aspect):\n \"\"\"\n Set the aspect ratio of the bbox mutation.\n\n Parameters\n ----------\n aspect : float\n \"\"\"\n self._mutation_aspect = aspect\n self.stale = True\n\n def get_mutation_aspect(self):\n \"\"\"Return the aspect ratio of the bbox mutation.\"\"\"\n return (self._mutation_aspect if self._mutation_aspect is not None\n else 1) # backcompat.\n\n def get_path(self):\n \"\"\"\n Return the path of the arrow in the data coordinates. 
Use\n get_path_in_displaycoord() method to retrieve the arrow path\n in display coordinates.\n \"\"\"\n _path, fillable = self.get_path_in_displaycoord()\n if np.iterable(fillable):\n _path = Path.make_compound_path(*_path)\n return self.get_transform().inverted().transform_path(_path)\n\n def get_path_in_displaycoord(self):\n \"\"\"Return the mutated path of the arrow in display coordinates.\"\"\"\n dpi_cor = self._dpi_cor\n\n if self._posA_posB is not None:\n posA = self._convert_xy_units(self._posA_posB[0])\n posB = self._convert_xy_units(self._posA_posB[1])\n (posA, posB) = self.get_transform().transform((posA, posB))\n _path = self.get_connectionstyle()(posA, posB,\n patchA=self.patchA,\n patchB=self.patchB,\n shrinkA=self.shrinkA * dpi_cor,\n shrinkB=self.shrinkB * dpi_cor\n )\n else:\n _path = self.get_transform().transform_path(self._path_original)\n\n _path, fillable = self.get_arrowstyle()(\n _path,\n self.get_mutation_scale() * dpi_cor,\n self.get_linewidth() * dpi_cor,\n self.get_mutation_aspect())\n\n return _path, fillable\n\n def draw(self, renderer):\n if not self.get_visible():\n return\n\n with self._bind_draw_path_function(renderer) as draw_path:\n\n # FIXME : dpi_cor is for the dpi-dependency of the linewidth. There\n # could be room for improvement. Maybe get_path_in_displaycoord\n # could take a renderer argument, but get_path should be adapted\n # too.\n self._dpi_cor = renderer.points_to_pixels(1.)\n path, fillable = self.get_path_in_displaycoord()\n\n if not np.iterable(fillable):\n path = [path]\n fillable = [fillable]\n\n affine = transforms.IdentityTransform()\n\n for p, f in zip(path, fillable):\n draw_path(\n p, affine,\n self._facecolor if f and self._facecolor[3] else None)\n\n\nclass ConnectionPatch(FancyArrowPatch):\n \"\"\"A patch that connects two points (possibly in different axes).\"\"\"\n\n def __str__(self):\n return \"ConnectionPatch((%g, %g), (%g, %g))\" % \\\n (self.xy1[0], self.xy1[1], self.xy2[0], self.xy2[1])\n\n @docstring.dedent_interpd\n @_api.delete_parameter(\"3.4\", \"dpi_cor\")\n def __init__(self, xyA, xyB, coordsA, coordsB=None,\n axesA=None, axesB=None,\n arrowstyle=\"-\",\n connectionstyle=\"arc3\",\n patchA=None,\n patchB=None,\n shrinkA=0.,\n shrinkB=0.,\n mutation_scale=10.,\n mutation_aspect=None,\n clip_on=False,\n dpi_cor=1.,\n **kwargs):\n \"\"\"\n Connect point *xyA* in *coordsA* with point *xyB* in *coordsB*.\n\n Valid keys are\n\n =============== ======================================================\n Key Description\n =============== ======================================================\n arrowstyle the arrow style\n connectionstyle the connection style\n relpos default is (0.5, 0.5)\n patchA default is bounding box of the text\n patchB default is None\n shrinkA default is 2 points\n shrinkB default is 2 points\n mutation_scale default is text size (in points)\n mutation_aspect default is 1.\n ? 
any key for `matplotlib.patches.PathPatch`\n =============== ======================================================\n\n *coordsA* and *coordsB* are strings that indicate the\n coordinates of *xyA* and *xyB*.\n\n ==================== ==================================================\n Property Description\n ==================== ==================================================\n 'figure points' points from the lower left corner of the figure\n 'figure pixels' pixels from the lower left corner of the figure\n 'figure fraction' 0, 0 is lower left of figure and 1, 1 is upper\n right\n 'subfigure points' points from the lower left corner of the subfigure\n 'subfigure pixels' pixels from the lower left corner of the subfigure\n 'subfigure fraction' fraction of the subfigure, 0, 0 is lower left.\n 'axes points' points from lower left corner of axes\n 'axes pixels' pixels from lower left corner of axes\n 'axes fraction' 0, 0 is lower left of axes and 1, 1 is upper right\n 'data' use the coordinate system of the object being\n annotated (default)\n 'offset points' offset (in points) from the *xy* value\n 'polar' you can specify *theta*, *r* for the annotation,\n even in cartesian plots. Note that if you are\n using a polar axes, you do not need to specify\n polar for the coordinate system since that is the\n native \"data\" coordinate system.\n ==================== ==================================================\n\n Alternatively they can be set to any valid\n `~matplotlib.transforms.Transform`.\n\n Note that 'subfigure pixels' and 'figure pixels' are the same\n for the parent figure, so users who want code that is usable in\n a subfigure can use 'subfigure pixels'.\n\n .. note::\n\n Using `ConnectionPatch` across two `~.axes.Axes` instances\n is not directly compatible with :doc:`constrained layout\n </tutorials/intermediate/constrainedlayout_guide>`. Add the artist\n directly to the `.Figure` instead of adding it to a specific Axes,\n or exclude it from the layout using ``con.set_in_layout(False)``.\n\n .. 
code-block:: default\n\n fig, ax = plt.subplots(1, 2, constrained_layout=True)\n con = ConnectionPatch(..., axesA=ax[0], axesB=ax[1])\n fig.add_artist(con)\n\n \"\"\"\n if coordsB is None:\n coordsB = coordsA\n # we'll draw ourself after the artist we annotate by default\n self.xy1 = xyA\n self.xy2 = xyB\n self.coords1 = coordsA\n self.coords2 = coordsB\n\n self.axesA = axesA\n self.axesB = axesB\n\n super().__init__(posA=(0, 0), posB=(1, 1),\n arrowstyle=arrowstyle,\n connectionstyle=connectionstyle,\n patchA=patchA, patchB=patchB,\n shrinkA=shrinkA, shrinkB=shrinkB,\n mutation_scale=mutation_scale,\n mutation_aspect=mutation_aspect,\n clip_on=clip_on,\n **kwargs)\n self._dpi_cor = dpi_cor\n\n # if True, draw annotation only if self.xy is inside the axes\n self._annotation_clip = None\n\n def _get_xy(self, xy, s, axes=None):\n \"\"\"Calculate the pixel position of given point.\"\"\"\n s0 = s # For the error message, if needed.\n if axes is None:\n axes = self.axes\n xy = np.array(xy)\n if s in [\"figure points\", \"axes points\"]:\n xy *= self.figure.dpi / 72\n s = s.replace(\"points\", \"pixels\")\n elif s == \"figure fraction\":\n s = self.figure.transFigure\n elif s == \"subfigure fraction\":\n s = self.figure.transSubfigure\n elif s == \"axes fraction\":\n s = axes.transAxes\n x, y = xy\n\n if s == 'data':\n trans = axes.transData\n x = float(self.convert_xunits(x))\n y = float(self.convert_yunits(y))\n return trans.transform((x, y))\n elif s == 'offset points':\n if self.xycoords == 'offset points': # prevent recursion\n return self._get_xy(self.xy, 'data')\n return (\n self._get_xy(self.xy, self.xycoords) # converted data point\n + xy * self.figure.dpi / 72) # converted offset\n elif s == 'polar':\n theta, r = x, y\n x = r * np.cos(theta)\n y = r * np.sin(theta)\n trans = axes.transData\n return trans.transform((x, y))\n elif s == 'figure pixels':\n # pixels from the lower left corner of the figure\n bb = self.figure.figbbox\n x = bb.x0 + x if x >= 0 else bb.x1 + x\n y = bb.y0 + y if y >= 0 else bb.y1 + y\n return x, y\n elif s == 'subfigure pixels':\n # pixels from the lower left corner of the figure\n bb = self.figure.bbox\n x = bb.x0 + x if x >= 0 else bb.x1 + x\n y = bb.y0 + y if y >= 0 else bb.y1 + y\n return x, y\n elif s == 'axes pixels':\n # pixels from the lower left corner of the axes\n bb = axes.bbox\n x = bb.x0 + x if x >= 0 else bb.x1 + x\n y = bb.y0 + y if y >= 0 else bb.y1 + y\n return x, y\n elif isinstance(s, transforms.Transform):\n return s.transform(xy)\n else:\n raise ValueError(f\"{s0} is not a valid coordinate transformation\")\n\n def set_annotation_clip(self, b):\n \"\"\"\n Set the clipping behavior.\n\n Parameters\n ----------\n b : bool or None\n\n - *False*: The annotation will always be drawn regardless of its\n position.\n - *True*: The annotation will only be drawn if ``self.xy`` is\n inside the axes.\n - *None*: The annotation will only be drawn if ``self.xy`` is\n inside the axes and ``self.xycoords == \"data\"``.\n \"\"\"\n self._annotation_clip = b\n self.stale = True\n\n def get_annotation_clip(self):\n \"\"\"\n Return the clipping behavior.\n\n See `.set_annotation_clip` for the meaning of the return value.\n \"\"\"\n return self._annotation_clip\n\n def get_path_in_displaycoord(self):\n \"\"\"Return the mutated path of the arrow in display coordinates.\"\"\"\n dpi_cor = self._dpi_cor\n posA = self._get_xy(self.xy1, self.coords1, self.axesA)\n posB = self._get_xy(self.xy2, self.coords2, self.axesB)\n path = self.get_connectionstyle()(\n posA, 
posB,\n patchA=self.patchA, patchB=self.patchB,\n shrinkA=self.shrinkA * dpi_cor, shrinkB=self.shrinkB * dpi_cor,\n )\n path, fillable = self.get_arrowstyle()(\n path,\n self.get_mutation_scale() * dpi_cor,\n self.get_linewidth() * dpi_cor,\n self.get_mutation_aspect()\n )\n return path, fillable\n\n def _check_xy(self, renderer):\n \"\"\"Check whether the annotation needs to be drawn.\"\"\"\n\n b = self.get_annotation_clip()\n\n if b or (b is None and self.coords1 == \"data\"):\n xy_pixel = self._get_xy(self.xy1, self.coords1, self.axesA)\n if self.axesA is None:\n axes = self.axes\n else:\n axes = self.axesA\n if not axes.contains_point(xy_pixel):\n return False\n\n if b or (b is None and self.coords2 == \"data\"):\n xy_pixel = self._get_xy(self.xy2, self.coords2, self.axesB)\n if self.axesB is None:\n axes = self.axes\n else:\n axes = self.axesB\n if not axes.contains_point(xy_pixel):\n return False\n\n return True\n\n def draw(self, renderer):\n if renderer is not None:\n self._renderer = renderer\n if not self.get_visible() or not self._check_xy(renderer):\n return\n super().draw(renderer)\n"}
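The embedded patches.py source above documents, among other things, how ConnectionPatch interprets its coordinate systems and why a cross-axes connection should be added to the figure rather than to a specific Axes under constrained layout. As a hedged, minimal usage sketch of that documented behaviour (the coordinate values below are illustrative only and are not taken from the record):

import matplotlib.pyplot as plt
from matplotlib.patches import ConnectionPatch

fig, (ax1, ax2) = plt.subplots(1, 2, constrained_layout=True)
# Connect a data point in the left Axes to a data point in the right Axes.
con = ConnectionPatch(xyA=(0.8, 0.5), coordsA="data", axesA=ax1,
                      xyB=(0.2, 0.5), coordsB="data", axesB=ax2,
                      arrowstyle="->")
# Per the docstring above, add the artist to the figure (not to an Axes) so
# that constrained layout does not need to account for it.
fig.add_artist(con)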
|
{"lib/matplotlib/patches.py": [{"type": "function", "name": "Rectangle.get_angle", "lines": [802, 804], "signature": "def get_angle(self):", "doc": "Get the rotation angle in degrees."}, {"type": "function", "name": "Rectangle.set_angle", "lines": [816, 823], "signature": "def set_angle(self, angle):", "doc": "Set the rotation angle in degrees.\n\nThe rotation is performed anti-clockwise around *xy*."}]}
|
3.3
|
["lib/matplotlib/tests/test_patches.py::test_rotate_rect_draw[png]"]
|
["lib/matplotlib/tests/test_patches.py::test_Polygon_close", "lib/matplotlib/tests/test_patches.py::test_rotate_rect", "lib/matplotlib/tests/test_patches.py::test_negative_rect", "lib/matplotlib/tests/test_patches.py::test_clip_to_bbox[png]", "lib/matplotlib/tests/test_patches.py::test_patch_alpha_coloring[png]", "lib/matplotlib/tests/test_patches.py::test_patch_alpha_override[png]", "lib/matplotlib/tests/test_patches.py::test_patch_color_none", "lib/matplotlib/tests/test_patches.py::test_patch_custom_linestyle[png]", "lib/matplotlib/tests/test_patches.py::test_patch_linestyle_accents", "lib/matplotlib/tests/test_patches.py::test_patch_linestyle_none[png]", "lib/matplotlib/tests/test_patches.py::test_wedge_movement", "lib/matplotlib/tests/test_patches.py::test_wedge_range[png]", "lib/matplotlib/tests/test_patches.py::test_patch_str", "lib/matplotlib/tests/test_patches.py::test_multi_color_hatch[png]", "lib/matplotlib/tests/test_patches.py::test_units_rectangle[png]", "lib/matplotlib/tests/test_patches.py::test_connection_patch[png]", "lib/matplotlib/tests/test_patches.py::test_connection_patch_fig[png]", "lib/matplotlib/tests/test_patches.py::test_datetime_rectangle", "lib/matplotlib/tests/test_patches.py::test_datetime_datetime_fails", "lib/matplotlib/tests/test_patches.py::test_contains_point", "lib/matplotlib/tests/test_patches.py::test_contains_points", "lib/matplotlib/tests/test_patches.py::test_shadow[png]", "lib/matplotlib/tests/test_patches.py::test_fancyarrow_units", "lib/matplotlib/tests/test_patches.py::test_degenerate_polygon", "lib/matplotlib/tests/test_patches.py::test_color_override_warning[edgecolor]", "lib/matplotlib/tests/test_patches.py::test_color_override_warning[facecolor]", "lib/matplotlib/tests/test_patches.py::test_empty_verts", "lib/matplotlib/tests/test_patches.py::test_default_antialiased", "lib/matplotlib/tests/test_patches.py::test_default_linestyle", "lib/matplotlib/tests/test_patches.py::test_default_capstyle", "lib/matplotlib/tests/test_patches.py::test_default_joinstyle"]
|
59b32afde60e46407b60c766b878c840a9bfa490
|
{"first_commit_time": 1615140549.0, "pr_title": "Add angle setter/getter to Rectangle", "pr_body": "This adds an `angle` property, such that the `Rectangle` is labelled stale when it is updated. All the other properties use `set_` and `get_`, but to maintain backwards compatibility I've used a property here since `Rectangle.angle` already existed.", "pr_timeline": [{"time": 1614989619.0, "comment": "I propose to add `set_` and `get_` instead, but leave the storage in the public `Rectangle.angle`.\r\n\r\nThis is a consistency issue. We don't have attributes do fancy stuff (and thus have litte public attributes anyway). Defining `set_` enables the use with standard mechanisms like `rect.set(width=2, angle=30)`.\r\n\r\nWe don't need staling on `.angle` assignment. This has not been working and thus we don't need that for backward compatibility.\r\n\r\nWe may additionally deprecate `.angle`, but I'm undecided if that's worth it."}, {"time": 1615140146.0, "comment": "My approval still stands, although you made flake8 unhappy :)"}, {"time": 1615143510.0, "comment": "Please add tests for the new setters and getters!"}, {"time": 1615203827.0, "comment": "Test added. It's a simple figure comparison to make sure that setting the rotation after patch creation does the same thing as setting it in the constructor, with some additional tests for `get_angle()`."}], "issues": {}}
|
|
matplotlib/matplotlib
| 19663
|
https://github.com/matplotlib/matplotlib/pull/19663
|
matplotlib__matplotlib-19663
|
[]
|
bd7df784719a3393bc7d6e7b471fb2b5ed52da50
|
diff --git a/lib/matplotlib/colors.py b/lib/matplotlib/colors.py
index 9754d6bf2c56..8f2744013202 100644
--- a/lib/matplotlib/colors.py
+++ b/lib/matplotlib/colors.py
@@ -561,7 +561,7 @@ def _warn_if_global_cmap_modified(cmap):
"colormap. In future versions, you will not be able to "
"modify a registered colormap in-place. To remove this "
"warning, you can make a copy of the colormap first. "
- f'cmap = copy.copy(mpl.cm.get_cmap("{cmap.name}"))'
+ f'cmap = mpl.cm.get_cmap("{cmap.name}").copy()'
)
@@ -845,6 +845,10 @@ def color_block(color):
f'over {color_block(self.get_over())}'
'</div>')
+ def copy(self):
+ """Return a copy of the colormap."""
+ return self.__copy__()
+
class LinearSegmentedColormap(Colormap):
"""
|
diff --git a/lib/matplotlib/tests/test_colors.py b/lib/matplotlib/tests/test_colors.py
index fff69da380bf..d9c31a148a9b 100644
--- a/lib/matplotlib/tests/test_colors.py
+++ b/lib/matplotlib/tests/test_colors.py
@@ -150,6 +150,16 @@ def test_colormap_copy():
with np.errstate(invalid='ignore'):
ret2 = copied_cmap([-1, 0, .5, 1, np.nan, np.inf])
assert_array_equal(ret1, ret2)
+ # again with the .copy method:
+ cmap = plt.cm.Reds
+ copied_cmap = cmap.copy()
+ with np.errstate(invalid='ignore'):
+ ret1 = copied_cmap([-1, 0, .5, 1, np.nan, np.inf])
+ cmap2 = copy.copy(copied_cmap)
+ cmap2.set_bad('g')
+ with np.errstate(invalid='ignore'):
+ ret2 = copied_cmap([-1, 0, .5, 1, np.nan, np.inf])
+ assert_array_equal(ret1, ret2)
def test_colormap_endian():
| 2021-03-07T21:04:50
|
{}
|
{"lib/matplotlib/colors.py": "\"\"\"\nA module for converting numbers or color arguments to *RGB* or *RGBA*.\n\n*RGB* and *RGBA* are sequences of, respectively, 3 or 4 floats in the\nrange 0-1.\n\nThis module includes functions and classes for color specification conversions,\nand for mapping numbers to colors in a 1-D array of colors called a colormap.\n\nMapping data onto colors using a colormap typically involves two steps: a data\narray is first mapped onto the range 0-1 using a subclass of `Normalize`,\nthen this number is mapped to a color using a subclass of `Colormap`. Two\nsubclasses of `Colormap` provided here: `LinearSegmentedColormap`, which uses\npiecewise-linear interpolation to define colormaps, and `ListedColormap`, which\nmakes a colormap from a list of colors.\n\n.. seealso::\n\n :doc:`/tutorials/colors/colormap-manipulation` for examples of how to\n make colormaps and\n\n :doc:`/tutorials/colors/colormaps` for a list of built-in colormaps.\n\n :doc:`/tutorials/colors/colormapnorms` for more details about data\n normalization\n\n More colormaps are available at palettable_.\n\nThe module also provides functions for checking whether an object can be\ninterpreted as a color (`is_color_like`), for converting such an object\nto an RGBA tuple (`to_rgba`) or to an HTML-like hex string in the\n\"#rrggbb\" format (`to_hex`), and a sequence of colors to an (n, 4)\nRGBA array (`to_rgba_array`). Caching is used for efficiency.\n\nMatplotlib recognizes the following formats to specify a color:\n\n* an RGB or RGBA (red, green, blue, alpha) tuple of float values in closed\n interval ``[0, 1]`` (e.g., ``(0.1, 0.2, 0.5)`` or ``(0.1, 0.2, 0.5, 0.3)``);\n* a hex RGB or RGBA string (e.g., ``'#0f0f0f'`` or ``'#0f0f0f80'``;\n case-insensitive);\n* a shorthand hex RGB or RGBA string, equivalent to the hex RGB or RGBA\n string obtained by duplicating each character, (e.g., ``'#abc'``, equivalent\n to ``'#aabbcc'``, or ``'#abcd'``, equivalent to ``'#aabbccdd'``;\n case-insensitive);\n* a string representation of a float value in ``[0, 1]`` inclusive for gray\n level (e.g., ``'0.5'``);\n* one of the characters ``{'b', 'g', 'r', 'c', 'm', 'y', 'k', 'w'}``, which\n are short-hand notations for shades of blue, green, red, cyan, magenta,\n yellow, black, and white. Note that the colors ``'g', 'c', 'm', 'y'`` do not\n coincide with the X11/CSS4 colors. Their particular shades were chosen for\n better visibility of colored lines against typical backgrounds.\n* a X11/CSS4 color name (case-insensitive);\n* a name from the `xkcd color survey`_, prefixed with ``'xkcd:'`` (e.g.,\n ``'xkcd:sky blue'``; case insensitive);\n* one of the Tableau Colors from the 'T10' categorical palette (the default\n color cycle): ``{'tab:blue', 'tab:orange', 'tab:green', 'tab:red',\n 'tab:purple', 'tab:brown', 'tab:pink', 'tab:gray', 'tab:olive', 'tab:cyan'}``\n (case-insensitive);\n* a \"CN\" color spec, i.e. 'C' followed by a number, which is an index into the\n default property cycle (:rc:`axes.prop_cycle`); the indexing is intended to\n occur at rendering time, and defaults to black if the cycle does not include\n color.\n\n.. _palettable: https://jiffyclub.github.io/palettable/\n.. 
_xkcd color survey: https://xkcd.com/color/rgb/\n\"\"\"\n\nimport base64\nfrom collections.abc import Sized, Sequence\nimport copy\nimport functools\nimport inspect\nimport io\nimport itertools\nfrom numbers import Number\nimport re\nfrom PIL import Image\nfrom PIL.PngImagePlugin import PngInfo\n\nimport matplotlib as mpl\nimport numpy as np\nfrom matplotlib import _api, cbook, scale\nfrom ._color_data import BASE_COLORS, TABLEAU_COLORS, CSS4_COLORS, XKCD_COLORS\n\n\nclass _ColorMapping(dict):\n def __init__(self, mapping):\n super().__init__(mapping)\n self.cache = {}\n\n def __setitem__(self, key, value):\n super().__setitem__(key, value)\n self.cache.clear()\n\n def __delitem__(self, key):\n super().__delitem__(key)\n self.cache.clear()\n\n\n_colors_full_map = {}\n# Set by reverse priority order.\n_colors_full_map.update(XKCD_COLORS)\n_colors_full_map.update({k.replace('grey', 'gray'): v\n for k, v in XKCD_COLORS.items()\n if 'grey' in k})\n_colors_full_map.update(CSS4_COLORS)\n_colors_full_map.update(TABLEAU_COLORS)\n_colors_full_map.update({k.replace('gray', 'grey'): v\n for k, v in TABLEAU_COLORS.items()\n if 'gray' in k})\n_colors_full_map.update(BASE_COLORS)\n_colors_full_map = _ColorMapping(_colors_full_map)\n\n_REPR_PNG_SIZE = (512, 64)\n\n\ndef get_named_colors_mapping():\n \"\"\"Return the global mapping of names to named colors.\"\"\"\n return _colors_full_map\n\n\ndef _sanitize_extrema(ex):\n if ex is None:\n return ex\n try:\n ret = ex.item()\n except AttributeError:\n ret = float(ex)\n return ret\n\n\ndef _is_nth_color(c):\n \"\"\"Return whether *c* can be interpreted as an item in the color cycle.\"\"\"\n return isinstance(c, str) and re.match(r\"\\AC[0-9]+\\Z\", c)\n\n\ndef is_color_like(c):\n \"\"\"Return whether *c* can be interpreted as an RGB(A) color.\"\"\"\n # Special-case nth color syntax because it cannot be parsed during setup.\n if _is_nth_color(c):\n return True\n try:\n to_rgba(c)\n except ValueError:\n return False\n else:\n return True\n\n\ndef _check_color_like(**kwargs):\n \"\"\"\n For each *key, value* pair in *kwargs*, check that *value* is color-like.\n \"\"\"\n for k, v in kwargs.items():\n if not is_color_like(v):\n raise ValueError(f\"{v!r} is not a valid value for {k}\")\n\n\ndef same_color(c1, c2):\n \"\"\"\n Return whether the colors *c1* and *c2* are the same.\n\n *c1*, *c2* can be single colors or lists/arrays of colors.\n \"\"\"\n c1 = to_rgba_array(c1)\n c2 = to_rgba_array(c2)\n n1 = max(c1.shape[0], 1) # 'none' results in shape (0, 4), but is 1-elem\n n2 = max(c2.shape[0], 1) # 'none' results in shape (0, 4), but is 1-elem\n\n if n1 != n2:\n raise ValueError('Different number of elements passed.')\n # The following shape test is needed to correctly handle comparisons with\n # 'none', which results in a shape (0, 4) array and thus cannot be tested\n # via value comparison.\n return c1.shape == c2.shape and (c1 == c2).all()\n\n\ndef to_rgba(c, alpha=None):\n \"\"\"\n Convert *c* to an RGBA color.\n\n Parameters\n ----------\n c : Matplotlib color or ``np.ma.masked``\n\n alpha : float, optional\n If *alpha* is given, force the alpha value of the returned RGBA tuple\n to *alpha*.\n\n If None, the alpha value from *c* is used. 
If *c* does not have an\n alpha channel, then alpha defaults to 1.\n\n *alpha* is ignored for the color value ``\"none\"`` (case-insensitive),\n which always maps to ``(0, 0, 0, 0)``.\n\n Returns\n -------\n tuple\n Tuple of floats ``(r, g, b, a)``, where each channel (red, green, blue,\n alpha) can assume values between 0 and 1.\n \"\"\"\n # Special-case nth color syntax because it should not be cached.\n if _is_nth_color(c):\n from matplotlib import rcParams\n prop_cycler = rcParams['axes.prop_cycle']\n colors = prop_cycler.by_key().get('color', ['k'])\n c = colors[int(c[1:]) % len(colors)]\n try:\n rgba = _colors_full_map.cache[c, alpha]\n except (KeyError, TypeError): # Not in cache, or unhashable.\n rgba = None\n if rgba is None: # Suppress exception chaining of cache lookup failure.\n rgba = _to_rgba_no_colorcycle(c, alpha)\n try:\n _colors_full_map.cache[c, alpha] = rgba\n except TypeError:\n pass\n return rgba\n\n\ndef _to_rgba_no_colorcycle(c, alpha=None):\n \"\"\"\n Convert *c* to an RGBA color, with no support for color-cycle syntax.\n\n If *alpha* is given, force the alpha value of the returned RGBA tuple\n to *alpha*. Otherwise, the alpha value from *c* is used, if it has alpha\n information, or defaults to 1.\n\n *alpha* is ignored for the color value ``\"none\"`` (case-insensitive),\n which always maps to ``(0, 0, 0, 0)``.\n \"\"\"\n orig_c = c\n if c is np.ma.masked:\n return (0., 0., 0., 0.)\n if isinstance(c, str):\n if c.lower() == \"none\":\n return (0., 0., 0., 0.)\n # Named color.\n try:\n # This may turn c into a non-string, so we check again below.\n c = _colors_full_map[c]\n except KeyError:\n if len(orig_c) != 1:\n try:\n c = _colors_full_map[c.lower()]\n except KeyError:\n pass\n if isinstance(c, str):\n # hex color in #rrggbb format.\n match = re.match(r\"\\A#[a-fA-F0-9]{6}\\Z\", c)\n if match:\n return (tuple(int(n, 16) / 255\n for n in [c[1:3], c[3:5], c[5:7]])\n + (alpha if alpha is not None else 1.,))\n # hex color in #rgb format, shorthand for #rrggbb.\n match = re.match(r\"\\A#[a-fA-F0-9]{3}\\Z\", c)\n if match:\n return (tuple(int(n, 16) / 255\n for n in [c[1]*2, c[2]*2, c[3]*2])\n + (alpha if alpha is not None else 1.,))\n # hex color with alpha in #rrggbbaa format.\n match = re.match(r\"\\A#[a-fA-F0-9]{8}\\Z\", c)\n if match:\n color = [int(n, 16) / 255\n for n in [c[1:3], c[3:5], c[5:7], c[7:9]]]\n if alpha is not None:\n color[-1] = alpha\n return tuple(color)\n # hex color with alpha in #rgba format, shorthand for #rrggbbaa.\n match = re.match(r\"\\A#[a-fA-F0-9]{4}\\Z\", c)\n if match:\n color = [int(n, 16) / 255\n for n in [c[1]*2, c[2]*2, c[3]*2, c[4]*2]]\n if alpha is not None:\n color[-1] = alpha\n return tuple(color)\n # string gray.\n try:\n c = float(c)\n except ValueError:\n pass\n else:\n if not (0 <= c <= 1):\n raise ValueError(\n f\"Invalid string grayscale value {orig_c!r}. 
\"\n f\"Value must be within 0-1 range\")\n return c, c, c, alpha if alpha is not None else 1.\n raise ValueError(f\"Invalid RGBA argument: {orig_c!r}\")\n # turn 2-D array into 1-D array\n if isinstance(c, np.ndarray):\n if c.ndim == 2 and c.shape[0] == 1:\n c = c.reshape(-1)\n # tuple color.\n if not np.iterable(c):\n raise ValueError(f\"Invalid RGBA argument: {orig_c!r}\")\n if len(c) not in [3, 4]:\n raise ValueError(\"RGBA sequence should have length 3 or 4\")\n if not all(isinstance(x, Number) for x in c):\n # Checks that don't work: `map(float, ...)`, `np.array(..., float)` and\n # `np.array(...).astype(float)` would all convert \"0.5\" to 0.5.\n raise ValueError(f\"Invalid RGBA argument: {orig_c!r}\")\n # Return a tuple to prevent the cached value from being modified.\n c = tuple(map(float, c))\n if len(c) == 3 and alpha is None:\n alpha = 1\n if alpha is not None:\n c = c[:3] + (alpha,)\n if any(elem < 0 or elem > 1 for elem in c):\n raise ValueError(\"RGBA values should be within 0-1 range\")\n return c\n\n\ndef to_rgba_array(c, alpha=None):\n \"\"\"\n Convert *c* to a (n, 4) array of RGBA colors.\n\n Parameters\n ----------\n c : Matplotlib color or array of colors\n If *c* is a masked array, an ndarray is returned with a (0, 0, 0, 0)\n row for each masked value or row in *c*.\n\n alpha : float or sequence of floats, optional\n If *alpha* is given, force the alpha value of the returned RGBA tuple\n to *alpha*.\n\n If None, the alpha value from *c* is used. If *c* does not have an\n alpha channel, then alpha defaults to 1.\n\n *alpha* is ignored for the color value ``\"none\"`` (case-insensitive),\n which always maps to ``(0, 0, 0, 0)``.\n\n If *alpha* is a sequence and *c* is a single color, *c* will be\n repeated to match the length of *alpha*.\n\n Returns\n -------\n array\n (n, 4) array of RGBA colors, where each channel (red, green, blue,\n alpha) can assume values between 0 and 1.\n \"\"\"\n # Special-case inputs that are already arrays, for performance. 
(If the\n # array has the wrong kind or shape, raise the error during one-at-a-time\n # conversion.)\n if np.iterable(alpha):\n alpha = np.asarray(alpha).ravel()\n if (isinstance(c, np.ndarray) and c.dtype.kind in \"if\"\n and c.ndim == 2 and c.shape[1] in [3, 4]):\n mask = c.mask.any(axis=1) if np.ma.is_masked(c) else None\n c = np.ma.getdata(c)\n if np.iterable(alpha):\n if c.shape[0] == 1 and alpha.shape[0] > 1:\n c = np.tile(c, (alpha.shape[0], 1))\n elif c.shape[0] != alpha.shape[0]:\n raise ValueError(\"The number of colors must match the number\"\n \" of alpha values if there are more than one\"\n \" of each.\")\n if c.shape[1] == 3:\n result = np.column_stack([c, np.zeros(len(c))])\n result[:, -1] = alpha if alpha is not None else 1.\n elif c.shape[1] == 4:\n result = c.copy()\n if alpha is not None:\n result[:, -1] = alpha\n if mask is not None:\n result[mask] = 0\n if np.any((result < 0) | (result > 1)):\n raise ValueError(\"RGBA values should be within 0-1 range\")\n return result\n # Handle single values.\n # Note that this occurs *after* handling inputs that are already arrays, as\n # `to_rgba(c, alpha)` (below) is expensive for such inputs, due to the need\n # to format the array in the ValueError message(!).\n if cbook._str_lower_equal(c, \"none\"):\n return np.zeros((0, 4), float)\n try:\n if np.iterable(alpha):\n return np.array([to_rgba(c, a) for a in alpha], float)\n else:\n return np.array([to_rgba(c, alpha)], float)\n except (ValueError, TypeError):\n pass\n\n if isinstance(c, str):\n raise ValueError(\"Using a string of single character colors as \"\n \"a color sequence is not supported. The colors can \"\n \"be passed as an explicit list instead.\")\n\n if len(c) == 0:\n return np.zeros((0, 4), float)\n\n # Quick path if the whole sequence can be directly converted to a numpy\n # array in one shot.\n if isinstance(c, Sequence):\n lens = {len(cc) if isinstance(cc, (list, tuple)) else -1 for cc in c}\n if lens == {3}:\n rgba = np.column_stack([c, np.ones(len(c))])\n elif lens == {4}:\n rgba = np.array(c)\n else:\n rgba = np.array([to_rgba(cc) for cc in c])\n else:\n rgba = np.array([to_rgba(cc) for cc in c])\n\n if alpha is not None:\n rgba[:, 3] = alpha\n return rgba\n\n\ndef to_rgb(c):\n \"\"\"Convert *c* to an RGB color, silently dropping the alpha channel.\"\"\"\n return to_rgba(c)[:3]\n\n\ndef to_hex(c, keep_alpha=False):\n \"\"\"\n Convert *c* to a hex color.\n\n Uses the ``#rrggbb`` format if *keep_alpha* is False (the default),\n ``#rrggbbaa`` otherwise.\n \"\"\"\n c = to_rgba(c)\n if not keep_alpha:\n c = c[:3]\n return \"#\" + \"\".join(format(int(round(val * 255)), \"02x\") for val in c)\n\n\n### Backwards-compatible color-conversion API\n\n\ncnames = CSS4_COLORS\nhexColorPattern = re.compile(r\"\\A#[a-fA-F0-9]{6}\\Z\")\nrgb2hex = to_hex\nhex2color = to_rgb\n\n\nclass ColorConverter:\n \"\"\"\n A class only kept for backwards compatibility.\n\n Its functionality is entirely provided by module-level functions.\n \"\"\"\n colors = _colors_full_map\n cache = _colors_full_map.cache\n to_rgb = staticmethod(to_rgb)\n to_rgba = staticmethod(to_rgba)\n to_rgba_array = staticmethod(to_rgba_array)\n\n\ncolorConverter = ColorConverter()\n\n\n### End of backwards-compatible color-conversion API\n\n\ndef _create_lookup_table(N, data, gamma=1.0):\n r\"\"\"\n Create an *N* -element 1D lookup table.\n\n This assumes a mapping :math:`f : [0, 1] \\rightarrow [0, 1]`. 
The returned\n data is an array of N values :math:`y = f(x)` where x is sampled from\n [0, 1].\n\n By default (*gamma* = 1) x is equidistantly sampled from [0, 1]. The\n *gamma* correction factor :math:`\\gamma` distorts this equidistant\n sampling by :math:`x \\rightarrow x^\\gamma`.\n\n Parameters\n ----------\n N : int\n The number of elements of the created lookup table; at least 1.\n\n data : (M, 3) array-like or callable\n Defines the mapping :math:`f`.\n\n If a (M, 3) array-like, the rows define values (x, y0, y1). The x\n values must start with x=0, end with x=1, and all x values be in\n increasing order.\n\n A value between :math:`x_i` and :math:`x_{i+1}` is mapped to the range\n :math:`y^1_{i-1} \\ldots y^0_i` by linear interpolation.\n\n For the simple case of a y-continuous mapping, y0 and y1 are identical.\n\n The two values of y are to allow for discontinuous mapping functions.\n E.g. a sawtooth with a period of 0.2 and an amplitude of 1 would be::\n\n [(0, 1, 0), (0.2, 1, 0), (0.4, 1, 0), ..., [(1, 1, 0)]\n\n In the special case of ``N == 1``, by convention the returned value\n is y0 for x == 1.\n\n If *data* is a callable, it must accept and return numpy arrays::\n\n data(x : ndarray) -> ndarray\n\n and map values between 0 - 1 to 0 - 1.\n\n gamma : float\n Gamma correction factor for input distribution x of the mapping.\n\n See also https://en.wikipedia.org/wiki/Gamma_correction.\n\n Returns\n -------\n array\n The lookup table where ``lut[x * (N-1)]`` gives the closest value\n for values of x between 0 and 1.\n\n Notes\n -----\n This function is internally used for `.LinearSegmentedColormap`.\n \"\"\"\n\n if callable(data):\n xind = np.linspace(0, 1, N) ** gamma\n lut = np.clip(np.array(data(xind), dtype=float), 0, 1)\n return lut\n\n try:\n adata = np.array(data)\n except Exception as err:\n raise TypeError(\"data must be convertible to an array\") from err\n shape = adata.shape\n if len(shape) != 2 or shape[1] != 3:\n raise ValueError(\"data must be nx3 format\")\n\n x = adata[:, 0]\n y0 = adata[:, 1]\n y1 = adata[:, 2]\n\n if x[0] != 0. or x[-1] != 1.0:\n raise ValueError(\n \"data mapping points must start with x=0 and end with x=1\")\n if (np.diff(x) < 0).any():\n raise ValueError(\"data mapping points must have x in increasing order\")\n # begin generation of lookup table\n if N == 1:\n # convention: use the y = f(x=1) value for a 1-element lookup table\n lut = np.array(y0[-1])\n else:\n x = x * (N - 1)\n xind = (N - 1) * np.linspace(0, 1, N) ** gamma\n ind = np.searchsorted(x, xind)[1:-1]\n\n distance = (xind[1:-1] - x[ind - 1]) / (x[ind] - x[ind - 1])\n lut = np.concatenate([\n [y1[0]],\n distance * (y0[ind] - y1[ind - 1]) + y1[ind - 1],\n [y0[-1]],\n ])\n # ensure that the lut is confined to values between 0 and 1 by clipping it\n return np.clip(lut, 0.0, 1.0)\n\n\ndef _warn_if_global_cmap_modified(cmap):\n if getattr(cmap, '_global', False):\n _api.warn_deprecated(\n \"3.3\",\n message=\"You are modifying the state of a globally registered \"\n \"colormap. In future versions, you will not be able to \"\n \"modify a registered colormap in-place. To remove this \"\n \"warning, you can make a copy of the colormap first. \"\n f'cmap = copy.copy(mpl.cm.get_cmap(\"{cmap.name}\"))'\n )\n\n\nclass Colormap:\n \"\"\"\n Baseclass for all scalar to RGBA mappings.\n\n Typically, Colormap instances are used to convert data values (floats)\n from the interval ``[0, 1]`` to the RGBA color that the respective\n Colormap represents. 
For scaling of data into the ``[0, 1]`` interval see\n `matplotlib.colors.Normalize`. Subclasses of `matplotlib.cm.ScalarMappable`\n make heavy use of this ``data -> normalize -> map-to-color`` processing\n chain.\n \"\"\"\n\n def __init__(self, name, N=256):\n \"\"\"\n Parameters\n ----------\n name : str\n The name of the colormap.\n N : int\n The number of rgb quantization levels.\n \"\"\"\n self.name = name\n self.N = int(N) # ensure that N is always int\n self._rgba_bad = (0.0, 0.0, 0.0, 0.0) # If bad, don't paint anything.\n self._rgba_under = None\n self._rgba_over = None\n self._i_under = self.N\n self._i_over = self.N + 1\n self._i_bad = self.N + 2\n self._isinit = False\n #: When this colormap exists on a scalar mappable and colorbar_extend\n #: is not False, colorbar creation will pick up ``colorbar_extend`` as\n #: the default value for the ``extend`` keyword in the\n #: `matplotlib.colorbar.Colorbar` constructor.\n self.colorbar_extend = False\n\n def __call__(self, X, alpha=None, bytes=False):\n \"\"\"\n Parameters\n ----------\n X : float or int, ndarray or scalar\n The data value(s) to convert to RGBA.\n For floats, X should be in the interval ``[0.0, 1.0]`` to\n return the RGBA values ``X*100`` percent along the Colormap line.\n For integers, X should be in the interval ``[0, Colormap.N)`` to\n return RGBA values *indexed* from the Colormap with index ``X``.\n alpha : float or array-like or None\n Alpha must be a scalar between 0 and 1, a sequence of such\n floats with shape matching X, or None.\n bytes : bool\n If False (default), the returned RGBA values will be floats in the\n interval ``[0, 1]`` otherwise they will be uint8s in the interval\n ``[0, 255]``.\n\n Returns\n -------\n Tuple of RGBA values if X is scalar, otherwise an array of\n RGBA values with a shape of ``X.shape + (4, )``.\n \"\"\"\n if not self._isinit:\n self._init()\n\n mask_bad = X.mask if np.ma.is_masked(X) else np.isnan(X) # Mask nan's.\n xa = np.array(X, copy=True)\n if not xa.dtype.isnative:\n xa = xa.byteswap().newbyteorder() # Native byteorder is faster.\n if xa.dtype.kind == \"f\":\n with np.errstate(invalid=\"ignore\"):\n xa *= self.N\n # Negative values are out of range, but astype(int) would\n # truncate them towards zero.\n xa[xa < 0] = -1\n # xa == 1 (== N after multiplication) is not out of range.\n xa[xa == self.N] = self.N - 1\n # Avoid converting large positive values to negative integers.\n np.clip(xa, -1, self.N, out=xa)\n xa = xa.astype(int)\n # Set the over-range indices before the under-range;\n # otherwise the under-range values get converted to over-range.\n xa[xa > self.N - 1] = self._i_over\n xa[xa < 0] = self._i_under\n xa[mask_bad] = self._i_bad\n\n if bytes:\n lut = (self._lut * 255).astype(np.uint8)\n else:\n lut = self._lut.copy() # Don't let alpha modify original _lut.\n\n rgba = np.empty(shape=xa.shape + (4,), dtype=lut.dtype)\n lut.take(xa, axis=0, mode='clip', out=rgba)\n\n if alpha is not None:\n if np.iterable(alpha):\n alpha = np.asarray(alpha)\n if alpha.shape != xa.shape:\n raise ValueError(\"alpha is array-like but its shape\"\n \" %s doesn't match that of X %s\" %\n (alpha.shape, xa.shape))\n alpha = np.clip(alpha, 0, 1)\n if bytes:\n alpha = (alpha * 255).astype(np.uint8)\n rgba[..., -1] = alpha\n\n # If the \"bad\" color is all zeros, then ignore alpha input.\n if (lut[-1] == 0).all() and np.any(mask_bad):\n if np.iterable(mask_bad) and mask_bad.shape == xa.shape:\n rgba[mask_bad] = (0, 0, 0, 0)\n else:\n rgba[..., :] = (0, 0, 0, 0)\n\n if not 
np.iterable(X):\n rgba = tuple(rgba)\n return rgba\n\n def __copy__(self):\n cls = self.__class__\n cmapobject = cls.__new__(cls)\n cmapobject.__dict__.update(self.__dict__)\n if self._isinit:\n cmapobject._lut = np.copy(self._lut)\n cmapobject._global = False\n return cmapobject\n\n def get_bad(self):\n \"\"\"Get the color for masked values.\"\"\"\n if not self._isinit:\n self._init()\n return np.array(self._lut[self._i_bad])\n\n def set_bad(self, color='k', alpha=None):\n \"\"\"Set the color for masked values.\"\"\"\n _warn_if_global_cmap_modified(self)\n self._rgba_bad = to_rgba(color, alpha)\n if self._isinit:\n self._set_extremes()\n\n def get_under(self):\n \"\"\"Get the color for low out-of-range values.\"\"\"\n if not self._isinit:\n self._init()\n return np.array(self._lut[self._i_under])\n\n def set_under(self, color='k', alpha=None):\n \"\"\"Set the color for low out-of-range values.\"\"\"\n _warn_if_global_cmap_modified(self)\n self._rgba_under = to_rgba(color, alpha)\n if self._isinit:\n self._set_extremes()\n\n def get_over(self):\n \"\"\"Get the color for high out-of-range values.\"\"\"\n if not self._isinit:\n self._init()\n return np.array(self._lut[self._i_over])\n\n def set_over(self, color='k', alpha=None):\n \"\"\"Set the color for high out-of-range values.\"\"\"\n _warn_if_global_cmap_modified(self)\n self._rgba_over = to_rgba(color, alpha)\n if self._isinit:\n self._set_extremes()\n\n def set_extremes(self, *, bad=None, under=None, over=None):\n \"\"\"\n Set the colors for masked (*bad*) values and, when ``norm.clip =\n False``, low (*under*) and high (*over*) out-of-range values.\n \"\"\"\n if bad is not None:\n self.set_bad(bad)\n if under is not None:\n self.set_under(under)\n if over is not None:\n self.set_over(over)\n\n def with_extremes(self, *, bad=None, under=None, over=None):\n \"\"\"\n Return a copy of the colormap, for which the colors for masked (*bad*)\n values and, when ``norm.clip = False``, low (*under*) and high (*over*)\n out-of-range values, have been set accordingly.\n \"\"\"\n new_cm = copy.copy(self)\n new_cm.set_extremes(bad=bad, under=under, over=over)\n return new_cm\n\n def _set_extremes(self):\n if self._rgba_under:\n self._lut[self._i_under] = self._rgba_under\n else:\n self._lut[self._i_under] = self._lut[0]\n if self._rgba_over:\n self._lut[self._i_over] = self._rgba_over\n else:\n self._lut[self._i_over] = self._lut[self.N - 1]\n self._lut[self._i_bad] = self._rgba_bad\n\n def _init(self):\n \"\"\"Generate the lookup table, ``self._lut``.\"\"\"\n raise NotImplementedError(\"Abstract class only\")\n\n def is_gray(self):\n \"\"\"Return whether the colormap is grayscale.\"\"\"\n if not self._isinit:\n self._init()\n return (np.all(self._lut[:, 0] == self._lut[:, 1]) and\n np.all(self._lut[:, 0] == self._lut[:, 2]))\n\n def _resample(self, lutsize):\n \"\"\"Return a new colormap with *lutsize* entries.\"\"\"\n raise NotImplementedError()\n\n def reversed(self, name=None):\n \"\"\"\n Return a reversed instance of the Colormap.\n\n .. note:: This function is not implemented for base class.\n\n Parameters\n ----------\n name : str, optional\n The name for the reversed colormap. 
If it's None the\n name will be the name of the parent colormap + \"_r\".\n\n See Also\n --------\n LinearSegmentedColormap.reversed\n ListedColormap.reversed\n \"\"\"\n raise NotImplementedError()\n\n def _repr_png_(self):\n \"\"\"Generate a PNG representation of the Colormap.\"\"\"\n X = np.tile(np.linspace(0, 1, _REPR_PNG_SIZE[0]),\n (_REPR_PNG_SIZE[1], 1))\n pixels = self(X, bytes=True)\n png_bytes = io.BytesIO()\n title = self.name + ' colormap'\n author = f'Matplotlib v{mpl.__version__}, https://matplotlib.org'\n pnginfo = PngInfo()\n pnginfo.add_text('Title', title)\n pnginfo.add_text('Description', title)\n pnginfo.add_text('Author', author)\n pnginfo.add_text('Software', author)\n Image.fromarray(pixels).save(png_bytes, format='png', pnginfo=pnginfo)\n return png_bytes.getvalue()\n\n def _repr_html_(self):\n \"\"\"Generate an HTML representation of the Colormap.\"\"\"\n png_bytes = self._repr_png_()\n png_base64 = base64.b64encode(png_bytes).decode('ascii')\n def color_block(color):\n hex_color = to_hex(color, keep_alpha=True)\n return (f'<div title=\"{hex_color}\" '\n 'style=\"display: inline-block; '\n 'width: 1em; height: 1em; '\n 'margin: 0; '\n 'vertical-align: middle; '\n 'border: 1px solid #555; '\n f'background-color: {hex_color};\"></div>')\n\n return ('<div style=\"vertical-align: middle;\">'\n f'<strong>{self.name}</strong> '\n '</div>'\n '<div class=\"cmap\"><img '\n f'alt=\"{self.name} colormap\" '\n f'title=\"{self.name}\" '\n 'style=\"border: 1px solid #555;\" '\n f'src=\"data:image/png;base64,{png_base64}\"></div>'\n '<div style=\"vertical-align: middle; '\n f'max-width: {_REPR_PNG_SIZE[0]+2}px; '\n 'display: flex; justify-content: space-between;\">'\n '<div style=\"float: left;\">'\n f'{color_block(self.get_under())} under'\n '</div>'\n '<div style=\"margin: 0 auto; display: inline-block;\">'\n f'bad {color_block(self.get_bad())}'\n '</div>'\n '<div style=\"float: right;\">'\n f'over {color_block(self.get_over())}'\n '</div>')\n\n\nclass LinearSegmentedColormap(Colormap):\n \"\"\"\n Colormap objects based on lookup tables using linear segments.\n\n The lookup table is generated using linear interpolation for each\n primary color, with the 0-1 domain divided into any number of\n segments.\n \"\"\"\n\n def __init__(self, name, segmentdata, N=256, gamma=1.0):\n \"\"\"\n Create colormap from linear mapping segments\n\n segmentdata argument is a dictionary with a red, green and blue\n entries. Each entry should be a list of *x*, *y0*, *y1* tuples,\n forming rows in a table. Entries for alpha are optional.\n\n Example: suppose you want red to increase from 0 to 1 over\n the bottom half, green to do the same over the middle half,\n and blue over the top half. Then you would use::\n\n cdict = {'red': [(0.0, 0.0, 0.0),\n (0.5, 1.0, 1.0),\n (1.0, 1.0, 1.0)],\n\n 'green': [(0.0, 0.0, 0.0),\n (0.25, 0.0, 0.0),\n (0.75, 1.0, 1.0),\n (1.0, 1.0, 1.0)],\n\n 'blue': [(0.0, 0.0, 0.0),\n (0.5, 0.0, 0.0),\n (1.0, 1.0, 1.0)]}\n\n Each row in the table for a given color is a sequence of\n *x*, *y0*, *y1* tuples. In each sequence, *x* must increase\n monotonically from 0 to 1. 
For any input value *z* falling\n between *x[i]* and *x[i+1]*, the output value of a given color\n will be linearly interpolated between *y1[i]* and *y0[i+1]*::\n\n row i: x y0 y1\n /\n /\n row i+1: x y0 y1\n\n Hence y0 in the first row and y1 in the last row are never used.\n\n See Also\n --------\n LinearSegmentedColormap.from_list\n Static method; factory function for generating a smoothly-varying\n LinearSegmentedColormap.\n \"\"\"\n # True only if all colors in map are identical; needed for contouring.\n self.monochrome = False\n super().__init__(name, N)\n self._segmentdata = segmentdata\n self._gamma = gamma\n\n def _init(self):\n self._lut = np.ones((self.N + 3, 4), float)\n self._lut[:-3, 0] = _create_lookup_table(\n self.N, self._segmentdata['red'], self._gamma)\n self._lut[:-3, 1] = _create_lookup_table(\n self.N, self._segmentdata['green'], self._gamma)\n self._lut[:-3, 2] = _create_lookup_table(\n self.N, self._segmentdata['blue'], self._gamma)\n if 'alpha' in self._segmentdata:\n self._lut[:-3, 3] = _create_lookup_table(\n self.N, self._segmentdata['alpha'], 1)\n self._isinit = True\n self._set_extremes()\n\n def set_gamma(self, gamma):\n \"\"\"Set a new gamma value and regenerate colormap.\"\"\"\n self._gamma = gamma\n self._init()\n\n @staticmethod\n def from_list(name, colors, N=256, gamma=1.0):\n \"\"\"\n Create a `LinearSegmentedColormap` from a list of colors.\n\n Parameters\n ----------\n name : str\n The name of the colormap.\n colors : array-like of colors or array-like of (value, color)\n If only colors are given, they are equidistantly mapped from the\n range :math:`[0, 1]`; i.e. 0 maps to ``colors[0]`` and 1 maps to\n ``colors[-1]``.\n If (value, color) pairs are given, the mapping is from *value*\n to *color*. This can be used to divide the range unevenly.\n N : int\n The number of rgb quantization levels.\n gamma : float\n \"\"\"\n if not np.iterable(colors):\n raise ValueError('colors must be iterable')\n\n if (isinstance(colors[0], Sized) and len(colors[0]) == 2\n and not isinstance(colors[0], str)):\n # List of value, color pairs\n vals, colors = zip(*colors)\n else:\n vals = np.linspace(0, 1, len(colors))\n\n r, g, b, a = to_rgba_array(colors).T\n cdict = {\n \"red\": np.column_stack([vals, r, r]),\n \"green\": np.column_stack([vals, g, g]),\n \"blue\": np.column_stack([vals, b, b]),\n \"alpha\": np.column_stack([vals, a, a]),\n }\n\n return LinearSegmentedColormap(name, cdict, N, gamma)\n\n def _resample(self, lutsize):\n \"\"\"Return a new colormap with *lutsize* entries.\"\"\"\n new_cmap = LinearSegmentedColormap(self.name, self._segmentdata,\n lutsize)\n new_cmap._rgba_over = self._rgba_over\n new_cmap._rgba_under = self._rgba_under\n new_cmap._rgba_bad = self._rgba_bad\n return new_cmap\n\n # Helper ensuring picklability of the reversed cmap.\n @staticmethod\n def _reverser(func, x):\n return func(1 - x)\n\n def reversed(self, name=None):\n \"\"\"\n Return a reversed instance of the Colormap.\n\n Parameters\n ----------\n name : str, optional\n The name for the reversed colormap. 
If it's None the\n name will be the name of the parent colormap + \"_r\".\n\n Returns\n -------\n LinearSegmentedColormap\n The reversed colormap.\n \"\"\"\n if name is None:\n name = self.name + \"_r\"\n\n # Using a partial object keeps the cmap picklable.\n data_r = {key: (functools.partial(self._reverser, data)\n if callable(data) else\n [(1.0 - x, y1, y0) for x, y0, y1 in reversed(data)])\n for key, data in self._segmentdata.items()}\n\n new_cmap = LinearSegmentedColormap(name, data_r, self.N, self._gamma)\n # Reverse the over/under values too\n new_cmap._rgba_over = self._rgba_under\n new_cmap._rgba_under = self._rgba_over\n new_cmap._rgba_bad = self._rgba_bad\n return new_cmap\n\n\nclass ListedColormap(Colormap):\n \"\"\"\n Colormap object generated from a list of colors.\n\n This may be most useful when indexing directly into a colormap,\n but it can also be used to generate special colormaps for ordinary\n mapping.\n\n Parameters\n ----------\n colors : list, array\n List of Matplotlib color specifications, or an equivalent Nx3 or Nx4\n floating point array (*N* rgb or rgba values).\n name : str, optional\n String to identify the colormap.\n N : int, optional\n Number of entries in the map. The default is *None*, in which case\n there is one colormap entry for each element in the list of colors.\n If ::\n\n N < len(colors)\n\n the list will be truncated at *N*. If ::\n\n N > len(colors)\n\n the list will be extended by repetition.\n \"\"\"\n def __init__(self, colors, name='from_list', N=None):\n self.monochrome = False # Are all colors identical? (for contour.py)\n if N is None:\n self.colors = colors\n N = len(colors)\n else:\n if isinstance(colors, str):\n self.colors = [colors] * N\n self.monochrome = True\n elif np.iterable(colors):\n if len(colors) == 1:\n self.monochrome = True\n self.colors = list(\n itertools.islice(itertools.cycle(colors), N))\n else:\n try:\n gray = float(colors)\n except TypeError:\n pass\n else:\n self.colors = [gray] * N\n self.monochrome = True\n super().__init__(name, N)\n\n def _init(self):\n self._lut = np.zeros((self.N + 3, 4), float)\n self._lut[:-3] = to_rgba_array(self.colors)\n self._isinit = True\n self._set_extremes()\n\n def _resample(self, lutsize):\n \"\"\"Return a new colormap with *lutsize* entries.\"\"\"\n colors = self(np.linspace(0, 1, lutsize))\n new_cmap = ListedColormap(colors, name=self.name)\n # Keep the over/under values too\n new_cmap._rgba_over = self._rgba_over\n new_cmap._rgba_under = self._rgba_under\n new_cmap._rgba_bad = self._rgba_bad\n return new_cmap\n\n def reversed(self, name=None):\n \"\"\"\n Return a reversed instance of the Colormap.\n\n Parameters\n ----------\n name : str, optional\n The name for the reversed colormap. 
If it's None the\n name will be the name of the parent colormap + \"_r\".\n\n Returns\n -------\n ListedColormap\n A reversed instance of the colormap.\n \"\"\"\n if name is None:\n name = self.name + \"_r\"\n\n colors_r = list(reversed(self.colors))\n new_cmap = ListedColormap(colors_r, name=name, N=self.N)\n # Reverse the over/under values too\n new_cmap._rgba_over = self._rgba_under\n new_cmap._rgba_under = self._rgba_over\n new_cmap._rgba_bad = self._rgba_bad\n return new_cmap\n\n\nclass Normalize:\n \"\"\"\n A class which, when called, linearly normalizes data into the\n ``[0.0, 1.0]`` interval.\n \"\"\"\n\n def __init__(self, vmin=None, vmax=None, clip=False):\n \"\"\"\n Parameters\n ----------\n vmin, vmax : float or None\n If *vmin* and/or *vmax* is not given, they are initialized from the\n minimum and maximum value, respectively, of the first input\n processed; i.e., ``__call__(A)`` calls ``autoscale_None(A)``.\n\n clip : bool, default: False\n If ``True`` values falling outside the range ``[vmin, vmax]``,\n are mapped to 0 or 1, whichever is closer, and masked values are\n set to 1. If ``False`` masked values remain masked.\n\n Clipping silently defeats the purpose of setting the over, under,\n and masked colors in a colormap, so it is likely to lead to\n surprises; therefore the default is ``clip=False``.\n\n Notes\n -----\n Returns 0 if ``vmin == vmax``.\n \"\"\"\n self.vmin = _sanitize_extrema(vmin)\n self.vmax = _sanitize_extrema(vmax)\n self.clip = clip\n self._scale = scale.LinearScale(axis=None)\n\n @staticmethod\n def process_value(value):\n \"\"\"\n Homogenize the input *value* for easy and efficient normalization.\n\n *value* can be a scalar or sequence.\n\n Returns\n -------\n result : masked array\n Masked array with the same shape as *value*.\n is_scalar : bool\n Whether *value* is a scalar.\n\n Notes\n -----\n Float dtypes are preserved; integer types with two bytes or smaller are\n converted to np.float32, and larger types are converted to np.float64.\n Preserving float32 when possible, and using in-place operations,\n greatly improves speed for large arrays.\n \"\"\"\n is_scalar = not np.iterable(value)\n if is_scalar:\n value = [value]\n dtype = np.min_scalar_type(value)\n if np.issubdtype(dtype, np.integer) or dtype.type is np.bool_:\n # bool_/int8/int16 -> float32; int32/int64 -> float64\n dtype = np.promote_types(dtype, np.float32)\n # ensure data passed in as an ndarray subclass are interpreted as\n # an ndarray. See issue #6622.\n mask = np.ma.getmask(value)\n data = np.asarray(value)\n result = np.ma.array(data, mask=mask, dtype=dtype, copy=True)\n return result, is_scalar\n\n def __call__(self, value, clip=None):\n \"\"\"\n Normalize *value* data in the ``[vmin, vmax]`` interval into the\n ``[0.0, 1.0]`` interval and return it.\n\n Parameters\n ----------\n value\n Data to normalize.\n clip : bool\n If ``None``, defaults to ``self.clip`` (which defaults to\n ``False``).\n\n Notes\n -----\n If not already initialized, ``self.vmin`` and ``self.vmax`` are\n initialized using ``self.autoscale_None(value)``.\n \"\"\"\n if clip is None:\n clip = self.clip\n\n result, is_scalar = self.process_value(value)\n\n self.autoscale_None(result)\n # Convert at least to float, without losing precision.\n (vmin,), _ = self.process_value(self.vmin)\n (vmax,), _ = self.process_value(self.vmax)\n if vmin == vmax:\n result.fill(0) # Or should it be all masked? 
Or 0.5?\n elif vmin > vmax:\n raise ValueError(\"minvalue must be less than or equal to maxvalue\")\n else:\n if clip:\n mask = np.ma.getmask(result)\n result = np.ma.array(np.clip(result.filled(vmax), vmin, vmax),\n mask=mask)\n # ma division is very slow; we can take a shortcut\n resdat = result.data\n resdat -= vmin\n resdat /= (vmax - vmin)\n result = np.ma.array(resdat, mask=result.mask, copy=False)\n if is_scalar:\n result = result[0]\n return result\n\n def inverse(self, value):\n if not self.scaled():\n raise ValueError(\"Not invertible until both vmin and vmax are set\")\n (vmin,), _ = self.process_value(self.vmin)\n (vmax,), _ = self.process_value(self.vmax)\n\n if np.iterable(value):\n val = np.ma.asarray(value)\n return vmin + val * (vmax - vmin)\n else:\n return vmin + value * (vmax - vmin)\n\n def autoscale(self, A):\n \"\"\"Set *vmin*, *vmax* to min, max of *A*.\"\"\"\n A = np.asanyarray(A)\n self.vmin = A.min()\n self.vmax = A.max()\n\n def autoscale_None(self, A):\n \"\"\"If vmin or vmax are not set, use the min/max of *A* to set them.\"\"\"\n A = np.asanyarray(A)\n if self.vmin is None and A.size:\n self.vmin = A.min()\n if self.vmax is None and A.size:\n self.vmax = A.max()\n\n def scaled(self):\n \"\"\"Return whether vmin and vmax are set.\"\"\"\n return self.vmin is not None and self.vmax is not None\n\n\nclass TwoSlopeNorm(Normalize):\n def __init__(self, vcenter, vmin=None, vmax=None):\n \"\"\"\n Normalize data with a set center.\n\n Useful when mapping data with an unequal rates of change around a\n conceptual center, e.g., data that range from -2 to 4, with 0 as\n the midpoint.\n\n Parameters\n ----------\n vcenter : float\n The data value that defines ``0.5`` in the normalization.\n vmin : float, optional\n The data value that defines ``0.0`` in the normalization.\n Defaults to the min value of the dataset.\n vmax : float, optional\n The data value that defines ``1.0`` in the normalization.\n Defaults to the the max value of the dataset.\n\n Examples\n --------\n This maps data value -4000 to 0., 0 to 0.5, and +10000 to 1.0; data\n between is linearly interpolated::\n\n >>> import matplotlib.colors as mcolors\n >>> offset = mcolors.TwoSlopeNorm(vmin=-4000.,\n vcenter=0., vmax=10000)\n >>> data = [-4000., -2000., 0., 2500., 5000., 7500., 10000.]\n >>> offset(data)\n array([0., 0.25, 0.5, 0.625, 0.75, 0.875, 1.0])\n \"\"\"\n\n self.vcenter = vcenter\n self.vmin = vmin\n self.vmax = vmax\n if vcenter is not None and vmax is not None and vcenter >= vmax:\n raise ValueError('vmin, vcenter, and vmax must be in '\n 'ascending order')\n if vcenter is not None and vmin is not None and vcenter <= vmin:\n raise ValueError('vmin, vcenter, and vmax must be in '\n 'ascending order')\n\n def autoscale_None(self, A):\n \"\"\"\n Get vmin and vmax, and then clip at vcenter\n \"\"\"\n super().autoscale_None(A)\n if self.vmin > self.vcenter:\n self.vmin = self.vcenter\n if self.vmax < self.vcenter:\n self.vmax = self.vcenter\n\n def __call__(self, value, clip=None):\n \"\"\"\n Map value to the interval [0, 1]. 
The clip argument is unused.\n \"\"\"\n result, is_scalar = self.process_value(value)\n self.autoscale_None(result) # sets self.vmin, self.vmax if None\n\n if not self.vmin <= self.vcenter <= self.vmax:\n raise ValueError(\"vmin, vcenter, vmax must increase monotonically\")\n result = np.ma.masked_array(\n np.interp(result, [self.vmin, self.vcenter, self.vmax],\n [0, 0.5, 1.]), mask=np.ma.getmask(result))\n if is_scalar:\n result = np.atleast_1d(result)[0]\n return result\n\n\nclass CenteredNorm(Normalize):\n def __init__(self, vcenter=0, halfrange=None, clip=False):\n \"\"\"\n Normalize symmetrical data around a center (0 by default).\n\n Unlike `TwoSlopeNorm`, `CenteredNorm` applies an equal rate of change\n around the center.\n\n Useful when mapping symmetrical data around a conceptual center\n e.g., data that range from -2 to 4, with 0 as the midpoint, and\n with equal rates of change around that midpoint.\n\n Parameters\n ----------\n vcenter : float, default: 0\n The data value that defines ``0.5`` in the normalization.\n halfrange : float, optional\n The range of data values that defines a range of ``0.5`` in the\n normalization, so that *vcenter* - *halfrange* is ``0.0`` and\n *vcenter* + *halfrange* is ``1.0`` in the normalization.\n Defaults to the largest absolute difference to *vcenter* for\n the values in the dataset.\n\n Examples\n --------\n This maps data values -2 to 0.25, 0 to 0.5, and 4 to 1.0\n (assuming equal rates of change above and below 0.0):\n\n >>> import matplotlib.colors as mcolors\n >>> norm = mcolors.CenteredNorm(halfrange=4.0)\n >>> data = [-2., 0., 4.]\n >>> norm(data)\n array([0.25, 0.5 , 1. ])\n \"\"\"\n self._vcenter = vcenter\n # calling the halfrange setter to set vmin and vmax\n self.halfrange = halfrange\n self.clip = clip\n\n def _set_vmin_vmax(self):\n \"\"\"\n Set *vmin* and *vmax* based on *vcenter* and *halfrange*.\n \"\"\"\n self.vmax = self._vcenter + self._halfrange\n self.vmin = self._vcenter - self._halfrange\n\n def autoscale(self, A):\n \"\"\"\n Set *halfrange* to ``max(abs(A-vcenter))``, then set *vmin* and *vmax*.\n \"\"\"\n A = np.asanyarray(A)\n self._halfrange = max(self._vcenter-A.min(),\n A.max()-self._vcenter)\n self._set_vmin_vmax()\n\n def autoscale_None(self, A):\n \"\"\"Set *vmin* and *vmax*.\"\"\"\n A = np.asanyarray(A)\n if self.vmax is None and A.size:\n self.autoscale(A)\n\n @property\n def vcenter(self):\n return self._vcenter\n\n @vcenter.setter\n def vcenter(self, vcenter):\n self._vcenter = vcenter\n if self.vmax is not None:\n # recompute halfrange assuming vmin and vmax represent\n # min and max of data\n self._halfrange = max(self._vcenter-self.vmin,\n self.vmax-self._vcenter)\n self._set_vmin_vmax()\n\n @property\n def halfrange(self):\n return self._halfrange\n\n @halfrange.setter\n def halfrange(self, halfrange):\n if halfrange is None:\n self._halfrange = None\n self.vmin = None\n self.vmax = None\n else:\n self._halfrange = abs(halfrange)\n\n def __call__(self, value, clip=None):\n if self._halfrange is not None:\n # enforce symmetry, reset vmin and vmax\n self._set_vmin_vmax()\n return super().__call__(value, clip=clip)\n\n\ndef _make_norm_from_scale(scale_cls, base_norm_cls=None, *, init=None):\n \"\"\"\n Decorator for building a `.Normalize` subclass from a `.Scale` subclass.\n\n After ::\n\n @_make_norm_from_scale(scale_cls)\n class norm_cls(Normalize):\n ...\n\n *norm_cls* is filled with methods so that normalization computations are\n forwarded to *scale_cls* (i.e., *scale_cls* is the scale that would be 
used\n for the colorbar of a mappable normalized with *norm_cls*).\n\n If *init* is not passed, then the constructor signature of *norm_cls*\n will be ``norm_cls(vmin=None, vmax=None, clip=False)``; these three\n parameters will be forwarded to the base class (``Normalize.__init__``),\n and a *scale_cls* object will be initialized with no arguments (other than\n a dummy axis).\n\n If the *scale_cls* constructor takes additional parameters, then *init*\n should be passed to `_make_norm_from_scale`. It is a callable which is\n *only* used for its signature. First, this signature will become the\n signature of *norm_cls*. Second, the *norm_cls* constructor will bind the\n parameters passed to it using this signature, extract the bound *vmin*,\n *vmax*, and *clip* values, pass those to ``Normalize.__init__``, and\n forward the remaining bound values (including any defaults defined by the\n signature) to the *scale_cls* constructor.\n \"\"\"\n\n if base_norm_cls is None:\n return functools.partial(_make_norm_from_scale, scale_cls, init=init)\n\n if init is None:\n def init(vmin=None, vmax=None, clip=False): pass\n bound_init_signature = inspect.signature(init)\n\n class Norm(base_norm_cls):\n\n def __init__(self, *args, **kwargs):\n ba = bound_init_signature.bind(*args, **kwargs)\n ba.apply_defaults()\n super().__init__(\n **{k: ba.arguments.pop(k) for k in [\"vmin\", \"vmax\", \"clip\"]})\n self._scale = scale_cls(axis=None, **ba.arguments)\n self._trf = self._scale.get_transform()\n\n def __call__(self, value, clip=None):\n value, is_scalar = self.process_value(value)\n self.autoscale_None(value)\n if self.vmin > self.vmax:\n raise ValueError(\"vmin must be less or equal to vmax\")\n if self.vmin == self.vmax:\n return np.full_like(value, 0)\n if clip is None:\n clip = self.clip\n if clip:\n value = np.clip(value, self.vmin, self.vmax)\n t_value = self._trf.transform(value).reshape(np.shape(value))\n t_vmin, t_vmax = self._trf.transform([self.vmin, self.vmax])\n if not np.isfinite([t_vmin, t_vmax]).all():\n raise ValueError(\"Invalid vmin or vmax\")\n t_value -= t_vmin\n t_value /= (t_vmax - t_vmin)\n t_value = np.ma.masked_invalid(t_value, copy=False)\n return t_value[0] if is_scalar else t_value\n\n def inverse(self, value):\n if not self.scaled():\n raise ValueError(\"Not invertible until scaled\")\n if self.vmin > self.vmax:\n raise ValueError(\"vmin must be less or equal to vmax\")\n t_vmin, t_vmax = self._trf.transform([self.vmin, self.vmax])\n if not np.isfinite([t_vmin, t_vmax]).all():\n raise ValueError(\"Invalid vmin or vmax\")\n value, is_scalar = self.process_value(value)\n rescaled = value * (t_vmax - t_vmin)\n rescaled += t_vmin\n value = (self._trf\n .inverted()\n .transform(rescaled)\n .reshape(np.shape(value)))\n return value[0] if is_scalar else value\n\n Norm.__name__ = base_norm_cls.__name__\n Norm.__qualname__ = base_norm_cls.__qualname__\n Norm.__module__ = base_norm_cls.__module__\n Norm.__init__.__signature__ = bound_init_signature.replace(parameters=[\n inspect.Parameter(\"self\", inspect.Parameter.POSITIONAL_OR_KEYWORD),\n *bound_init_signature.parameters.values()])\n return Norm\n\n\n@_make_norm_from_scale(\n scale.FuncScale,\n init=lambda functions, vmin=None, vmax=None, clip=False: None)\nclass FuncNorm(Normalize):\n \"\"\"\n Arbitrary normalization using functions for the forward and inverse.\n\n Parameters\n ----------\n functions : (callable, callable)\n two-tuple of the forward and inverse functions for the normalization.\n The forward function must be 
monotonic.\n\n Both functions must have the signature ::\n\n def forward(values: array-like) -> array-like\n\n vmin, vmax : float or None\n If *vmin* and/or *vmax* is not given, they are initialized from the\n minimum and maximum value, respectively, of the first input\n processed; i.e., ``__call__(A)`` calls ``autoscale_None(A)``.\n\n clip : bool, default: False\n If ``True`` values falling outside the range ``[vmin, vmax]``,\n are mapped to 0 or 1, whichever is closer, and masked values are\n set to 1. If ``False`` masked values remain masked.\n\n Clipping silently defeats the purpose of setting the over, under,\n and masked colors in a colormap, so it is likely to lead to\n surprises; therefore the default is ``clip=False``.\n \"\"\"\n\n\n@_make_norm_from_scale(functools.partial(scale.LogScale, nonpositive=\"mask\"))\nclass LogNorm(Normalize):\n \"\"\"Normalize a given value to the 0-1 range on a log scale.\"\"\"\n\n def autoscale(self, A):\n # docstring inherited.\n super().autoscale(np.ma.masked_less_equal(A, 0, copy=False))\n\n def autoscale_None(self, A):\n # docstring inherited.\n super().autoscale_None(np.ma.masked_less_equal(A, 0, copy=False))\n\n\n@_make_norm_from_scale(\n scale.SymmetricalLogScale,\n init=lambda linthresh, linscale=1., vmin=None, vmax=None, clip=False, *,\n base=10: None)\nclass SymLogNorm(Normalize):\n \"\"\"\n The symmetrical logarithmic scale is logarithmic in both the\n positive and negative directions from the origin.\n\n Since the values close to zero tend toward infinity, there is a\n need to have a range around zero that is linear. The parameter\n *linthresh* allows the user to specify the size of this range\n (-*linthresh*, *linthresh*).\n\n Parameters\n ----------\n linthresh : float\n The range within which the plot is linear (to avoid having the plot\n go to infinity around zero).\n linscale : float, default: 1\n This allows the linear range (-*linthresh* to *linthresh*) to be\n stretched relative to the logarithmic range. Its value is the\n number of decades to use for each half of the linear range. 
For\n example, when *linscale* == 1.0 (the default), the space used for\n the positive and negative halves of the linear range will be equal\n to one decade in the logarithmic range.\n base : float, default: 10\n \"\"\"\n\n @property\n def linthresh(self):\n return self._scale.linthresh\n\n @linthresh.setter\n def linthresh(self, value):\n self._scale.linthresh = value\n\n\nclass PowerNorm(Normalize):\n \"\"\"\n Linearly map a given value to the 0-1 range and then apply\n a power-law normalization over that range.\n \"\"\"\n def __init__(self, gamma, vmin=None, vmax=None, clip=False):\n super().__init__(vmin, vmax, clip)\n self.gamma = gamma\n\n def __call__(self, value, clip=None):\n if clip is None:\n clip = self.clip\n\n result, is_scalar = self.process_value(value)\n\n self.autoscale_None(result)\n gamma = self.gamma\n vmin, vmax = self.vmin, self.vmax\n if vmin > vmax:\n raise ValueError(\"minvalue must be less than or equal to maxvalue\")\n elif vmin == vmax:\n result.fill(0)\n else:\n if clip:\n mask = np.ma.getmask(result)\n result = np.ma.array(np.clip(result.filled(vmax), vmin, vmax),\n mask=mask)\n resdat = result.data\n resdat -= vmin\n resdat[resdat < 0] = 0\n np.power(resdat, gamma, resdat)\n resdat /= (vmax - vmin) ** gamma\n\n result = np.ma.array(resdat, mask=result.mask, copy=False)\n if is_scalar:\n result = result[0]\n return result\n\n def inverse(self, value):\n if not self.scaled():\n raise ValueError(\"Not invertible until scaled\")\n gamma = self.gamma\n vmin, vmax = self.vmin, self.vmax\n\n if np.iterable(value):\n val = np.ma.asarray(value)\n return np.ma.power(val, 1. / gamma) * (vmax - vmin) + vmin\n else:\n return pow(value, 1. / gamma) * (vmax - vmin) + vmin\n\n\nclass BoundaryNorm(Normalize):\n \"\"\"\n Generate a colormap index based on discrete intervals.\n\n Unlike `Normalize` or `LogNorm`, `BoundaryNorm` maps values to integers\n instead of to the interval 0-1.\n\n Mapping to the 0-1 interval could have been done via piece-wise linear\n interpolation, but using integers seems simpler, and reduces the number of\n conversions back and forth between integer and floating point.\n \"\"\"\n def __init__(self, boundaries, ncolors, clip=False, *, extend='neither'):\n \"\"\"\n Parameters\n ----------\n boundaries : array-like\n Monotonically increasing sequence of at least 2 boundaries.\n ncolors : int\n Number of colors in the colormap to be used.\n clip : bool, optional\n If clip is ``True``, out of range values are mapped to 0 if they\n are below ``boundaries[0]`` or mapped to ``ncolors - 1`` if they\n are above ``boundaries[-1]``.\n\n If clip is ``False``, out of range values are mapped to -1 if\n they are below ``boundaries[0]`` or mapped to *ncolors* if they are\n above ``boundaries[-1]``. These are then converted to valid indices\n by `Colormap.__call__`.\n extend : {'neither', 'both', 'min', 'max'}, default: 'neither'\n Extend the number of bins to include one or both of the\n regions beyond the boundaries. 
For example, if ``extend``\n is 'min', then the color to which the region between the first\n pair of boundaries is mapped will be distinct from the first\n color in the colormap, and by default a\n `~matplotlib.colorbar.Colorbar` will be drawn with\n the triangle extension on the left or lower end.\n\n Returns\n -------\n int16 scalar or array\n\n Notes\n -----\n *boundaries* defines the edges of bins, and data falling within a bin\n is mapped to the color with the same index.\n\n If the number of bins, including any extensions, is less than\n *ncolors*, the color index is chosen by linear interpolation, mapping\n the ``[0, nbins - 1]`` range onto the ``[0, ncolors - 1]`` range.\n \"\"\"\n if clip and extend != 'neither':\n raise ValueError(\"'clip=True' is not compatible with 'extend'\")\n self.clip = clip\n self.vmin = boundaries[0]\n self.vmax = boundaries[-1]\n self.boundaries = np.asarray(boundaries)\n self.N = len(self.boundaries)\n if self.N < 2:\n raise ValueError(\"You must provide at least 2 boundaries \"\n f\"(1 region) but you passed in {boundaries!r}\")\n self.Ncmap = ncolors\n self.extend = extend\n\n self._scale = None # don't use the default scale.\n\n self._n_regions = self.N - 1 # number of colors needed\n self._offset = 0\n if extend in ('min', 'both'):\n self._n_regions += 1\n self._offset = 1\n if extend in ('max', 'both'):\n self._n_regions += 1\n if self._n_regions > self.Ncmap:\n raise ValueError(f\"There are {self._n_regions} color bins \"\n \"including extensions, but ncolors = \"\n f\"{ncolors}; ncolors must equal or exceed the \"\n \"number of bins\")\n\n def __call__(self, value, clip=None):\n if clip is None:\n clip = self.clip\n\n xx, is_scalar = self.process_value(value)\n mask = np.ma.getmaskarray(xx)\n # Fill masked values a value above the upper boundary\n xx = np.atleast_1d(xx.filled(self.vmax + 1))\n if clip:\n np.clip(xx, self.vmin, self.vmax, out=xx)\n max_col = self.Ncmap - 1\n else:\n max_col = self.Ncmap\n # this gives us the bins in the lookup table in the range\n # [0, _n_regions - 1] (the offset is baked in in the init)\n iret = np.digitize(xx, self.boundaries) - 1 + self._offset\n # if we have more colors than regions, stretch the region\n # index computed above to full range of the color bins. 
This\n # will make use of the full range (but skip some of the colors\n # in the middle) such that the first region is mapped to the\n # first color and the last region is mapped to the last color.\n if self.Ncmap > self._n_regions:\n if self._n_regions == 1:\n # special case the 1 region case, pick the middle color\n iret[iret == 0] = (self.Ncmap - 1) // 2\n else:\n # otherwise linearly remap the values from the region index\n # to the color index spaces\n iret = (self.Ncmap - 1) / (self._n_regions - 1) * iret\n # cast to 16bit integers in all cases\n iret = iret.astype(np.int16)\n iret[xx < self.vmin] = -1\n iret[xx >= self.vmax] = max_col\n ret = np.ma.array(iret, mask=mask)\n if is_scalar:\n ret = int(ret[0]) # assume python scalar\n return ret\n\n def inverse(self, value):\n \"\"\"\n Raises\n ------\n ValueError\n BoundaryNorm is not invertible, so calling this method will always\n raise an error\n \"\"\"\n raise ValueError(\"BoundaryNorm is not invertible\")\n\n\nclass NoNorm(Normalize):\n \"\"\"\n Dummy replacement for `Normalize`, for the case where we want to use\n indices directly in a `~matplotlib.cm.ScalarMappable`.\n \"\"\"\n def __call__(self, value, clip=None):\n return value\n\n def inverse(self, value):\n return value\n\n\ndef rgb_to_hsv(arr):\n \"\"\"\n Convert float rgb values (in the range [0, 1]), in a numpy array to hsv\n values.\n\n Parameters\n ----------\n arr : (..., 3) array-like\n All values must be in the range [0, 1]\n\n Returns\n -------\n (..., 3) ndarray\n Colors converted to hsv values in range [0, 1]\n \"\"\"\n arr = np.asarray(arr)\n\n # check length of the last dimension, should be _some_ sort of rgb\n if arr.shape[-1] != 3:\n raise ValueError(\"Last dimension of input array must be 3; \"\n \"shape {} was found.\".format(arr.shape))\n\n in_shape = arr.shape\n arr = np.array(\n arr, copy=False,\n dtype=np.promote_types(arr.dtype, np.float32), # Don't work on ints.\n ndmin=2, # In case input was 1D.\n )\n out = np.zeros_like(arr)\n arr_max = arr.max(-1)\n ipos = arr_max > 0\n delta = arr.ptp(-1)\n s = np.zeros_like(delta)\n s[ipos] = delta[ipos] / arr_max[ipos]\n ipos = delta > 0\n # red is max\n idx = (arr[..., 0] == arr_max) & ipos\n out[idx, 0] = (arr[idx, 1] - arr[idx, 2]) / delta[idx]\n # green is max\n idx = (arr[..., 1] == arr_max) & ipos\n out[idx, 0] = 2. + (arr[idx, 2] - arr[idx, 0]) / delta[idx]\n # blue is max\n idx = (arr[..., 2] == arr_max) & ipos\n out[idx, 0] = 4. 
+ (arr[idx, 0] - arr[idx, 1]) / delta[idx]\n\n out[..., 0] = (out[..., 0] / 6.0) % 1.0\n out[..., 1] = s\n out[..., 2] = arr_max\n\n return out.reshape(in_shape)\n\n\ndef hsv_to_rgb(hsv):\n \"\"\"\n Convert hsv values to rgb.\n\n Parameters\n ----------\n hsv : (..., 3) array-like\n All values assumed to be in range [0, 1]\n\n Returns\n -------\n (..., 3) ndarray\n Colors converted to RGB values in range [0, 1]\n \"\"\"\n hsv = np.asarray(hsv)\n\n # check length of the last dimension, should be _some_ sort of rgb\n if hsv.shape[-1] != 3:\n raise ValueError(\"Last dimension of input array must be 3; \"\n \"shape {shp} was found.\".format(shp=hsv.shape))\n\n in_shape = hsv.shape\n hsv = np.array(\n hsv, copy=False,\n dtype=np.promote_types(hsv.dtype, np.float32), # Don't work on ints.\n ndmin=2, # In case input was 1D.\n )\n\n h = hsv[..., 0]\n s = hsv[..., 1]\n v = hsv[..., 2]\n\n r = np.empty_like(h)\n g = np.empty_like(h)\n b = np.empty_like(h)\n\n i = (h * 6.0).astype(int)\n f = (h * 6.0) - i\n p = v * (1.0 - s)\n q = v * (1.0 - s * f)\n t = v * (1.0 - s * (1.0 - f))\n\n idx = i % 6 == 0\n r[idx] = v[idx]\n g[idx] = t[idx]\n b[idx] = p[idx]\n\n idx = i == 1\n r[idx] = q[idx]\n g[idx] = v[idx]\n b[idx] = p[idx]\n\n idx = i == 2\n r[idx] = p[idx]\n g[idx] = v[idx]\n b[idx] = t[idx]\n\n idx = i == 3\n r[idx] = p[idx]\n g[idx] = q[idx]\n b[idx] = v[idx]\n\n idx = i == 4\n r[idx] = t[idx]\n g[idx] = p[idx]\n b[idx] = v[idx]\n\n idx = i == 5\n r[idx] = v[idx]\n g[idx] = p[idx]\n b[idx] = q[idx]\n\n idx = s == 0\n r[idx] = v[idx]\n g[idx] = v[idx]\n b[idx] = v[idx]\n\n rgb = np.stack([r, g, b], axis=-1)\n\n return rgb.reshape(in_shape)\n\n\ndef _vector_magnitude(arr):\n # things that don't work here:\n # * np.linalg.norm: drops mask from ma.array\n # * np.sum: drops mask from ma.array unless entire vector is masked\n sum_sq = 0\n for i in range(arr.shape[-1]):\n sum_sq += arr[..., i, np.newaxis] ** 2\n return np.sqrt(sum_sq)\n\n\nclass LightSource:\n \"\"\"\n Create a light source coming from the specified azimuth and elevation.\n Angles are in degrees, with the azimuth measured\n clockwise from north and elevation up from the zero plane of the surface.\n\n `shade` is used to produce \"shaded\" rgb values for a data array.\n `shade_rgb` can be used to combine an rgb image with an elevation map.\n `hillshade` produces an illumination map of a surface.\n \"\"\"\n\n def __init__(self, azdeg=315, altdeg=45, hsv_min_val=0, hsv_max_val=1,\n hsv_min_sat=1, hsv_max_sat=0):\n \"\"\"\n Specify the azimuth (measured clockwise from south) and altitude\n (measured up from the plane of the surface) of the light source\n in degrees.\n\n Parameters\n ----------\n azdeg : float, default: 315 degrees (from the northwest)\n The azimuth (0-360, degrees clockwise from North) of the light\n source.\n altdeg : float, default: 45 degrees\n The altitude (0-90, degrees up from horizontal) of the light\n source.\n\n Notes\n -----\n For backwards compatibility, the parameters *hsv_min_val*,\n *hsv_max_val*, *hsv_min_sat*, and *hsv_max_sat* may be supplied at\n initialization as well. 
However, these parameters will only be used if\n \"blend_mode='hsv'\" is passed into `shade` or `shade_rgb`.\n See the documentation for `blend_hsv` for more details.\n \"\"\"\n self.azdeg = azdeg\n self.altdeg = altdeg\n self.hsv_min_val = hsv_min_val\n self.hsv_max_val = hsv_max_val\n self.hsv_min_sat = hsv_min_sat\n self.hsv_max_sat = hsv_max_sat\n\n @property\n def direction(self):\n \"\"\"The unit vector direction towards the light source.\"\"\"\n # Azimuth is in degrees clockwise from North. Convert to radians\n # counterclockwise from East (mathematical notation).\n az = np.radians(90 - self.azdeg)\n alt = np.radians(self.altdeg)\n return np.array([\n np.cos(az) * np.cos(alt),\n np.sin(az) * np.cos(alt),\n np.sin(alt)\n ])\n\n def hillshade(self, elevation, vert_exag=1, dx=1, dy=1, fraction=1.):\n \"\"\"\n Calculate the illumination intensity for a surface using the defined\n azimuth and elevation for the light source.\n\n This computes the normal vectors for the surface, and then passes them\n on to `shade_normals`\n\n Parameters\n ----------\n elevation : 2D array-like\n The height values used to generate an illumination map\n vert_exag : number, optional\n The amount to exaggerate the elevation values by when calculating\n illumination. This can be used either to correct for differences in\n units between the x-y coordinate system and the elevation\n coordinate system (e.g. decimal degrees vs. meters) or to\n exaggerate or de-emphasize topographic effects.\n dx : number, optional\n The x-spacing (columns) of the input *elevation* grid.\n dy : number, optional\n The y-spacing (rows) of the input *elevation* grid.\n fraction : number, optional\n Increases or decreases the contrast of the hillshade. Values\n greater than one will cause intermediate values to move closer to\n full illumination or shadow (and clipping any values that move\n beyond 0 or 1). Note that this is not visually or mathematically\n the same as vertical exaggeration.\n\n Returns\n -------\n ndarray\n A 2D array of illumination values between 0-1, where 0 is\n completely in shadow and 1 is completely illuminated.\n \"\"\"\n\n # Because most image and raster GIS data has the first row in the array\n # as the \"top\" of the image, dy is implicitly negative. This is\n # consistent to what `imshow` assumes, as well.\n dy = -dy\n\n # compute the normal vectors from the partial derivatives\n e_dy, e_dx = np.gradient(vert_exag * elevation, dy, dx)\n\n # .view is to keep subclasses\n normal = np.empty(elevation.shape + (3,)).view(type(elevation))\n normal[..., 0] = -e_dx\n normal[..., 1] = -e_dy\n normal[..., 2] = 1\n normal /= _vector_magnitude(normal)\n\n return self.shade_normals(normal, fraction)\n\n def shade_normals(self, normals, fraction=1.):\n \"\"\"\n Calculate the illumination intensity for the normal vectors of a\n surface using the defined azimuth and elevation for the light source.\n\n Imagine an artificial sun placed at infinity in some azimuth and\n elevation position illuminating our surface. The parts of the surface\n that slope toward the sun should brighten while those sides facing away\n should become darker.\n\n Parameters\n ----------\n fraction : number, optional\n Increases or decreases the contrast of the hillshade. Values\n greater than one will cause intermediate values to move closer to\n full illumination or shadow (and clipping any values that move\n beyond 0 or 1). 
Note that this is not visually or mathematically\n the same as vertical exaggeration.\n\n Returns\n -------\n ndarray\n A 2D array of illumination values between 0-1, where 0 is\n completely in shadow and 1 is completely illuminated.\n \"\"\"\n\n intensity = normals.dot(self.direction)\n\n # Apply contrast stretch\n imin, imax = intensity.min(), intensity.max()\n intensity *= fraction\n\n # Rescale to 0-1, keeping range before contrast stretch\n # If constant slope, keep relative scaling (i.e. flat should be 0.5,\n # fully occluded 0, etc.)\n if (imax - imin) > 1e-6:\n # Strictly speaking, this is incorrect. Negative values should be\n # clipped to 0 because they're fully occluded. However, rescaling\n # in this manner is consistent with the previous implementation and\n # visually appears better than a \"hard\" clip.\n intensity -= imin\n intensity /= (imax - imin)\n intensity = np.clip(intensity, 0, 1)\n\n return intensity\n\n def shade(self, data, cmap, norm=None, blend_mode='overlay', vmin=None,\n vmax=None, vert_exag=1, dx=1, dy=1, fraction=1, **kwargs):\n \"\"\"\n Combine colormapped data values with an illumination intensity map\n (a.k.a. \"hillshade\") of the values.\n\n Parameters\n ----------\n data : 2D array-like\n The height values used to generate a shaded map.\n cmap : `~matplotlib.colors.Colormap`\n The colormap used to color the *data* array. Note that this must be\n a `~matplotlib.colors.Colormap` instance. For example, rather than\n passing in ``cmap='gist_earth'``, use\n ``cmap=plt.get_cmap('gist_earth')`` instead.\n norm : `~matplotlib.colors.Normalize` instance, optional\n The normalization used to scale values before colormapping. If\n None, the input will be linearly scaled between its min and max.\n blend_mode : {'hsv', 'overlay', 'soft'} or callable, optional\n The type of blending used to combine the colormapped data\n values with the illumination intensity. Default is\n \"overlay\". Note that for most topographic surfaces,\n \"overlay\" or \"soft\" appear more visually realistic. If a\n user-defined function is supplied, it is expected to\n combine an MxNx3 RGB array of floats (ranging 0 to 1) with\n an MxNx1 hillshade array (also 0 to 1). (Call signature\n ``func(rgb, illum, **kwargs)``) Additional kwargs supplied\n to this function will be passed on to the *blend_mode*\n function.\n vmin : float or None, optional\n The minimum value used in colormapping *data*. If *None* the\n minimum value in *data* is used. If *norm* is specified, then this\n argument will be ignored.\n vmax : float or None, optional\n The maximum value used in colormapping *data*. If *None* the\n maximum value in *data* is used. If *norm* is specified, then this\n argument will be ignored.\n vert_exag : number, optional\n The amount to exaggerate the elevation values by when calculating\n illumination. This can be used either to correct for differences in\n units between the x-y coordinate system and the elevation\n coordinate system (e.g. decimal degrees vs. meters) or to\n exaggerate or de-emphasize topography.\n dx : number, optional\n The x-spacing (columns) of the input *elevation* grid.\n dy : number, optional\n The y-spacing (rows) of the input *elevation* grid.\n fraction : number, optional\n Increases or decreases the contrast of the hillshade. Values\n greater than one will cause intermediate values to move closer to\n full illumination or shadow (and clipping any values that move\n beyond 0 or 1). 
Note that this is not visually or mathematically\n the same as vertical exaggeration.\n Additional kwargs are passed on to the *blend_mode* function.\n\n Returns\n -------\n ndarray\n An MxNx4 array of floats ranging between 0-1.\n \"\"\"\n if vmin is None:\n vmin = data.min()\n if vmax is None:\n vmax = data.max()\n if norm is None:\n norm = Normalize(vmin=vmin, vmax=vmax)\n\n rgb0 = cmap(norm(data))\n rgb1 = self.shade_rgb(rgb0, elevation=data, blend_mode=blend_mode,\n vert_exag=vert_exag, dx=dx, dy=dy,\n fraction=fraction, **kwargs)\n # Don't overwrite the alpha channel, if present.\n rgb0[..., :3] = rgb1[..., :3]\n return rgb0\n\n def shade_rgb(self, rgb, elevation, fraction=1., blend_mode='hsv',\n vert_exag=1, dx=1, dy=1, **kwargs):\n \"\"\"\n Use this light source to adjust the colors of the *rgb* input array to\n give the impression of a shaded relief map with the given *elevation*.\n\n Parameters\n ----------\n rgb : array-like\n An (M, N, 3) RGB array, assumed to be in the range of 0 to 1.\n elevation : array-like\n An (M, N) array of the height values used to generate a shaded map.\n fraction : number\n Increases or decreases the contrast of the hillshade. Values\n greater than one will cause intermediate values to move closer to\n full illumination or shadow (and clipping any values that move\n beyond 0 or 1). Note that this is not visually or mathematically\n the same as vertical exaggeration.\n blend_mode : {'hsv', 'overlay', 'soft'} or callable, optional\n The type of blending used to combine the colormapped data values\n with the illumination intensity. For backwards compatibility, this\n defaults to \"hsv\". Note that for most topographic surfaces,\n \"overlay\" or \"soft\" appear more visually realistic. If a\n user-defined function is supplied, it is expected to combine an\n MxNx3 RGB array of floats (ranging 0 to 1) with an MxNx1 hillshade\n array (also 0 to 1). (Call signature\n ``func(rgb, illum, **kwargs)``)\n Additional kwargs supplied to this function will be passed on to\n the *blend_mode* function.\n vert_exag : number, optional\n The amount to exaggerate the elevation values by when calculating\n illumination. This can be used either to correct for differences in\n units between the x-y coordinate system and the elevation\n coordinate system (e.g. decimal degrees vs. 
meters) or to\n exaggerate or de-emphasize topography.\n dx : number, optional\n The x-spacing (columns) of the input *elevation* grid.\n dy : number, optional\n The y-spacing (rows) of the input *elevation* grid.\n Additional kwargs are passed on to the *blend_mode* function.\n\n Returns\n -------\n ndarray\n An (m, n, 3) array of floats ranging between 0-1.\n \"\"\"\n # Calculate the \"hillshade\" intensity.\n intensity = self.hillshade(elevation, vert_exag, dx, dy, fraction)\n intensity = intensity[..., np.newaxis]\n\n # Blend the hillshade and rgb data using the specified mode\n lookup = {\n 'hsv': self.blend_hsv,\n 'soft': self.blend_soft_light,\n 'overlay': self.blend_overlay,\n }\n if blend_mode in lookup:\n blend = lookup[blend_mode](rgb, intensity, **kwargs)\n else:\n try:\n blend = blend_mode(rgb, intensity, **kwargs)\n except TypeError as err:\n raise ValueError('\"blend_mode\" must be callable or one of {}'\n .format(lookup.keys)) from err\n\n # Only apply result where hillshade intensity isn't masked\n if np.ma.is_masked(intensity):\n mask = intensity.mask[..., 0]\n for i in range(3):\n blend[..., i][mask] = rgb[..., i][mask]\n\n return blend\n\n def blend_hsv(self, rgb, intensity, hsv_max_sat=None, hsv_max_val=None,\n hsv_min_val=None, hsv_min_sat=None):\n \"\"\"\n Take the input data array, convert to HSV values in the given colormap,\n then adjust those color values to give the impression of a shaded\n relief map with a specified light source. RGBA values are returned,\n which can then be used to plot the shaded image with imshow.\n\n The color of the resulting image will be darkened by moving the (s, v)\n values (in hsv colorspace) toward (hsv_min_sat, hsv_min_val) in the\n shaded regions, or lightened by sliding (s, v) toward (hsv_max_sat,\n hsv_max_val) in regions that are illuminated. The default extremes are\n chose so that completely shaded points are nearly black (s = 1, v = 0)\n and completely illuminated points are nearly white (s = 0, v = 1).\n\n Parameters\n ----------\n rgb : ndarray\n An MxNx3 RGB array of floats ranging from 0 to 1 (color image).\n intensity : ndarray\n An MxNx1 array of floats ranging from 0 to 1 (grayscale image).\n hsv_max_sat : number, default: 1\n The maximum saturation value that the *intensity* map can shift the\n output image to.\n hsv_min_sat : number, optional\n The minimum saturation value that the *intensity* map can shift the\n output image to. Defaults to 0.\n hsv_max_val : number, optional\n The maximum value (\"v\" in \"hsv\") that the *intensity* map can shift\n the output image to. Defaults to 1.\n hsv_min_val : number, optional\n The minimum value (\"v\" in \"hsv\") that the *intensity* map can shift\n the output image to. 
Defaults to 0.\n\n Returns\n -------\n ndarray\n An MxNx3 RGB array representing the combined images.\n \"\"\"\n # Backward compatibility...\n if hsv_max_sat is None:\n hsv_max_sat = self.hsv_max_sat\n if hsv_max_val is None:\n hsv_max_val = self.hsv_max_val\n if hsv_min_sat is None:\n hsv_min_sat = self.hsv_min_sat\n if hsv_min_val is None:\n hsv_min_val = self.hsv_min_val\n\n # Expects a 2D intensity array scaled between -1 to 1...\n intensity = intensity[..., 0]\n intensity = 2 * intensity - 1\n\n # Convert to rgb, then rgb to hsv\n hsv = rgb_to_hsv(rgb[:, :, 0:3])\n hue, sat, val = np.moveaxis(hsv, -1, 0)\n\n # Modify hsv values (in place) to simulate illumination.\n # putmask(A, mask, B) <=> A[mask] = B[mask]\n np.putmask(sat, (np.abs(sat) > 1.e-10) & (intensity > 0),\n (1 - intensity) * sat + intensity * hsv_max_sat)\n np.putmask(sat, (np.abs(sat) > 1.e-10) & (intensity < 0),\n (1 + intensity) * sat - intensity * hsv_min_sat)\n np.putmask(val, intensity > 0,\n (1 - intensity) * val + intensity * hsv_max_val)\n np.putmask(val, intensity < 0,\n (1 + intensity) * val - intensity * hsv_min_val)\n np.clip(hsv[:, :, 1:], 0, 1, out=hsv[:, :, 1:])\n\n # Convert modified hsv back to rgb.\n return hsv_to_rgb(hsv)\n\n def blend_soft_light(self, rgb, intensity):\n \"\"\"\n Combine an rgb image with an intensity map using \"soft light\" blending,\n using the \"pegtop\" formula.\n\n Parameters\n ----------\n rgb : ndarray\n An MxNx3 RGB array of floats ranging from 0 to 1 (color image).\n intensity : ndarray\n An MxNx1 array of floats ranging from 0 to 1 (grayscale image).\n\n Returns\n -------\n ndarray\n An MxNx3 RGB array representing the combined images.\n \"\"\"\n return 2 * intensity * rgb + (1 - 2 * intensity) * rgb**2\n\n def blend_overlay(self, rgb, intensity):\n \"\"\"\n Combines an rgb image with an intensity map using \"overlay\" blending.\n\n Parameters\n ----------\n rgb : ndarray\n An MxNx3 RGB array of floats ranging from 0 to 1 (color image).\n intensity : ndarray\n An MxNx1 array of floats ranging from 0 to 1 (grayscale image).\n\n Returns\n -------\n ndarray\n An MxNx3 RGB array representing the combined images.\n \"\"\"\n low = 2 * intensity * rgb\n high = 1 - 2 * (1 - intensity) * (1 - rgb)\n return np.where(rgb <= 0.5, low, high)\n\n\ndef from_levels_and_colors(levels, colors, extend='neither'):\n \"\"\"\n A helper routine to generate a cmap and a norm instance which\n behave similar to contourf's levels and colors arguments.\n\n Parameters\n ----------\n levels : sequence of numbers\n The quantization levels used to construct the `BoundaryNorm`.\n Value ``v`` is quantized to level ``i`` if ``lev[i] <= v < lev[i+1]``.\n colors : sequence of colors\n The fill color to use for each level. If *extend* is \"neither\" there\n must be ``n_level - 1`` colors. 
For an *extend* of \"min\" or \"max\" add\n one extra color, and for an *extend* of \"both\" add two colors.\n extend : {'neither', 'min', 'max', 'both'}, optional\n The behaviour when a value falls out of range of the given levels.\n See `~.Axes.contourf` for details.\n\n Returns\n -------\n cmap : `~matplotlib.colors.Normalize`\n norm : `~matplotlib.colors.Colormap`\n \"\"\"\n slice_map = {\n 'both': slice(1, -1),\n 'min': slice(1, None),\n 'max': slice(0, -1),\n 'neither': slice(0, None),\n }\n _api.check_in_list(slice_map, extend=extend)\n color_slice = slice_map[extend]\n\n n_data_colors = len(levels) - 1\n n_expected = n_data_colors + color_slice.start - (color_slice.stop or 0)\n if len(colors) != n_expected:\n raise ValueError(\n f'With extend == {extend!r} and {len(levels)} levels, '\n f'expected {n_expected} colors, but got {len(colors)}')\n\n cmap = ListedColormap(colors[color_slice], N=n_data_colors)\n\n if extend in ['min', 'both']:\n cmap.set_under(colors[0])\n else:\n cmap.set_under('none')\n\n if extend in ['max', 'both']:\n cmap.set_over(colors[-1])\n else:\n cmap.set_over('none')\n\n cmap.colorbar_extend = extend\n\n norm = BoundaryNorm(levels, ncolors=n_data_colors)\n return cmap, norm\n"}
|
{"lib/matplotlib/colors.py": [{"type": "function", "name": "Colormap.copy", "lines": [848, 850], "signature": "def copy(self):", "doc": "Return a copy of the colormap."}]}
|
3.3
|
["lib/matplotlib/tests/test_colors.py::test_colormap_copy"]
|
["lib/matplotlib/tests/test_colors.py::test_create_lookup_table[5-result0]", "lib/matplotlib/tests/test_colors.py::test_create_lookup_table[2-result1]", "lib/matplotlib/tests/test_colors.py::test_create_lookup_table[1-result2]", "lib/matplotlib/tests/test_colors.py::test_resample", "lib/matplotlib/tests/test_colors.py::test_register_cmap", "lib/matplotlib/tests/test_colors.py::test_double_register_builtin_cmap", "lib/matplotlib/tests/test_colors.py::test_unregister_builtin_cmap", "lib/matplotlib/tests/test_colors.py::test_colormap_global_set_warn", "lib/matplotlib/tests/test_colors.py::test_colormap_dict_deprecate", "lib/matplotlib/tests/test_colors.py::test_colormap_endian", "lib/matplotlib/tests/test_colors.py::test_colormap_invalid", "lib/matplotlib/tests/test_colors.py::test_colormap_return_types", "lib/matplotlib/tests/test_colors.py::test_BoundaryNorm", "lib/matplotlib/tests/test_colors.py::test_CenteredNorm", "lib/matplotlib/tests/test_colors.py::test_lognorm_invalid[-1-2]", "lib/matplotlib/tests/test_colors.py::test_lognorm_invalid[3-1]", "lib/matplotlib/tests/test_colors.py::test_LogNorm", "lib/matplotlib/tests/test_colors.py::test_LogNorm_inverse", "lib/matplotlib/tests/test_colors.py::test_PowerNorm", "lib/matplotlib/tests/test_colors.py::test_PowerNorm_translation_invariance", "lib/matplotlib/tests/test_colors.py::test_Normalize", "lib/matplotlib/tests/test_colors.py::test_FuncNorm", "lib/matplotlib/tests/test_colors.py::test_TwoSlopeNorm_autoscale", "lib/matplotlib/tests/test_colors.py::test_TwoSlopeNorm_autoscale_None_vmin", "lib/matplotlib/tests/test_colors.py::test_TwoSlopeNorm_autoscale_None_vmax", "lib/matplotlib/tests/test_colors.py::test_TwoSlopeNorm_scale", "lib/matplotlib/tests/test_colors.py::test_TwoSlopeNorm_scaleout_center", "lib/matplotlib/tests/test_colors.py::test_TwoSlopeNorm_scaleout_center_max", "lib/matplotlib/tests/test_colors.py::test_TwoSlopeNorm_Even", "lib/matplotlib/tests/test_colors.py::test_TwoSlopeNorm_Odd", "lib/matplotlib/tests/test_colors.py::test_TwoSlopeNorm_VminEqualsVcenter", "lib/matplotlib/tests/test_colors.py::test_TwoSlopeNorm_VmaxEqualsVcenter", "lib/matplotlib/tests/test_colors.py::test_TwoSlopeNorm_VminGTVcenter", "lib/matplotlib/tests/test_colors.py::test_TwoSlopeNorm_TwoSlopeNorm_VminGTVmax", "lib/matplotlib/tests/test_colors.py::test_TwoSlopeNorm_VcenterGTVmax", "lib/matplotlib/tests/test_colors.py::test_TwoSlopeNorm_premature_scaling", "lib/matplotlib/tests/test_colors.py::test_SymLogNorm", "lib/matplotlib/tests/test_colors.py::test_SymLogNorm_colorbar", "lib/matplotlib/tests/test_colors.py::test_SymLogNorm_single_zero", "lib/matplotlib/tests/test_colors.py::test_cmap_and_norm_from_levels_and_colors[png]", "lib/matplotlib/tests/test_colors.py::test_boundarynorm_and_colorbarbase[png]", "lib/matplotlib/tests/test_colors.py::test_cmap_and_norm_from_levels_and_colors2", "lib/matplotlib/tests/test_colors.py::test_rgb_hsv_round_trip", "lib/matplotlib/tests/test_colors.py::test_autoscale_masked", "lib/matplotlib/tests/test_colors.py::test_light_source_topo_surface[png]", "lib/matplotlib/tests/test_colors.py::test_light_source_shading_default", "lib/matplotlib/tests/test_colors.py::test_light_source_shading_empty_mask", "lib/matplotlib/tests/test_colors.py::test_light_source_masked_shading", "lib/matplotlib/tests/test_colors.py::test_light_source_hillshading", "lib/matplotlib/tests/test_colors.py::test_light_source_planar_hillshading", "lib/matplotlib/tests/test_colors.py::test_color_names", 
"lib/matplotlib/tests/test_colors.py::test_pandas_iterable", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Accent]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Accent_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Blues]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Blues_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[BrBG]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[BrBG_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[BuGn]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[BuGn_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[BuPu]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[BuPu_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[CMRmap]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[CMRmap_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Dark2]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Dark2_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[GnBu]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[GnBu_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Greens]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Greens_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Greys]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Greys_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[OrRd]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[OrRd_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Oranges]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Oranges_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[PRGn]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[PRGn_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Paired]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Paired_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Pastel1]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Pastel1_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Pastel2]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Pastel2_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[PiYG]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[PiYG_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[PuBu]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[PuBuGn]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[PuBuGn_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[PuBu_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[PuOr]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[PuOr_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[PuRd]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[PuRd_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Purples]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Purples_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[RdBu]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[RdBu_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[RdGy]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[RdGy_r]", 
"lib/matplotlib/tests/test_colors.py::test_colormap_reversing[RdPu]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[RdPu_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[RdYlBu]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[RdYlBu_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[RdYlGn]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[RdYlGn_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Reds]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Reds_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Set1]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Set1_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Set2]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Set2_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Set3]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Set3_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Spectral]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Spectral_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Wistia]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[Wistia_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[YlGn]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[YlGnBu]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[YlGnBu_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[YlGn_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[YlOrBr]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[YlOrBr_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[YlOrRd]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[YlOrRd_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[afmhot]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[afmhot_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[autumn]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[autumn_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[binary]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[binary_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[bone]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[bone_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[brg]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[brg_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[bwr]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[bwr_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[cividis]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[cividis_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[cool]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[cool_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[coolwarm]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[coolwarm_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[copper]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[copper_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[cubehelix]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[cubehelix_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[flag]", 
"lib/matplotlib/tests/test_colors.py::test_colormap_reversing[flag_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gist_earth]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gist_earth_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gist_gray]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gist_gray_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gist_heat]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gist_heat_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gist_ncar]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gist_ncar_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gist_rainbow]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gist_rainbow_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gist_stern]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gist_stern_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gist_yarg]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gist_yarg_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gnuplot]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gnuplot2]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gnuplot2_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gnuplot_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gray]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[gray_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[hot]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[hot_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[hsv]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[hsv_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[inferno]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[inferno_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[jet]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[jet_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[magma]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[magma_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[nipy_spectral]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[nipy_spectral_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[ocean]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[ocean_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[pink]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[pink_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[plasma]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[plasma_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[prism]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[prism_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[rainbow]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[rainbow_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[seismic]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[seismic_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[spring]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[spring_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[summer]", 
"lib/matplotlib/tests/test_colors.py::test_colormap_reversing[summer_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[tab10]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[tab10_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[tab20]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[tab20_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[tab20b]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[tab20b_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[tab20c]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[tab20c_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[terrain]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[terrain_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[turbo]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[turbo_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[twilight]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[twilight_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[twilight_shifted]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[twilight_shifted_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[viridis]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[viridis_r]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[winter]", "lib/matplotlib/tests/test_colors.py::test_colormap_reversing[winter_r]", "lib/matplotlib/tests/test_colors.py::test_cn", "lib/matplotlib/tests/test_colors.py::test_conversions", "lib/matplotlib/tests/test_colors.py::test_conversions_masked", "lib/matplotlib/tests/test_colors.py::test_to_rgba_array_single_str", "lib/matplotlib/tests/test_colors.py::test_to_rgba_array_alpha_array", "lib/matplotlib/tests/test_colors.py::test_failed_conversions", "lib/matplotlib/tests/test_colors.py::test_grey_gray", "lib/matplotlib/tests/test_colors.py::test_tableau_order", "lib/matplotlib/tests/test_colors.py::test_ndarray_subclass_norm", "lib/matplotlib/tests/test_colors.py::test_same_color", "lib/matplotlib/tests/test_colors.py::test_hex_shorthand_notation", "lib/matplotlib/tests/test_colors.py::test_repr_png", "lib/matplotlib/tests/test_colors.py::test_repr_html", "lib/matplotlib/tests/test_colors.py::test_get_under_over_bad", "lib/matplotlib/tests/test_colors.py::test_non_mutable_get_values[over]", "lib/matplotlib/tests/test_colors.py::test_non_mutable_get_values[under]", "lib/matplotlib/tests/test_colors.py::test_non_mutable_get_values[bad]", "lib/matplotlib/tests/test_colors.py::test_colormap_alpha_array", "lib/matplotlib/tests/test_colors.py::test_colormap_bad_data_with_alpha", "lib/matplotlib/tests/test_colors.py::test_2d_to_rgba", "lib/matplotlib/tests/test_colors.py::test_set_dict_to_rgba", "lib/matplotlib/tests/test_colors.py::test_norm_deepcopy"]
|
59b32afde60e46407b60c766b878c840a9bfa490
|
{"first_commit_time": 1615151492.0, "pr_title": "ENH: add a copy method to colormaps", "pr_body": "## PR Summary\r\n\r\nRight now we are deprecating access to the global colormaps, but users can only avoid it by doing `newcmap = copy.copy(cmap)` which is a bit heavy-handed. Here we provide a simple `copy` method to `Colormap` so we can do `newcmap = cmap.copy()`. \r\n\r\nSee discussion in #19609, #16991, #18503\r\n\r\n\r\n## PR Checklist\r\n\r\n<!-- Please mark any checkboxes that do not apply to this PR as [N/A]. -->\r\n\r\n- [ ] Has pytest style unit tests (and `pytest` passes).\r\n- [ ] Is [Flake 8](https://flake8.pycqa.org/en/latest/) compliant (run `flake8` on changed files to check).\r\n- [ ] New features are documented, with examples if plot related.\r\n- [ ] Documentation is sphinx and numpydoc compliant (the docs should [build](https://matplotlib.org/devel/documenting_mpl.html#building-the-docs) without error).\r\n- [ ] Conforms to Matplotlib style conventions (install `flake8-docstrings` and run `flake8 --docstring-convention=all`).\r\n- [ ] New features have an entry in `doc/users/next_whats_new/` (follow instructions in README.rst there).\r\n- [ ] API changes documented in `doc/api/next_api_changes/` (follow instructions in README.rst there).\r\n\r\n<!--\r\nThank you so much for your PR! To help us review your contribution, please\r\nconsider the following points:\r\n\r\n- A development guide is available at https://matplotlib.org/devdocs/devel/index.html.\r\n\r\n- Help with git and github is available at\r\n https://matplotlib.org/devel/gitwash/development_workflow.html.\r\n\r\n- Do not create the PR out of master, but out of a separate branch.\r\n\r\n- The PR title should summarize the changes, for example \"Raise ValueError on\r\n non-numeric input to set_xlim\". Avoid non-descriptive titles such as\r\n \"Addresses issue #8576\".\r\n\r\n- The summary should provide at least 1-2 sentences describing the pull request\r\n in detail (Why is this change required? What problem does it solve?) and\r\n link to any relevant issues.\r\n\r\n- If you are contributing fixes to docstrings, please pay attention to\r\n http://matplotlib.org/devel/documenting_mpl.html#formatting. In particular,\r\n note the difference between using single backquotes, double backquotes, and\r\n asterisks in the markup.\r\n\r\nWe understand that PRs can sometimes be overwhelming, especially as the\r\nreviews start coming in. Please let us know if the reviews are unclear or\r\nthe recommended next step seems overly demanding, if you would like help in\r\naddressing a reviewer's comments, or if you have been waiting too long to hear\r\nback on your PR.\r\n-->\r\n", "pr_timeline": [{"time": 1615159183.0, "comment": "Agreed, that we definitely don't want a copy method on everything. But most classes aren't primarily coming from global instances, and hence usually can be made from scratch. Whereas colormaps are probably 99% coming from the global list, so I think we should make it easy to move into user scope. "}], "issues": {}}
|
|
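For illustration, a minimal usage sketch of the `Colormap.copy()` API described in the problem statement of the row above. It assumes a Matplotlib build that already ships the new method; `plt.get_cmap` and the `copy.copy` fallback cited in the PR body are the only other calls used, and the `set_bad` mutation is simply an arbitrary local change to show that the globally registered colormap stays untouched.

import copy
import matplotlib.pyplot as plt

viridis = plt.get_cmap("viridis")      # shared, registered colormap
try:
    my_cmap = viridis.copy()           # lighter-weight API added by this PR
except AttributeError:                 # older Matplotlib: pre-existing workaround
    my_cmap = copy.copy(viridis)
my_cmap.set_bad("red")                 # mutate only the private copy
assert my_cmap is not viridis
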
matplotlib/matplotlib
| 20,634
|
https://github.com/matplotlib/matplotlib/pull/20634
|
matplotlib__matplotlib-20634
|
[]
|
73b7abf14c77014ab2436e7691e19cbee5864f4b
|
diff --git a/doc/api/next_api_changes/behavior/20634-JKS.rst b/doc/api/next_api_changes/behavior/20634-JKS.rst
new file mode 100644
index 000000000000..ff4046445e42
--- /dev/null
+++ b/doc/api/next_api_changes/behavior/20634-JKS.rst
@@ -0,0 +1,8 @@
+``Type1Font`` objects now decrypt the encrypted part
+----------------------------------------------------
+
+Type 1 fonts have a large part of their code encrypted as an obsolete
+copy-protection measure. This part is now available decrypted as the
+``decrypted`` attribute of :class:`~matplotlib.type1font.Type1Font`.
+This decrypted data is not yet parsed, but this is a prerequisite for
+implementing subsetting.
diff --git a/lib/matplotlib/type1font.py b/lib/matplotlib/type1font.py
index a9ae51ea5303..f417c0fc97a4 100644
--- a/lib/matplotlib/type1font.py
+++ b/lib/matplotlib/type1font.py
@@ -24,13 +24,16 @@
import binascii
import enum
import itertools
+import logging
import re
import struct
import numpy as np
from matplotlib.cbook import _format_approx
+from . import _api
+_log = logging.getLogger(__name__)
# token types
_TokenType = enum.Enum('_TokenType',
@@ -46,10 +49,12 @@ class Type1Font:
parts : tuple
A 3-tuple of the cleartext part, the encrypted part, and the finale of
zeros.
+ decrypted : bytes
+ The decrypted form of parts[1].
prop : dict[str, Any]
A dictionary of font properties.
"""
- __slots__ = ('parts', 'prop')
+ __slots__ = ('parts', 'decrypted', 'prop')
def __init__(self, input):
"""
@@ -68,6 +73,7 @@ def __init__(self, input):
data = self._read(file)
self.parts = self._split(data)
+ self.decrypted = self._decrypt(self.parts[1], 'eexec')
self._parse()
def _read(self, file):
@@ -125,13 +131,16 @@ def _split(self, data):
zeros -= 1
idx -= 1
if zeros:
- raise RuntimeError('Insufficiently many zeros in Type 1 font')
+ # this may have been a problem on old implementations that
+ # used the zeros as necessary padding
+ _log.info('Insufficiently many zeros in Type 1 font')
# Convert encrypted part to binary (if we read a pfb file, we may end
# up converting binary to hexadecimal to binary again; but if we read
# a pfa file, this part is already in hex, and I am not quite sure if
# even the pfb format guarantees that it will be in binary).
- binary = binascii.unhexlify(data[len1:idx+1])
+ idx1 = len1 + ((idx - len1 + 2) & ~1) # ensure an even number of bytes
+ binary = binascii.unhexlify(data[len1:idx1])
return data[:len1], binary, data[idx+1:]
@@ -139,6 +148,54 @@ def _split(self, data):
_token_re = re.compile(br'/{0,2}[^]\0\t\r\v\n ()<>{}/%[]+')
_instring_re = re.compile(br'[()\\]')
+ @staticmethod
+ def _decrypt(ciphertext, key, ndiscard=4):
+ """
+ Decrypt ciphertext using the Type-1 font algorithm
+
+ The algorithm is described in Adobe's "Adobe Type 1 Font Format".
+ The key argument can be an integer, or one of the strings
+ 'eexec' and 'charstring', which map to the key specified for the
+ corresponding part of Type-1 fonts.
+
+ The ndiscard argument should be an integer, usually 4.
+ That number of bytes is discarded from the beginning of plaintext.
+ """
+
+ key = _api.check_getitem({'eexec': 55665, 'charstring': 4330}, key=key)
+ plaintext = []
+ for byte in ciphertext:
+ plaintext.append(byte ^ (key >> 8))
+ key = ((key+byte) * 52845 + 22719) & 0xffff
+
+ return bytes(plaintext[ndiscard:])
+
+ @staticmethod
+ def _encrypt(plaintext, key, ndiscard=4):
+ """
+ Encrypt plaintext using the Type-1 font algorithm
+
+ The algorithm is described in Adobe's "Adobe Type 1 Font Format".
+ The key argument can be an integer, or one of the strings
+ 'eexec' and 'charstring', which map to the key specified for the
+ corresponding part of Type-1 fonts.
+
+ The ndiscard argument should be an integer, usually 4. That
+ number of bytes is prepended to the plaintext before encryption.
+ This function prepends NUL bytes for reproducibility, even though
+ the original algorithm uses random bytes, presumably to avoid
+ cryptanalysis.
+ """
+
+ key = _api.check_getitem({'eexec': 55665, 'charstring': 4330}, key=key)
+ ciphertext = []
+ for byte in b'\0' * ndiscard + plaintext:
+ c = byte ^ (key >> 8)
+ ciphertext.append(c)
+ key = ((key + c) * 52845 + 22719) & 0xffff
+
+ return bytes(ciphertext)
+
@classmethod
def _tokens(cls, text):
"""
|
diff --git a/lib/matplotlib/tests/test_type1font.py b/lib/matplotlib/tests/test_type1font.py
index 5766709c6cf8..99cc3e500b0e 100644
--- a/lib/matplotlib/tests/test_type1font.py
+++ b/lib/matplotlib/tests/test_type1font.py
@@ -15,6 +15,8 @@ def test_Type1Font():
assert font.parts[2] == rawdata[0x8985:0x8ba6]
assert font.parts[1:] == slanted.parts[1:]
assert font.parts[1:] == condensed.parts[1:]
+ assert font.decrypted.startswith(b'dup\n/Private 18 dict dup begin')
+ assert font.decrypted.endswith(b'mark currentfile closefile\n')
differ = difflib.Differ()
diff = list(differ.compare(
@@ -67,3 +69,11 @@ def test_overprecision():
assert matrix == '0.001 0 0.000167 0.001 0 0'
# and here we had -9.48090361795083
assert angle == '-9.4809'
+
+
+def test_encrypt_decrypt_roundtrip():
+ data = b'this is my plaintext \0\1\2\3'
+ encrypted = t1f.Type1Font._encrypt(data, 'eexec')
+ decrypted = t1f.Type1Font._decrypt(encrypted, 'eexec')
+ assert encrypted != decrypted
+ assert data == decrypted
| 2021-07-12T17:21:48
|
{}
|
{"doc/api/next_api_changes/behavior/20634-JKS.rst": null, "lib/matplotlib/type1font.py": "\"\"\"\nA class representing a Type 1 font.\n\nThis version reads pfa and pfb files and splits them for embedding in\npdf files. It also supports SlantFont and ExtendFont transformations,\nsimilarly to pdfTeX and friends. There is no support yet for subsetting.\n\nUsage::\n\n >>> font = Type1Font(filename)\n >>> clear_part, encrypted_part, finale = font.parts\n >>> slanted_font = font.transform({'slant': 0.167})\n >>> extended_font = font.transform({'extend': 1.2})\n\nSources:\n\n* Adobe Technical Note #5040, Supporting Downloadable PostScript\n Language Fonts.\n\n* Adobe Type 1 Font Format, Adobe Systems Incorporated, third printing,\n v1.1, 1993. ISBN 0-201-57044-0.\n\"\"\"\n\nimport binascii\nimport enum\nimport itertools\nimport re\nimport struct\n\nimport numpy as np\n\nfrom matplotlib.cbook import _format_approx\n\n\n# token types\n_TokenType = enum.Enum('_TokenType',\n 'whitespace name string delimiter number')\n\n\nclass Type1Font:\n \"\"\"\n A class representing a Type-1 font, for use by backends.\n\n Attributes\n ----------\n parts : tuple\n A 3-tuple of the cleartext part, the encrypted part, and the finale of\n zeros.\n prop : dict[str, Any]\n A dictionary of font properties.\n \"\"\"\n __slots__ = ('parts', 'prop')\n\n def __init__(self, input):\n \"\"\"\n Initialize a Type-1 font.\n\n Parameters\n ----------\n input : str or 3-tuple\n Either a pfb file name, or a 3-tuple of already-decoded Type-1\n font `~.Type1Font.parts`.\n \"\"\"\n if isinstance(input, tuple) and len(input) == 3:\n self.parts = input\n else:\n with open(input, 'rb') as file:\n data = self._read(file)\n self.parts = self._split(data)\n\n self._parse()\n\n def _read(self, file):\n \"\"\"Read the font from a file, decoding into usable parts.\"\"\"\n rawdata = file.read()\n if not rawdata.startswith(b'\\x80'):\n return rawdata\n\n data = b''\n while rawdata:\n if not rawdata.startswith(b'\\x80'):\n raise RuntimeError('Broken pfb file (expected byte 128, '\n 'got %d)' % rawdata[0])\n type = rawdata[1]\n if type in (1, 2):\n length, = struct.unpack('<i', rawdata[2:6])\n segment = rawdata[6:6 + length]\n rawdata = rawdata[6 + length:]\n\n if type == 1: # ASCII text: include verbatim\n data += segment\n elif type == 2: # binary data: encode in hexadecimal\n data += binascii.hexlify(segment)\n elif type == 3: # end of file\n break\n else:\n raise RuntimeError('Unknown segment type %d in pfb file' %\n type)\n\n return data\n\n def _split(self, data):\n \"\"\"\n Split the Type 1 font into its three main parts.\n\n The three parts are: (1) the cleartext part, which ends in a\n eexec operator; (2) the encrypted part; (3) the fixed part,\n which contains 512 ASCII zeros possibly divided on various\n lines, a cleartomark operator, and possibly something else.\n \"\"\"\n\n # Cleartext part: just find the eexec and skip whitespace\n idx = data.index(b'eexec')\n idx += len(b'eexec')\n while data[idx] in b' \\t\\r\\n':\n idx += 1\n len1 = idx\n\n # Encrypted part: find the cleartomark operator and count\n # zeros backward\n idx = data.rindex(b'cleartomark') - 1\n zeros = 512\n while zeros and data[idx] in b'0' or data[idx] in b'\\r\\n':\n if data[idx] in b'0':\n zeros -= 1\n idx -= 1\n if zeros:\n raise RuntimeError('Insufficiently many zeros in Type 1 font')\n\n # Convert encrypted part to binary (if we read a pfb file, we may end\n # up converting binary to hexadecimal to binary again; but if we read\n # a pfa file, this part is 
already in hex, and I am not quite sure if\n # even the pfb format guarantees that it will be in binary).\n binary = binascii.unhexlify(data[len1:idx+1])\n\n return data[:len1], binary, data[idx+1:]\n\n _whitespace_or_comment_re = re.compile(br'[\\0\\t\\r\\014\\n ]+|%[^\\r\\n\\v]*')\n _token_re = re.compile(br'/{0,2}[^]\\0\\t\\r\\v\\n ()<>{}/%[]+')\n _instring_re = re.compile(br'[()\\\\]')\n\n @classmethod\n def _tokens(cls, text):\n \"\"\"\n A PostScript tokenizer. Yield (token, value) pairs such as\n (_TokenType.whitespace, ' ') or (_TokenType.name, '/Foobar').\n \"\"\"\n # Preload enum members for speed.\n tok_whitespace = _TokenType.whitespace\n tok_name = _TokenType.name\n tok_string = _TokenType.string\n tok_delimiter = _TokenType.delimiter\n tok_number = _TokenType.number\n pos = 0\n while pos < len(text):\n match = cls._whitespace_or_comment_re.match(text, pos)\n if match:\n yield (tok_whitespace, match.group())\n pos = match.end()\n elif text[pos:pos+1] == b'(':\n start = pos\n pos += 1\n depth = 1\n while depth:\n match = cls._instring_re.search(text, pos)\n if match is None:\n return\n pos = match.end()\n if match.group() == b'(':\n depth += 1\n elif match.group() == b')':\n depth -= 1\n else: # a backslash - skip the next character\n pos += 1\n yield (tok_string, text[start:pos])\n elif text[pos:pos + 2] in (b'<<', b'>>'):\n yield (tok_delimiter, text[pos:pos + 2])\n pos += 2\n elif text[pos:pos+1] == b'<':\n start = pos\n pos = text.index(b'>', pos)\n yield (tok_string, text[start:pos])\n else:\n match = cls._token_re.match(text, pos)\n if match:\n try:\n float(match.group())\n yield (tok_number, match.group())\n except ValueError:\n yield (tok_name, match.group())\n pos = match.end()\n else:\n yield (tok_delimiter, text[pos:pos + 1])\n pos += 1\n\n def _parse(self):\n \"\"\"\n Find the values of various font properties. This limited kind\n of parsing is described in Chapter 10 \"Adobe Type Manager\n Compatibility\" of the Type-1 spec.\n \"\"\"\n # Preload enum members for speed.\n tok_whitespace = _TokenType.whitespace\n tok_name = _TokenType.name\n tok_string = _TokenType.string\n tok_number = _TokenType.number\n # Start with reasonable defaults\n prop = {'weight': 'Regular', 'ItalicAngle': 0.0, 'isFixedPitch': False,\n 'UnderlinePosition': -100, 'UnderlineThickness': 50}\n filtered = ((token, value)\n for token, value in self._tokens(self.parts[0])\n if token is not tok_whitespace)\n # The spec calls this an ASCII format; in Python 2.x we could\n # just treat the strings and names as opaque bytes but let's\n # turn them into proper Unicode, and be lenient in case of high bytes.\n def convert(x): return x.decode('ascii', 'replace')\n for token, value in filtered:\n if token is tok_name and value.startswith(b'/'):\n key = convert(value[1:])\n token, value = next(filtered)\n if token is tok_name:\n if value in (b'true', b'false'):\n value = value == b'true'\n else:\n value = convert(value.lstrip(b'/'))\n elif token is tok_string:\n value = convert(value.lstrip(b'(').rstrip(b')'))\n elif token is tok_number:\n if b'.' 
in value:\n value = float(value)\n else:\n value = int(value)\n else: # more complicated value such as an array\n value = None\n if key != 'FontInfo' and value is not None:\n prop[key] = value\n\n # Fill in the various *Name properties\n if 'FontName' not in prop:\n prop['FontName'] = (prop.get('FullName') or\n prop.get('FamilyName') or\n 'Unknown')\n if 'FullName' not in prop:\n prop['FullName'] = prop['FontName']\n if 'FamilyName' not in prop:\n extras = ('(?i)([ -](regular|plain|italic|oblique|(semi)?bold|'\n '(ultra)?light|extra|condensed))+$')\n prop['FamilyName'] = re.sub(extras, '', prop['FullName'])\n\n self.prop = prop\n\n @classmethod\n def _transformer(cls, tokens, slant, extend):\n tok_whitespace = _TokenType.whitespace\n tok_name = _TokenType.name\n\n def fontname(name):\n result = name\n if slant:\n result += b'_Slant_%d' % int(1000 * slant)\n if extend != 1.0:\n result += b'_Extend_%d' % int(1000 * extend)\n return result\n\n def italicangle(angle):\n return b'%a' % round(\n float(angle) - np.arctan(slant) / np.pi * 180,\n 5\n )\n\n def fontmatrix(array):\n array = array.lstrip(b'[').rstrip(b']').split()\n array = [float(x) for x in array]\n oldmatrix = np.eye(3, 3)\n oldmatrix[0:3, 0] = array[::2]\n oldmatrix[0:3, 1] = array[1::2]\n modifier = np.array([[extend, 0, 0],\n [slant, 1, 0],\n [0, 0, 1]])\n newmatrix = np.dot(modifier, oldmatrix)\n array[::2] = newmatrix[0:3, 0]\n array[1::2] = newmatrix[0:3, 1]\n return (\n '[%s]' % ' '.join(_format_approx(x, 6) for x in array)\n ).encode('ascii')\n\n def replace(fun):\n def replacer(tokens):\n token, value = next(tokens) # name, e.g., /FontMatrix\n yield value\n token, value = next(tokens) # possible whitespace\n while token is tok_whitespace:\n yield value\n token, value = next(tokens)\n if value != b'[': # name/number/etc.\n yield fun(value)\n else: # array, e.g., [1 2 3]\n result = b''\n while value != b']':\n result += value\n token, value = next(tokens)\n result += value\n yield fun(result)\n return replacer\n\n def suppress(tokens):\n for _ in itertools.takewhile(lambda x: x[1] != b'def', tokens):\n pass\n yield b''\n\n table = {b'/FontName': replace(fontname),\n b'/ItalicAngle': replace(italicangle),\n b'/FontMatrix': replace(fontmatrix),\n b'/UniqueID': suppress}\n\n for token, value in tokens:\n if token is tok_name and value in table:\n yield from table[value](\n itertools.chain([(token, value)], tokens))\n else:\n yield value\n\n def transform(self, effects):\n \"\"\"\n Return a new font that is slanted and/or extended.\n\n Parameters\n ----------\n effects : dict\n A dict with optional entries:\n\n - 'slant' : float, default: 0\n Tangent of the angle that the font is to be slanted to the\n right. Negative values slant to the left.\n - 'extend' : float, default: 1\n Scaling factor for the font width. Values less than 1 condense\n the glyphs.\n\n Returns\n -------\n `Type1Font`\n \"\"\"\n tokenizer = self._tokens(self.parts[0])\n transformed = self._transformer(tokenizer,\n slant=effects.get('slant', 0.0),\n extend=effects.get('extend', 1.0))\n return Type1Font((b\"\".join(transformed), self.parts[1], self.parts[2]))\n"}
|
diff --git a/doc/api/next_api_changes/behavior/20634-JKS.rst b/doc/api/next_api_changes/behavior/20634-JKS.rst
new file mode 100644
index 000000000000..ff4046445e42
--- /dev/null
+++ b/doc/api/next_api_changes/behavior/20634-JKS.rst
@@ -0,0 +1,8 @@
+``Type1Font`` objects now decrypt the encrypted part
+----------------------------------------------------
+
+Type 1 fonts have a large part of their code encrypted as an obsolete
+copy-protection measure. This part is now available decrypted as the
+``decrypted`` attribute of :class:`~matplotlib.type1font.Type1Font`.
+This decrypted data is not yet parsed, but this is a prerequisite for
+implementing subsetting.
|
{"lib/matplotlib/type1font.py": [{"type": "function", "name": "Type1Font._decrypt", "lines": [152, 171], "signature": "def _decrypt(ciphertext, key, ndiscard=4):", "doc": "Decrypt ciphertext using the Type-1 font algorithm\n\nThe algorithm is described in Adobe's \"Adobe Type 1 Font Format\".\nThe key argument can be an integer, or one of the strings\n'eexec' and 'charstring', which map to the key specified for the\ncorresponding part of Type-1 fonts.\n\nThe ndiscard argument should be an integer, usually 4.\nThat number of bytes is discarded from the beginning of plaintext."}, {"type": "function", "name": "Type1Font._encrypt", "lines": [174, 197], "signature": "def _encrypt(plaintext, key, ndiscard=4):", "doc": "Encrypt plaintext using the Type-1 font algorithm\n\nThe algorithm is described in Adobe's \"Adobe Type 1 Font Format\".\nThe key argument can be an integer, or one of the strings\n'eexec' and 'charstring', which map to the key specified for the\ncorresponding part of Type-1 fonts.\n\nThe ndiscard argument should be an integer, usually 4. That\nnumber of bytes is prepended to the plaintext before encryption.\nThis function prepends NUL bytes for reproducibility, even though\nthe original algorithm uses random bytes, presumably to avoid\ncryptanalysis."}]}
|
3.4
|
["lib/matplotlib/tests/test_type1font.py::test_Type1Font", "lib/matplotlib/tests/test_type1font.py::test_encrypt_decrypt_roundtrip"]
|
["lib/matplotlib/tests/test_type1font.py::test_overprecision"]
|
7e36e90ae85120bdbef9b57c2a5b35d40ed07710
|
{"first_commit_time": 1626109249.0, "pr_title": "Implement Type-1 decryption", "pr_body": "## PR Summary\r\n\r\nImplement decryption of the encrypted part of Type-1 fonts. This is a prerequisite of subsetting (#127) although that will need a lot more code.\r\n\r\nI looked at using fonttools for Type-1 subsetting, but it seems to lack much of the functionality needed for that, although it does implement this decryption algorithm. (The code in this PR is completely my own implementation.)\r\n\r\ncc @aitikgupta - subsetting TTF fonts is more important than Type-1 since it affects a lot more users, but if you want to take a look at Type-1 subsetting, this is the first step\r\n\r\n## PR Checklist\r\n\r\n<!-- Please mark any checkboxes that do not apply to this PR as [N/A]. -->\r\n\r\n- [ ] Has pytest style unit tests (and `pytest` passes).\r\n- [ ] Is [Flake 8](https://flake8.pycqa.org/en/latest/) compliant (run `flake8` on changed files to check).\r\n- [ ] New features are documented, with examples if plot related.\r\n- [ ] Documentation is sphinx and numpydoc compliant (the docs should [build](https://matplotlib.org/devel/documenting_mpl.html#building-the-docs) without error).\r\n- [ ] Conforms to Matplotlib style conventions (install `flake8-docstrings` and run `flake8 --docstring-convention=all`).\r\n- [ ] New features have an entry in `doc/users/next_whats_new/` (follow instructions in README.rst there).\r\n- [ ] API changes documented in `doc/api/next_api_changes/` (follow instructions in README.rst there).\r\n\r\n<!--\r\nThank you so much for your PR! To help us review your contribution, please\r\nconsider the following points:\r\n\r\n- A development guide is available at https://matplotlib.org/devdocs/devel/index.html.\r\n\r\n- Help with git and github is available at\r\n https://matplotlib.org/devel/gitwash/development_workflow.html.\r\n\r\n- Do not create the PR out of master, but out of a separate branch.\r\n\r\n- The PR title should summarize the changes, for example \"Raise ValueError on\r\n non-numeric input to set_xlim\". Avoid non-descriptive titles such as\r\n \"Addresses issue #8576\".\r\n\r\n- The summary should provide at least 1-2 sentences describing the pull request\r\n in detail (Why is this change required? What problem does it solve?) and\r\n link to any relevant issues.\r\n\r\n- If you are contributing fixes to docstrings, please pay attention to\r\n http://matplotlib.org/devel/documenting_mpl.html#formatting. In particular,\r\n note the difference between using single backquotes, double backquotes, and\r\n asterisks in the markup.\r\n\r\nWe understand that PRs can sometimes be overwhelming, especially as the\r\nreviews start coming in. Please let us know if the reviews are unclear or\r\nthe recommended next step seems overly demanding, if you would like help in\r\naddressing a reviewer's comments, or if you have been waiting too long to hear\r\nback on your PR.\r\n-->\r\n", "pr_timeline": [{"time": 1626958385.0, "comment": "Could we consider defering most of type1's handling to fonttools as well instead?"}, {"time": 1626958747.0, "comment": "> Could we consider defering most of type1's handling to fonttools as well instead?\r\n\r\nI tried - its type-1 implementation is very incomplete and not sufficient for subsetting."}, {"time": 1626959449.0, "comment": "OK then."}, {"time": 1626962193.0, "comment": "Thanks for the review @anntzer - I made the changes. 
I'll wait for the CI to complete before merging."}, {"time": 1626962217.0, "comment": "@jkseppan if one of us doesn't get to it, feel free to self-merge when CI is all green."}], "issues": {}}
|
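For illustration, a standalone restatement of the eexec cipher that the patch in the row above adds as `Type1Font._encrypt` / `_decrypt`. The constants (55665 for 'eexec', 4330 for 'charstring', multiplier 52845, increment 22719, four discarded padding bytes) are taken directly from that diff, and the round-trip check mirrors `test_encrypt_decrypt_roundtrip`. This is a sketch for reading alongside the patch, not a drop-in replacement.

# Type-1 font eexec cipher, restated from the patch above for illustration.
KEYS = {"eexec": 55665, "charstring": 4330}

def t1_encrypt(plaintext: bytes, key="eexec", ndiscard=4) -> bytes:
    r = KEYS.get(key, key)
    out = []
    for byte in b"\0" * ndiscard + plaintext:   # NUL padding, as in the PR
        c = byte ^ (r >> 8)
        out.append(c)
        r = ((r + c) * 52845 + 22719) & 0xFFFF  # key stream driven by ciphertext
    return bytes(out)

def t1_decrypt(ciphertext: bytes, key="eexec", ndiscard=4) -> bytes:
    r = KEYS.get(key, key)
    out = []
    for byte in ciphertext:
        out.append(byte ^ (r >> 8))
        r = ((r + byte) * 52845 + 22719) & 0xFFFF  # same key stream as encryption
    return bytes(out[ndiscard:])                    # drop the padding bytes

# Round trip, mirroring test_encrypt_decrypt_roundtrip above.
data = b"this is my plaintext \0\1\2\3"
assert t1_decrypt(t1_encrypt(data)) == data
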
matplotlib/matplotlib
| 21,977
|
https://github.com/matplotlib/matplotlib/pull/21977
|
matplotlib__matplotlib-21977
|
[]
|
ac2a14581949bcf869e969f28a21ee479d2fc250
|
diff --git a/lib/matplotlib/patches.py b/lib/matplotlib/patches.py
index b83714bdcd7b..544a626b8abe 100644
--- a/lib/matplotlib/patches.py
+++ b/lib/matplotlib/patches.py
@@ -806,6 +806,18 @@ def get_xy(self):
"""Return the left and bottom coords of the rectangle as a tuple."""
return self._x0, self._y0
+ def get_corners(self):
+ """
+ Return the corners of the rectangle, moving anti-clockwise from
+ (x0, y0).
+ """
+ return self.get_patch_transform().transform(
+ [(0, 0), (1, 0), (1, 1), (0, 1)])
+
+ def get_center(self):
+ """Return the centre of the rectangle."""
+ return self.get_patch_transform().transform((0.5, 0.5))
+
def get_width(self):
"""Return the width of the rectangle."""
return self._width
@@ -1657,6 +1669,16 @@ def get_angle(self):
angle = property(get_angle, set_angle)
+ def get_corners(self):
+ """
+ Return the corners of the ellipse bounding box.
+
+ The bounding box orientation is moving anti-clockwise from the
+ lower left corner defined before rotation.
+ """
+ return self.get_patch_transform().transform(
+ [(-1, -1), (1, -1), (1, 1), (-1, 1)])
+
class Annulus(Patch):
"""
|
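For illustration, a hedged usage sketch of the new `Rectangle.get_corners()` / `get_center()` accessors added by the patch above. It assumes a Matplotlib build containing this change; the expected values are copied from `test_corner_center` in the accompanying test patch below (anchor (10, 20), width 1, height 2, then a 90-degree rotation about the anchor).

import numpy as np
from matplotlib.patches import Rectangle

rect = Rectangle((10, 20), width=1, height=2)
# Unrotated: corners run anti-clockwise from the anchor point.
assert np.allclose(rect.get_corners(), [(10, 20), (11, 20), (11, 22), (10, 22)])
assert np.allclose(rect.get_center(), (10.5, 21))

rect.set_angle(90)  # rotate anti-clockwise about the anchor (10, 20)
assert np.allclose(rect.get_corners(), [(10, 20), (10, 21), (8, 21), (8, 20)])
assert np.allclose(rect.get_center(), (9, 20.5))
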
diff --git a/lib/matplotlib/tests/test_patches.py b/lib/matplotlib/tests/test_patches.py
index 9487758c8aef..6a8ddc87f3ae 100644
--- a/lib/matplotlib/tests/test_patches.py
+++ b/lib/matplotlib/tests/test_patches.py
@@ -6,7 +6,7 @@
import pytest
import matplotlib as mpl
-from matplotlib.patches import (Annulus, Patch, Polygon, Rectangle,
+from matplotlib.patches import (Annulus, Ellipse, Patch, Polygon, Rectangle,
FancyArrowPatch)
from matplotlib.testing.decorators import image_comparison, check_figures_equal
from matplotlib.transforms import Bbox
@@ -54,6 +54,54 @@ def test_Polygon_close():
assert_array_equal(p.get_xy(), xyclosed)
+def test_corner_center():
+ loc = [10, 20]
+ width = 1
+ height = 2
+
+ # Rectangle
+ # No rotation
+ corners = ((10, 20), (11, 20), (11, 22), (10, 22))
+ rect = Rectangle(loc, width, height)
+ assert_array_equal(rect.get_corners(), corners)
+ assert_array_equal(rect.get_center(), (10.5, 21))
+
+ # 90 deg rotation
+ corners_rot = ((10, 20), (10, 21), (8, 21), (8, 20))
+ rect.set_angle(90)
+ assert_array_equal(rect.get_corners(), corners_rot)
+ assert_array_equal(rect.get_center(), (9, 20.5))
+
+ # Rotation not a multiple of 90 deg
+ theta = 33
+ t = mtransforms.Affine2D().rotate_around(*loc, np.deg2rad(theta))
+ corners_rot = t.transform(corners)
+ rect.set_angle(theta)
+ assert_almost_equal(rect.get_corners(), corners_rot)
+
+ # Ellipse
+ loc = [loc[0] + width / 2,
+ loc[1] + height / 2]
+ ellipse = Ellipse(loc, width, height)
+
+ # No rotation
+ assert_array_equal(ellipse.get_corners(), corners)
+
+ # 90 deg rotation
+ corners_rot = ((11.5, 20.5), (11.5, 21.5), (9.5, 21.5), (9.5, 20.5))
+ ellipse.set_angle(90)
+ assert_array_equal(ellipse.get_corners(), corners_rot)
+ # Rotation shouldn't change ellipse center
+ assert_array_equal(ellipse.get_center(), loc)
+
+ # Rotation not a multiple of 90 deg
+ theta = 33
+ t = mtransforms.Affine2D().rotate_around(*loc, np.deg2rad(theta))
+ corners_rot = t.transform(corners)
+ ellipse.set_angle(theta)
+ assert_almost_equal(ellipse.get_corners(), corners_rot)
+
+
def test_rotate_rect():
loc = np.asarray([1.0, 2.0])
width = 2
| 2021-12-16T13:31:39
|
{}
|
{"lib/matplotlib/patches.py": "r\"\"\"\nPatches are `.Artist`\\s with a face color and an edge color.\n\"\"\"\n\nimport contextlib\nimport functools\nimport inspect\nimport math\nfrom numbers import Number\nimport textwrap\nfrom collections import namedtuple\n\nimport numpy as np\n\nimport matplotlib as mpl\nfrom . import (_api, artist, cbook, colors, docstring, hatch as mhatch,\n lines as mlines, transforms)\nfrom .bezier import (\n NonIntersectingPathException, get_cos_sin, get_intersection,\n get_parallels, inside_circle, make_wedged_bezier2,\n split_bezier_intersecting_with_closedpath, split_path_inout)\nfrom .path import Path\nfrom ._enums import JoinStyle, CapStyle\n\n\[email protected]\n@cbook._define_aliases({\n \"antialiased\": [\"aa\"],\n \"edgecolor\": [\"ec\"],\n \"facecolor\": [\"fc\"],\n \"linestyle\": [\"ls\"],\n \"linewidth\": [\"lw\"],\n})\nclass Patch(artist.Artist):\n \"\"\"\n A patch is a 2D artist with a face color and an edge color.\n\n If any of *edgecolor*, *facecolor*, *linewidth*, or *antialiased*\n are *None*, they default to their rc params setting.\n \"\"\"\n zorder = 1\n\n @_api.deprecated(\"3.4\")\n @_api.classproperty\n def validCap(cls):\n with _api.suppress_matplotlib_deprecation_warning():\n return mlines.Line2D.validCap\n\n @_api.deprecated(\"3.4\")\n @_api.classproperty\n def validJoin(cls):\n with _api.suppress_matplotlib_deprecation_warning():\n return mlines.Line2D.validJoin\n\n # Whether to draw an edge by default. Set on a\n # subclass-by-subclass basis.\n _edge_default = False\n\n def __init__(self,\n edgecolor=None,\n facecolor=None,\n color=None,\n linewidth=None,\n linestyle=None,\n antialiased=None,\n hatch=None,\n fill=True,\n capstyle=None,\n joinstyle=None,\n **kwargs):\n \"\"\"\n The following kwarg properties are supported\n\n %(Patch:kwdoc)s\n \"\"\"\n super().__init__()\n\n if linewidth is None:\n linewidth = mpl.rcParams['patch.linewidth']\n if linestyle is None:\n linestyle = \"solid\"\n if capstyle is None:\n capstyle = CapStyle.butt\n if joinstyle is None:\n joinstyle = JoinStyle.miter\n if antialiased is None:\n antialiased = mpl.rcParams['patch.antialiased']\n\n self._hatch_color = colors.to_rgba(mpl.rcParams['hatch.color'])\n self._fill = True # needed for set_facecolor call\n if color is not None:\n if edgecolor is not None or facecolor is not None:\n _api.warn_external(\n \"Setting the 'color' property will override \"\n \"the edgecolor or facecolor properties.\")\n self.set_color(color)\n else:\n self.set_edgecolor(edgecolor)\n self.set_facecolor(facecolor)\n\n self._linewidth = 0\n self._unscaled_dash_pattern = (0, None) # offset, dash\n self._dash_pattern = (0, None) # offset, dash (scaled by linewidth)\n\n self.set_fill(fill)\n self.set_linestyle(linestyle)\n self.set_linewidth(linewidth)\n self.set_antialiased(antialiased)\n self.set_hatch(hatch)\n self.set_capstyle(capstyle)\n self.set_joinstyle(joinstyle)\n\n if len(kwargs):\n self.update(kwargs)\n\n def get_verts(self):\n \"\"\"\n Return a copy of the vertices used in this patch.\n\n If the patch contains Bezier curves, the curves will be interpolated by\n line segments. 
To access the curves as curves, use `get_path`.\n \"\"\"\n trans = self.get_transform()\n path = self.get_path()\n polygons = path.to_polygons(trans)\n if len(polygons):\n return polygons[0]\n return []\n\n def _process_radius(self, radius):\n if radius is not None:\n return radius\n if isinstance(self._picker, Number):\n _radius = self._picker\n else:\n if self.get_edgecolor()[3] == 0:\n _radius = 0\n else:\n _radius = self.get_linewidth()\n return _radius\n\n def contains(self, mouseevent, radius=None):\n \"\"\"\n Test whether the mouse event occurred in the patch.\n\n Returns\n -------\n (bool, empty dict)\n \"\"\"\n inside, info = self._default_contains(mouseevent)\n if inside is not None:\n return inside, info\n radius = self._process_radius(radius)\n codes = self.get_path().codes\n if codes is not None:\n vertices = self.get_path().vertices\n # if the current path is concatenated by multiple sub paths.\n # get the indexes of the starting code(MOVETO) of all sub paths\n idxs, = np.where(codes == Path.MOVETO)\n # Don't split before the first MOVETO.\n idxs = idxs[1:]\n subpaths = map(\n Path, np.split(vertices, idxs), np.split(codes, idxs))\n else:\n subpaths = [self.get_path()]\n inside = any(\n subpath.contains_point(\n (mouseevent.x, mouseevent.y), self.get_transform(), radius)\n for subpath in subpaths)\n return inside, {}\n\n def contains_point(self, point, radius=None):\n \"\"\"\n Return whether the given point is inside the patch.\n\n Parameters\n ----------\n point : (float, float)\n The point (x, y) to check, in target coordinates of\n ``self.get_transform()``. These are display coordinates for patches\n that are added to a figure or axes.\n radius : float, optional\n Add an additional margin on the patch in target coordinates of\n ``self.get_transform()``. See `.Path.contains_point` for further\n details.\n\n Returns\n -------\n bool\n\n Notes\n -----\n The proper use of this method depends on the transform of the patch.\n Isolated patches do not have a transform. In this case, the patch\n creation coordinates and the point coordinates match. The following\n example checks that the center of a circle is within the circle\n\n >>> center = 0, 0\n >>> c = Circle(center, radius=1)\n >>> c.contains_point(center)\n True\n\n The convention of checking against the transformed patch stems from\n the fact that this method is predominantly used to check if display\n coordinates (e.g. from mouse events) are within the patch. If you want\n to do the above check with data coordinates, you have to properly\n transform them first:\n\n >>> center = 0, 0\n >>> c = Circle(center, radius=1)\n >>> plt.gca().add_patch(c)\n >>> transformed_center = c.get_transform().transform(center)\n >>> c.contains_point(transformed_center)\n True\n\n \"\"\"\n radius = self._process_radius(radius)\n return self.get_path().contains_point(point,\n self.get_transform(),\n radius)\n\n def contains_points(self, points, radius=None):\n \"\"\"\n Return whether the given points are inside the patch.\n\n Parameters\n ----------\n points : (N, 2) array\n The points to check, in target coordinates of\n ``self.get_transform()``. These are display coordinates for patches\n that are added to a figure or axes. Columns contain x and y values.\n radius : float, optional\n Add an additional margin on the patch in target coordinates of\n ``self.get_transform()``. 
See `.Path.contains_point` for further\n details.\n\n Returns\n -------\n length-N bool array\n\n Notes\n -----\n The proper use of this method depends on the transform of the patch.\n See the notes on `.Patch.contains_point`.\n \"\"\"\n radius = self._process_radius(radius)\n return self.get_path().contains_points(points,\n self.get_transform(),\n radius)\n\n def update_from(self, other):\n # docstring inherited.\n super().update_from(other)\n # For some properties we don't need or don't want to go through the\n # getters/setters, so we just copy them directly.\n self._edgecolor = other._edgecolor\n self._facecolor = other._facecolor\n self._original_edgecolor = other._original_edgecolor\n self._original_facecolor = other._original_facecolor\n self._fill = other._fill\n self._hatch = other._hatch\n self._hatch_color = other._hatch_color\n self._unscaled_dash_pattern = other._unscaled_dash_pattern\n self.set_linewidth(other._linewidth) # also sets scaled dashes\n self.set_transform(other.get_data_transform())\n # If the transform of other needs further initialization, then it will\n # be the case for this artist too.\n self._transformSet = other.is_transform_set()\n\n def get_extents(self):\n \"\"\"\n Return the `Patch`'s axis-aligned extents as a `~.transforms.Bbox`.\n \"\"\"\n return self.get_path().get_extents(self.get_transform())\n\n def get_transform(self):\n \"\"\"Return the `~.transforms.Transform` applied to the `Patch`.\"\"\"\n return self.get_patch_transform() + artist.Artist.get_transform(self)\n\n def get_data_transform(self):\n \"\"\"\n Return the `~.transforms.Transform` mapping data coordinates to\n physical coordinates.\n \"\"\"\n return artist.Artist.get_transform(self)\n\n def get_patch_transform(self):\n \"\"\"\n Return the `~.transforms.Transform` instance mapping patch coordinates\n to data coordinates.\n\n For example, one may define a patch of a circle which represents a\n radius of 5 by providing coordinates for a unit circle, and a\n transform which scales the coordinates (the patch coordinate) by 5.\n \"\"\"\n return transforms.IdentityTransform()\n\n def get_antialiased(self):\n \"\"\"Return whether antialiasing is used for drawing.\"\"\"\n return self._antialiased\n\n def get_edgecolor(self):\n \"\"\"Return the edge color.\"\"\"\n return self._edgecolor\n\n def get_facecolor(self):\n \"\"\"Return the face color.\"\"\"\n return self._facecolor\n\n def get_linewidth(self):\n \"\"\"Return the line width in points.\"\"\"\n return self._linewidth\n\n def get_linestyle(self):\n \"\"\"Return the linestyle.\"\"\"\n return self._linestyle\n\n def set_antialiased(self, aa):\n \"\"\"\n Set whether to use antialiased rendering.\n\n Parameters\n ----------\n aa : bool or None\n \"\"\"\n if aa is None:\n aa = mpl.rcParams['patch.antialiased']\n self._antialiased = aa\n self.stale = True\n\n def _set_edgecolor(self, color):\n set_hatch_color = True\n if color is None:\n if (mpl.rcParams['patch.force_edgecolor'] or\n not self._fill or self._edge_default):\n color = mpl.rcParams['patch.edgecolor']\n else:\n color = 'none'\n set_hatch_color = False\n\n self._edgecolor = colors.to_rgba(color, self._alpha)\n if set_hatch_color:\n self._hatch_color = self._edgecolor\n self.stale = True\n\n def set_edgecolor(self, color):\n \"\"\"\n Set the patch edge color.\n\n Parameters\n ----------\n color : color or None\n \"\"\"\n self._original_edgecolor = color\n self._set_edgecolor(color)\n\n def _set_facecolor(self, color):\n if color is None:\n color = mpl.rcParams['patch.facecolor']\n 
alpha = self._alpha if self._fill else 0\n self._facecolor = colors.to_rgba(color, alpha)\n self.stale = True\n\n def set_facecolor(self, color):\n \"\"\"\n Set the patch face color.\n\n Parameters\n ----------\n color : color or None\n \"\"\"\n self._original_facecolor = color\n self._set_facecolor(color)\n\n def set_color(self, c):\n \"\"\"\n Set both the edgecolor and the facecolor.\n\n Parameters\n ----------\n c : color\n\n See Also\n --------\n Patch.set_facecolor, Patch.set_edgecolor\n For setting the edge or face color individually.\n \"\"\"\n self.set_facecolor(c)\n self.set_edgecolor(c)\n\n def set_alpha(self, alpha):\n # docstring inherited\n super().set_alpha(alpha)\n self._set_facecolor(self._original_facecolor)\n self._set_edgecolor(self._original_edgecolor)\n # stale is already True\n\n def set_linewidth(self, w):\n \"\"\"\n Set the patch linewidth in points.\n\n Parameters\n ----------\n w : float or None\n \"\"\"\n if w is None:\n w = mpl.rcParams['patch.linewidth']\n if w is None:\n w = mpl.rcParams['axes.linewidth']\n self._linewidth = float(w)\n self._dash_pattern = mlines._scale_dashes(\n *self._unscaled_dash_pattern, w)\n self.stale = True\n\n def set_linestyle(self, ls):\n \"\"\"\n Set the patch linestyle.\n\n ========================================== =================\n linestyle description\n ========================================== =================\n ``'-'`` or ``'solid'`` solid line\n ``'--'`` or ``'dashed'`` dashed line\n ``'-.'`` or ``'dashdot'`` dash-dotted line\n ``':'`` or ``'dotted'`` dotted line\n ``'none'``, ``'None'``, ``' '``, or ``''`` draw nothing\n ========================================== =================\n\n Alternatively a dash tuple of the following form can be provided::\n\n (offset, onoffseq)\n\n where ``onoffseq`` is an even length tuple of on and off ink in points.\n\n Parameters\n ----------\n ls : {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}\n The line style.\n \"\"\"\n if ls is None:\n ls = \"solid\"\n if ls in [' ', '', 'none']:\n ls = 'None'\n self._linestyle = ls\n self._unscaled_dash_pattern = mlines._get_dash_pattern(ls)\n self._dash_pattern = mlines._scale_dashes(\n *self._unscaled_dash_pattern, self._linewidth)\n self.stale = True\n\n def set_fill(self, b):\n \"\"\"\n Set whether to fill the patch.\n\n Parameters\n ----------\n b : bool\n \"\"\"\n self._fill = bool(b)\n self._set_facecolor(self._original_facecolor)\n self._set_edgecolor(self._original_edgecolor)\n self.stale = True\n\n def get_fill(self):\n \"\"\"Return whether the patch is filled.\"\"\"\n return self._fill\n\n # Make fill a property so as to preserve the long-standing\n # but somewhat inconsistent behavior in which fill was an\n # attribute.\n fill = property(get_fill, set_fill)\n\n @docstring.interpd\n def set_capstyle(self, s):\n \"\"\"\n Set the `.CapStyle`.\n\n Parameters\n ----------\n s : `.CapStyle` or %(CapStyle)s\n \"\"\"\n cs = CapStyle(s)\n self._capstyle = cs\n self.stale = True\n\n def get_capstyle(self):\n \"\"\"Return the capstyle.\"\"\"\n return self._capstyle\n\n @docstring.interpd\n def set_joinstyle(self, s):\n \"\"\"\n Set the `.JoinStyle`.\n\n Parameters\n ----------\n s : `.JoinStyle` or %(JoinStyle)s\n \"\"\"\n js = JoinStyle(s)\n self._joinstyle = js\n self.stale = True\n\n def get_joinstyle(self):\n \"\"\"Return the joinstyle.\"\"\"\n return self._joinstyle\n\n def set_hatch(self, hatch):\n r\"\"\"\n Set the hatching pattern.\n\n *hatch* can be one of::\n\n / - diagonal hatching\n \\ - back diagonal\n | - vertical\n - - 
horizontal\n + - crossed\n x - crossed diagonal\n o - small circle\n O - large circle\n . - dots\n * - stars\n\n Letters can be combined, in which case all the specified\n hatchings are done. If same letter repeats, it increases the\n density of hatching of that pattern.\n\n Hatching is supported in the PostScript, PDF, SVG and Agg\n backends only.\n\n Parameters\n ----------\n hatch : {'/', '\\\\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}\n \"\"\"\n # Use validate_hatch(list) after deprecation.\n mhatch._validate_hatch_pattern(hatch)\n self._hatch = hatch\n self.stale = True\n\n def get_hatch(self):\n \"\"\"Return the hatching pattern.\"\"\"\n return self._hatch\n\n @contextlib.contextmanager\n def _bind_draw_path_function(self, renderer):\n \"\"\"\n ``draw()`` helper factored out for sharing with `FancyArrowPatch`.\n\n Yields a callable ``dp`` such that calling ``dp(*args, **kwargs)`` is\n equivalent to calling ``renderer1.draw_path(gc, *args, **kwargs)``\n where ``renderer1`` and ``gc`` have been suitably set from ``renderer``\n and the artist's properties.\n \"\"\"\n\n renderer.open_group('patch', self.get_gid())\n gc = renderer.new_gc()\n\n gc.set_foreground(self._edgecolor, isRGBA=True)\n\n lw = self._linewidth\n if self._edgecolor[3] == 0 or self._linestyle == 'None':\n lw = 0\n gc.set_linewidth(lw)\n gc.set_dashes(*self._dash_pattern)\n gc.set_capstyle(self._capstyle)\n gc.set_joinstyle(self._joinstyle)\n\n gc.set_antialiased(self._antialiased)\n self._set_gc_clip(gc)\n gc.set_url(self._url)\n gc.set_snap(self.get_snap())\n\n gc.set_alpha(self._alpha)\n\n if self._hatch:\n gc.set_hatch(self._hatch)\n gc.set_hatch_color(self._hatch_color)\n\n if self.get_sketch_params() is not None:\n gc.set_sketch_params(*self.get_sketch_params())\n\n if self.get_path_effects():\n from matplotlib.patheffects import PathEffectRenderer\n renderer = PathEffectRenderer(self.get_path_effects(), renderer)\n\n # In `with _bind_draw_path_function(renderer) as draw_path: ...`\n # (in the implementations of `draw()` below), calls to `draw_path(...)`\n # will occur as if they took place here with `gc` inserted as\n # additional first argument.\n yield functools.partial(renderer.draw_path, gc)\n\n gc.restore()\n renderer.close_group('patch')\n self.stale = False\n\n @artist.allow_rasterization\n def draw(self, renderer):\n # docstring inherited\n if not self.get_visible():\n return\n # Patch has traditionally ignored the dashoffset.\n with cbook._setattr_cm(\n self, _dash_pattern=(0, self._dash_pattern[1])), \\\n self._bind_draw_path_function(renderer) as draw_path:\n path = self.get_path()\n transform = self.get_transform()\n tpath = transform.transform_path_non_affine(path)\n affine = transform.get_affine()\n draw_path(tpath, affine,\n # Work around a bug in the PDF and SVG renderers, which\n # do not draw the hatches if the facecolor is fully\n # transparent, but do if it is None.\n self._facecolor if self._facecolor[3] else None)\n\n def get_path(self):\n \"\"\"Return the path of this patch.\"\"\"\n raise NotImplementedError('Derived must override')\n\n def get_window_extent(self, renderer=None):\n return self.get_path().get_extents(self.get_transform())\n\n def _convert_xy_units(self, xy):\n \"\"\"Convert x and y units for a tuple (x, y).\"\"\"\n x = self.convert_xunits(xy[0])\n y = self.convert_yunits(xy[1])\n return x, y\n\n\nclass Shadow(Patch):\n def __str__(self):\n return \"Shadow(%s)\" % (str(self.patch))\n\n @docstring.dedent_interpd\n def __init__(self, patch, ox, oy, **kwargs):\n \"\"\"\n Create 
a shadow of the given *patch*.\n\n By default, the shadow will have the same face color as the *patch*,\n but darkened.\n\n Parameters\n ----------\n patch : `.Patch`\n The patch to create the shadow for.\n ox, oy : float\n The shift of the shadow in data coordinates, scaled by a factor\n of dpi/72.\n **kwargs\n Properties of the shadow patch. Supported keys are:\n\n %(Patch:kwdoc)s\n \"\"\"\n super().__init__()\n self.patch = patch\n self._ox, self._oy = ox, oy\n self._shadow_transform = transforms.Affine2D()\n\n self.update_from(self.patch)\n color = .3 * np.asarray(colors.to_rgb(self.patch.get_facecolor()))\n self.update({'facecolor': color, 'edgecolor': color, 'alpha': 0.5,\n # Place shadow patch directly behind the inherited patch.\n 'zorder': np.nextafter(self.patch.zorder, -np.inf),\n **kwargs})\n\n def _update_transform(self, renderer):\n ox = renderer.points_to_pixels(self._ox)\n oy = renderer.points_to_pixels(self._oy)\n self._shadow_transform.clear().translate(ox, oy)\n\n def get_path(self):\n return self.patch.get_path()\n\n def get_patch_transform(self):\n return self.patch.get_patch_transform() + self._shadow_transform\n\n def draw(self, renderer):\n self._update_transform(renderer)\n super().draw(renderer)\n\n\nclass Rectangle(Patch):\n \"\"\"\n A rectangle defined via an anchor point *xy* and its *width* and *height*.\n\n The rectangle extends from ``xy[0]`` to ``xy[0] + width`` in x-direction\n and from ``xy[1]`` to ``xy[1] + height`` in y-direction. ::\n\n : +------------------+\n : | |\n : height |\n : | |\n : (xy)---- width -----+\n\n One may picture *xy* as the bottom left corner, but which corner *xy* is\n actually depends on the direction of the axis and the sign of *width*\n and *height*; e.g. *xy* would be the bottom right corner if the x-axis\n was inverted or if *width* was negative.\n \"\"\"\n\n def __str__(self):\n pars = self._x0, self._y0, self._width, self._height, self.angle\n fmt = \"Rectangle(xy=(%g, %g), width=%g, height=%g, angle=%g)\"\n return fmt % pars\n\n @docstring.dedent_interpd\n def __init__(self, xy, width, height, angle=0.0, *,\n rotation_point='xy', **kwargs):\n \"\"\"\n Parameters\n ----------\n xy : (float, float)\n The anchor point.\n width : float\n Rectangle width.\n height : float\n Rectangle height.\n angle : float, default: 0\n Rotation in degrees anti-clockwise about the rotation point.\n rotation_point : {'xy', 'center', (number, number)}, default: 'xy'\n If ``'xy'``, rotate around the anchor point. If ``'center'`` rotate\n around the center. If 2-tuple of number, rotate around this\n coordinate.\n\n Other Parameters\n ----------------\n **kwargs : `.Patch` properties\n %(Patch:kwdoc)s\n \"\"\"\n super().__init__(**kwargs)\n self._x0 = xy[0]\n self._y0 = xy[1]\n self._width = width\n self._height = height\n self.angle = float(angle)\n self.rotation_point = rotation_point\n # Required for RectangleSelector with axes aspect ratio != 1\n # The patch is defined in data coordinates and when changing the\n # selector with square modifier and not in data coordinates, we need\n # to correct for the aspect ratio difference between the data and\n # display coordinate systems. 
Its value is typically provide by\n # Axes._get_aspect_ratio()\n self._aspect_ratio_correction = 1.0\n self._convert_units() # Validate the inputs.\n\n def get_path(self):\n \"\"\"Return the vertices of the rectangle.\"\"\"\n return Path.unit_rectangle()\n\n def _convert_units(self):\n \"\"\"Convert bounds of the rectangle.\"\"\"\n x0 = self.convert_xunits(self._x0)\n y0 = self.convert_yunits(self._y0)\n x1 = self.convert_xunits(self._x0 + self._width)\n y1 = self.convert_yunits(self._y0 + self._height)\n return x0, y0, x1, y1\n\n def get_patch_transform(self):\n # Note: This cannot be called until after this has been added to\n # an Axes, otherwise unit conversion will fail. This makes it very\n # important to call the accessor method and not directly access the\n # transformation member variable.\n bbox = self.get_bbox()\n if self.rotation_point == 'center':\n width, height = bbox.x1 - bbox.x0, bbox.y1 - bbox.y0\n rotation_point = bbox.x0 + width / 2., bbox.y0 + height / 2.\n elif self.rotation_point == 'xy':\n rotation_point = bbox.x0, bbox.y0\n else:\n rotation_point = self.rotation_point\n return transforms.BboxTransformTo(bbox) \\\n + transforms.Affine2D() \\\n .translate(-rotation_point[0], -rotation_point[1]) \\\n .scale(1, self._aspect_ratio_correction) \\\n .rotate_deg(self.angle) \\\n .scale(1, 1 / self._aspect_ratio_correction) \\\n .translate(*rotation_point)\n\n @property\n def rotation_point(self):\n \"\"\"The rotation point of the patch.\"\"\"\n return self._rotation_point\n\n @rotation_point.setter\n def rotation_point(self, value):\n if value in ['center', 'xy'] or (\n isinstance(value, tuple) and len(value) == 2 and\n isinstance(value[0], Number) and isinstance(value[1], Number)\n ):\n self._rotation_point = value\n else:\n raise ValueError(\"`rotation_point` must be one of \"\n \"{'xy', 'center', (number, number)}.\")\n\n def get_x(self):\n \"\"\"Return the left coordinate of the rectangle.\"\"\"\n return self._x0\n\n def get_y(self):\n \"\"\"Return the bottom coordinate of the rectangle.\"\"\"\n return self._y0\n\n def get_xy(self):\n \"\"\"Return the left and bottom coords of the rectangle as a tuple.\"\"\"\n return self._x0, self._y0\n\n def get_width(self):\n \"\"\"Return the width of the rectangle.\"\"\"\n return self._width\n\n def get_height(self):\n \"\"\"Return the height of the rectangle.\"\"\"\n return self._height\n\n def get_angle(self):\n \"\"\"Get the rotation angle in degrees.\"\"\"\n return self.angle\n\n def set_x(self, x):\n \"\"\"Set the left coordinate of the rectangle.\"\"\"\n self._x0 = x\n self.stale = True\n\n def set_y(self, y):\n \"\"\"Set the bottom coordinate of the rectangle.\"\"\"\n self._y0 = y\n self.stale = True\n\n def set_angle(self, angle):\n \"\"\"\n Set the rotation angle in degrees.\n\n The rotation is performed anti-clockwise around *xy*.\n \"\"\"\n self.angle = angle\n self.stale = True\n\n def set_xy(self, xy):\n \"\"\"\n Set the left and bottom coordinates of the rectangle.\n\n Parameters\n ----------\n xy : (float, float)\n \"\"\"\n self._x0, self._y0 = xy\n self.stale = True\n\n def set_width(self, w):\n \"\"\"Set the width of the rectangle.\"\"\"\n self._width = w\n self.stale = True\n\n def set_height(self, h):\n \"\"\"Set the height of the rectangle.\"\"\"\n self._height = h\n self.stale = True\n\n def set_bounds(self, *args):\n \"\"\"\n Set the bounds of the rectangle as *left*, *bottom*, *width*, *height*.\n\n The values may be passed as separate parameters or as a tuple::\n\n set_bounds(left, bottom, width, height)\n 
set_bounds((left, bottom, width, height))\n\n .. ACCEPTS: (left, bottom, width, height)\n \"\"\"\n if len(args) == 1:\n l, b, w, h = args[0]\n else:\n l, b, w, h = args\n self._x0 = l\n self._y0 = b\n self._width = w\n self._height = h\n self.stale = True\n\n def get_bbox(self):\n \"\"\"Return the `.Bbox`.\"\"\"\n x0, y0, x1, y1 = self._convert_units()\n return transforms.Bbox.from_extents(x0, y0, x1, y1)\n\n xy = property(get_xy, set_xy)\n\n\nclass RegularPolygon(Patch):\n \"\"\"A regular polygon patch.\"\"\"\n\n def __str__(self):\n s = \"RegularPolygon((%g, %g), %d, radius=%g, orientation=%g)\"\n return s % (self.xy[0], self.xy[1], self.numvertices, self.radius,\n self.orientation)\n\n @docstring.dedent_interpd\n def __init__(self, xy, numVertices, radius=5, orientation=0,\n **kwargs):\n \"\"\"\n Parameters\n ----------\n xy : (float, float)\n The center position.\n\n numVertices : int\n The number of vertices.\n\n radius : float\n The distance from the center to each of the vertices.\n\n orientation : float\n The polygon rotation angle (in radians).\n\n **kwargs\n `Patch` properties:\n\n %(Patch:kwdoc)s\n \"\"\"\n self.xy = xy\n self.numvertices = numVertices\n self.orientation = orientation\n self.radius = radius\n self._path = Path.unit_regular_polygon(numVertices)\n self._patch_transform = transforms.Affine2D()\n super().__init__(**kwargs)\n\n def get_path(self):\n return self._path\n\n def get_patch_transform(self):\n return self._patch_transform.clear() \\\n .scale(self.radius) \\\n .rotate(self.orientation) \\\n .translate(*self.xy)\n\n\nclass PathPatch(Patch):\n \"\"\"A general polycurve path patch.\"\"\"\n\n _edge_default = True\n\n def __str__(self):\n s = \"PathPatch%d((%g, %g) ...)\"\n return s % (len(self._path.vertices), *tuple(self._path.vertices[0]))\n\n @docstring.dedent_interpd\n def __init__(self, path, **kwargs):\n \"\"\"\n *path* is a `~.path.Path` object.\n\n Valid keyword arguments are:\n\n %(Patch:kwdoc)s\n \"\"\"\n super().__init__(**kwargs)\n self._path = path\n\n def get_path(self):\n return self._path\n\n def set_path(self, path):\n self._path = path\n\n\nclass StepPatch(PathPatch):\n \"\"\"\n A path patch describing a stepwise constant function.\n\n By default the path is not closed and starts and stops at\n baseline value.\n \"\"\"\n\n _edge_default = False\n\n @docstring.dedent_interpd\n def __init__(self, values, edges, *,\n orientation='vertical', baseline=0, **kwargs):\n \"\"\"\n Parameters\n ----------\n values : array-like\n The step heights.\n\n edges : array-like\n The edge positions, with ``len(edges) == len(vals) + 1``,\n between which the curve takes on vals values.\n\n orientation : {'vertical', 'horizontal'}, default: 'vertical'\n The direction of the steps. Vertical means that *values* are\n along the y-axis, and edges are along the x-axis.\n\n baseline : float, array-like or None, default: 0\n The bottom value of the bounding edges or when\n ``fill=True``, position of lower edge. 
If *fill* is\n True or an array is passed to *baseline*, a closed\n path is drawn.\n\n Other valid keyword arguments are:\n\n %(Patch:kwdoc)s\n \"\"\"\n self.orientation = orientation\n self._edges = np.asarray(edges)\n self._values = np.asarray(values)\n self._baseline = np.asarray(baseline) if baseline is not None else None\n self._update_path()\n super().__init__(self._path, **kwargs)\n\n def _update_path(self):\n if np.isnan(np.sum(self._edges)):\n raise ValueError('Nan values in \"edges\" are disallowed')\n if self._edges.size - 1 != self._values.size:\n raise ValueError('Size mismatch between \"values\" and \"edges\". '\n \"Expected `len(values) + 1 == len(edges)`, but \"\n f\"`len(values) = {self._values.size}` and \"\n f\"`len(edges) = {self._edges.size}`.\")\n # Initializing with empty arrays allows supporting empty stairs.\n verts, codes = [np.empty((0, 2))], [np.empty(0, dtype=Path.code_type)]\n\n _nan_mask = np.isnan(self._values)\n if self._baseline is not None:\n _nan_mask |= np.isnan(self._baseline)\n for idx0, idx1 in cbook.contiguous_regions(~_nan_mask):\n x = np.repeat(self._edges[idx0:idx1+1], 2)\n y = np.repeat(self._values[idx0:idx1], 2)\n if self._baseline is None:\n y = np.concatenate([y[:1], y, y[-1:]])\n elif self._baseline.ndim == 0: # single baseline value\n y = np.concatenate([[self._baseline], y, [self._baseline]])\n elif self._baseline.ndim == 1: # baseline array\n base = np.repeat(self._baseline[idx0:idx1], 2)[::-1]\n x = np.concatenate([x, x[::-1]])\n y = np.concatenate([base[-1:], y, base[:1],\n base[:1], base, base[-1:]])\n else: # no baseline\n raise ValueError('Invalid `baseline` specified')\n if self.orientation == 'vertical':\n xy = np.column_stack([x, y])\n else:\n xy = np.column_stack([y, x])\n verts.append(xy)\n codes.append([Path.MOVETO] + [Path.LINETO]*(len(xy)-1))\n self._path = Path(np.concatenate(verts), np.concatenate(codes))\n\n def get_data(self):\n \"\"\"Get `.StepPatch` values, edges and baseline as namedtuple.\"\"\"\n StairData = namedtuple('StairData', 'values edges baseline')\n return StairData(self._values, self._edges, self._baseline)\n\n def set_data(self, values=None, edges=None, baseline=None):\n \"\"\"\n Set `.StepPatch` values, edges and baseline.\n\n Parameters\n ----------\n values : 1D array-like or None\n Will not update values, if passing None\n edges : 1D array-like, optional\n baseline : float, 1D array-like or None\n \"\"\"\n if values is None and edges is None and baseline is None:\n raise ValueError(\"Must set *values*, *edges* or *baseline*.\")\n if values is not None:\n self._values = np.asarray(values)\n if edges is not None:\n self._edges = np.asarray(edges)\n if baseline is not None:\n self._baseline = np.asarray(baseline)\n self._update_path()\n self.stale = True\n\n\nclass Polygon(Patch):\n \"\"\"A general polygon patch.\"\"\"\n\n def __str__(self):\n if len(self._path.vertices):\n s = \"Polygon%d((%g, %g) ...)\"\n return s % (len(self._path.vertices), *self._path.vertices[0])\n else:\n return \"Polygon0()\"\n\n @docstring.dedent_interpd\n def __init__(self, xy, closed=True, **kwargs):\n \"\"\"\n *xy* is a numpy array with shape Nx2.\n\n If *closed* is *True*, the polygon will be closed so the\n starting and ending points are the same.\n\n Valid keyword arguments are:\n\n %(Patch:kwdoc)s\n \"\"\"\n super().__init__(**kwargs)\n self._closed = closed\n self.set_xy(xy)\n\n def get_path(self):\n \"\"\"Get the `.Path` of the polygon.\"\"\"\n return self._path\n\n def get_closed(self):\n \"\"\"Return whether the 
polygon is closed.\"\"\"\n return self._closed\n\n def set_closed(self, closed):\n \"\"\"\n Set whether the polygon is closed.\n\n Parameters\n ----------\n closed : bool\n True if the polygon is closed\n \"\"\"\n if self._closed == bool(closed):\n return\n self._closed = bool(closed)\n self.set_xy(self.get_xy())\n self.stale = True\n\n def get_xy(self):\n \"\"\"\n Get the vertices of the path.\n\n Returns\n -------\n (N, 2) numpy array\n The coordinates of the vertices.\n \"\"\"\n return self._path.vertices\n\n def set_xy(self, xy):\n \"\"\"\n Set the vertices of the polygon.\n\n Parameters\n ----------\n xy : (N, 2) array-like\n The coordinates of the vertices.\n\n Notes\n -----\n Unlike `~.path.Path`, we do not ignore the last input vertex. If the\n polygon is meant to be closed, and the last point of the polygon is not\n equal to the first, we assume that the user has not explicitly passed a\n ``CLOSEPOLY`` vertex, and add it ourselves.\n \"\"\"\n xy = np.asarray(xy)\n nverts, _ = xy.shape\n if self._closed:\n # if the first and last vertex are the \"same\", then we assume that\n # the user explicitly passed the CLOSEPOLY vertex. Otherwise, we\n # have to append one since the last vertex will be \"ignored\" by\n # Path\n if nverts == 1 or nverts > 1 and (xy[0] != xy[-1]).any():\n xy = np.concatenate([xy, [xy[0]]])\n else:\n # if we aren't closed, and the last vertex matches the first, then\n # we assume we have an unnecessary CLOSEPOLY vertex and remove it\n if nverts > 2 and (xy[0] == xy[-1]).all():\n xy = xy[:-1]\n self._path = Path(xy, closed=self._closed)\n self.stale = True\n\n xy = property(get_xy, set_xy,\n doc='The vertices of the path as (N, 2) numpy array.')\n\n\nclass Wedge(Patch):\n \"\"\"Wedge shaped patch.\"\"\"\n\n def __str__(self):\n pars = (self.center[0], self.center[1], self.r,\n self.theta1, self.theta2, self.width)\n fmt = \"Wedge(center=(%g, %g), r=%g, theta1=%g, theta2=%g, width=%s)\"\n return fmt % pars\n\n @docstring.dedent_interpd\n def __init__(self, center, r, theta1, theta2, width=None, **kwargs):\n \"\"\"\n A wedge centered at *x*, *y* center with radius *r* that\n sweeps *theta1* to *theta2* (in degrees). 
If *width* is given,\n then a partial wedge is drawn from inner radius *r* - *width*\n to outer radius *r*.\n\n Valid keyword arguments are:\n\n %(Patch:kwdoc)s\n \"\"\"\n super().__init__(**kwargs)\n self.center = center\n self.r, self.width = r, width\n self.theta1, self.theta2 = theta1, theta2\n self._patch_transform = transforms.IdentityTransform()\n self._recompute_path()\n\n def _recompute_path(self):\n # Inner and outer rings are connected unless the annulus is complete\n if abs((self.theta2 - self.theta1) - 360) <= 1e-12:\n theta1, theta2 = 0, 360\n connector = Path.MOVETO\n else:\n theta1, theta2 = self.theta1, self.theta2\n connector = Path.LINETO\n\n # Form the outer ring\n arc = Path.arc(theta1, theta2)\n\n if self.width is not None:\n # Partial annulus needs to draw the outer ring\n # followed by a reversed and scaled inner ring\n v1 = arc.vertices\n v2 = arc.vertices[::-1] * (self.r - self.width) / self.r\n v = np.concatenate([v1, v2, [v1[0, :], (0, 0)]])\n c = np.concatenate([\n arc.codes, arc.codes, [connector, Path.CLOSEPOLY]])\n c[len(arc.codes)] = connector\n else:\n # Wedge doesn't need an inner ring\n v = np.concatenate([\n arc.vertices, [(0, 0), arc.vertices[0, :], (0, 0)]])\n c = np.concatenate([\n arc.codes, [connector, connector, Path.CLOSEPOLY]])\n\n # Shift and scale the wedge to the final location.\n v *= self.r\n v += np.asarray(self.center)\n self._path = Path(v, c)\n\n def set_center(self, center):\n self._path = None\n self.center = center\n self.stale = True\n\n def set_radius(self, radius):\n self._path = None\n self.r = radius\n self.stale = True\n\n def set_theta1(self, theta1):\n self._path = None\n self.theta1 = theta1\n self.stale = True\n\n def set_theta2(self, theta2):\n self._path = None\n self.theta2 = theta2\n self.stale = True\n\n def set_width(self, width):\n self._path = None\n self.width = width\n self.stale = True\n\n def get_path(self):\n if self._path is None:\n self._recompute_path()\n return self._path\n\n\n# COVERAGE NOTE: Not used internally or from examples\nclass Arrow(Patch):\n \"\"\"An arrow patch.\"\"\"\n\n def __str__(self):\n return \"Arrow()\"\n\n _path = Path([[0.0, 0.1], [0.0, -0.1],\n [0.8, -0.1], [0.8, -0.3],\n [1.0, 0.0], [0.8, 0.3],\n [0.8, 0.1], [0.0, 0.1]],\n closed=True)\n\n @docstring.dedent_interpd\n def __init__(self, x, y, dx, dy, width=1.0, **kwargs):\n \"\"\"\n Draws an arrow from (*x*, *y*) to (*x* + *dx*, *y* + *dy*).\n The width of the arrow is scaled by *width*.\n\n Parameters\n ----------\n x : float\n x coordinate of the arrow tail.\n y : float\n y coordinate of the arrow tail.\n dx : float\n Arrow length in the x direction.\n dy : float\n Arrow length in the y direction.\n width : float, default: 1\n Scale factor for the width of the arrow. 
With a default value of 1,\n the tail width is 0.2 and head width is 0.6.\n **kwargs\n Keyword arguments control the `Patch` properties:\n\n %(Patch:kwdoc)s\n\n See Also\n --------\n FancyArrow\n Patch that allows independent control of the head and tail\n properties.\n \"\"\"\n super().__init__(**kwargs)\n self._patch_transform = (\n transforms.Affine2D()\n .scale(np.hypot(dx, dy), width)\n .rotate(np.arctan2(dy, dx))\n .translate(x, y)\n .frozen())\n\n def get_path(self):\n return self._path\n\n def get_patch_transform(self):\n return self._patch_transform\n\n\nclass FancyArrow(Polygon):\n \"\"\"\n Like Arrow, but lets you set head width and head height independently.\n \"\"\"\n\n _edge_default = True\n\n def __str__(self):\n return \"FancyArrow()\"\n\n @docstring.dedent_interpd\n def __init__(self, x, y, dx, dy, width=0.001, length_includes_head=False,\n head_width=None, head_length=None, shape='full', overhang=0,\n head_starts_at_zero=False, **kwargs):\n \"\"\"\n Parameters\n ----------\n x, y : float\n The x and y coordinates of the arrow base.\n\n dx, dy : float\n The length of the arrow along x and y direction.\n\n width : float, default: 0.001\n Width of full arrow tail.\n\n length_includes_head : bool, default: False\n True if head is to be counted in calculating the length.\n\n head_width : float or None, default: 3*width\n Total width of the full arrow head.\n\n head_length : float or None, default: 1.5*head_width\n Length of arrow head.\n\n shape : {'full', 'left', 'right'}, default: 'full'\n Draw the left-half, right-half, or full arrow.\n\n overhang : float, default: 0\n Fraction that the arrow is swept back (0 overhang means\n triangular shape). Can be negative or greater than one.\n\n head_starts_at_zero : bool, default: False\n If True, the head starts being drawn at coordinate 0\n instead of ending at coordinate 0.\n\n **kwargs\n `.Patch` properties:\n\n %(Patch:kwdoc)s\n \"\"\"\n self._x = x\n self._y = y\n self._dx = dx\n self._dy = dy\n self._width = width\n self._length_includes_head = length_includes_head\n self._head_width = head_width\n self._head_length = head_length\n self._shape = shape\n self._overhang = overhang\n self._head_starts_at_zero = head_starts_at_zero\n self._make_verts()\n super().__init__(self.verts, closed=True, **kwargs)\n\n def set_data(self, *, x=None, y=None, dx=None, dy=None, width=None,\n head_width=None, head_length=None):\n \"\"\"\n Set `.FancyArrow` x, y, dx, dy, width, head_with, and head_length.\n Values left as None will not be updated.\n\n Parameters\n ----------\n x, y : float or None, default: None\n The x and y coordinates of the arrow base.\n\n dx, dy : float or None, default: None\n The length of the arrow along x and y direction.\n\n width: float or None, default: None\n Width of full arrow tail.\n\n head_width: float or None, default: None\n Total width of the full arrow head.\n\n head_length: float or None, default: None\n Length of arrow head.\n \"\"\"\n if x is not None:\n self._x = x\n if y is not None:\n self._y = y\n if dx is not None:\n self._dx = dx\n if dy is not None:\n self._dy = dy\n if width is not None:\n self._width = width\n if head_width is not None:\n self._head_width = head_width\n if head_length is not None:\n self._head_length = head_length\n self._make_verts()\n self.set_xy(self.verts)\n\n def _make_verts(self):\n if self._head_width is None:\n head_width = 3 * self._width\n else:\n head_width = self._head_width\n if self._head_length is None:\n head_length = 1.5 * head_width\n else:\n head_length = 
self._head_length\n\n distance = np.hypot(self._dx, self._dy)\n\n if self._length_includes_head:\n length = distance\n else:\n length = distance + head_length\n if not length:\n self.verts = np.empty([0, 2]) # display nothing if empty\n else:\n # start by drawing horizontal arrow, point at (0, 0)\n hw, hl = head_width, head_length\n hs, lw = self._overhang, self._width\n left_half_arrow = np.array([\n [0.0, 0.0], # tip\n [-hl, -hw / 2], # leftmost\n [-hl * (1 - hs), -lw / 2], # meets stem\n [-length, -lw / 2], # bottom left\n [-length, 0],\n ])\n # if we're not including the head, shift up by head length\n if not self._length_includes_head:\n left_half_arrow += [head_length, 0]\n # if the head starts at 0, shift up by another head length\n if self._head_starts_at_zero:\n left_half_arrow += [head_length / 2, 0]\n # figure out the shape, and complete accordingly\n if self._shape == 'left':\n coords = left_half_arrow\n else:\n right_half_arrow = left_half_arrow * [1, -1]\n if self._shape == 'right':\n coords = right_half_arrow\n elif self._shape == 'full':\n # The half-arrows contain the midpoint of the stem,\n # which we can omit from the full arrow. Including it\n # twice caused a problem with xpdf.\n coords = np.concatenate([left_half_arrow[:-1],\n right_half_arrow[-2::-1]])\n else:\n raise ValueError(\"Got unknown shape: %s\" % self.shape)\n if distance != 0:\n cx = self._dx / distance\n sx = self._dy / distance\n else:\n # Account for division by zero\n cx, sx = 0, 1\n M = [[cx, sx], [-sx, cx]]\n self.verts = np.dot(coords, M) + [\n self._x + self._dx,\n self._y + self._dy,\n ]\n\n\ndocstring.interpd.update(\n FancyArrow=\"\\n\".join(\n (inspect.getdoc(FancyArrow.__init__) or \"\").splitlines()[2:]))\n\n\nclass CirclePolygon(RegularPolygon):\n \"\"\"A polygon-approximation of a circle patch.\"\"\"\n\n def __str__(self):\n s = \"CirclePolygon((%g, %g), radius=%g, resolution=%d)\"\n return s % (self.xy[0], self.xy[1], self.radius, self.numvertices)\n\n @docstring.dedent_interpd\n def __init__(self, xy, radius=5,\n resolution=20, # the number of vertices\n ** kwargs):\n \"\"\"\n Create a circle at *xy* = (*x*, *y*) with given *radius*.\n\n This circle is approximated by a regular polygon with *resolution*\n sides. 
For a smoother circle drawn with splines, see `Circle`.\n\n Valid keyword arguments are:\n\n %(Patch:kwdoc)s\n \"\"\"\n super().__init__(xy, resolution, radius, orientation=0, **kwargs)\n\n\nclass Ellipse(Patch):\n \"\"\"A scale-free ellipse.\"\"\"\n\n def __str__(self):\n pars = (self._center[0], self._center[1],\n self.width, self.height, self.angle)\n fmt = \"Ellipse(xy=(%s, %s), width=%s, height=%s, angle=%s)\"\n return fmt % pars\n\n @docstring.dedent_interpd\n def __init__(self, xy, width, height, angle=0, **kwargs):\n \"\"\"\n Parameters\n ----------\n xy : (float, float)\n xy coordinates of ellipse centre.\n width : float\n Total length (diameter) of horizontal axis.\n height : float\n Total length (diameter) of vertical axis.\n angle : float, default: 0\n Rotation in degrees anti-clockwise.\n\n Notes\n -----\n Valid keyword arguments are:\n\n %(Patch:kwdoc)s\n \"\"\"\n super().__init__(**kwargs)\n\n self._center = xy\n self._width, self._height = width, height\n self._angle = angle\n self._path = Path.unit_circle()\n # Required for EllipseSelector with axes aspect ratio != 1\n # The patch is defined in data coordinates and when changing the\n # selector with square modifier and not in data coordinates, we need\n # to correct for the aspect ratio difference between the data and\n # display coordinate systems.\n self._aspect_ratio_correction = 1.0\n # Note: This cannot be calculated until this is added to an Axes\n self._patch_transform = transforms.IdentityTransform()\n\n def _recompute_transform(self):\n \"\"\"\n Notes\n -----\n This cannot be called until after this has been added to an Axes,\n otherwise unit conversion will fail. This makes it very important to\n call the accessor method and not directly access the transformation\n member variable.\n \"\"\"\n center = (self.convert_xunits(self._center[0]),\n self.convert_yunits(self._center[1]))\n width = self.convert_xunits(self._width)\n height = self.convert_yunits(self._height)\n self._patch_transform = transforms.Affine2D() \\\n .scale(width * 0.5, height * 0.5 * self._aspect_ratio_correction) \\\n .rotate_deg(self.angle) \\\n .scale(1, 1 / self._aspect_ratio_correction) \\\n .translate(*center)\n\n def get_path(self):\n \"\"\"Return the path of the ellipse.\"\"\"\n return self._path\n\n def get_patch_transform(self):\n self._recompute_transform()\n return self._patch_transform\n\n def set_center(self, xy):\n \"\"\"\n Set the center of the ellipse.\n\n Parameters\n ----------\n xy : (float, float)\n \"\"\"\n self._center = xy\n self.stale = True\n\n def get_center(self):\n \"\"\"Return the center of the ellipse.\"\"\"\n return self._center\n\n center = property(get_center, set_center)\n\n def set_width(self, width):\n \"\"\"\n Set the width of the ellipse.\n\n Parameters\n ----------\n width : float\n \"\"\"\n self._width = width\n self.stale = True\n\n def get_width(self):\n \"\"\"\n Return the width of the ellipse.\n \"\"\"\n return self._width\n\n width = property(get_width, set_width)\n\n def set_height(self, height):\n \"\"\"\n Set the height of the ellipse.\n\n Parameters\n ----------\n height : float\n \"\"\"\n self._height = height\n self.stale = True\n\n def get_height(self):\n \"\"\"Return the height of the ellipse.\"\"\"\n return self._height\n\n height = property(get_height, set_height)\n\n def set_angle(self, angle):\n \"\"\"\n Set the angle of the ellipse.\n\n Parameters\n ----------\n angle : float\n \"\"\"\n self._angle = angle\n self.stale = True\n\n def get_angle(self):\n \"\"\"Return the angle of the 
ellipse.\"\"\"\n return self._angle\n\n angle = property(get_angle, set_angle)\n\n\nclass Annulus(Patch):\n \"\"\"\n An elliptical annulus.\n \"\"\"\n\n @docstring.dedent_interpd\n def __init__(self, xy, r, width, angle=0.0, **kwargs):\n \"\"\"\n Parameters\n ----------\n xy : (float, float)\n xy coordinates of annulus centre.\n r : float or (float, float)\n The radius, or semi-axes:\n\n - If float: radius of the outer circle.\n - If two floats: semi-major and -minor axes of outer ellipse.\n width : float\n Width (thickness) of the annular ring. The width is measured inward\n from the outer ellipse so that for the inner ellipse the semi-axes\n are given by ``r - width``. *width* must be less than or equal to\n the semi-minor axis.\n angle : float, default: 0\n Rotation angle in degrees (anti-clockwise from the positive\n x-axis). Ignored for circular annuli (i.e., if *r* is a scalar).\n **kwargs\n Keyword arguments control the `Patch` properties:\n\n %(Patch:kwdoc)s\n \"\"\"\n super().__init__(**kwargs)\n\n self.set_radii(r)\n self.center = xy\n self.width = width\n self.angle = angle\n self._path = None\n\n def __str__(self):\n if self.a == self.b:\n r = self.a\n else:\n r = (self.a, self.b)\n\n return \"Annulus(xy=(%s, %s), r=%s, width=%s, angle=%s)\" % \\\n (*self.center, r, self.width, self.angle)\n\n def set_center(self, xy):\n \"\"\"\n Set the center of the annulus.\n\n Parameters\n ----------\n xy : (float, float)\n \"\"\"\n self._center = xy\n self._path = None\n self.stale = True\n\n def get_center(self):\n \"\"\"Return the center of the annulus.\"\"\"\n return self._center\n\n center = property(get_center, set_center)\n\n def set_width(self, width):\n \"\"\"\n Set the width (thickness) of the annulus ring.\n\n The width is measured inwards from the outer ellipse.\n\n Parameters\n ----------\n width : float\n \"\"\"\n if min(self.a, self.b) <= width:\n raise ValueError(\n 'Width of annulus must be less than or equal semi-minor axis')\n\n self._width = width\n self._path = None\n self.stale = True\n\n def get_width(self):\n \"\"\"Return the width (thickness) of the annulus ring.\"\"\"\n return self._width\n\n width = property(get_width, set_width)\n\n def set_angle(self, angle):\n \"\"\"\n Set the tilt angle of the annulus.\n\n Parameters\n ----------\n angle : float\n \"\"\"\n self._angle = angle\n self._path = None\n self.stale = True\n\n def get_angle(self):\n \"\"\"Return the angle of the annulus.\"\"\"\n return self._angle\n\n angle = property(get_angle, set_angle)\n\n def set_semimajor(self, a):\n \"\"\"\n Set the semi-major axis *a* of the annulus.\n\n Parameters\n ----------\n a : float\n \"\"\"\n self.a = float(a)\n self._path = None\n self.stale = True\n\n def set_semiminor(self, b):\n \"\"\"\n Set the semi-minor axis *b* of the annulus.\n\n Parameters\n ----------\n b : float\n \"\"\"\n self.b = float(b)\n self._path = None\n self.stale = True\n\n def set_radii(self, r):\n \"\"\"\n Set the semi-major (*a*) and semi-minor radii (*b*) of the annulus.\n\n Parameters\n ----------\n r : float or (float, float)\n The radius, or semi-axes:\n\n - If float: radius of the outer circle.\n - If two floats: semi-major and -minor axes of outer ellipse.\n \"\"\"\n if np.shape(r) == (2,):\n self.a, self.b = r\n elif np.shape(r) == ():\n self.a = self.b = float(r)\n else:\n raise ValueError(\"Parameter 'r' must be one or two floats.\")\n\n self._path = None\n self.stale = True\n\n def get_radii(self):\n \"\"\"Return the semi-major and semi-minor radii of the annulus.\"\"\"\n return 
self.a, self.b\n\n radii = property(get_radii, set_radii)\n\n def _transform_verts(self, verts, a, b):\n return transforms.Affine2D() \\\n .scale(*self._convert_xy_units((a, b))) \\\n .rotate_deg(self.angle) \\\n .translate(*self._convert_xy_units(self.center)) \\\n .transform(verts)\n\n def _recompute_path(self):\n # circular arc\n arc = Path.arc(0, 360)\n\n # annulus needs to draw an outer ring\n # followed by a reversed and scaled inner ring\n a, b, w = self.a, self.b, self.width\n v1 = self._transform_verts(arc.vertices, a, b)\n v2 = self._transform_verts(arc.vertices[::-1], a - w, b - w)\n v = np.vstack([v1, v2, v1[0, :], (0, 0)])\n c = np.hstack([arc.codes, Path.MOVETO,\n arc.codes[1:], Path.MOVETO,\n Path.CLOSEPOLY])\n self._path = Path(v, c)\n\n def get_path(self):\n if self._path is None:\n self._recompute_path()\n return self._path\n\n\nclass Circle(Ellipse):\n \"\"\"\n A circle patch.\n \"\"\"\n def __str__(self):\n pars = self.center[0], self.center[1], self.radius\n fmt = \"Circle(xy=(%g, %g), radius=%g)\"\n return fmt % pars\n\n @docstring.dedent_interpd\n def __init__(self, xy, radius=5, **kwargs):\n \"\"\"\n Create a true circle at center *xy* = (*x*, *y*) with given *radius*.\n\n Unlike `CirclePolygon` which is a polygonal approximation, this uses\n Bezier splines and is much closer to a scale-free circle.\n\n Valid keyword arguments are:\n\n %(Patch:kwdoc)s\n \"\"\"\n super().__init__(xy, radius * 2, radius * 2, **kwargs)\n self.radius = radius\n\n def set_radius(self, radius):\n \"\"\"\n Set the radius of the circle.\n\n Parameters\n ----------\n radius : float\n \"\"\"\n self.width = self.height = 2 * radius\n self.stale = True\n\n def get_radius(self):\n \"\"\"Return the radius of the circle.\"\"\"\n return self.width / 2.\n\n radius = property(get_radius, set_radius)\n\n\nclass Arc(Ellipse):\n \"\"\"\n An elliptical arc, i.e. a segment of an ellipse.\n\n Due to internal optimizations, there are certain restrictions on using Arc:\n\n - The arc cannot be filled.\n\n - The arc must be used in an `~.axes.Axes` instance. It can not be added\n directly to a `.Figure` because it is optimized to only render the\n segments that are inside the axes bounding box with high resolution.\n \"\"\"\n def __str__(self):\n pars = (self.center[0], self.center[1], self.width,\n self.height, self.angle, self.theta1, self.theta2)\n fmt = (\"Arc(xy=(%g, %g), width=%g, \"\n \"height=%g, angle=%g, theta1=%g, theta2=%g)\")\n return fmt % pars\n\n @docstring.dedent_interpd\n def __init__(self, xy, width, height, angle=0.0,\n theta1=0.0, theta2=360.0, **kwargs):\n \"\"\"\n Parameters\n ----------\n xy : (float, float)\n The center of the ellipse.\n\n width : float\n The length of the horizontal axis.\n\n height : float\n The length of the vertical axis.\n\n angle : float\n Rotation of the ellipse in degrees (counterclockwise).\n\n theta1, theta2 : float, default: 0, 360\n Starting and ending angles of the arc in degrees. These values\n are relative to *angle*, e.g. if *angle* = 45 and *theta1* = 90\n the absolute starting angle is 135.\n Default *theta1* = 0, *theta2* = 360, i.e. 
a complete ellipse.\n The arc is drawn in the counterclockwise direction.\n Angles greater than or equal to 360, or smaller than 0, are\n represented by an equivalent angle in the range [0, 360), by\n taking the input value mod 360.\n\n Other Parameters\n ----------------\n **kwargs : `.Patch` properties\n Most `.Patch` properties are supported as keyword arguments,\n with the exception of *fill* and *facecolor* because filling is\n not supported.\n\n %(Patch:kwdoc)s\n \"\"\"\n fill = kwargs.setdefault('fill', False)\n if fill:\n raise ValueError(\"Arc objects can not be filled\")\n\n super().__init__(xy, width, height, angle, **kwargs)\n\n self.theta1 = theta1\n self.theta2 = theta2\n\n @artist.allow_rasterization\n def draw(self, renderer):\n \"\"\"\n Draw the arc to the given *renderer*.\n\n Notes\n -----\n Ellipses are normally drawn using an approximation that uses\n eight cubic Bezier splines. The error of this approximation\n is 1.89818e-6, according to this unverified source:\n\n Lancaster, Don. *Approximating a Circle or an Ellipse Using\n Four Bezier Cubic Splines.*\n\n https://www.tinaja.com/glib/ellipse4.pdf\n\n There is a use case where very large ellipses must be drawn\n with very high accuracy, and it is too expensive to render the\n entire ellipse with enough segments (either splines or line\n segments). Therefore, in the case where either radius of the\n ellipse is large enough that the error of the spline\n approximation will be visible (greater than one pixel offset\n from the ideal), a different technique is used.\n\n In that case, only the visible parts of the ellipse are drawn,\n with each visible arc using a fixed number of spline segments\n (8). The algorithm proceeds as follows:\n\n 1. The points where the ellipse intersects the axes bounding\n box are located. (This is done be performing an inverse\n transformation on the axes bbox such that it is relative\n to the unit circle -- this makes the intersection\n calculation much easier than doing rotated ellipse\n intersection directly).\n\n This uses the \"line intersecting a circle\" algorithm from:\n\n Vince, John. *Geometry for Computer Graphics: Formulae,\n Examples & Proofs.* London: Springer-Verlag, 2005.\n\n 2. The angles of each of the intersection points are calculated.\n\n 3. 
Proceeding counterclockwise starting in the positive\n x-direction, each of the visible arc-segments between the\n pairs of vertices are drawn using the Bezier arc\n approximation technique implemented in `.Path.arc`.\n \"\"\"\n if not hasattr(self, 'axes'):\n raise RuntimeError('Arcs can only be used in Axes instances')\n if not self.get_visible():\n return\n\n self._recompute_transform()\n\n width = self.convert_xunits(self.width)\n height = self.convert_yunits(self.height)\n\n # If the width and height of ellipse are not equal, take into account\n # stretching when calculating angles to draw between\n def theta_stretch(theta, scale):\n theta = np.deg2rad(theta)\n x = np.cos(theta)\n y = np.sin(theta)\n stheta = np.rad2deg(np.arctan2(scale * y, x))\n # arctan2 has the range [-pi, pi], we expect [0, 2*pi]\n return (stheta + 360) % 360\n\n theta1 = self.theta1\n theta2 = self.theta2\n\n if (\n # if we need to stretch the angles because we are distorted\n width != height\n # and we are not doing a full circle.\n #\n # 0 and 360 do not exactly round-trip through the angle\n # stretching (due to both float precision limitations and\n # the difference between the range of arctan2 [-pi, pi] and\n # this method [0, 360]) so avoid doing it if we don't have to.\n and not (theta1 != theta2 and theta1 % 360 == theta2 % 360)\n ):\n theta1 = theta_stretch(self.theta1, width / height)\n theta2 = theta_stretch(self.theta2, width / height)\n\n # Get width and height in pixels we need to use\n # `self.get_data_transform` rather than `self.get_transform`\n # because we want the transform from dataspace to the\n # screen space to estimate how big the arc will be in physical\n # units when rendered (the transform that we get via\n # `self.get_transform()` goes from an idealized unit-radius\n # space to screen space).\n data_to_screen_trans = self.get_data_transform()\n pwidth, pheight = (data_to_screen_trans.transform((width, height)) -\n data_to_screen_trans.transform((0, 0)))\n inv_error = (1.0 / 1.89818e-6) * 0.5\n\n if pwidth < inv_error and pheight < inv_error:\n self._path = Path.arc(theta1, theta2)\n return Patch.draw(self, renderer)\n\n def line_circle_intersect(x0, y0, x1, y1):\n dx = x1 - x0\n dy = y1 - y0\n dr2 = dx * dx + dy * dy\n D = x0 * y1 - x1 * y0\n D2 = D * D\n discrim = dr2 - D2\n if discrim >= 0.0:\n sign_dy = np.copysign(1, dy) # +/-1, never 0.\n sqrt_discrim = np.sqrt(discrim)\n return np.array(\n [[(D * dy + sign_dy * dx * sqrt_discrim) / dr2,\n (-D * dx + abs(dy) * sqrt_discrim) / dr2],\n [(D * dy - sign_dy * dx * sqrt_discrim) / dr2,\n (-D * dx - abs(dy) * sqrt_discrim) / dr2]])\n else:\n return np.empty((0, 2))\n\n def segment_circle_intersect(x0, y0, x1, y1):\n epsilon = 1e-9\n if x1 < x0:\n x0e, x1e = x1, x0\n else:\n x0e, x1e = x0, x1\n if y1 < y0:\n y0e, y1e = y1, y0\n else:\n y0e, y1e = y0, y1\n xys = line_circle_intersect(x0, y0, x1, y1)\n xs, ys = xys.T\n return xys[\n (x0e - epsilon < xs) & (xs < x1e + epsilon)\n & (y0e - epsilon < ys) & (ys < y1e + epsilon)\n ]\n\n # Transforms the axes box_path so that it is relative to the unit\n # circle in the same way that it is relative to the desired ellipse.\n box_path_transform = (transforms.BboxTransformTo(self.axes.bbox)\n + self.get_transform().inverted())\n box_path = Path.unit_rectangle().transformed(box_path_transform)\n\n thetas = set()\n # For each of the point pairs, there is a line segment\n for p0, p1 in zip(box_path.vertices[:-1], box_path.vertices[1:]):\n xy = segment_circle_intersect(*p0, *p1)\n x, y = xy.T\n # 
arctan2 return [-pi, pi), the rest of our angles are in\n # [0, 360], adjust as needed.\n theta = (np.rad2deg(np.arctan2(y, x)) + 360) % 360\n thetas.update(theta[(theta1 < theta) & (theta < theta2)])\n thetas = sorted(thetas) + [theta2]\n last_theta = theta1\n theta1_rad = np.deg2rad(theta1)\n inside = box_path.contains_point(\n (np.cos(theta1_rad), np.sin(theta1_rad))\n )\n\n # save original path\n path_original = self._path\n for theta in thetas:\n if inside:\n self._path = Path.arc(last_theta, theta, 8)\n Patch.draw(self, renderer)\n inside = False\n else:\n inside = True\n last_theta = theta\n\n # restore original path\n self._path = path_original\n\n\ndef bbox_artist(artist, renderer, props=None, fill=True):\n \"\"\"\n A debug function to draw a rectangle around the bounding\n box returned by an artist's `.Artist.get_window_extent`\n to test whether the artist is returning the correct bbox.\n\n *props* is a dict of rectangle props with the additional property\n 'pad' that sets the padding around the bbox in points.\n \"\"\"\n if props is None:\n props = {}\n props = props.copy() # don't want to alter the pad externally\n pad = props.pop('pad', 4)\n pad = renderer.points_to_pixels(pad)\n bbox = artist.get_window_extent(renderer)\n r = Rectangle(\n xy=(bbox.x0 - pad / 2, bbox.y0 - pad / 2),\n width=bbox.width + pad, height=bbox.height + pad,\n fill=fill, transform=transforms.IdentityTransform(), clip_on=False)\n r.update(props)\n r.draw(renderer)\n\n\ndef draw_bbox(bbox, renderer, color='k', trans=None):\n \"\"\"\n A debug function to draw a rectangle around the bounding\n box returned by an artist's `.Artist.get_window_extent`\n to test whether the artist is returning the correct bbox.\n \"\"\"\n r = Rectangle(xy=bbox.p0, width=bbox.width, height=bbox.height,\n edgecolor=color, fill=False, clip_on=False)\n if trans is not None:\n r.set_transform(trans)\n r.draw(renderer)\n\n\ndef _simpleprint_styles(_styles):\n \"\"\"\n A helper function for the _Style class. Given the dictionary of\n {stylename: styleclass}, return a string rep of the list of keys.\n Used to update the documentation.\n \"\"\"\n return \"[{}]\".format(\"|\".join(map(\" '{}' \".format, _styles)))\n\n\nclass _Style:\n \"\"\"\n A base class for the Styles. 
It is meant to be a container class,\n where actual styles are declared as subclass of it, and it\n provides some helper functions.\n \"\"\"\n def __new__(cls, stylename, **kwargs):\n \"\"\"Return the instance of the subclass with the given style name.\"\"\"\n\n # The \"class\" should have the _style_list attribute, which is a mapping\n # of style names to style classes.\n\n _list = stylename.replace(\" \", \"\").split(\",\")\n _name = _list[0].lower()\n try:\n _cls = cls._style_list[_name]\n except KeyError as err:\n raise ValueError(\"Unknown style : %s\" % stylename) from err\n\n try:\n _args_pair = [cs.split(\"=\") for cs in _list[1:]]\n _args = {k: float(v) for k, v in _args_pair}\n except ValueError as err:\n raise ValueError(\"Incorrect style argument : %s\" %\n stylename) from err\n _args.update(kwargs)\n\n return _cls(**_args)\n\n @classmethod\n def get_styles(cls):\n \"\"\"Return a dictionary of available styles.\"\"\"\n return cls._style_list\n\n @classmethod\n def pprint_styles(cls):\n \"\"\"Return the available styles as pretty-printed string.\"\"\"\n table = [('Class', 'Name', 'Attrs'),\n *[(cls.__name__,\n # Add backquotes, as - and | have special meaning in reST.\n f'``{name}``',\n # [1:-1] drops the surrounding parentheses.\n str(inspect.signature(cls))[1:-1] or 'None')\n for name, cls in cls._style_list.items()]]\n # Convert to rst table.\n col_len = [max(len(cell) for cell in column) for column in zip(*table)]\n table_formatstr = ' '.join('=' * cl for cl in col_len)\n rst_table = '\\n'.join([\n '',\n table_formatstr,\n ' '.join(cell.ljust(cl) for cell, cl in zip(table[0], col_len)),\n table_formatstr,\n *[' '.join(cell.ljust(cl) for cell, cl in zip(row, col_len))\n for row in table[1:]],\n table_formatstr,\n '',\n ])\n return textwrap.indent(rst_table, prefix=' ' * 4)\n\n @classmethod\n def register(cls, name, style):\n \"\"\"Register a new style.\"\"\"\n if not issubclass(style, cls._Base):\n raise ValueError(\"%s must be a subclass of %s\" % (style,\n cls._Base))\n cls._style_list[name] = style\n\n\ndef _register_style(style_list, cls=None, *, name=None):\n \"\"\"Class decorator that stashes a class in a (style) dictionary.\"\"\"\n if cls is None:\n return functools.partial(_register_style, style_list, name=name)\n style_list[name or cls.__name__.lower()] = cls\n return cls\n\n\nclass BoxStyle(_Style):\n \"\"\"\n `BoxStyle` is a container class which defines several\n boxstyle classes, which are used for `FancyBboxPatch`.\n\n A style object can be created as::\n\n BoxStyle.Round(pad=0.2)\n\n or::\n\n BoxStyle(\"Round\", pad=0.2)\n\n or::\n\n BoxStyle(\"Round, pad=0.2\")\n\n The following boxstyle classes are defined.\n\n %(AvailableBoxstyles)s\n\n An instance of any boxstyle class is an callable object,\n whose call signature is::\n\n __call__(self, x0, y0, width, height, mutation_size)\n\n and returns a `.Path` instance. *x0*, *y0*, *width* and\n *height* specify the location and size of the box to be\n drawn. *mutation_scale* determines the overall size of the\n mutation (by which I mean the transformation of the rectangle to\n the fancy box).\n \"\"\"\n\n _style_list = {}\n\n @_api.deprecated(\"3.4\")\n class _Base:\n \"\"\"\n Abstract base class for styling of `.FancyBboxPatch`.\n\n This class is not an artist itself. The `__call__` method returns the\n `~matplotlib.path.Path` for outlining the fancy box. 
The actual drawing\n is handled in `.FancyBboxPatch`.\n\n Subclasses may only use parameters with default values in their\n ``__init__`` method because they must be able to be initialized\n without arguments.\n\n Subclasses must implement the `__call__` method. It receives the\n enclosing rectangle *x0, y0, width, height* as well as the\n *mutation_size*, which scales the outline properties such as padding.\n It returns the outline of the fancy box as `.path.Path`.\n \"\"\"\n\n @_api.deprecated(\"3.4\")\n def transmute(self, x0, y0, width, height, mutation_size):\n \"\"\"Return the `~.path.Path` outlining the given rectangle.\"\"\"\n return self(self, x0, y0, width, height, mutation_size, 1)\n\n # This can go away once the deprecation period elapses, leaving _Base\n # as a fully abstract base class just providing docstrings, no logic.\n def __init_subclass__(cls):\n transmute = _api.deprecate_method_override(\n __class__.transmute, cls, since=\"3.4\")\n if transmute:\n cls.__call__ = transmute\n return\n\n __call__ = cls.__call__\n\n @_api.delete_parameter(\"3.4\", \"mutation_aspect\")\n def call_wrapper(\n self, x0, y0, width, height, mutation_size,\n mutation_aspect=_api.deprecation._deprecated_parameter):\n if mutation_aspect is _api.deprecation._deprecated_parameter:\n # Don't trigger deprecation warning internally.\n return __call__(self, x0, y0, width, height, mutation_size)\n else:\n # Squeeze the given height by the aspect_ratio.\n y0, height = y0 / mutation_aspect, height / mutation_aspect\n path = self(x0, y0, width, height, mutation_size,\n mutation_aspect)\n vertices, codes = path.vertices, path.codes\n # Restore the height.\n vertices[:, 1] = vertices[:, 1] * mutation_aspect\n return Path(vertices, codes)\n\n cls.__call__ = call_wrapper\n\n def __call__(self, x0, y0, width, height, mutation_size):\n \"\"\"\n Given the location and size of the box, return the path of\n the box around it.\n\n Parameters\n ----------\n x0, y0, width, height : float\n Location and size of the box.\n mutation_size : float\n A reference scale for the mutation.\n\n Returns\n -------\n `~matplotlib.path.Path`\n \"\"\"\n raise NotImplementedError('Derived must override')\n\n @_register_style(_style_list)\n class Square(_Base):\n \"\"\"A square box.\"\"\"\n\n def __init__(self, pad=0.3):\n \"\"\"\n Parameters\n ----------\n pad : float, default: 0.3\n The amount of padding around the original box.\n \"\"\"\n self.pad = pad\n\n def __call__(self, x0, y0, width, height, mutation_size):\n pad = mutation_size * self.pad\n # width and height with padding added.\n width, height = width + 2 * pad, height + 2 * pad\n # boundary of the padded box\n x0, y0 = x0 - pad, y0 - pad\n x1, y1 = x0 + width, y0 + height\n return Path([(x0, y0), (x1, y0), (x1, y1), (x0, y1), (x0, y0)],\n closed=True)\n\n @_register_style(_style_list)\n class Circle(_Base):\n \"\"\"A circular box.\"\"\"\n\n def __init__(self, pad=0.3):\n \"\"\"\n Parameters\n ----------\n pad : float, default: 0.3\n The amount of padding around the original box.\n \"\"\"\n self.pad = pad\n\n def __call__(self, x0, y0, width, height, mutation_size):\n pad = mutation_size * self.pad\n width, height = width + 2 * pad, height + 2 * pad\n # boundary of the padded box\n x0, y0 = x0 - pad, y0 - pad\n return Path.circle((x0 + width / 2, y0 + height / 2),\n max(width, height) / 2)\n\n @_register_style(_style_list)\n class LArrow(_Base):\n \"\"\"A box in the shape of a left-pointing arrow.\"\"\"\n\n def __init__(self, pad=0.3):\n \"\"\"\n Parameters\n ----------\n 
pad : float, default: 0.3\n The amount of padding around the original box.\n \"\"\"\n self.pad = pad\n\n def __call__(self, x0, y0, width, height, mutation_size):\n # padding\n pad = mutation_size * self.pad\n # width and height with padding added.\n width, height = width + 2 * pad, height + 2 * pad\n # boundary of the padded box\n x0, y0 = x0 - pad, y0 - pad,\n x1, y1 = x0 + width, y0 + height\n\n dx = (y1 - y0) / 2\n dxx = dx / 2\n x0 = x0 + pad / 1.4 # adjust by ~sqrt(2)\n\n return Path([(x0 + dxx, y0), (x1, y0), (x1, y1), (x0 + dxx, y1),\n (x0 + dxx, y1 + dxx), (x0 - dx, y0 + dx),\n (x0 + dxx, y0 - dxx), # arrow\n (x0 + dxx, y0), (x0 + dxx, y0)],\n closed=True)\n\n @_register_style(_style_list)\n class RArrow(LArrow):\n \"\"\"A box in the shape of a right-pointing arrow.\"\"\"\n\n def __call__(self, x0, y0, width, height, mutation_size):\n p = BoxStyle.LArrow.__call__(\n self, x0, y0, width, height, mutation_size)\n p.vertices[:, 0] = 2 * x0 + width - p.vertices[:, 0]\n return p\n\n @_register_style(_style_list)\n class DArrow(_Base):\n \"\"\"A box in the shape of a two-way arrow.\"\"\"\n # Modified from LArrow to add a right arrow to the bbox.\n\n def __init__(self, pad=0.3):\n \"\"\"\n Parameters\n ----------\n pad : float, default: 0.3\n The amount of padding around the original box.\n \"\"\"\n self.pad = pad\n\n def __call__(self, x0, y0, width, height, mutation_size):\n # padding\n pad = mutation_size * self.pad\n # width and height with padding added.\n # The width is padded by the arrows, so we don't need to pad it.\n height = height + 2 * pad\n # boundary of the padded box\n x0, y0 = x0 - pad, y0 - pad\n x1, y1 = x0 + width, y0 + height\n\n dx = (y1 - y0) / 2\n dxx = dx / 2\n x0 = x0 + pad / 1.4 # adjust by ~sqrt(2)\n\n return Path([(x0 + dxx, y0), (x1, y0), # bot-segment\n (x1, y0 - dxx), (x1 + dx + dxx, y0 + dx),\n (x1, y1 + dxx), # right-arrow\n (x1, y1), (x0 + dxx, y1), # top-segment\n (x0 + dxx, y1 + dxx), (x0 - dx, y0 + dx),\n (x0 + dxx, y0 - dxx), # left-arrow\n (x0 + dxx, y0), (x0 + dxx, y0)], # close-poly\n closed=True)\n\n @_register_style(_style_list)\n class Round(_Base):\n \"\"\"A box with round corners.\"\"\"\n\n def __init__(self, pad=0.3, rounding_size=None):\n \"\"\"\n Parameters\n ----------\n pad : float, default: 0.3\n The amount of padding around the original box.\n rounding_size : float, default: *pad*\n Radius of the corners.\n \"\"\"\n self.pad = pad\n self.rounding_size = rounding_size\n\n def __call__(self, x0, y0, width, height, mutation_size):\n\n # padding\n pad = mutation_size * self.pad\n\n # size of the rounding corner\n if self.rounding_size:\n dr = mutation_size * self.rounding_size\n else:\n dr = pad\n\n width, height = width + 2 * pad, height + 2 * pad\n\n x0, y0 = x0 - pad, y0 - pad,\n x1, y1 = x0 + width, y0 + height\n\n # Round corners are implemented as quadratic Bezier, e.g.,\n # [(x0, y0-dr), (x0, y0), (x0+dr, y0)] for lower left corner.\n cp = [(x0 + dr, y0),\n (x1 - dr, y0),\n (x1, y0), (x1, y0 + dr),\n (x1, y1 - dr),\n (x1, y1), (x1 - dr, y1),\n (x0 + dr, y1),\n (x0, y1), (x0, y1 - dr),\n (x0, y0 + dr),\n (x0, y0), (x0 + dr, y0),\n (x0 + dr, y0)]\n\n com = [Path.MOVETO,\n Path.LINETO,\n Path.CURVE3, Path.CURVE3,\n Path.LINETO,\n Path.CURVE3, Path.CURVE3,\n Path.LINETO,\n Path.CURVE3, Path.CURVE3,\n Path.LINETO,\n Path.CURVE3, Path.CURVE3,\n Path.CLOSEPOLY]\n\n path = Path(cp, com)\n\n return path\n\n @_register_style(_style_list)\n class Round4(_Base):\n \"\"\"A box with rounded edges.\"\"\"\n\n def __init__(self, pad=0.3, 
rounding_size=None):\n \"\"\"\n Parameters\n ----------\n pad : float, default: 0.3\n The amount of padding around the original box.\n rounding_size : float, default: *pad*/2\n Rounding of edges.\n \"\"\"\n self.pad = pad\n self.rounding_size = rounding_size\n\n def __call__(self, x0, y0, width, height, mutation_size):\n\n # padding\n pad = mutation_size * self.pad\n\n # Rounding size; defaults to half of the padding.\n if self.rounding_size:\n dr = mutation_size * self.rounding_size\n else:\n dr = pad / 2.\n\n width = width + 2 * pad - 2 * dr\n height = height + 2 * pad - 2 * dr\n\n x0, y0 = x0 - pad + dr, y0 - pad + dr,\n x1, y1 = x0 + width, y0 + height\n\n cp = [(x0, y0),\n (x0 + dr, y0 - dr), (x1 - dr, y0 - dr), (x1, y0),\n (x1 + dr, y0 + dr), (x1 + dr, y1 - dr), (x1, y1),\n (x1 - dr, y1 + dr), (x0 + dr, y1 + dr), (x0, y1),\n (x0 - dr, y1 - dr), (x0 - dr, y0 + dr), (x0, y0),\n (x0, y0)]\n\n com = [Path.MOVETO,\n Path.CURVE4, Path.CURVE4, Path.CURVE4,\n Path.CURVE4, Path.CURVE4, Path.CURVE4,\n Path.CURVE4, Path.CURVE4, Path.CURVE4,\n Path.CURVE4, Path.CURVE4, Path.CURVE4,\n Path.CLOSEPOLY]\n\n path = Path(cp, com)\n\n return path\n\n @_register_style(_style_list)\n class Sawtooth(_Base):\n \"\"\"A box with a sawtooth outline.\"\"\"\n\n def __init__(self, pad=0.3, tooth_size=None):\n \"\"\"\n Parameters\n ----------\n pad : float, default: 0.3\n The amount of padding around the original box.\n tooth_size : float, default: *pad*/2\n Size of the sawtooth.\n \"\"\"\n self.pad = pad\n self.tooth_size = tooth_size\n\n def _get_sawtooth_vertices(self, x0, y0, width, height, mutation_size):\n\n # padding\n pad = mutation_size * self.pad\n\n # size of sawtooth\n if self.tooth_size is None:\n tooth_size = self.pad * .5 * mutation_size\n else:\n tooth_size = self.tooth_size * mutation_size\n\n tooth_size2 = tooth_size / 2\n width = width + 2 * pad - tooth_size\n height = height + 2 * pad - tooth_size\n\n # the sizes of the vertical and horizontal sawtooth are\n # separately adjusted to fit the given box size.\n dsx_n = int(round((width - tooth_size) / (tooth_size * 2))) * 2\n dsx = (width - tooth_size) / dsx_n\n dsy_n = int(round((height - tooth_size) / (tooth_size * 2))) * 2\n dsy = (height - tooth_size) / dsy_n\n\n x0, y0 = x0 - pad + tooth_size2, y0 - pad + tooth_size2\n x1, y1 = x0 + width, y0 + height\n\n bottom_saw_x = [\n x0,\n *(x0 + tooth_size2 + dsx * .5 * np.arange(dsx_n * 2)),\n x1 - tooth_size2,\n ]\n bottom_saw_y = [\n y0,\n *([y0 - tooth_size2, y0, y0 + tooth_size2, y0] * dsx_n),\n y0 - tooth_size2,\n ]\n right_saw_x = [\n x1,\n *([x1 + tooth_size2, x1, x1 - tooth_size2, x1] * dsx_n),\n x1 + tooth_size2,\n ]\n right_saw_y = [\n y0,\n *(y0 + tooth_size2 + dsy * .5 * np.arange(dsy_n * 2)),\n y1 - tooth_size2,\n ]\n top_saw_x = [\n x1,\n *(x1 - tooth_size2 - dsx * .5 * np.arange(dsx_n * 2)),\n x0 + tooth_size2,\n ]\n top_saw_y = [\n y1,\n *([y1 + tooth_size2, y1, y1 - tooth_size2, y1] * dsx_n),\n y1 + tooth_size2,\n ]\n left_saw_x = [\n x0,\n *([x0 - tooth_size2, x0, x0 + tooth_size2, x0] * dsy_n),\n x0 - tooth_size2,\n ]\n left_saw_y = [\n y1,\n *(y1 - tooth_size2 - dsy * .5 * np.arange(dsy_n * 2)),\n y0 + tooth_size2,\n ]\n\n saw_vertices = [*zip(bottom_saw_x, bottom_saw_y),\n *zip(right_saw_x, right_saw_y),\n *zip(top_saw_x, top_saw_y),\n *zip(left_saw_x, left_saw_y),\n (bottom_saw_x[0], bottom_saw_y[0])]\n\n return saw_vertices\n\n def __call__(self, x0, y0, width, height, mutation_size):\n saw_vertices = self._get_sawtooth_vertices(x0, y0, width,\n height, mutation_size)\n path = 
Path(saw_vertices, closed=True)\n return path\n\n @_register_style(_style_list)\n class Roundtooth(Sawtooth):\n \"\"\"A box with a rounded sawtooth outline.\"\"\"\n\n def __call__(self, x0, y0, width, height, mutation_size):\n saw_vertices = self._get_sawtooth_vertices(x0, y0,\n width, height,\n mutation_size)\n # Add a trailing vertex to allow us to close the polygon correctly\n saw_vertices = np.concatenate([saw_vertices, [saw_vertices[0]]])\n codes = ([Path.MOVETO] +\n [Path.CURVE3, Path.CURVE3] * ((len(saw_vertices)-1)//2) +\n [Path.CLOSEPOLY])\n return Path(saw_vertices, codes)\n\n\nclass ConnectionStyle(_Style):\n \"\"\"\n `ConnectionStyle` is a container class which defines\n several connectionstyle classes, which is used to create a path\n between two points. These are mainly used with `FancyArrowPatch`.\n\n A connectionstyle object can be either created as::\n\n ConnectionStyle.Arc3(rad=0.2)\n\n or::\n\n ConnectionStyle(\"Arc3\", rad=0.2)\n\n or::\n\n ConnectionStyle(\"Arc3, rad=0.2\")\n\n The following classes are defined\n\n %(AvailableConnectorstyles)s\n\n An instance of any connection style class is an callable object,\n whose call signature is::\n\n __call__(self, posA, posB,\n patchA=None, patchB=None,\n shrinkA=2., shrinkB=2.)\n\n and it returns a `.Path` instance. *posA* and *posB* are\n tuples of (x, y) coordinates of the two points to be\n connected. *patchA* (or *patchB*) is given, the returned path is\n clipped so that it start (or end) from the boundary of the\n patch. The path is further shrunk by *shrinkA* (or *shrinkB*)\n which is given in points.\n \"\"\"\n\n _style_list = {}\n\n class _Base:\n \"\"\"\n A base class for connectionstyle classes. The subclass needs\n to implement a *connect* method whose call signature is::\n\n connect(posA, posB)\n\n where posA and posB are tuples of x, y coordinates to be\n connected. The method needs to return a path connecting two\n points. This base class defines a __call__ method, and a few\n helper methods.\n \"\"\"\n\n class SimpleEvent:\n def __init__(self, xy):\n self.x, self.y = xy\n\n def _clip(self, path, patchA, patchB):\n \"\"\"\n Clip the path to the boundary of the patchA and patchB.\n The starting point of the path needed to be inside of the\n patchA and the end point inside the patch B. 
The *contains*\n methods of each patch object is utilized to test if the point\n is inside the path.\n \"\"\"\n\n if patchA:\n def insideA(xy_display):\n xy_event = ConnectionStyle._Base.SimpleEvent(xy_display)\n return patchA.contains(xy_event)[0]\n\n try:\n left, right = split_path_inout(path, insideA)\n except ValueError:\n right = path\n\n path = right\n\n if patchB:\n def insideB(xy_display):\n xy_event = ConnectionStyle._Base.SimpleEvent(xy_display)\n return patchB.contains(xy_event)[0]\n\n try:\n left, right = split_path_inout(path, insideB)\n except ValueError:\n left = path\n\n path = left\n\n return path\n\n def _shrink(self, path, shrinkA, shrinkB):\n \"\"\"\n Shrink the path by fixed size (in points) with shrinkA and shrinkB.\n \"\"\"\n if shrinkA:\n insideA = inside_circle(*path.vertices[0], shrinkA)\n try:\n left, path = split_path_inout(path, insideA)\n except ValueError:\n pass\n if shrinkB:\n insideB = inside_circle(*path.vertices[-1], shrinkB)\n try:\n path, right = split_path_inout(path, insideB)\n except ValueError:\n pass\n return path\n\n def __call__(self, posA, posB,\n shrinkA=2., shrinkB=2., patchA=None, patchB=None):\n \"\"\"\n Call the *connect* method to create a path between *posA* and\n *posB*; then clip and shrink the path.\n \"\"\"\n path = self.connect(posA, posB)\n clipped_path = self._clip(path, patchA, patchB)\n shrunk_path = self._shrink(clipped_path, shrinkA, shrinkB)\n return shrunk_path\n\n @_register_style(_style_list)\n class Arc3(_Base):\n \"\"\"\n Creates a simple quadratic Bezier curve between two\n points. The curve is created so that the middle control point\n (C1) is located at the same distance from the start (C0) and\n end points(C2) and the distance of the C1 to the line\n connecting C0-C2 is *rad* times the distance of C0-C2.\n \"\"\"\n\n def __init__(self, rad=0.):\n \"\"\"\n *rad*\n curvature of the curve.\n \"\"\"\n self.rad = rad\n\n def connect(self, posA, posB):\n x1, y1 = posA\n x2, y2 = posB\n x12, y12 = (x1 + x2) / 2., (y1 + y2) / 2.\n dx, dy = x2 - x1, y2 - y1\n\n f = self.rad\n\n cx, cy = x12 + f * dy, y12 - f * dx\n\n vertices = [(x1, y1),\n (cx, cy),\n (x2, y2)]\n codes = [Path.MOVETO,\n Path.CURVE3,\n Path.CURVE3]\n\n return Path(vertices, codes)\n\n @_register_style(_style_list)\n class Angle3(_Base):\n \"\"\"\n Creates a simple quadratic Bezier curve between two\n points. The middle control points is placed at the\n intersecting point of two lines which cross the start and\n end point, and have a slope of angleA and angleB, respectively.\n \"\"\"\n\n def __init__(self, angleA=90, angleB=0):\n \"\"\"\n *angleA*\n starting angle of the path\n\n *angleB*\n ending angle of the path\n \"\"\"\n\n self.angleA = angleA\n self.angleB = angleB\n\n def connect(self, posA, posB):\n x1, y1 = posA\n x2, y2 = posB\n\n cosA = math.cos(math.radians(self.angleA))\n sinA = math.sin(math.radians(self.angleA))\n cosB = math.cos(math.radians(self.angleB))\n sinB = math.sin(math.radians(self.angleB))\n\n cx, cy = get_intersection(x1, y1, cosA, sinA,\n x2, y2, cosB, sinB)\n\n vertices = [(x1, y1), (cx, cy), (x2, y2)]\n codes = [Path.MOVETO, Path.CURVE3, Path.CURVE3]\n\n return Path(vertices, codes)\n\n @_register_style(_style_list)\n class Angle(_Base):\n \"\"\"\n Creates a piecewise continuous quadratic Bezier path between\n two points. 
The path has a one passing-through point placed at\n the intersecting point of two lines which cross the start\n and end point, and have a slope of angleA and angleB, respectively.\n The connecting edges are rounded with *rad*.\n \"\"\"\n\n def __init__(self, angleA=90, angleB=0, rad=0.):\n \"\"\"\n *angleA*\n starting angle of the path\n\n *angleB*\n ending angle of the path\n\n *rad*\n rounding radius of the edge\n \"\"\"\n\n self.angleA = angleA\n self.angleB = angleB\n\n self.rad = rad\n\n def connect(self, posA, posB):\n x1, y1 = posA\n x2, y2 = posB\n\n cosA = math.cos(math.radians(self.angleA))\n sinA = math.sin(math.radians(self.angleA))\n cosB = math.cos(math.radians(self.angleB))\n sinB = math.sin(math.radians(self.angleB))\n\n cx, cy = get_intersection(x1, y1, cosA, sinA,\n x2, y2, cosB, sinB)\n\n vertices = [(x1, y1)]\n codes = [Path.MOVETO]\n\n if self.rad == 0.:\n vertices.append((cx, cy))\n codes.append(Path.LINETO)\n else:\n dx1, dy1 = x1 - cx, y1 - cy\n d1 = np.hypot(dx1, dy1)\n f1 = self.rad / d1\n dx2, dy2 = x2 - cx, y2 - cy\n d2 = np.hypot(dx2, dy2)\n f2 = self.rad / d2\n vertices.extend([(cx + dx1 * f1, cy + dy1 * f1),\n (cx, cy),\n (cx + dx2 * f2, cy + dy2 * f2)])\n codes.extend([Path.LINETO, Path.CURVE3, Path.CURVE3])\n\n vertices.append((x2, y2))\n codes.append(Path.LINETO)\n\n return Path(vertices, codes)\n\n @_register_style(_style_list)\n class Arc(_Base):\n \"\"\"\n Creates a piecewise continuous quadratic Bezier path between\n two points. The path can have two passing-through points, a\n point placed at the distance of armA and angle of angleA from\n point A, another point with respect to point B. The edges are\n rounded with *rad*.\n \"\"\"\n\n def __init__(self, angleA=0, angleB=0, armA=None, armB=None, rad=0.):\n \"\"\"\n *angleA* :\n starting angle of the path\n\n *angleB* :\n ending angle of the path\n\n *armA* :\n length of the starting arm\n\n *armB* :\n length of the ending arm\n\n *rad* :\n rounding radius of the edges\n \"\"\"\n\n self.angleA = angleA\n self.angleB = angleB\n self.armA = armA\n self.armB = armB\n\n self.rad = rad\n\n def connect(self, posA, posB):\n x1, y1 = posA\n x2, y2 = posB\n\n vertices = [(x1, y1)]\n rounded = []\n codes = [Path.MOVETO]\n\n if self.armA:\n cosA = math.cos(math.radians(self.angleA))\n sinA = math.sin(math.radians(self.angleA))\n # x_armA, y_armB\n d = self.armA - self.rad\n rounded.append((x1 + d * cosA, y1 + d * sinA))\n d = self.armA\n rounded.append((x1 + d * cosA, y1 + d * sinA))\n\n if self.armB:\n cosB = math.cos(math.radians(self.angleB))\n sinB = math.sin(math.radians(self.angleB))\n x_armB, y_armB = x2 + self.armB * cosB, y2 + self.armB * sinB\n\n if rounded:\n xp, yp = rounded[-1]\n dx, dy = x_armB - xp, y_armB - yp\n dd = (dx * dx + dy * dy) ** .5\n\n rounded.append((xp + self.rad * dx / dd,\n yp + self.rad * dy / dd))\n vertices.extend(rounded)\n codes.extend([Path.LINETO,\n Path.CURVE3,\n Path.CURVE3])\n else:\n xp, yp = vertices[-1]\n dx, dy = x_armB - xp, y_armB - yp\n dd = (dx * dx + dy * dy) ** .5\n\n d = dd - self.rad\n rounded = [(xp + d * dx / dd, yp + d * dy / dd),\n (x_armB, y_armB)]\n\n if rounded:\n xp, yp = rounded[-1]\n dx, dy = x2 - xp, y2 - yp\n dd = (dx * dx + dy * dy) ** .5\n\n rounded.append((xp + self.rad * dx / dd,\n yp + self.rad * dy / dd))\n vertices.extend(rounded)\n codes.extend([Path.LINETO,\n Path.CURVE3,\n Path.CURVE3])\n\n vertices.append((x2, y2))\n codes.append(Path.LINETO)\n\n return Path(vertices, codes)\n\n @_register_style(_style_list)\n class Bar(_Base):\n \"\"\"\n 
A line with *angle* between A and B with *armA* and\n *armB*. One of the arms is extended so that they are connected in\n a right angle. The length of armA is determined by (*armA*\n + *fraction* x AB distance). Same for armB.\n \"\"\"\n\n def __init__(self, armA=0., armB=0., fraction=0.3, angle=None):\n \"\"\"\n Parameters\n ----------\n armA : float\n minimum length of armA\n\n armB : float\n minimum length of armB\n\n fraction : float\n a fraction of the distance between two points that\n will be added to armA and armB.\n\n angle : float or None\n angle of the connecting line (if None, parallel\n to A and B)\n \"\"\"\n self.armA = armA\n self.armB = armB\n self.fraction = fraction\n self.angle = angle\n\n def connect(self, posA, posB):\n x1, y1 = posA\n x20, y20 = x2, y2 = posB\n\n theta1 = math.atan2(y2 - y1, x2 - x1)\n dx, dy = x2 - x1, y2 - y1\n dd = (dx * dx + dy * dy) ** .5\n ddx, ddy = dx / dd, dy / dd\n\n armA, armB = self.armA, self.armB\n\n if self.angle is not None:\n theta0 = np.deg2rad(self.angle)\n dtheta = theta1 - theta0\n dl = dd * math.sin(dtheta)\n dL = dd * math.cos(dtheta)\n x2, y2 = x1 + dL * math.cos(theta0), y1 + dL * math.sin(theta0)\n armB = armB - dl\n\n # update\n dx, dy = x2 - x1, y2 - y1\n dd2 = (dx * dx + dy * dy) ** .5\n ddx, ddy = dx / dd2, dy / dd2\n\n arm = max(armA, armB)\n f = self.fraction * dd + arm\n\n cx1, cy1 = x1 + f * ddy, y1 - f * ddx\n cx2, cy2 = x2 + f * ddy, y2 - f * ddx\n\n vertices = [(x1, y1),\n (cx1, cy1),\n (cx2, cy2),\n (x20, y20)]\n codes = [Path.MOVETO,\n Path.LINETO,\n Path.LINETO,\n Path.LINETO]\n\n return Path(vertices, codes)\n\n\ndef _point_along_a_line(x0, y0, x1, y1, d):\n \"\"\"\n Return the point on the line connecting (*x0*, *y0*) -- (*x1*, *y1*) whose\n distance from (*x0*, *y0*) is *d*.\n \"\"\"\n dx, dy = x0 - x1, y0 - y1\n ff = d / (dx * dx + dy * dy) ** .5\n x2, y2 = x0 - ff * dx, y0 - ff * dy\n\n return x2, y2\n\n\nclass ArrowStyle(_Style):\n \"\"\"\n `ArrowStyle` is a container class which defines several\n arrowstyle classes, which is used to create an arrow path along a\n given path. These are mainly used with `FancyArrowPatch`.\n\n A arrowstyle object can be either created as::\n\n ArrowStyle.Fancy(head_length=.4, head_width=.4, tail_width=.4)\n\n or::\n\n ArrowStyle(\"Fancy\", head_length=.4, head_width=.4, tail_width=.4)\n\n or::\n\n ArrowStyle(\"Fancy, head_length=.4, head_width=.4, tail_width=.4\")\n\n The following classes are defined\n\n %(AvailableArrowstyles)s\n\n An instance of any arrow style class is a callable object,\n whose call signature is::\n\n __call__(self, path, mutation_size, linewidth, aspect_ratio=1.)\n\n and it returns a tuple of a `.Path` instance and a boolean\n value. *path* is a `.Path` instance along which the arrow\n will be drawn. *mutation_size* and *aspect_ratio* have the same\n meaning as in `BoxStyle`. *linewidth* is a line width to be\n stroked. This is meant to be used to correct the location of the\n head so that it does not overshoot the destination point, but not all\n classes support it.\n \"\"\"\n\n _style_list = {}\n\n class _Base:\n \"\"\"\n Arrow Transmuter Base class\n\n ArrowTransmuterBase and its derivatives are used to make a fancy\n arrow around a given path. The __call__ method returns a path\n (which will be used to create a PathPatch instance) and a boolean\n value indicating the path is open therefore is not fillable. 
This\n class is not an artist and actual drawing of the fancy arrow is\n done by the FancyArrowPatch class.\n \"\"\"\n\n # The derived classes are required to be able to be initialized\n # w/o arguments, i.e., all its argument (except self) must have\n # the default values.\n\n @staticmethod\n def ensure_quadratic_bezier(path):\n \"\"\"\n Some ArrowStyle classes only works with a simple quadratic\n Bezier curve (created with `.ConnectionStyle.Arc3` or\n `.ConnectionStyle.Angle3`). This static method checks if the\n provided path is a simple quadratic Bezier curve and returns its\n control points if true.\n \"\"\"\n segments = list(path.iter_segments())\n if (len(segments) != 2 or segments[0][1] != Path.MOVETO or\n segments[1][1] != Path.CURVE3):\n raise ValueError(\n \"'path' is not a valid quadratic Bezier curve\")\n return [*segments[0][0], *segments[1][0]]\n\n def transmute(self, path, mutation_size, linewidth):\n \"\"\"\n The transmute method is the very core of the ArrowStyle class and\n must be overridden in the subclasses. It receives the path object\n along which the arrow will be drawn, and the mutation_size, with\n which the arrow head etc. will be scaled. The linewidth may be\n used to adjust the path so that it does not pass beyond the given\n points. It returns a tuple of a Path instance and a boolean. The\n boolean value indicate whether the path can be filled or not. The\n return value can also be a list of paths and list of booleans of a\n same length.\n \"\"\"\n raise NotImplementedError('Derived must override')\n\n def __call__(self, path, mutation_size, linewidth,\n aspect_ratio=1.):\n \"\"\"\n The __call__ method is a thin wrapper around the transmute method\n and takes care of the aspect ratio.\n \"\"\"\n\n if aspect_ratio is not None:\n # Squeeze the given height by the aspect_ratio\n vertices = path.vertices / [1, aspect_ratio]\n path_shrunk = Path(vertices, path.codes)\n # call transmute method with squeezed height.\n path_mutated, fillable = self.transmute(path_shrunk,\n mutation_size,\n linewidth)\n if np.iterable(fillable):\n path_list = []\n for p in path_mutated:\n # Restore the height\n path_list.append(\n Path(p.vertices * [1, aspect_ratio], p.codes))\n return path_list, fillable\n else:\n return path_mutated, fillable\n else:\n return self.transmute(path, mutation_size, linewidth)\n\n class _Curve(_Base):\n \"\"\"\n A simple arrow which will work with any path instance. The\n returned path is the concatenation of the original path, and at\n most two paths representing the arrow head or bracket at the begin\n point and at the end point. 
The arrow heads can be either open\n or closed.\n \"\"\"\n\n beginarrow = endarrow = None # Whether arrows are drawn.\n arrow = \"-\"\n fillbegin = fillend = False # Whether arrows are filled.\n\n def __init__(self, head_length=.4, head_width=.2, widthA=1., widthB=1.,\n lengthA=0.2, lengthB=0.2, angleA=0, angleB=0, scaleA=None,\n scaleB=None):\n \"\"\"\n Parameters\n ----------\n head_length : float, default: 0.4\n Length of the arrow head, relative to *mutation_scale*.\n head_width : float, default: 0.2\n Width of the arrow head, relative to *mutation_scale*.\n widthA : float, default: 1.0\n Width of the bracket at the beginning of the arrow\n widthB : float, default: 1.0\n Width of the bracket at the end of the arrow\n lengthA : float, default: 0.2\n Length of the bracket at the beginning of the arrow\n lengthB : float, default: 0.2\n Length of the bracket at the end of the arrow\n angleA : float, default 0\n Orientation of the bracket at the beginning, as a\n counterclockwise angle. 0 degrees means perpendicular\n to the line.\n angleB : float, default 0\n Orientation of the bracket at the beginning, as a\n counterclockwise angle. 0 degrees means perpendicular\n to the line.\n scaleA : float, default *mutation_size*\n The mutation_size for the beginning bracket\n scaleB : float, default *mutation_size*\n The mutation_size for the end bracket\n \"\"\"\n\n self.head_length, self.head_width = head_length, head_width\n self.widthA, self.widthB = widthA, widthB\n self.lengthA, self.lengthB = lengthA, lengthB\n self.angleA, self.angleB = angleA, angleB\n self.scaleA, self.scaleB = scaleA, scaleB\n\n self._beginarrow_head = False\n self._beginarrow_bracket = False\n self._endarrow_head = False\n self._endarrow_bracket = False\n\n if \"-\" not in self.arrow:\n raise ValueError(\"arrow must have the '-' between \"\n \"the two heads\")\n\n beginarrow, endarrow = self.arrow.split(\"-\", 1)\n\n if beginarrow == \"<\":\n self._beginarrow_head = True\n self._beginarrow_bracket = False\n elif beginarrow == \"<|\":\n self._beginarrow_head = True\n self._beginarrow_bracket = False\n self.fillbegin = True\n elif beginarrow in (\"]\", \"|\"):\n self._beginarrow_head = False\n self._beginarrow_bracket = True\n elif self.beginarrow is True:\n self._beginarrow_head = True\n self._beginarrow_bracket = False\n\n _api.warn_deprecated('3.5', name=\"beginarrow\",\n alternative=\"arrow\")\n elif self.beginarrow is False:\n self._beginarrow_head = False\n self._beginarrow_bracket = False\n\n _api.warn_deprecated('3.5', name=\"beginarrow\",\n alternative=\"arrow\")\n\n if endarrow == \">\":\n self._endarrow_head = True\n self._endarrow_bracket = False\n elif endarrow == \"|>\":\n self._endarrow_head = True\n self._endarrow_bracket = False\n self.fillend = True\n elif endarrow in (\"[\", \"|\"):\n self._endarrow_head = False\n self._endarrow_bracket = True\n elif self.endarrow is True:\n self._endarrow_head = True\n self._endarrow_bracket = False\n\n _api.warn_deprecated('3.5', name=\"endarrow\",\n alternative=\"arrow\")\n elif self.endarrow is False:\n self._endarrow_head = False\n self._endarrow_bracket = False\n\n _api.warn_deprecated('3.5', name=\"endarrow\",\n alternative=\"arrow\")\n\n super().__init__()\n\n def _get_arrow_wedge(self, x0, y0, x1, y1,\n head_dist, cos_t, sin_t, linewidth):\n \"\"\"\n Return the paths for arrow heads. Since arrow lines are\n drawn with capstyle=projected, The arrow goes beyond the\n desired point. 
This method also returns the amount of the path\n to be shrunken so that it does not overshoot.\n \"\"\"\n\n # arrow from x0, y0 to x1, y1\n dx, dy = x0 - x1, y0 - y1\n\n cp_distance = np.hypot(dx, dy)\n\n # pad_projected : amount of pad to account the\n # overshooting of the projection of the wedge\n pad_projected = (.5 * linewidth / sin_t)\n\n # Account for division by zero\n if cp_distance == 0:\n cp_distance = 1\n\n # apply pad for projected edge\n ddx = pad_projected * dx / cp_distance\n ddy = pad_projected * dy / cp_distance\n\n # offset for arrow wedge\n dx = dx / cp_distance * head_dist\n dy = dy / cp_distance * head_dist\n\n dx1, dy1 = cos_t * dx + sin_t * dy, -sin_t * dx + cos_t * dy\n dx2, dy2 = cos_t * dx - sin_t * dy, sin_t * dx + cos_t * dy\n\n vertices_arrow = [(x1 + ddx + dx1, y1 + ddy + dy1),\n (x1 + ddx, y1 + ddy),\n (x1 + ddx + dx2, y1 + ddy + dy2)]\n codes_arrow = [Path.MOVETO,\n Path.LINETO,\n Path.LINETO]\n\n return vertices_arrow, codes_arrow, ddx, ddy\n\n def _get_bracket(self, x0, y0,\n x1, y1, width, length, angle):\n\n cos_t, sin_t = get_cos_sin(x1, y1, x0, y0)\n\n # arrow from x0, y0 to x1, y1\n from matplotlib.bezier import get_normal_points\n x1, y1, x2, y2 = get_normal_points(x0, y0, cos_t, sin_t, width)\n\n dx, dy = length * cos_t, length * sin_t\n\n vertices_arrow = [(x1 + dx, y1 + dy),\n (x1, y1),\n (x2, y2),\n (x2 + dx, y2 + dy)]\n codes_arrow = [Path.MOVETO,\n Path.LINETO,\n Path.LINETO,\n Path.LINETO]\n\n if angle:\n trans = transforms.Affine2D().rotate_deg_around(x0, y0, angle)\n vertices_arrow = trans.transform(vertices_arrow)\n\n return vertices_arrow, codes_arrow\n\n def transmute(self, path, mutation_size, linewidth):\n\n if self._beginarrow_head or self._endarrow_head:\n head_length = self.head_length * mutation_size\n head_width = self.head_width * mutation_size\n head_dist = np.hypot(head_length, head_width)\n cos_t, sin_t = head_length / head_dist, head_width / head_dist\n\n scaleA = mutation_size if self.scaleA is None else self.scaleA\n scaleB = mutation_size if self.scaleB is None else self.scaleB\n\n # begin arrow\n x0, y0 = path.vertices[0]\n x1, y1 = path.vertices[1]\n\n # If there is no room for an arrow and a line, then skip the arrow\n has_begin_arrow = self._beginarrow_head and (x0, y0) != (x1, y1)\n verticesA, codesA, ddxA, ddyA = (\n self._get_arrow_wedge(x1, y1, x0, y0,\n head_dist, cos_t, sin_t, linewidth)\n if has_begin_arrow\n else ([], [], 0, 0)\n )\n\n # end arrow\n x2, y2 = path.vertices[-2]\n x3, y3 = path.vertices[-1]\n\n # If there is no room for an arrow and a line, then skip the arrow\n has_end_arrow = self._endarrow_head and (x2, y2) != (x3, y3)\n verticesB, codesB, ddxB, ddyB = (\n self._get_arrow_wedge(x2, y2, x3, y3,\n head_dist, cos_t, sin_t, linewidth)\n if has_end_arrow\n else ([], [], 0, 0)\n )\n\n # This simple code will not work if ddx, ddy is greater than the\n # separation between vertices.\n _path = [Path(np.concatenate([[(x0 + ddxA, y0 + ddyA)],\n path.vertices[1:-1],\n [(x3 + ddxB, y3 + ddyB)]]),\n path.codes)]\n _fillable = [False]\n\n if has_begin_arrow:\n if self.fillbegin:\n p = np.concatenate([verticesA, [verticesA[0],\n verticesA[0]], ])\n c = np.concatenate([codesA, [Path.LINETO, Path.CLOSEPOLY]])\n _path.append(Path(p, c))\n _fillable.append(True)\n else:\n _path.append(Path(verticesA, codesA))\n _fillable.append(False)\n elif self._beginarrow_bracket:\n x0, y0 = path.vertices[0]\n x1, y1 = path.vertices[1]\n verticesA, codesA = self._get_bracket(x0, y0, x1, y1,\n self.widthA * scaleA,\n 
self.lengthA * scaleA,\n self.angleA)\n\n _path.append(Path(verticesA, codesA))\n _fillable.append(False)\n\n if has_end_arrow:\n if self.fillend:\n _fillable.append(True)\n p = np.concatenate([verticesB, [verticesB[0],\n verticesB[0]], ])\n c = np.concatenate([codesB, [Path.LINETO, Path.CLOSEPOLY]])\n _path.append(Path(p, c))\n else:\n _fillable.append(False)\n _path.append(Path(verticesB, codesB))\n elif self._endarrow_bracket:\n x0, y0 = path.vertices[-1]\n x1, y1 = path.vertices[-2]\n verticesB, codesB = self._get_bracket(x0, y0, x1, y1,\n self.widthB * scaleB,\n self.lengthB * scaleB,\n self.angleB)\n\n _path.append(Path(verticesB, codesB))\n _fillable.append(False)\n\n return _path, _fillable\n\n @_register_style(_style_list, name=\"-\")\n class Curve(_Curve):\n \"\"\"A simple curve without any arrow head.\"\"\"\n\n def __init__(self): # hide head_length, head_width\n # These attributes (whose values come from backcompat) only matter\n # if someone modifies beginarrow/etc. on an ArrowStyle instance.\n super().__init__(head_length=.2, head_width=.1)\n\n @_register_style(_style_list, name=\"<-\")\n class CurveA(_Curve):\n \"\"\"An arrow with a head at its begin point.\"\"\"\n arrow = \"<-\"\n\n @_register_style(_style_list, name=\"->\")\n class CurveB(_Curve):\n \"\"\"An arrow with a head at its end point.\"\"\"\n arrow = \"->\"\n\n @_register_style(_style_list, name=\"<->\")\n class CurveAB(_Curve):\n \"\"\"An arrow with heads both at the begin and the end point.\"\"\"\n arrow = \"<->\"\n\n @_register_style(_style_list, name=\"<|-\")\n class CurveFilledA(_Curve):\n \"\"\"An arrow with filled triangle head at the begin.\"\"\"\n arrow = \"<|-\"\n\n @_register_style(_style_list, name=\"-|>\")\n class CurveFilledB(_Curve):\n \"\"\"An arrow with filled triangle head at the end.\"\"\"\n arrow = \"-|>\"\n\n @_register_style(_style_list, name=\"<|-|>\")\n class CurveFilledAB(_Curve):\n \"\"\"An arrow with filled triangle heads at both ends.\"\"\"\n arrow = \"<|-|>\"\n\n @_register_style(_style_list, name=\"]-\")\n class BracketA(_Curve):\n \"\"\"An arrow with an outward square bracket at its start.\"\"\"\n arrow = \"]-\"\n\n def __init__(self, widthA=1., lengthA=0.2, angleA=0):\n \"\"\"\n Parameters\n ----------\n widthA : float, default: 1.0\n Width of the bracket.\n lengthA : float, default: 0.2\n Length of the bracket.\n angleA : float, default: 0 degrees\n Orientation of the bracket, as a counterclockwise angle.\n 0 degrees means perpendicular to the line.\n \"\"\"\n super().__init__(widthA=widthA, lengthA=lengthA, angleA=angleA)\n\n @_register_style(_style_list, name=\"-[\")\n class BracketB(_Curve):\n \"\"\"An arrow with an outward square bracket at its end.\"\"\"\n arrow = \"-[\"\n\n def __init__(self, widthB=1., lengthB=0.2, angleB=0):\n \"\"\"\n Parameters\n ----------\n widthB : float, default: 1.0\n Width of the bracket.\n lengthB : float, default: 0.2\n Length of the bracket.\n angleB : float, default: 0 degrees\n Orientation of the bracket, as a counterclockwise angle.\n 0 degrees means perpendicular to the line.\n \"\"\"\n super().__init__(widthB=widthB, lengthB=lengthB, angleB=angleB)\n\n @_register_style(_style_list, name=\"]-[\")\n class BracketAB(_Curve):\n \"\"\"An arrow with outward square brackets at both ends.\"\"\"\n arrow = \"]-[\"\n\n def __init__(self,\n widthA=1., lengthA=0.2, angleA=0,\n widthB=1., lengthB=0.2, angleB=0):\n \"\"\"\n Parameters\n ----------\n widthA, widthB : float, default: 1.0\n Width of the bracket.\n lengthA, lengthB : float, default: 0.2\n 
Length of the bracket.\n angleA, angleB : float, default: 0 degrees\n Orientation of the bracket, as a counterclockwise angle.\n 0 degrees means perpendicular to the line.\n \"\"\"\n super().__init__(widthA=widthA, lengthA=lengthA, angleA=angleA,\n widthB=widthB, lengthB=lengthB, angleB=angleB)\n\n @_register_style(_style_list, name=\"|-|\")\n class BarAB(_Curve):\n \"\"\"An arrow with vertical bars ``|`` at both ends.\"\"\"\n arrow = \"|-|\"\n\n def __init__(self, widthA=1., angleA=0, widthB=1., angleB=0):\n \"\"\"\n Parameters\n ----------\n widthA, widthB : float, default: 1.0\n Width of the bracket.\n angleA, angleB : float, default: 0 degrees\n Orientation of the bracket, as a counterclockwise angle.\n 0 degrees means perpendicular to the line.\n \"\"\"\n super().__init__(widthA=widthA, lengthA=0, angleA=angleA,\n widthB=widthB, lengthB=0, angleB=angleB)\n\n @_register_style(_style_list, name=']->')\n class BracketCurve(_Curve):\n \"\"\"\n An arrow with an outward square bracket at its start and a head at\n the end.\n \"\"\"\n arrow = \"]->\"\n\n def __init__(self, widthA=1., lengthA=0.2, angleA=None):\n \"\"\"\n Parameters\n ----------\n widthA : float, default: 1.0\n Width of the bracket.\n lengthA : float, default: 0.2\n Length of the bracket.\n angleA : float, default: 0 degrees\n Orientation of the bracket, as a counterclockwise angle.\n 0 degrees means perpendicular to the line.\n \"\"\"\n super().__init__(widthA=widthA, lengthA=lengthA, angleA=angleA)\n\n @_register_style(_style_list, name='<-[')\n class CurveBracket(_Curve):\n \"\"\"\n An arrow with an outward square bracket at its end and a head at\n the start.\n \"\"\"\n arrow = \"<-[\"\n\n def __init__(self, widthB=1., lengthB=0.2, angleB=None):\n \"\"\"\n Parameters\n ----------\n widthB : float, default: 1.0\n Width of the bracket.\n lengthB : float, default: 0.2\n Length of the bracket.\n angleB : float, default: 0 degrees\n Orientation of the bracket, as a counterclockwise angle.\n 0 degrees means perpendicular to the line.\n \"\"\"\n super().__init__(widthB=widthB, lengthB=lengthB, angleB=angleB)\n\n @_register_style(_style_list)\n class Simple(_Base):\n \"\"\"A simple arrow. 
Only works with a quadratic Bezier curve.\"\"\"\n\n def __init__(self, head_length=.5, head_width=.5, tail_width=.2):\n \"\"\"\n Parameters\n ----------\n head_length : float, default: 0.5\n Length of the arrow head.\n\n head_width : float, default: 0.5\n Width of the arrow head.\n\n tail_width : float, default: 0.2\n Width of the arrow tail.\n \"\"\"\n self.head_length, self.head_width, self.tail_width = \\\n head_length, head_width, tail_width\n super().__init__()\n\n def transmute(self, path, mutation_size, linewidth):\n\n x0, y0, x1, y1, x2, y2 = self.ensure_quadratic_bezier(path)\n\n # divide the path into a head and a tail\n head_length = self.head_length * mutation_size\n in_f = inside_circle(x2, y2, head_length)\n arrow_path = [(x0, y0), (x1, y1), (x2, y2)]\n\n try:\n arrow_out, arrow_in = \\\n split_bezier_intersecting_with_closedpath(\n arrow_path, in_f, tolerance=0.01)\n except NonIntersectingPathException:\n # if this happens, make a straight line of the head_length\n # long.\n x0, y0 = _point_along_a_line(x2, y2, x1, y1, head_length)\n x1n, y1n = 0.5 * (x0 + x2), 0.5 * (y0 + y2)\n arrow_in = [(x0, y0), (x1n, y1n), (x2, y2)]\n arrow_out = None\n\n # head\n head_width = self.head_width * mutation_size\n head_left, head_right = make_wedged_bezier2(arrow_in,\n head_width / 2., wm=.5)\n\n # tail\n if arrow_out is not None:\n tail_width = self.tail_width * mutation_size\n tail_left, tail_right = get_parallels(arrow_out,\n tail_width / 2.)\n\n patch_path = [(Path.MOVETO, tail_right[0]),\n (Path.CURVE3, tail_right[1]),\n (Path.CURVE3, tail_right[2]),\n (Path.LINETO, head_right[0]),\n (Path.CURVE3, head_right[1]),\n (Path.CURVE3, head_right[2]),\n (Path.CURVE3, head_left[1]),\n (Path.CURVE3, head_left[0]),\n (Path.LINETO, tail_left[2]),\n (Path.CURVE3, tail_left[1]),\n (Path.CURVE3, tail_left[0]),\n (Path.LINETO, tail_right[0]),\n (Path.CLOSEPOLY, tail_right[0]),\n ]\n else:\n patch_path = [(Path.MOVETO, head_right[0]),\n (Path.CURVE3, head_right[1]),\n (Path.CURVE3, head_right[2]),\n (Path.CURVE3, head_left[1]),\n (Path.CURVE3, head_left[0]),\n (Path.CLOSEPOLY, head_left[0]),\n ]\n\n path = Path([p for c, p in patch_path], [c for c, p in patch_path])\n\n return path, True\n\n @_register_style(_style_list)\n class Fancy(_Base):\n \"\"\"A fancy arrow. 
Only works with a quadratic Bezier curve.\"\"\"\n\n def __init__(self, head_length=.4, head_width=.4, tail_width=.4):\n \"\"\"\n Parameters\n ----------\n head_length : float, default: 0.4\n Length of the arrow head.\n\n head_width : float, default: 0.4\n Width of the arrow head.\n\n tail_width : float, default: 0.4\n Width of the arrow tail.\n \"\"\"\n self.head_length, self.head_width, self.tail_width = \\\n head_length, head_width, tail_width\n super().__init__()\n\n def transmute(self, path, mutation_size, linewidth):\n\n x0, y0, x1, y1, x2, y2 = self.ensure_quadratic_bezier(path)\n\n # divide the path into a head and a tail\n head_length = self.head_length * mutation_size\n arrow_path = [(x0, y0), (x1, y1), (x2, y2)]\n\n # path for head\n in_f = inside_circle(x2, y2, head_length)\n try:\n path_out, path_in = split_bezier_intersecting_with_closedpath(\n arrow_path, in_f, tolerance=0.01)\n except NonIntersectingPathException:\n # if this happens, make a straight line of the head_length\n # long.\n x0, y0 = _point_along_a_line(x2, y2, x1, y1, head_length)\n x1n, y1n = 0.5 * (x0 + x2), 0.5 * (y0 + y2)\n arrow_path = [(x0, y0), (x1n, y1n), (x2, y2)]\n path_head = arrow_path\n else:\n path_head = path_in\n\n # path for head\n in_f = inside_circle(x2, y2, head_length * .8)\n path_out, path_in = split_bezier_intersecting_with_closedpath(\n arrow_path, in_f, tolerance=0.01)\n path_tail = path_out\n\n # head\n head_width = self.head_width * mutation_size\n head_l, head_r = make_wedged_bezier2(path_head,\n head_width / 2.,\n wm=.6)\n\n # tail\n tail_width = self.tail_width * mutation_size\n tail_left, tail_right = make_wedged_bezier2(path_tail,\n tail_width * .5,\n w1=1., wm=0.6, w2=0.3)\n\n # path for head\n in_f = inside_circle(x0, y0, tail_width * .3)\n path_in, path_out = split_bezier_intersecting_with_closedpath(\n arrow_path, in_f, tolerance=0.01)\n tail_start = path_in[-1]\n\n head_right, head_left = head_r, head_l\n patch_path = [(Path.MOVETO, tail_start),\n (Path.LINETO, tail_right[0]),\n (Path.CURVE3, tail_right[1]),\n (Path.CURVE3, tail_right[2]),\n (Path.LINETO, head_right[0]),\n (Path.CURVE3, head_right[1]),\n (Path.CURVE3, head_right[2]),\n (Path.CURVE3, head_left[1]),\n (Path.CURVE3, head_left[0]),\n (Path.LINETO, tail_left[2]),\n (Path.CURVE3, tail_left[1]),\n (Path.CURVE3, tail_left[0]),\n (Path.LINETO, tail_start),\n (Path.CLOSEPOLY, tail_start),\n ]\n path = Path([p for c, p in patch_path], [c for c, p in patch_path])\n\n return path, True\n\n @_register_style(_style_list)\n class Wedge(_Base):\n \"\"\"\n Wedge(?) shape. Only works with a quadratic Bezier curve. The\n begin point has a width of the tail_width and the end point has a\n width of 0. 
At the middle, the width is shrink_factor*tail_width.\n \"\"\"\n\n def __init__(self, tail_width=.3, shrink_factor=0.5):\n \"\"\"\n Parameters\n ----------\n tail_width : float, default: 0.3\n Width of the tail.\n\n shrink_factor : float, default: 0.5\n Fraction of the arrow width at the middle point.\n \"\"\"\n self.tail_width = tail_width\n self.shrink_factor = shrink_factor\n super().__init__()\n\n def transmute(self, path, mutation_size, linewidth):\n\n x0, y0, x1, y1, x2, y2 = self.ensure_quadratic_bezier(path)\n\n arrow_path = [(x0, y0), (x1, y1), (x2, y2)]\n b_plus, b_minus = make_wedged_bezier2(\n arrow_path,\n self.tail_width * mutation_size / 2.,\n wm=self.shrink_factor)\n\n patch_path = [(Path.MOVETO, b_plus[0]),\n (Path.CURVE3, b_plus[1]),\n (Path.CURVE3, b_plus[2]),\n (Path.LINETO, b_minus[2]),\n (Path.CURVE3, b_minus[1]),\n (Path.CURVE3, b_minus[0]),\n (Path.CLOSEPOLY, b_minus[0]),\n ]\n path = Path([p for c, p in patch_path], [c for c, p in patch_path])\n\n return path, True\n\n\ndocstring.interpd.update(\n AvailableBoxstyles=BoxStyle.pprint_styles(),\n ListBoxstyles=_simpleprint_styles(BoxStyle._style_list),\n AvailableArrowstyles=ArrowStyle.pprint_styles(),\n AvailableConnectorstyles=ConnectionStyle.pprint_styles(),\n)\ndocstring.dedent_interpd(BoxStyle)\ndocstring.dedent_interpd(ArrowStyle)\ndocstring.dedent_interpd(ConnectionStyle)\n\n\nclass FancyBboxPatch(Patch):\n \"\"\"\n A fancy box around a rectangle with lower left at *xy* = (*x*, *y*)\n with specified width and height.\n\n `.FancyBboxPatch` is similar to `.Rectangle`, but it draws a fancy box\n around the rectangle. The transformation of the rectangle box to the\n fancy box is delegated to the style classes defined in `.BoxStyle`.\n \"\"\"\n\n _edge_default = True\n\n def __str__(self):\n s = self.__class__.__name__ + \"((%g, %g), width=%g, height=%g)\"\n return s % (self._x, self._y, self._width, self._height)\n\n @docstring.dedent_interpd\n @_api.delete_parameter(\"3.4\", \"bbox_transmuter\", alternative=\"boxstyle\")\n def __init__(self, xy, width, height,\n boxstyle=\"round\", bbox_transmuter=None,\n mutation_scale=1, mutation_aspect=1,\n **kwargs):\n \"\"\"\n Parameters\n ----------\n xy : float, float\n The lower left corner of the box.\n\n width : float\n The width of the box.\n\n height : float\n The height of the box.\n\n boxstyle : str or `matplotlib.patches.BoxStyle`\n The style of the fancy box. This can either be a `.BoxStyle`\n instance or a string of the style name and optionally comma\n seprarated attributes (e.g. \"Round, pad=0.2\"). This string is\n passed to `.BoxStyle` to construct a `.BoxStyle` object. See\n there for a full documentation.\n\n The following box styles are available:\n\n %(AvailableBoxstyles)s\n\n mutation_scale : float, default: 1\n Scaling factor applied to the attributes of the box style\n (e.g. pad or rounding_size).\n\n mutation_aspect : float, default: 1\n The height of the rectangle will be squeezed by this value before\n the mutation and the mutated box will be stretched by the inverse\n of it. 
For example, this allows different horizontal and vertical\n padding.\n\n Other Parameters\n ----------------\n **kwargs : `.Patch` properties\n\n %(Patch:kwdoc)s\n \"\"\"\n\n super().__init__(**kwargs)\n\n self._x = xy[0]\n self._y = xy[1]\n self._width = width\n self._height = height\n\n if boxstyle == \"custom\":\n _api.warn_deprecated(\n \"3.4\", message=\"Support for boxstyle='custom' is deprecated \"\n \"since %(since)s and will be removed %(removal)s; directly \"\n \"pass a boxstyle instance as the boxstyle parameter instead.\")\n if bbox_transmuter is None:\n raise ValueError(\"bbox_transmuter argument is needed with \"\n \"custom boxstyle\")\n self._bbox_transmuter = bbox_transmuter\n else:\n self.set_boxstyle(boxstyle)\n\n self._mutation_scale = mutation_scale\n self._mutation_aspect = mutation_aspect\n\n self.stale = True\n\n @docstring.dedent_interpd\n def set_boxstyle(self, boxstyle=None, **kwargs):\n \"\"\"\n Set the box style.\n\n Most box styles can be further configured using attributes.\n Attributes from the previous box style are not reused.\n\n Without argument (or with ``boxstyle=None``), the available box styles\n are returned as a human-readable string.\n\n Parameters\n ----------\n boxstyle : str or `matplotlib.patches.BoxStyle`\n The style of the fancy box. This can either be a `.BoxStyle`\n instance or a string of the style name and optionally comma\n seprarated attributes (e.g. \"Round, pad=0.2\"). This string is\n passed to `.BoxStyle` to construct a `.BoxStyle` object. See\n there for a full documentation.\n\n The following box styles are available:\n\n %(AvailableBoxstyles)s\n\n .. ACCEPTS: %(ListBoxstyles)s\n\n **kwargs\n Additional attributes for the box style. See the table above for\n supported parameters.\n\n Examples\n --------\n ::\n\n set_boxstyle(\"round,pad=0.2\")\n set_boxstyle(\"round\", pad=0.2)\n\n \"\"\"\n if boxstyle is None:\n return BoxStyle.pprint_styles()\n\n if isinstance(boxstyle, BoxStyle._Base) or callable(boxstyle):\n self._bbox_transmuter = boxstyle\n else:\n self._bbox_transmuter = BoxStyle(boxstyle, **kwargs)\n self.stale = True\n\n def set_mutation_scale(self, scale):\n \"\"\"\n Set the mutation scale.\n\n Parameters\n ----------\n scale : float\n \"\"\"\n self._mutation_scale = scale\n self.stale = True\n\n def get_mutation_scale(self):\n \"\"\"Return the mutation scale.\"\"\"\n return self._mutation_scale\n\n def set_mutation_aspect(self, aspect):\n \"\"\"\n Set the aspect ratio of the bbox mutation.\n\n Parameters\n ----------\n aspect : float\n \"\"\"\n self._mutation_aspect = aspect\n self.stale = True\n\n def get_mutation_aspect(self):\n \"\"\"Return the aspect ratio of the bbox mutation.\"\"\"\n return (self._mutation_aspect if self._mutation_aspect is not None\n else 1) # backcompat.\n\n def get_boxstyle(self):\n \"\"\"Return the boxstyle object.\"\"\"\n return self._bbox_transmuter\n\n def get_path(self):\n \"\"\"Return the mutated path of the rectangle.\"\"\"\n boxstyle = self.get_boxstyle()\n x = self._x\n y = self._y\n width = self._width\n height = self._height\n m_scale = self.get_mutation_scale()\n m_aspect = self.get_mutation_aspect()\n # Squeeze the given height by the aspect_ratio.\n y, height = y / m_aspect, height / m_aspect\n # Call boxstyle with squeezed height.\n try:\n inspect.signature(boxstyle).bind(x, y, width, height, m_scale)\n except TypeError:\n # Don't apply aspect twice.\n path = boxstyle(x, y, width, height, m_scale, 1)\n _api.warn_deprecated(\n \"3.4\", message=\"boxstyles must be callable without 
the \"\n \"'mutation_aspect' parameter since %(since)s; support for the \"\n \"old call signature will be removed %(removal)s.\")\n else:\n path = boxstyle(x, y, width, height, m_scale)\n vertices, codes = path.vertices, path.codes\n # Restore the height.\n vertices[:, 1] = vertices[:, 1] * m_aspect\n return Path(vertices, codes)\n\n # Following methods are borrowed from the Rectangle class.\n\n def get_x(self):\n \"\"\"Return the left coord of the rectangle.\"\"\"\n return self._x\n\n def get_y(self):\n \"\"\"Return the bottom coord of the rectangle.\"\"\"\n return self._y\n\n def get_width(self):\n \"\"\"Return the width of the rectangle.\"\"\"\n return self._width\n\n def get_height(self):\n \"\"\"Return the height of the rectangle.\"\"\"\n return self._height\n\n def set_x(self, x):\n \"\"\"\n Set the left coord of the rectangle.\n\n Parameters\n ----------\n x : float\n \"\"\"\n self._x = x\n self.stale = True\n\n def set_y(self, y):\n \"\"\"\n Set the bottom coord of the rectangle.\n\n Parameters\n ----------\n y : float\n \"\"\"\n self._y = y\n self.stale = True\n\n def set_width(self, w):\n \"\"\"\n Set the rectangle width.\n\n Parameters\n ----------\n w : float\n \"\"\"\n self._width = w\n self.stale = True\n\n def set_height(self, h):\n \"\"\"\n Set the rectangle height.\n\n Parameters\n ----------\n h : float\n \"\"\"\n self._height = h\n self.stale = True\n\n def set_bounds(self, *args):\n \"\"\"\n Set the bounds of the rectangle.\n\n Call signatures::\n\n set_bounds(left, bottom, width, height)\n set_bounds((left, bottom, width, height))\n\n Parameters\n ----------\n left, bottom : float\n The coordinates of the bottom left corner of the rectangle.\n width, height : float\n The width/height of the rectangle.\n \"\"\"\n if len(args) == 1:\n l, b, w, h = args[0]\n else:\n l, b, w, h = args\n self._x = l\n self._y = b\n self._width = w\n self._height = h\n self.stale = True\n\n def get_bbox(self):\n \"\"\"Return the `.Bbox`.\"\"\"\n return transforms.Bbox.from_bounds(self._x, self._y,\n self._width, self._height)\n\n\nclass FancyArrowPatch(Patch):\n \"\"\"\n A fancy arrow patch. It draws an arrow using the `ArrowStyle`.\n\n The head and tail positions are fixed at the specified start and end points\n of the arrow, but the size and shape (in display coordinates) of the arrow\n does not change when the axis is moved or zoomed.\n \"\"\"\n _edge_default = True\n\n def __str__(self):\n if self._posA_posB is not None:\n (x1, y1), (x2, y2) = self._posA_posB\n return f\"{type(self).__name__}(({x1:g}, {y1:g})->({x2:g}, {y2:g}))\"\n else:\n return f\"{type(self).__name__}({self._path_original})\"\n\n @docstring.dedent_interpd\n @_api.delete_parameter(\"3.4\", \"dpi_cor\")\n def __init__(self, posA=None, posB=None, path=None,\n arrowstyle=\"simple\", connectionstyle=\"arc3\",\n patchA=None, patchB=None,\n shrinkA=2, shrinkB=2,\n mutation_scale=1, mutation_aspect=1,\n dpi_cor=1,\n **kwargs):\n \"\"\"\n There are two ways for defining an arrow:\n\n - If *posA* and *posB* are given, a path connecting two points is\n created according to *connectionstyle*. The path will be\n clipped with *patchA* and *patchB* and further shrunken by\n *shrinkA* and *shrinkB*. 
An arrow is drawn along this\n resulting path using the *arrowstyle* parameter.\n\n - Alternatively if *path* is provided, an arrow is drawn along this\n path and *patchA*, *patchB*, *shrinkA*, and *shrinkB* are ignored.\n\n Parameters\n ----------\n posA, posB : (float, float), default: None\n (x, y) coordinates of arrow tail and arrow head respectively.\n\n path : `~matplotlib.path.Path`, default: None\n If provided, an arrow is drawn along this path and *patchA*,\n *patchB*, *shrinkA*, and *shrinkB* are ignored.\n\n arrowstyle : str or `.ArrowStyle`, default: 'simple'\n The `.ArrowStyle` with which the fancy arrow is drawn. If a\n string, it should be one of the available arrowstyle names, with\n optional comma-separated attributes. The optional attributes are\n meant to be scaled with the *mutation_scale*. The following arrow\n styles are available:\n\n %(AvailableArrowstyles)s\n\n connectionstyle : str or `.ConnectionStyle` or None, optional, \\\ndefault: 'arc3'\n The `.ConnectionStyle` with which *posA* and *posB* are connected.\n If a string, it should be one of the available connectionstyle\n names, with optional comma-separated attributes. The following\n connection styles are available:\n\n %(AvailableConnectorstyles)s\n\n patchA, patchB : `.Patch`, default: None\n Head and tail patches, respectively.\n\n shrinkA, shrinkB : float, default: 2\n Shrinking factor of the tail and head of the arrow respectively.\n\n mutation_scale : float, default: 1\n Value with which attributes of *arrowstyle* (e.g., *head_length*)\n will be scaled.\n\n mutation_aspect : None or float, default: None\n The height of the rectangle will be squeezed by this value before\n the mutation and the mutated box will be stretched by the inverse\n of it.\n\n dpi_cor : float, default: 1\n dpi_cor is currently used for linewidth-related things and shrink\n factor. Mutation scale is affected by this. Deprecated.\n\n Other Parameters\n ----------------\n **kwargs : `.Patch` properties, optional\n Here is a list of available `.Patch` properties:\n\n %(Patch:kwdoc)s\n\n In contrast to other patches, the default ``capstyle`` and\n ``joinstyle`` for `FancyArrowPatch` are set to ``\"round\"``.\n \"\"\"\n # Traditionally, the cap- and joinstyle for FancyArrowPatch are round\n kwargs.setdefault(\"joinstyle\", JoinStyle.round)\n kwargs.setdefault(\"capstyle\", CapStyle.round)\n\n super().__init__(**kwargs)\n\n if posA is not None and posB is not None and path is None:\n self._posA_posB = [posA, posB]\n\n if connectionstyle is None:\n connectionstyle = \"arc3\"\n self.set_connectionstyle(connectionstyle)\n\n elif posA is None and posB is None and path is not None:\n self._posA_posB = None\n else:\n raise ValueError(\"Either posA and posB, or path need to provided\")\n\n self.patchA = patchA\n self.patchB = patchB\n self.shrinkA = shrinkA\n self.shrinkB = shrinkB\n\n self._path_original = path\n\n self.set_arrowstyle(arrowstyle)\n\n self._mutation_scale = mutation_scale\n self._mutation_aspect = mutation_aspect\n\n self._dpi_cor = dpi_cor\n\n @_api.deprecated(\"3.4\")\n def set_dpi_cor(self, dpi_cor):\n \"\"\"\n dpi_cor is currently used for linewidth-related things and\n shrink factor. Mutation scale is affected by this.\n\n Parameters\n ----------\n dpi_cor : float\n \"\"\"\n self._dpi_cor = dpi_cor\n self.stale = True\n\n @_api.deprecated(\"3.4\")\n def get_dpi_cor(self):\n \"\"\"\n dpi_cor is currently used for linewidth-related things and\n shrink factor. 
Mutation scale is affected by this.\n\n Returns\n -------\n scalar\n \"\"\"\n return self._dpi_cor\n\n def set_positions(self, posA, posB):\n \"\"\"\n Set the begin and end positions of the connecting path.\n\n Parameters\n ----------\n posA, posB : None, tuple\n (x, y) coordinates of arrow tail and arrow head respectively. If\n `None` use current value.\n \"\"\"\n if posA is not None:\n self._posA_posB[0] = posA\n if posB is not None:\n self._posA_posB[1] = posB\n self.stale = True\n\n def set_patchA(self, patchA):\n \"\"\"\n Set the tail patch.\n\n Parameters\n ----------\n patchA : `.patches.Patch`\n \"\"\"\n self.patchA = patchA\n self.stale = True\n\n def set_patchB(self, patchB):\n \"\"\"\n Set the head patch.\n\n Parameters\n ----------\n patchB : `.patches.Patch`\n \"\"\"\n self.patchB = patchB\n self.stale = True\n\n def set_connectionstyle(self, connectionstyle, **kwargs):\n \"\"\"\n Set the connection style. Old attributes are forgotten.\n\n Parameters\n ----------\n connectionstyle : str or `.ConnectionStyle` or None, optional\n Can be a string with connectionstyle name with\n optional comma-separated attributes, e.g.::\n\n set_connectionstyle(\"arc,angleA=0,armA=30,rad=10\")\n\n Alternatively, the attributes can be provided as keywords, e.g.::\n\n set_connectionstyle(\"arc\", angleA=0,armA=30,rad=10)\n\n Without any arguments (or with ``connectionstyle=None``), return\n available styles as a list of strings.\n \"\"\"\n\n if connectionstyle is None:\n return ConnectionStyle.pprint_styles()\n\n if (isinstance(connectionstyle, ConnectionStyle._Base) or\n callable(connectionstyle)):\n self._connector = connectionstyle\n else:\n self._connector = ConnectionStyle(connectionstyle, **kwargs)\n self.stale = True\n\n def get_connectionstyle(self):\n \"\"\"Return the `ConnectionStyle` used.\"\"\"\n return self._connector\n\n def set_arrowstyle(self, arrowstyle=None, **kwargs):\n \"\"\"\n Set the arrow style. Old attributes are forgotten. 
Without arguments\n (or with ``arrowstyle=None``) returns available box styles as a list of\n strings.\n\n Parameters\n ----------\n arrowstyle : None or ArrowStyle or str, default: None\n Can be a string with arrowstyle name with optional comma-separated\n attributes, e.g.::\n\n set_arrowstyle(\"Fancy,head_length=0.2\")\n\n Alternatively attributes can be provided as keywords, e.g.::\n\n set_arrowstyle(\"fancy\", head_length=0.2)\n\n \"\"\"\n\n if arrowstyle is None:\n return ArrowStyle.pprint_styles()\n\n if isinstance(arrowstyle, ArrowStyle._Base):\n self._arrow_transmuter = arrowstyle\n else:\n self._arrow_transmuter = ArrowStyle(arrowstyle, **kwargs)\n self.stale = True\n\n def get_arrowstyle(self):\n \"\"\"Return the arrowstyle object.\"\"\"\n return self._arrow_transmuter\n\n def set_mutation_scale(self, scale):\n \"\"\"\n Set the mutation scale.\n\n Parameters\n ----------\n scale : float\n \"\"\"\n self._mutation_scale = scale\n self.stale = True\n\n def get_mutation_scale(self):\n \"\"\"\n Return the mutation scale.\n\n Returns\n -------\n scalar\n \"\"\"\n return self._mutation_scale\n\n def set_mutation_aspect(self, aspect):\n \"\"\"\n Set the aspect ratio of the bbox mutation.\n\n Parameters\n ----------\n aspect : float\n \"\"\"\n self._mutation_aspect = aspect\n self.stale = True\n\n def get_mutation_aspect(self):\n \"\"\"Return the aspect ratio of the bbox mutation.\"\"\"\n return (self._mutation_aspect if self._mutation_aspect is not None\n else 1) # backcompat.\n\n def get_path(self):\n \"\"\"Return the path of the arrow in the data coordinates.\"\"\"\n # The path is generated in display coordinates, then converted back to\n # data coordinates.\n _path, fillable = self._get_path_in_displaycoord()\n if np.iterable(fillable):\n _path = Path.make_compound_path(*_path)\n return self.get_transform().inverted().transform_path(_path)\n\n def _get_path_in_displaycoord(self):\n \"\"\"Return the mutated path of the arrow in display coordinates.\"\"\"\n dpi_cor = self._dpi_cor\n\n if self._posA_posB is not None:\n posA = self._convert_xy_units(self._posA_posB[0])\n posB = self._convert_xy_units(self._posA_posB[1])\n (posA, posB) = self.get_transform().transform((posA, posB))\n _path = self.get_connectionstyle()(posA, posB,\n patchA=self.patchA,\n patchB=self.patchB,\n shrinkA=self.shrinkA * dpi_cor,\n shrinkB=self.shrinkB * dpi_cor\n )\n else:\n _path = self.get_transform().transform_path(self._path_original)\n\n _path, fillable = self.get_arrowstyle()(\n _path,\n self.get_mutation_scale() * dpi_cor,\n self.get_linewidth() * dpi_cor,\n self.get_mutation_aspect())\n\n return _path, fillable\n\n get_path_in_displaycoord = _api.deprecate_privatize_attribute(\n \"3.5\",\n alternative=\"self.get_transform().transform_path(self.get_path())\")\n\n def draw(self, renderer):\n if not self.get_visible():\n return\n\n with self._bind_draw_path_function(renderer) as draw_path:\n\n # FIXME : dpi_cor is for the dpi-dependency of the linewidth. There\n # could be room for improvement. 
Maybe _get_path_in_displaycoord\n # could take a renderer argument, but get_path should be adapted\n # too.\n self._dpi_cor = renderer.points_to_pixels(1.)\n path, fillable = self._get_path_in_displaycoord()\n\n if not np.iterable(fillable):\n path = [path]\n fillable = [fillable]\n\n affine = transforms.IdentityTransform()\n\n for p, f in zip(path, fillable):\n draw_path(\n p, affine,\n self._facecolor if f and self._facecolor[3] else None)\n\n\nclass ConnectionPatch(FancyArrowPatch):\n \"\"\"A patch that connects two points (possibly in different axes).\"\"\"\n\n def __str__(self):\n return \"ConnectionPatch((%g, %g), (%g, %g))\" % \\\n (self.xy1[0], self.xy1[1], self.xy2[0], self.xy2[1])\n\n @docstring.dedent_interpd\n @_api.delete_parameter(\"3.4\", \"dpi_cor\")\n def __init__(self, xyA, xyB, coordsA, coordsB=None,\n axesA=None, axesB=None,\n arrowstyle=\"-\",\n connectionstyle=\"arc3\",\n patchA=None,\n patchB=None,\n shrinkA=0.,\n shrinkB=0.,\n mutation_scale=10.,\n mutation_aspect=None,\n clip_on=False,\n dpi_cor=1.,\n **kwargs):\n \"\"\"\n Connect point *xyA* in *coordsA* with point *xyB* in *coordsB*.\n\n Valid keys are\n\n =============== ======================================================\n Key Description\n =============== ======================================================\n arrowstyle the arrow style\n connectionstyle the connection style\n relpos default is (0.5, 0.5)\n patchA default is bounding box of the text\n patchB default is None\n shrinkA default is 2 points\n shrinkB default is 2 points\n mutation_scale default is text size (in points)\n mutation_aspect default is 1.\n ? any key for `matplotlib.patches.PathPatch`\n =============== ======================================================\n\n *coordsA* and *coordsB* are strings that indicate the\n coordinates of *xyA* and *xyB*.\n\n ==================== ==================================================\n Property Description\n ==================== ==================================================\n 'figure points' points from the lower left corner of the figure\n 'figure pixels' pixels from the lower left corner of the figure\n 'figure fraction' 0, 0 is lower left of figure and 1, 1 is upper\n right\n 'subfigure points' points from the lower left corner of the subfigure\n 'subfigure pixels' pixels from the lower left corner of the subfigure\n 'subfigure fraction' fraction of the subfigure, 0, 0 is lower left.\n 'axes points' points from lower left corner of axes\n 'axes pixels' pixels from lower left corner of axes\n 'axes fraction' 0, 0 is lower left of axes and 1, 1 is upper right\n 'data' use the coordinate system of the object being\n annotated (default)\n 'offset points' offset (in points) from the *xy* value\n 'polar' you can specify *theta*, *r* for the annotation,\n even in cartesian plots. Note that if you are\n using a polar axes, you do not need to specify\n polar for the coordinate system since that is the\n native \"data\" coordinate system.\n ==================== ==================================================\n\n Alternatively they can be set to any valid\n `~matplotlib.transforms.Transform`.\n\n Note that 'subfigure pixels' and 'figure pixels' are the same\n for the parent figure, so users who want code that is usable in\n a subfigure can use 'subfigure pixels'.\n\n .. note::\n\n Using `ConnectionPatch` across two `~.axes.Axes` instances\n is not directly compatible with :doc:`constrained layout\n </tutorials/intermediate/constrainedlayout_guide>`. 
Add the artist\n directly to the `.Figure` instead of adding it to a specific Axes,\n or exclude it from the layout using ``con.set_in_layout(False)``.\n\n .. code-block:: default\n\n fig, ax = plt.subplots(1, 2, constrained_layout=True)\n con = ConnectionPatch(..., axesA=ax[0], axesB=ax[1])\n fig.add_artist(con)\n\n \"\"\"\n if coordsB is None:\n coordsB = coordsA\n # we'll draw ourself after the artist we annotate by default\n self.xy1 = xyA\n self.xy2 = xyB\n self.coords1 = coordsA\n self.coords2 = coordsB\n\n self.axesA = axesA\n self.axesB = axesB\n\n super().__init__(posA=(0, 0), posB=(1, 1),\n arrowstyle=arrowstyle,\n connectionstyle=connectionstyle,\n patchA=patchA, patchB=patchB,\n shrinkA=shrinkA, shrinkB=shrinkB,\n mutation_scale=mutation_scale,\n mutation_aspect=mutation_aspect,\n clip_on=clip_on,\n **kwargs)\n self._dpi_cor = dpi_cor\n\n # if True, draw annotation only if self.xy is inside the axes\n self._annotation_clip = None\n\n def _get_xy(self, xy, s, axes=None):\n \"\"\"Calculate the pixel position of given point.\"\"\"\n s0 = s # For the error message, if needed.\n if axes is None:\n axes = self.axes\n xy = np.array(xy)\n if s in [\"figure points\", \"axes points\"]:\n xy *= self.figure.dpi / 72\n s = s.replace(\"points\", \"pixels\")\n elif s == \"figure fraction\":\n s = self.figure.transFigure\n elif s == \"subfigure fraction\":\n s = self.figure.transSubfigure\n elif s == \"axes fraction\":\n s = axes.transAxes\n x, y = xy\n\n if s == 'data':\n trans = axes.transData\n x = float(self.convert_xunits(x))\n y = float(self.convert_yunits(y))\n return trans.transform((x, y))\n elif s == 'offset points':\n if self.xycoords == 'offset points': # prevent recursion\n return self._get_xy(self.xy, 'data')\n return (\n self._get_xy(self.xy, self.xycoords) # converted data point\n + xy * self.figure.dpi / 72) # converted offset\n elif s == 'polar':\n theta, r = x, y\n x = r * np.cos(theta)\n y = r * np.sin(theta)\n trans = axes.transData\n return trans.transform((x, y))\n elif s == 'figure pixels':\n # pixels from the lower left corner of the figure\n bb = self.figure.figbbox\n x = bb.x0 + x if x >= 0 else bb.x1 + x\n y = bb.y0 + y if y >= 0 else bb.y1 + y\n return x, y\n elif s == 'subfigure pixels':\n # pixels from the lower left corner of the figure\n bb = self.figure.bbox\n x = bb.x0 + x if x >= 0 else bb.x1 + x\n y = bb.y0 + y if y >= 0 else bb.y1 + y\n return x, y\n elif s == 'axes pixels':\n # pixels from the lower left corner of the axes\n bb = axes.bbox\n x = bb.x0 + x if x >= 0 else bb.x1 + x\n y = bb.y0 + y if y >= 0 else bb.y1 + y\n return x, y\n elif isinstance(s, transforms.Transform):\n return s.transform(xy)\n else:\n raise ValueError(f\"{s0} is not a valid coordinate transformation\")\n\n def set_annotation_clip(self, b):\n \"\"\"\n Set the clipping behavior.\n\n Parameters\n ----------\n b : bool or None\n\n - *False*: The annotation will always be drawn regardless of its\n position.\n - *True*: The annotation will only be drawn if ``self.xy`` is\n inside the axes.\n - *None*: The annotation will only be drawn if ``self.xy`` is\n inside the axes and ``self.xycoords == \"data\"``.\n \"\"\"\n self._annotation_clip = b\n self.stale = True\n\n def get_annotation_clip(self):\n \"\"\"\n Return the clipping behavior.\n\n See `.set_annotation_clip` for the meaning of the return value.\n \"\"\"\n return self._annotation_clip\n\n def _get_path_in_displaycoord(self):\n \"\"\"Return the mutated path of the arrow in display coordinates.\"\"\"\n dpi_cor = self._dpi_cor\n 
posA = self._get_xy(self.xy1, self.coords1, self.axesA)\n posB = self._get_xy(self.xy2, self.coords2, self.axesB)\n path = self.get_connectionstyle()(\n posA, posB,\n patchA=self.patchA, patchB=self.patchB,\n shrinkA=self.shrinkA * dpi_cor, shrinkB=self.shrinkB * dpi_cor,\n )\n path, fillable = self.get_arrowstyle()(\n path,\n self.get_mutation_scale() * dpi_cor,\n self.get_linewidth() * dpi_cor,\n self.get_mutation_aspect()\n )\n return path, fillable\n\n def _check_xy(self, renderer):\n \"\"\"Check whether the annotation needs to be drawn.\"\"\"\n\n b = self.get_annotation_clip()\n\n if b or (b is None and self.coords1 == \"data\"):\n xy_pixel = self._get_xy(self.xy1, self.coords1, self.axesA)\n if self.axesA is None:\n axes = self.axes\n else:\n axes = self.axesA\n if not axes.contains_point(xy_pixel):\n return False\n\n if b or (b is None and self.coords2 == \"data\"):\n xy_pixel = self._get_xy(self.xy2, self.coords2, self.axesB)\n if self.axesB is None:\n axes = self.axes\n else:\n axes = self.axesB\n if not axes.contains_point(xy_pixel):\n return False\n\n return True\n\n def draw(self, renderer):\n if renderer is not None:\n self._renderer = renderer\n if not self.get_visible() or not self._check_xy(renderer):\n return\n super().draw(renderer)\n"}
|
{"lib/matplotlib/patches.py": [{"type": "function", "name": "Rectangle.get_corners", "lines": [805, 811], "signature": "def get_corners(self):", "doc": "Return the corners of the rectangle, moving anti-clockwise from\n(x0, y0)."}, {"type": "function", "name": "Rectangle.get_center", "lines": [813, 815], "signature": "def get_center(self):", "doc": "Return the centre of the rectangle."}, {"type": "function", "name": "Ellipse.get_corners", "lines": [1668, 1676], "signature": "def get_corners(self):", "doc": "Return the corners of the ellipse bounding box.\n\nThe bounding box orientation is moving anti-clockwise from the\nlower left corner defined before rotation."}]}
|
3.5
|
["lib/matplotlib/tests/test_patches.py::test_corner_center"]
|
["lib/matplotlib/tests/test_patches.py::test_Polygon_close", "lib/matplotlib/tests/test_patches.py::test_rotate_rect", "lib/matplotlib/tests/test_patches.py::test_rotate_rect_draw[png]", "lib/matplotlib/tests/test_patches.py::test_negative_rect", "lib/matplotlib/tests/test_patches.py::test_clip_to_bbox[png]", "lib/matplotlib/tests/test_patches.py::test_patch_alpha_coloring[png]", "lib/matplotlib/tests/test_patches.py::test_patch_alpha_override[png]", "lib/matplotlib/tests/test_patches.py::test_patch_color_none", "lib/matplotlib/tests/test_patches.py::test_patch_custom_linestyle[png]", "lib/matplotlib/tests/test_patches.py::test_patch_linestyle_accents", "lib/matplotlib/tests/test_patches.py::test_patch_linestyle_none[png]", "lib/matplotlib/tests/test_patches.py::test_wedge_movement", "lib/matplotlib/tests/test_patches.py::test_wedge_range[png]", "lib/matplotlib/tests/test_patches.py::test_patch_str", "lib/matplotlib/tests/test_patches.py::test_multi_color_hatch[png]", "lib/matplotlib/tests/test_patches.py::test_units_rectangle[png]", "lib/matplotlib/tests/test_patches.py::test_connection_patch[png]", "lib/matplotlib/tests/test_patches.py::test_connection_patch_fig[png]", "lib/matplotlib/tests/test_patches.py::test_datetime_rectangle", "lib/matplotlib/tests/test_patches.py::test_datetime_datetime_fails", "lib/matplotlib/tests/test_patches.py::test_contains_point", "lib/matplotlib/tests/test_patches.py::test_contains_points", "lib/matplotlib/tests/test_patches.py::test_shadow[png]", "lib/matplotlib/tests/test_patches.py::test_fancyarrow_units", "lib/matplotlib/tests/test_patches.py::test_fancyarrow_setdata", "lib/matplotlib/tests/test_patches.py::test_annulus[png]", "lib/matplotlib/tests/test_patches.py::test_annulus_setters[png]", "lib/matplotlib/tests/test_patches.py::test_degenerate_polygon", "lib/matplotlib/tests/test_patches.py::test_color_override_warning[edgecolor]", "lib/matplotlib/tests/test_patches.py::test_color_override_warning[facecolor]", "lib/matplotlib/tests/test_patches.py::test_empty_verts", "lib/matplotlib/tests/test_patches.py::test_default_antialiased", "lib/matplotlib/tests/test_patches.py::test_default_linestyle", "lib/matplotlib/tests/test_patches.py::test_default_capstyle", "lib/matplotlib/tests/test_patches.py::test_default_joinstyle"]
|
3d6c3da884fafae4654df68144391cfe9be6f134
|
{"first_commit_time": 1640692899.0, "pr_title": "Add corner coordinate helper methods to Ellipse/Rectangle", "pr_body": "## PR Summary\r\nThis is pulled out of https://github.com/matplotlib/matplotlib/pull/21945 because it's a standalone feature, and I think it's worth the scrutiny of a separate PR.\r\n\r\n## PR Checklist\r\n\r\n<!-- Please mark any checkboxes that do not apply to this PR as [N/A]. -->\r\n**Tests and Styling**\r\n- [x] Has pytest style unit tests (and `pytest` passes).\r\n- [x] Is [Flake 8](https://flake8.pycqa.org/en/latest/) compliant (install `flake8-docstrings` and run `flake8 --docstring-convention=all`).\r\n\r\n**Documentation**\r\n- [ ] New features are documented, with examples if plot related.\r\n- [ ] New features have an entry in `doc/users/next_whats_new/` (follow instructions in README.rst there).\r\n- [ ] API changes documented in `doc/api/next_api_changes/` (follow instructions in README.rst there).\r\n- [ ] Documentation is sphinx and numpydoc compliant (the docs should [build](https://matplotlib.org/devel/documenting_mpl.html#building-the-docs) without error).\r\n\r\n<!--\r\nThank you so much for your PR! To help us review your contribution, please\r\nconsider the following points:\r\n\r\n- A development guide is available at https://matplotlib.org/devdocs/devel/index.html.\r\n\r\n- Help with git and github is available at\r\n https://matplotlib.org/devel/gitwash/development_workflow.html.\r\n\r\n- Do not create the PR out of main, but out of a separate branch.\r\n\r\n- The PR title should summarize the changes, for example \"Raise ValueError on\r\n non-numeric input to set_xlim\". Avoid non-descriptive titles such as\r\n \"Addresses issue #8576\".\r\n\r\n- The summary should provide at least 1-2 sentences describing the pull request\r\n in detail (Why is this change required? What problem does it solve?) and\r\n link to any relevant issues.\r\n\r\n- If you are contributing fixes to docstrings, please pay attention to\r\n http://matplotlib.org/devel/documenting_mpl.html#formatting. In particular,\r\n note the difference between using single backquotes, double backquotes, and\r\n asterisks in the markup.\r\n\r\nWe understand that PRs can sometimes be overwhelming, especially as the\r\nreviews start coming in. Please let us know if the reviews are unclear or\r\nthe recommended next step seems overly demanding, if you would like help in\r\naddressing a reviewer's comments, or if you have been waiting too long to hear\r\nback on your PR.\r\n-->\r\n", "pr_timeline": [{"time": 1639746390.0, "comment": "Sorry for the messy PR... I've dropped edge_centers (I'll add them as private API in the widget PR), and should have resolved all the other comments."}], "issues": {}}
|
|
matplotlib/matplotlib
| 23692
|
https://github.com/matplotlib/matplotlib/pull/23692
|
matplotlib__matplotlib-23692
|
[]
|
3d6c3da884fafae4654df68144391cfe9be6f134
|
diff --git a/doc/api/axis_api.rst b/doc/api/axis_api.rst
index d950d1e253a4..e7da26a11706 100644
--- a/doc/api/axis_api.rst
+++ b/doc/api/axis_api.rst
@@ -100,6 +100,7 @@ Ticks, tick labels and Offset text
Axis.get_offset_text
Axis.get_tick_padding
+ Axis.get_tick_params
Axis.get_ticklabels
Axis.get_ticklines
Axis.get_ticklocs
diff --git a/doc/users/next_whats_new/view_current_axis_format.rst b/doc/users/next_whats_new/view_current_axis_format.rst
new file mode 100644
index 000000000000..eb7e03600f0e
--- /dev/null
+++ b/doc/users/next_whats_new/view_current_axis_format.rst
@@ -0,0 +1,29 @@
+View current appearance settings for ticks, tick labels, and gridlines
+----------------------------------------------------------------------
+
+The new `~matplotlib.axis.Axis.get_tick_params` method can be used to
+retrieve the appearance settings that will be applied to any
+additional ticks, tick labels, and gridlines added to the plot:
+
+.. code-block:: pycon
+
+ >>> import matplotlib.pyplot as plt
+
+ >>> fig, ax = plt.subplots()
+ >>> ax.yaxis.set_tick_params(labelsize=30, labelcolor='red',
+ ... direction='out', which='major')
+ >>> ax.yaxis.get_tick_params(which='major')
+ {'direction': 'out',
+ 'left': True,
+ 'right': False,
+ 'labelleft': True,
+ 'labelright': False,
+ 'gridOn': False,
+ 'labelsize': 30,
+ 'labelcolor': 'red'}
+ >>> ax.yaxis.get_tick_params(which='minor')
+ {'left': True,
+ 'right': False,
+ 'labelleft': True,
+ 'labelright': False,
+ 'gridOn': False}
diff --git a/lib/matplotlib/axes/_base.py b/lib/matplotlib/axes/_base.py
index 015fd3294589..d3ea2cd0e9ac 100644
--- a/lib/matplotlib/axes/_base.py
+++ b/lib/matplotlib/axes/_base.py
@@ -3403,7 +3403,8 @@ def tick_params(self, axis='both', **kwargs):
Change the appearance of ticks, tick labels, and gridlines.
Tick properties that are not explicitly set using the keyword
- arguments remain unchanged unless *reset* is True.
+ arguments remain unchanged unless *reset* is True. For the current
+ style settings, see `.Axis.get_tick_params`.
Parameters
----------
diff --git a/lib/matplotlib/axis.py b/lib/matplotlib/axis.py
index af0815d41d9c..ae38dd28676b 100644
--- a/lib/matplotlib/axis.py
+++ b/lib/matplotlib/axis.py
@@ -916,6 +916,12 @@ def set_tick_params(self, which='major', reset=False, **kwargs):
For documentation of keyword arguments, see
:meth:`matplotlib.axes.Axes.tick_params`.
+
+ See Also
+ --------
+ .Axis.get_tick_params
+ View the current style settings for ticks, ticklabels, and
+ gridlines.
"""
_api.check_in_list(['major', 'minor', 'both'], which=which)
kwtrans = self._translate_tick_params(kwargs)
@@ -949,8 +955,62 @@ def set_tick_params(self, which='major', reset=False, **kwargs):
self.stale = True
+ def get_tick_params(self, which='major'):
+ """
+ Get appearance parameters for ticks, ticklabels, and gridlines.
+
+ .. versionadded:: 3.7
+
+ Parameters
+ ----------
+ which : {'major', 'minor'}, default: 'major'
+ The group of ticks for which the parameters are retrieved.
+
+ Returns
+ -------
+ dict
+ Properties for styling tick elements added to the axis.
+
+ Notes
+ -----
+ This method returns the appearance parameters for styling *new*
+ elements added to this axis and may be different from the values
+ on current elements if they were modified directly by the user
+ (e.g., via ``set_*`` methods on individual tick objects).
+
+ Examples
+ --------
+ ::
+
+ >>> ax.yaxis.set_tick_params(labelsize=30, labelcolor='red',
+ direction='out', which='major')
+ >>> ax.yaxis.get_tick_params(which='major')
+ {'direction': 'out',
+ 'left': True,
+ 'right': False,
+ 'labelleft': True,
+ 'labelright': False,
+ 'gridOn': False,
+ 'labelsize': 30,
+ 'labelcolor': 'red'}
+ >>> ax.yaxis.get_tick_params(which='minor')
+ {'left': True,
+ 'right': False,
+ 'labelleft': True,
+ 'labelright': False,
+ 'gridOn': False}
+
+
+ """
+ _api.check_in_list(['major', 'minor'], which=which)
+ if which == 'major':
+ return self._translate_tick_params(
+ self._major_tick_kw, reverse=True
+ )
+ return self._translate_tick_params(self._minor_tick_kw, reverse=True)
+
@staticmethod
- def _translate_tick_params(kw):
+ def _translate_tick_params(kw, reverse=False):
"""
Translate the kwargs supported by `.Axis.set_tick_params` to kwargs
supported by `.Tick._apply_params`.
@@ -961,9 +1021,12 @@ def _translate_tick_params(kw):
Returns a new dict of translated kwargs.
- Note: The input *kwargs* are currently modified, but that's ok for
- the only caller.
+ Note: Use reverse=True to translate from those supported by
+ `.Tick._apply_params` back to those supported by
+ `.Axis.set_tick_params`.
"""
+ kw_ = {**kw}
+
# The following lists may be moved to a more accessible location.
allowed_keys = [
'size', 'width', 'color', 'tickdir', 'pad',
@@ -988,19 +1051,27 @@ def _translate_tick_params(kw):
'labelright': 'label2On',
'labeltop': 'label2On',
}
- kwtrans = {newkey: kw.pop(oldkey)
- for oldkey, newkey in keymap.items() if oldkey in kw}
- if 'colors' in kw:
- c = kw.pop('colors')
+ if reverse:
+ kwtrans = {
+ oldkey: kw_.pop(newkey)
+ for oldkey, newkey in keymap.items() if newkey in kw_
+ }
+ else:
+ kwtrans = {
+ newkey: kw_.pop(oldkey)
+ for oldkey, newkey in keymap.items() if oldkey in kw_
+ }
+ if 'colors' in kw_:
+ c = kw_.pop('colors')
kwtrans['color'] = c
kwtrans['labelcolor'] = c
# Maybe move the checking up to the caller of this method.
- for key in kw:
+ for key in kw_:
if key not in allowed_keys:
raise ValueError(
"keyword %s is not recognized; valid keywords are %s"
% (key, allowed_keys))
- kwtrans.update(kw)
+ kwtrans.update(kw_)
return kwtrans
def set_clip_path(self, clippath, transform=None):
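A minimal usage sketch of the `Axis.get_tick_params` API added by the patch above, assuming a matplotlib build (3.7 or later) that contains this change; the expected keys mirror the docstring example in the diff.

```python
# Usage sketch for the new Axis.get_tick_params (assumes a build that
# includes the patch above, i.e. matplotlib >= 3.7).
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.yaxis.set_tick_params(labelsize=30, labelcolor='red',
                         direction='out', which='major')

# The stored per-axis defaults come back translated into set_tick_params
# keyword names (e.g. 'left' rather than the internal 'tick1On').
major = ax.yaxis.get_tick_params(which='major')
print(major['labelsize'], major['labelcolor'], major['direction'])

# Minor ticks were not modified, so only the baseline visibility keys appear.
print(ax.yaxis.get_tick_params(which='minor'))
```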
|
diff --git a/lib/matplotlib/tests/test_axes.py b/lib/matplotlib/tests/test_axes.py
index 3699c9df133d..aed45743d66b 100644
--- a/lib/matplotlib/tests/test_axes.py
+++ b/lib/matplotlib/tests/test_axes.py
@@ -6452,6 +6452,33 @@ def test_pandas_bar_align_center(pd):
fig.canvas.draw()
+def test_axis_get_tick_params():
+ axis = plt.subplot().yaxis
+ initial_major_style_translated = {**axis.get_tick_params(which='major')}
+ initial_minor_style_translated = {**axis.get_tick_params(which='minor')}
+
+ translated_major_kw = axis._translate_tick_params(
+ axis._major_tick_kw, reverse=True
+ )
+ translated_minor_kw = axis._translate_tick_params(
+ axis._minor_tick_kw, reverse=True
+ )
+
+ assert translated_major_kw == initial_major_style_translated
+ assert translated_minor_kw == initial_minor_style_translated
+ axis.set_tick_params(labelsize=30, labelcolor='red',
+ direction='out', which='both')
+
+ new_major_style_translated = {**axis.get_tick_params(which='major')}
+ new_minor_style_translated = {**axis.get_tick_params(which='minor')}
+ new_major_style = axis._translate_tick_params(new_major_style_translated)
+ new_minor_style = axis._translate_tick_params(new_minor_style_translated)
+ assert initial_major_style_translated != new_major_style_translated
+ assert axis._major_tick_kw == new_major_style
+ assert initial_minor_style_translated != new_minor_style_translated
+ assert axis._minor_tick_kw == new_minor_style
+
+
def test_axis_set_tick_params_labelsize_labelcolor():
# Tests fix for issue 4346
axis_1 = plt.subplot()
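The test above asserts that `_translate_tick_params(..., reverse=True)` recovers the user-facing keyword names from the stored per-tick dictionaries. A standalone sketch of that forward/reverse mapping idea follows; the key map here is a small bijective subset chosen for illustration, not the library's full table.

```python
# Illustration of the forward/reverse keyword translation exercised by the
# test above; a simplified, bijective subset of the real key map.
keymap = {
    'length': 'size',
    'direction': 'tickdir',
    'left': 'tick1On',
    'labelleft': 'label1On',
}

def translate(kw, reverse=False):
    # Forward: set_tick_params names -> Tick._apply_params names.
    # Reverse: map the stored names back to the user-facing ones.
    mapping = {v: k for k, v in keymap.items()} if reverse else keymap
    return {mapping.get(k, k): v for k, v in kw.items()}

user_kw = {'direction': 'out', 'left': True, 'labelleft': True}
stored = translate(user_kw)            # {'tickdir': 'out', 'tick1On': True, 'label1On': True}
assert translate(stored, reverse=True) == user_kw   # round-trips cleanly
```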
| 2022-08-20T23:23:12
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
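The README above builds several dataset variants on disk and references the released `microsoft/FEA-Bench` dataset on the Hugging Face Hub. A minimal sketch of inspecting that release with the `datasets` library, assuming the default configuration loads without extra arguments (the exact configs, splits, and columns of the release are not spelled out here):

```python
# Sketch: inspect the released FEA-Bench attributes from the Hugging Face Hub.
# Assumes the default configuration loads without additional arguments.
from datasets import load_dataset

ds = load_dataset("microsoft/FEA-Bench")
print(ds)                                      # splits and row counts
split = next(iter(ds.values()))
print(split.column_names)                      # released attributes only (see README)
print(split[0].get("instance_id", split[0]))   # peek at one record
```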
|
{"doc/api/axis_api.rst": "*******************\n``matplotlib.axis``\n*******************\n\n.. contents:: Table of Contents\n :depth: 3\n :local:\n :backlinks: entry\n\n.. automodule:: matplotlib.axis\n :no-members:\n :no-undoc-members:\n\nInheritance\n===========\n\n.. inheritance-diagram:: Tick Ticker XAxis YAxis XTick YTick\n :private-bases:\n\n\n``Axis`` objects\n================\n\n.. autoclass:: Axis\n :no-members:\n :no-undoc-members:\n.. autoclass:: XAxis\n :no-members:\n :no-undoc-members:\n.. autoclass:: YAxis\n :no-members:\n :no-undoc-members:\n.. autoclass:: Ticker\n :no-members:\n :no-undoc-members:\n\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axis.clear\n Axis.get_scale\n\n\nFormatters and Locators\n-----------------------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axis.get_major_formatter\n Axis.get_major_locator\n Axis.get_minor_formatter\n Axis.get_minor_locator\n Axis.set_major_formatter\n Axis.set_major_locator\n Axis.set_minor_formatter\n Axis.set_minor_locator\n\n Axis.remove_overlapping_locs\n Axis.get_remove_overlapping_locs\n Axis.set_remove_overlapping_locs\n\nAxis Label\n----------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axis.set_label_coords\n Axis.set_label_position\n Axis.set_label_text\n Axis.get_label\n Axis.get_label_position\n Axis.get_label_text\n\nTicks, tick labels and Offset text\n----------------------------------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axis.get_major_ticks\n Axis.get_majorticklabels\n Axis.get_majorticklines\n Axis.get_majorticklocs\n Axis.get_minor_ticks\n Axis.get_minorticklabels\n Axis.get_minorticklines\n Axis.get_minorticklocs\n\n Axis.get_offset_text\n\n Axis.get_tick_padding\n Axis.get_ticklabels\n Axis.get_ticklines\n Axis.get_ticklocs\n\n Axis.get_gridlines\n Axis.grid\n\n Axis.set_tick_params\n\n Axis.axis_date\n\n\nData and view intervals\n-----------------------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axis.get_data_interval\n Axis.get_view_interval\n Axis.get_inverted\n Axis.set_data_interval\n Axis.set_view_interval\n Axis.set_inverted\n\nRendering helpers\n-----------------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axis.get_minpos\n Axis.get_tick_space\n Axis.get_ticklabel_extents\n Axis.get_tightbbox\n\n\nInteractive\n-----------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axis.contains\n Axis.pickradius\n Axis.get_pickradius\n Axis.set_pickradius\n\n\nUnits\n-----\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axis.convert_units\n Axis.set_units\n Axis.get_units\n Axis.update_units\n\n\nXAxis Specific\n--------------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n XAxis.axis_name\n XAxis.get_text_heights\n XAxis.get_ticks_position\n XAxis.set_ticks_position\n XAxis.set_label_position\n XAxis.tick_bottom\n XAxis.tick_top\n\nYAxis Specific\n--------------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n YAxis.axis_name\n YAxis.get_text_widths\n YAxis.get_ticks_position\n YAxis.set_offset_position\n YAxis.set_ticks_position\n YAxis.set_label_position\n YAxis.tick_left\n YAxis.tick_right\n\nOther\n-----\n\n.. 
autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n\n Axis.OFFSETTEXTPAD\n\n Axis.axes\n Axis.limit_range_for_scale\n Axis.reset_ticks\n Axis.set_default_intervals\n\nDiscouraged\n-----------\n\nThese methods should be used together with care, calling ``set_ticks``\nto specify the desired tick locations **before** calling ``set_ticklabels`` to\nspecify a matching series of labels. Calling ``set_ticks`` makes a\n`~matplotlib.ticker.FixedLocator`; it's list of locations is then used by\n``set_ticklabels`` to make an appropriate\n`~matplotlib.ticker.FuncFormatter`.\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axis.set_ticks\n Axis.set_ticklabels\n\n\n\n``Tick`` objects\n================\n\n.. autoclass:: Tick\n :no-members:\n :no-undoc-members:\n.. autoclass:: XTick\n :no-members:\n :no-undoc-members:\n.. autoclass:: YTick\n :no-members:\n :no-undoc-members:\n\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Tick.get_loc\n Tick.get_pad\n Tick.get_pad_pixels\n Tick.get_tick_padding\n Tick.get_tickdir\n Tick.get_view_interval\n Tick.set_label1\n Tick.set_label2\n Tick.set_pad\n Tick.set_url\n Tick.update_position\n", "doc/users/next_whats_new/view_current_axis_format.rst": null, "lib/matplotlib/axes/_base.py": "from collections.abc import Iterable, MutableSequence\nfrom contextlib import ExitStack\nimport functools\nimport inspect\nimport itertools\nimport logging\nfrom numbers import Real\nfrom operator import attrgetter\nimport types\n\nimport numpy as np\n\nimport matplotlib as mpl\nfrom matplotlib import _api, cbook, _docstring, offsetbox\nimport matplotlib.artist as martist\nimport matplotlib.axis as maxis\nfrom matplotlib.cbook import _OrderedSet, _check_1d, index_of\nimport matplotlib.collections as mcoll\nimport matplotlib.colors as mcolors\nimport matplotlib.font_manager as font_manager\nfrom matplotlib.gridspec import SubplotSpec\nimport matplotlib.image as mimage\nimport matplotlib.lines as mlines\nimport matplotlib.patches as mpatches\nfrom matplotlib.rcsetup import cycler, validate_axisbelow\nimport matplotlib.spines as mspines\nimport matplotlib.table as mtable\nimport matplotlib.text as mtext\nimport matplotlib.ticker as mticker\nimport matplotlib.transforms as mtransforms\n\n_log = logging.getLogger(__name__)\n\n\nclass _axis_method_wrapper:\n \"\"\"\n Helper to generate Axes methods wrapping Axis methods.\n\n After ::\n\n get_foo = _axis_method_wrapper(\"xaxis\", \"get_bar\")\n\n (in the body of a class) ``get_foo`` is a method that forwards it arguments\n to the ``get_bar`` method of the ``xaxis`` attribute, and gets its\n signature and docstring from ``Axis.get_bar``.\n\n The docstring of ``get_foo`` is built by replacing \"this Axis\" by \"the\n {attr_name}\" (i.e., \"the xaxis\", \"the yaxis\") in the wrapped method's\n dedented docstring; additional replacements can by given in *doc_sub*.\n \"\"\"\n\n def __init__(self, attr_name, method_name, *, doc_sub=None):\n self.attr_name = attr_name\n self.method_name = method_name\n # Immediately put the docstring in ``self.__doc__`` so that docstring\n # manipulations within the class body work as expected.\n doc = inspect.getdoc(getattr(maxis.Axis, method_name))\n self._missing_subs = []\n if doc:\n doc_sub = {\"this Axis\": f\"the {self.attr_name}\", **(doc_sub or {})}\n for k, v in doc_sub.items():\n if k not in doc: # Delay raising error until we know qualname.\n self._missing_subs.append(k)\n doc = 
doc.replace(k, v)\n self.__doc__ = doc\n\n def __set_name__(self, owner, name):\n # This is called at the end of the class body as\n # ``self.__set_name__(cls, name_under_which_self_is_assigned)``; we\n # rely on that to give the wrapper the correct __name__/__qualname__.\n get_method = attrgetter(f\"{self.attr_name}.{self.method_name}\")\n\n def wrapper(self, *args, **kwargs):\n return get_method(self)(*args, **kwargs)\n\n wrapper.__module__ = owner.__module__\n wrapper.__name__ = name\n wrapper.__qualname__ = f\"{owner.__qualname__}.{name}\"\n wrapper.__doc__ = self.__doc__\n # Manually copy the signature instead of using functools.wraps because\n # displaying the Axis method source when asking for the Axes method\n # source would be confusing.\n wrapper.__signature__ = inspect.signature(\n getattr(maxis.Axis, self.method_name))\n\n if self._missing_subs:\n raise ValueError(\n \"The definition of {} expected that the docstring of Axis.{} \"\n \"contains {!r} as substrings\".format(\n wrapper.__qualname__, self.method_name,\n \", \".join(map(repr, self._missing_subs))))\n\n setattr(owner, name, wrapper)\n\n\nclass _TransformedBoundsLocator:\n \"\"\"\n Axes locator for `.Axes.inset_axes` and similarly positioned Axes.\n\n The locator is a callable object used in `.Axes.set_aspect` to compute the\n Axes location depending on the renderer.\n \"\"\"\n\n def __init__(self, bounds, transform):\n \"\"\"\n *bounds* (a ``[l, b, w, h]`` rectangle) and *transform* together\n specify the position of the inset Axes.\n \"\"\"\n self._bounds = bounds\n self._transform = transform\n\n def __call__(self, ax, renderer):\n # Subtracting transSubfigure will typically rely on inverted(),\n # freezing the transform; thus, this needs to be delayed until draw\n # time as transSubfigure may otherwise change after this is evaluated.\n return mtransforms.TransformedBbox(\n mtransforms.Bbox.from_bounds(*self._bounds),\n self._transform - ax.figure.transSubfigure)\n\n\ndef _process_plot_format(fmt, *, ambiguous_fmt_datakey=False):\n \"\"\"\n Convert a MATLAB style color/line style format string to a (*linestyle*,\n *marker*, *color*) tuple.\n\n Example format strings include:\n\n * 'ko': black circles\n * '.b': blue dots\n * 'r--': red dashed lines\n * 'C2--': the third color in the color cycle, dashed lines\n\n The format is absolute in the sense that if a linestyle or marker is not\n defined in *fmt*, there is no line or marker. 
This is expressed by\n returning 'None' for the respective quantity.\n\n See Also\n --------\n matplotlib.Line2D.lineStyles, matplotlib.colors.cnames\n All possible styles and color format strings.\n \"\"\"\n\n linestyle = None\n marker = None\n color = None\n\n # Is fmt just a colorspec?\n try:\n color = mcolors.to_rgba(fmt)\n\n # We need to differentiate grayscale '1.0' from tri_down marker '1'\n try:\n fmtint = str(int(fmt))\n except ValueError:\n return linestyle, marker, color # Yes\n else:\n if fmt != fmtint:\n # user definitely doesn't want tri_down marker\n return linestyle, marker, color # Yes\n else:\n # ignore converted color\n color = None\n except ValueError:\n pass # No, not just a color.\n\n errfmt = (\"{!r} is neither a data key nor a valid format string ({})\"\n if ambiguous_fmt_datakey else\n \"{!r} is not a valid format string ({})\")\n\n i = 0\n while i < len(fmt):\n c = fmt[i]\n if fmt[i:i+2] in mlines.lineStyles: # First, the two-char styles.\n if linestyle is not None:\n raise ValueError(errfmt.format(fmt, \"two linestyle symbols\"))\n linestyle = fmt[i:i+2]\n i += 2\n elif c in mlines.lineStyles:\n if linestyle is not None:\n raise ValueError(errfmt.format(fmt, \"two linestyle symbols\"))\n linestyle = c\n i += 1\n elif c in mlines.lineMarkers:\n if marker is not None:\n raise ValueError(errfmt.format(fmt, \"two marker symbols\"))\n marker = c\n i += 1\n elif c in mcolors.get_named_colors_mapping():\n if color is not None:\n raise ValueError(errfmt.format(fmt, \"two color symbols\"))\n color = c\n i += 1\n elif c == 'C' and i < len(fmt) - 1:\n color_cycle_number = int(fmt[i + 1])\n color = mcolors.to_rgba(\"C{}\".format(color_cycle_number))\n i += 2\n else:\n raise ValueError(\n errfmt.format(fmt, f\"unrecognized character {c!r}\"))\n\n if linestyle is None and marker is None:\n linestyle = mpl.rcParams['lines.linestyle']\n if linestyle is None:\n linestyle = 'None'\n if marker is None:\n marker = 'None'\n\n return linestyle, marker, color\n\n\nclass _process_plot_var_args:\n \"\"\"\n Process variable length arguments to `~.Axes.plot`, to support ::\n\n plot(t, s)\n plot(t1, s1, t2, s2)\n plot(t1, s1, 'ko', t2, s2)\n plot(t1, s1, 'ko', t2, s2, 'r--', t3, e3)\n\n an arbitrary number of *x*, *y*, *fmt* are allowed\n \"\"\"\n def __init__(self, axes, command='plot'):\n self.axes = axes\n self.command = command\n self.set_prop_cycle(None)\n\n def __getstate__(self):\n # note: it is not possible to pickle a generator (and thus a cycler).\n return {'axes': self.axes, 'command': self.command}\n\n def __setstate__(self, state):\n self.__dict__ = state.copy()\n self.set_prop_cycle(None)\n\n def set_prop_cycle(self, cycler):\n if cycler is None:\n cycler = mpl.rcParams['axes.prop_cycle']\n self.prop_cycler = itertools.cycle(cycler)\n self._prop_keys = cycler.keys # This should make a copy\n\n def __call__(self, *args, data=None, **kwargs):\n self.axes._process_unit_info(kwargs=kwargs)\n\n for pos_only in \"xy\":\n if pos_only in kwargs:\n raise TypeError(\"{} got an unexpected keyword argument {!r}\"\n .format(self.command, pos_only))\n\n if not args:\n return\n\n if data is None: # Process dict views\n args = [cbook.sanitize_sequence(a) for a in args]\n else: # Process the 'data' kwarg.\n replaced = [mpl._replacer(data, arg) for arg in args]\n if len(args) == 1:\n label_namer_idx = 0\n elif len(args) == 2: # Can be x, y or y, c.\n # Figure out what the second argument is.\n # 1) If the second argument cannot be a format shorthand, the\n # second argument is the 
label_namer.\n # 2) Otherwise (it could have been a format shorthand),\n # a) if we did perform a substitution, emit a warning, and\n # use it as label_namer.\n # b) otherwise, it is indeed a format shorthand; use the\n # first argument as label_namer.\n try:\n _process_plot_format(args[1])\n except ValueError: # case 1)\n label_namer_idx = 1\n else:\n if replaced[1] is not args[1]: # case 2a)\n _api.warn_external(\n f\"Second argument {args[1]!r} is ambiguous: could \"\n f\"be a format string but is in 'data'; using as \"\n f\"data. If it was intended as data, set the \"\n f\"format string to an empty string to suppress \"\n f\"this warning. If it was intended as a format \"\n f\"string, explicitly pass the x-values as well. \"\n f\"Alternatively, rename the entry in 'data'.\",\n RuntimeWarning)\n label_namer_idx = 1\n else: # case 2b)\n label_namer_idx = 0\n elif len(args) == 3:\n label_namer_idx = 1\n else:\n raise ValueError(\n \"Using arbitrary long args with data is not supported due \"\n \"to ambiguity of arguments; use multiple plotting calls \"\n \"instead\")\n if kwargs.get(\"label\") is None:\n kwargs[\"label\"] = mpl._label_from_arg(\n replaced[label_namer_idx], args[label_namer_idx])\n args = replaced\n ambiguous_fmt_datakey = data is not None and len(args) == 2\n\n if len(args) >= 4 and not cbook.is_scalar_or_string(\n kwargs.get(\"label\")):\n raise ValueError(\"plot() with multiple groups of data (i.e., \"\n \"pairs of x and y) does not support multiple \"\n \"labels\")\n\n # Repeatedly grab (x, y) or (x, y, format) from the front of args and\n # massage them into arguments to plot() or fill().\n\n while args:\n this, args = args[:2], args[2:]\n if args and isinstance(args[0], str):\n this += args[0],\n args = args[1:]\n yield from self._plot_args(\n this, kwargs, ambiguous_fmt_datakey=ambiguous_fmt_datakey)\n\n def get_next_color(self):\n \"\"\"Return the next color in the cycle.\"\"\"\n if 'color' not in self._prop_keys:\n return 'k'\n return next(self.prop_cycler)['color']\n\n def _getdefaults(self, ignore, kw):\n \"\"\"\n If some keys in the property cycle (excluding those in the set\n *ignore*) are absent or set to None in the dict *kw*, return a copy\n of the next entry in the property cycle, excluding keys in *ignore*.\n Otherwise, don't advance the property cycle, and return an empty dict.\n \"\"\"\n prop_keys = self._prop_keys - ignore\n if any(kw.get(k, None) is None for k in prop_keys):\n # Need to copy this dictionary or else the next time around\n # in the cycle, the dictionary could be missing entries.\n default_dict = next(self.prop_cycler).copy()\n for p in ignore:\n default_dict.pop(p, None)\n else:\n default_dict = {}\n return default_dict\n\n def _setdefaults(self, defaults, kw):\n \"\"\"\n Add to the dict *kw* the entries in the dict *default* that are absent\n or set to None in *kw*.\n \"\"\"\n for k in defaults:\n if kw.get(k, None) is None:\n kw[k] = defaults[k]\n\n def _makeline(self, x, y, kw, kwargs):\n kw = {**kw, **kwargs} # Don't modify the original kw.\n default_dict = self._getdefaults(set(), kw)\n self._setdefaults(default_dict, kw)\n seg = mlines.Line2D(x, y, **kw)\n return seg, kw\n\n def _makefill(self, x, y, kw, kwargs):\n # Polygon doesn't directly support unitized inputs.\n x = self.axes.convert_xunits(x)\n y = self.axes.convert_yunits(y)\n\n kw = kw.copy() # Don't modify the original kw.\n kwargs = kwargs.copy()\n\n # Ignore 'marker'-related properties as they aren't Polygon\n # properties, but they are Line2D properties, and so they 
are\n # likely to appear in the default cycler construction.\n # This is done here to the defaults dictionary as opposed to the\n # other two dictionaries because we do want to capture when a\n # *user* explicitly specifies a marker which should be an error.\n # We also want to prevent advancing the cycler if there are no\n # defaults needed after ignoring the given properties.\n ignores = {'marker', 'markersize', 'markeredgecolor',\n 'markerfacecolor', 'markeredgewidth'}\n # Also ignore anything provided by *kwargs*.\n for k, v in kwargs.items():\n if v is not None:\n ignores.add(k)\n\n # Only using the first dictionary to use as basis\n # for getting defaults for back-compat reasons.\n # Doing it with both seems to mess things up in\n # various places (probably due to logic bugs elsewhere).\n default_dict = self._getdefaults(ignores, kw)\n self._setdefaults(default_dict, kw)\n\n # Looks like we don't want \"color\" to be interpreted to\n # mean both facecolor and edgecolor for some reason.\n # So the \"kw\" dictionary is thrown out, and only its\n # 'color' value is kept and translated as a 'facecolor'.\n # This design should probably be revisited as it increases\n # complexity.\n facecolor = kw.get('color', None)\n\n # Throw out 'color' as it is now handled as a facecolor\n default_dict.pop('color', None)\n\n # To get other properties set from the cycler\n # modify the kwargs dictionary.\n self._setdefaults(default_dict, kwargs)\n\n seg = mpatches.Polygon(np.column_stack((x, y)),\n facecolor=facecolor,\n fill=kwargs.get('fill', True),\n closed=kw['closed'])\n seg.set(**kwargs)\n return seg, kwargs\n\n def _plot_args(self, tup, kwargs, *,\n return_kwargs=False, ambiguous_fmt_datakey=False):\n \"\"\"\n Process the arguments of ``plot([x], y, [fmt], **kwargs)`` calls.\n\n This processes a single set of ([x], y, [fmt]) parameters; i.e. for\n ``plot(x, y, x2, y2)`` it will be called twice. Once for (x, y) and\n once for (x2, y2).\n\n x and y may be 2D and thus can still represent multiple datasets.\n\n For multiple datasets, if the keyword argument *label* is a list, this\n will unpack the list and assign the individual labels to the datasets.\n\n Parameters\n ----------\n tup : tuple\n A tuple of the positional parameters. This can be one of\n\n - (y,)\n - (x, y)\n - (y, fmt)\n - (x, y, fmt)\n\n kwargs : dict\n The keyword arguments passed to ``plot()``.\n\n return_kwargs : bool\n Whether to also return the effective keyword arguments after label\n unpacking as well.\n\n ambiguous_fmt_datakey : bool\n Whether the format string in *tup* could also have been a\n misspelled data key.\n\n Returns\n -------\n result\n If *return_kwargs* is false, a list of Artists representing the\n dataset(s).\n If *return_kwargs* is true, a list of (Artist, effective_kwargs)\n representing the dataset(s). 
See *return_kwargs*.\n The Artist is either `.Line2D` (if called from ``plot()``) or\n `.Polygon` otherwise.\n \"\"\"\n if len(tup) > 1 and isinstance(tup[-1], str):\n # xy is tup with fmt stripped (could still be (y,) only)\n *xy, fmt = tup\n linestyle, marker, color = _process_plot_format(\n fmt, ambiguous_fmt_datakey=ambiguous_fmt_datakey)\n elif len(tup) == 3:\n raise ValueError('third arg must be a format string')\n else:\n xy = tup\n linestyle, marker, color = None, None, None\n\n # Don't allow any None value; these would be up-converted to one\n # element array of None which causes problems downstream.\n if any(v is None for v in tup):\n raise ValueError(\"x, y, and format string must not be None\")\n\n kw = {}\n for prop_name, val in zip(('linestyle', 'marker', 'color'),\n (linestyle, marker, color)):\n if val is not None:\n # check for conflicts between fmt and kwargs\n if (fmt.lower() != 'none'\n and prop_name in kwargs\n and val != 'None'):\n # Technically ``plot(x, y, 'o', ls='--')`` is a conflict\n # because 'o' implicitly unsets the linestyle\n # (linestyle='None').\n # We'll gracefully not warn in this case because an\n # explicit set via kwargs can be seen as intention to\n # override an implicit unset.\n # Note: We don't val.lower() != 'none' because val is not\n # necessarily a string (can be a tuple for colors). This\n # is safe, because *val* comes from _process_plot_format()\n # which only returns 'None'.\n _api.warn_external(\n f\"{prop_name} is redundantly defined by the \"\n f\"'{prop_name}' keyword argument and the fmt string \"\n f'\"{fmt}\" (-> {prop_name}={val!r}). The keyword '\n f\"argument will take precedence.\")\n kw[prop_name] = val\n\n if len(xy) == 2:\n x = _check_1d(xy[0])\n y = _check_1d(xy[1])\n else:\n x, y = index_of(xy[-1])\n\n if self.axes.xaxis is not None:\n self.axes.xaxis.update_units(x)\n if self.axes.yaxis is not None:\n self.axes.yaxis.update_units(y)\n\n if x.shape[0] != y.shape[0]:\n raise ValueError(f\"x and y must have same first dimension, but \"\n f\"have shapes {x.shape} and {y.shape}\")\n if x.ndim > 2 or y.ndim > 2:\n raise ValueError(f\"x and y can be no greater than 2D, but have \"\n f\"shapes {x.shape} and {y.shape}\")\n if x.ndim == 1:\n x = x[:, np.newaxis]\n if y.ndim == 1:\n y = y[:, np.newaxis]\n\n if self.command == 'plot':\n make_artist = self._makeline\n else:\n kw['closed'] = kwargs.get('closed', True)\n make_artist = self._makefill\n\n ncx, ncy = x.shape[1], y.shape[1]\n if ncx > 1 and ncy > 1 and ncx != ncy:\n raise ValueError(f\"x has {ncx} columns but y has {ncy} columns\")\n if ncx == 0 or ncy == 0:\n return []\n\n label = kwargs.get('label')\n n_datasets = max(ncx, ncy)\n if n_datasets > 1 and not cbook.is_scalar_or_string(label):\n if len(label) != n_datasets:\n raise ValueError(f\"label must be scalar or have the same \"\n f\"length as the input data, but found \"\n f\"{len(label)} for {n_datasets} datasets.\")\n labels = label\n else:\n labels = [label] * n_datasets\n\n result = (make_artist(x[:, j % ncx], y[:, j % ncy], kw,\n {**kwargs, 'label': label})\n for j, label in enumerate(labels))\n\n if return_kwargs:\n return list(result)\n else:\n return [l[0] for l in result]\n\n\n@_api.define_aliases({\"facecolor\": [\"fc\"]})\nclass _AxesBase(martist.Artist):\n name = \"rectilinear\"\n\n # axis names are the prefixes for the attributes that contain the\n # respective axis; e.g. 
'x' <-> self.xaxis, containing an XAxis.\n # Note that PolarAxes uses these attributes as well, so that we have\n # 'x' <-> self.xaxis, containing a ThetaAxis. In particular we do not\n # have 'theta' in _axis_names.\n # In practice, this is ('x', 'y') for all 2D Axes and ('x', 'y', 'z')\n # for Axes3D.\n _axis_names = (\"x\", \"y\")\n _shared_axes = {name: cbook.Grouper() for name in _axis_names}\n _twinned_axes = cbook.Grouper()\n\n _subclass_uses_cla = False\n\n @property\n def _axis_map(self):\n \"\"\"A mapping of axis names, e.g. 'x', to `Axis` instances.\"\"\"\n return {name: getattr(self, f\"{name}axis\")\n for name in self._axis_names}\n\n def __str__(self):\n return \"{0}({1[0]:g},{1[1]:g};{1[2]:g}x{1[3]:g})\".format(\n type(self).__name__, self._position.bounds)\n\n def __init__(self, fig,\n *args,\n facecolor=None, # defaults to rc axes.facecolor\n frameon=True,\n sharex=None, # use Axes instance's xaxis info\n sharey=None, # use Axes instance's yaxis info\n label='',\n xscale=None,\n yscale=None,\n box_aspect=None,\n **kwargs\n ):\n \"\"\"\n Build an Axes in a figure.\n\n Parameters\n ----------\n fig : `~matplotlib.figure.Figure`\n The Axes is built in the `.Figure` *fig*.\n\n *args\n ``*args`` can be a single ``(left, bottom, width, height)``\n rectangle or a single `.Bbox`. This specifies the rectangle (in\n figure coordinates) where the Axes is positioned.\n\n ``*args`` can also consist of three numbers or a single three-digit\n number; in the latter case, the digits are considered as\n independent numbers. The numbers are interpreted as ``(nrows,\n ncols, index)``: ``(nrows, ncols)`` specifies the size of an array\n of subplots, and ``index`` is the 1-based index of the subplot\n being created. Finally, ``*args`` can also directly be a\n `.SubplotSpec` instance.\n\n sharex, sharey : `~.axes.Axes`, optional\n The x or y `~.matplotlib.axis` is shared with the x or\n y axis in the input `~.axes.Axes`.\n\n frameon : bool, default: True\n Whether the Axes frame is visible.\n\n box_aspect : float, optional\n Set a fixed aspect for the Axes box, i.e. the ratio of height to\n width. 
See `~.axes.Axes.set_box_aspect` for details.\n\n **kwargs\n Other optional keyword arguments:\n\n %(Axes:kwdoc)s\n\n Returns\n -------\n `~.axes.Axes`\n The new `~.axes.Axes` object.\n \"\"\"\n\n super().__init__()\n if \"rect\" in kwargs:\n if args:\n raise TypeError(\n \"'rect' cannot be used together with positional arguments\")\n rect = kwargs.pop(\"rect\")\n _api.check_isinstance((mtransforms.Bbox, Iterable), rect=rect)\n args = (rect,)\n subplotspec = None\n if len(args) == 1 and isinstance(args[0], mtransforms.Bbox):\n self._position = args[0]\n elif len(args) == 1 and np.iterable(args[0]):\n self._position = mtransforms.Bbox.from_bounds(*args[0])\n else:\n self._position = self._originalPosition = mtransforms.Bbox.unit()\n subplotspec = SubplotSpec._from_subplot_args(fig, args)\n if self._position.width < 0 or self._position.height < 0:\n raise ValueError('Width and height specified must be non-negative')\n self._originalPosition = self._position.frozen()\n self.axes = self\n self._aspect = 'auto'\n self._adjustable = 'box'\n self._anchor = 'C'\n self._stale_viewlims = {name: False for name in self._axis_names}\n self._sharex = sharex\n self._sharey = sharey\n self.set_label(label)\n self.set_figure(fig)\n # The subplotspec needs to be set after the figure (so that\n # figure-level subplotpars are taken into account), but the figure\n # needs to be set after self._position is initialized.\n if subplotspec:\n self.set_subplotspec(subplotspec)\n else:\n self._subplotspec = None\n self.set_box_aspect(box_aspect)\n self._axes_locator = None # Optionally set via update(kwargs).\n\n # placeholder for any colorbars added that use this Axes.\n # (see colorbar.py):\n self._colorbars = []\n self.spines = mspines.Spines.from_dict(self._gen_axes_spines())\n\n # this call may differ for non-sep axes, e.g., polar\n self._init_axis()\n if facecolor is None:\n facecolor = mpl.rcParams['axes.facecolor']\n self._facecolor = facecolor\n self._frameon = frameon\n self.set_axisbelow(mpl.rcParams['axes.axisbelow'])\n\n self._rasterization_zorder = None\n self.clear()\n\n # funcs used to format x and y - fall back on major formatters\n self.fmt_xdata = None\n self.fmt_ydata = None\n\n self.set_navigate(True)\n self.set_navigate_mode(None)\n\n if xscale:\n self.set_xscale(xscale)\n if yscale:\n self.set_yscale(yscale)\n\n self._internal_update(kwargs)\n\n for name, axis in self._axis_map.items():\n axis.callbacks._connect_picklable(\n 'units', self._unit_change_handler(name))\n\n rcParams = mpl.rcParams\n self.tick_params(\n top=rcParams['xtick.top'] and rcParams['xtick.minor.top'],\n bottom=rcParams['xtick.bottom'] and rcParams['xtick.minor.bottom'],\n labeltop=(rcParams['xtick.labeltop'] and\n rcParams['xtick.minor.top']),\n labelbottom=(rcParams['xtick.labelbottom'] and\n rcParams['xtick.minor.bottom']),\n left=rcParams['ytick.left'] and rcParams['ytick.minor.left'],\n right=rcParams['ytick.right'] and rcParams['ytick.minor.right'],\n labelleft=(rcParams['ytick.labelleft'] and\n rcParams['ytick.minor.left']),\n labelright=(rcParams['ytick.labelright'] and\n rcParams['ytick.minor.right']),\n which='minor')\n\n self.tick_params(\n top=rcParams['xtick.top'] and rcParams['xtick.major.top'],\n bottom=rcParams['xtick.bottom'] and rcParams['xtick.major.bottom'],\n labeltop=(rcParams['xtick.labeltop'] and\n rcParams['xtick.major.top']),\n labelbottom=(rcParams['xtick.labelbottom'] and\n rcParams['xtick.major.bottom']),\n left=rcParams['ytick.left'] and rcParams['ytick.major.left'],\n 
right=rcParams['ytick.right'] and rcParams['ytick.major.right'],\n labelleft=(rcParams['ytick.labelleft'] and\n rcParams['ytick.major.left']),\n labelright=(rcParams['ytick.labelright'] and\n rcParams['ytick.major.right']),\n which='major')\n\n def __init_subclass__(cls, **kwargs):\n parent_uses_cla = super(cls, cls)._subclass_uses_cla\n if 'cla' in cls.__dict__:\n _api.warn_deprecated(\n '3.6',\n pending=True,\n message=f'Overriding `Axes.cla` in {cls.__qualname__} is '\n 'pending deprecation in %(since)s and will be fully '\n 'deprecated in favor of `Axes.clear` in the future. '\n 'Please report '\n f'this to the {cls.__module__!r} author.')\n cls._subclass_uses_cla = 'cla' in cls.__dict__ or parent_uses_cla\n super().__init_subclass__(**kwargs)\n\n def __getstate__(self):\n state = super().__getstate__()\n # Prune the sharing & twinning info to only contain the current group.\n state[\"_shared_axes\"] = {\n name: self._shared_axes[name].get_siblings(self)\n for name in self._axis_names if self in self._shared_axes[name]}\n state[\"_twinned_axes\"] = (self._twinned_axes.get_siblings(self)\n if self in self._twinned_axes else None)\n return state\n\n def __setstate__(self, state):\n # Merge the grouping info back into the global groupers.\n shared_axes = state.pop(\"_shared_axes\")\n for name, shared_siblings in shared_axes.items():\n self._shared_axes[name].join(*shared_siblings)\n twinned_siblings = state.pop(\"_twinned_axes\")\n if twinned_siblings:\n self._twinned_axes.join(*twinned_siblings)\n self.__dict__ = state\n self._stale = True\n\n def __repr__(self):\n fields = []\n if self.get_label():\n fields += [f\"label={self.get_label()!r}\"]\n if hasattr(self, \"get_title\"):\n titles = {}\n for k in [\"left\", \"center\", \"right\"]:\n title = self.get_title(loc=k)\n if title:\n titles[k] = title\n if titles:\n fields += [f\"title={titles}\"]\n for name, axis in self._axis_map.items():\n if axis.get_label() and axis.get_label().get_text():\n fields += [f\"{name}label={axis.get_label().get_text()!r}\"]\n return f\"<{self.__class__.__name__}: \" + \", \".join(fields) + \">\"\n\n def get_subplotspec(self):\n \"\"\"Return the `.SubplotSpec` associated with the subplot, or None.\"\"\"\n return self._subplotspec\n\n def set_subplotspec(self, subplotspec):\n \"\"\"Set the `.SubplotSpec`. associated with the subplot.\"\"\"\n self._subplotspec = subplotspec\n self._set_position(subplotspec.get_position(self.figure))\n\n def get_gridspec(self):\n \"\"\"Return the `.GridSpec` associated with the subplot, or None.\"\"\"\n return self._subplotspec.get_gridspec() if self._subplotspec else None\n\n @_api.delete_parameter(\"3.6\", \"args\")\n @_api.delete_parameter(\"3.6\", \"kwargs\")\n def get_window_extent(self, renderer=None, *args, **kwargs):\n \"\"\"\n Return the Axes bounding box in display space; *args* and *kwargs*\n are empty.\n\n This bounding box does not include the spines, ticks, ticklabels,\n or other labels. 
For a bounding box including these elements use\n `~matplotlib.axes.Axes.get_tightbbox`.\n\n See Also\n --------\n matplotlib.axes.Axes.get_tightbbox\n matplotlib.axis.Axis.get_tightbbox\n matplotlib.spines.Spine.get_window_extent\n \"\"\"\n return self.bbox\n\n def _init_axis(self):\n # This is moved out of __init__ because non-separable axes don't use it\n self.xaxis = maxis.XAxis(self)\n self.spines.bottom.register_axis(self.xaxis)\n self.spines.top.register_axis(self.xaxis)\n self.yaxis = maxis.YAxis(self)\n self.spines.left.register_axis(self.yaxis)\n self.spines.right.register_axis(self.yaxis)\n self._update_transScale()\n\n def set_figure(self, fig):\n # docstring inherited\n super().set_figure(fig)\n\n self.bbox = mtransforms.TransformedBbox(self._position,\n fig.transSubfigure)\n # these will be updated later as data is added\n self.dataLim = mtransforms.Bbox.null()\n self._viewLim = mtransforms.Bbox.unit()\n self.transScale = mtransforms.TransformWrapper(\n mtransforms.IdentityTransform())\n\n self._set_lim_and_transforms()\n\n def _unstale_viewLim(self):\n # We should arrange to store this information once per share-group\n # instead of on every axis.\n need_scale = {\n name: any(ax._stale_viewlims[name]\n for ax in self._shared_axes[name].get_siblings(self))\n for name in self._axis_names}\n if any(need_scale.values()):\n for name in need_scale:\n for ax in self._shared_axes[name].get_siblings(self):\n ax._stale_viewlims[name] = False\n self.autoscale_view(**{f\"scale{name}\": scale\n for name, scale in need_scale.items()})\n\n @property\n def viewLim(self):\n self._unstale_viewLim()\n return self._viewLim\n\n def _request_autoscale_view(self, axis=\"all\", tight=None):\n \"\"\"\n Mark a single axis, or all of them, as stale wrt. autoscaling.\n\n No computation is performed until the next autoscaling; thus, separate\n calls to control individual axises incur negligible performance cost.\n\n Parameters\n ----------\n axis : str, default: \"all\"\n Either an element of ``self._axis_names``, or \"all\".\n tight : bool or None, default: None\n \"\"\"\n axis_names = _api.check_getitem(\n {**{k: [k] for k in self._axis_names}, \"all\": self._axis_names},\n axis=axis)\n for name in axis_names:\n self._stale_viewlims[name] = True\n if tight is not None:\n self._tight = tight\n\n def _set_lim_and_transforms(self):\n \"\"\"\n Set the *_xaxis_transform*, *_yaxis_transform*, *transScale*,\n *transData*, *transLimits* and *transAxes* transformations.\n\n .. note::\n\n This method is primarily used by rectilinear projections of the\n `~matplotlib.axes.Axes` class, and is meant to be overridden by\n new kinds of projection Axes that need different transformations\n and limits. 
(See `~matplotlib.projections.polar.PolarAxes` for an\n example.)\n \"\"\"\n self.transAxes = mtransforms.BboxTransformTo(self.bbox)\n\n # Transforms the x and y axis separately by a scale factor.\n # It is assumed that this part will have non-linear components\n # (e.g., for a log scale).\n self.transScale = mtransforms.TransformWrapper(\n mtransforms.IdentityTransform())\n\n # An affine transformation on the data, generally to limit the\n # range of the axes\n self.transLimits = mtransforms.BboxTransformFrom(\n mtransforms.TransformedBbox(self._viewLim, self.transScale))\n\n # The parentheses are important for efficiency here -- they\n # group the last two (which are usually affines) separately\n # from the first (which, with log-scaling can be non-affine).\n self.transData = self.transScale + (self.transLimits + self.transAxes)\n\n self._xaxis_transform = mtransforms.blended_transform_factory(\n self.transData, self.transAxes)\n self._yaxis_transform = mtransforms.blended_transform_factory(\n self.transAxes, self.transData)\n\n def get_xaxis_transform(self, which='grid'):\n \"\"\"\n Get the transformation used for drawing x-axis labels, ticks\n and gridlines. The x-direction is in data coordinates and the\n y-direction is in axis coordinates.\n\n .. note::\n\n This transformation is primarily used by the\n `~matplotlib.axis.Axis` class, and is meant to be\n overridden by new kinds of projections that may need to\n place axis elements in different locations.\n \"\"\"\n if which == 'grid':\n return self._xaxis_transform\n elif which == 'tick1':\n # for cartesian projection, this is bottom spine\n return self.spines.bottom.get_spine_transform()\n elif which == 'tick2':\n # for cartesian projection, this is top spine\n return self.spines.top.get_spine_transform()\n else:\n raise ValueError(f'unknown value for which: {which!r}')\n\n def get_xaxis_text1_transform(self, pad_points):\n \"\"\"\n Returns\n -------\n transform : Transform\n The transform used for drawing x-axis labels, which will add\n *pad_points* of padding (in points) between the axis and the label.\n The x-direction is in data coordinates and the y-direction is in\n axis coordinates\n valign : {'center', 'top', 'bottom', 'baseline', 'center_baseline'}\n The text vertical alignment.\n halign : {'center', 'left', 'right'}\n The text horizontal alignment.\n\n Notes\n -----\n This transformation is primarily used by the `~matplotlib.axis.Axis`\n class, and is meant to be overridden by new kinds of projections that\n may need to place axis elements in different locations.\n \"\"\"\n labels_align = mpl.rcParams[\"xtick.alignment\"]\n return (self.get_xaxis_transform(which='tick1') +\n mtransforms.ScaledTranslation(0, -1 * pad_points / 72,\n self.figure.dpi_scale_trans),\n \"top\", labels_align)\n\n def get_xaxis_text2_transform(self, pad_points):\n \"\"\"\n Returns\n -------\n transform : Transform\n The transform used for drawing secondary x-axis labels, which will\n add *pad_points* of padding (in points) between the axis and the\n label. 
The x-direction is in data coordinates and the y-direction\n is in axis coordinates\n valign : {'center', 'top', 'bottom', 'baseline', 'center_baseline'}\n The text vertical alignment.\n halign : {'center', 'left', 'right'}\n The text horizontal alignment.\n\n Notes\n -----\n This transformation is primarily used by the `~matplotlib.axis.Axis`\n class, and is meant to be overridden by new kinds of projections that\n may need to place axis elements in different locations.\n \"\"\"\n labels_align = mpl.rcParams[\"xtick.alignment\"]\n return (self.get_xaxis_transform(which='tick2') +\n mtransforms.ScaledTranslation(0, pad_points / 72,\n self.figure.dpi_scale_trans),\n \"bottom\", labels_align)\n\n def get_yaxis_transform(self, which='grid'):\n \"\"\"\n Get the transformation used for drawing y-axis labels, ticks\n and gridlines. The x-direction is in axis coordinates and the\n y-direction is in data coordinates.\n\n .. note::\n\n This transformation is primarily used by the\n `~matplotlib.axis.Axis` class, and is meant to be\n overridden by new kinds of projections that may need to\n place axis elements in different locations.\n \"\"\"\n if which == 'grid':\n return self._yaxis_transform\n elif which == 'tick1':\n # for cartesian projection, this is bottom spine\n return self.spines.left.get_spine_transform()\n elif which == 'tick2':\n # for cartesian projection, this is top spine\n return self.spines.right.get_spine_transform()\n else:\n raise ValueError(f'unknown value for which: {which!r}')\n\n def get_yaxis_text1_transform(self, pad_points):\n \"\"\"\n Returns\n -------\n transform : Transform\n The transform used for drawing y-axis labels, which will add\n *pad_points* of padding (in points) between the axis and the label.\n The x-direction is in axis coordinates and the y-direction is in\n data coordinates\n valign : {'center', 'top', 'bottom', 'baseline', 'center_baseline'}\n The text vertical alignment.\n halign : {'center', 'left', 'right'}\n The text horizontal alignment.\n\n Notes\n -----\n This transformation is primarily used by the `~matplotlib.axis.Axis`\n class, and is meant to be overridden by new kinds of projections that\n may need to place axis elements in different locations.\n \"\"\"\n labels_align = mpl.rcParams[\"ytick.alignment\"]\n return (self.get_yaxis_transform(which='tick1') +\n mtransforms.ScaledTranslation(-1 * pad_points / 72, 0,\n self.figure.dpi_scale_trans),\n labels_align, \"right\")\n\n def get_yaxis_text2_transform(self, pad_points):\n \"\"\"\n Returns\n -------\n transform : Transform\n The transform used for drawing secondart y-axis labels, which will\n add *pad_points* of padding (in points) between the axis and the\n label. 
The x-direction is in axis coordinates and the y-direction\n is in data coordinates\n valign : {'center', 'top', 'bottom', 'baseline', 'center_baseline'}\n The text vertical alignment.\n halign : {'center', 'left', 'right'}\n The text horizontal alignment.\n\n Notes\n -----\n This transformation is primarily used by the `~matplotlib.axis.Axis`\n class, and is meant to be overridden by new kinds of projections that\n may need to place axis elements in different locations.\n \"\"\"\n labels_align = mpl.rcParams[\"ytick.alignment\"]\n return (self.get_yaxis_transform(which='tick2') +\n mtransforms.ScaledTranslation(pad_points / 72, 0,\n self.figure.dpi_scale_trans),\n labels_align, \"left\")\n\n def _update_transScale(self):\n self.transScale.set(\n mtransforms.blended_transform_factory(\n self.xaxis.get_transform(), self.yaxis.get_transform()))\n for line in getattr(self, \"_children\", []): # Not set during init.\n if not isinstance(line, mlines.Line2D):\n continue\n try:\n line._transformed_path.invalidate()\n except AttributeError:\n pass\n\n def get_position(self, original=False):\n \"\"\"\n Return the position of the Axes within the figure as a `.Bbox`.\n\n Parameters\n ----------\n original : bool\n If ``True``, return the original position. Otherwise return the\n active position. For an explanation of the positions see\n `.set_position`.\n\n Returns\n -------\n `.Bbox`\n\n \"\"\"\n if original:\n return self._originalPosition.frozen()\n else:\n locator = self.get_axes_locator()\n if not locator:\n self.apply_aspect()\n return self._position.frozen()\n\n def set_position(self, pos, which='both'):\n \"\"\"\n Set the Axes position.\n\n Axes have two position attributes. The 'original' position is the\n position allocated for the Axes. The 'active' position is the\n position the Axes is actually drawn at. These positions are usually\n the same unless a fixed aspect is set to the Axes. 
See\n `.Axes.set_aspect` for details.\n\n Parameters\n ----------\n pos : [left, bottom, width, height] or `~matplotlib.transforms.Bbox`\n The new position of the Axes in `.Figure` coordinates.\n\n which : {'both', 'active', 'original'}, default: 'both'\n Determines which position variables to change.\n\n See Also\n --------\n matplotlib.transforms.Bbox.from_bounds\n matplotlib.transforms.Bbox.from_extents\n \"\"\"\n self._set_position(pos, which=which)\n # because this is being called externally to the library we\n # don't let it be in the layout.\n self.set_in_layout(False)\n\n def _set_position(self, pos, which='both'):\n \"\"\"\n Private version of set_position.\n\n Call this internally to get the same functionality of `set_position`,\n but not to take the axis out of the constrained_layout hierarchy.\n \"\"\"\n if not isinstance(pos, mtransforms.BboxBase):\n pos = mtransforms.Bbox.from_bounds(*pos)\n for ax in self._twinned_axes.get_siblings(self):\n if which in ('both', 'active'):\n ax._position.set(pos)\n if which in ('both', 'original'):\n ax._originalPosition.set(pos)\n self.stale = True\n\n def reset_position(self):\n \"\"\"\n Reset the active position to the original position.\n\n This undoes changes to the active position (as defined in\n `.set_position`) which may have been performed to satisfy fixed-aspect\n constraints.\n \"\"\"\n for ax in self._twinned_axes.get_siblings(self):\n pos = ax.get_position(original=True)\n ax.set_position(pos, which='active')\n\n def set_axes_locator(self, locator):\n \"\"\"\n Set the Axes locator.\n\n Parameters\n ----------\n locator : Callable[[Axes, Renderer], Bbox]\n \"\"\"\n self._axes_locator = locator\n self.stale = True\n\n def get_axes_locator(self):\n \"\"\"\n Return the axes_locator.\n \"\"\"\n return self._axes_locator\n\n def _set_artist_props(self, a):\n \"\"\"Set the boilerplate props for artists added to Axes.\"\"\"\n a.set_figure(self.figure)\n if not a.is_transform_set():\n a.set_transform(self.transData)\n\n a.axes = self\n if a.get_mouseover():\n self._mouseover_set.add(a)\n\n def _gen_axes_patch(self):\n \"\"\"\n Returns\n -------\n Patch\n The patch used to draw the background of the Axes. 
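# Illustrative usage sketch (not from the original file): reading and changing an Axes
# position in figure coordinates with the get_position()/set_position() methods above.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
print(ax.get_position())                 # Bbox of the currently allocated slot
ax.set_position([0.1, 0.1, 0.5, 0.5])    # [left, bottom, width, height] in figure coords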
It is also used\n as the clipping path for any data elements on the Axes.\n\n In the standard Axes, this is a rectangle, but in other projections\n it may not be.\n\n Notes\n -----\n Intended to be overridden by new projection types.\n \"\"\"\n return mpatches.Rectangle((0.0, 0.0), 1.0, 1.0)\n\n def _gen_axes_spines(self, locations=None, offset=0.0, units='inches'):\n \"\"\"\n Returns\n -------\n dict\n Mapping of spine names to `.Line2D` or `.Patch` instances that are\n used to draw Axes spines.\n\n In the standard Axes, spines are single line segments, but in other\n projections they may not be.\n\n Notes\n -----\n Intended to be overridden by new projection types.\n \"\"\"\n return {side: mspines.Spine.linear_spine(self, side)\n for side in ['left', 'right', 'bottom', 'top']}\n\n def sharex(self, other):\n \"\"\"\n Share the x-axis with *other*.\n\n This is equivalent to passing ``sharex=other`` when constructing the\n Axes, and cannot be used if the x-axis is already being shared with\n another Axes.\n \"\"\"\n _api.check_isinstance(_AxesBase, other=other)\n if self._sharex is not None and other is not self._sharex:\n raise ValueError(\"x-axis is already shared\")\n self._shared_axes[\"x\"].join(self, other)\n self._sharex = other\n self.xaxis.major = other.xaxis.major # Ticker instances holding\n self.xaxis.minor = other.xaxis.minor # locator and formatter.\n x0, x1 = other.get_xlim()\n self.set_xlim(x0, x1, emit=False, auto=other.get_autoscalex_on())\n self.xaxis._scale = other.xaxis._scale\n\n def sharey(self, other):\n \"\"\"\n Share the y-axis with *other*.\n\n This is equivalent to passing ``sharey=other`` when constructing the\n Axes, and cannot be used if the y-axis is already being shared with\n another Axes.\n \"\"\"\n _api.check_isinstance(_AxesBase, other=other)\n if self._sharey is not None and other is not self._sharey:\n raise ValueError(\"y-axis is already shared\")\n self._shared_axes[\"y\"].join(self, other)\n self._sharey = other\n self.yaxis.major = other.yaxis.major # Ticker instances holding\n self.yaxis.minor = other.yaxis.minor # locator and formatter.\n y0, y1 = other.get_ylim()\n self.set_ylim(y0, y1, emit=False, auto=other.get_autoscaley_on())\n self.yaxis._scale = other.yaxis._scale\n\n def __clear(self):\n \"\"\"Clear the Axes.\"\"\"\n # The actual implementation of clear() as long as clear() has to be\n # an adapter delegating to the correct implementation.\n # The implementation can move back into clear() when the\n # deprecation on cla() subclassing expires.\n\n # stash the current visibility state\n if hasattr(self, 'patch'):\n patch_visible = self.patch.get_visible()\n else:\n patch_visible = True\n\n xaxis_visible = self.xaxis.get_visible()\n yaxis_visible = self.yaxis.get_visible()\n\n for axis in self._axis_map.values():\n axis.clear() # Also resets the scale to linear.\n for spine in self.spines.values():\n spine.clear()\n\n self.ignore_existing_data_limits = True\n self.callbacks = cbook.CallbackRegistry(\n signals=[\"xlim_changed\", \"ylim_changed\", \"zlim_changed\"])\n\n for name, axis in self._axis_map.items():\n share = getattr(self, f\"_share{name}\")\n if share is not None:\n getattr(self, f\"share{name}\")(share)\n else:\n axis._set_scale(\"linear\")\n axis._set_lim(0, 1, auto=True)\n\n # update the minor locator for x and y axis based on rcParams\n if mpl.rcParams['xtick.minor.visible']:\n self.xaxis.set_minor_locator(mticker.AutoMinorLocator())\n if mpl.rcParams['ytick.minor.visible']:\n 
self.yaxis.set_minor_locator(mticker.AutoMinorLocator())\n\n self._xmargin = mpl.rcParams['axes.xmargin']\n self._ymargin = mpl.rcParams['axes.ymargin']\n self._tight = None\n self._use_sticky_edges = True\n self._update_transScale() # needed?\n\n self._get_lines = _process_plot_var_args(self)\n self._get_patches_for_fill = _process_plot_var_args(self, 'fill')\n\n self._gridOn = mpl.rcParams['axes.grid']\n self._children = []\n self._mouseover_set = _OrderedSet()\n self.child_axes = []\n self._current_image = None # strictly for pyplot via _sci, _gci\n self._projection_init = None # strictly for pyplot.subplot\n self.legend_ = None\n self.containers = []\n\n self.grid(False) # Disable grid on init to use rcParameter\n self.grid(self._gridOn, which=mpl.rcParams['axes.grid.which'],\n axis=mpl.rcParams['axes.grid.axis'])\n props = font_manager.FontProperties(\n size=mpl.rcParams['axes.titlesize'],\n weight=mpl.rcParams['axes.titleweight'])\n\n y = mpl.rcParams['axes.titley']\n if y is None:\n y = 1.0\n self._autotitlepos = True\n else:\n self._autotitlepos = False\n\n self.title = mtext.Text(\n x=0.5, y=y, text='',\n fontproperties=props,\n verticalalignment='baseline',\n horizontalalignment='center',\n )\n self._left_title = mtext.Text(\n x=0.0, y=y, text='',\n fontproperties=props.copy(),\n verticalalignment='baseline',\n horizontalalignment='left', )\n self._right_title = mtext.Text(\n x=1.0, y=y, text='',\n fontproperties=props.copy(),\n verticalalignment='baseline',\n horizontalalignment='right',\n )\n title_offset_points = mpl.rcParams['axes.titlepad']\n # refactor this out so it can be called in ax.set_title if\n # pad argument used...\n self._set_title_offset_trans(title_offset_points)\n\n for _title in (self.title, self._left_title, self._right_title):\n self._set_artist_props(_title)\n\n # The patch draws the background of the Axes. We want this to be below\n # the other artists. We use the frame to draw the edges so we are\n # setting the edgecolor to None.\n self.patch = self._gen_axes_patch()\n self.patch.set_figure(self.figure)\n self.patch.set_facecolor(self._facecolor)\n self.patch.set_edgecolor('none')\n self.patch.set_linewidth(0)\n self.patch.set_transform(self.transAxes)\n\n self.set_axis_on()\n\n self.xaxis.set_clip_path(self.patch)\n self.yaxis.set_clip_path(self.patch)\n\n self._shared_axes[\"x\"].clean()\n self._shared_axes[\"y\"].clean()\n if self._sharex is not None:\n self.xaxis.set_visible(xaxis_visible)\n self.patch.set_visible(patch_visible)\n if self._sharey is not None:\n self.yaxis.set_visible(yaxis_visible)\n self.patch.set_visible(patch_visible)\n\n self.stale = True\n\n def clear(self):\n \"\"\"Clear the Axes.\"\"\"\n # Act as an alias, or as the superclass implementation depending on the\n # subclass implementation.\n if self._subclass_uses_cla:\n self.cla()\n else:\n self.__clear()\n\n def cla(self):\n \"\"\"Clear the Axes.\"\"\"\n # Act as an alias, or as the superclass implementation depending on the\n # subclass implementation.\n if self._subclass_uses_cla:\n self.__clear()\n else:\n self.clear()\n\n class ArtistList(MutableSequence):\n \"\"\"\n A sublist of Axes children based on their type.\n\n The type-specific children sublists will become immutable in\n Matplotlib 3.7. Then, these artist lists will likely be replaced by\n tuples. 
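# Illustrative usage sketch (not from the original file): clear() (and its alias cla())
# resets an Axes in place, removing artists and restoring limits, scales, ticks and the
# grid to their rcParams defaults.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3])
ax.set_title("before clearing")
ax.clear()          # equivalent to ax.cla() for plain Axes subclasses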
Use as if this is a tuple already.\n\n This class exists only for the transition period to warn on the\n deprecated modification of artist lists.\n \"\"\"\n def __init__(self, axes, prop_name, add_name,\n valid_types=None, invalid_types=None):\n \"\"\"\n Parameters\n ----------\n axes : .axes.Axes\n The Axes from which this sublist will pull the children\n Artists.\n prop_name : str\n The property name used to access this sublist from the Axes;\n used to generate deprecation warnings.\n add_name : str\n The method name used to add Artists of this sublist's type to\n the Axes; used to generate deprecation warnings.\n valid_types : list of type, optional\n A list of types that determine which children will be returned\n by this sublist. If specified, then the Artists in the sublist\n must be instances of any of these types. If unspecified, then\n any type of Artist is valid (unless limited by\n *invalid_types*.)\n invalid_types : tuple, optional\n A list of types that determine which children will *not* be\n returned by this sublist. If specified, then Artists in the\n sublist will never be an instance of these types. Otherwise, no\n types will be excluded.\n \"\"\"\n self._axes = axes\n self._prop_name = prop_name\n self._add_name = add_name\n self._type_check = lambda artist: (\n (not valid_types or isinstance(artist, valid_types)) and\n (not invalid_types or not isinstance(artist, invalid_types))\n )\n\n def __repr__(self):\n return f'<Axes.ArtistList of {len(self)} {self._prop_name}>'\n\n def __len__(self):\n return sum(self._type_check(artist)\n for artist in self._axes._children)\n\n def __iter__(self):\n for artist in list(self._axes._children):\n if self._type_check(artist):\n yield artist\n\n def __getitem__(self, key):\n return [artist\n for artist in self._axes._children\n if self._type_check(artist)][key]\n\n def __add__(self, other):\n if isinstance(other, (list, _AxesBase.ArtistList)):\n return [*self, *other]\n return NotImplemented\n\n def __radd__(self, other):\n if isinstance(other, list):\n return other + list(self)\n return NotImplemented\n\n def insert(self, index, item):\n _api.warn_deprecated(\n '3.5',\n name=f'modification of the Axes.{self._prop_name}',\n obj_type='property',\n alternative=f'Axes.{self._add_name}')\n try:\n index = self._axes._children.index(self[index])\n except IndexError:\n index = None\n getattr(self._axes, self._add_name)(item)\n if index is not None:\n # Move new item to the specified index, if there's something to\n # put it before.\n self._axes._children[index:index] = self._axes._children[-1:]\n del self._axes._children[-1]\n\n def __setitem__(self, key, item):\n _api.warn_deprecated(\n '3.5',\n name=f'modification of the Axes.{self._prop_name}',\n obj_type='property',\n alternative=f'Artist.remove() and Axes.f{self._add_name}')\n del self[key]\n if isinstance(key, slice):\n key = key.start\n if not np.iterable(item):\n self.insert(key, item)\n return\n\n try:\n index = self._axes._children.index(self[key])\n except IndexError:\n index = None\n for i, artist in enumerate(item):\n getattr(self._axes, self._add_name)(artist)\n if index is not None:\n # Move new items to the specified index, if there's something\n # to put it before.\n i = -(i + 1)\n self._axes._children[index:index] = self._axes._children[i:]\n del self._axes._children[i:]\n\n def __delitem__(self, key):\n _api.warn_deprecated(\n '3.5',\n name=f'modification of the Axes.{self._prop_name}',\n obj_type='property',\n alternative='Artist.remove()')\n if isinstance(key, slice):\n 
for artist in self[key]:\n artist.remove()\n else:\n self[key].remove()\n\n @property\n def artists(self):\n return self.ArtistList(self, 'artists', 'add_artist', invalid_types=(\n mcoll.Collection, mimage.AxesImage, mlines.Line2D, mpatches.Patch,\n mtable.Table, mtext.Text))\n\n @property\n def collections(self):\n return self.ArtistList(self, 'collections', 'add_collection',\n valid_types=mcoll.Collection)\n\n @property\n def images(self):\n return self.ArtistList(self, 'images', 'add_image',\n valid_types=mimage.AxesImage)\n\n @property\n def lines(self):\n return self.ArtistList(self, 'lines', 'add_line',\n valid_types=mlines.Line2D)\n\n @property\n def patches(self):\n return self.ArtistList(self, 'patches', 'add_patch',\n valid_types=mpatches.Patch)\n\n @property\n def tables(self):\n return self.ArtistList(self, 'tables', 'add_table',\n valid_types=mtable.Table)\n\n @property\n def texts(self):\n return self.ArtistList(self, 'texts', 'add_artist',\n valid_types=mtext.Text)\n\n def get_facecolor(self):\n \"\"\"Get the facecolor of the Axes.\"\"\"\n return self.patch.get_facecolor()\n\n def set_facecolor(self, color):\n \"\"\"\n Set the facecolor of the Axes.\n\n Parameters\n ----------\n color : color\n \"\"\"\n self._facecolor = color\n self.stale = True\n return self.patch.set_facecolor(color)\n\n def _set_title_offset_trans(self, title_offset_points):\n \"\"\"\n Set the offset for the title either from :rc:`axes.titlepad`\n or from set_title kwarg ``pad``.\n \"\"\"\n self.titleOffsetTrans = mtransforms.ScaledTranslation(\n 0.0, title_offset_points / 72,\n self.figure.dpi_scale_trans)\n for _title in (self.title, self._left_title, self._right_title):\n _title.set_transform(self.transAxes + self.titleOffsetTrans)\n _title.set_clip_box(None)\n\n def set_prop_cycle(self, *args, **kwargs):\n \"\"\"\n Set the property cycle of the Axes.\n\n The property cycle controls the style properties such as color,\n marker and linestyle of future plot commands. The style properties\n of data already added to the Axes are not modified.\n\n Call signatures::\n\n set_prop_cycle(cycler)\n set_prop_cycle(label=values[, label2=values2[, ...]])\n set_prop_cycle(label, values)\n\n Form 1 sets given `~cycler.Cycler` object.\n\n Form 2 creates a `~cycler.Cycler` which cycles over one or more\n properties simultaneously and set it as the property cycle of the\n Axes. If multiple properties are given, their value lists must have\n the same length. This is just a shortcut for explicitly creating a\n cycler and passing it to the function, i.e. it's short for\n ``set_prop_cycle(cycler(label=values label2=values2, ...))``.\n\n Form 3 creates a `~cycler.Cycler` for a single property and set it\n as the property cycle of the Axes. This form exists for compatibility\n with the original `cycler.cycler` interface. Its use is discouraged\n in favor of the kwarg form, i.e. ``set_prop_cycle(label=values)``.\n\n Parameters\n ----------\n cycler : Cycler\n Set the given Cycler. *None* resets to the cycle defined by the\n current style.\n\n label : str\n The property key. Must be a valid `.Artist` property.\n For example, 'color' or 'linestyle'. Aliases are allowed,\n such as 'c' for 'color' and 'lw' for 'linewidth'.\n\n values : iterable\n Finite-length iterable of the property values. 
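# Illustrative usage sketch (not from the original file): the type-filtered children
# sublists (ax.lines, ax.patches, ...) are best treated as read-only views, and
# set_facecolor() changes the background patch described above.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
(line,) = ax.plot([0, 1], [0, 1])
print(len(ax.lines), ax.lines[0] is line)   # 1 True
ax.set_facecolor("0.95")                    # light grey Axes background
line.remove()                               # preferred over mutating ax.lines directly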
These values\n are validated and will raise a ValueError if invalid.\n\n See Also\n --------\n matplotlib.rcsetup.cycler\n Convenience function for creating validated cyclers for properties.\n cycler.cycler\n The original function for creating unvalidated cyclers.\n\n Examples\n --------\n Setting the property cycle for a single property:\n\n >>> ax.set_prop_cycle(color=['red', 'green', 'blue'])\n\n Setting the property cycle for simultaneously cycling over multiple\n properties (e.g. red circle, green plus, blue cross):\n\n >>> ax.set_prop_cycle(color=['red', 'green', 'blue'],\n ... marker=['o', '+', 'x'])\n\n \"\"\"\n if args and kwargs:\n raise TypeError(\"Cannot supply both positional and keyword \"\n \"arguments to this method.\")\n # Can't do `args == (None,)` as that crashes cycler.\n if len(args) == 1 and args[0] is None:\n prop_cycle = None\n else:\n prop_cycle = cycler(*args, **kwargs)\n self._get_lines.set_prop_cycle(prop_cycle)\n self._get_patches_for_fill.set_prop_cycle(prop_cycle)\n\n def get_aspect(self):\n \"\"\"\n Return the aspect ratio of the axes scaling.\n\n This is either \"auto\" or a float giving the ratio of y/x-scale.\n \"\"\"\n return self._aspect\n\n def set_aspect(self, aspect, adjustable=None, anchor=None, share=False):\n \"\"\"\n Set the aspect ratio of the axes scaling, i.e. y/x-scale.\n\n Parameters\n ----------\n aspect : {'auto', 'equal'} or float\n Possible values:\n\n - 'auto': fill the position rectangle with data.\n - 'equal': same as ``aspect=1``, i.e. same scaling for x and y.\n - *float*: The displayed size of 1 unit in y-data coordinates will\n be *aspect* times the displayed size of 1 unit in x-data\n coordinates; e.g. for ``aspect=2`` a square in data coordinates\n will be rendered with a height of twice its width.\n\n adjustable : None or {'box', 'datalim'}, optional\n If not ``None``, this defines which parameter will be adjusted to\n meet the required aspect. See `.set_adjustable` for further\n details.\n\n anchor : None or str or (float, float), optional\n If not ``None``, this defines where the Axes will be drawn if there\n is extra space due to aspect constraints. 
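# Illustrative usage sketch (not from the original file): the property cycle installed
# by set_prop_cycle() is consumed by subsequent plotting calls.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_prop_cycle(color=['crimson', 'navy'], linestyle=['-', '--'])
ax.plot([0, 1], [0, 1])        # crimson, solid
ax.plot([0, 1], [1, 0])        # navy, dashed
ax.plot([0, 1], [0.5, 0.5])    # cycle wraps around: crimson, solid again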
The most common way to\n to specify the anchor are abbreviations of cardinal directions:\n\n ===== =====================\n value description\n ===== =====================\n 'C' centered\n 'SW' lower left corner\n 'S' middle of bottom edge\n 'SE' lower right corner\n etc.\n ===== =====================\n\n See `~.Axes.set_anchor` for further details.\n\n share : bool, default: False\n If ``True``, apply the settings to all shared Axes.\n\n See Also\n --------\n matplotlib.axes.Axes.set_adjustable\n Set how the Axes adjusts to achieve the required aspect ratio.\n matplotlib.axes.Axes.set_anchor\n Set the position in case of extra space.\n \"\"\"\n if cbook._str_equal(aspect, 'equal'):\n aspect = 1\n if not cbook._str_equal(aspect, 'auto'):\n aspect = float(aspect) # raise ValueError if necessary\n if aspect <= 0 or not np.isfinite(aspect):\n raise ValueError(\"aspect must be finite and positive \")\n\n if share:\n axes = {sibling for name in self._axis_names\n for sibling in self._shared_axes[name].get_siblings(self)}\n else:\n axes = [self]\n\n for ax in axes:\n ax._aspect = aspect\n\n if adjustable is None:\n adjustable = self._adjustable\n self.set_adjustable(adjustable, share=share) # Handle sharing.\n\n if anchor is not None:\n self.set_anchor(anchor, share=share)\n self.stale = True\n\n def get_adjustable(self):\n \"\"\"\n Return whether the Axes will adjust its physical dimension ('box') or\n its data limits ('datalim') to achieve the desired aspect ratio.\n\n See Also\n --------\n matplotlib.axes.Axes.set_adjustable\n Set how the Axes adjusts to achieve the required aspect ratio.\n matplotlib.axes.Axes.set_aspect\n For a description of aspect handling.\n \"\"\"\n return self._adjustable\n\n def set_adjustable(self, adjustable, share=False):\n \"\"\"\n Set how the Axes adjusts to achieve the required aspect ratio.\n\n Parameters\n ----------\n adjustable : {'box', 'datalim'}\n If 'box', change the physical dimensions of the Axes.\n If 'datalim', change the ``x`` or ``y`` data limits.\n\n share : bool, default: False\n If ``True``, apply the settings to all shared Axes.\n\n See Also\n --------\n matplotlib.axes.Axes.set_aspect\n For a description of aspect handling.\n\n Notes\n -----\n Shared Axes (of which twinned Axes are a special case)\n impose restrictions on how aspect ratios can be imposed.\n For twinned Axes, use 'datalim'. For Axes that share both\n x and y, use 'box'. Otherwise, either 'datalim' or 'box'\n may be used. These limitations are partly a requirement\n to avoid over-specification, and partly a result of the\n particular implementation we are currently using, in\n which the adjustments for aspect ratios are done sequentially\n and independently on each Axes as it is drawn.\n \"\"\"\n _api.check_in_list([\"box\", \"datalim\"], adjustable=adjustable)\n if share:\n axs = {sibling for name in self._axis_names\n for sibling in self._shared_axes[name].get_siblings(self)}\n else:\n axs = [self]\n if (adjustable == \"datalim\"\n and any(getattr(ax.get_data_ratio, \"__func__\", None)\n != _AxesBase.get_data_ratio\n for ax in axs)):\n # Limits adjustment by apply_aspect assumes that the axes' aspect\n # ratio can be computed from the data limits and scales.\n raise ValueError(\"Cannot set Axes adjustable to 'datalim' for \"\n \"Axes which override 'get_data_ratio'\")\n for ax in axs:\n ax._adjustable = adjustable\n self.stale = True\n\n def get_box_aspect(self):\n \"\"\"\n Return the Axes box aspect, i.e. the ratio of height to width.\n\n The box aspect is ``None`` (i.e. 
chosen depending on the available\n figure space) unless explicitly specified.\n\n See Also\n --------\n matplotlib.axes.Axes.set_box_aspect\n for a description of box aspect.\n matplotlib.axes.Axes.set_aspect\n for a description of aspect handling.\n \"\"\"\n return self._box_aspect\n\n def set_box_aspect(self, aspect=None):\n \"\"\"\n Set the Axes box aspect, i.e. the ratio of height to width.\n\n This defines the aspect of the Axes in figure space and is not to be\n confused with the data aspect (see `~.Axes.set_aspect`).\n\n Parameters\n ----------\n aspect : float or None\n Changes the physical dimensions of the Axes, such that the ratio\n of the Axes height to the Axes width in physical units is equal to\n *aspect*. Defining a box aspect will change the *adjustable*\n property to 'datalim' (see `~.Axes.set_adjustable`).\n\n *None* will disable a fixed box aspect so that height and width\n of the Axes are chosen independently.\n\n See Also\n --------\n matplotlib.axes.Axes.set_aspect\n for a description of aspect handling.\n \"\"\"\n axs = {*self._twinned_axes.get_siblings(self),\n *self._twinned_axes.get_siblings(self)}\n\n if aspect is not None:\n aspect = float(aspect)\n # when box_aspect is set to other than ´None`,\n # adjustable must be \"datalim\"\n for ax in axs:\n ax.set_adjustable(\"datalim\")\n\n for ax in axs:\n ax._box_aspect = aspect\n ax.stale = True\n\n def get_anchor(self):\n \"\"\"\n Get the anchor location.\n\n See Also\n --------\n matplotlib.axes.Axes.set_anchor\n for a description of the anchor.\n matplotlib.axes.Axes.set_aspect\n for a description of aspect handling.\n \"\"\"\n return self._anchor\n\n def set_anchor(self, anchor, share=False):\n \"\"\"\n Define the anchor location.\n\n The actual drawing area (active position) of the Axes may be smaller\n than the Bbox (original position) when a fixed aspect is required. The\n anchor defines where the drawing area will be located within the\n available space.\n\n Parameters\n ----------\n anchor : (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...}\n Either an (*x*, *y*) pair of relative coordinates (0 is left or\n bottom, 1 is right or top), 'C' (center), or a cardinal direction\n ('SW', southwest, is bottom left, etc.). str inputs are shorthands\n for (*x*, *y*) coordinates, as shown in the following table::\n\n .. 
code-block:: none\n\n +-----------------+-----------------+-----------------+\n | 'NW' (0.0, 1.0) | 'N' (0.5, 1.0) | 'NE' (1.0, 1.0) |\n +-----------------+-----------------+-----------------+\n | 'W' (0.0, 0.5) | 'C' (0.5, 0.5) | 'E' (1.0, 0.5) |\n +-----------------+-----------------+-----------------+\n | 'SW' (0.0, 0.0) | 'S' (0.5, 0.0) | 'SE' (1.0, 0.0) |\n +-----------------+-----------------+-----------------+\n\n share : bool, default: False\n If ``True``, apply the settings to all shared Axes.\n\n See Also\n --------\n matplotlib.axes.Axes.set_aspect\n for a description of aspect handling.\n \"\"\"\n if not (anchor in mtransforms.Bbox.coefs or len(anchor) == 2):\n raise ValueError('argument must be among %s' %\n ', '.join(mtransforms.Bbox.coefs))\n if share:\n axes = {sibling for name in self._axis_names\n for sibling in self._shared_axes[name].get_siblings(self)}\n else:\n axes = [self]\n for ax in axes:\n ax._anchor = anchor\n\n self.stale = True\n\n def get_data_ratio(self):\n \"\"\"\n Return the aspect ratio of the scaled data.\n\n Notes\n -----\n This method is intended to be overridden by new projection types.\n \"\"\"\n txmin, txmax = self.xaxis.get_transform().transform(self.get_xbound())\n tymin, tymax = self.yaxis.get_transform().transform(self.get_ybound())\n xsize = max(abs(txmax - txmin), 1e-30)\n ysize = max(abs(tymax - tymin), 1e-30)\n return ysize / xsize\n\n def apply_aspect(self, position=None):\n \"\"\"\n Adjust the Axes for a specified data aspect ratio.\n\n Depending on `.get_adjustable` this will modify either the\n Axes box (position) or the view limits. In the former case,\n `~matplotlib.axes.Axes.get_anchor` will affect the position.\n\n Parameters\n ----------\n position : None or .Bbox\n If not ``None``, this defines the position of the\n Axes within the figure as a Bbox. See `~.Axes.get_position`\n for further details.\n\n Notes\n -----\n This is called automatically when each Axes is drawn. 
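# Illustrative usage sketch (not from the original file): with a fixed data aspect, the
# anchor decides where the shrunken Axes sits inside its allocated position;
# apply_aspect() runs automatically as part of drawing.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 2])
ax.set_aspect(2)       # 1 unit in y is drawn twice as large as 1 unit in x
ax.set_anchor('SW')    # keep the Axes in the lower-left corner of its slot
fig.canvas.draw()      # apply_aspect() is called during the draw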
You may need\n to call it yourself if you need to update the Axes position and/or\n view limits before the Figure is drawn.\n\n See Also\n --------\n matplotlib.axes.Axes.set_aspect\n For a description of aspect ratio handling.\n matplotlib.axes.Axes.set_adjustable\n Set how the Axes adjusts to achieve the required aspect ratio.\n matplotlib.axes.Axes.set_anchor\n Set the position in case of extra space.\n \"\"\"\n if position is None:\n position = self.get_position(original=True)\n\n aspect = self.get_aspect()\n\n if aspect == 'auto' and self._box_aspect is None:\n self._set_position(position, which='active')\n return\n\n trans = self.get_figure().transSubfigure\n bb = mtransforms.Bbox.unit().transformed(trans)\n # this is the physical aspect of the panel (or figure):\n fig_aspect = bb.height / bb.width\n\n if self._adjustable == 'box':\n if self in self._twinned_axes:\n raise RuntimeError(\"Adjustable 'box' is not allowed in a \"\n \"twinned Axes; use 'datalim' instead\")\n box_aspect = aspect * self.get_data_ratio()\n pb = position.frozen()\n pb1 = pb.shrunk_to_aspect(box_aspect, pb, fig_aspect)\n self._set_position(pb1.anchored(self.get_anchor(), pb), 'active')\n return\n\n # The following is only seen if self._adjustable == 'datalim'\n if self._box_aspect is not None:\n pb = position.frozen()\n pb1 = pb.shrunk_to_aspect(self._box_aspect, pb, fig_aspect)\n self._set_position(pb1.anchored(self.get_anchor(), pb), 'active')\n if aspect == \"auto\":\n return\n\n # reset active to original in case it had been changed by prior use\n # of 'box'\n if self._box_aspect is None:\n self._set_position(position, which='active')\n else:\n position = pb1.anchored(self.get_anchor(), pb)\n\n x_trf = self.xaxis.get_transform()\n y_trf = self.yaxis.get_transform()\n xmin, xmax = x_trf.transform(self.get_xbound())\n ymin, ymax = y_trf.transform(self.get_ybound())\n xsize = max(abs(xmax - xmin), 1e-30)\n ysize = max(abs(ymax - ymin), 1e-30)\n\n box_aspect = fig_aspect * (position.height / position.width)\n data_ratio = box_aspect / aspect\n\n y_expander = data_ratio * xsize / ysize - 1\n # If y_expander > 0, the dy/dx viewLim ratio needs to increase\n if abs(y_expander) < 0.005:\n return\n\n dL = self.dataLim\n x0, x1 = x_trf.transform(dL.intervalx)\n y0, y1 = y_trf.transform(dL.intervaly)\n xr = 1.05 * (x1 - x0)\n yr = 1.05 * (y1 - y0)\n\n xmarg = xsize - xr\n ymarg = ysize - yr\n Ysize = data_ratio * xsize\n Xsize = ysize / data_ratio\n Xmarg = Xsize - xr\n Ymarg = Ysize - yr\n # Setting these targets to, e.g., 0.05*xr does not seem to help.\n xm = 0\n ym = 0\n\n shared_x = self in self._shared_axes[\"x\"]\n shared_y = self in self._shared_axes[\"y\"]\n\n if shared_x and shared_y:\n raise RuntimeError(\"set_aspect(..., adjustable='datalim') or \"\n \"axis('equal') are not allowed when both axes \"\n \"are shared. 
Try set_aspect(..., \"\n \"adjustable='box').\")\n\n # If y is shared, then we are only allowed to change x, etc.\n if shared_y:\n adjust_y = False\n else:\n if xmarg > xm and ymarg > ym:\n adjy = ((Ymarg > 0 and y_expander < 0) or\n (Xmarg < 0 and y_expander > 0))\n else:\n adjy = y_expander > 0\n adjust_y = shared_x or adjy # (Ymarg > xmarg)\n\n if adjust_y:\n yc = 0.5 * (ymin + ymax)\n y0 = yc - Ysize / 2.0\n y1 = yc + Ysize / 2.0\n self.set_ybound(y_trf.inverted().transform([y0, y1]))\n else:\n xc = 0.5 * (xmin + xmax)\n x0 = xc - Xsize / 2.0\n x1 = xc + Xsize / 2.0\n self.set_xbound(x_trf.inverted().transform([x0, x1]))\n\n def axis(self, *args, emit=True, **kwargs):\n \"\"\"\n Convenience method to get or set some axis properties.\n\n Call signatures::\n\n xmin, xmax, ymin, ymax = axis()\n xmin, xmax, ymin, ymax = axis([xmin, xmax, ymin, ymax])\n xmin, xmax, ymin, ymax = axis(option)\n xmin, xmax, ymin, ymax = axis(**kwargs)\n\n Parameters\n ----------\n xmin, xmax, ymin, ymax : float, optional\n The axis limits to be set. This can also be achieved using ::\n\n ax.set(xlim=(xmin, xmax), ylim=(ymin, ymax))\n\n option : bool or str\n If a bool, turns axis lines and labels on or off. If a string,\n possible values are:\n\n ======== ==========================================================\n Value Description\n ======== ==========================================================\n 'on' Turn on axis lines and labels. Same as ``True``.\n 'off' Turn off axis lines and labels. Same as ``False``.\n 'equal' Set equal scaling (i.e., make circles circular) by\n changing axis limits. This is the same as\n ``ax.set_aspect('equal', adjustable='datalim')``.\n Explicit data limits may not be respected in this case.\n 'scaled' Set equal scaling (i.e., make circles circular) by\n changing dimensions of the plot box. This is the same as\n ``ax.set_aspect('equal', adjustable='box', anchor='C')``.\n Additionally, further autoscaling will be disabled.\n 'tight' Set limits just large enough to show all data, then\n disable further autoscaling.\n 'auto' Automatic scaling (fill plot box with data).\n 'image' 'scaled' with axis limits equal to data limits.\n 'square' Square plot; similar to 'scaled', but initially forcing\n ``xmax-xmin == ymax-ymin``.\n ======== ==========================================================\n\n emit : bool, default: True\n Whether observers are notified of the axis limit change.\n This option is passed on to `~.Axes.set_xlim` and\n `~.Axes.set_ylim`.\n\n Returns\n -------\n xmin, xmax, ymin, ymax : float\n The axis limits.\n\n See Also\n --------\n matplotlib.axes.Axes.set_xlim\n matplotlib.axes.Axes.set_ylim\n \"\"\"\n if len(args) > 1:\n raise TypeError(\"axis() takes 0 or 1 positional arguments but \"\n f\"{len(args)} were given\")\n elif len(args) == 1 and isinstance(args[0], (str, bool)):\n s = args[0]\n if s is True:\n s = 'on'\n if s is False:\n s = 'off'\n s = s.lower()\n if s == 'on':\n self.set_axis_on()\n elif s == 'off':\n self.set_axis_off()\n elif s in ('equal', 'tight', 'scaled', 'auto', 'image', 'square'):\n self.set_autoscale_on(True)\n self.set_aspect('auto')\n self.autoscale_view(tight=False)\n if s == 'equal':\n self.set_aspect('equal', adjustable='datalim')\n elif s == 'scaled':\n self.set_aspect('equal', adjustable='box', anchor='C')\n self.set_autoscale_on(False) # Req. 
by Mark Bakker\n elif s == 'tight':\n self.autoscale_view(tight=True)\n self.set_autoscale_on(False)\n elif s == 'image':\n self.autoscale_view(tight=True)\n self.set_autoscale_on(False)\n self.set_aspect('equal', adjustable='box', anchor='C')\n elif s == 'square':\n self.set_aspect('equal', adjustable='box', anchor='C')\n self.set_autoscale_on(False)\n xlim = self.get_xlim()\n ylim = self.get_ylim()\n edge_size = max(np.diff(xlim), np.diff(ylim))[0]\n self.set_xlim([xlim[0], xlim[0] + edge_size],\n emit=emit, auto=False)\n self.set_ylim([ylim[0], ylim[0] + edge_size],\n emit=emit, auto=False)\n else:\n raise ValueError(f\"Unrecognized string {s!r} to axis; \"\n \"try 'on' or 'off'\")\n else:\n if len(args) == 1:\n limits = args[0]\n try:\n xmin, xmax, ymin, ymax = limits\n except (TypeError, ValueError) as err:\n raise TypeError('the first argument to axis() must be an '\n 'iterable of the form '\n '[xmin, xmax, ymin, ymax]') from err\n else:\n xmin = kwargs.pop('xmin', None)\n xmax = kwargs.pop('xmax', None)\n ymin = kwargs.pop('ymin', None)\n ymax = kwargs.pop('ymax', None)\n xauto = (None # Keep autoscale state as is.\n if xmin is None and xmax is None\n else False) # Turn off autoscale.\n yauto = (None\n if ymin is None and ymax is None\n else False)\n self.set_xlim(xmin, xmax, emit=emit, auto=xauto)\n self.set_ylim(ymin, ymax, emit=emit, auto=yauto)\n if kwargs:\n raise TypeError(f\"axis() got an unexpected keyword argument \"\n f\"'{next(iter(kwargs))}'\")\n return (*self.get_xlim(), *self.get_ylim())\n\n def get_legend(self):\n \"\"\"Return the `.Legend` instance, or None if no legend is defined.\"\"\"\n return self.legend_\n\n def get_images(self):\n r\"\"\"Return a list of `.AxesImage`\\s contained by the Axes.\"\"\"\n return cbook.silent_list('AxesImage', self.images)\n\n def get_lines(self):\n \"\"\"Return a list of lines contained by the Axes.\"\"\"\n return cbook.silent_list('Line2D', self.lines)\n\n def get_xaxis(self):\n \"\"\"\n [*Discouraged*] Return the XAxis instance.\n\n .. admonition:: Discouraged\n\n The use of this function is discouraged. You should instead\n directly access the attribute ``ax.xaxis``.\n \"\"\"\n return self.xaxis\n\n def get_yaxis(self):\n \"\"\"\n [*Discouraged*] Return the YAxis instance.\n\n .. admonition:: Discouraged\n\n The use of this function is discouraged. You should instead\n directly access the attribute ``ax.yaxis``.\n \"\"\"\n return self.yaxis\n\n get_xgridlines = _axis_method_wrapper(\"xaxis\", \"get_gridlines\")\n get_xticklines = _axis_method_wrapper(\"xaxis\", \"get_ticklines\")\n get_ygridlines = _axis_method_wrapper(\"yaxis\", \"get_gridlines\")\n get_yticklines = _axis_method_wrapper(\"yaxis\", \"get_ticklines\")\n\n # Adding and tracking artists\n\n def _sci(self, im):\n \"\"\"\n Set the current image.\n\n This image will be the target of colormap functions like\n ``pyplot.viridis``, and other functions such as `~.pyplot.clim`. 
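# Illustrative usage sketch (not from the original file): axis() wraps the common
# limit/scaling operations shown above in a single call.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
print(ax.axis([0, 2, 0, 5]))   # set xmin, xmax, ymin, ymax; returns the new limits
ax.axis('equal')               # same scaling for x and y via adjustable='datalim'
ax.axis('off')                 # hide axis lines, ticks and labels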
The\n current image is an attribute of the current Axes.\n \"\"\"\n _api.check_isinstance(\n (mpl.contour.ContourSet, mcoll.Collection, mimage.AxesImage),\n im=im)\n if isinstance(im, mpl.contour.ContourSet):\n if im.collections[0] not in self._children:\n raise ValueError(\"ContourSet must be in current Axes\")\n elif im not in self._children:\n raise ValueError(\"Argument must be an image, collection, or \"\n \"ContourSet in this Axes\")\n self._current_image = im\n\n def _gci(self):\n \"\"\"Helper for `~matplotlib.pyplot.gci`; do not use elsewhere.\"\"\"\n return self._current_image\n\n def has_data(self):\n \"\"\"\n Return whether any artists have been added to the Axes.\n\n This should not be used to determine whether the *dataLim*\n need to be updated, and may not actually be useful for\n anything.\n \"\"\"\n return any(isinstance(a, (mcoll.Collection, mimage.AxesImage,\n mlines.Line2D, mpatches.Patch))\n for a in self._children)\n\n def _deprecate_noninstance(self, _name, _types, **kwargs):\n \"\"\"\n For each *key, value* pair in *kwargs*, check that *value* is an\n instance of one of *_types*; if not, raise an appropriate deprecation.\n \"\"\"\n for key, value in kwargs.items():\n if not isinstance(value, _types):\n _api.warn_deprecated(\n '3.5', name=_name,\n message=f'Passing argument *{key}* of unexpected type '\n f'{type(value).__qualname__} to %(name)s which only '\n f'accepts {_types} is deprecated since %(since)s and will '\n 'become an error %(removal)s.')\n\n def add_artist(self, a):\n \"\"\"\n Add an `.Artist` to the Axes; return the artist.\n\n Use `add_artist` only for artists for which there is no dedicated\n \"add\" method; and if necessary, use a method such as `update_datalim`\n to manually update the dataLim if the artist is to be included in\n autoscaling.\n\n If no ``transform`` has been specified when creating the artist (e.g.\n ``artist.get_transform() == None``) then the transform is set to\n ``ax.transData``.\n \"\"\"\n a.axes = self\n self._children.append(a)\n a._remove_method = self._children.remove\n self._set_artist_props(a)\n a.set_clip_path(self.patch)\n self.stale = True\n return a\n\n def add_child_axes(self, ax):\n \"\"\"\n Add an `.AxesBase` to the Axes' children; return the child Axes.\n\n This is the lowlevel version. 
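# Illustrative usage sketch (not from the original file): add_artist() is for artists
# without a dedicated add_* method; a classic use is keeping a first legend alive while
# a second one is drawn.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
(l1,) = ax.plot([0, 1], [0, 1], label='rising')
(l2,) = ax.plot([0, 1], [1, 0], label='falling')
first = ax.legend(handles=[l1], loc='upper left')
ax.add_artist(first)                      # otherwise the next legend() call replaces it
ax.legend(handles=[l2], loc='lower right')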
See `.axes.Axes.inset_axes`.\n \"\"\"\n\n # normally Axes have themselves as the Axes, but these need to have\n # their parent...\n # Need to bypass the getter...\n ax._axes = self\n ax.stale_callback = martist._stale_axes_callback\n\n self.child_axes.append(ax)\n ax._remove_method = self.child_axes.remove\n self.stale = True\n return ax\n\n def add_collection(self, collection, autolim=True):\n \"\"\"\n Add a `.Collection` to the Axes; return the collection.\n \"\"\"\n self._deprecate_noninstance('add_collection', mcoll.Collection,\n collection=collection)\n label = collection.get_label()\n if not label:\n collection.set_label(f'_child{len(self._children)}')\n self._children.append(collection)\n collection._remove_method = self._children.remove\n self._set_artist_props(collection)\n\n if collection.get_clip_path() is None:\n collection.set_clip_path(self.patch)\n\n if autolim:\n # Make sure viewLim is not stale (mostly to match\n # pre-lazy-autoscale behavior, which is not really better).\n self._unstale_viewLim()\n datalim = collection.get_datalim(self.transData)\n points = datalim.get_points()\n if not np.isinf(datalim.minpos).all():\n # By definition, if minpos (minimum positive value) is set\n # (i.e., non-inf), then min(points) <= minpos <= max(points),\n # and minpos would be superfluous. However, we add minpos to\n # the call so that self.dataLim will update its own minpos.\n # This ensures that log scales see the correct minimum.\n points = np.concatenate([points, [datalim.minpos]])\n self.update_datalim(points)\n\n self.stale = True\n return collection\n\n def add_image(self, image):\n \"\"\"\n Add an `.AxesImage` to the Axes; return the image.\n \"\"\"\n self._deprecate_noninstance('add_image', mimage.AxesImage, image=image)\n self._set_artist_props(image)\n if not image.get_label():\n image.set_label(f'_child{len(self._children)}')\n self._children.append(image)\n image._remove_method = self._children.remove\n self.stale = True\n return image\n\n def _update_image_limits(self, image):\n xmin, xmax, ymin, ymax = image.get_extent()\n self.axes.update_datalim(((xmin, ymin), (xmax, ymax)))\n\n def add_line(self, line):\n \"\"\"\n Add a `.Line2D` to the Axes; return the line.\n \"\"\"\n self._deprecate_noninstance('add_line', mlines.Line2D, line=line)\n self._set_artist_props(line)\n if line.get_clip_path() is None:\n line.set_clip_path(self.patch)\n\n self._update_line_limits(line)\n if not line.get_label():\n line.set_label(f'_child{len(self._children)}')\n self._children.append(line)\n line._remove_method = self._children.remove\n self.stale = True\n return line\n\n def _add_text(self, txt):\n \"\"\"\n Add a `.Text` to the Axes; return the text.\n \"\"\"\n self._deprecate_noninstance('_add_text', mtext.Text, txt=txt)\n self._set_artist_props(txt)\n self._children.append(txt)\n txt._remove_method = self._children.remove\n self.stale = True\n return txt\n\n def _update_line_limits(self, line):\n \"\"\"\n Figures out the data limit of the given line, updating self.dataLim.\n \"\"\"\n path = line.get_path()\n if path.vertices.size == 0:\n return\n\n line_trf = line.get_transform()\n\n if line_trf == self.transData:\n data_path = path\n elif any(line_trf.contains_branch_seperately(self.transData)):\n # Compute the transform from line coordinates to data coordinates.\n trf_to_data = line_trf - self.transData\n # If transData is affine we can use the cached non-affine component\n # of line's path (since the non-affine part of line_trf is\n # entirely encapsulated in trf_to_data).\n if 
self.transData.is_affine:\n line_trans_path = line._get_transformed_path()\n na_path, _ = line_trans_path.get_transformed_path_and_affine()\n data_path = trf_to_data.transform_path_affine(na_path)\n else:\n data_path = trf_to_data.transform_path(path)\n else:\n # For backwards compatibility we update the dataLim with the\n # coordinate range of the given path, even though the coordinate\n # systems are completely different. This may occur in situations\n # such as when ax.transAxes is passed through for absolute\n # positioning.\n data_path = path\n\n if not data_path.vertices.size:\n return\n\n updatex, updatey = line_trf.contains_branch_seperately(self.transData)\n if self.name != \"rectilinear\":\n # This block is mostly intended to handle axvline in polar plots,\n # for which updatey would otherwise be True.\n if updatex and line_trf == self.get_yaxis_transform():\n updatex = False\n if updatey and line_trf == self.get_xaxis_transform():\n updatey = False\n self.dataLim.update_from_path(data_path,\n self.ignore_existing_data_limits,\n updatex=updatex, updatey=updatey)\n self.ignore_existing_data_limits = False\n\n def add_patch(self, p):\n \"\"\"\n Add a `.Patch` to the Axes; return the patch.\n \"\"\"\n self._deprecate_noninstance('add_patch', mpatches.Patch, p=p)\n self._set_artist_props(p)\n if p.get_clip_path() is None:\n p.set_clip_path(self.patch)\n self._update_patch_limits(p)\n self._children.append(p)\n p._remove_method = self._children.remove\n return p\n\n def _update_patch_limits(self, patch):\n \"\"\"Update the data limits for the given patch.\"\"\"\n # hist can add zero height Rectangles, which is useful to keep\n # the bins, counts and patches lined up, but it throws off log\n # scaling. We'll ignore rects with zero height or width in\n # the auto-scaling\n\n # cannot check for '==0' since unitized data may not compare to zero\n # issue #2150 - we update the limits if patch has non zero width\n # or height.\n if (isinstance(patch, mpatches.Rectangle) and\n ((not patch.get_width()) and (not patch.get_height()))):\n return\n p = patch.get_path()\n # Get all vertices on the path\n # Loop through each segment to get extrema for Bezier curve sections\n vertices = []\n for curve, code in p.iter_bezier(simplify=False):\n # Get distance along the curve of any extrema\n _, dzeros = curve.axis_aligned_extrema()\n # Calculate vertices of start, end and any extrema in between\n vertices.append(curve([0, *dzeros, 1]))\n\n if len(vertices):\n vertices = np.row_stack(vertices)\n\n patch_trf = patch.get_transform()\n updatex, updatey = patch_trf.contains_branch_seperately(self.transData)\n if not (updatex or updatey):\n return\n if self.name != \"rectilinear\":\n # As in _update_line_limits, but for axvspan.\n if updatex and patch_trf == self.get_yaxis_transform():\n updatex = False\n if updatey and patch_trf == self.get_xaxis_transform():\n updatey = False\n trf_to_data = patch_trf - self.transData\n xys = trf_to_data.transform(vertices)\n self.update_datalim(xys, updatex=updatex, updatey=updatey)\n\n def add_table(self, tab):\n \"\"\"\n Add a `.Table` to the Axes; return the table.\n \"\"\"\n self._deprecate_noninstance('add_table', mtable.Table, tab=tab)\n self._set_artist_props(tab)\n self._children.append(tab)\n tab.set_clip_path(self.patch)\n tab._remove_method = self._children.remove\n return tab\n\n def add_container(self, container):\n \"\"\"\n Add a `.Container` to the Axes' containers; return the container.\n \"\"\"\n label = container.get_label()\n if not label:\n 
container.set_label('_container%d' % len(self.containers))\n self.containers.append(container)\n container._remove_method = self.containers.remove\n return container\n\n def _unit_change_handler(self, axis_name, event=None):\n \"\"\"\n Process axis units changes: requests updates to data and view limits.\n \"\"\"\n if event is None: # Allow connecting `self._unit_change_handler(name)`\n return functools.partial(\n self._unit_change_handler, axis_name, event=object())\n _api.check_in_list(self._axis_map, axis_name=axis_name)\n for line in self.lines:\n line.recache_always()\n self.relim()\n self._request_autoscale_view(axis_name)\n\n def relim(self, visible_only=False):\n \"\"\"\n Recompute the data limits based on current artists.\n\n At present, `.Collection` instances are not supported.\n\n Parameters\n ----------\n visible_only : bool, default: False\n Whether to exclude invisible artists.\n \"\"\"\n # Collections are deliberately not supported (yet); see\n # the TODO note in artists.py.\n self.dataLim.ignore(True)\n self.dataLim.set_points(mtransforms.Bbox.null().get_points())\n self.ignore_existing_data_limits = True\n\n for artist in self._children:\n if not visible_only or artist.get_visible():\n if isinstance(artist, mlines.Line2D):\n self._update_line_limits(artist)\n elif isinstance(artist, mpatches.Patch):\n self._update_patch_limits(artist)\n elif isinstance(artist, mimage.AxesImage):\n self._update_image_limits(artist)\n\n def update_datalim(self, xys, updatex=True, updatey=True):\n \"\"\"\n Extend the `~.Axes.dataLim` Bbox to include the given points.\n\n If no data is set currently, the Bbox will ignore its limits and set\n the bound to be the bounds of the xydata (*xys*). Otherwise, it will\n compute the bounds of the union of its current data and the data in\n *xys*.\n\n Parameters\n ----------\n xys : 2D array-like\n The points to include in the data limits Bbox. This can be either\n a list of (x, y) tuples or a Nx2 array.\n\n updatex, updatey : bool, default: True\n Whether to update the x/y limits.\n \"\"\"\n xys = np.asarray(xys)\n if not np.any(np.isfinite(xys)):\n return\n self.dataLim.update_from_data_xy(xys, self.ignore_existing_data_limits,\n updatex=updatex, updatey=updatey)\n self.ignore_existing_data_limits = False\n\n def _process_unit_info(self, datasets=None, kwargs=None, *, convert=True):\n \"\"\"\n Set axis units based on *datasets* and *kwargs*, and optionally apply\n unit conversions to *datasets*.\n\n Parameters\n ----------\n datasets : list\n List of (axis_name, dataset) pairs (where the axis name is defined\n as in `._axis_map`). Individual datasets can also be None\n (which gets passed through).\n kwargs : dict\n Other parameters from which unit info (i.e., the *xunits*,\n *yunits*, *zunits* (for 3D Axes), *runits* and *thetaunits* (for\n polar) entries) is popped, if present. Note that this dict is\n mutated in-place!\n convert : bool, default: True\n Whether to return the original datasets or the converted ones.\n\n Returns\n -------\n list\n Either the original datasets if *convert* is False, or the\n converted ones if *convert* is True (the default).\n \"\"\"\n # The API makes datasets a list of pairs rather than an axis_name to\n # dataset mapping because it is sometimes necessary to process multiple\n # datasets for a single axis, and concatenating them may be tricky\n # (e.g. 
if some are scalars, etc.).\n datasets = datasets or []\n kwargs = kwargs or {}\n axis_map = self._axis_map\n for axis_name, data in datasets:\n try:\n axis = axis_map[axis_name]\n except KeyError:\n raise ValueError(f\"Invalid axis name: {axis_name!r}\") from None\n # Update from data if axis is already set but no unit is set yet.\n if axis is not None and data is not None and not axis.have_units():\n axis.update_units(data)\n for axis_name, axis in axis_map.items():\n # Return if no axis is set.\n if axis is None:\n continue\n # Check for units in the kwargs, and if present update axis.\n units = kwargs.pop(f\"{axis_name}units\", axis.units)\n if self.name == \"polar\":\n # Special case: polar supports \"thetaunits\"/\"runits\".\n polar_units = {\"x\": \"thetaunits\", \"y\": \"runits\"}\n units = kwargs.pop(polar_units[axis_name], units)\n if units != axis.units and units is not None:\n axis.set_units(units)\n # If the units being set imply a different converter,\n # we need to update again.\n for dataset_axis_name, data in datasets:\n if dataset_axis_name == axis_name and data is not None:\n axis.update_units(data)\n return [axis_map[axis_name].convert_units(data)\n if convert and data is not None else data\n for axis_name, data in datasets]\n\n def in_axes(self, mouseevent):\n \"\"\"\n Return whether the given event (in display coords) is in the Axes.\n \"\"\"\n return self.patch.contains(mouseevent)[0]\n\n get_autoscalex_on = _axis_method_wrapper(\"xaxis\", \"_get_autoscale_on\")\n get_autoscaley_on = _axis_method_wrapper(\"yaxis\", \"_get_autoscale_on\")\n set_autoscalex_on = _axis_method_wrapper(\"xaxis\", \"_set_autoscale_on\")\n set_autoscaley_on = _axis_method_wrapper(\"yaxis\", \"_set_autoscale_on\")\n\n def get_autoscale_on(self):\n \"\"\"Return True if each axis is autoscaled, False otherwise.\"\"\"\n return all(axis._get_autoscale_on()\n for axis in self._axis_map.values())\n\n def set_autoscale_on(self, b):\n \"\"\"\n Set whether autoscaling is applied to each axis on the next draw or\n call to `.Axes.autoscale_view`.\n\n Parameters\n ----------\n b : bool\n \"\"\"\n for axis in self._axis_map.values():\n axis._set_autoscale_on(b)\n\n @property\n def use_sticky_edges(self):\n \"\"\"\n When autoscaling, whether to obey all `Artist.sticky_edges`.\n\n Default is ``True``.\n\n Setting this to ``False`` ensures that the specified margins\n will be applied, even if the plot includes an image, for\n example, which would otherwise force a view limit to coincide\n with its data limit.\n\n The changing this property does not change the plot until\n `autoscale` or `autoscale_view` is called.\n \"\"\"\n return self._use_sticky_edges\n\n @use_sticky_edges.setter\n def use_sticky_edges(self, b):\n self._use_sticky_edges = bool(b)\n # No effect until next autoscaling, which will mark the Axes as stale.\n\n def set_xmargin(self, m):\n \"\"\"\n Set padding of X data limits prior to autoscaling.\n\n *m* times the data interval will be added to each end of that interval\n before it is used in autoscaling. 
If *m* is negative, this will clip\n the data range instead of expanding it.\n\n For example, if your data is in the range [0, 2], a margin of 0.1 will\n result in a range [-0.2, 2.2]; a margin of -0.1 will result in a range\n of [0.2, 1.8].\n\n Parameters\n ----------\n m : float greater than -0.5\n \"\"\"\n if m <= -0.5:\n raise ValueError(\"margin must be greater than -0.5\")\n self._xmargin = m\n self._request_autoscale_view(\"x\")\n self.stale = True\n\n def set_ymargin(self, m):\n \"\"\"\n Set padding of Y data limits prior to autoscaling.\n\n *m* times the data interval will be added to each end of that interval\n before it is used in autoscaling. If *m* is negative, this will clip\n the data range instead of expanding it.\n\n For example, if your data is in the range [0, 2], a margin of 0.1 will\n result in a range [-0.2, 2.2]; a margin of -0.1 will result in a range\n of [0.2, 1.8].\n\n Parameters\n ----------\n m : float greater than -0.5\n \"\"\"\n if m <= -0.5:\n raise ValueError(\"margin must be greater than -0.5\")\n self._ymargin = m\n self._request_autoscale_view(\"y\")\n self.stale = True\n\n def margins(self, *margins, x=None, y=None, tight=True):\n \"\"\"\n Set or retrieve autoscaling margins.\n\n The padding added to each limit of the Axes is the *margin*\n times the data interval. All input parameters must be floats\n within the range [0, 1]. Passing both positional and keyword\n arguments is invalid and will raise a TypeError. If no\n arguments (positional or otherwise) are provided, the current\n margins will remain in place and simply be returned.\n\n Specifying any margin changes only the autoscaling; for example,\n if *xmargin* is not None, then *xmargin* times the X data\n interval will be added to each end of that interval before\n it is used in autoscaling.\n\n Parameters\n ----------\n *margins : float, optional\n If a single positional argument is provided, it specifies\n both margins of the x-axis and y-axis limits. If two\n positional arguments are provided, they will be interpreted\n as *xmargin*, *ymargin*. If setting the margin on a single\n axis is desired, use the keyword arguments described below.\n\n x, y : float, optional\n Specific margin values for the x-axis and y-axis,\n respectively. These cannot be used with positional\n arguments, but can be used individually to alter on e.g.,\n only the y-axis.\n\n tight : bool or None, default: True\n The *tight* parameter is passed to `~.axes.Axes.autoscale_view`,\n which is executed after a margin is changed; the default\n here is *True*, on the assumption that when margins are\n specified, no additional padding to match tick marks is\n usually desired. Setting *tight* to *None* preserves\n the previous setting.\n\n Returns\n -------\n xmargin, ymargin : float\n\n Notes\n -----\n If a previously used Axes method such as :meth:`pcolor` has set\n :attr:`use_sticky_edges` to `True`, only the limits not set by\n the \"sticky artists\" will be modified. 
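# Illustrative usage sketch (not from the original file): autoscaling margins, as
# described above, pad the data interval on each side before autoscaling.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 0])
ax.margins(x=0.05, y=0.2)      # 5% padding in x, 20% in y
print(ax.margins())            # -> (0.05, 0.2)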
To force all of the\n margins to be set, set :attr:`use_sticky_edges` to `False`\n before calling :meth:`margins`.\n \"\"\"\n\n if margins and (x is not None or y is not None):\n raise TypeError('Cannot pass both positional and keyword '\n 'arguments for x and/or y.')\n elif len(margins) == 1:\n x = y = margins[0]\n elif len(margins) == 2:\n x, y = margins\n elif margins:\n raise TypeError('Must pass a single positional argument for all '\n 'margins, or one for each margin (x, y).')\n\n if x is None and y is None:\n if tight is not True:\n _api.warn_external(f'ignoring tight={tight!r} in get mode')\n return self._xmargin, self._ymargin\n\n if tight is not None:\n self._tight = tight\n if x is not None:\n self.set_xmargin(x)\n if y is not None:\n self.set_ymargin(y)\n\n def set_rasterization_zorder(self, z):\n \"\"\"\n Set the zorder threshold for rasterization for vector graphics output.\n\n All artists with a zorder below the given value will be rasterized if\n they support rasterization.\n\n This setting is ignored for pixel-based output.\n\n See also :doc:`/gallery/misc/rasterization_demo`.\n\n Parameters\n ----------\n z : float or None\n The zorder below which artists are rasterized.\n If ``None`` rasterization based on zorder is deactivated.\n \"\"\"\n self._rasterization_zorder = z\n self.stale = True\n\n def get_rasterization_zorder(self):\n \"\"\"Return the zorder value below which artists will be rasterized.\"\"\"\n return self._rasterization_zorder\n\n def autoscale(self, enable=True, axis='both', tight=None):\n \"\"\"\n Autoscale the axis view to the data (toggle).\n\n Convenience method for simple axis view autoscaling.\n It turns autoscaling on or off, and then,\n if autoscaling for either axis is on, it performs\n the autoscaling on the specified axis or Axes.\n\n Parameters\n ----------\n enable : bool or None, default: True\n True turns autoscaling on, False turns it off.\n None leaves the autoscaling state unchanged.\n axis : {'both', 'x', 'y'}, default: 'both'\n The axis on which to operate. (For 3D Axes, *axis* can also be set\n to 'z', and 'both' refers to all three axes.)\n tight : bool or None, default: None\n If True, first set the margins to zero. Then, this argument is\n forwarded to `~.axes.Axes.autoscale_view` (regardless of\n its value); see the description of its behavior there.\n \"\"\"\n if enable is None:\n scalex = True\n scaley = True\n else:\n if axis in ['x', 'both']:\n self.set_autoscalex_on(bool(enable))\n scalex = self.get_autoscalex_on()\n else:\n scalex = False\n if axis in ['y', 'both']:\n self.set_autoscaley_on(bool(enable))\n scaley = self.get_autoscaley_on()\n else:\n scaley = False\n if tight and scalex:\n self._xmargin = 0\n if tight and scaley:\n self._ymargin = 0\n if scalex:\n self._request_autoscale_view(\"x\", tight=tight)\n if scaley:\n self._request_autoscale_view(\"y\", tight=tight)\n\n def autoscale_view(self, tight=None, scalex=True, scaley=True):\n \"\"\"\n Autoscale the view limits using the data limits.\n\n Parameters\n ----------\n tight : bool or None\n If *True*, only expand the axis limits using the margins. 
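# Illustrative usage sketch (not from the original file): if artist data change after
# the artist was added, dataLim is not refreshed automatically; call relim() before
# autoscale_view(), as the notes below advise.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
(line,) = ax.plot([0, 1], [0, 1])
line.set_data([0, 10], [0, 5])   # mutate the data after adding the line
ax.relim()                       # recompute dataLim from the current artists
ax.autoscale_view()              # refresh the view limits from the new dataLim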
Note\n that unlike for `autoscale`, ``tight=True`` does *not* set the\n margins to zero.\n\n If *False* and :rc:`axes.autolimit_mode` is 'round_numbers', then\n after expansion by the margins, further expand the axis limits\n using the axis major locator.\n\n If None (the default), reuse the value set in the previous call to\n `autoscale_view` (the initial value is False, but the default style\n sets :rc:`axes.autolimit_mode` to 'data', in which case this\n behaves like True).\n\n scalex : bool, default: True\n Whether to autoscale the x axis.\n\n scaley : bool, default: True\n Whether to autoscale the y axis.\n\n Notes\n -----\n The autoscaling preserves any preexisting axis direction reversal.\n\n The data limits are not updated automatically when artist data are\n changed after the artist has been added to an Axes instance. In that\n case, use :meth:`matplotlib.axes.Axes.relim` prior to calling\n autoscale_view.\n\n If the views of the Axes are fixed, e.g. via `set_xlim`, they will\n not be changed by autoscale_view().\n See :meth:`matplotlib.axes.Axes.autoscale` for an alternative.\n \"\"\"\n if tight is not None:\n self._tight = bool(tight)\n\n x_stickies = y_stickies = np.array([])\n if self.use_sticky_edges:\n # Only iterate over Axes and artists if needed. The check for\n # ``hasattr(ax, \"_children\")`` is necessary because this can be\n # called very early in the Axes init process (e.g., for twin Axes)\n # when these attributes don't even exist yet, in which case\n # `get_children` would raise an AttributeError.\n if self._xmargin and scalex and self.get_autoscalex_on():\n x_stickies = np.sort(np.concatenate([\n artist.sticky_edges.x\n for ax in self._shared_axes[\"x\"].get_siblings(self)\n if hasattr(ax, \"_children\")\n for artist in ax.get_children()]))\n if self._ymargin and scaley and self.get_autoscaley_on():\n y_stickies = np.sort(np.concatenate([\n artist.sticky_edges.y\n for ax in self._shared_axes[\"y\"].get_siblings(self)\n if hasattr(ax, \"_children\")\n for artist in ax.get_children()]))\n if self.get_xscale() == 'log':\n x_stickies = x_stickies[x_stickies > 0]\n if self.get_yscale() == 'log':\n y_stickies = y_stickies[y_stickies > 0]\n\n def handle_single_axis(\n scale, shared_axes, name, axis, margin, stickies, set_bound):\n\n if not (scale and axis._get_autoscale_on()):\n return # nothing to do...\n\n shared = shared_axes.get_siblings(self)\n # Base autoscaling on finite data limits when there is at least one\n # finite data limit among all the shared_axes and intervals.\n values = [val for ax in shared\n for val in getattr(ax.dataLim, f\"interval{name}\")\n if np.isfinite(val)]\n if values:\n x0, x1 = (min(values), max(values))\n elif getattr(self._viewLim, f\"mutated{name}\")():\n # No data, but explicit viewLims already set:\n # in mutatedx or mutatedy.\n return\n else:\n x0, x1 = (-np.inf, np.inf)\n # If x0 and x1 are nonfinite, get default limits from the locator.\n locator = axis.get_major_locator()\n x0, x1 = locator.nonsingular(x0, x1)\n # Find the minimum minpos for use in the margin calculation.\n minimum_minpos = min(\n getattr(ax.dataLim, f\"minpos{name}\") for ax in shared)\n\n # Prevent margin addition from crossing a sticky value. A small\n # tolerance must be added due to floating point issues with\n # streamplot; it is defined relative to x0, x1, x1-x0 but has\n # no absolute term (e.g. 
\"+1e-8\") to avoid issues when working with\n # datasets where all values are tiny (less than 1e-8).\n tol = 1e-5 * max(abs(x0), abs(x1), abs(x1 - x0))\n # Index of largest element < x0 + tol, if any.\n i0 = stickies.searchsorted(x0 + tol) - 1\n x0bound = stickies[i0] if i0 != -1 else None\n # Index of smallest element > x1 - tol, if any.\n i1 = stickies.searchsorted(x1 - tol)\n x1bound = stickies[i1] if i1 != len(stickies) else None\n\n # Add the margin in figure space and then transform back, to handle\n # non-linear scales.\n transform = axis.get_transform()\n inverse_trans = transform.inverted()\n x0, x1 = axis._scale.limit_range_for_scale(x0, x1, minimum_minpos)\n x0t, x1t = transform.transform([x0, x1])\n delta = (x1t - x0t) * margin\n if not np.isfinite(delta):\n delta = 0 # If a bound isn't finite, set margin to zero.\n x0, x1 = inverse_trans.transform([x0t - delta, x1t + delta])\n\n # Apply sticky bounds.\n if x0bound is not None:\n x0 = max(x0, x0bound)\n if x1bound is not None:\n x1 = min(x1, x1bound)\n\n if not self._tight:\n x0, x1 = locator.view_limits(x0, x1)\n set_bound(x0, x1)\n # End of definition of internal function 'handle_single_axis'.\n\n handle_single_axis(\n scalex, self._shared_axes[\"x\"], 'x', self.xaxis, self._xmargin,\n x_stickies, self.set_xbound)\n handle_single_axis(\n scaley, self._shared_axes[\"y\"], 'y', self.yaxis, self._ymargin,\n y_stickies, self.set_ybound)\n\n def _update_title_position(self, renderer):\n \"\"\"\n Update the title position based on the bounding box enclosing\n all the ticklabels and x-axis spine and xlabel...\n \"\"\"\n if self._autotitlepos is not None and not self._autotitlepos:\n _log.debug('title position was updated manually, not adjusting')\n return\n\n titles = (self.title, self._left_title, self._right_title)\n\n # Need to check all our twins too, and all the children as well.\n axs = self._twinned_axes.get_siblings(self) + self.child_axes\n for ax in self.child_axes: # Child positions must be updated first.\n locator = ax.get_axes_locator()\n ax.apply_aspect(locator(self, renderer) if locator else None)\n\n for title in titles:\n x, _ = title.get_position()\n # need to start again in case of window resizing\n title.set_position((x, 1.0))\n top = -np.inf\n for ax in axs:\n bb = None\n if (ax.xaxis.get_ticks_position() in ['top', 'unknown']\n or ax.xaxis.get_label_position() == 'top'):\n bb = ax.xaxis.get_tightbbox(renderer)\n if bb is None:\n if 'outline' in ax.spines:\n # Special case for colorbars:\n bb = ax.spines['outline'].get_window_extent()\n else:\n bb = ax.get_window_extent(renderer)\n top = max(top, bb.ymax)\n if title.get_text():\n ax.yaxis.get_tightbbox(renderer) # update offsetText\n if ax.yaxis.offsetText.get_text():\n bb = ax.yaxis.offsetText.get_tightbbox(renderer)\n if bb.intersection(title.get_tightbbox(renderer), bb):\n top = bb.ymax\n if top < 0:\n # the top of Axes is not even on the figure, so don't try and\n # automatically place it.\n _log.debug('top of Axes not in the figure, so title not moved')\n return\n if title.get_window_extent(renderer).ymin < top:\n _, y = self.transAxes.inverted().transform((0, top))\n title.set_position((x, y))\n # empirically, this doesn't always get the min to top,\n # so we need to adjust again.\n if title.get_window_extent(renderer).ymin < top:\n _, y = self.transAxes.inverted().transform(\n (0., 2 * top - title.get_window_extent(renderer).ymin))\n title.set_position((x, y))\n\n ymax = max(title.get_position()[1] for title in titles)\n for title in titles:\n # now line 
up all the titles at the highest baseline.\n x, _ = title.get_position()\n title.set_position((x, ymax))\n\n # Drawing\n @martist.allow_rasterization\n def draw(self, renderer):\n # docstring inherited\n if renderer is None:\n raise RuntimeError('No renderer defined')\n if not self.get_visible():\n return\n self._unstale_viewLim()\n\n renderer.open_group('axes', gid=self.get_gid())\n\n # prevent triggering call backs during the draw process\n self._stale = True\n\n # loop over self and child Axes...\n locator = self.get_axes_locator()\n self.apply_aspect(locator(self, renderer) if locator else None)\n\n artists = self.get_children()\n artists.remove(self.patch)\n\n # the frame draws the edges around the Axes patch -- we\n # decouple these so the patch can be in the background and the\n # frame in the foreground. Do this before drawing the axis\n # objects so that the spine has the opportunity to update them.\n if not (self.axison and self._frameon):\n for spine in self.spines.values():\n artists.remove(spine)\n\n self._update_title_position(renderer)\n\n if not self.axison:\n for _axis in self._axis_map.values():\n artists.remove(_axis)\n\n if not self.figure.canvas.is_saving():\n artists = [\n a for a in artists\n if not a.get_animated() or isinstance(a, mimage.AxesImage)]\n artists = sorted(artists, key=attrgetter('zorder'))\n\n # rasterize artists with negative zorder\n # if the minimum zorder is negative, start rasterization\n rasterization_zorder = self._rasterization_zorder\n\n if (rasterization_zorder is not None and\n artists and artists[0].zorder < rasterization_zorder):\n renderer.start_rasterizing()\n artists_rasterized = [a for a in artists\n if a.zorder < rasterization_zorder]\n artists = [a for a in artists\n if a.zorder >= rasterization_zorder]\n else:\n artists_rasterized = []\n\n # the patch draws the background rectangle -- the frame below\n # will draw the edges\n if self.axison and self._frameon:\n self.patch.draw(renderer)\n\n if artists_rasterized:\n for a in artists_rasterized:\n a.draw(renderer)\n renderer.stop_rasterizing()\n\n mimage._draw_list_compositing_images(\n renderer, self, artists, self.figure.suppressComposite)\n\n renderer.close_group('axes')\n self.stale = False\n\n def draw_artist(self, a):\n \"\"\"\n Efficiently redraw a single artist.\n \"\"\"\n a.draw(self.figure.canvas.get_renderer())\n\n def redraw_in_frame(self):\n \"\"\"\n Efficiently redraw Axes data, but not axis ticks, labels, etc.\n \"\"\"\n with ExitStack() as stack:\n for artist in [*self._axis_map.values(),\n self.title, self._left_title, self._right_title]:\n stack.enter_context(artist._cm_set(visible=False))\n self.draw(self.figure.canvas.get_renderer())\n\n @_api.deprecated(\"3.6\", alternative=\"Axes.figure.canvas.get_renderer()\")\n def get_renderer_cache(self):\n return self.figure.canvas.get_renderer()\n\n # Axes rectangle characteristics\n\n def get_frame_on(self):\n \"\"\"Get whether the Axes rectangle patch is drawn.\"\"\"\n return self._frameon\n\n def set_frame_on(self, b):\n \"\"\"\n Set whether the Axes rectangle patch is drawn.\n\n Parameters\n ----------\n b : bool\n \"\"\"\n self._frameon = b\n self.stale = True\n\n def get_axisbelow(self):\n \"\"\"\n Get whether axis ticks and gridlines are above or below most artists.\n\n Returns\n -------\n bool or 'line'\n\n See Also\n --------\n set_axisbelow\n \"\"\"\n return self._axisbelow\n\n def set_axisbelow(self, b):\n \"\"\"\n Set whether axis ticks and gridlines are above or below most artists.\n\n This controls the zorder 
of the ticks and gridlines. For more\n information on the zorder see :doc:`/gallery/misc/zorder_demo`.\n\n Parameters\n ----------\n b : bool or 'line'\n Possible values:\n\n - *True* (zorder = 0.5): Ticks and gridlines are below all Artists.\n - 'line' (zorder = 1.5): Ticks and gridlines are above patches\n (e.g. rectangles, with default zorder = 1) but still below lines\n and markers (with their default zorder = 2).\n - *False* (zorder = 2.5): Ticks and gridlines are above patches\n and lines / markers.\n\n See Also\n --------\n get_axisbelow\n \"\"\"\n # Check that b is True, False or 'line'\n self._axisbelow = axisbelow = validate_axisbelow(b)\n zorder = {\n True: 0.5,\n 'line': 1.5,\n False: 2.5,\n }[axisbelow]\n for axis in self._axis_map.values():\n axis.set_zorder(zorder)\n self.stale = True\n\n @_docstring.dedent_interpd\n def grid(self, visible=None, which='major', axis='both', **kwargs):\n \"\"\"\n Configure the grid lines.\n\n Parameters\n ----------\n visible : bool or None, optional\n Whether to show the grid lines. If any *kwargs* are supplied, it\n is assumed you want the grid on and *visible* will be set to True.\n\n If *visible* is *None* and there are no *kwargs*, this toggles the\n visibility of the lines.\n\n which : {'major', 'minor', 'both'}, optional\n The grid lines to apply the changes on.\n\n axis : {'both', 'x', 'y'}, optional\n The axis to apply the changes on.\n\n **kwargs : `.Line2D` properties\n Define the line properties of the grid, e.g.::\n\n grid(color='r', linestyle='-', linewidth=2)\n\n Valid keyword arguments are:\n\n %(Line2D:kwdoc)s\n\n Notes\n -----\n The axis is drawn as a unit, so the effective zorder for drawing the\n grid is determined by the zorder of each axis, not by the zorder of the\n `.Line2D` objects comprising the grid. Therefore, to set grid zorder,\n use `.set_axisbelow` or, for more control, call the\n `~.Artist.set_zorder` method of each axis.\n \"\"\"\n _api.check_in_list(['x', 'y', 'both'], axis=axis)\n if axis in ['x', 'both']:\n self.xaxis.grid(visible, which=which, **kwargs)\n if axis in ['y', 'both']:\n self.yaxis.grid(visible, which=which, **kwargs)\n\n def ticklabel_format(self, *, axis='both', style='', scilimits=None,\n useOffset=None, useLocale=None, useMathText=None):\n r\"\"\"\n Configure the `.ScalarFormatter` used by default for linear Axes.\n\n If a parameter is not set, the corresponding property of the formatter\n is left unchanged.\n\n Parameters\n ----------\n axis : {'x', 'y', 'both'}, default: 'both'\n The axis to configure. Only major ticks are affected.\n\n style : {'sci', 'scientific', 'plain'}\n Whether to use scientific notation.\n The formatter default is to use scientific notation.\n\n scilimits : pair of ints (m, n)\n Scientific notation is used only for numbers outside the range\n 10\\ :sup:`m` to 10\\ :sup:`n` (and only if the formatter is\n configured to use scientific notation at all). Use (0, 0) to\n include all numbers. Use (m, m) where m != 0 to fix the order of\n magnitude to 10\\ :sup:`m`.\n The formatter default is :rc:`axes.formatter.limits`.\n\n useOffset : bool or float\n If True, the offset is calculated as needed.\n If False, no offset is used.\n If a numeric value, it sets the offset.\n The formatter default is :rc:`axes.formatter.useoffset`.\n\n useLocale : bool\n Whether to format the number using the current locale or using the\n C (English) locale. This affects e.g. the decimal separator. 
The\n formatter default is :rc:`axes.formatter.use_locale`.\n\n useMathText : bool\n Render the offset and scientific notation in mathtext.\n The formatter default is :rc:`axes.formatter.use_mathtext`.\n\n Raises\n ------\n AttributeError\n If the current formatter is not a `.ScalarFormatter`.\n \"\"\"\n style = style.lower()\n axis = axis.lower()\n if scilimits is not None:\n try:\n m, n = scilimits\n m + n + 1 # check that both are numbers\n except (ValueError, TypeError) as err:\n raise ValueError(\"scilimits must be a sequence of 2 integers\"\n ) from err\n STYLES = {'sci': True, 'scientific': True, 'plain': False, '': None}\n is_sci_style = _api.check_getitem(STYLES, style=style)\n axis_map = {**{k: [v] for k, v in self._axis_map.items()},\n 'both': list(self._axis_map.values())}\n axises = _api.check_getitem(axis_map, axis=axis)\n try:\n for axis in axises:\n if is_sci_style is not None:\n axis.major.formatter.set_scientific(is_sci_style)\n if scilimits is not None:\n axis.major.formatter.set_powerlimits(scilimits)\n if useOffset is not None:\n axis.major.formatter.set_useOffset(useOffset)\n if useLocale is not None:\n axis.major.formatter.set_useLocale(useLocale)\n if useMathText is not None:\n axis.major.formatter.set_useMathText(useMathText)\n except AttributeError as err:\n raise AttributeError(\n \"This method only works with the ScalarFormatter\") from err\n\n def locator_params(self, axis='both', tight=None, **kwargs):\n \"\"\"\n Control behavior of major tick locators.\n\n Because the locator is involved in autoscaling, `~.Axes.autoscale_view`\n is called automatically after the parameters are changed.\n\n Parameters\n ----------\n axis : {'both', 'x', 'y'}, default: 'both'\n The axis on which to operate. (For 3D Axes, *axis* can also be\n set to 'z', and 'both' refers to all three axes.)\n tight : bool or None, optional\n Parameter passed to `~.Axes.autoscale_view`.\n Default is None, for no change.\n\n Other Parameters\n ----------------\n **kwargs\n Remaining keyword arguments are passed to directly to the\n ``set_params()`` method of the locator. Supported keywords depend\n on the type of the locator. 
See for example\n `~.ticker.MaxNLocator.set_params` for the `.ticker.MaxNLocator`\n used by default for linear.\n\n Examples\n --------\n When plotting small subplots, one might want to reduce the maximum\n number of ticks and use tight bounds, for example::\n\n ax.locator_params(tight=True, nbins=4)\n\n \"\"\"\n _api.check_in_list([*self._axis_names, \"both\"], axis=axis)\n for name in self._axis_names:\n if axis in [name, \"both\"]:\n loc = self._axis_map[name].get_major_locator()\n loc.set_params(**kwargs)\n self._request_autoscale_view(name, tight=tight)\n self.stale = True\n\n def tick_params(self, axis='both', **kwargs):\n \"\"\"\n Change the appearance of ticks, tick labels, and gridlines.\n\n Tick properties that are not explicitly set using the keyword\n arguments remain unchanged unless *reset* is True.\n\n Parameters\n ----------\n axis : {'x', 'y', 'both'}, default: 'both'\n The axis to which the parameters are applied.\n which : {'major', 'minor', 'both'}, default: 'major'\n The group of ticks to which the parameters are applied.\n reset : bool, default: False\n Whether to reset the ticks to defaults before updating them.\n\n Other Parameters\n ----------------\n direction : {'in', 'out', 'inout'}\n Puts ticks inside the Axes, outside the Axes, or both.\n length : float\n Tick length in points.\n width : float\n Tick width in points.\n color : color\n Tick color.\n pad : float\n Distance in points between tick and label.\n labelsize : float or str\n Tick label font size in points or as a string (e.g., 'large').\n labelcolor : color\n Tick label color.\n colors : color\n Tick color and label color.\n zorder : float\n Tick and label zorder.\n bottom, top, left, right : bool\n Whether to draw the respective ticks.\n labelbottom, labeltop, labelleft, labelright : bool\n Whether to draw the respective tick labels.\n labelrotation : float\n Tick label rotation\n grid_color : color\n Gridline color.\n grid_alpha : float\n Transparency of gridlines: 0 (transparent) to 1 (opaque).\n grid_linewidth : float\n Width of gridlines in points.\n grid_linestyle : str\n Any valid `.Line2D` line style spec.\n\n Examples\n --------\n ::\n\n ax.tick_params(direction='out', length=6, width=2, colors='r',\n grid_color='r', grid_alpha=0.5)\n\n This will make all major ticks be red, pointing out of the box,\n and with dimensions 6 points by 2 points. Tick labels will\n also be red. 
Gridlines will be red and translucent.\n\n \"\"\"\n _api.check_in_list(['x', 'y', 'both'], axis=axis)\n if axis in ['x', 'both']:\n xkw = dict(kwargs)\n xkw.pop('left', None)\n xkw.pop('right', None)\n xkw.pop('labelleft', None)\n xkw.pop('labelright', None)\n self.xaxis.set_tick_params(**xkw)\n if axis in ['y', 'both']:\n ykw = dict(kwargs)\n ykw.pop('top', None)\n ykw.pop('bottom', None)\n ykw.pop('labeltop', None)\n ykw.pop('labelbottom', None)\n self.yaxis.set_tick_params(**ykw)\n\n def set_axis_off(self):\n \"\"\"\n Turn the x- and y-axis off.\n\n This affects the axis lines, ticks, ticklabels, grid and axis labels.\n \"\"\"\n self.axison = False\n self.stale = True\n\n def set_axis_on(self):\n \"\"\"\n Turn the x- and y-axis on.\n\n This affects the axis lines, ticks, ticklabels, grid and axis labels.\n \"\"\"\n self.axison = True\n self.stale = True\n\n # data limits, ticks, tick labels, and formatting\n\n def get_xlabel(self):\n \"\"\"\n Get the xlabel text string.\n \"\"\"\n label = self.xaxis.get_label()\n return label.get_text()\n\n def set_xlabel(self, xlabel, fontdict=None, labelpad=None, *,\n loc=None, **kwargs):\n \"\"\"\n Set the label for the x-axis.\n\n Parameters\n ----------\n xlabel : str\n The label text.\n\n labelpad : float, default: :rc:`axes.labelpad`\n Spacing in points from the Axes bounding box including ticks\n and tick labels. If None, the previous value is left as is.\n\n loc : {'left', 'center', 'right'}, default: :rc:`xaxis.labellocation`\n The label position. This is a high-level alternative for passing\n parameters *x* and *horizontalalignment*.\n\n Other Parameters\n ----------------\n **kwargs : `.Text` properties\n `.Text` properties control the appearance of the label.\n\n See Also\n --------\n text : Documents the properties supported by `.Text`.\n \"\"\"\n if labelpad is not None:\n self.xaxis.labelpad = labelpad\n protected_kw = ['x', 'horizontalalignment', 'ha']\n if {*kwargs} & {*protected_kw}:\n if loc is not None:\n raise TypeError(f\"Specifying 'loc' is disallowed when any of \"\n f\"its corresponding low level keyword \"\n f\"arguments ({protected_kw}) are also \"\n f\"supplied\")\n\n else:\n loc = (loc if loc is not None\n else mpl.rcParams['xaxis.labellocation'])\n _api.check_in_list(('left', 'center', 'right'), loc=loc)\n\n x = {\n 'left': 0,\n 'center': 0.5,\n 'right': 1,\n }[loc]\n kwargs.update(x=x, horizontalalignment=loc)\n\n return self.xaxis.set_label_text(xlabel, fontdict, **kwargs)\n\n def invert_xaxis(self):\n \"\"\"\n Invert the x-axis.\n\n See Also\n --------\n xaxis_inverted\n get_xlim, set_xlim\n get_xbound, set_xbound\n \"\"\"\n self.xaxis.set_inverted(not self.xaxis.get_inverted())\n\n xaxis_inverted = _axis_method_wrapper(\"xaxis\", \"get_inverted\")\n\n def get_xbound(self):\n \"\"\"\n Return the lower and upper x-axis bounds, in increasing order.\n\n See Also\n --------\n set_xbound\n get_xlim, set_xlim\n invert_xaxis, xaxis_inverted\n \"\"\"\n left, right = self.get_xlim()\n if left < right:\n return left, right\n else:\n return right, left\n\n def set_xbound(self, lower=None, upper=None):\n \"\"\"\n Set the lower and upper numerical bounds of the x-axis.\n\n This method will honor axis inversion regardless of parameter order.\n It will not change the autoscaling setting (`.get_autoscalex_on()`).\n\n Parameters\n ----------\n lower, upper : float or None\n The lower and upper bounds. 
If *None*, the respective axis bound\n is not modified.\n\n See Also\n --------\n get_xbound\n get_xlim, set_xlim\n invert_xaxis, xaxis_inverted\n \"\"\"\n if upper is None and np.iterable(lower):\n lower, upper = lower\n\n old_lower, old_upper = self.get_xbound()\n if lower is None:\n lower = old_lower\n if upper is None:\n upper = old_upper\n\n self.set_xlim(sorted((lower, upper),\n reverse=bool(self.xaxis_inverted())),\n auto=None)\n\n def get_xlim(self):\n \"\"\"\n Return the x-axis view limits.\n\n Returns\n -------\n left, right : (float, float)\n The current x-axis limits in data coordinates.\n\n See Also\n --------\n .Axes.set_xlim\n set_xbound, get_xbound\n invert_xaxis, xaxis_inverted\n\n Notes\n -----\n The x-axis may be inverted, in which case the *left* value will\n be greater than the *right* value.\n \"\"\"\n return tuple(self.viewLim.intervalx)\n\n def _validate_converted_limits(self, limit, convert):\n \"\"\"\n Raise ValueError if converted limits are non-finite.\n\n Note that this function also accepts None as a limit argument.\n\n Returns\n -------\n The limit value after call to convert(), or None if limit is None.\n \"\"\"\n if limit is not None:\n converted_limit = convert(limit)\n if (isinstance(converted_limit, Real)\n and not np.isfinite(converted_limit)):\n raise ValueError(\"Axis limits cannot be NaN or Inf\")\n return converted_limit\n\n @_api.make_keyword_only(\"3.6\", \"emit\")\n def set_xlim(self, left=None, right=None, emit=True, auto=False,\n *, xmin=None, xmax=None):\n \"\"\"\n Set the x-axis view limits.\n\n Parameters\n ----------\n left : float, optional\n The left xlim in data coordinates. Passing *None* leaves the\n limit unchanged.\n\n The left and right xlims may also be passed as the tuple\n (*left*, *right*) as the first positional argument (or as\n the *left* keyword argument).\n\n .. ACCEPTS: (bottom: float, top: float)\n\n right : float, optional\n The right xlim in data coordinates. Passing *None* leaves the\n limit unchanged.\n\n emit : bool, default: True\n Whether to notify observers of limit change.\n\n auto : bool or None, default: False\n Whether to turn on autoscaling of the x-axis. True turns on,\n False turns off, None leaves unchanged.\n\n xmin, xmax : float, optional\n They are equivalent to left and right respectively, and it is an\n error to pass both *xmin* and *left* or *xmax* and *right*.\n\n Returns\n -------\n left, right : (float, float)\n The new x-axis limits in data coordinates.\n\n See Also\n --------\n get_xlim\n set_xbound, get_xbound\n invert_xaxis, xaxis_inverted\n\n Notes\n -----\n The *left* value may be greater than the *right* value, in which\n case the x-axis values will decrease from left to right.\n\n Examples\n --------\n >>> set_xlim(left, right)\n >>> set_xlim((left, right))\n >>> left, right = set_xlim(left, right)\n\n One limit may be left unchanged.\n\n >>> set_xlim(right=right_lim)\n\n Limits may be passed in reverse order to flip the direction of\n the x-axis. For example, suppose *x* represents the number of\n years before present. 
The x-axis limits might be set like the\n following so 5000 years ago is on the left of the plot and the\n present is on the right.\n\n >>> set_xlim(5000, 0)\n \"\"\"\n if right is None and np.iterable(left):\n left, right = left\n if xmin is not None:\n if left is not None:\n raise TypeError(\"Cannot pass both 'left' and 'xmin'\")\n left = xmin\n if xmax is not None:\n if right is not None:\n raise TypeError(\"Cannot pass both 'right' and 'xmax'\")\n right = xmax\n return self.xaxis._set_lim(left, right, emit=emit, auto=auto)\n\n get_xscale = _axis_method_wrapper(\"xaxis\", \"get_scale\")\n set_xscale = _axis_method_wrapper(\"xaxis\", \"_set_axes_scale\")\n get_xticks = _axis_method_wrapper(\"xaxis\", \"get_ticklocs\")\n set_xticks = _axis_method_wrapper(\"xaxis\", \"set_ticks\")\n get_xmajorticklabels = _axis_method_wrapper(\"xaxis\", \"get_majorticklabels\")\n get_xminorticklabels = _axis_method_wrapper(\"xaxis\", \"get_minorticklabels\")\n get_xticklabels = _axis_method_wrapper(\"xaxis\", \"get_ticklabels\")\n set_xticklabels = _axis_method_wrapper(\n \"xaxis\", \"_set_ticklabels\",\n doc_sub={\"Axis.set_ticks\": \"Axes.set_xticks\"})\n\n def get_ylabel(self):\n \"\"\"\n Get the ylabel text string.\n \"\"\"\n label = self.yaxis.get_label()\n return label.get_text()\n\n def set_ylabel(self, ylabel, fontdict=None, labelpad=None, *,\n loc=None, **kwargs):\n \"\"\"\n Set the label for the y-axis.\n\n Parameters\n ----------\n ylabel : str\n The label text.\n\n labelpad : float, default: :rc:`axes.labelpad`\n Spacing in points from the Axes bounding box including ticks\n and tick labels. If None, the previous value is left as is.\n\n loc : {'bottom', 'center', 'top'}, default: :rc:`yaxis.labellocation`\n The label position. This is a high-level alternative for passing\n parameters *y* and *horizontalalignment*.\n\n Other Parameters\n ----------------\n **kwargs : `.Text` properties\n `.Text` properties control the appearance of the label.\n\n See Also\n --------\n text : Documents the properties supported by `.Text`.\n \"\"\"\n if labelpad is not None:\n self.yaxis.labelpad = labelpad\n protected_kw = ['y', 'horizontalalignment', 'ha']\n if {*kwargs} & {*protected_kw}:\n if loc is not None:\n raise TypeError(f\"Specifying 'loc' is disallowed when any of \"\n f\"its corresponding low level keyword \"\n f\"arguments ({protected_kw}) are also \"\n f\"supplied\")\n\n else:\n loc = (loc if loc is not None\n else mpl.rcParams['yaxis.labellocation'])\n _api.check_in_list(('bottom', 'center', 'top'), loc=loc)\n\n y, ha = {\n 'bottom': (0, 'left'),\n 'center': (0.5, 'center'),\n 'top': (1, 'right')\n }[loc]\n kwargs.update(y=y, horizontalalignment=ha)\n\n return self.yaxis.set_label_text(ylabel, fontdict, **kwargs)\n\n def invert_yaxis(self):\n \"\"\"\n Invert the y-axis.\n\n See Also\n --------\n yaxis_inverted\n get_ylim, set_ylim\n get_ybound, set_ybound\n \"\"\"\n self.yaxis.set_inverted(not self.yaxis.get_inverted())\n\n yaxis_inverted = _axis_method_wrapper(\"yaxis\", \"get_inverted\")\n\n def get_ybound(self):\n \"\"\"\n Return the lower and upper y-axis bounds, in increasing order.\n\n See Also\n --------\n set_ybound\n get_ylim, set_ylim\n invert_yaxis, yaxis_inverted\n \"\"\"\n bottom, top = self.get_ylim()\n if bottom < top:\n return bottom, top\n else:\n return top, bottom\n\n def set_ybound(self, lower=None, upper=None):\n \"\"\"\n Set the lower and upper numerical bounds of the y-axis.\n\n This method will honor axis inversion regardless of parameter order.\n It will not change the 
autoscaling setting (`.get_autoscaley_on()`).\n\n Parameters\n ----------\n lower, upper : float or None\n The lower and upper bounds. If *None*, the respective axis bound\n is not modified.\n\n See Also\n --------\n get_ybound\n get_ylim, set_ylim\n invert_yaxis, yaxis_inverted\n \"\"\"\n if upper is None and np.iterable(lower):\n lower, upper = lower\n\n old_lower, old_upper = self.get_ybound()\n if lower is None:\n lower = old_lower\n if upper is None:\n upper = old_upper\n\n self.set_ylim(sorted((lower, upper),\n reverse=bool(self.yaxis_inverted())),\n auto=None)\n\n def get_ylim(self):\n \"\"\"\n Return the y-axis view limits.\n\n Returns\n -------\n bottom, top : (float, float)\n The current y-axis limits in data coordinates.\n\n See Also\n --------\n .Axes.set_ylim\n set_ybound, get_ybound\n invert_yaxis, yaxis_inverted\n\n Notes\n -----\n The y-axis may be inverted, in which case the *bottom* value\n will be greater than the *top* value.\n \"\"\"\n return tuple(self.viewLim.intervaly)\n\n @_api.make_keyword_only(\"3.6\", \"emit\")\n def set_ylim(self, bottom=None, top=None, emit=True, auto=False,\n *, ymin=None, ymax=None):\n \"\"\"\n Set the y-axis view limits.\n\n Parameters\n ----------\n bottom : float, optional\n The bottom ylim in data coordinates. Passing *None* leaves the\n limit unchanged.\n\n The bottom and top ylims may also be passed as the tuple\n (*bottom*, *top*) as the first positional argument (or as\n the *bottom* keyword argument).\n\n .. ACCEPTS: (bottom: float, top: float)\n\n top : float, optional\n The top ylim in data coordinates. Passing *None* leaves the\n limit unchanged.\n\n emit : bool, default: True\n Whether to notify observers of limit change.\n\n auto : bool or None, default: False\n Whether to turn on autoscaling of the y-axis. *True* turns on,\n *False* turns off, *None* leaves unchanged.\n\n ymin, ymax : float, optional\n They are equivalent to bottom and top respectively, and it is an\n error to pass both *ymin* and *bottom* or *ymax* and *top*.\n\n Returns\n -------\n bottom, top : (float, float)\n The new y-axis limits in data coordinates.\n\n See Also\n --------\n get_ylim\n set_ybound, get_ybound\n invert_yaxis, yaxis_inverted\n\n Notes\n -----\n The *bottom* value may be greater than the *top* value, in which\n case the y-axis values will decrease from *bottom* to *top*.\n\n Examples\n --------\n >>> set_ylim(bottom, top)\n >>> set_ylim((bottom, top))\n >>> bottom, top = set_ylim(bottom, top)\n\n One limit may be left unchanged.\n\n >>> set_ylim(top=top_lim)\n\n Limits may be passed in reverse order to flip the direction of\n the y-axis. For example, suppose ``y`` represents depth of the\n ocean in m. 
The y-axis limits might be set like the following\n so 5000 m depth is at the bottom of the plot and the surface,\n 0 m, is at the top.\n\n >>> set_ylim(5000, 0)\n \"\"\"\n if top is None and np.iterable(bottom):\n bottom, top = bottom\n if ymin is not None:\n if bottom is not None:\n raise TypeError(\"Cannot pass both 'bottom' and 'ymin'\")\n bottom = ymin\n if ymax is not None:\n if top is not None:\n raise TypeError(\"Cannot pass both 'top' and 'ymax'\")\n top = ymax\n return self.yaxis._set_lim(bottom, top, emit=emit, auto=auto)\n\n get_yscale = _axis_method_wrapper(\"yaxis\", \"get_scale\")\n set_yscale = _axis_method_wrapper(\"yaxis\", \"_set_axes_scale\")\n get_yticks = _axis_method_wrapper(\"yaxis\", \"get_ticklocs\")\n set_yticks = _axis_method_wrapper(\"yaxis\", \"set_ticks\")\n get_ymajorticklabels = _axis_method_wrapper(\"yaxis\", \"get_majorticklabels\")\n get_yminorticklabels = _axis_method_wrapper(\"yaxis\", \"get_minorticklabels\")\n get_yticklabels = _axis_method_wrapper(\"yaxis\", \"get_ticklabels\")\n set_yticklabels = _axis_method_wrapper(\n \"yaxis\", \"_set_ticklabels\",\n doc_sub={\"Axis.set_ticks\": \"Axes.set_yticks\"})\n\n xaxis_date = _axis_method_wrapper(\"xaxis\", \"axis_date\")\n yaxis_date = _axis_method_wrapper(\"yaxis\", \"axis_date\")\n\n def format_xdata(self, x):\n \"\"\"\n Return *x* formatted as an x-value.\n\n This function will use the `.fmt_xdata` attribute if it is not None,\n else will fall back on the xaxis major formatter.\n \"\"\"\n return (self.fmt_xdata if self.fmt_xdata is not None\n else self.xaxis.get_major_formatter().format_data_short)(x)\n\n def format_ydata(self, y):\n \"\"\"\n Return *y* formatted as an y-value.\n\n This function will use the `.fmt_ydata` attribute if it is not None,\n else will fall back on the yaxis major formatter.\n \"\"\"\n return (self.fmt_ydata if self.fmt_ydata is not None\n else self.yaxis.get_major_formatter().format_data_short)(y)\n\n def format_coord(self, x, y):\n \"\"\"Return a format string formatting the *x*, *y* coordinates.\"\"\"\n return \"x={} y={}\".format(\n \"???\" if x is None else self.format_xdata(x),\n \"???\" if y is None else self.format_ydata(y),\n )\n\n def minorticks_on(self):\n \"\"\"\n Display minor ticks on the Axes.\n\n Displaying minor ticks may reduce performance; you may turn them off\n using `minorticks_off()` if drawing speed is a problem.\n \"\"\"\n for ax in (self.xaxis, self.yaxis):\n scale = ax.get_scale()\n if scale == 'log':\n s = ax._scale\n ax.set_minor_locator(mticker.LogLocator(s.base, s.subs))\n elif scale == 'symlog':\n s = ax._scale\n ax.set_minor_locator(\n mticker.SymmetricalLogLocator(s._transform, s.subs))\n else:\n ax.set_minor_locator(mticker.AutoMinorLocator())\n\n def minorticks_off(self):\n \"\"\"Remove minor ticks from the Axes.\"\"\"\n self.xaxis.set_minor_locator(mticker.NullLocator())\n self.yaxis.set_minor_locator(mticker.NullLocator())\n\n # Interactive manipulation\n\n def can_zoom(self):\n \"\"\"\n Return whether this Axes supports the zoom box button functionality.\n \"\"\"\n return True\n\n def can_pan(self):\n \"\"\"\n Return whether this Axes supports any pan/zoom button functionality.\n \"\"\"\n return True\n\n def get_navigate(self):\n \"\"\"\n Get whether the Axes responds to navigation commands.\n \"\"\"\n return self._navigate\n\n def set_navigate(self, b):\n \"\"\"\n Set whether the Axes responds to navigation toolbar commands.\n\n Parameters\n ----------\n b : bool\n \"\"\"\n self._navigate = b\n\n def get_navigate_mode(self):\n \"\"\"\n 
Get the navigation toolbar button status: 'PAN', 'ZOOM', or None.\n \"\"\"\n return self._navigate_mode\n\n def set_navigate_mode(self, b):\n \"\"\"\n Set the navigation toolbar button status.\n\n .. warning::\n This is not a user-API function.\n\n \"\"\"\n self._navigate_mode = b\n\n def _get_view(self):\n \"\"\"\n Save information required to reproduce the current view.\n\n Called before a view is changed, such as during a pan or zoom\n initiated by the user. You may return any information you deem\n necessary to describe the view.\n\n .. note::\n\n Intended to be overridden by new projection types, but if not, the\n default implementation saves the view limits. You *must* implement\n :meth:`_set_view` if you implement this method.\n \"\"\"\n xmin, xmax = self.get_xlim()\n ymin, ymax = self.get_ylim()\n return xmin, xmax, ymin, ymax\n\n def _set_view(self, view):\n \"\"\"\n Apply a previously saved view.\n\n Called when restoring a view, such as with the navigation buttons.\n\n .. note::\n\n Intended to be overridden by new projection types, but if not, the\n default implementation restores the view limits. You *must*\n implement :meth:`_get_view` if you implement this method.\n \"\"\"\n xmin, xmax, ymin, ymax = view\n self.set_xlim((xmin, xmax))\n self.set_ylim((ymin, ymax))\n\n def _prepare_view_from_bbox(self, bbox, direction='in',\n mode=None, twinx=False, twiny=False):\n \"\"\"\n Helper function to prepare the new bounds from a bbox.\n\n This helper function returns the new x and y bounds from the zoom\n bbox. This a convenience method to abstract the bbox logic\n out of the base setter.\n \"\"\"\n if len(bbox) == 3:\n xp, yp, scl = bbox # Zooming code\n if scl == 0: # Should not happen\n scl = 1.\n if scl > 1:\n direction = 'in'\n else:\n direction = 'out'\n scl = 1/scl\n # get the limits of the axes\n (xmin, ymin), (xmax, ymax) = self.transData.transform(\n np.transpose([self.get_xlim(), self.get_ylim()]))\n # set the range\n xwidth = xmax - xmin\n ywidth = ymax - ymin\n xcen = (xmax + xmin)*.5\n ycen = (ymax + ymin)*.5\n xzc = (xp*(scl - 1) + xcen)/scl\n yzc = (yp*(scl - 1) + ycen)/scl\n bbox = [xzc - xwidth/2./scl, yzc - ywidth/2./scl,\n xzc + xwidth/2./scl, yzc + ywidth/2./scl]\n elif len(bbox) != 4:\n # should be len 3 or 4 but nothing else\n _api.warn_external(\n \"Warning in _set_view_from_bbox: bounding box is not a tuple \"\n \"of length 3 or 4. 
Ignoring the view change.\")\n return\n\n # Original limits.\n xmin0, xmax0 = self.get_xbound()\n ymin0, ymax0 = self.get_ybound()\n # The zoom box in screen coords.\n startx, starty, stopx, stopy = bbox\n # Convert to data coords.\n (startx, starty), (stopx, stopy) = self.transData.inverted().transform(\n [(startx, starty), (stopx, stopy)])\n # Clip to axes limits.\n xmin, xmax = np.clip(sorted([startx, stopx]), xmin0, xmax0)\n ymin, ymax = np.clip(sorted([starty, stopy]), ymin0, ymax0)\n # Don't double-zoom twinned axes or if zooming only the other axis.\n if twinx or mode == \"y\":\n xmin, xmax = xmin0, xmax0\n if twiny or mode == \"x\":\n ymin, ymax = ymin0, ymax0\n\n if direction == \"in\":\n new_xbound = xmin, xmax\n new_ybound = ymin, ymax\n\n elif direction == \"out\":\n x_trf = self.xaxis.get_transform()\n sxmin0, sxmax0, sxmin, sxmax = x_trf.transform(\n [xmin0, xmax0, xmin, xmax]) # To screen space.\n factor = (sxmax0 - sxmin0) / (sxmax - sxmin) # Unzoom factor.\n # Move original bounds away by\n # (factor) x (distance between unzoom box and Axes bbox).\n sxmin1 = sxmin0 - factor * (sxmin - sxmin0)\n sxmax1 = sxmax0 + factor * (sxmax0 - sxmax)\n # And back to data space.\n new_xbound = x_trf.inverted().transform([sxmin1, sxmax1])\n\n y_trf = self.yaxis.get_transform()\n symin0, symax0, symin, symax = y_trf.transform(\n [ymin0, ymax0, ymin, ymax])\n factor = (symax0 - symin0) / (symax - symin)\n symin1 = symin0 - factor * (symin - symin0)\n symax1 = symax0 + factor * (symax0 - symax)\n new_ybound = y_trf.inverted().transform([symin1, symax1])\n\n return new_xbound, new_ybound\n\n def _set_view_from_bbox(self, bbox, direction='in',\n mode=None, twinx=False, twiny=False):\n \"\"\"\n Update view from a selection bbox.\n\n .. note::\n\n Intended to be overridden by new projection types, but if not, the\n default implementation sets the view limits to the bbox directly.\n\n Parameters\n ----------\n bbox : 4-tuple or 3 tuple\n * If bbox is a 4 tuple, it is the selected bounding box limits,\n in *display* coordinates.\n * If bbox is a 3 tuple, it is an (xp, yp, scl) triple, where\n (xp, yp) is the center of zooming and scl the scale factor to\n zoom by.\n\n direction : str\n The direction to apply the bounding box.\n * `'in'` - The bounding box describes the view directly, i.e.,\n it zooms in.\n * `'out'` - The bounding box describes the size to make the\n existing view, i.e., it zooms out.\n\n mode : str or None\n The selection mode, whether to apply the bounding box in only the\n `'x'` direction, `'y'` direction or both (`None`).\n\n twinx : bool\n Whether this axis is twinned in the *x*-direction.\n\n twiny : bool\n Whether this axis is twinned in the *y*-direction.\n \"\"\"\n new_xbound, new_ybound = self._prepare_view_from_bbox(\n bbox, direction=direction, mode=mode, twinx=twinx, twiny=twiny)\n if not twinx and mode != \"y\":\n self.set_xbound(new_xbound)\n self.set_autoscalex_on(False)\n if not twiny and mode != \"x\":\n self.set_ybound(new_ybound)\n self.set_autoscaley_on(False)\n\n def start_pan(self, x, y, button):\n \"\"\"\n Called when a pan operation has started.\n\n Parameters\n ----------\n x, y : float\n The mouse coordinates in display coords.\n button : `.MouseButton`\n The pressed mouse button.\n\n Notes\n -----\n This is intended to be overridden by new projection types.\n \"\"\"\n self._pan_start = types.SimpleNamespace(\n lim=self.viewLim.frozen(),\n trans=self.transData.frozen(),\n trans_inverse=self.transData.inverted().frozen(),\n bbox=self.bbox.frozen(),\n 
x=x,\n y=y)\n\n def end_pan(self):\n \"\"\"\n Called when a pan operation completes (when the mouse button is up.)\n\n Notes\n -----\n This is intended to be overridden by new projection types.\n \"\"\"\n del self._pan_start\n\n def _get_pan_points(self, button, key, x, y):\n \"\"\"\n Helper function to return the new points after a pan.\n\n This helper function returns the points on the axis after a pan has\n occurred. This is a convenience method to abstract the pan logic\n out of the base setter.\n \"\"\"\n def format_deltas(key, dx, dy):\n if key == 'control':\n if abs(dx) > abs(dy):\n dy = dx\n else:\n dx = dy\n elif key == 'x':\n dy = 0\n elif key == 'y':\n dx = 0\n elif key == 'shift':\n if 2 * abs(dx) < abs(dy):\n dx = 0\n elif 2 * abs(dy) < abs(dx):\n dy = 0\n elif abs(dx) > abs(dy):\n dy = dy / abs(dy) * abs(dx)\n else:\n dx = dx / abs(dx) * abs(dy)\n return dx, dy\n\n p = self._pan_start\n dx = x - p.x\n dy = y - p.y\n if dx == dy == 0:\n return\n if button == 1:\n dx, dy = format_deltas(key, dx, dy)\n result = p.bbox.translated(-dx, -dy).transformed(p.trans_inverse)\n elif button == 3:\n try:\n dx = -dx / self.bbox.width\n dy = -dy / self.bbox.height\n dx, dy = format_deltas(key, dx, dy)\n if self.get_aspect() != 'auto':\n dx = dy = 0.5 * (dx + dy)\n alpha = np.power(10.0, (dx, dy))\n start = np.array([p.x, p.y])\n oldpoints = p.lim.transformed(p.trans)\n newpoints = start + alpha * (oldpoints - start)\n result = (mtransforms.Bbox(newpoints)\n .transformed(p.trans_inverse))\n except OverflowError:\n _api.warn_external('Overflow while panning')\n return\n else:\n return\n\n valid = np.isfinite(result.transformed(p.trans))\n points = result.get_points().astype(object)\n # Just ignore invalid limits (typically, underflow in log-scale).\n points[~valid] = None\n return points\n\n def drag_pan(self, button, key, x, y):\n \"\"\"\n Called when the mouse moves during a pan operation.\n\n Parameters\n ----------\n button : `.MouseButton`\n The pressed mouse button.\n key : str or None\n The pressed key, if any.\n x, y : float\n The mouse coordinates in display coords.\n\n Notes\n -----\n This is intended to be overridden by new projection types.\n \"\"\"\n points = self._get_pan_points(button, key, x, y)\n if points is not None:\n self.set_xlim(points[:, 0])\n self.set_ylim(points[:, 1])\n\n def get_children(self):\n # docstring inherited.\n return [\n *self._children,\n *self.spines.values(),\n *self._axis_map.values(),\n self.title, self._left_title, self._right_title,\n *self.child_axes,\n *([self.legend_] if self.legend_ is not None else []),\n self.patch,\n ]\n\n def contains(self, mouseevent):\n # docstring inherited.\n inside, info = self._default_contains(mouseevent)\n if inside is not None:\n return inside, info\n return self.patch.contains(mouseevent)\n\n def contains_point(self, point):\n \"\"\"\n Return whether *point* (pair of pixel coordinates) is inside the Axes\n patch.\n \"\"\"\n return self.patch.contains_point(point, radius=1.0)\n\n def get_default_bbox_extra_artists(self):\n \"\"\"\n Return a default list of artists that are used for the bounding box\n calculation.\n\n Artists are excluded either by not being visible or\n ``artist.set_in_layout(False)``.\n \"\"\"\n\n artists = self.get_children()\n\n for axis in self._axis_map.values():\n # axis tight bboxes are calculated separately inside\n # Axes.get_tightbbox() using for_layout_only=True\n artists.remove(axis)\n if not (self.axison and self._frameon):\n # don't do bbox on spines if frame not on.\n for spine in 
self.spines.values():\n artists.remove(spine)\n\n artists.remove(self.title)\n artists.remove(self._left_title)\n artists.remove(self._right_title)\n\n # always include types that do not internally implement clipping\n # to Axes. may have clip_on set to True and clip_box equivalent\n # to ax.bbox but then ignore these properties during draws.\n noclip = (_AxesBase, maxis.Axis,\n offsetbox.AnnotationBbox, offsetbox.OffsetBox)\n return [a for a in artists if a.get_visible() and a.get_in_layout()\n and (isinstance(a, noclip) or not a._fully_clipped_to_axes())]\n\n def get_tightbbox(self, renderer=None, call_axes_locator=True,\n bbox_extra_artists=None, *, for_layout_only=False):\n \"\"\"\n Return the tight bounding box of the Axes, including axis and their\n decorators (xlabel, title, etc).\n\n Artists that have ``artist.set_in_layout(False)`` are not included\n in the bbox.\n\n Parameters\n ----------\n renderer : `.RendererBase` subclass\n renderer that will be used to draw the figures (i.e.\n ``fig.canvas.get_renderer()``)\n\n bbox_extra_artists : list of `.Artist` or ``None``\n List of artists to include in the tight bounding box. If\n ``None`` (default), then all artist children of the Axes are\n included in the tight bounding box.\n\n call_axes_locator : bool, default: True\n If *call_axes_locator* is ``False``, it does not call the\n ``_axes_locator`` attribute, which is necessary to get the correct\n bounding box. ``call_axes_locator=False`` can be used if the\n caller is only interested in the relative size of the tightbbox\n compared to the Axes bbox.\n\n for_layout_only : default: False\n The bounding box will *not* include the x-extent of the title and\n the xlabel, or the y-extent of the ylabel.\n\n Returns\n -------\n `.BboxBase`\n Bounding box in figure pixel coordinates.\n\n See Also\n --------\n matplotlib.axes.Axes.get_window_extent\n matplotlib.axis.Axis.get_tightbbox\n matplotlib.spines.Spine.get_window_extent\n \"\"\"\n\n bb = []\n if renderer is None:\n renderer = self.figure._get_renderer()\n\n if not self.get_visible():\n return None\n\n locator = self.get_axes_locator()\n self.apply_aspect(\n locator(self, renderer) if locator and call_axes_locator else None)\n\n for axis in self._axis_map.values():\n if self.axison and axis.get_visible():\n ba = martist._get_tightbbox_for_layout_only(axis, renderer)\n if ba:\n bb.append(ba)\n self._update_title_position(renderer)\n axbbox = self.get_window_extent(renderer)\n bb.append(axbbox)\n\n for title in [self.title, self._left_title, self._right_title]:\n if title.get_visible():\n bt = title.get_window_extent(renderer)\n if for_layout_only and bt.width > 0:\n # make the title bbox 1 pixel wide so its width\n # is not accounted for in bbox calculations in\n # tight/constrained_layout\n bt.x0 = (bt.x0 + bt.x1) / 2 - 0.5\n bt.x1 = bt.x0 + 1.0\n bb.append(bt)\n\n bbox_artists = bbox_extra_artists\n if bbox_artists is None:\n bbox_artists = self.get_default_bbox_extra_artists()\n\n for a in bbox_artists:\n bbox = a.get_tightbbox(renderer)\n if (bbox is not None\n and 0 < bbox.width < np.inf\n and 0 < bbox.height < np.inf):\n bb.append(bbox)\n return mtransforms.Bbox.union(\n [b for b in bb if b.width != 0 or b.height != 0])\n\n def _make_twin_axes(self, *args, **kwargs):\n \"\"\"Make a twinx Axes of self. 
This is used for twinx and twiny.\"\"\"\n if 'sharex' in kwargs and 'sharey' in kwargs:\n # The following line is added in v2.2 to avoid breaking Seaborn,\n # which currently uses this internal API.\n if kwargs[\"sharex\"] is not self and kwargs[\"sharey\"] is not self:\n raise ValueError(\"Twinned Axes may share only one axis\")\n ss = self.get_subplotspec()\n if ss:\n twin = self.figure.add_subplot(ss, *args, **kwargs)\n else:\n twin = self.figure.add_axes(\n self.get_position(True), *args, **kwargs,\n axes_locator=_TransformedBoundsLocator(\n [0, 0, 1, 1], self.transAxes))\n self.set_adjustable('datalim')\n twin.set_adjustable('datalim')\n self._twinned_axes.join(self, twin)\n return twin\n\n def twinx(self):\n \"\"\"\n Create a twin Axes sharing the xaxis.\n\n Create a new Axes with an invisible x-axis and an independent\n y-axis positioned opposite to the original one (i.e. at right). The\n x-axis autoscale setting will be inherited from the original\n Axes. To ensure that the tick marks of both y-axes align, see\n `~matplotlib.ticker.LinearLocator`.\n\n Returns\n -------\n Axes\n The newly created Axes instance\n\n Notes\n -----\n For those who are 'picking' artists while using twinx, pick\n events are only called for the artists in the top-most Axes.\n \"\"\"\n ax2 = self._make_twin_axes(sharex=self)\n ax2.yaxis.tick_right()\n ax2.yaxis.set_label_position('right')\n ax2.yaxis.set_offset_position('right')\n ax2.set_autoscalex_on(self.get_autoscalex_on())\n self.yaxis.tick_left()\n ax2.xaxis.set_visible(False)\n ax2.patch.set_visible(False)\n return ax2\n\n def twiny(self):\n \"\"\"\n Create a twin Axes sharing the yaxis.\n\n Create a new Axes with an invisible y-axis and an independent\n x-axis positioned opposite to the original one (i.e. at top). 
The\n y-axis autoscale setting will be inherited from the original Axes.\n To ensure that the tick marks of both x-axes align, see\n `~matplotlib.ticker.LinearLocator`.\n\n Returns\n -------\n Axes\n The newly created Axes instance\n\n Notes\n -----\n For those who are 'picking' artists while using twiny, pick\n events are only called for the artists in the top-most Axes.\n \"\"\"\n ax2 = self._make_twin_axes(sharey=self)\n ax2.xaxis.tick_top()\n ax2.xaxis.set_label_position('top')\n ax2.set_autoscaley_on(self.get_autoscaley_on())\n self.xaxis.tick_bottom()\n ax2.yaxis.set_visible(False)\n ax2.patch.set_visible(False)\n return ax2\n\n def get_shared_x_axes(self):\n \"\"\"Return an immutable view on the shared x-axes Grouper.\"\"\"\n return cbook.GrouperView(self._shared_axes[\"x\"])\n\n def get_shared_y_axes(self):\n \"\"\"Return an immutable view on the shared y-axes Grouper.\"\"\"\n return cbook.GrouperView(self._shared_axes[\"y\"])\n\n def label_outer(self):\n \"\"\"\n Only show \"outer\" labels and tick labels.\n\n x-labels are only kept for subplots on the last row (or first row, if\n labels are on the top side); y-labels only for subplots on the first\n column (or last column, if labels are on the right side).\n \"\"\"\n self._label_outer_xaxis(check_patch=False)\n self._label_outer_yaxis(check_patch=False)\n\n def _label_outer_xaxis(self, *, check_patch):\n # see documentation in label_outer.\n if check_patch and not isinstance(self.patch, mpl.patches.Rectangle):\n return\n ss = self.get_subplotspec()\n if not ss:\n return\n label_position = self.xaxis.get_label_position()\n if not ss.is_first_row(): # Remove top label/ticklabels/offsettext.\n if label_position == \"top\":\n self.set_xlabel(\"\")\n self.xaxis.set_tick_params(which=\"both\", labeltop=False)\n if self.xaxis.offsetText.get_position()[1] == 1:\n self.xaxis.offsetText.set_visible(False)\n if not ss.is_last_row(): # Remove bottom label/ticklabels/offsettext.\n if label_position == \"bottom\":\n self.set_xlabel(\"\")\n self.xaxis.set_tick_params(which=\"both\", labelbottom=False)\n if self.xaxis.offsetText.get_position()[1] == 0:\n self.xaxis.offsetText.set_visible(False)\n\n def _label_outer_yaxis(self, *, check_patch):\n # see documentation in label_outer.\n if check_patch and not isinstance(self.patch, mpl.patches.Rectangle):\n return\n ss = self.get_subplotspec()\n if not ss:\n return\n label_position = self.yaxis.get_label_position()\n if not ss.is_first_col(): # Remove left label/ticklabels/offsettext.\n if label_position == \"left\":\n self.set_ylabel(\"\")\n self.yaxis.set_tick_params(which=\"both\", labelleft=False)\n if self.yaxis.offsetText.get_position()[0] == 0:\n self.yaxis.offsetText.set_visible(False)\n if not ss.is_last_col(): # Remove right label/ticklabels/offsettext.\n if label_position == \"right\":\n self.set_ylabel(\"\")\n self.yaxis.set_tick_params(which=\"both\", labelright=False)\n if self.yaxis.offsetText.get_position()[0] == 1:\n self.yaxis.offsetText.set_visible(False)\n", "lib/matplotlib/axis.py": "\"\"\"\nClasses for the ticks and x and y axis.\n\"\"\"\n\nimport datetime\nimport functools\nimport logging\nfrom numbers import Number\n\nimport numpy as np\n\nimport matplotlib as mpl\nfrom matplotlib import _api, cbook\nimport matplotlib.artist as martist\nimport matplotlib.colors as mcolors\nimport matplotlib.lines as mlines\nimport matplotlib.scale as mscale\nimport matplotlib.text as mtext\nimport matplotlib.ticker as mticker\nimport matplotlib.transforms as mtransforms\nimport 
matplotlib.units as munits\n\n_log = logging.getLogger(__name__)\n\nGRIDLINE_INTERPOLATION_STEPS = 180\n\n# This list is being used for compatibility with Axes.grid, which\n# allows all Line2D kwargs.\n_line_inspector = martist.ArtistInspector(mlines.Line2D)\n_line_param_names = _line_inspector.get_setters()\n_line_param_aliases = [list(d)[0] for d in _line_inspector.aliasd.values()]\n_gridline_param_names = ['grid_' + name\n for name in _line_param_names + _line_param_aliases]\n\n\nclass Tick(martist.Artist):\n \"\"\"\n Abstract base class for the axis ticks, grid lines and labels.\n\n Ticks mark a position on an Axis. They contain two lines as markers and\n two labels; one each for the bottom and top positions (in case of an\n `.XAxis`) or for the left and right positions (in case of a `.YAxis`).\n\n Attributes\n ----------\n tick1line : `.Line2D`\n The left/bottom tick marker.\n tick2line : `.Line2D`\n The right/top tick marker.\n gridline : `.Line2D`\n The grid line associated with the label position.\n label1 : `.Text`\n The left/bottom tick label.\n label2 : `.Text`\n The right/top tick label.\n\n \"\"\"\n def __init__(\n self, axes, loc, *,\n size=None, # points\n width=None,\n color=None,\n tickdir=None,\n pad=None,\n labelsize=None,\n labelcolor=None,\n zorder=None,\n gridOn=None, # defaults to axes.grid depending on axes.grid.which\n tick1On=True,\n tick2On=True,\n label1On=True,\n label2On=False,\n major=True,\n labelrotation=0,\n grid_color=None,\n grid_linestyle=None,\n grid_linewidth=None,\n grid_alpha=None,\n **kwargs, # Other Line2D kwargs applied to gridlines.\n ):\n \"\"\"\n bbox is the Bound2D bounding box in display coords of the Axes\n loc is the tick location in data coords\n size is the tick size in points\n \"\"\"\n super().__init__()\n\n if gridOn is None:\n if major and (mpl.rcParams['axes.grid.which']\n in ('both', 'major')):\n gridOn = mpl.rcParams['axes.grid']\n elif (not major) and (mpl.rcParams['axes.grid.which']\n in ('both', 'minor')):\n gridOn = mpl.rcParams['axes.grid']\n else:\n gridOn = False\n\n self.set_figure(axes.figure)\n self.axes = axes\n\n self._loc = loc\n self._major = major\n\n name = self.__name__\n major_minor = \"major\" if major else \"minor\"\n\n if size is None:\n size = mpl.rcParams[f\"{name}.{major_minor}.size\"]\n self._size = size\n\n if width is None:\n width = mpl.rcParams[f\"{name}.{major_minor}.width\"]\n self._width = width\n\n if color is None:\n color = mpl.rcParams[f\"{name}.color\"]\n\n if pad is None:\n pad = mpl.rcParams[f\"{name}.{major_minor}.pad\"]\n self._base_pad = pad\n\n if labelcolor is None:\n labelcolor = mpl.rcParams[f\"{name}.labelcolor\"]\n\n if labelcolor == 'inherit':\n # inherit from tick color\n labelcolor = mpl.rcParams[f\"{name}.color\"]\n\n if labelsize is None:\n labelsize = mpl.rcParams[f\"{name}.labelsize\"]\n\n self._set_labelrotation(labelrotation)\n\n if zorder is None:\n if major:\n zorder = mlines.Line2D.zorder + 0.01\n else:\n zorder = mlines.Line2D.zorder\n self._zorder = zorder\n\n if grid_color is None:\n grid_color = mpl.rcParams[\"grid.color\"]\n if grid_linestyle is None:\n grid_linestyle = mpl.rcParams[\"grid.linestyle\"]\n if grid_linewidth is None:\n grid_linewidth = mpl.rcParams[\"grid.linewidth\"]\n if grid_alpha is None and not mcolors._has_alpha_channel(grid_color):\n # alpha precedence: kwarg > color alpha > rcParams['grid.alpha']\n # Note: only resolve to rcParams if the color does not have alpha\n # otherwise `grid(color=(1, 1, 1, 0.5))` would work like\n # grid(color=(1, 1, 1, 
0.5), alpha=rcParams['grid.alpha'])\n # so the that the rcParams default would override color alpha.\n grid_alpha = mpl.rcParams[\"grid.alpha\"]\n grid_kw = {k[5:]: v for k, v in kwargs.items()}\n\n self.tick1line = mlines.Line2D(\n [], [],\n color=color, linestyle=\"none\", zorder=zorder, visible=tick1On,\n markeredgecolor=color, markersize=size, markeredgewidth=width,\n )\n self.tick2line = mlines.Line2D(\n [], [],\n color=color, linestyle=\"none\", zorder=zorder, visible=tick2On,\n markeredgecolor=color, markersize=size, markeredgewidth=width,\n )\n self.gridline = mlines.Line2D(\n [], [],\n color=grid_color, alpha=grid_alpha, visible=gridOn,\n linestyle=grid_linestyle, linewidth=grid_linewidth, marker=\"\",\n **grid_kw,\n )\n self.gridline.get_path()._interpolation_steps = \\\n GRIDLINE_INTERPOLATION_STEPS\n self.label1 = mtext.Text(\n np.nan, np.nan,\n fontsize=labelsize, color=labelcolor, visible=label1On,\n rotation=self._labelrotation[1])\n self.label2 = mtext.Text(\n np.nan, np.nan,\n fontsize=labelsize, color=labelcolor, visible=label2On,\n rotation=self._labelrotation[1])\n\n self._apply_tickdir(tickdir)\n\n for artist in [self.tick1line, self.tick2line, self.gridline,\n self.label1, self.label2]:\n self._set_artist_props(artist)\n\n self.update_position(loc)\n\n @property\n @_api.deprecated(\"3.1\", alternative=\"Tick.label1\", removal=\"3.8\")\n def label(self):\n return self.label1\n\n def _set_labelrotation(self, labelrotation):\n if isinstance(labelrotation, str):\n mode = labelrotation\n angle = 0\n elif isinstance(labelrotation, (tuple, list)):\n mode, angle = labelrotation\n else:\n mode = 'default'\n angle = labelrotation\n _api.check_in_list(['auto', 'default'], labelrotation=mode)\n self._labelrotation = (mode, angle)\n\n def _apply_tickdir(self, tickdir):\n \"\"\"Set tick direction. Valid values are 'out', 'in', 'inout'.\"\"\"\n # This method is responsible for updating `_pad`, and, in subclasses,\n # for setting the tick{1,2}line markers as well. From the user\n # perspective this should always be called though _apply_params, which\n # further updates ticklabel positions using the new pads.\n if tickdir is None:\n tickdir = mpl.rcParams[f'{self.__name__}.direction']\n _api.check_in_list(['in', 'out', 'inout'], tickdir=tickdir)\n self._tickdir = tickdir\n self._pad = self._base_pad + self.get_tick_padding()\n\n def get_tickdir(self):\n return self._tickdir\n\n def get_tick_padding(self):\n \"\"\"Get the length of the tick outside of the Axes.\"\"\"\n padding = {\n 'in': 0.0,\n 'inout': 0.5,\n 'out': 1.0\n }\n return self._size * padding[self._tickdir]\n\n def get_children(self):\n children = [self.tick1line, self.tick2line,\n self.gridline, self.label1, self.label2]\n return children\n\n def set_clip_path(self, clippath, transform=None):\n # docstring inherited\n super().set_clip_path(clippath, transform)\n self.gridline.set_clip_path(clippath, transform)\n self.stale = True\n\n @_api.deprecated(\"3.6\")\n def get_pad_pixels(self):\n return self.figure.dpi * self._base_pad / 72\n\n def contains(self, mouseevent):\n \"\"\"\n Test whether the mouse event occurred in the Tick marks.\n\n This function always returns false. 
It is more useful to test if the\n axis as a whole contains the mouse rather than the set of tick marks.\n \"\"\"\n inside, info = self._default_contains(mouseevent)\n if inside is not None:\n return inside, info\n return False, {}\n\n def set_pad(self, val):\n \"\"\"\n Set the tick label pad in points\n\n Parameters\n ----------\n val : float\n \"\"\"\n self._apply_params(pad=val)\n self.stale = True\n\n def get_pad(self):\n \"\"\"Get the value of the tick label pad in points.\"\"\"\n return self._base_pad\n\n def _get_text1(self):\n \"\"\"Get the default Text 1 instance.\"\"\"\n\n def _get_text2(self):\n \"\"\"Get the default Text 2 instance.\"\"\"\n\n def _get_tick1line(self):\n \"\"\"Get the default line2D instance for tick1.\"\"\"\n\n def _get_tick2line(self):\n \"\"\"Get the default line2D instance for tick2.\"\"\"\n\n def _get_gridline(self):\n \"\"\"Get the default grid Line2d instance for this tick.\"\"\"\n\n def get_loc(self):\n \"\"\"Return the tick location (data coords) as a scalar.\"\"\"\n return self._loc\n\n @martist.allow_rasterization\n def draw(self, renderer):\n if not self.get_visible():\n self.stale = False\n return\n renderer.open_group(self.__name__, gid=self.get_gid())\n for artist in [self.gridline, self.tick1line, self.tick2line,\n self.label1, self.label2]:\n artist.draw(renderer)\n renderer.close_group(self.__name__)\n self.stale = False\n\n def set_label1(self, s):\n \"\"\"\n Set the label1 text.\n\n Parameters\n ----------\n s : str\n \"\"\"\n self.label1.set_text(s)\n self.stale = True\n\n set_label = set_label1\n\n def set_label2(self, s):\n \"\"\"\n Set the label2 text.\n\n Parameters\n ----------\n s : str\n \"\"\"\n self.label2.set_text(s)\n self.stale = True\n\n def set_url(self, url):\n \"\"\"\n Set the url of label1 and label2.\n\n Parameters\n ----------\n url : str\n \"\"\"\n super().set_url(url)\n self.label1.set_url(url)\n self.label2.set_url(url)\n self.stale = True\n\n def _set_artist_props(self, a):\n a.set_figure(self.figure)\n\n def get_view_interval(self):\n \"\"\"\n Return the view limits ``(min, max)`` of the axis the tick belongs to.\n \"\"\"\n raise NotImplementedError('Derived must override')\n\n def _apply_params(self, **kwargs):\n for name, target in [(\"gridOn\", self.gridline),\n (\"tick1On\", self.tick1line),\n (\"tick2On\", self.tick2line),\n (\"label1On\", self.label1),\n (\"label2On\", self.label2)]:\n if name in kwargs:\n target.set_visible(kwargs.pop(name))\n if any(k in kwargs for k in ['size', 'width', 'pad', 'tickdir']):\n self._size = kwargs.pop('size', self._size)\n # Width could be handled outside this block, but it is\n # convenient to leave it here.\n self._width = kwargs.pop('width', self._width)\n self._base_pad = kwargs.pop('pad', self._base_pad)\n # _apply_tickdir uses _size and _base_pad to make _pad, and also\n # sets the ticklines markers.\n self._apply_tickdir(kwargs.pop('tickdir', self._tickdir))\n for line in (self.tick1line, self.tick2line):\n line.set_markersize(self._size)\n line.set_markeredgewidth(self._width)\n # _get_text1_transform uses _pad from _apply_tickdir.\n trans = self._get_text1_transform()[0]\n self.label1.set_transform(trans)\n trans = self._get_text2_transform()[0]\n self.label2.set_transform(trans)\n tick_kw = {k: v for k, v in kwargs.items() if k in ['color', 'zorder']}\n if 'color' in kwargs:\n tick_kw['markeredgecolor'] = kwargs['color']\n self.tick1line.set(**tick_kw)\n self.tick2line.set(**tick_kw)\n for k, v in tick_kw.items():\n setattr(self, '_' + k, v)\n\n if 'labelrotation' in 
kwargs:\n self._set_labelrotation(kwargs.pop('labelrotation'))\n self.label1.set(rotation=self._labelrotation[1])\n self.label2.set(rotation=self._labelrotation[1])\n\n label_kw = {k[5:]: v for k, v in kwargs.items()\n if k in ['labelsize', 'labelcolor']}\n self.label1.set(**label_kw)\n self.label2.set(**label_kw)\n\n grid_kw = {k[5:]: v for k, v in kwargs.items()\n if k in _gridline_param_names}\n self.gridline.set(**grid_kw)\n\n def update_position(self, loc):\n \"\"\"Set the location of tick in data coords with scalar *loc*.\"\"\"\n raise NotImplementedError('Derived must override')\n\n def _get_text1_transform(self):\n raise NotImplementedError('Derived must override')\n\n def _get_text2_transform(self):\n raise NotImplementedError('Derived must override')\n\n\nclass XTick(Tick):\n \"\"\"\n Contains all the Artists needed to make an x tick - the tick line,\n the label text and the grid line\n \"\"\"\n __name__ = 'xtick'\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n # x in data coords, y in axes coords\n ax = self.axes\n self.tick1line.set(\n data=([0], [0]), transform=ax.get_xaxis_transform(\"tick1\"))\n self.tick2line.set(\n data=([0], [1]), transform=ax.get_xaxis_transform(\"tick2\"))\n self.gridline.set(\n data=([0, 0], [0, 1]), transform=ax.get_xaxis_transform(\"grid\"))\n # the y loc is 3 points below the min of y axis\n trans, va, ha = self._get_text1_transform()\n self.label1.set(\n x=0, y=0,\n verticalalignment=va, horizontalalignment=ha, transform=trans,\n )\n trans, va, ha = self._get_text2_transform()\n self.label2.set(\n x=0, y=1,\n verticalalignment=va, horizontalalignment=ha, transform=trans,\n )\n\n def _get_text1_transform(self):\n return self.axes.get_xaxis_text1_transform(self._pad)\n\n def _get_text2_transform(self):\n return self.axes.get_xaxis_text2_transform(self._pad)\n\n def _apply_tickdir(self, tickdir):\n # docstring inherited\n super()._apply_tickdir(tickdir)\n mark1, mark2 = {\n 'out': (mlines.TICKDOWN, mlines.TICKUP),\n 'in': (mlines.TICKUP, mlines.TICKDOWN),\n 'inout': ('|', '|'),\n }[self._tickdir]\n self.tick1line.set_marker(mark1)\n self.tick2line.set_marker(mark2)\n\n def update_position(self, loc):\n \"\"\"Set the location of tick in data coords with scalar *loc*.\"\"\"\n self.tick1line.set_xdata((loc,))\n self.tick2line.set_xdata((loc,))\n self.gridline.set_xdata((loc,))\n self.label1.set_x(loc)\n self.label2.set_x(loc)\n self._loc = loc\n self.stale = True\n\n def get_view_interval(self):\n # docstring inherited\n return self.axes.viewLim.intervalx\n\n\nclass YTick(Tick):\n \"\"\"\n Contains all the Artists needed to make a Y tick - the tick line,\n the label text and the grid line\n \"\"\"\n __name__ = 'ytick'\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n # x in axes coords, y in data coords\n ax = self.axes\n self.tick1line.set(\n data=([0], [0]), transform=ax.get_yaxis_transform(\"tick1\"))\n self.tick2line.set(\n data=([1], [0]), transform=ax.get_yaxis_transform(\"tick2\"))\n self.gridline.set(\n data=([0, 1], [0, 0]), transform=ax.get_yaxis_transform(\"grid\"))\n # the y loc is 3 points below the min of y axis\n trans, va, ha = self._get_text1_transform()\n self.label1.set(\n x=0, y=0,\n verticalalignment=va, horizontalalignment=ha, transform=trans,\n )\n trans, va, ha = self._get_text2_transform()\n self.label2.set(\n x=1, y=0,\n verticalalignment=va, horizontalalignment=ha, transform=trans,\n )\n\n def _get_text1_transform(self):\n return 
self.axes.get_yaxis_text1_transform(self._pad)\n\n def _get_text2_transform(self):\n return self.axes.get_yaxis_text2_transform(self._pad)\n\n def _apply_tickdir(self, tickdir):\n # docstring inherited\n super()._apply_tickdir(tickdir)\n mark1, mark2 = {\n 'out': (mlines.TICKLEFT, mlines.TICKRIGHT),\n 'in': (mlines.TICKRIGHT, mlines.TICKLEFT),\n 'inout': ('_', '_'),\n }[self._tickdir]\n self.tick1line.set_marker(mark1)\n self.tick2line.set_marker(mark2)\n\n def update_position(self, loc):\n \"\"\"Set the location of tick in data coords with scalar *loc*.\"\"\"\n self.tick1line.set_ydata((loc,))\n self.tick2line.set_ydata((loc,))\n self.gridline.set_ydata((loc,))\n self.label1.set_y(loc)\n self.label2.set_y(loc)\n self._loc = loc\n self.stale = True\n\n def get_view_interval(self):\n # docstring inherited\n return self.axes.viewLim.intervaly\n\n\nclass Ticker:\n \"\"\"\n A container for the objects defining tick position and format.\n\n Attributes\n ----------\n locator : `matplotlib.ticker.Locator` subclass\n Determines the positions of the ticks.\n formatter : `matplotlib.ticker.Formatter` subclass\n Determines the format of the tick labels.\n \"\"\"\n\n def __init__(self):\n self._locator = None\n self._formatter = None\n self._locator_is_default = True\n self._formatter_is_default = True\n\n @property\n def locator(self):\n return self._locator\n\n @locator.setter\n def locator(self, locator):\n if not isinstance(locator, mticker.Locator):\n raise TypeError('locator must be a subclass of '\n 'matplotlib.ticker.Locator')\n self._locator = locator\n\n @property\n def formatter(self):\n return self._formatter\n\n @formatter.setter\n def formatter(self, formatter):\n if not isinstance(formatter, mticker.Formatter):\n raise TypeError('formatter must be a subclass of '\n 'matplotlib.ticker.Formatter')\n self._formatter = formatter\n\n\nclass _LazyTickList:\n \"\"\"\n A descriptor for lazy instantiation of tick lists.\n\n See comment above definition of the ``majorTicks`` and ``minorTicks``\n attributes.\n \"\"\"\n\n def __init__(self, major):\n self._major = major\n\n def __get__(self, instance, cls):\n if instance is None:\n return self\n else:\n # instance._get_tick() can itself try to access the majorTicks\n # attribute (e.g. in certain projection classes which override\n # e.g. get_xaxis_text1_transform). In order to avoid infinite\n # recursion, first set the majorTicks on the instance to an empty\n # list, then create the tick and append it.\n if self._major:\n instance.majorTicks = []\n tick = instance._get_tick(major=True)\n instance.majorTicks.append(tick)\n return instance.majorTicks\n else:\n instance.minorTicks = []\n tick = instance._get_tick(major=False)\n instance.minorTicks.append(tick)\n return instance.minorTicks\n\n\nclass Axis(martist.Artist):\n \"\"\"\n Base class for `.XAxis` and `.YAxis`.\n\n Attributes\n ----------\n isDefault_label : bool\n\n axes : `matplotlib.axes.Axes`\n The `~.axes.Axes` to which the Axis belongs.\n major : `matplotlib.axis.Ticker`\n Determines the major tick positions and their label format.\n minor : `matplotlib.axis.Ticker`\n Determines the minor tick positions and their label format.\n callbacks : `matplotlib.cbook.CallbackRegistry`\n\n label : `.Text`\n The axis label.\n labelpad : float\n The distance between the axis label and the tick labels.\n Defaults to :rc:`axes.labelpad` = 4.\n offsetText : `.Text`\n A `.Text` object containing the data offset of the ticks (if any).\n pickradius : float\n The acceptance radius for containment tests. 
See also `.Axis.contains`.\n majorTicks : list of `.Tick`\n The major ticks.\n minorTicks : list of `.Tick`\n The minor ticks.\n \"\"\"\n OFFSETTEXTPAD = 3\n # The class used in _get_tick() to create tick instances. Must either be\n # overwritten in subclasses, or subclasses must reimplement _get_tick().\n _tick_class = None\n\n def __str__(self):\n return \"{}({},{})\".format(\n type(self).__name__, *self.axes.transAxes.transform((0, 0)))\n\n @_api.make_keyword_only(\"3.6\", name=\"pickradius\")\n def __init__(self, axes, pickradius=15):\n \"\"\"\n Parameters\n ----------\n axes : `matplotlib.axes.Axes`\n The `~.axes.Axes` to which the created Axis belongs.\n pickradius : float\n The acceptance radius for containment tests. See also\n `.Axis.contains`.\n \"\"\"\n super().__init__()\n self._remove_overlapping_locs = True\n\n self.set_figure(axes.figure)\n\n self.isDefault_label = True\n\n self.axes = axes\n self.major = Ticker()\n self.minor = Ticker()\n self.callbacks = cbook.CallbackRegistry(signals=[\"units\"])\n\n self._autolabelpos = True\n\n self.label = mtext.Text(\n np.nan, np.nan,\n fontsize=mpl.rcParams['axes.labelsize'],\n fontweight=mpl.rcParams['axes.labelweight'],\n color=mpl.rcParams['axes.labelcolor'],\n )\n self._set_artist_props(self.label)\n self.offsetText = mtext.Text(np.nan, np.nan)\n self._set_artist_props(self.offsetText)\n\n self.labelpad = mpl.rcParams['axes.labelpad']\n\n self.pickradius = pickradius\n\n # Initialize here for testing; later add API\n self._major_tick_kw = dict()\n self._minor_tick_kw = dict()\n\n self.clear()\n self._autoscale_on = True\n\n @property\n def isDefault_majloc(self):\n return self.major._locator_is_default\n\n @isDefault_majloc.setter\n def isDefault_majloc(self, value):\n self.major._locator_is_default = value\n\n @property\n def isDefault_majfmt(self):\n return self.major._formatter_is_default\n\n @isDefault_majfmt.setter\n def isDefault_majfmt(self, value):\n self.major._formatter_is_default = value\n\n @property\n def isDefault_minloc(self):\n return self.minor._locator_is_default\n\n @isDefault_minloc.setter\n def isDefault_minloc(self, value):\n self.minor._locator_is_default = value\n\n @property\n def isDefault_minfmt(self):\n return self.minor._formatter_is_default\n\n @isDefault_minfmt.setter\n def isDefault_minfmt(self, value):\n self.minor._formatter_is_default = value\n\n # During initialization, Axis objects often create ticks that are later\n # unused; this turns out to be a very slow step. Instead, use a custom\n # descriptor to make the tick lists lazy and instantiate them as needed.\n majorTicks = _LazyTickList(major=True)\n minorTicks = _LazyTickList(major=False)\n\n def get_remove_overlapping_locs(self):\n return self._remove_overlapping_locs\n\n def set_remove_overlapping_locs(self, val):\n self._remove_overlapping_locs = bool(val)\n\n remove_overlapping_locs = property(\n get_remove_overlapping_locs, set_remove_overlapping_locs,\n doc=('If minor ticker locations that overlap with major '\n 'ticker locations should be trimmed.'))\n\n def set_label_coords(self, x, y, transform=None):\n \"\"\"\n Set the coordinates of the label.\n\n By default, the x coordinate of the y label and the y coordinate of the\n x label are determined by the tick label bounding boxes, but this can\n lead to poor alignment of multiple labels if there are multiple axes.\n\n You can also specify the coordinate system of the label with the\n transform. 
If None, the default coordinate system will be the axes\n coordinate system: (0, 0) is bottom left, (0.5, 0.5) is center, etc.\n \"\"\"\n self._autolabelpos = False\n if transform is None:\n transform = self.axes.transAxes\n\n self.label.set_transform(transform)\n self.label.set_position((x, y))\n self.stale = True\n\n def get_transform(self):\n return self._scale.get_transform()\n\n def get_scale(self):\n \"\"\"Return this Axis' scale (as a str).\"\"\"\n return self._scale.name\n\n def _set_scale(self, value, **kwargs):\n if not isinstance(value, mscale.ScaleBase):\n self._scale = mscale.scale_factory(value, self, **kwargs)\n else:\n self._scale = value\n self._scale.set_default_locators_and_formatters(self)\n\n self.isDefault_majloc = True\n self.isDefault_minloc = True\n self.isDefault_majfmt = True\n self.isDefault_minfmt = True\n\n # This method is directly wrapped by Axes.set_{x,y}scale.\n def _set_axes_scale(self, value, **kwargs):\n \"\"\"\n Set this Axis' scale.\n\n Parameters\n ----------\n value : {\"linear\", \"log\", \"symlog\", \"logit\", ...} or `.ScaleBase`\n The axis scale type to apply.\n\n **kwargs\n Different keyword arguments are accepted, depending on the scale.\n See the respective class keyword arguments:\n\n - `matplotlib.scale.LinearScale`\n - `matplotlib.scale.LogScale`\n - `matplotlib.scale.SymmetricalLogScale`\n - `matplotlib.scale.LogitScale`\n - `matplotlib.scale.FuncScale`\n\n Notes\n -----\n By default, Matplotlib supports the above mentioned scales.\n Additionally, custom scales may be registered using\n `matplotlib.scale.register_scale`. These scales can then also\n be used here.\n \"\"\"\n name, = [name for name, axis in self.axes._axis_map.items()\n if axis is self] # The axis name.\n old_default_lims = (self.get_major_locator()\n .nonsingular(-np.inf, np.inf))\n g = self.axes._shared_axes[name]\n for ax in g.get_siblings(self.axes):\n ax._axis_map[name]._set_scale(value, **kwargs)\n ax._update_transScale()\n ax.stale = True\n new_default_lims = (self.get_major_locator()\n .nonsingular(-np.inf, np.inf))\n if old_default_lims != new_default_lims:\n # Force autoscaling now, to take advantage of the scale locator's\n # nonsingular() before it possibly gets swapped out by the user.\n self.axes.autoscale_view(\n **{f\"scale{k}\": k == name for k in self.axes._axis_names})\n\n def limit_range_for_scale(self, vmin, vmax):\n return self._scale.limit_range_for_scale(vmin, vmax, self.get_minpos())\n\n def _get_autoscale_on(self):\n \"\"\"Return whether this Axis is autoscaled.\"\"\"\n return self._autoscale_on\n\n def _set_autoscale_on(self, b):\n \"\"\"\n Set whether this Axis is autoscaled when drawing or by\n `.Axes.autoscale_view`.\n\n Parameters\n ----------\n b : bool\n \"\"\"\n self._autoscale_on = b\n\n def get_children(self):\n return [self.label, self.offsetText,\n *self.get_major_ticks(), *self.get_minor_ticks()]\n\n def _reset_major_tick_kw(self):\n self._major_tick_kw.clear()\n self._major_tick_kw['gridOn'] = (\n mpl.rcParams['axes.grid'] and\n mpl.rcParams['axes.grid.which'] in ('both', 'major'))\n\n def _reset_minor_tick_kw(self):\n self._minor_tick_kw.clear()\n self._minor_tick_kw['gridOn'] = (\n mpl.rcParams['axes.grid'] and\n mpl.rcParams['axes.grid.which'] in ('both', 'minor'))\n\n def clear(self):\n \"\"\"\n Clear the axis.\n\n This resets axis properties to their default values:\n\n - the label\n - the scale\n - locators, formatters and ticks\n - major and minor grid\n - units\n - registered callbacks\n \"\"\"\n 
self.label._reset_visual_defaults()\n self.offsetText._reset_visual_defaults()\n self.labelpad = mpl.rcParams['axes.labelpad']\n\n self._init()\n\n self._set_scale('linear')\n\n # Clear the callback registry for this axis, or it may \"leak\"\n self.callbacks = cbook.CallbackRegistry(signals=[\"units\"])\n\n # whether the grids are on\n self._major_tick_kw['gridOn'] = (\n mpl.rcParams['axes.grid'] and\n mpl.rcParams['axes.grid.which'] in ('both', 'major'))\n self._minor_tick_kw['gridOn'] = (\n mpl.rcParams['axes.grid'] and\n mpl.rcParams['axes.grid.which'] in ('both', 'minor'))\n self.reset_ticks()\n\n self.converter = None\n self.units = None\n self.set_units(None)\n self.stale = True\n\n def reset_ticks(self):\n \"\"\"\n Re-initialize the major and minor Tick lists.\n\n Each list starts with a single fresh Tick.\n \"\"\"\n # Restore the lazy tick lists.\n try:\n del self.majorTicks\n except AttributeError:\n pass\n try:\n del self.minorTicks\n except AttributeError:\n pass\n try:\n self.set_clip_path(self.axes.patch)\n except AttributeError:\n pass\n\n def set_tick_params(self, which='major', reset=False, **kwargs):\n \"\"\"\n Set appearance parameters for ticks, ticklabels, and gridlines.\n\n For documentation of keyword arguments, see\n :meth:`matplotlib.axes.Axes.tick_params`.\n \"\"\"\n _api.check_in_list(['major', 'minor', 'both'], which=which)\n kwtrans = self._translate_tick_params(kwargs)\n\n # the kwargs are stored in self._major/minor_tick_kw so that any\n # future new ticks will automatically get them\n if reset:\n if which in ['major', 'both']:\n self._reset_major_tick_kw()\n self._major_tick_kw.update(kwtrans)\n if which in ['minor', 'both']:\n self._reset_minor_tick_kw()\n self._minor_tick_kw.update(kwtrans)\n self.reset_ticks()\n else:\n if which in ['major', 'both']:\n self._major_tick_kw.update(kwtrans)\n for tick in self.majorTicks:\n tick._apply_params(**kwtrans)\n if which in ['minor', 'both']:\n self._minor_tick_kw.update(kwtrans)\n for tick in self.minorTicks:\n tick._apply_params(**kwtrans)\n # labelOn and labelcolor also apply to the offset text.\n if 'label1On' in kwtrans or 'label2On' in kwtrans:\n self.offsetText.set_visible(\n self._major_tick_kw.get('label1On', False)\n or self._major_tick_kw.get('label2On', False))\n if 'labelcolor' in kwtrans:\n self.offsetText.set_color(kwtrans['labelcolor'])\n\n self.stale = True\n\n @staticmethod\n def _translate_tick_params(kw):\n \"\"\"\n Translate the kwargs supported by `.Axis.set_tick_params` to kwargs\n supported by `.Tick._apply_params`.\n\n In particular, this maps axis specific names like 'top', 'left'\n to the generic tick1, tick2 logic of the axis. 
Additionally, there\n are some other name translations.\n\n Returns a new dict of translated kwargs.\n\n Note: The input *kwargs* are currently modified, but that's ok for\n the only caller.\n \"\"\"\n # The following lists may be moved to a more accessible location.\n allowed_keys = [\n 'size', 'width', 'color', 'tickdir', 'pad',\n 'labelsize', 'labelcolor', 'zorder', 'gridOn',\n 'tick1On', 'tick2On', 'label1On', 'label2On',\n 'length', 'direction', 'left', 'bottom', 'right', 'top',\n 'labelleft', 'labelbottom', 'labelright', 'labeltop',\n 'labelrotation',\n *_gridline_param_names]\n\n keymap = {\n # tick_params key -> axis key\n 'length': 'size',\n 'direction': 'tickdir',\n 'rotation': 'labelrotation',\n 'left': 'tick1On',\n 'bottom': 'tick1On',\n 'right': 'tick2On',\n 'top': 'tick2On',\n 'labelleft': 'label1On',\n 'labelbottom': 'label1On',\n 'labelright': 'label2On',\n 'labeltop': 'label2On',\n }\n kwtrans = {newkey: kw.pop(oldkey)\n for oldkey, newkey in keymap.items() if oldkey in kw}\n if 'colors' in kw:\n c = kw.pop('colors')\n kwtrans['color'] = c\n kwtrans['labelcolor'] = c\n # Maybe move the checking up to the caller of this method.\n for key in kw:\n if key not in allowed_keys:\n raise ValueError(\n \"keyword %s is not recognized; valid keywords are %s\"\n % (key, allowed_keys))\n kwtrans.update(kw)\n return kwtrans\n\n def set_clip_path(self, clippath, transform=None):\n super().set_clip_path(clippath, transform)\n for child in self.majorTicks + self.minorTicks:\n child.set_clip_path(clippath, transform)\n self.stale = True\n\n def get_view_interval(self):\n \"\"\"Return the ``(min, max)`` view limits of this axis.\"\"\"\n raise NotImplementedError('Derived must override')\n\n def set_view_interval(self, vmin, vmax, ignore=False):\n \"\"\"\n Set the axis view limits. This method is for internal use; Matplotlib\n users should typically use e.g. `~.Axes.set_xlim` or `~.Axes.set_ylim`.\n\n If *ignore* is False (the default), this method will never reduce the\n preexisting view limits, only expand them if *vmin* or *vmax* are not\n within them. Moreover, the order of *vmin* and *vmax* does not matter;\n the orientation of the axis will not change.\n\n If *ignore* is True, the view limits will be set exactly to ``(vmin,\n vmax)`` in that order.\n \"\"\"\n raise NotImplementedError('Derived must override')\n\n def get_data_interval(self):\n \"\"\"Return the ``(min, max)`` data limits of this axis.\"\"\"\n raise NotImplementedError('Derived must override')\n\n def set_data_interval(self, vmin, vmax, ignore=False):\n \"\"\"\n Set the axis data limits. This method is for internal use.\n\n If *ignore* is False (the default), this method will never reduce the\n preexisting data limits, only expand them if *vmin* or *vmax* are not\n within them. 
Moreover, the order of *vmin* and *vmax* does not matter;\n the orientation of the axis will not change.\n\n If *ignore* is True, the data limits will be set exactly to ``(vmin,\n vmax)`` in that order.\n \"\"\"\n raise NotImplementedError('Derived must override')\n\n def get_inverted(self):\n \"\"\"\n Return whether this Axis is oriented in the \"inverse\" direction.\n\n The \"normal\" direction is increasing to the right for the x-axis and to\n the top for the y-axis; the \"inverse\" direction is increasing to the\n left for the x-axis and to the bottom for the y-axis.\n \"\"\"\n low, high = self.get_view_interval()\n return high < low\n\n def set_inverted(self, inverted):\n \"\"\"\n Set whether this Axis is oriented in the \"inverse\" direction.\n\n The \"normal\" direction is increasing to the right for the x-axis and to\n the top for the y-axis; the \"inverse\" direction is increasing to the\n left for the x-axis and to the bottom for the y-axis.\n \"\"\"\n a, b = self.get_view_interval()\n # cast to bool to avoid bad interaction between python 3.8 and np.bool_\n self._set_lim(*sorted((a, b), reverse=bool(inverted)), auto=None)\n\n def set_default_intervals(self):\n \"\"\"\n Set the default limits for the axis data and view interval if they\n have not been not mutated yet.\n \"\"\"\n # this is mainly in support of custom object plotting. For\n # example, if someone passes in a datetime object, we do not\n # know automagically how to set the default min/max of the\n # data and view limits. The unit conversion AxisInfo\n # interface provides a hook for custom types to register\n # default limits through the AxisInfo.default_limits\n # attribute, and the derived code below will check for that\n # and use it if it's available (else just use 0..1)\n\n def _set_lim(self, v0, v1, *, emit=True, auto):\n \"\"\"\n Set view limits.\n\n This method is a helper for the Axes ``set_xlim``, ``set_ylim``, and\n ``set_zlim`` methods.\n\n Parameters\n ----------\n v0, v1 : float\n The view limits. (Passing *v0* as a (low, high) pair is not\n supported; normalization must occur in the Axes setters.)\n emit : bool, default: True\n Whether to notify observers of limit change.\n auto : bool or None, default: False\n Whether to turn on autoscaling of the x-axis. 
True turns on, False\n turns off, None leaves unchanged.\n \"\"\"\n name, = [name for name, axis in self.axes._axis_map.items()\n if axis is self] # The axis name.\n\n self.axes._process_unit_info([(name, (v0, v1))], convert=False)\n v0 = self.axes._validate_converted_limits(v0, self.convert_units)\n v1 = self.axes._validate_converted_limits(v1, self.convert_units)\n\n if v0 is None or v1 is None:\n # Axes init calls set_xlim(0, 1) before get_xlim() can be called,\n # so only grab the limits if we really need them.\n old0, old1 = self.get_view_interval()\n if v0 is None:\n v0 = old0\n if v1 is None:\n v1 = old1\n\n if self.get_scale() == 'log' and (v0 <= 0 or v1 <= 0):\n # Axes init calls set_xlim(0, 1) before get_xlim() can be called,\n # so only grab the limits if we really need them.\n old0, old1 = self.get_view_interval()\n if v0 <= 0:\n _api.warn_external(f\"Attempt to set non-positive {name}lim on \"\n f\"a log-scaled axis will be ignored.\")\n v0 = old0\n if v1 <= 0:\n _api.warn_external(f\"Attempt to set non-positive {name}lim on \"\n f\"a log-scaled axis will be ignored.\")\n v1 = old1\n if v0 == v1:\n _api.warn_external(\n f\"Attempting to set identical low and high {name}lims \"\n f\"makes transformation singular; automatically expanding.\")\n reverse = bool(v0 > v1) # explicit cast needed for python3.8+np.bool_.\n v0, v1 = self.get_major_locator().nonsingular(v0, v1)\n v0, v1 = self.limit_range_for_scale(v0, v1)\n v0, v1 = sorted([v0, v1], reverse=bool(reverse))\n\n self.set_view_interval(v0, v1, ignore=True)\n # Mark viewlims as no longer stale without triggering an autoscale.\n for ax in self.axes._shared_axes[name].get_siblings(self.axes):\n ax._stale_viewlims[name] = False\n if auto is not None:\n self._set_autoscale_on(bool(auto))\n\n if emit:\n self.axes.callbacks.process(f\"{name}lim_changed\", self.axes)\n # Call all of the other axes that are shared with this one\n for other in self.axes._shared_axes[name].get_siblings(self.axes):\n if other is not self.axes:\n other._axis_map[name]._set_lim(\n v0, v1, emit=False, auto=auto)\n if other.figure != self.figure:\n other.figure.canvas.draw_idle()\n\n self.stale = True\n return v0, v1\n\n def _set_artist_props(self, a):\n if a is None:\n return\n a.set_figure(self.figure)\n\n @_api.deprecated(\"3.6\")\n def get_ticklabel_extents(self, renderer):\n \"\"\"Get the extents of the tick labels on either side of the axes.\"\"\"\n ticks_to_draw = self._update_ticks()\n tlb1, tlb2 = self._get_ticklabel_bboxes(ticks_to_draw, renderer)\n if len(tlb1):\n bbox1 = mtransforms.Bbox.union(tlb1)\n else:\n bbox1 = mtransforms.Bbox.from_extents(0, 0, 0, 0)\n if len(tlb2):\n bbox2 = mtransforms.Bbox.union(tlb2)\n else:\n bbox2 = mtransforms.Bbox.from_extents(0, 0, 0, 0)\n return bbox1, bbox2\n\n def _update_ticks(self):\n \"\"\"\n Update ticks (position and labels) using the current data interval of\n the axes. 
Return the list of ticks that will be drawn.\n \"\"\"\n major_locs = self.get_majorticklocs()\n major_labels = self.major.formatter.format_ticks(major_locs)\n major_ticks = self.get_major_ticks(len(major_locs))\n self.major.formatter.set_locs(major_locs)\n for tick, loc, label in zip(major_ticks, major_locs, major_labels):\n tick.update_position(loc)\n tick.set_label1(label)\n tick.set_label2(label)\n minor_locs = self.get_minorticklocs()\n minor_labels = self.minor.formatter.format_ticks(minor_locs)\n minor_ticks = self.get_minor_ticks(len(minor_locs))\n self.minor.formatter.set_locs(minor_locs)\n for tick, loc, label in zip(minor_ticks, minor_locs, minor_labels):\n tick.update_position(loc)\n tick.set_label1(label)\n tick.set_label2(label)\n ticks = [*major_ticks, *minor_ticks]\n\n view_low, view_high = self.get_view_interval()\n if view_low > view_high:\n view_low, view_high = view_high, view_low\n\n interval_t = self.get_transform().transform([view_low, view_high])\n\n ticks_to_draw = []\n for tick in ticks:\n try:\n loc_t = self.get_transform().transform(tick.get_loc())\n except AssertionError:\n # transforms.transform doesn't allow masked values but\n # some scales might make them, so we need this try/except.\n pass\n else:\n if mtransforms._interval_contains_close(interval_t, loc_t):\n ticks_to_draw.append(tick)\n\n return ticks_to_draw\n\n def _get_ticklabel_bboxes(self, ticks, renderer=None):\n \"\"\"Return lists of bboxes for ticks' label1's and label2's.\"\"\"\n if renderer is None:\n renderer = self.figure._get_renderer()\n return ([tick.label1.get_window_extent(renderer)\n for tick in ticks if tick.label1.get_visible()],\n [tick.label2.get_window_extent(renderer)\n for tick in ticks if tick.label2.get_visible()])\n\n def get_tightbbox(self, renderer=None, *, for_layout_only=False):\n \"\"\"\n Return a bounding box that encloses the axis. It only accounts\n tick labels, axis label, and offsetText.\n\n If *for_layout_only* is True, then the width of the label (if this\n is an x-axis) or the height of the label (if this is a y-axis) is\n collapsed to near zero. 
This allows tight/constrained_layout to ignore\n too-long labels when doing their layout.\n \"\"\"\n if not self.get_visible():\n return\n if renderer is None:\n renderer = self.figure._get_renderer()\n ticks_to_draw = self._update_ticks()\n\n self._update_label_position(renderer)\n\n # go back to just this axis's tick labels\n tlb1, tlb2 = self._get_ticklabel_bboxes(ticks_to_draw, renderer)\n\n self._update_offset_text_position(tlb1, tlb2)\n self.offsetText.set_text(self.major.formatter.get_offset())\n\n bboxes = [\n *(a.get_window_extent(renderer)\n for a in [self.offsetText]\n if a.get_visible()),\n *tlb1, *tlb2,\n ]\n # take care of label\n if self.label.get_visible():\n bb = self.label.get_window_extent(renderer)\n # for constrained/tight_layout, we want to ignore the label's\n # width/height because the adjustments they make can't be improved.\n # this code collapses the relevant direction\n if for_layout_only:\n if self.axis_name == \"x\" and bb.width > 0:\n bb.x0 = (bb.x0 + bb.x1) / 2 - 0.5\n bb.x1 = bb.x0 + 1.0\n if self.axis_name == \"y\" and bb.height > 0:\n bb.y0 = (bb.y0 + bb.y1) / 2 - 0.5\n bb.y1 = bb.y0 + 1.0\n bboxes.append(bb)\n bboxes = [b for b in bboxes\n if 0 < b.width < np.inf and 0 < b.height < np.inf]\n if bboxes:\n return mtransforms.Bbox.union(bboxes)\n else:\n return None\n\n def get_tick_padding(self):\n values = []\n if len(self.majorTicks):\n values.append(self.majorTicks[0].get_tick_padding())\n if len(self.minorTicks):\n values.append(self.minorTicks[0].get_tick_padding())\n return max(values, default=0)\n\n @martist.allow_rasterization\n def draw(self, renderer, *args, **kwargs):\n # docstring inherited\n\n if not self.get_visible():\n return\n renderer.open_group(__name__, gid=self.get_gid())\n\n ticks_to_draw = self._update_ticks()\n tlb1, tlb2 = self._get_ticklabel_bboxes(ticks_to_draw, renderer)\n\n for tick in ticks_to_draw:\n tick.draw(renderer)\n\n # Scale up the axis label box to also find the neighbors, not just the\n # tick labels that actually overlap. 
We need a *copy* of the axis\n # label box because we don't want to scale the actual bbox.\n\n self._update_label_position(renderer)\n\n self.label.draw(renderer)\n\n self._update_offset_text_position(tlb1, tlb2)\n self.offsetText.set_text(self.major.formatter.get_offset())\n self.offsetText.draw(renderer)\n\n renderer.close_group(__name__)\n self.stale = False\n\n def get_gridlines(self):\n r\"\"\"Return this Axis' grid lines as a list of `.Line2D`\\s.\"\"\"\n ticks = self.get_major_ticks()\n return cbook.silent_list('Line2D gridline',\n [tick.gridline for tick in ticks])\n\n def get_label(self):\n \"\"\"Return the axis label as a Text instance.\"\"\"\n return self.label\n\n def get_offset_text(self):\n \"\"\"Return the axis offsetText as a Text instance.\"\"\"\n return self.offsetText\n\n def get_pickradius(self):\n \"\"\"Return the depth of the axis used by the picker.\"\"\"\n return self._pickradius\n\n def get_majorticklabels(self):\n \"\"\"Return this Axis' major tick labels, as a list of `~.text.Text`.\"\"\"\n self._update_ticks()\n ticks = self.get_major_ticks()\n labels1 = [tick.label1 for tick in ticks if tick.label1.get_visible()]\n labels2 = [tick.label2 for tick in ticks if tick.label2.get_visible()]\n return labels1 + labels2\n\n def get_minorticklabels(self):\n \"\"\"Return this Axis' minor tick labels, as a list of `~.text.Text`.\"\"\"\n self._update_ticks()\n ticks = self.get_minor_ticks()\n labels1 = [tick.label1 for tick in ticks if tick.label1.get_visible()]\n labels2 = [tick.label2 for tick in ticks if tick.label2.get_visible()]\n return labels1 + labels2\n\n def get_ticklabels(self, minor=False, which=None):\n \"\"\"\n Get this Axis' tick labels.\n\n Parameters\n ----------\n minor : bool\n Whether to return the minor or the major ticklabels.\n\n which : None, ('minor', 'major', 'both')\n Overrides *minor*.\n\n Selects which ticklabels to return\n\n Returns\n -------\n list of `~matplotlib.text.Text`\n \"\"\"\n if which is not None:\n if which == 'minor':\n return self.get_minorticklabels()\n elif which == 'major':\n return self.get_majorticklabels()\n elif which == 'both':\n return self.get_majorticklabels() + self.get_minorticklabels()\n else:\n _api.check_in_list(['major', 'minor', 'both'], which=which)\n if minor:\n return self.get_minorticklabels()\n return self.get_majorticklabels()\n\n def get_majorticklines(self):\n r\"\"\"Return this Axis' major tick lines as a list of `.Line2D`\\s.\"\"\"\n lines = []\n ticks = self.get_major_ticks()\n for tick in ticks:\n lines.append(tick.tick1line)\n lines.append(tick.tick2line)\n return cbook.silent_list('Line2D ticklines', lines)\n\n def get_minorticklines(self):\n r\"\"\"Return this Axis' minor tick lines as a list of `.Line2D`\\s.\"\"\"\n lines = []\n ticks = self.get_minor_ticks()\n for tick in ticks:\n lines.append(tick.tick1line)\n lines.append(tick.tick2line)\n return cbook.silent_list('Line2D ticklines', lines)\n\n def get_ticklines(self, minor=False):\n r\"\"\"Return this Axis' tick lines as a list of `.Line2D`\\s.\"\"\"\n if minor:\n return self.get_minorticklines()\n return self.get_majorticklines()\n\n def get_majorticklocs(self):\n \"\"\"Return this Axis' major tick locations in data coordinates.\"\"\"\n return self.major.locator()\n\n def get_minorticklocs(self):\n \"\"\"Return this Axis' minor tick locations in data coordinates.\"\"\"\n # Remove minor ticks duplicating major ticks.\n minor_locs = np.asarray(self.minor.locator())\n if self.remove_overlapping_locs:\n major_locs = self.major.locator()\n 
transform = self._scale.get_transform()\n tr_minor_locs = transform.transform(minor_locs)\n tr_major_locs = transform.transform(major_locs)\n lo, hi = sorted(transform.transform(self.get_view_interval()))\n # Use the transformed view limits as scale. 1e-5 is the default\n # rtol for np.isclose.\n tol = (hi - lo) * 1e-5\n mask = np.isclose(tr_minor_locs[:, None], tr_major_locs[None, :],\n atol=tol, rtol=0).any(axis=1)\n minor_locs = minor_locs[~mask]\n return minor_locs\n\n def get_ticklocs(self, *, minor=False):\n \"\"\"\n Return this Axis' tick locations in data coordinates.\n\n The locations are not clipped to the current axis limits and hence\n may contain locations that are not visible in the output.\n\n Parameters\n ----------\n minor : bool, default: False\n True to return the minor tick directions,\n False to return the major tick directions.\n\n Returns\n -------\n numpy array of tick locations\n \"\"\"\n return self.get_minorticklocs() if minor else self.get_majorticklocs()\n\n def get_ticks_direction(self, minor=False):\n \"\"\"\n Get the tick directions as a numpy array\n\n Parameters\n ----------\n minor : bool, default: False\n True to return the minor tick directions,\n False to return the major tick directions.\n\n Returns\n -------\n numpy array of tick directions\n \"\"\"\n if minor:\n return np.array(\n [tick._tickdir for tick in self.get_minor_ticks()])\n else:\n return np.array(\n [tick._tickdir for tick in self.get_major_ticks()])\n\n def _get_tick(self, major):\n \"\"\"Return the default tick instance.\"\"\"\n if self._tick_class is None:\n raise NotImplementedError(\n f\"The Axis subclass {self.__class__.__name__} must define \"\n \"_tick_class or reimplement _get_tick()\")\n tick_kw = self._major_tick_kw if major else self._minor_tick_kw\n return self._tick_class(self.axes, 0, major=major, **tick_kw)\n\n def _get_tick_label_size(self, axis_name):\n \"\"\"\n Return the text size of tick labels for this Axis.\n\n This is a convenience function to avoid having to create a `Tick` in\n `.get_tick_space`, since it is expensive.\n \"\"\"\n tick_kw = self._major_tick_kw\n size = tick_kw.get('labelsize',\n mpl.rcParams[f'{axis_name}tick.labelsize'])\n return mtext.FontProperties(size=size).get_size_in_points()\n\n def _copy_tick_props(self, src, dest):\n \"\"\"Copy the properties from *src* tick to *dest* tick.\"\"\"\n if src is None or dest is None:\n return\n dest.label1.update_from(src.label1)\n dest.label2.update_from(src.label2)\n dest.tick1line.update_from(src.tick1line)\n dest.tick2line.update_from(src.tick2line)\n dest.gridline.update_from(src.gridline)\n\n def get_label_text(self):\n \"\"\"Get the text of the label.\"\"\"\n return self.label.get_text()\n\n def get_major_locator(self):\n \"\"\"Get the locator of the major ticker.\"\"\"\n return self.major.locator\n\n def get_minor_locator(self):\n \"\"\"Get the locator of the minor ticker.\"\"\"\n return self.minor.locator\n\n def get_major_formatter(self):\n \"\"\"Get the formatter of the major ticker.\"\"\"\n return self.major.formatter\n\n def get_minor_formatter(self):\n \"\"\"Get the formatter of the minor ticker.\"\"\"\n return self.minor.formatter\n\n def get_major_ticks(self, numticks=None):\n r\"\"\"Return the list of major `.Tick`\\s.\"\"\"\n if numticks is None:\n numticks = len(self.get_majorticklocs())\n\n while len(self.majorTicks) < numticks:\n # Update the new tick label properties from the old.\n tick = self._get_tick(major=True)\n self.majorTicks.append(tick)\n 
self._copy_tick_props(self.majorTicks[0], tick)\n\n return self.majorTicks[:numticks]\n\n def get_minor_ticks(self, numticks=None):\n r\"\"\"Return the list of minor `.Tick`\\s.\"\"\"\n if numticks is None:\n numticks = len(self.get_minorticklocs())\n\n while len(self.minorTicks) < numticks:\n # Update the new tick label properties from the old.\n tick = self._get_tick(major=False)\n self.minorTicks.append(tick)\n self._copy_tick_props(self.minorTicks[0], tick)\n\n return self.minorTicks[:numticks]\n\n def grid(self, visible=None, which='major', **kwargs):\n \"\"\"\n Configure the grid lines.\n\n Parameters\n ----------\n visible : bool or None\n Whether to show the grid lines. If any *kwargs* are supplied, it\n is assumed you want the grid on and *visible* will be set to True.\n\n If *visible* is *None* and there are no *kwargs*, this toggles the\n visibility of the lines.\n\n which : {'major', 'minor', 'both'}\n The grid lines to apply the changes on.\n\n **kwargs : `.Line2D` properties\n Define the line properties of the grid, e.g.::\n\n grid(color='r', linestyle='-', linewidth=2)\n \"\"\"\n if kwargs:\n if visible is None:\n visible = True\n elif not visible: # something false-like but not None\n _api.warn_external('First parameter to grid() is false, '\n 'but line properties are supplied. The '\n 'grid will be enabled.')\n visible = True\n which = which.lower()\n _api.check_in_list(['major', 'minor', 'both'], which=which)\n gridkw = {f'grid_{name}': value for name, value in kwargs.items()}\n if which in ['minor', 'both']:\n gridkw['gridOn'] = (not self._minor_tick_kw['gridOn']\n if visible is None else visible)\n self.set_tick_params(which='minor', **gridkw)\n if which in ['major', 'both']:\n gridkw['gridOn'] = (not self._major_tick_kw['gridOn']\n if visible is None else visible)\n self.set_tick_params(which='major', **gridkw)\n self.stale = True\n\n def update_units(self, data):\n \"\"\"\n Introspect *data* for units converter and update the\n axis.converter instance if necessary. 
Return *True*\n if *data* is registered for unit conversion.\n \"\"\"\n converter = munits.registry.get_converter(data)\n if converter is None:\n return False\n\n neednew = self.converter != converter\n self.converter = converter\n default = self.converter.default_units(data, self)\n if default is not None and self.units is None:\n self.set_units(default)\n\n elif neednew:\n self._update_axisinfo()\n self.stale = True\n return True\n\n def _update_axisinfo(self):\n \"\"\"\n Check the axis converter for the stored units to see if the\n axis info needs to be updated.\n \"\"\"\n if self.converter is None:\n return\n\n info = self.converter.axisinfo(self.units, self)\n\n if info is None:\n return\n if info.majloc is not None and \\\n self.major.locator != info.majloc and self.isDefault_majloc:\n self.set_major_locator(info.majloc)\n self.isDefault_majloc = True\n if info.minloc is not None and \\\n self.minor.locator != info.minloc and self.isDefault_minloc:\n self.set_minor_locator(info.minloc)\n self.isDefault_minloc = True\n if info.majfmt is not None and \\\n self.major.formatter != info.majfmt and self.isDefault_majfmt:\n self.set_major_formatter(info.majfmt)\n self.isDefault_majfmt = True\n if info.minfmt is not None and \\\n self.minor.formatter != info.minfmt and self.isDefault_minfmt:\n self.set_minor_formatter(info.minfmt)\n self.isDefault_minfmt = True\n if info.label is not None and self.isDefault_label:\n self.set_label_text(info.label)\n self.isDefault_label = True\n\n self.set_default_intervals()\n\n def have_units(self):\n return self.converter is not None or self.units is not None\n\n def convert_units(self, x):\n # If x is natively supported by Matplotlib, doesn't need converting\n if munits._is_natively_supported(x):\n return x\n\n if self.converter is None:\n self.converter = munits.registry.get_converter(x)\n\n if self.converter is None:\n return x\n try:\n ret = self.converter.convert(x, self.units, self)\n except Exception as e:\n raise munits.ConversionError('Failed to convert value(s) to axis '\n f'units: {x!r}') from e\n return ret\n\n def set_units(self, u):\n \"\"\"\n Set the units for axis.\n\n Parameters\n ----------\n u : units tag\n\n Notes\n -----\n The units of any shared axis will also be updated.\n \"\"\"\n if u == self.units:\n return\n for name, axis in self.axes._axis_map.items():\n if self is axis:\n shared = [\n getattr(ax, f\"{name}axis\")\n for ax\n in self.axes._shared_axes[name].get_siblings(self.axes)]\n break\n else:\n shared = [self]\n for axis in shared:\n axis.units = u\n axis._update_axisinfo()\n axis.callbacks.process('units')\n axis.stale = True\n\n def get_units(self):\n \"\"\"Return the units for axis.\"\"\"\n return self.units\n\n def set_label_text(self, label, fontdict=None, **kwargs):\n \"\"\"\n Set the text value of the axis label.\n\n Parameters\n ----------\n label : str\n Text string.\n fontdict : dict\n Text properties.\n **kwargs\n Merged into fontdict.\n \"\"\"\n self.isDefault_label = False\n self.label.set_text(label)\n if fontdict is not None:\n self.label.update(fontdict)\n self.label.update(kwargs)\n self.stale = True\n return self.label\n\n def set_major_formatter(self, formatter):\n \"\"\"\n Set the formatter of the major ticker.\n\n In addition to a `~matplotlib.ticker.Formatter` instance,\n this also accepts a ``str`` or function.\n\n For a ``str`` a `~matplotlib.ticker.StrMethodFormatter` is used.\n The field used for the value must be labeled ``'x'`` and the field used\n for the position must be labeled ``'pos'``.\n 
See the `~matplotlib.ticker.StrMethodFormatter` documentation for\n more information.\n\n For a function, a `~matplotlib.ticker.FuncFormatter` is used.\n The function must take two inputs (a tick value ``x`` and a\n position ``pos``), and return a string containing the corresponding\n tick label.\n See the `~matplotlib.ticker.FuncFormatter` documentation for\n more information.\n\n Parameters\n ----------\n formatter : `~matplotlib.ticker.Formatter`, ``str``, or function\n \"\"\"\n self._set_formatter(formatter, self.major)\n\n def set_minor_formatter(self, formatter):\n \"\"\"\n Set the formatter of the minor ticker.\n\n In addition to a `~matplotlib.ticker.Formatter` instance,\n this also accepts a ``str`` or function.\n See `.Axis.set_major_formatter` for more information.\n\n Parameters\n ----------\n formatter : `~matplotlib.ticker.Formatter`, ``str``, or function\n \"\"\"\n self._set_formatter(formatter, self.minor)\n\n def _set_formatter(self, formatter, level):\n if isinstance(formatter, str):\n formatter = mticker.StrMethodFormatter(formatter)\n # Don't allow any other TickHelper to avoid easy-to-make errors,\n # like using a Locator instead of a Formatter.\n elif (callable(formatter) and\n not isinstance(formatter, mticker.TickHelper)):\n formatter = mticker.FuncFormatter(formatter)\n else:\n _api.check_isinstance(mticker.Formatter, formatter=formatter)\n\n if (isinstance(formatter, mticker.FixedFormatter)\n and len(formatter.seq) > 0\n and not isinstance(level.locator, mticker.FixedLocator)):\n _api.warn_external('FixedFormatter should only be used together '\n 'with FixedLocator')\n\n if level == self.major:\n self.isDefault_majfmt = False\n else:\n self.isDefault_minfmt = False\n\n level.formatter = formatter\n formatter.set_axis(self)\n self.stale = True\n\n def set_major_locator(self, locator):\n \"\"\"\n Set the locator of the major ticker.\n\n Parameters\n ----------\n locator : `~matplotlib.ticker.Locator`\n \"\"\"\n _api.check_isinstance(mticker.Locator, locator=locator)\n self.isDefault_majloc = False\n self.major.locator = locator\n if self.major.formatter:\n self.major.formatter._set_locator(locator)\n locator.set_axis(self)\n self.stale = True\n\n def set_minor_locator(self, locator):\n \"\"\"\n Set the locator of the minor ticker.\n\n Parameters\n ----------\n locator : `~matplotlib.ticker.Locator`\n \"\"\"\n _api.check_isinstance(mticker.Locator, locator=locator)\n self.isDefault_minloc = False\n self.minor.locator = locator\n if self.minor.formatter:\n self.minor.formatter._set_locator(locator)\n locator.set_axis(self)\n self.stale = True\n\n def set_pickradius(self, pickradius):\n \"\"\"\n Set the depth of the axis used by the picker.\n\n Parameters\n ----------\n pickradius : float\n The acceptance radius for containment tests.\n See also `.Axis.contains`.\n \"\"\"\n if not isinstance(pickradius, Number) or pickradius < 0:\n raise ValueError(\"pick radius should be a distance\")\n self._pickradius = pickradius\n\n pickradius = property(\n get_pickradius, set_pickradius, doc=\"The acceptance radius for \"\n \"containment tests. See also `.Axis.contains`.\")\n\n # Helper for set_ticklabels. Defining it here makes it picklable.\n @staticmethod\n def _format_with_dict(tickd, x, pos):\n return tickd.get(x, \"\")\n\n def set_ticklabels(self, ticklabels, *, minor=False, **kwargs):\n r\"\"\"\n [*Discouraged*] Set the text values of the tick labels.\n\n .. admonition:: Discouraged\n\n The use of this method is discouraged, because of the dependency\n on tick positions. 
In most cases, you'll want to use\n ``set_[x/y]ticks(positions, labels)`` instead.\n\n If you are using this method, you should always fix the tick\n positions before, e.g. by using `.Axis.set_ticks` or by explicitly\n setting a `~.ticker.FixedLocator`. Otherwise, ticks are free to\n move and the labels may end up in unexpected positions.\n\n Parameters\n ----------\n ticklabels : sequence of str or of `.Text`\\s\n Texts for labeling each tick location in the sequence set by\n `.Axis.set_ticks`; the number of labels must match the number of\n locations.\n minor : bool\n If True, set minor ticks instead of major ticks.\n **kwargs\n Text properties.\n\n Returns\n -------\n list of `.Text`\\s\n For each tick, includes ``tick.label1`` if it is visible, then\n ``tick.label2`` if it is visible, in that order.\n \"\"\"\n try:\n ticklabels = [t.get_text() if hasattr(t, 'get_text') else t\n for t in ticklabels]\n except TypeError:\n raise TypeError(f\"{ticklabels:=} must be a sequence\") from None\n locator = (self.get_minor_locator() if minor\n else self.get_major_locator())\n if isinstance(locator, mticker.FixedLocator):\n # Passing [] as a list of ticklabels is often used as a way to\n # remove all tick labels, so only error for > 0 ticklabels\n if len(locator.locs) != len(ticklabels) and len(ticklabels) != 0:\n raise ValueError(\n \"The number of FixedLocator locations\"\n f\" ({len(locator.locs)}), usually from a call to\"\n \" set_ticks, does not match\"\n f\" the number of ticklabels ({len(ticklabels)}).\")\n tickd = {loc: lab for loc, lab in zip(locator.locs, ticklabels)}\n func = functools.partial(self._format_with_dict, tickd)\n formatter = mticker.FuncFormatter(func)\n else:\n formatter = mticker.FixedFormatter(ticklabels)\n\n if minor:\n self.set_minor_formatter(formatter)\n locs = self.get_minorticklocs()\n ticks = self.get_minor_ticks(len(locs))\n else:\n self.set_major_formatter(formatter)\n locs = self.get_majorticklocs()\n ticks = self.get_major_ticks(len(locs))\n\n ret = []\n for pos, (loc, tick) in enumerate(zip(locs, ticks)):\n tick.update_position(loc)\n tick_label = formatter(loc, pos)\n # deal with label1\n tick.label1.set_text(tick_label)\n tick.label1._internal_update(kwargs)\n # deal with label2\n tick.label2.set_text(tick_label)\n tick.label2._internal_update(kwargs)\n # only return visible tick labels\n if tick.label1.get_visible():\n ret.append(tick.label1)\n if tick.label2.get_visible():\n ret.append(tick.label2)\n\n self.stale = True\n return ret\n\n # Wrapper around set_ticklabels used to generate Axes.set_x/ytickabels; can\n # go away once the API of Axes.set_x/yticklabels becomes consistent.\n def _set_ticklabels(self, labels, *, fontdict=None, minor=False, **kwargs):\n \"\"\"\n Set this Axis' labels with list of string labels.\n\n .. warning::\n This method should only be used after fixing the tick positions\n using `.Axis.set_ticks`. 
Otherwise, the labels may end up in\n unexpected positions.\n\n Parameters\n ----------\n labels : list of str\n The label texts.\n\n fontdict : dict, optional\n A dictionary controlling the appearance of the ticklabels.\n The default *fontdict* is::\n\n {'fontsize': rcParams['axes.titlesize'],\n 'fontweight': rcParams['axes.titleweight'],\n 'verticalalignment': 'baseline',\n 'horizontalalignment': loc}\n\n minor : bool, default: False\n Whether to set the minor ticklabels rather than the major ones.\n\n Returns\n -------\n list of `.Text`\n The labels.\n\n Other Parameters\n ----------------\n **kwargs : `~.text.Text` properties.\n \"\"\"\n if fontdict is not None:\n kwargs.update(fontdict)\n return self.set_ticklabels(labels, minor=minor, **kwargs)\n\n def _set_tick_locations(self, ticks, *, minor=False):\n # see docstring of set_ticks\n\n # XXX if the user changes units, the information will be lost here\n ticks = self.convert_units(ticks)\n for name, axis in self.axes._axis_map.items():\n if self is axis:\n shared = [\n getattr(ax, f\"{name}axis\")\n for ax\n in self.axes._shared_axes[name].get_siblings(self.axes)]\n break\n else:\n shared = [self]\n if len(ticks):\n for axis in shared:\n # set_view_interval maintains any preexisting inversion.\n axis.set_view_interval(min(ticks), max(ticks))\n self.axes.stale = True\n if minor:\n self.set_minor_locator(mticker.FixedLocator(ticks))\n return self.get_minor_ticks(len(ticks))\n else:\n self.set_major_locator(mticker.FixedLocator(ticks))\n return self.get_major_ticks(len(ticks))\n\n def set_ticks(self, ticks, labels=None, *, minor=False, **kwargs):\n \"\"\"\n Set this Axis' tick locations and optionally labels.\n\n If necessary, the view limits of the Axis are expanded so that all\n given ticks are visible.\n\n Parameters\n ----------\n ticks : list of floats\n List of tick locations. The axis `.Locator` is replaced by a\n `~.ticker.FixedLocator`.\n\n Some tick formatters will not label arbitrary tick positions;\n e.g. log formatters only label decade ticks by default. In\n such a case you can set a formatter explicitly on the axis\n using `.Axis.set_major_formatter` or provide formatted\n *labels* yourself.\n labels : list of str, optional\n List of tick labels. If not set, the labels are generated with\n the axis tick `.Formatter`.\n minor : bool, default: False\n If ``False``, set the major ticks; if ``True``, the minor ticks.\n **kwargs\n `.Text` properties for the labels. These take effect only if you\n pass *labels*. In other cases, please use `~.Axes.tick_params`.\n\n Notes\n -----\n The mandatory expansion of the view limits is an intentional design\n choice to prevent the surprise of a non-visible tick. 
If you need\n other limits, you should set the limits explicitly after setting the\n ticks.\n \"\"\"\n result = self._set_tick_locations(ticks, minor=minor)\n if labels is not None:\n self.set_ticklabels(labels, minor=minor, **kwargs)\n return result\n\n def _get_tick_boxes_siblings(self, renderer):\n \"\"\"\n Get the bounding boxes for this `.axis` and its siblings\n as set by `.Figure.align_xlabels` or `.Figure.align_ylabels`.\n\n By default it just gets bboxes for self.\n \"\"\"\n # Get the Grouper keeping track of x or y label groups for this figure.\n axis_names = [\n name for name, axis in self.axes._axis_map.items()\n if name in self.figure._align_label_groups and axis is self]\n if len(axis_names) != 1:\n return [], []\n axis_name, = axis_names\n grouper = self.figure._align_label_groups[axis_name]\n bboxes = []\n bboxes2 = []\n # If we want to align labels from other Axes:\n for ax in grouper.get_siblings(self.axes):\n axis = getattr(ax, f\"{axis_name}axis\")\n ticks_to_draw = axis._update_ticks()\n tlb, tlb2 = axis._get_ticklabel_bboxes(ticks_to_draw, renderer)\n bboxes.extend(tlb)\n bboxes2.extend(tlb2)\n return bboxes, bboxes2\n\n def _update_label_position(self, renderer):\n \"\"\"\n Update the label position based on the bounding box enclosing\n all the ticklabels and axis spine.\n \"\"\"\n raise NotImplementedError('Derived must override')\n\n def _update_offset_text_position(self, bboxes, bboxes2):\n \"\"\"\n Update the offset text position based on the sequence of bounding\n boxes of all the ticklabels.\n \"\"\"\n raise NotImplementedError('Derived must override')\n\n def axis_date(self, tz=None):\n \"\"\"\n Set up axis ticks and labels to treat data along this Axis as dates.\n\n Parameters\n ----------\n tz : str or `datetime.tzinfo`, default: :rc:`timezone`\n The timezone used to create date labels.\n \"\"\"\n # By providing a sample datetime instance with the desired timezone,\n # the registered converter can be selected, and the \"units\" attribute,\n # which is the timezone, can be set.\n if isinstance(tz, str):\n import dateutil.tz\n tz = dateutil.tz.gettz(tz)\n self.update_units(datetime.datetime(2009, 1, 1, 0, 0, 0, 0, tz))\n\n def get_tick_space(self):\n \"\"\"Return the estimated number of ticks that can fit on the axis.\"\"\"\n # Must be overridden in the subclass\n raise NotImplementedError()\n\n def _get_ticks_position(self):\n \"\"\"\n Helper for `XAxis.get_ticks_position` and `YAxis.get_ticks_position`.\n\n Check the visibility of tick1line, label1, tick2line, and label2 on\n the first major and the first minor ticks, and return\n\n - 1 if only tick1line and label1 are visible (which corresponds to\n \"bottom\" for the x-axis and \"left\" for the y-axis);\n - 2 if only tick2line and label2 are visible (which corresponds to\n \"top\" for the x-axis and \"right\" for the y-axis);\n - \"default\" if only tick1line, tick2line and label1 are visible;\n - \"unknown\" otherwise.\n \"\"\"\n major = self.majorTicks[0]\n minor = self.minorTicks[0]\n if all(tick.tick1line.get_visible()\n and not tick.tick2line.get_visible()\n and tick.label1.get_visible()\n and not tick.label2.get_visible()\n for tick in [major, minor]):\n return 1\n elif all(tick.tick2line.get_visible()\n and not tick.tick1line.get_visible()\n and tick.label2.get_visible()\n and not tick.label1.get_visible()\n for tick in [major, minor]):\n return 2\n elif all(tick.tick1line.get_visible()\n and tick.tick2line.get_visible()\n and tick.label1.get_visible()\n and not tick.label2.get_visible()\n for tick 
in [major, minor]):\n return \"default\"\n else:\n return \"unknown\"\n\n def get_label_position(self):\n \"\"\"\n Return the label position (top or bottom)\n \"\"\"\n return self.label_position\n\n def set_label_position(self, position):\n \"\"\"\n Set the label position (top or bottom)\n\n Parameters\n ----------\n position : {'top', 'bottom'}\n \"\"\"\n raise NotImplementedError()\n\n def get_minpos(self):\n raise NotImplementedError()\n\n\ndef _make_getset_interval(method_name, lim_name, attr_name):\n \"\"\"\n Helper to generate ``get_{data,view}_interval`` and\n ``set_{data,view}_interval`` implementations.\n \"\"\"\n\n def getter(self):\n # docstring inherited.\n return getattr(getattr(self.axes, lim_name), attr_name)\n\n def setter(self, vmin, vmax, ignore=False):\n # docstring inherited.\n if ignore:\n setattr(getattr(self.axes, lim_name), attr_name, (vmin, vmax))\n else:\n oldmin, oldmax = getter(self)\n if oldmin < oldmax:\n setter(self, min(vmin, vmax, oldmin), max(vmin, vmax, oldmax),\n ignore=True)\n else:\n setter(self, max(vmin, vmax, oldmin), min(vmin, vmax, oldmax),\n ignore=True)\n self.stale = True\n\n getter.__name__ = f\"get_{method_name}_interval\"\n setter.__name__ = f\"set_{method_name}_interval\"\n\n return getter, setter\n\n\nclass XAxis(Axis):\n __name__ = 'xaxis'\n axis_name = 'x' #: Read-only name identifying the axis.\n _tick_class = XTick\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._init()\n\n def _init(self):\n \"\"\"\n Initialize the label and offsetText instance values and\n `label_position` / `offset_text_position`.\n \"\"\"\n # x in axes coords, y in display coords (to be updated at draw time by\n # _update_label_positions and _update_offset_text_position).\n self.label.set(\n x=0.5, y=0,\n verticalalignment='top', horizontalalignment='center',\n transform=mtransforms.blended_transform_factory(\n self.axes.transAxes, mtransforms.IdentityTransform()),\n )\n self.label_position = 'bottom'\n\n self.offsetText.set(\n x=1, y=0,\n verticalalignment='top', horizontalalignment='right',\n transform=mtransforms.blended_transform_factory(\n self.axes.transAxes, mtransforms.IdentityTransform()),\n fontsize=mpl.rcParams['xtick.labelsize'],\n color=mpl.rcParams['xtick.color'],\n )\n self.offset_text_position = 'bottom'\n\n def contains(self, mouseevent):\n \"\"\"Test whether the mouse event occurred in the x axis.\"\"\"\n inside, info = self._default_contains(mouseevent)\n if inside is not None:\n return inside, info\n\n x, y = mouseevent.x, mouseevent.y\n try:\n trans = self.axes.transAxes.inverted()\n xaxes, yaxes = trans.transform((x, y))\n except ValueError:\n return False, {}\n (l, b), (r, t) = self.axes.transAxes.transform([(0, 0), (1, 1)])\n inaxis = 0 <= xaxes <= 1 and (\n b - self._pickradius < y < b or\n t < y < t + self._pickradius)\n return inaxis, {}\n\n def set_label_position(self, position):\n \"\"\"\n Set the label position (top or bottom)\n\n Parameters\n ----------\n position : {'top', 'bottom'}\n \"\"\"\n self.label.set_verticalalignment(_api.check_getitem({\n 'top': 'baseline', 'bottom': 'top',\n }, position=position))\n self.label_position = position\n self.stale = True\n\n def _update_label_position(self, renderer):\n \"\"\"\n Update the label position based on the bounding box enclosing\n all the ticklabels and axis spine\n \"\"\"\n if not self._autolabelpos:\n return\n\n # get bounding boxes for this axis and any siblings\n # that have been set by `fig.align_xlabels()`\n bboxes, bboxes2 = 
self._get_tick_boxes_siblings(renderer=renderer)\n\n x, y = self.label.get_position()\n if self.label_position == 'bottom':\n try:\n spine = self.axes.spines['bottom']\n spinebbox = spine.get_window_extent()\n except KeyError:\n # use Axes if spine doesn't exist\n spinebbox = self.axes.bbox\n bbox = mtransforms.Bbox.union(bboxes + [spinebbox])\n bottom = bbox.y0\n\n self.label.set_position(\n (x, bottom - self.labelpad * self.figure.dpi / 72)\n )\n else:\n try:\n spine = self.axes.spines['top']\n spinebbox = spine.get_window_extent()\n except KeyError:\n # use Axes if spine doesn't exist\n spinebbox = self.axes.bbox\n bbox = mtransforms.Bbox.union(bboxes2 + [spinebbox])\n top = bbox.y1\n\n self.label.set_position(\n (x, top + self.labelpad * self.figure.dpi / 72)\n )\n\n def _update_offset_text_position(self, bboxes, bboxes2):\n \"\"\"\n Update the offset_text position based on the sequence of bounding\n boxes of all the ticklabels\n \"\"\"\n x, y = self.offsetText.get_position()\n if not hasattr(self, '_tick_position'):\n self._tick_position = 'bottom'\n if self._tick_position == 'bottom':\n if not len(bboxes):\n bottom = self.axes.bbox.ymin\n else:\n bbox = mtransforms.Bbox.union(bboxes)\n bottom = bbox.y0\n y = bottom - self.OFFSETTEXTPAD * self.figure.dpi / 72\n else:\n if not len(bboxes2):\n top = self.axes.bbox.ymax\n else:\n bbox = mtransforms.Bbox.union(bboxes2)\n top = bbox.y1\n y = top + self.OFFSETTEXTPAD * self.figure.dpi / 72\n self.offsetText.set_position((x, y))\n\n @_api.deprecated(\"3.6\")\n def get_text_heights(self, renderer):\n \"\"\"\n Return how much space should be reserved for text above and below the\n Axes, as a pair of floats.\n \"\"\"\n bbox, bbox2 = self.get_ticklabel_extents(renderer)\n # MGDTODO: Need a better way to get the pad\n pad_pixels = self.majorTicks[0].get_pad_pixels()\n\n above = 0.0\n if bbox2.height:\n above += bbox2.height + pad_pixels\n below = 0.0\n if bbox.height:\n below += bbox.height + pad_pixels\n\n if self.get_label_position() == 'top':\n above += self.label.get_window_extent(renderer).height + pad_pixels\n else:\n below += self.label.get_window_extent(renderer).height + pad_pixels\n return above, below\n\n def set_ticks_position(self, position):\n \"\"\"\n Set the ticks position.\n\n Parameters\n ----------\n position : {'top', 'bottom', 'both', 'default', 'none'}\n 'both' sets the ticks to appear on both positions, but does not\n change the tick labels. 'default' resets the tick positions to\n the default: ticks on both positions, labels at bottom. 'none'\n can be used if you don't want any ticks. 
'none' and 'both'\n affect only the ticks, not the labels.\n \"\"\"\n _api.check_in_list(['top', 'bottom', 'both', 'default', 'none'],\n position=position)\n if position == 'top':\n self.set_tick_params(which='both', top=True, labeltop=True,\n bottom=False, labelbottom=False)\n self._tick_position = 'top'\n self.offsetText.set_verticalalignment('bottom')\n elif position == 'bottom':\n self.set_tick_params(which='both', top=False, labeltop=False,\n bottom=True, labelbottom=True)\n self._tick_position = 'bottom'\n self.offsetText.set_verticalalignment('top')\n elif position == 'both':\n self.set_tick_params(which='both', top=True,\n bottom=True)\n elif position == 'none':\n self.set_tick_params(which='both', top=False,\n bottom=False)\n elif position == 'default':\n self.set_tick_params(which='both', top=True, labeltop=False,\n bottom=True, labelbottom=True)\n self._tick_position = 'bottom'\n self.offsetText.set_verticalalignment('top')\n else:\n assert False, \"unhandled parameter not caught by _check_in_list\"\n self.stale = True\n\n def tick_top(self):\n \"\"\"\n Move ticks and ticklabels (if present) to the top of the Axes.\n \"\"\"\n label = True\n if 'label1On' in self._major_tick_kw:\n label = (self._major_tick_kw['label1On']\n or self._major_tick_kw['label2On'])\n self.set_ticks_position('top')\n # If labels were turned off before this was called, leave them off.\n self.set_tick_params(which='both', labeltop=label)\n\n def tick_bottom(self):\n \"\"\"\n Move ticks and ticklabels (if present) to the bottom of the Axes.\n \"\"\"\n label = True\n if 'label1On' in self._major_tick_kw:\n label = (self._major_tick_kw['label1On']\n or self._major_tick_kw['label2On'])\n self.set_ticks_position('bottom')\n # If labels were turned off before this was called, leave them off.\n self.set_tick_params(which='both', labelbottom=label)\n\n def get_ticks_position(self):\n \"\"\"\n Return the ticks position (\"top\", \"bottom\", \"default\", or \"unknown\").\n \"\"\"\n return {1: \"bottom\", 2: \"top\",\n \"default\": \"default\", \"unknown\": \"unknown\"}[\n self._get_ticks_position()]\n\n get_view_interval, set_view_interval = _make_getset_interval(\n \"view\", \"viewLim\", \"intervalx\")\n get_data_interval, set_data_interval = _make_getset_interval(\n \"data\", \"dataLim\", \"intervalx\")\n\n def get_minpos(self):\n return self.axes.dataLim.minposx\n\n def set_default_intervals(self):\n # docstring inherited\n # only change view if dataLim has not changed and user has\n # not changed the view:\n if (not self.axes.dataLim.mutatedx() and\n not self.axes.viewLim.mutatedx()):\n if self.converter is not None:\n info = self.converter.axisinfo(self.units, self)\n if info.default_limits is not None:\n xmin, xmax = self.convert_units(info.default_limits)\n self.axes.viewLim.intervalx = xmin, xmax\n self.stale = True\n\n def get_tick_space(self):\n ends = mtransforms.Bbox.unit().transformed(\n self.axes.transAxes - self.figure.dpi_scale_trans)\n length = ends.width * 72\n # There is a heuristic here that the aspect ratio of tick text\n # is no more than 3:1\n size = self._get_tick_label_size('x') * 3\n if size > 0:\n return int(np.floor(length / size))\n else:\n return 2**31 - 1\n\n\nclass YAxis(Axis):\n __name__ = 'yaxis'\n axis_name = 'y' #: Read-only name identifying the axis.\n _tick_class = YTick\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._init()\n\n def _init(self):\n \"\"\"\n Initialize the label and offsetText instance values and\n `label_position` / 
`offset_text_position`.\n \"\"\"\n # x in display coords, y in axes coords (to be updated at draw time by\n # _update_label_positions and _update_offset_text_position).\n self.label.set(\n x=0, y=0.5,\n verticalalignment='bottom', horizontalalignment='center',\n rotation='vertical', rotation_mode='anchor',\n transform=mtransforms.blended_transform_factory(\n mtransforms.IdentityTransform(), self.axes.transAxes),\n )\n self.label_position = 'left'\n # x in axes coords, y in display coords(!).\n self.offsetText.set(\n x=0, y=0.5,\n verticalalignment='baseline', horizontalalignment='left',\n transform=mtransforms.blended_transform_factory(\n self.axes.transAxes, mtransforms.IdentityTransform()),\n fontsize=mpl.rcParams['ytick.labelsize'],\n color=mpl.rcParams['ytick.color'],\n )\n self.offset_text_position = 'left'\n\n def contains(self, mouseevent):\n # docstring inherited\n inside, info = self._default_contains(mouseevent)\n if inside is not None:\n return inside, info\n\n x, y = mouseevent.x, mouseevent.y\n try:\n trans = self.axes.transAxes.inverted()\n xaxes, yaxes = trans.transform((x, y))\n except ValueError:\n return False, {}\n (l, b), (r, t) = self.axes.transAxes.transform([(0, 0), (1, 1)])\n inaxis = 0 <= yaxes <= 1 and (\n l - self._pickradius < x < l or\n r < x < r + self._pickradius)\n return inaxis, {}\n\n def set_label_position(self, position):\n \"\"\"\n Set the label position (left or right)\n\n Parameters\n ----------\n position : {'left', 'right'}\n \"\"\"\n self.label.set_rotation_mode('anchor')\n self.label.set_verticalalignment(_api.check_getitem({\n 'left': 'bottom', 'right': 'top',\n }, position=position))\n self.label_position = position\n self.stale = True\n\n def _update_label_position(self, renderer):\n \"\"\"\n Update the label position based on the bounding box enclosing\n all the ticklabels and axis spine\n \"\"\"\n if not self._autolabelpos:\n return\n\n # get bounding boxes for this axis and any siblings\n # that have been set by `fig.align_ylabels()`\n bboxes, bboxes2 = self._get_tick_boxes_siblings(renderer=renderer)\n x, y = self.label.get_position()\n if self.label_position == 'left':\n try:\n spine = self.axes.spines['left']\n spinebbox = spine.get_window_extent()\n except KeyError:\n # use Axes if spine doesn't exist\n spinebbox = self.axes.bbox\n bbox = mtransforms.Bbox.union(bboxes + [spinebbox])\n left = bbox.x0\n self.label.set_position(\n (left - self.labelpad * self.figure.dpi / 72, y)\n )\n\n else:\n try:\n spine = self.axes.spines['right']\n spinebbox = spine.get_window_extent()\n except KeyError:\n # use Axes if spine doesn't exist\n spinebbox = self.axes.bbox\n\n bbox = mtransforms.Bbox.union(bboxes2 + [spinebbox])\n right = bbox.x1\n self.label.set_position(\n (right + self.labelpad * self.figure.dpi / 72, y)\n )\n\n def _update_offset_text_position(self, bboxes, bboxes2):\n \"\"\"\n Update the offset_text position based on the sequence of bounding\n boxes of all the ticklabels\n \"\"\"\n x, _ = self.offsetText.get_position()\n if 'outline' in self.axes.spines:\n # Special case for colorbars:\n bbox = self.axes.spines['outline'].get_window_extent()\n else:\n bbox = self.axes.bbox\n top = bbox.ymax\n self.offsetText.set_position(\n (x, top + self.OFFSETTEXTPAD * self.figure.dpi / 72)\n )\n\n def set_offset_position(self, position):\n \"\"\"\n Parameters\n ----------\n position : {'left', 'right'}\n \"\"\"\n x, y = self.offsetText.get_position()\n x = _api.check_getitem({'left': 0, 'right': 1}, position=position)\n\n 
self.offsetText.set_ha(position)\n self.offsetText.set_position((x, y))\n self.stale = True\n\n @_api.deprecated(\"3.6\")\n def get_text_widths(self, renderer):\n bbox, bbox2 = self.get_ticklabel_extents(renderer)\n # MGDTODO: Need a better way to get the pad\n pad_pixels = self.majorTicks[0].get_pad_pixels()\n\n left = 0.0\n if bbox.width:\n left += bbox.width + pad_pixels\n right = 0.0\n if bbox2.width:\n right += bbox2.width + pad_pixels\n\n if self.get_label_position() == 'left':\n left += self.label.get_window_extent(renderer).width + pad_pixels\n else:\n right += self.label.get_window_extent(renderer).width + pad_pixels\n return left, right\n\n def set_ticks_position(self, position):\n \"\"\"\n Set the ticks position.\n\n Parameters\n ----------\n position : {'left', 'right', 'both', 'default', 'none'}\n 'both' sets the ticks to appear on both positions, but does not\n change the tick labels. 'default' resets the tick positions to\n the default: ticks on both positions, labels at left. 'none'\n can be used if you don't want any ticks. 'none' and 'both'\n affect only the ticks, not the labels.\n \"\"\"\n _api.check_in_list(['left', 'right', 'both', 'default', 'none'],\n position=position)\n if position == 'right':\n self.set_tick_params(which='both', right=True, labelright=True,\n left=False, labelleft=False)\n self.set_offset_position(position)\n elif position == 'left':\n self.set_tick_params(which='both', right=False, labelright=False,\n left=True, labelleft=True)\n self.set_offset_position(position)\n elif position == 'both':\n self.set_tick_params(which='both', right=True,\n left=True)\n elif position == 'none':\n self.set_tick_params(which='both', right=False,\n left=False)\n elif position == 'default':\n self.set_tick_params(which='both', right=True, labelright=False,\n left=True, labelleft=True)\n else:\n assert False, \"unhandled parameter not caught by _check_in_list\"\n self.stale = True\n\n def tick_right(self):\n \"\"\"\n Move ticks and ticklabels (if present) to the right of the Axes.\n \"\"\"\n label = True\n if 'label1On' in self._major_tick_kw:\n label = (self._major_tick_kw['label1On']\n or self._major_tick_kw['label2On'])\n self.set_ticks_position('right')\n # if labels were turned off before this was called\n # leave them off\n self.set_tick_params(which='both', labelright=label)\n\n def tick_left(self):\n \"\"\"\n Move ticks and ticklabels (if present) to the left of the Axes.\n \"\"\"\n label = True\n if 'label1On' in self._major_tick_kw:\n label = (self._major_tick_kw['label1On']\n or self._major_tick_kw['label2On'])\n self.set_ticks_position('left')\n # if labels were turned off before this was called\n # leave them off\n self.set_tick_params(which='both', labelleft=label)\n\n def get_ticks_position(self):\n \"\"\"\n Return the ticks position (\"left\", \"right\", \"default\", or \"unknown\").\n \"\"\"\n return {1: \"left\", 2: \"right\",\n \"default\": \"default\", \"unknown\": \"unknown\"}[\n self._get_ticks_position()]\n\n get_view_interval, set_view_interval = _make_getset_interval(\n \"view\", \"viewLim\", \"intervaly\")\n get_data_interval, set_data_interval = _make_getset_interval(\n \"data\", \"dataLim\", \"intervaly\")\n\n def get_minpos(self):\n return self.axes.dataLim.minposy\n\n def set_default_intervals(self):\n # docstring inherited\n # only change view if dataLim has not changed and user has\n # not changed the view:\n if (not self.axes.dataLim.mutatedy() and\n not self.axes.viewLim.mutatedy()):\n if self.converter is not None:\n info = 
self.converter.axisinfo(self.units, self)\n if info.default_limits is not None:\n ymin, ymax = self.convert_units(info.default_limits)\n self.axes.viewLim.intervaly = ymin, ymax\n self.stale = True\n\n def get_tick_space(self):\n ends = mtransforms.Bbox.unit().transformed(\n self.axes.transAxes - self.figure.dpi_scale_trans)\n length = ends.height * 72\n # Having a spacing of at least 2 just looks good.\n size = self._get_tick_label_size('y') * 2\n if size > 0:\n return int(np.floor(length / size))\n else:\n return 2**31 - 1\n"}
|
diff --git a/doc/api/axis_api.rst b/doc/api/axis_api.rst
index d950d1e253a4..e7da26a11706 100644
--- a/doc/api/axis_api.rst
+++ b/doc/api/axis_api.rst
@@ -100,6 +100,7 @@ Ticks, tick labels and Offset text
Axis.get_offset_text
Axis.get_tick_padding
+ Axis.get_tick_params
Axis.get_ticklabels
Axis.get_ticklines
Axis.get_ticklocs
diff --git a/doc/users/next_whats_new/view_current_axis_format.rst b/doc/users/next_whats_new/view_current_axis_format.rst
new file mode 100644
index 000000000000..eb7e03600f0e
--- /dev/null
+++ b/doc/users/next_whats_new/view_current_axis_format.rst
@@ -0,0 +1,29 @@
+View current appearance settings for ticks, tick labels, and gridlines
+----------------------------------------------------------------------
+
+The new `~matplotlib.axis.Axis.get_tick_params` method can be used to
+retrieve the appearance settings that will be applied to any
+additional ticks, tick labels, and gridlines added to the plot:
+
+.. code-block:: pycon
+
+ >>> import matplotlib.pyplot as plt
+
+ >>> fig, ax = plt.subplots()
+ >>> ax.yaxis.set_tick_params(labelsize=30, labelcolor='red',
+ ... direction='out', which='major')
+ >>> ax.yaxis.get_tick_params(which='major')
+ {'direction': 'out',
+ 'left': True,
+ 'right': False,
+ 'labelleft': True,
+ 'labelright': False,
+ 'gridOn': False,
+ 'labelsize': 30,
+ 'labelcolor': 'red'}
+ >>> ax.yaxis.get_tick_params(which='minor')
+ {'left': True,
+ 'right': False,
+ 'labelleft': True,
+ 'labelright': False,
+ 'gridOn': False}
|
{"lib/matplotlib/axis.py": [{"type": "function", "name": "Axis.get_tick_params", "lines": [958, 1010], "signature": "def get_tick_params(self, which='major'):", "doc": "Get appearance parameters for ticks, ticklabels, and gridlines.\n\n.. versionadded:: 3.7\n\nParameters\n----------\nwhich : {'major', 'minor'}, default: 'major'\n The group of ticks for which the parameters are retrieved.\n\nReturns\n-------\ndict\n Properties for styling tick elements added to the axis.\n\nNotes\n-----\nThis method returns the appearance parameters for styling *new*\nelements added to this axis and may be different from the values\non current elements if they were modified directly by the user\n(e.g., via ``set_*`` methods on individual tick objects).\n\nExamples\n--------\n::\n\n >>> ax.yaxis.set_tick_params(labelsize=30, labelcolor='red',\n direction='out', which='major')\n >>> ax.yaxis.get_tick_params(which='major')\n {'direction': 'out',\n 'left': True,\n 'right': False,\n 'labelleft': True,\n 'labelright': False,\n 'gridOn': False,\n 'labelsize': 30,\n 'labelcolor': 'red'}\n >>> ax.yaxis.get_tick_params(which='minor')\n {'left': True,\n 'right': False,\n 'labelleft': True,\n 'labelright': False,\n 'gridOn': False}"}]}
|
3.5
|
["lib/matplotlib/tests/test_axes.py::test_axis_get_tick_params"]
|
["lib/matplotlib/tests/test_axes.py::test_invisible_axes[png]", "lib/matplotlib/tests/test_axes.py::test_get_labels", "lib/matplotlib/tests/test_axes.py::test_repr", "lib/matplotlib/tests/test_axes.py::test_label_loc_vertical[png]", "lib/matplotlib/tests/test_axes.py::test_label_loc_horizontal[png]", "lib/matplotlib/tests/test_axes.py::test_label_loc_rc[png]", "lib/matplotlib/tests/test_axes.py::test_label_shift", "lib/matplotlib/tests/test_axes.py::test_acorr[png]", "lib/matplotlib/tests/test_axes.py::test_spy[png]", "lib/matplotlib/tests/test_axes.py::test_spy_invalid_kwargs", "lib/matplotlib/tests/test_axes.py::test_matshow[png]", "lib/matplotlib/tests/test_axes.py::test_formatter_ticker[png]", "lib/matplotlib/tests/test_axes.py::test_funcformatter_auto_formatter", "lib/matplotlib/tests/test_axes.py::test_strmethodformatter_auto_formatter", "lib/matplotlib/tests/test_axes.py::test_twin_axis_locators_formatters[png]", "lib/matplotlib/tests/test_axes.py::test_twinx_cla", "lib/matplotlib/tests/test_axes.py::test_twin_logscale[png-x]", "lib/matplotlib/tests/test_axes.py::test_twin_logscale[png-y]", "lib/matplotlib/tests/test_axes.py::test_twinx_axis_scales[png]", "lib/matplotlib/tests/test_axes.py::test_twin_inherit_autoscale_setting", "lib/matplotlib/tests/test_axes.py::test_inverted_cla", "lib/matplotlib/tests/test_axes.py::test_subclass_clear_cla", "lib/matplotlib/tests/test_axes.py::test_cla_not_redefined_internally", "lib/matplotlib/tests/test_axes.py::test_minorticks_on_rcParams_both[png]", "lib/matplotlib/tests/test_axes.py::test_autoscale_tiny_range[png]", "lib/matplotlib/tests/test_axes.py::test_autoscale_tight", "lib/matplotlib/tests/test_axes.py::test_autoscale_log_shared", "lib/matplotlib/tests/test_axes.py::test_use_sticky_edges", "lib/matplotlib/tests/test_axes.py::test_sticky_shared_axes[png]", "lib/matplotlib/tests/test_axes.py::test_basic_annotate[png]", "lib/matplotlib/tests/test_axes.py::test_arrow_simple[png]", "lib/matplotlib/tests/test_axes.py::test_arrow_empty", "lib/matplotlib/tests/test_axes.py::test_arrow_in_view", "lib/matplotlib/tests/test_axes.py::test_annotate_default_arrow", "lib/matplotlib/tests/test_axes.py::test_annotate_signature", "lib/matplotlib/tests/test_axes.py::test_fill_units[png]", "lib/matplotlib/tests/test_axes.py::test_plot_format_kwarg_redundant", "lib/matplotlib/tests/test_axes.py::test_errorbar_dashes[png]", "lib/matplotlib/tests/test_axes.py::test_single_point[png]", "lib/matplotlib/tests/test_axes.py::test_single_date[png]", "lib/matplotlib/tests/test_axes.py::test_shaped_data[png]", "lib/matplotlib/tests/test_axes.py::test_structured_data", "lib/matplotlib/tests/test_axes.py::test_aitoff_proj[png]", "lib/matplotlib/tests/test_axes.py::test_axvspan_epoch[png]", "lib/matplotlib/tests/test_axes.py::test_axhspan_epoch[png]", "lib/matplotlib/tests/test_axes.py::test_hexbin_extent[png]", "lib/matplotlib/tests/test_axes.py::test_hexbin_empty[png]", "lib/matplotlib/tests/test_axes.py::test_hexbin_pickable", "lib/matplotlib/tests/test_axes.py::test_hexbin_log[png]", "lib/matplotlib/tests/test_axes.py::test_hexbin_linear[png]", "lib/matplotlib/tests/test_axes.py::test_hexbin_log_clim", "lib/matplotlib/tests/test_axes.py::test_inverted_limits", "lib/matplotlib/tests/test_axes.py::test_nonfinite_limits[png]", "lib/matplotlib/tests/test_axes.py::test_limits_empty_data[png-scatter]", "lib/matplotlib/tests/test_axes.py::test_limits_empty_data[png-plot]", "lib/matplotlib/tests/test_axes.py::test_limits_empty_data[png-fill_between]", 
"lib/matplotlib/tests/test_axes.py::test_imshow[png]", "lib/matplotlib/tests/test_axes.py::test_imshow_clip[png]", "lib/matplotlib/tests/test_axes.py::test_imshow_norm_vminvmax", "lib/matplotlib/tests/test_axes.py::test_polycollection_joinstyle[png]", "lib/matplotlib/tests/test_axes.py::test_fill_between_input[2d_x_input]", "lib/matplotlib/tests/test_axes.py::test_fill_between_input[2d_y1_input]", "lib/matplotlib/tests/test_axes.py::test_fill_between_input[2d_y2_input]", "lib/matplotlib/tests/test_axes.py::test_fill_betweenx_input[2d_y_input]", "lib/matplotlib/tests/test_axes.py::test_fill_betweenx_input[2d_x1_input]", "lib/matplotlib/tests/test_axes.py::test_fill_betweenx_input[2d_x2_input]", "lib/matplotlib/tests/test_axes.py::test_fill_between_interpolate[png]", "lib/matplotlib/tests/test_axes.py::test_fill_between_interpolate_decreasing[png]", "lib/matplotlib/tests/test_axes.py::test_fill_between_interpolate_nan[png]", "lib/matplotlib/tests/test_axes.py::test_pcolorargs_5205", "lib/matplotlib/tests/test_axes.py::test_pcolormesh[png]", "lib/matplotlib/tests/test_axes.py::test_pcolormesh_alpha[png]", "lib/matplotlib/tests/test_axes.py::test_pcolormesh_datetime_axis[png]", "lib/matplotlib/tests/test_axes.py::test_pcolor_datetime_axis[png]", "lib/matplotlib/tests/test_axes.py::test_pcolorargs", "lib/matplotlib/tests/test_axes.py::test_pcolornearest[png]", "lib/matplotlib/tests/test_axes.py::test_pcolornearestunits[png]", "lib/matplotlib/tests/test_axes.py::test_pcolorflaterror", "lib/matplotlib/tests/test_axes.py::test_pcolorauto[png-False]", "lib/matplotlib/tests/test_axes.py::test_pcolorauto[png-True]", "lib/matplotlib/tests/test_axes.py::test_canonical[png]", "lib/matplotlib/tests/test_axes.py::test_arc_angles[png]", "lib/matplotlib/tests/test_axes.py::test_arc_ellipse[png]", "lib/matplotlib/tests/test_axes.py::test_marker_as_markerstyle", "lib/matplotlib/tests/test_axes.py::test_markevery[png]", "lib/matplotlib/tests/test_axes.py::test_markevery_line[png]", "lib/matplotlib/tests/test_axes.py::test_markevery_linear_scales[png]", "lib/matplotlib/tests/test_axes.py::test_markevery_linear_scales_zoomed[png]", "lib/matplotlib/tests/test_axes.py::test_markevery_log_scales[png]", "lib/matplotlib/tests/test_axes.py::test_markevery_polar[png]", "lib/matplotlib/tests/test_axes.py::test_markevery_linear_scales_nans[png]", "lib/matplotlib/tests/test_axes.py::test_marker_edges[png]", "lib/matplotlib/tests/test_axes.py::test_bar_tick_label_single[png]", "lib/matplotlib/tests/test_axes.py::test_nan_bar_values", "lib/matplotlib/tests/test_axes.py::test_bar_ticklabel_fail", "lib/matplotlib/tests/test_axes.py::test_bar_tick_label_multiple[png]", "lib/matplotlib/tests/test_axes.py::test_bar_tick_label_multiple_old_alignment[png]", "lib/matplotlib/tests/test_axes.py::test_bar_decimal_center[png]", "lib/matplotlib/tests/test_axes.py::test_barh_decimal_center[png]", "lib/matplotlib/tests/test_axes.py::test_bar_decimal_width[png]", "lib/matplotlib/tests/test_axes.py::test_barh_decimal_height[png]", "lib/matplotlib/tests/test_axes.py::test_bar_color_none_alpha", "lib/matplotlib/tests/test_axes.py::test_bar_edgecolor_none_alpha", "lib/matplotlib/tests/test_axes.py::test_barh_tick_label[png]", "lib/matplotlib/tests/test_axes.py::test_bar_timedelta", "lib/matplotlib/tests/test_axes.py::test_boxplot_dates_pandas", "lib/matplotlib/tests/test_axes.py::test_boxplot_capwidths", "lib/matplotlib/tests/test_axes.py::test_pcolor_regression", "lib/matplotlib/tests/test_axes.py::test_bar_pandas", 
"lib/matplotlib/tests/test_axes.py::test_bar_pandas_indexed", "lib/matplotlib/tests/test_axes.py::test_bar_hatches[png]", "lib/matplotlib/tests/test_axes.py::test_bar_labels[x-1-x-expected_labels0-x]", "lib/matplotlib/tests/test_axes.py::test_bar_labels[x1-width1-label1-expected_labels1-_nolegend_]", "lib/matplotlib/tests/test_axes.py::test_bar_labels[x2-width2-label2-expected_labels2-_nolegend_]", "lib/matplotlib/tests/test_axes.py::test_bar_labels[x3-width3-bars-expected_labels3-bars]", "lib/matplotlib/tests/test_axes.py::test_bar_labels_length", "lib/matplotlib/tests/test_axes.py::test_pandas_minimal_plot", "lib/matplotlib/tests/test_axes.py::test_hist_log[png]", "lib/matplotlib/tests/test_axes.py::test_hist_log_2[png]", "lib/matplotlib/tests/test_axes.py::test_hist_log_barstacked", "lib/matplotlib/tests/test_axes.py::test_hist_bar_empty[png]", "lib/matplotlib/tests/test_axes.py::test_hist_float16", "lib/matplotlib/tests/test_axes.py::test_hist_step_empty[png]", "lib/matplotlib/tests/test_axes.py::test_hist_step_filled[png]", "lib/matplotlib/tests/test_axes.py::test_hist_density[png]", "lib/matplotlib/tests/test_axes.py::test_hist_unequal_bins_density", "lib/matplotlib/tests/test_axes.py::test_hist_datetime_datasets", "lib/matplotlib/tests/test_axes.py::test_hist_datetime_datasets_bins[date2num]", "lib/matplotlib/tests/test_axes.py::test_hist_datetime_datasets_bins[datetime.datetime]", "lib/matplotlib/tests/test_axes.py::test_hist_datetime_datasets_bins[np.datetime64]", "lib/matplotlib/tests/test_axes.py::test_hist_with_empty_input[data0-1]", "lib/matplotlib/tests/test_axes.py::test_hist_with_empty_input[data1-1]", "lib/matplotlib/tests/test_axes.py::test_hist_with_empty_input[data2-2]", "lib/matplotlib/tests/test_axes.py::test_hist_zorder[bar-1]", "lib/matplotlib/tests/test_axes.py::test_hist_zorder[step-2]", "lib/matplotlib/tests/test_axes.py::test_hist_zorder[stepfilled-1]", "lib/matplotlib/tests/test_axes.py::test_stairs[png]", "lib/matplotlib/tests/test_axes.py::test_stairs_fill[png]", "lib/matplotlib/tests/test_axes.py::test_stairs_update[png]", "lib/matplotlib/tests/test_axes.py::test_stairs_baseline_0[png]", "lib/matplotlib/tests/test_axes.py::test_stairs_empty", "lib/matplotlib/tests/test_axes.py::test_stairs_invalid_nan", "lib/matplotlib/tests/test_axes.py::test_stairs_invalid_mismatch", "lib/matplotlib/tests/test_axes.py::test_stairs_invalid_update", "lib/matplotlib/tests/test_axes.py::test_stairs_invalid_update2", "lib/matplotlib/tests/test_axes.py::test_stairs_options[png]", "lib/matplotlib/tests/test_axes.py::test_stairs_datetime[png]", "lib/matplotlib/tests/test_axes.py::test_stairs_edge_handling[png]", "lib/matplotlib/tests/test_axes.py::test_contour_hatching[png]", "lib/matplotlib/tests/test_axes.py::test_contour_colorbar[png]", "lib/matplotlib/tests/test_axes.py::test_hist2d[png]", "lib/matplotlib/tests/test_axes.py::test_hist2d_transpose[png]", "lib/matplotlib/tests/test_axes.py::test_hist2d_density", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_plot[png]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_marker[png]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_2D[png]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_decimal[png]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_color", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_color_warning[kwargs0]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_color_warning[kwargs1]", 
"lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_color_warning[kwargs2]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_color_warning[kwargs3]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_unfilled", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_unfillable", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_size_arg_size", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_edgecolor_RGB", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_invalid_color[png]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_no_invalid_color[png]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_norm_vminvmax", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_single_point[png]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_different_shapes[png]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[0.5-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case1-conversion]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[red-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[none-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[None-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case5-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[jaune-conversion]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case7-conversion]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case8-conversion]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case9-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case10-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case11-shape]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case12-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case13-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case14-conversion]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case15-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case16-shape]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case17-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case18-shape]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case19-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case20-shape]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case21-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case22-shape]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case23-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case24-shape]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case25-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case26-shape]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case27-conversion]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case28-conversion]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case29-conversion]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_single_color_c[png]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_linewidths", 
"lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args[params0-expected_result0]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args[params1-expected_result1]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args[params2-expected_result2]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args[params3-expected_result3]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args[params4-expected_result4]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs0-None]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs1-None]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs2-r]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs3-expected_edgecolors3]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs4-r]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs5-face]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs6-none]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs7-r]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs8-r]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs9-r]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs10-g]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_error", "lib/matplotlib/tests/test_axes.py::test_as_mpl_axes_api", "lib/matplotlib/tests/test_axes.py::test_pyplot_axes", "lib/matplotlib/tests/test_axes.py::test_log_scales", "lib/matplotlib/tests/test_axes.py::test_log_scales_no_data", "lib/matplotlib/tests/test_axes.py::test_log_scales_invalid", "lib/matplotlib/tests/test_axes.py::test_stackplot[png]", "lib/matplotlib/tests/test_axes.py::test_stackplot_baseline[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_baseline[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_rangewhis[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_percentilewhis[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_with_xlabels[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_horizontal[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_with_ylabels[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_patchartist[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_custompatchartist[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_customoutlier[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_showcustommean[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_custombox[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_custommedian[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_customcap[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_customwhisker[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_shownotches[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_nocaps[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_nobox[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_no_flier_stats[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_showmean[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_showmeanasline[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_scalarwidth[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_customwidths[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_custompositions[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_bad_widths", 
"lib/matplotlib/tests/test_axes.py::test_bxp_bad_positions", "lib/matplotlib/tests/test_axes.py::test_bxp_custom_capwidths[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_custom_capwidth[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_bad_capwidths", "lib/matplotlib/tests/test_axes.py::test_boxplot[png]", "lib/matplotlib/tests/test_axes.py::test_boxplot_custom_capwidths[png]", "lib/matplotlib/tests/test_axes.py::test_boxplot_sym2[png]", "lib/matplotlib/tests/test_axes.py::test_boxplot_sym[png]", "lib/matplotlib/tests/test_axes.py::test_boxplot_autorange_whiskers[png]", "lib/matplotlib/tests/test_axes.py::test_boxplot_rc_parameters[png]", "lib/matplotlib/tests/test_axes.py::test_boxplot_with_CIarray[png]", "lib/matplotlib/tests/test_axes.py::test_boxplot_no_weird_whisker[png]", "lib/matplotlib/tests/test_axes.py::test_boxplot_bad_medians", "lib/matplotlib/tests/test_axes.py::test_boxplot_bad_ci", "lib/matplotlib/tests/test_axes.py::test_boxplot_zorder", "lib/matplotlib/tests/test_axes.py::test_boxplot_marker_behavior", "lib/matplotlib/tests/test_axes.py::test_boxplot_mod_artist_after_plotting[png]", "lib/matplotlib/tests/test_axes.py::test_vert_violinplot_baseline[png]", "lib/matplotlib/tests/test_axes.py::test_vert_violinplot_showmeans[png]", "lib/matplotlib/tests/test_axes.py::test_vert_violinplot_showextrema[png]", "lib/matplotlib/tests/test_axes.py::test_vert_violinplot_showmedians[png]", "lib/matplotlib/tests/test_axes.py::test_vert_violinplot_showall[png]", "lib/matplotlib/tests/test_axes.py::test_vert_violinplot_custompoints_10[png]", "lib/matplotlib/tests/test_axes.py::test_vert_violinplot_custompoints_200[png]", "lib/matplotlib/tests/test_axes.py::test_horiz_violinplot_baseline[png]", "lib/matplotlib/tests/test_axes.py::test_horiz_violinplot_showmedians[png]", "lib/matplotlib/tests/test_axes.py::test_horiz_violinplot_showmeans[png]", "lib/matplotlib/tests/test_axes.py::test_horiz_violinplot_showextrema[png]", "lib/matplotlib/tests/test_axes.py::test_horiz_violinplot_showall[png]", "lib/matplotlib/tests/test_axes.py::test_horiz_violinplot_custompoints_10[png]", "lib/matplotlib/tests/test_axes.py::test_horiz_violinplot_custompoints_200[png]", "lib/matplotlib/tests/test_axes.py::test_violinplot_bad_positions", "lib/matplotlib/tests/test_axes.py::test_violinplot_bad_widths", "lib/matplotlib/tests/test_axes.py::test_violinplot_bad_quantiles", "lib/matplotlib/tests/test_axes.py::test_violinplot_outofrange_quantiles", "lib/matplotlib/tests/test_axes.py::test_violinplot_single_list_quantiles[png]", "lib/matplotlib/tests/test_axes.py::test_violinplot_pandas_series[png]", "lib/matplotlib/tests/test_axes.py::test_manage_xticks", "lib/matplotlib/tests/test_axes.py::test_boxplot_not_single", "lib/matplotlib/tests/test_axes.py::test_tick_space_size_0", "lib/matplotlib/tests/test_axes.py::test_errorbar[png]", "lib/matplotlib/tests/test_axes.py::test_mixed_errorbar_polar_caps[png]", "lib/matplotlib/tests/test_axes.py::test_errorbar_colorcycle", "lib/matplotlib/tests/test_axes.py::test_errorbar_cycle_ecolor[png]", "lib/matplotlib/tests/test_axes.py::test_errorbar_shape", "lib/matplotlib/tests/test_axes.py::test_errorbar_limits[png]", "lib/matplotlib/tests/test_axes.py::test_errorbar_nonefmt", "lib/matplotlib/tests/test_axes.py::test_errorbar_line_specific_kwargs", "lib/matplotlib/tests/test_axes.py::test_errorbar_with_prop_cycle[png]", "lib/matplotlib/tests/test_axes.py::test_errorbar_every_invalid", "lib/matplotlib/tests/test_axes.py::test_xerr_yerr_not_negative", 
"lib/matplotlib/tests/test_axes.py::test_errorbar_every[png]", "lib/matplotlib/tests/test_axes.py::test_errorbar_linewidth_type[elinewidth0]", "lib/matplotlib/tests/test_axes.py::test_errorbar_linewidth_type[elinewidth1]", "lib/matplotlib/tests/test_axes.py::test_errorbar_linewidth_type[1]", "lib/matplotlib/tests/test_axes.py::test_errorbar_nan[png]", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_stepfilled[png]", "lib/matplotlib/tests/test_axes.py::test_hist_offset[png]", "lib/matplotlib/tests/test_axes.py::test_hist_step[png]", "lib/matplotlib/tests/test_axes.py::test_hist_step_horiz[png]", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_weighted[png]", "lib/matplotlib/tests/test_axes.py::test_stem[png-w/", "lib/matplotlib/tests/test_axes.py::test_stem[png-w/o", "lib/matplotlib/tests/test_axes.py::test_stem_args", "lib/matplotlib/tests/test_axes.py::test_stem_markerfmt", "lib/matplotlib/tests/test_axes.py::test_stem_dates", "lib/matplotlib/tests/test_axes.py::test_stem_orientation[png-w/", "lib/matplotlib/tests/test_axes.py::test_stem_orientation[png-w/o", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_stepfilled_alpha[png]", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_step[png]", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_density[png]", "lib/matplotlib/tests/test_axes.py::test_hist_step_bottom[png]", "lib/matplotlib/tests/test_axes.py::test_hist_stepfilled_geometry", "lib/matplotlib/tests/test_axes.py::test_hist_step_geometry", "lib/matplotlib/tests/test_axes.py::test_hist_stepfilled_bottom_geometry", "lib/matplotlib/tests/test_axes.py::test_hist_step_bottom_geometry", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_stepfilled_geometry", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_step_geometry", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_stepfilled_bottom_geometry", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_step_bottom_geometry", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_bar[png]", "lib/matplotlib/tests/test_axes.py::test_hist_barstacked_bottom_unchanged", "lib/matplotlib/tests/test_axes.py::test_hist_emptydata", "lib/matplotlib/tests/test_axes.py::test_hist_labels", "lib/matplotlib/tests/test_axes.py::test_transparent_markers[png]", "lib/matplotlib/tests/test_axes.py::test_rgba_markers[png]", "lib/matplotlib/tests/test_axes.py::test_mollweide_grid[png]", "lib/matplotlib/tests/test_axes.py::test_mollweide_forward_inverse_closure", "lib/matplotlib/tests/test_axes.py::test_mollweide_inverse_forward_closure", "lib/matplotlib/tests/test_axes.py::test_alpha[png]", "lib/matplotlib/tests/test_axes.py::test_eventplot[png]", "lib/matplotlib/tests/test_axes.py::test_eventplot_defaults[png]", "lib/matplotlib/tests/test_axes.py::test_eventplot_colors[colors0]", "lib/matplotlib/tests/test_axes.py::test_eventplot_colors[colors1]", "lib/matplotlib/tests/test_axes.py::test_eventplot_colors[colors2]", "lib/matplotlib/tests/test_axes.py::test_eventplot_problem_kwargs[png]", "lib/matplotlib/tests/test_axes.py::test_empty_eventplot", "lib/matplotlib/tests/test_axes.py::test_eventplot_orientation[None-data0]", "lib/matplotlib/tests/test_axes.py::test_eventplot_orientation[None-data1]", "lib/matplotlib/tests/test_axes.py::test_eventplot_orientation[None-data2]", "lib/matplotlib/tests/test_axes.py::test_eventplot_orientation[vertical-data0]", "lib/matplotlib/tests/test_axes.py::test_eventplot_orientation[vertical-data1]", "lib/matplotlib/tests/test_axes.py::test_eventplot_orientation[vertical-data2]", 
"lib/matplotlib/tests/test_axes.py::test_eventplot_orientation[horizontal-data0]", "lib/matplotlib/tests/test_axes.py::test_eventplot_orientation[horizontal-data1]", "lib/matplotlib/tests/test_axes.py::test_eventplot_orientation[horizontal-data2]", "lib/matplotlib/tests/test_axes.py::test_eventplot_units_list[png]", "lib/matplotlib/tests/test_axes.py::test_marker_styles[png]", "lib/matplotlib/tests/test_axes.py::test_markers_fillstyle_rcparams[png]", "lib/matplotlib/tests/test_axes.py::test_vertex_markers[png]", "lib/matplotlib/tests/test_axes.py::test_eb_line_zorder[png]", "lib/matplotlib/tests/test_axes.py::test_axline_loglog[png]", "lib/matplotlib/tests/test_axes.py::test_axline[png]", "lib/matplotlib/tests/test_axes.py::test_axline_transaxes[png]", "lib/matplotlib/tests/test_axes.py::test_axline_transaxes_panzoom[png]", "lib/matplotlib/tests/test_axes.py::test_axline_args", "lib/matplotlib/tests/test_axes.py::test_vlines[png]", "lib/matplotlib/tests/test_axes.py::test_vlines_default", "lib/matplotlib/tests/test_axes.py::test_hlines[png]", "lib/matplotlib/tests/test_axes.py::test_hlines_default", "lib/matplotlib/tests/test_axes.py::test_lines_with_colors[png-data0]", "lib/matplotlib/tests/test_axes.py::test_lines_with_colors[png-data1]", "lib/matplotlib/tests/test_axes.py::test_step_linestyle[png]", "lib/matplotlib/tests/test_axes.py::test_mixed_collection[png]", "lib/matplotlib/tests/test_axes.py::test_subplot_key_hash", "lib/matplotlib/tests/test_axes.py::test_specgram[png]", "lib/matplotlib/tests/test_axes.py::test_specgram_magnitude[png]", "lib/matplotlib/tests/test_axes.py::test_specgram_angle[png]", "lib/matplotlib/tests/test_axes.py::test_specgram_fs_none", "lib/matplotlib/tests/test_axes.py::test_specgram_origin_rcparam[png]", "lib/matplotlib/tests/test_axes.py::test_specgram_origin_kwarg", "lib/matplotlib/tests/test_axes.py::test_psd_csd[png]", "lib/matplotlib/tests/test_axes.py::test_spectrum[png]", "lib/matplotlib/tests/test_axes.py::test_psd_csd_edge_cases", "lib/matplotlib/tests/test_axes.py::test_twin_remove[png]", "lib/matplotlib/tests/test_axes.py::test_twin_spines[png]", "lib/matplotlib/tests/test_axes.py::test_twin_spines_on_top[png]", "lib/matplotlib/tests/test_axes.py::test_rcparam_grid_minor[both-True-True]", "lib/matplotlib/tests/test_axes.py::test_rcparam_grid_minor[major-True-False]", "lib/matplotlib/tests/test_axes.py::test_rcparam_grid_minor[minor-False-True]", "lib/matplotlib/tests/test_axes.py::test_grid", "lib/matplotlib/tests/test_axes.py::test_reset_grid", "lib/matplotlib/tests/test_axes.py::test_reset_ticks[png]", "lib/matplotlib/tests/test_axes.py::test_vline_limit", "lib/matplotlib/tests/test_axes.py::test_axline_minmax[axvline-axhline-args0]", "lib/matplotlib/tests/test_axes.py::test_axline_minmax[axvspan-axhspan-args1]", "lib/matplotlib/tests/test_axes.py::test_empty_shared_subplots", "lib/matplotlib/tests/test_axes.py::test_shared_with_aspect_1", "lib/matplotlib/tests/test_axes.py::test_shared_with_aspect_2", "lib/matplotlib/tests/test_axes.py::test_shared_with_aspect_3", "lib/matplotlib/tests/test_axes.py::test_shared_aspect_error", "lib/matplotlib/tests/test_axes.py::test_axis_errors[TypeError-args0-kwargs0-axis\\\\(\\\\)", "lib/matplotlib/tests/test_axes.py::test_axis_errors[ValueError-args1-kwargs1-Unrecognized", "lib/matplotlib/tests/test_axes.py::test_axis_errors[TypeError-args2-kwargs2-the", "lib/matplotlib/tests/test_axes.py::test_axis_errors[TypeError-args3-kwargs3-axis\\\\(\\\\)", 
"lib/matplotlib/tests/test_axes.py::test_axis_method_errors", "lib/matplotlib/tests/test_axes.py::test_twin_with_aspect[x]", "lib/matplotlib/tests/test_axes.py::test_twin_with_aspect[y]", "lib/matplotlib/tests/test_axes.py::test_relim_visible_only", "lib/matplotlib/tests/test_axes.py::test_text_labelsize", "lib/matplotlib/tests/test_axes.py::test_pie_default[png]", "lib/matplotlib/tests/test_axes.py::test_pie_linewidth_0[png]", "lib/matplotlib/tests/test_axes.py::test_pie_center_radius[png]", "lib/matplotlib/tests/test_axes.py::test_pie_linewidth_2[png]", "lib/matplotlib/tests/test_axes.py::test_pie_ccw_true[png]", "lib/matplotlib/tests/test_axes.py::test_pie_frame_grid[png]", "lib/matplotlib/tests/test_axes.py::test_pie_rotatelabels_true[png]", "lib/matplotlib/tests/test_axes.py::test_pie_nolabel_but_legend[png]", "lib/matplotlib/tests/test_axes.py::test_pie_textprops", "lib/matplotlib/tests/test_axes.py::test_pie_get_negative_values", "lib/matplotlib/tests/test_axes.py::test_normalize_kwarg_pie", "lib/matplotlib/tests/test_axes.py::test_set_get_ticklabels[png]", "lib/matplotlib/tests/test_axes.py::test_set_ticks_with_labels[png]", "lib/matplotlib/tests/test_axes.py::test_set_noniterable_ticklabels", "lib/matplotlib/tests/test_axes.py::test_subsampled_ticklabels", "lib/matplotlib/tests/test_axes.py::test_mismatched_ticklabels", "lib/matplotlib/tests/test_axes.py::test_empty_ticks_fixed_loc", "lib/matplotlib/tests/test_axes.py::test_retain_tick_visibility[png]", "lib/matplotlib/tests/test_axes.py::test_tick_label_update", "lib/matplotlib/tests/test_axes.py::test_o_marker_path_snap[png]", "lib/matplotlib/tests/test_axes.py::test_margins", "lib/matplotlib/tests/test_axes.py::test_set_margin_updates_limits", "lib/matplotlib/tests/test_axes.py::test_margins_errors[ValueError-args0-kwargs0-margin", "lib/matplotlib/tests/test_axes.py::test_margins_errors[ValueError-args1-kwargs1-margin", "lib/matplotlib/tests/test_axes.py::test_margins_errors[ValueError-args2-kwargs2-margin", "lib/matplotlib/tests/test_axes.py::test_margins_errors[ValueError-args3-kwargs3-margin", "lib/matplotlib/tests/test_axes.py::test_margins_errors[TypeError-args4-kwargs4-Cannot", "lib/matplotlib/tests/test_axes.py::test_margins_errors[TypeError-args5-kwargs5-Cannot", "lib/matplotlib/tests/test_axes.py::test_margins_errors[TypeError-args6-kwargs6-Must", "lib/matplotlib/tests/test_axes.py::test_length_one_hist", "lib/matplotlib/tests/test_axes.py::test_set_xy_bound", "lib/matplotlib/tests/test_axes.py::test_pathological_hexbin", "lib/matplotlib/tests/test_axes.py::test_color_None", "lib/matplotlib/tests/test_axes.py::test_color_alias", "lib/matplotlib/tests/test_axes.py::test_numerical_hist_label", "lib/matplotlib/tests/test_axes.py::test_unicode_hist_label", "lib/matplotlib/tests/test_axes.py::test_move_offsetlabel", "lib/matplotlib/tests/test_axes.py::test_rc_spines[png]", "lib/matplotlib/tests/test_axes.py::test_rc_grid[png]", "lib/matplotlib/tests/test_axes.py::test_rc_tick", "lib/matplotlib/tests/test_axes.py::test_rc_major_minor_tick", "lib/matplotlib/tests/test_axes.py::test_square_plot", "lib/matplotlib/tests/test_axes.py::test_bad_plot_args", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data0-xy0-AxesImage]", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data0-xy1-AxesImage]", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data0-xy2-AxesImage]", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data0-xy3-PcolorImage]", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data0-xy4-QuadMesh]", 
"lib/matplotlib/tests/test_axes.py::test_pcolorfast[data1-xy0-AxesImage]", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data1-xy1-AxesImage]", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data1-xy2-AxesImage]", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data1-xy3-PcolorImage]", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data1-xy4-QuadMesh]", "lib/matplotlib/tests/test_axes.py::test_shared_scale", "lib/matplotlib/tests/test_axes.py::test_shared_bool", "lib/matplotlib/tests/test_axes.py::test_violin_point_mass", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs0]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs1]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs2]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs3]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs4]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs5]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs6]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs7]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs8]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs9]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs10]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs11]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs12]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs13]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs14]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs15]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs16]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs17]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs18]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs19]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs20]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs21]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs22]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs23]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs24]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs25]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs26]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs27]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs28]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs29]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs30]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs31]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs32]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs33]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs34]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs35]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs36]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs37]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs38]", 
"lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs39]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs40]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs41]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs42]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs43]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs44]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs45]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs46]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs47]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs48]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs49]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs50]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs51]", "lib/matplotlib/tests/test_axes.py::test_dash_offset[png]", "lib/matplotlib/tests/test_axes.py::test_title_pad", "lib/matplotlib/tests/test_axes.py::test_title_location_roundtrip", "lib/matplotlib/tests/test_axes.py::test_title_location_shared[True]", "lib/matplotlib/tests/test_axes.py::test_title_location_shared[False]", "lib/matplotlib/tests/test_axes.py::test_loglog[png]", "lib/matplotlib/tests/test_axes.py::test_loglog_nonpos[png]", "lib/matplotlib/tests/test_axes.py::test_axes_margins", "lib/matplotlib/tests/test_axes.py::test_remove_shared_axes[gca-x]", "lib/matplotlib/tests/test_axes.py::test_remove_shared_axes[gca-y]", "lib/matplotlib/tests/test_axes.py::test_remove_shared_axes[subplots-x]", "lib/matplotlib/tests/test_axes.py::test_remove_shared_axes[subplots-y]", "lib/matplotlib/tests/test_axes.py::test_remove_shared_axes[subplots_shared-x]", "lib/matplotlib/tests/test_axes.py::test_remove_shared_axes[subplots_shared-y]", "lib/matplotlib/tests/test_axes.py::test_remove_shared_axes[add_axes-x]", "lib/matplotlib/tests/test_axes.py::test_remove_shared_axes[add_axes-y]", "lib/matplotlib/tests/test_axes.py::test_remove_shared_axes_relim", "lib/matplotlib/tests/test_axes.py::test_shared_axes_autoscale", "lib/matplotlib/tests/test_axes.py::test_adjust_numtick_aspect", "lib/matplotlib/tests/test_axes.py::test_auto_numticks", "lib/matplotlib/tests/test_axes.py::test_auto_numticks_log", "lib/matplotlib/tests/test_axes.py::test_broken_barh_empty", "lib/matplotlib/tests/test_axes.py::test_broken_barh_timedelta", "lib/matplotlib/tests/test_axes.py::test_pandas_pcolormesh", "lib/matplotlib/tests/test_axes.py::test_pandas_indexing_dates", "lib/matplotlib/tests/test_axes.py::test_pandas_errorbar_indexing", "lib/matplotlib/tests/test_axes.py::test_pandas_index_shape", "lib/matplotlib/tests/test_axes.py::test_pandas_indexing_hist", "lib/matplotlib/tests/test_axes.py::test_pandas_bar_align_center", "lib/matplotlib/tests/test_axes.py::test_axis_set_tick_params_labelsize_labelcolor", "lib/matplotlib/tests/test_axes.py::test_axes_tick_params_gridlines", "lib/matplotlib/tests/test_axes.py::test_axes_tick_params_ylabelside", "lib/matplotlib/tests/test_axes.py::test_axes_tick_params_xlabelside", "lib/matplotlib/tests/test_axes.py::test_none_kwargs", "lib/matplotlib/tests/test_axes.py::test_bar_uint8", "lib/matplotlib/tests/test_axes.py::test_date_timezone_x[png]", "lib/matplotlib/tests/test_axes.py::test_date_timezone_y[png]", "lib/matplotlib/tests/test_axes.py::test_date_timezone_x_and_y[png]", 
"lib/matplotlib/tests/test_axes.py::test_axisbelow[png]", "lib/matplotlib/tests/test_axes.py::test_titletwiny", "lib/matplotlib/tests/test_axes.py::test_titlesetpos", "lib/matplotlib/tests/test_axes.py::test_title_xticks_top", "lib/matplotlib/tests/test_axes.py::test_title_xticks_top_both", "lib/matplotlib/tests/test_axes.py::test_title_above_offset[left", "lib/matplotlib/tests/test_axes.py::test_title_above_offset[center", "lib/matplotlib/tests/test_axes.py::test_title_above_offset[both", "lib/matplotlib/tests/test_axes.py::test_title_no_move_off_page", "lib/matplotlib/tests/test_axes.py::test_offset_label_color", "lib/matplotlib/tests/test_axes.py::test_offset_text_visible", "lib/matplotlib/tests/test_axes.py::test_large_offset", "lib/matplotlib/tests/test_axes.py::test_barb_units", "lib/matplotlib/tests/test_axes.py::test_quiver_units", "lib/matplotlib/tests/test_axes.py::test_bar_color_cycle", "lib/matplotlib/tests/test_axes.py::test_tick_param_label_rotation", "lib/matplotlib/tests/test_axes.py::test_fillbetween_cycle", "lib/matplotlib/tests/test_axes.py::test_log_margins", "lib/matplotlib/tests/test_axes.py::test_color_length_mismatch", "lib/matplotlib/tests/test_axes.py::test_eventplot_legend", "lib/matplotlib/tests/test_axes.py::test_bar_broadcast_args", "lib/matplotlib/tests/test_axes.py::test_invalid_axis_limits", "lib/matplotlib/tests/test_axes.py::test_minorticks_on[symlog-symlog]", "lib/matplotlib/tests/test_axes.py::test_minorticks_on[symlog-log]", "lib/matplotlib/tests/test_axes.py::test_minorticks_on[log-symlog]", "lib/matplotlib/tests/test_axes.py::test_minorticks_on[log-log]", "lib/matplotlib/tests/test_axes.py::test_twinx_knows_limits", "lib/matplotlib/tests/test_axes.py::test_zero_linewidth", "lib/matplotlib/tests/test_axes.py::test_empty_errorbar_legend", "lib/matplotlib/tests/test_axes.py::test_plot_decimal[png]", "lib/matplotlib/tests/test_axes.py::test_markerfacecolor_none_alpha[png]", "lib/matplotlib/tests/test_axes.py::test_tick_padding_tightbbox", "lib/matplotlib/tests/test_axes.py::test_inset", "lib/matplotlib/tests/test_axes.py::test_zoom_inset", "lib/matplotlib/tests/test_axes.py::test_inset_polar[png]", "lib/matplotlib/tests/test_axes.py::test_inset_projection", "lib/matplotlib/tests/test_axes.py::test_inset_subclass", "lib/matplotlib/tests/test_axes.py::test_indicate_inset_inverted[False-False]", "lib/matplotlib/tests/test_axes.py::test_indicate_inset_inverted[False-True]", "lib/matplotlib/tests/test_axes.py::test_indicate_inset_inverted[True-False]", "lib/matplotlib/tests/test_axes.py::test_indicate_inset_inverted[True-True]", "lib/matplotlib/tests/test_axes.py::test_set_position", "lib/matplotlib/tests/test_axes.py::test_spines_properbbox_after_zoom", "lib/matplotlib/tests/test_axes.py::test_gettightbbox_ignore_nan", "lib/matplotlib/tests/test_axes.py::test_scatter_series_non_zero_index", "lib/matplotlib/tests/test_axes.py::test_scatter_empty_data", "lib/matplotlib/tests/test_axes.py::test_annotate_across_transforms[png]", "lib/matplotlib/tests/test_axes.py::test_secondary_xy[png]", "lib/matplotlib/tests/test_axes.py::test_secondary_fail", "lib/matplotlib/tests/test_axes.py::test_secondary_resize", "lib/matplotlib/tests/test_axes.py::test_secondary_minorloc", "lib/matplotlib/tests/test_axes.py::test_secondary_formatter", "lib/matplotlib/tests/test_axes.py::test_secondary_repr", "lib/matplotlib/tests/test_axes.py::test_normal_axes", "lib/matplotlib/tests/test_axes.py::test_nodecorator", "lib/matplotlib/tests/test_axes.py::test_displaced_spine", 
"lib/matplotlib/tests/test_axes.py::test_tickdirs", "lib/matplotlib/tests/test_axes.py::test_minor_accountedfor", "lib/matplotlib/tests/test_axes.py::test_axis_bool_arguments[png]", "lib/matplotlib/tests/test_axes.py::test_axis_extent_arg", "lib/matplotlib/tests/test_axes.py::test_axis_extent_arg2", "lib/matplotlib/tests/test_axes.py::test_hist_auto_bins", "lib/matplotlib/tests/test_axes.py::test_hist_nan_data", "lib/matplotlib/tests/test_axes.py::test_hist_range_and_density", "lib/matplotlib/tests/test_axes.py::test_bar_errbar_zorder", "lib/matplotlib/tests/test_axes.py::test_set_ticks_inverted", "lib/matplotlib/tests/test_axes.py::test_aspect_nonlinear_adjustable_box", "lib/matplotlib/tests/test_axes.py::test_aspect_nonlinear_adjustable_datalim", "lib/matplotlib/tests/test_axes.py::test_box_aspect", "lib/matplotlib/tests/test_axes.py::test_box_aspect_custom_position", "lib/matplotlib/tests/test_axes.py::test_bbox_aspect_axes_init", "lib/matplotlib/tests/test_axes.py::test_set_aspect_negative", "lib/matplotlib/tests/test_axes.py::test_redraw_in_frame", "lib/matplotlib/tests/test_axes.py::test_invisible_axes_events", "lib/matplotlib/tests/test_axes.py::test_xtickcolor_is_not_markercolor", "lib/matplotlib/tests/test_axes.py::test_ytickcolor_is_not_markercolor", "lib/matplotlib/tests/test_axes.py::test_unautoscale[True-x]", "lib/matplotlib/tests/test_axes.py::test_unautoscale[True-y]", "lib/matplotlib/tests/test_axes.py::test_unautoscale[False-x]", "lib/matplotlib/tests/test_axes.py::test_unautoscale[False-y]", "lib/matplotlib/tests/test_axes.py::test_unautoscale[None-x]", "lib/matplotlib/tests/test_axes.py::test_unautoscale[None-y]", "lib/matplotlib/tests/test_axes.py::test_polar_interpolation_steps_variable_r[png]", "lib/matplotlib/tests/test_axes.py::test_autoscale_tiny_sticky", "lib/matplotlib/tests/test_axes.py::test_xtickcolor_is_not_xticklabelcolor", "lib/matplotlib/tests/test_axes.py::test_ytickcolor_is_not_yticklabelcolor", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[xx-small]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[x-small]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[small]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[medium]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[large]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[x-large]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[xx-large]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[larger]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[smaller]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[8]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[10]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[12]", "lib/matplotlib/tests/test_axes.py::test_multiplot_autoscale", "lib/matplotlib/tests/test_axes.py::test_sharing_does_not_link_positions", "lib/matplotlib/tests/test_axes.py::test_shared_axes_clear[png]", "lib/matplotlib/tests/test_axes.py::test_shared_axes_retick", "lib/matplotlib/tests/test_axes.py::test_ylabel_ha_with_position[left]", "lib/matplotlib/tests/test_axes.py::test_ylabel_ha_with_position[center]", "lib/matplotlib/tests/test_axes.py::test_ylabel_ha_with_position[right]", "lib/matplotlib/tests/test_axes.py::test_bar_label_location_vertical", "lib/matplotlib/tests/test_axes.py::test_bar_label_location_vertical_yinverted", 
"lib/matplotlib/tests/test_axes.py::test_bar_label_location_horizontal", "lib/matplotlib/tests/test_axes.py::test_bar_label_location_horizontal_yinverted", "lib/matplotlib/tests/test_axes.py::test_bar_label_location_horizontal_xinverted", "lib/matplotlib/tests/test_axes.py::test_bar_label_location_horizontal_xyinverted", "lib/matplotlib/tests/test_axes.py::test_bar_label_location_center", "lib/matplotlib/tests/test_axes.py::test_bar_label_location_errorbars", "lib/matplotlib/tests/test_axes.py::test_bar_label_fmt[%.2f]", "lib/matplotlib/tests/test_axes.py::test_bar_label_fmt[{:.2f}]", "lib/matplotlib/tests/test_axes.py::test_bar_label_fmt[format]", "lib/matplotlib/tests/test_axes.py::test_bar_label_fmt_error", "lib/matplotlib/tests/test_axes.py::test_bar_label_labels", "lib/matplotlib/tests/test_axes.py::test_bar_label_nan_ydata", "lib/matplotlib/tests/test_axes.py::test_bar_label_nan_ydata_inverted", "lib/matplotlib/tests/test_axes.py::test_nan_barlabels", "lib/matplotlib/tests/test_axes.py::test_patch_bounds", "lib/matplotlib/tests/test_axes.py::test_warn_ignored_scatter_kwargs", "lib/matplotlib/tests/test_axes.py::test_artist_sublists", "lib/matplotlib/tests/test_axes.py::test_empty_line_plots", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[None-f-'f'", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[None-o+-'o\\\\+'", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[None-:--':-'", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[None-rk-'rk'", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[None-:o-r-':o-r'", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[data1-f-'f'", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[data1-o+-'o\\\\+'", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[data1-:--':-'", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[data1-rk-'rk'", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[data1-:o-r-':o-r'", "lib/matplotlib/tests/test_axes.py::test_plot_format", "lib/matplotlib/tests/test_axes.py::test_automatic_legend", "lib/matplotlib/tests/test_axes.py::test_plot_errors", "lib/matplotlib/tests/test_axes.py::test_clim", "lib/matplotlib/tests/test_axes.py::test_bezier_autoscale", "lib/matplotlib/tests/test_axes.py::test_small_autoscale", "lib/matplotlib/tests/test_axes.py::test_get_xticklabel", "lib/matplotlib/tests/test_axes.py::test_bar_leading_nan", "lib/matplotlib/tests/test_axes.py::test_bar_all_nan[png]", "lib/matplotlib/tests/test_axes.py::test_extent_units[png]"]
|
3d6c3da884fafae4654df68144391cfe9be6f134
|
{"first_commit_time": 1661037601.0, "pr_title": "Add `Axes.get_tick_params()` method.", "pr_body": "## PR Summary\r\nAddresses https://github.com/matplotlib/matplotlib/issues/23603\r\n\r\n- Added `Axis.get_tick_params()` method, which will return `Axis._major_tick_kw` or `Axis._minor_tick_kw` depending on the value of `which`\r\n- Added `Axes.get_tick_params()` method, which will return the result of `Axis.get_tick_params()` on the `axis` specified and error out if `both` is passed in\r\n\r\n## PR Checklist\r\n\r\n<!-- Please mark any checkboxes that do not apply to this PR as [N/A]. -->\r\n**Tests and Styling**\r\n- [x] Has pytest style unit tests (and `pytest` passes).\r\n- [X] Is [Flake 8](https://flake8.pycqa.org/en/latest/) compliant (install `flake8-docstrings` and run `flake8 --docstring-convention=all`).\r\n\r\n**Documentation**\r\n- [x] New features are documented, with examples if plot related.\r\n- [x] New features have an entry in `doc/users/next_whats_new/` (follow instructions in README.rst there).\r\n- [N/A] API changes documented in `doc/api/next_api_changes/` (follow instructions in README.rst there).\r\n- [x] Documentation is sphinx and numpydoc compliant (the docs should [build](https://matplotlib.org/devel/documenting_mpl.html#building-the-docs) without error).\r\n\r\n<!--\r\nThank you so much for your PR! To help us review your contribution, please\r\nconsider the following points:\r\n\r\n- A development guide is available at https://matplotlib.org/devdocs/devel/index.html.\r\n\r\n- Help with git and github is available at\r\n https://matplotlib.org/devel/gitwash/development_workflow.html.\r\n\r\n- Do not create the PR out of main, but out of a separate branch.\r\n\r\n- The PR title should summarize the changes, for example \"Raise ValueError on\r\n non-numeric input to set_xlim\". Avoid non-descriptive titles such as\r\n \"Addresses issue #8576\".\r\n\r\n- The summary should provide at least 1-2 sentences describing the pull request\r\n in detail (Why is this change required? What problem does it solve?) and\r\n link to any relevant issues.\r\n\r\n- If you are contributing fixes to docstrings, please pay attention to\r\n http://matplotlib.org/devel/documenting_mpl.html#formatting. In particular,\r\n note the difference between using single backquotes, double backquotes, and\r\n asterisks in the markup.\r\n\r\nWe understand that PRs can sometimes be overwhelming, especially as the\r\nreviews start coming in. Please let us know if the reviews are unclear or\r\nthe recommended next step seems overly demanding, if you would like help in\r\naddressing a reviewer's comments, or if you have been waiting too long to hear\r\nback on your PR.\r\n-->\r\n", "pr_timeline": [{"time": 1661846075.0, "comment": "I believe that this can be one way to go, subject to the caveats in https://github.com/matplotlib/matplotlib/issues/23603#issuecomment-1212572486\r\n\r\nI think that there should be a copy of the dictionary returned and that there should be a comment that only those parameters deviating from the default (and/or rcParams?) are returned. \r\n\r\n(There should probably also be a version in mplot3d for Axes including a zaxis, but that can be extended in a different PR later.)"}, {"time": 1661846282.0, "comment": "Based on https://github.com/matplotlib/matplotlib/issues/23633 one may also opt for skipping the `Axes` method and only keep the `Axis` method. This would avoid any mplot3d issues and keep the API a bit cleaner. 
(But there is also the discoverability drawback of the `Axis` API.)"}, {"time": 1661857218.0, "comment": "@stefmolin sorry for letting this slip.\r\n\r\nI'm tempted to only add `Axis.get_tick_params()` for now.\r\n\r\nAs opposed to the setter, which can set both Axis, and thus can save code, the getter has to resolve to exactly one axis and thus `Axes.get_tick_params()` is just a 1:1 rewriting of the axis access. We can always add the `Axes` method later if we consider it reasonable, but adding it now and taking away later is problematic. So, let's be defensive to start with.\r\n\r\nWe should cross-ref in the docs of `Axes.tick_params()` and `Axis.tick_params()` which should reduce the discoverability issue a bit.\r\n\r\nA very central part of this, is communicating the meaning of these parameters: They are the default values, i.e. if new ticks are created (e.g. by panning or zooming) these values are used for the new ticks. This is in contrast with the actual tick values (the properties of the current tick objects), which can in principle be overwritten by the user.\r\nI think we don't have very clear communication on that right now (in particular also for the setter case `tick_params()` vs. `for tick in ...get_major_ticks(): tick.set_*()`), and it would be good to improve that, but maybe that's going a bit far for this PR and \"Get the appearance of ticks, tick labels, and gridlines.\" is a good enough description for a start."}, {"time": 1662417930.0, "comment": "> I'm tempted to only add `Axis.get_tick_params()` for now.\r\n> \r\n> As opposed to the setter, which can set both Axis, and thus can save code, the getter has to resolve to exactly one axis and thus `Axes.get_tick_params()` is just a 1:1 rewriting of the axis access. We can always add the `Axes` method later if we consider it reasonable, but adding it now and taking away later is problematic. So, let's be defensive to start with.\r\n\r\nWorks for me; I removed the extra method.\r\n\r\n> We should cross-ref in the docs of `Axes.tick_params()` and `Axis.tick_params()` which should reduce the discoverability issue a bit.\r\n>\r\n> A very central part of this, is communicating the meaning of these parameters: They are the default values, i.e. if new ticks are created (e.g. by panning or zooming) these values are used for the new ticks. This is in contrast with the actual tick values (the properties of the current tick objects), which can in principle be overwritten by the user.\r\n\r\nI updated the docstrings for this – let me know what you think.\r\n\r\n> I think we don't have very clear communication on that right now (in particular also for the setter case `tick_params()` vs. `for tick in ...get_major_ticks(): tick.set_*()`), and it would be good to improve that, but maybe that's going a bit far for this PR and \"Get the appearance of ticks, tick labels, and gridlines.\" is a good enough description for a start.\r\n\r\nNot sure what you mean by this. Are you referring to updating a docstring or code?\r\n"}, {"time": 1662537633.0, "comment": "> > I think we don't have very clear communication on that right now (in particular also for the setter case `tick_params()` vs. `for tick in ...get_major_ticks(): tick.set_*()`), and it would be good to improve that, but maybe that's going a bit far for this PR and \"Get the appearance of ticks, tick labels, and gridlines.\" is a good enough description for a start.\r\n> \r\n> Not sure what you mean by this. 
Are you referring to updating a docstring or code?\r\n\r\nThe larger point is that we are not clear in distinguishing between the concept of a Axis-wide default tick style (stored in `Axis._major/minor_tick_kw`) and the styles of the actual ticks (stored in the individual tick instances). While it's debatable whether such a distinction is desirable, this is how our current implementation works and it shines through in parts of the API, which sometimes causes confusion to the users.\r\n\r\nI'm not sure whether documentation or code is the the right way forward. With documentation, we can explain everything in detail. OTOH we could also try to move the user away from interacting with or thinking about single ticks (#23372), which might make a detailed explanation of the topic unnecessary. This would be part code part documentation.\r\n\r\nOn a side note, the latter would also be in line with ideas for internal refactoring that should increase performance (https://github.com/matplotlib/matplotlib/issues/5665#issuecomment-875155264).\r\n"}, {"time": 1662595018.0, "comment": "One additional complication I just noted is that the internal names in `_major/minor_tick_kw` do not match the `tick_params()` keywords:\r\n\r\nhttps://github.com/matplotlib/matplotlib/blob/3b79f63980c767fabc59ec5ba267818d8b4b5671/lib/matplotlib/axis.py#L948\r\n\r\nWe have to either translate back for `get_tick_params()` or clearly document that you get something else :frowning:\r\n\r\nI'm sorry this is a bit of a mess."}, {"time": 1662838859.0, "comment": "> We have to either translate back for `get_tick_params()` or clearly document that you get something else \ud83d\ude26\r\n\r\n@timhoffm – Is there a preference? I can see these being confusing if you get them back:\r\n\r\nhttps://github.com/matplotlib/matplotlib/blob/3b79f63980c767fabc59ec5ba267818d8b4b5671/lib/matplotlib/axis.py#L1009-L1016\r\n\r\nThe rest are probably fine. What do you think?"}, {"time": 1663103131.0, "comment": "Uh, this is a difficult decision. I'm tempted to translate back. I think the docstring https://matplotlib.org/devdocs/api/_as_gen/matplotlib.axes.Axes.tick_params.html#matplotlib.axes.Axes.tick_params is the only user-visible place where we document what can go in (we currently accept more, but that somewhat coincidence). In that sense, `get_tick_params` should return the names as given there.\r\n\r\nThis back-and forth converting is a bit annoying internally, but I think it is the most logical and consistent interface we can offer here."}, {"time": 1666470687.0, "comment": "I added a reverse option to the `_translate_tick_params()` method and updated it to no longer alter the input dictionary. I'm using this in the forward and reverse directions in the test, so we can make sure both directions work going forward."}, {"time": 1667668778.0, "comment": "@timhoffm - Any ideas what is going on for the docs per [this comment](https://github.com/matplotlib/matplotlib/pull/23692/#discussion_r1002745295) above?\r\n\r\nSome additional questions on documentation for this PR:\r\n\r\n1. Do we need an example to the docs for this?\r\n2. Should I add `get_tick_params()` to the What's New section?\r\n3. Since `_translate_tick_params()` no longer alters the keyword dict, should we add it to the API changes? 
Or does this not matter since it is not for external use?"}, {"time": 1667670073.0, "comment": "> @timhoffm - Any ideas what is going on for the docs per [this comment](https://github.com/matplotlib/matplotlib/pull/23692/#discussion_r1002745295) above?\r\n\r\nPlease add an entry to `doc/api/axis_api.rst`\r\n\r\n> 1. Do we need an example to the docs for this?\r\n\r\nYou can try and add an Example section to the docstring. I would not make an addition to the Example Gallery.\r\n\r\nI hope this is correct, because it'S only from memory - please ignore if I'm wrong: It may be a bit confusing because AFAIR there are a few entries in _major_tick_kw by default. But we're stating that this only contains deviations from defaults (having \"deviations from defaults\" by default is a bit awkward). OTOH it may make sense to spell this out.\r\n\r\n> 2. Should I add `get_tick_params()` to the What's New section?\r\n\r\nYes, please.\r\n \r\n> 3. Since `_translate_tick_params()` no longer alters the keyword dict, should we add it to the API changes? Or does this not matter since it is not for external use?\r\n\r\nNo API change note, because our public API has not changed.\r\n"}, {"time": 1667684403.0, "comment": "<s>Just close the PR. Open PRs anticipate a follow up action on the PR, even if that action is only \"decide what to do\".</s>"}, {"time": 1667677679.0, "comment": "> Just close the PR. Open PRs anticipate a follow up action on the PR, even if that action is only \"decide what to do\".\r\n\r\n@timhoffm What do you mean by this? Was this meant for another PR?"}, {"time": 1667682124.0, "comment": "I'm 98% sure @timhoffm meant this for #23787"}, {"time": 1667684418.0, "comment": "Yes, sorry! :sheep:\r\n"}, {"time": 1667698613.0, "comment": "> I hope this is correct, because it'S only from memory - please ignore if I'm wrong: It may be a bit confusing because AFAIR there are a few entries in _major_tick_kw by default. But we're stating that this only contains deviations from defaults (having \"deviations from defaults\" by default is a bit awkward). OTOH it may make sense to spell this out.\r\n\r\nYeah, there are a few entries in there by default. I agree the phrasing is a little confusing. I modified the docstrings to indicate that we are returning the appearance of any new elements that will be added without calling out defaults \u2013 hopefully, that makes it easier to understand.\r\n\r\nI also added the example to the docstring and what's new entry."}, {"time": 1668220611.0, "comment": "Should I also add the `.. versionadded::` directive per the latest version of the PR template? Would this be for 3.7?"}, {"time": 1668304538.0, "comment": "@jklymak - I added the `.. versionadded::` directive and tweaked the docstrings and example per your comments."}, {"time": 1669237761.0, "comment": "Thank you @stefmolin ! "}], "issues": {}}
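The review thread above converges on exposing only `Axis.get_tick_params()` and on translating the internal tick keyword names back to the public `tick_params()` vocabulary. A minimal sketch of that behaviour, assuming matplotlib >= 3.7 (where the getter landed); the specific keyword values below are illustrative, and the exact contents of the returned dict depend on rcParams defaults:

```python
# Minimal sketch of the behaviour discussed above: defaults set via
# set_tick_params() are read back by get_tick_params() using the public
# tick_params() keyword names, not the internal _major/_minor_tick_kw keys.
# Assumes matplotlib >= 3.7; keyword values here are purely illustrative.
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.yaxis.set_tick_params(which="major", labelsize=30, labelcolor="red",
                         direction="out")

major = ax.yaxis.get_tick_params(which="major")
print(major)  # e.g. {'direction': 'out', 'labelsize': 30, 'labelcolor': 'red', ...}
```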
|
matplotlib/matplotlib
| 25686
|
https://github.com/matplotlib/matplotlib/pull/25686
|
matplotlib__matplotlib-25686
|
[]
|
b86ebbafe4673583345d0a01a6ea205af34c58dc
|
diff --git a/doc/users/next_whats_new/get_suptitle.rst b/doc/users/next_whats_new/get_suptitle.rst
new file mode 100644
index 000000000000..b03ad10b1b4c
--- /dev/null
+++ b/doc/users/next_whats_new/get_suptitle.rst
@@ -0,0 +1,4 @@
+``Figure.get_suptitle()``, ``Figure.get_supxlabel()``, ``Figure.get_supylabel()``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+These methods return the strings set by ``Figure.suptitle()``, ``Figure.supxlabel()``
+and ``Figure.supylabel()`` respectively.
diff --git a/lib/matplotlib/figure.py b/lib/matplotlib/figure.py
index 7fdcc8cb6627..970bf957d4bf 100644
--- a/lib/matplotlib/figure.py
+++ b/lib/matplotlib/figure.py
@@ -388,6 +388,11 @@ def suptitle(self, t, **kwargs):
'size': 'figure.titlesize', 'weight': 'figure.titleweight'}
return self._suplabels(t, info, **kwargs)
+ def get_suptitle(self):
+ """Return the suptitle as string or an empty string if not set."""
+ text_obj = self._suptitle
+ return "" if text_obj is None else text_obj.get_text()
+
@_docstring.Substitution(x0=0.5, y0=0.01, name='supxlabel', ha='center',
va='bottom', rc='label')
@_docstring.copy(_suplabels)
@@ -398,6 +403,11 @@ def supxlabel(self, t, **kwargs):
'size': 'figure.labelsize', 'weight': 'figure.labelweight'}
return self._suplabels(t, info, **kwargs)
+ def get_supxlabel(self):
+ """Return the supxlabel as string or an empty string if not set."""
+ text_obj = self._supxlabel
+ return "" if text_obj is None else text_obj.get_text()
+
@_docstring.Substitution(x0=0.02, y0=0.5, name='supylabel', ha='left',
va='center', rc='label')
@_docstring.copy(_suplabels)
@@ -409,6 +419,11 @@ def supylabel(self, t, **kwargs):
'weight': 'figure.labelweight'}
return self._suplabels(t, info, **kwargs)
+ def get_supylabel(self):
+ """Return the supylabel as string or an empty string if not set."""
+ text_obj = self._supylabel
+ return "" if text_obj is None else text_obj.get_text()
+
def get_edgecolor(self):
"""Get the edge color of the Figure rectangle."""
return self.patch.get_edgecolor()
diff --git a/lib/matplotlib/figure.pyi b/lib/matplotlib/figure.pyi
index ee21892f32ac..f4c31506a2e1 100644
--- a/lib/matplotlib/figure.pyi
+++ b/lib/matplotlib/figure.pyi
@@ -90,8 +90,11 @@ class FigureBase(Artist):
def get_children(self) -> list[Artist]: ...
def contains(self, mouseevent: MouseEvent) -> tuple[bool, dict[Any, Any]]: ...
def suptitle(self, t: str, **kwargs) -> Text: ...
+ def get_suptitle(self) -> str: ...
def supxlabel(self, t: str, **kwargs) -> Text: ...
+ def get_supxlabel(self) -> str: ...
def supylabel(self, t: str, **kwargs) -> Text: ...
+ def get_supylabel(self) -> str: ...
def get_edgecolor(self) -> ColorType: ...
def get_facecolor(self) -> ColorType: ...
def get_frameon(self) -> bool: ...
|
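The patch above adds string getters for the figure-level super labels, and the test patch that follows exercises them. A minimal usage sketch, assuming a matplotlib build that already contains this change; the label strings are illustrative, not taken from either diff:

```python
# Minimal, hypothetical usage sketch for the getters added in the patch above
# (get_suptitle / get_supxlabel / get_supylabel).
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
assert fig.get_suptitle() == ""  # empty string while unset, per the patch

fig.suptitle("Overall title")
fig.supxlabel("shared x label")
fig.supylabel("shared y label")

print(fig.get_suptitle())   # -> 'Overall title'
print(fig.get_supxlabel())  # -> 'shared x label'
print(fig.get_supylabel())  # -> 'shared y label'
```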
diff --git a/lib/matplotlib/tests/test_figure.py b/lib/matplotlib/tests/test_figure.py
index 4188ca878fed..d8f137ddd61a 100644
--- a/lib/matplotlib/tests/test_figure.py
+++ b/lib/matplotlib/tests/test_figure.py
@@ -302,6 +302,19 @@ def test_suptitle_subfigures():
assert sf2.get_facecolor() == (1.0, 1.0, 1.0, 1.0)
+def test_get_suptitle_supxlabel_supylabel():
+ fig, ax = plt.subplots()
+ assert fig.get_suptitle() == ""
+ assert fig.get_supxlabel() == ""
+ assert fig.get_supylabel() == ""
+ fig.suptitle('suptitle')
+ assert fig.get_suptitle() == 'suptitle'
+ fig.supxlabel('supxlabel')
+ assert fig.get_supxlabel() == 'supxlabel'
+ fig.supylabel('supylabel')
+ assert fig.get_supylabel() == 'supylabel'
+
+
@image_comparison(['alpha_background'],
# only test png and svg. The PDF output appears correct,
# but Ghostscript does not preserve the background color.
| 2023-04-14T21:31:42
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"doc/users/next_whats_new/get_suptitle.rst": null, "lib/matplotlib/figure.py": "\"\"\"\n`matplotlib.figure` implements the following classes:\n\n`Figure`\n Top level `~matplotlib.artist.Artist`, which holds all plot elements.\n Many methods are implemented in `FigureBase`.\n\n`SubFigure`\n A logical figure inside a figure, usually added to a figure (or parent\n `SubFigure`) with `Figure.add_subfigure` or `Figure.subfigures` methods\n (provisional API v3.4).\n\n`SubplotParams`\n Control the default spacing between subplots.\n\nFigures are typically created using pyplot methods `~.pyplot.figure`,\n`~.pyplot.subplots`, and `~.pyplot.subplot_mosaic`.\n\n.. plot::\n :include-source:\n\n fig, ax = plt.subplots(figsize=(2, 2), facecolor='lightskyblue',\n layout='constrained')\n fig.suptitle('Figure')\n ax.set_title('Axes', loc='left', fontstyle='oblique', fontsize='medium')\n\nSome situations call for directly instantiating a `~.figure.Figure` class,\nusually inside an application of some sort (see :ref:`user_interfaces` for a\nlist of examples) . More information about Figures can be found at\n:ref:`figure_explanation`.\n\"\"\"\n\nfrom contextlib import ExitStack\nimport inspect\nimport itertools\nimport logging\nfrom numbers import Integral\n\nimport numpy as np\n\nimport matplotlib as mpl\nfrom matplotlib import _blocking_input, backend_bases, _docstring, projections\nfrom matplotlib.artist import (\n Artist, allow_rasterization, _finalize_rasterization)\nfrom matplotlib.backend_bases import (\n DrawEvent, FigureCanvasBase, NonGuiException, MouseButton, _get_renderer)\nimport matplotlib._api as _api\nimport matplotlib.cbook as cbook\nimport matplotlib.colorbar as cbar\nimport matplotlib.image as mimage\n\nfrom matplotlib.axes import Axes\nfrom matplotlib.gridspec import GridSpec\nfrom matplotlib.layout_engine import (\n ConstrainedLayoutEngine, TightLayoutEngine, LayoutEngine,\n PlaceHolderLayoutEngine\n)\nimport matplotlib.legend as mlegend\nfrom matplotlib.patches import Rectangle\nfrom matplotlib.text import Text\nfrom matplotlib.transforms import (Affine2D, Bbox, BboxTransformTo,\n TransformedBbox)\n\n_log = logging.getLogger(__name__)\n\n\ndef _stale_figure_callback(self, val):\n if self.figure:\n self.figure.stale = val\n\n\nclass _AxesStack:\n \"\"\"\n Helper class to track axes in a figure.\n\n Axes are tracked both in the order in which they have been added\n (``self._axes`` insertion/iteration order) and in the separate \"gca\" stack\n (which is the index to which they map in the ``self._axes`` dict).\n \"\"\"\n\n def __init__(self):\n self._axes = {} # Mapping of axes to \"gca\" order.\n self._counter = itertools.count()\n\n def as_list(self):\n \"\"\"List the axes that have been added to the figure.\"\"\"\n return [*self._axes] # This relies on dict preserving order.\n\n def remove(self, a):\n \"\"\"Remove the axes from the stack.\"\"\"\n self._axes.pop(a)\n\n def bubble(self, a):\n \"\"\"Move an axes, which must already exist in the stack, to the top.\"\"\"\n if a not in self._axes:\n raise ValueError(\"Axes has not been added yet\")\n self._axes[a] = next(self._counter)\n\n def add(self, a):\n \"\"\"Add an axes to the stack, ignoring it if already present.\"\"\"\n if a not in self._axes:\n self._axes[a] = next(self._counter)\n\n def current(self):\n \"\"\"Return the active axes, or None if the stack is empty.\"\"\"\n return max(self._axes, key=self._axes.__getitem__, default=None)\n\n\nclass SubplotParams:\n \"\"\"\n A class to hold the parameters for a subplot.\n \"\"\"\n\n def 
__init__(self, left=None, bottom=None, right=None, top=None,\n wspace=None, hspace=None):\n \"\"\"\n Defaults are given by :rc:`figure.subplot.[name]`.\n\n Parameters\n ----------\n left : float\n The position of the left edge of the subplots,\n as a fraction of the figure width.\n right : float\n The position of the right edge of the subplots,\n as a fraction of the figure width.\n bottom : float\n The position of the bottom edge of the subplots,\n as a fraction of the figure height.\n top : float\n The position of the top edge of the subplots,\n as a fraction of the figure height.\n wspace : float\n The width of the padding between subplots,\n as a fraction of the average Axes width.\n hspace : float\n The height of the padding between subplots,\n as a fraction of the average Axes height.\n \"\"\"\n for key in [\"left\", \"bottom\", \"right\", \"top\", \"wspace\", \"hspace\"]:\n setattr(self, key, mpl.rcParams[f\"figure.subplot.{key}\"])\n self.update(left, bottom, right, top, wspace, hspace)\n\n def update(self, left=None, bottom=None, right=None, top=None,\n wspace=None, hspace=None):\n \"\"\"\n Update the dimensions of the passed parameters. *None* means unchanged.\n \"\"\"\n if ((left if left is not None else self.left)\n >= (right if right is not None else self.right)):\n raise ValueError('left cannot be >= right')\n if ((bottom if bottom is not None else self.bottom)\n >= (top if top is not None else self.top)):\n raise ValueError('bottom cannot be >= top')\n if left is not None:\n self.left = left\n if right is not None:\n self.right = right\n if bottom is not None:\n self.bottom = bottom\n if top is not None:\n self.top = top\n if wspace is not None:\n self.wspace = wspace\n if hspace is not None:\n self.hspace = hspace\n\n\nclass FigureBase(Artist):\n \"\"\"\n Base class for `.Figure` and `.SubFigure` containing the methods that add\n artists to the figure or subfigure, create Axes, etc.\n \"\"\"\n def __init__(self, **kwargs):\n super().__init__()\n # remove the non-figure artist _axes property\n # as it makes no sense for a figure to be _in_ an Axes\n # this is used by the property methods in the artist base class\n # which are over-ridden in this class\n del self._axes\n\n self._suptitle = None\n self._supxlabel = None\n self._supylabel = None\n\n # groupers to keep track of x and y labels we want to align.\n # see self.align_xlabels and self.align_ylabels and\n # axis._get_tick_boxes_siblings\n self._align_label_groups = {\"x\": cbook.Grouper(), \"y\": cbook.Grouper()}\n\n self.figure = self\n self._localaxes = [] # track all axes\n self.artists = []\n self.lines = []\n self.patches = []\n self.texts = []\n self.images = []\n self.legends = []\n self.subfigs = []\n self.stale = True\n self.suppressComposite = None\n self.set(**kwargs)\n\n def _get_draw_artists(self, renderer):\n \"\"\"Also runs apply_aspect\"\"\"\n artists = self.get_children()\n for sfig in self.subfigs:\n artists.remove(sfig)\n childa = sfig.get_children()\n for child in childa:\n if child in artists:\n artists.remove(child)\n\n artists.remove(self.patch)\n artists = sorted(\n (artist for artist in artists if not artist.get_animated()),\n key=lambda artist: artist.get_zorder())\n for ax in self._localaxes:\n locator = ax.get_axes_locator()\n ax.apply_aspect(locator(ax, renderer) if locator else None)\n\n for child in ax.get_children():\n if hasattr(child, 'apply_aspect'):\n locator = child.get_axes_locator()\n child.apply_aspect(\n locator(child, renderer) if locator else None)\n return artists\n\n def 
autofmt_xdate(\n self, bottom=0.2, rotation=30, ha='right', which='major'):\n \"\"\"\n Date ticklabels often overlap, so it is useful to rotate them\n and right align them. Also, a common use case is a number of\n subplots with shared x-axis where the x-axis is date data. The\n ticklabels are often long, and it helps to rotate them on the\n bottom subplot and turn them off on other subplots, as well as\n turn off xlabels.\n\n Parameters\n ----------\n bottom : float, default: 0.2\n The bottom of the subplots for `subplots_adjust`.\n rotation : float, default: 30 degrees\n The rotation angle of the xtick labels in degrees.\n ha : {'left', 'center', 'right'}, default: 'right'\n The horizontal alignment of the xticklabels.\n which : {'major', 'minor', 'both'}, default: 'major'\n Selects which ticklabels to rotate.\n \"\"\"\n _api.check_in_list(['major', 'minor', 'both'], which=which)\n allsubplots = all(ax.get_subplotspec() for ax in self.axes)\n if len(self.axes) == 1:\n for label in self.axes[0].get_xticklabels(which=which):\n label.set_ha(ha)\n label.set_rotation(rotation)\n else:\n if allsubplots:\n for ax in self.get_axes():\n if ax.get_subplotspec().is_last_row():\n for label in ax.get_xticklabels(which=which):\n label.set_ha(ha)\n label.set_rotation(rotation)\n else:\n for label in ax.get_xticklabels(which=which):\n label.set_visible(False)\n ax.set_xlabel('')\n\n if allsubplots:\n self.subplots_adjust(bottom=bottom)\n self.stale = True\n\n def get_children(self):\n \"\"\"Get a list of artists contained in the figure.\"\"\"\n return [self.patch,\n *self.artists,\n *self._localaxes,\n *self.lines,\n *self.patches,\n *self.texts,\n *self.images,\n *self.legends,\n *self.subfigs]\n\n def contains(self, mouseevent):\n \"\"\"\n Test whether the mouse event occurred on the figure.\n\n Returns\n -------\n bool, {}\n \"\"\"\n if self._different_canvas(mouseevent):\n return False, {}\n inside = self.bbox.contains(mouseevent.x, mouseevent.y)\n return inside, {}\n\n def get_window_extent(self, renderer=None):\n # docstring inherited\n return self.bbox\n\n def _suplabels(self, t, info, **kwargs):\n \"\"\"\n Add a centered %(name)s to the figure.\n\n Parameters\n ----------\n t : str\n The %(name)s text.\n x : float, default: %(x0)s\n The x location of the text in figure coordinates.\n y : float, default: %(y0)s\n The y location of the text in figure coordinates.\n horizontalalignment, ha : {'center', 'left', 'right'}, default: %(ha)s\n The horizontal alignment of the text relative to (*x*, *y*).\n verticalalignment, va : {'top', 'center', 'bottom', 'baseline'}, \\\ndefault: %(va)s\n The vertical alignment of the text relative to (*x*, *y*).\n fontsize, size : default: :rc:`figure.%(rc)ssize`\n The font size of the text. See `.Text.set_size` for possible\n values.\n fontweight, weight : default: :rc:`figure.%(rc)sweight`\n The font weight of the text. See `.Text.set_weight` for possible\n values.\n\n Returns\n -------\n text\n The `.Text` instance of the %(name)s.\n\n Other Parameters\n ----------------\n fontproperties : None or dict, optional\n A dict of font properties. If *fontproperties* is given the\n default values for font size and weight are taken from the\n `.FontProperties` defaults. 
:rc:`figure.%(rc)ssize` and\n :rc:`figure.%(rc)sweight` are ignored in this case.\n\n **kwargs\n Additional kwargs are `matplotlib.text.Text` properties.\n \"\"\"\n\n suplab = getattr(self, info['name'])\n\n x = kwargs.pop('x', None)\n y = kwargs.pop('y', None)\n if info['name'] in ['_supxlabel', '_suptitle']:\n autopos = y is None\n elif info['name'] == '_supylabel':\n autopos = x is None\n if x is None:\n x = info['x0']\n if y is None:\n y = info['y0']\n\n if 'horizontalalignment' not in kwargs and 'ha' not in kwargs:\n kwargs['horizontalalignment'] = info['ha']\n if 'verticalalignment' not in kwargs and 'va' not in kwargs:\n kwargs['verticalalignment'] = info['va']\n if 'rotation' not in kwargs:\n kwargs['rotation'] = info['rotation']\n\n if 'fontproperties' not in kwargs:\n if 'fontsize' not in kwargs and 'size' not in kwargs:\n kwargs['size'] = mpl.rcParams[info['size']]\n if 'fontweight' not in kwargs and 'weight' not in kwargs:\n kwargs['weight'] = mpl.rcParams[info['weight']]\n\n sup = self.text(x, y, t, **kwargs)\n if suplab is not None:\n suplab.set_text(t)\n suplab.set_position((x, y))\n suplab.update_from(sup)\n sup.remove()\n else:\n suplab = sup\n suplab._autopos = autopos\n setattr(self, info['name'], suplab)\n self.stale = True\n return suplab\n\n @_docstring.Substitution(x0=0.5, y0=0.98, name='suptitle', ha='center',\n va='top', rc='title')\n @_docstring.copy(_suplabels)\n def suptitle(self, t, **kwargs):\n # docstring from _suplabels...\n info = {'name': '_suptitle', 'x0': 0.5, 'y0': 0.98,\n 'ha': 'center', 'va': 'top', 'rotation': 0,\n 'size': 'figure.titlesize', 'weight': 'figure.titleweight'}\n return self._suplabels(t, info, **kwargs)\n\n @_docstring.Substitution(x0=0.5, y0=0.01, name='supxlabel', ha='center',\n va='bottom', rc='label')\n @_docstring.copy(_suplabels)\n def supxlabel(self, t, **kwargs):\n # docstring from _suplabels...\n info = {'name': '_supxlabel', 'x0': 0.5, 'y0': 0.01,\n 'ha': 'center', 'va': 'bottom', 'rotation': 0,\n 'size': 'figure.labelsize', 'weight': 'figure.labelweight'}\n return self._suplabels(t, info, **kwargs)\n\n @_docstring.Substitution(x0=0.02, y0=0.5, name='supylabel', ha='left',\n va='center', rc='label')\n @_docstring.copy(_suplabels)\n def supylabel(self, t, **kwargs):\n # docstring from _suplabels...\n info = {'name': '_supylabel', 'x0': 0.02, 'y0': 0.5,\n 'ha': 'left', 'va': 'center', 'rotation': 'vertical',\n 'rotation_mode': 'anchor', 'size': 'figure.labelsize',\n 'weight': 'figure.labelweight'}\n return self._suplabels(t, info, **kwargs)\n\n def get_edgecolor(self):\n \"\"\"Get the edge color of the Figure rectangle.\"\"\"\n return self.patch.get_edgecolor()\n\n def get_facecolor(self):\n \"\"\"Get the face color of the Figure rectangle.\"\"\"\n return self.patch.get_facecolor()\n\n def get_frameon(self):\n \"\"\"\n Return the figure's background patch visibility, i.e.\n whether the figure background will be drawn. 
Equivalent to\n ``Figure.patch.get_visible()``.\n \"\"\"\n return self.patch.get_visible()\n\n def set_linewidth(self, linewidth):\n \"\"\"\n Set the line width of the Figure rectangle.\n\n Parameters\n ----------\n linewidth : number\n \"\"\"\n self.patch.set_linewidth(linewidth)\n\n def get_linewidth(self):\n \"\"\"\n Get the line width of the Figure rectangle.\n \"\"\"\n return self.patch.get_linewidth()\n\n def set_edgecolor(self, color):\n \"\"\"\n Set the edge color of the Figure rectangle.\n\n Parameters\n ----------\n color : color\n \"\"\"\n self.patch.set_edgecolor(color)\n\n def set_facecolor(self, color):\n \"\"\"\n Set the face color of the Figure rectangle.\n\n Parameters\n ----------\n color : color\n \"\"\"\n self.patch.set_facecolor(color)\n\n def set_frameon(self, b):\n \"\"\"\n Set the figure's background patch visibility, i.e.\n whether the figure background will be drawn. Equivalent to\n ``Figure.patch.set_visible()``.\n\n Parameters\n ----------\n b : bool\n \"\"\"\n self.patch.set_visible(b)\n self.stale = True\n\n frameon = property(get_frameon, set_frameon)\n\n def add_artist(self, artist, clip=False):\n \"\"\"\n Add an `.Artist` to the figure.\n\n Usually artists are added to `~.axes.Axes` objects using\n `.Axes.add_artist`; this method can be used in the rare cases where\n one needs to add artists directly to the figure instead.\n\n Parameters\n ----------\n artist : `~matplotlib.artist.Artist`\n The artist to add to the figure. If the added artist has no\n transform previously set, its transform will be set to\n ``figure.transSubfigure``.\n clip : bool, default: False\n Whether the added artist should be clipped by the figure patch.\n\n Returns\n -------\n `~matplotlib.artist.Artist`\n The added artist.\n \"\"\"\n artist.set_figure(self)\n self.artists.append(artist)\n artist._remove_method = self.artists.remove\n\n if not artist.is_transform_set():\n artist.set_transform(self.transSubfigure)\n\n if clip and artist.get_clip_path() is None:\n artist.set_clip_path(self.patch)\n\n self.stale = True\n return artist\n\n @_docstring.dedent_interpd\n def add_axes(self, *args, **kwargs):\n \"\"\"\n Add an `~.axes.Axes` to the figure.\n\n Call signatures::\n\n add_axes(rect, projection=None, polar=False, **kwargs)\n add_axes(ax)\n\n Parameters\n ----------\n rect : tuple (left, bottom, width, height)\n The dimensions (left, bottom, width, height) of the new\n `~.axes.Axes`. All quantities are in fractions of figure width and\n height.\n\n projection : {None, 'aitoff', 'hammer', 'lambert', 'mollweide', \\\n'polar', 'rectilinear', str}, optional\n The projection type of the `~.axes.Axes`. *str* is the name of\n a custom projection, see `~matplotlib.projections`. The default\n None results in a 'rectilinear' projection.\n\n polar : bool, default: False\n If True, equivalent to projection='polar'.\n\n axes_class : subclass type of `~.axes.Axes`, optional\n The `.axes.Axes` subclass that is instantiated. This parameter\n is incompatible with *projection* and *polar*. See\n :ref:`axisartist_users-guide-index` for examples.\n\n sharex, sharey : `~.axes.Axes`, optional\n Share the x or y `~matplotlib.axis` with sharex and/or sharey.\n The axis will have the same limits, ticks, and scale as the axis\n of the shared axes.\n\n label : str\n A label for the returned Axes.\n\n Returns\n -------\n `~.axes.Axes`, or a subclass of `~.axes.Axes`\n The returned axes class depends on the projection used. 
It is\n `~.axes.Axes` if rectilinear projection is used and\n `.projections.polar.PolarAxes` if polar projection is used.\n\n Other Parameters\n ----------------\n **kwargs\n This method also takes the keyword arguments for\n the returned Axes class. The keyword arguments for the\n rectilinear Axes class `~.axes.Axes` can be found in\n the following table but there might also be other keyword\n arguments if another projection is used, see the actual Axes\n class.\n\n %(Axes:kwdoc)s\n\n Notes\n -----\n In rare circumstances, `.add_axes` may be called with a single\n argument, an Axes instance already created in the present figure but\n not in the figure's list of Axes.\n\n See Also\n --------\n .Figure.add_subplot\n .pyplot.subplot\n .pyplot.axes\n .Figure.subplots\n .pyplot.subplots\n\n Examples\n --------\n Some simple examples::\n\n rect = l, b, w, h\n fig = plt.figure()\n fig.add_axes(rect)\n fig.add_axes(rect, frameon=False, facecolor='g')\n fig.add_axes(rect, polar=True)\n ax = fig.add_axes(rect, projection='polar')\n fig.delaxes(ax)\n fig.add_axes(ax)\n \"\"\"\n\n if not len(args) and 'rect' not in kwargs:\n raise TypeError(\n \"add_axes() missing 1 required positional argument: 'rect'\")\n elif 'rect' in kwargs:\n if len(args):\n raise TypeError(\n \"add_axes() got multiple values for argument 'rect'\")\n args = (kwargs.pop('rect'), )\n\n if isinstance(args[0], Axes):\n a = args[0]\n key = a._projection_init\n if a.get_figure() is not self:\n raise ValueError(\n \"The Axes must have been created in the present figure\")\n else:\n rect = args[0]\n if not np.isfinite(rect).all():\n raise ValueError('all entries in rect must be finite '\n f'not {rect}')\n projection_class, pkw = self._process_projection_requirements(\n *args, **kwargs)\n\n # create the new axes using the axes class given\n a = projection_class(self, rect, **pkw)\n key = (projection_class, pkw)\n return self._add_axes_internal(a, key)\n\n @_docstring.dedent_interpd\n def add_subplot(self, *args, **kwargs):\n \"\"\"\n Add an `~.axes.Axes` to the figure as part of a subplot arrangement.\n\n Call signatures::\n\n add_subplot(nrows, ncols, index, **kwargs)\n add_subplot(pos, **kwargs)\n add_subplot(ax)\n add_subplot()\n\n Parameters\n ----------\n *args : int, (int, int, *index*), or `.SubplotSpec`, default: (1, 1, 1)\n The position of the subplot described by one of\n\n - Three integers (*nrows*, *ncols*, *index*). The subplot will\n take the *index* position on a grid with *nrows* rows and\n *ncols* columns. *index* starts at 1 in the upper left corner\n and increases to the right. *index* can also be a two-tuple\n specifying the (*first*, *last*) indices (1-based, and including\n *last*) of the subplot, e.g., ``fig.add_subplot(3, 1, (1, 2))``\n makes a subplot that spans the upper 2/3 of the figure.\n - A 3-digit integer. The digits are interpreted as if given\n separately as three single-digit integers, i.e.\n ``fig.add_subplot(235)`` is the same as\n ``fig.add_subplot(2, 3, 5)``. Note that this can only be used\n if there are no more than 9 subplots.\n - A `.SubplotSpec`.\n\n In rare circumstances, `.add_subplot` may be called with a single\n argument, a subplot Axes instance already created in the\n present figure but not in the figure's list of Axes.\n\n projection : {None, 'aitoff', 'hammer', 'lambert', 'mollweide', \\\n'polar', 'rectilinear', str}, optional\n The projection type of the subplot (`~.axes.Axes`). *str* is the\n name of a custom projection, see `~matplotlib.projections`. 
The\n default None results in a 'rectilinear' projection.\n\n polar : bool, default: False\n If True, equivalent to projection='polar'.\n\n axes_class : subclass type of `~.axes.Axes`, optional\n The `.axes.Axes` subclass that is instantiated. This parameter\n is incompatible with *projection* and *polar*. See\n :ref:`axisartist_users-guide-index` for examples.\n\n sharex, sharey : `~.axes.Axes`, optional\n Share the x or y `~matplotlib.axis` with sharex and/or sharey.\n The axis will have the same limits, ticks, and scale as the axis\n of the shared axes.\n\n label : str\n A label for the returned Axes.\n\n Returns\n -------\n `~.axes.Axes`\n\n The Axes of the subplot. The returned Axes can actually be an\n instance of a subclass, such as `.projections.polar.PolarAxes` for\n polar projections.\n\n Other Parameters\n ----------------\n **kwargs\n This method also takes the keyword arguments for the returned Axes\n base class; except for the *figure* argument. The keyword arguments\n for the rectilinear base class `~.axes.Axes` can be found in\n the following table but there might also be other keyword\n arguments if another projection is used.\n\n %(Axes:kwdoc)s\n\n See Also\n --------\n .Figure.add_axes\n .pyplot.subplot\n .pyplot.axes\n .Figure.subplots\n .pyplot.subplots\n\n Examples\n --------\n ::\n\n fig = plt.figure()\n\n fig.add_subplot(231)\n ax1 = fig.add_subplot(2, 3, 1) # equivalent but more general\n\n fig.add_subplot(232, frameon=False) # subplot with no frame\n fig.add_subplot(233, projection='polar') # polar subplot\n fig.add_subplot(234, sharex=ax1) # subplot sharing x-axis with ax1\n fig.add_subplot(235, facecolor=\"red\") # red subplot\n\n ax1.remove() # delete ax1 from the figure\n fig.add_subplot(ax1) # add ax1 back to the figure\n \"\"\"\n if 'figure' in kwargs:\n # Axes itself allows for a 'figure' kwarg, but since we want to\n # bind the created Axes to self, it is not allowed here.\n raise _api.kwarg_error(\"add_subplot\", \"figure\")\n\n if (len(args) == 1\n and isinstance(args[0], mpl.axes._base._AxesBase)\n and args[0].get_subplotspec()):\n ax = args[0]\n key = ax._projection_init\n if ax.get_figure() is not self:\n raise ValueError(\"The Axes must have been created in \"\n \"the present figure\")\n else:\n if not args:\n args = (1, 1, 1)\n # Normalize correct ijk values to (i, j, k) here so that\n # add_subplot(211) == add_subplot(2, 1, 1). 
Invalid values will\n # trigger errors later (via SubplotSpec._from_subplot_args).\n if (len(args) == 1 and isinstance(args[0], Integral)\n and 100 <= args[0] <= 999):\n args = tuple(map(int, str(args[0])))\n projection_class, pkw = self._process_projection_requirements(\n *args, **kwargs)\n ax = projection_class(self, *args, **pkw)\n key = (projection_class, pkw)\n return self._add_axes_internal(ax, key)\n\n def _add_axes_internal(self, ax, key):\n \"\"\"Private helper for `add_axes` and `add_subplot`.\"\"\"\n self._axstack.add(ax)\n if ax not in self._localaxes:\n self._localaxes.append(ax)\n self.sca(ax)\n ax._remove_method = self.delaxes\n # this is to support plt.subplot's re-selection logic\n ax._projection_init = key\n self.stale = True\n ax.stale_callback = _stale_figure_callback\n return ax\n\n def subplots(self, nrows=1, ncols=1, *, sharex=False, sharey=False,\n squeeze=True, width_ratios=None, height_ratios=None,\n subplot_kw=None, gridspec_kw=None):\n \"\"\"\n Add a set of subplots to this figure.\n\n This utility wrapper makes it convenient to create common layouts of\n subplots in a single call.\n\n Parameters\n ----------\n nrows, ncols : int, default: 1\n Number of rows/columns of the subplot grid.\n\n sharex, sharey : bool or {'none', 'all', 'row', 'col'}, default: False\n Controls sharing of x-axis (*sharex*) or y-axis (*sharey*):\n\n - True or 'all': x- or y-axis will be shared among all subplots.\n - False or 'none': each subplot x- or y-axis will be independent.\n - 'row': each subplot row will share an x- or y-axis.\n - 'col': each subplot column will share an x- or y-axis.\n\n When subplots have a shared x-axis along a column, only the x tick\n labels of the bottom subplot are created. Similarly, when subplots\n have a shared y-axis along a row, only the y tick labels of the\n first column subplot are created. To later turn other subplots'\n ticklabels on, use `~matplotlib.axes.Axes.tick_params`.\n\n When subplots have a shared axis that has units, calling\n `.Axis.set_units` will update each axis with the new units.\n\n squeeze : bool, default: True\n - If True, extra dimensions are squeezed out from the returned\n array of Axes:\n\n - if only one subplot is constructed (nrows=ncols=1), the\n resulting single Axes object is returned as a scalar.\n - for Nx1 or 1xM subplots, the returned object is a 1D numpy\n object array of Axes objects.\n - for NxM, subplots with N>1 and M>1 are returned as a 2D array.\n\n - If False, no squeezing at all is done: the returned Axes object\n is always a 2D array containing Axes instances, even if it ends\n up being 1x1.\n\n width_ratios : array-like of length *ncols*, optional\n Defines the relative widths of the columns. Each column gets a\n relative width of ``width_ratios[i] / sum(width_ratios)``.\n If not given, all columns will have the same width. Equivalent\n to ``gridspec_kw={'width_ratios': [...]}``.\n\n height_ratios : array-like of length *nrows*, optional\n Defines the relative heights of the rows. Each row gets a\n relative height of ``height_ratios[i] / sum(height_ratios)``.\n If not given, all rows will have the same height. 
Equivalent\n to ``gridspec_kw={'height_ratios': [...]}``.\n\n subplot_kw : dict, optional\n Dict with keywords passed to the `.Figure.add_subplot` call used to\n create each subplot.\n\n gridspec_kw : dict, optional\n Dict with keywords passed to the\n `~matplotlib.gridspec.GridSpec` constructor used to create\n the grid the subplots are placed on.\n\n Returns\n -------\n `~.axes.Axes` or array of Axes\n Either a single `~matplotlib.axes.Axes` object or an array of Axes\n objects if more than one subplot was created. The dimensions of the\n resulting array can be controlled with the *squeeze* keyword, see\n above.\n\n See Also\n --------\n .pyplot.subplots\n .Figure.add_subplot\n .pyplot.subplot\n\n Examples\n --------\n ::\n\n # First create some toy data:\n x = np.linspace(0, 2*np.pi, 400)\n y = np.sin(x**2)\n\n # Create a figure\n plt.figure()\n\n # Create a subplot\n ax = fig.subplots()\n ax.plot(x, y)\n ax.set_title('Simple plot')\n\n # Create two subplots and unpack the output array immediately\n ax1, ax2 = fig.subplots(1, 2, sharey=True)\n ax1.plot(x, y)\n ax1.set_title('Sharing Y axis')\n ax2.scatter(x, y)\n\n # Create four polar Axes and access them through the returned array\n axes = fig.subplots(2, 2, subplot_kw=dict(projection='polar'))\n axes[0, 0].plot(x, y)\n axes[1, 1].scatter(x, y)\n\n # Share an X-axis with each column of subplots\n fig.subplots(2, 2, sharex='col')\n\n # Share a Y-axis with each row of subplots\n fig.subplots(2, 2, sharey='row')\n\n # Share both X- and Y-axes with all subplots\n fig.subplots(2, 2, sharex='all', sharey='all')\n\n # Note that this is the same as\n fig.subplots(2, 2, sharex=True, sharey=True)\n \"\"\"\n gridspec_kw = dict(gridspec_kw or {})\n if height_ratios is not None:\n if 'height_ratios' in gridspec_kw:\n raise ValueError(\"'height_ratios' must not be defined both as \"\n \"parameter and as key in 'gridspec_kw'\")\n gridspec_kw['height_ratios'] = height_ratios\n if width_ratios is not None:\n if 'width_ratios' in gridspec_kw:\n raise ValueError(\"'width_ratios' must not be defined both as \"\n \"parameter and as key in 'gridspec_kw'\")\n gridspec_kw['width_ratios'] = width_ratios\n\n gs = self.add_gridspec(nrows, ncols, figure=self, **gridspec_kw)\n axs = gs.subplots(sharex=sharex, sharey=sharey, squeeze=squeeze,\n subplot_kw=subplot_kw)\n return axs\n\n def delaxes(self, ax):\n \"\"\"\n Remove the `~.axes.Axes` *ax* from the figure; update the current Axes.\n \"\"\"\n\n def _reset_locators_and_formatters(axis):\n # Set the formatters and locators to be associated with axis\n # (where previously they may have been associated with another\n # Axis instance)\n axis.get_major_formatter().set_axis(axis)\n axis.get_major_locator().set_axis(axis)\n axis.get_minor_formatter().set_axis(axis)\n axis.get_minor_locator().set_axis(axis)\n\n def _break_share_link(ax, grouper):\n siblings = grouper.get_siblings(ax)\n if len(siblings) > 1:\n grouper.remove(ax)\n for last_ax in siblings:\n if ax is not last_ax:\n return last_ax\n return None\n\n self._axstack.remove(ax)\n self._axobservers.process(\"_axes_change_event\", self)\n self.stale = True\n self._localaxes.remove(ax)\n self.canvas.release_mouse(ax)\n\n # Break link between any shared axes\n for name in ax._axis_names:\n last_ax = _break_share_link(ax, ax._shared_axes[name])\n if last_ax is not None:\n _reset_locators_and_formatters(last_ax._axis_map[name])\n\n # Break link between any twinned axes\n _break_share_link(ax, ax._twinned_axes)\n\n def clear(self, keep_observers=False):\n \"\"\"\n 
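# A minimal usage sketch of the Figure.subplots / Figure.delaxes API documented
# above; assumes a standard matplotlib install -- illustrative only, not taken
# from figure.py. width_ratios is folded into gridspec_kw internally, so the
# two spellings below are equivalent (passing ratios both ways raises ValueError).
import matplotlib.pyplot as plt

fig = plt.figure()
axs = fig.subplots(1, 2, sharey=True, width_ratios=[2, 1])
# equivalent: fig.subplots(1, 2, sharey=True, gridspec_kw={'width_ratios': [2, 1]})
axs[0].plot([0, 1, 2], [0, 1, 4])
fig.delaxes(axs[1])          # drop the narrow axes; shared-axis links are broken
assert len(fig.axes) == 1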
Clear the figure.\n\n Parameters\n ----------\n keep_observers : bool, default: False\n Set *keep_observers* to True if, for example,\n a gui widget is tracking the Axes in the figure.\n \"\"\"\n self.suppressComposite = None\n\n # first clear the axes in any subfigures\n for subfig in self.subfigs:\n subfig.clear(keep_observers=keep_observers)\n self.subfigs = []\n\n for ax in tuple(self.axes): # Iterate over the copy.\n ax.clear()\n self.delaxes(ax) # Remove ax from self._axstack.\n\n self.artists = []\n self.lines = []\n self.patches = []\n self.texts = []\n self.images = []\n self.legends = []\n if not keep_observers:\n self._axobservers = cbook.CallbackRegistry()\n self._suptitle = None\n self._supxlabel = None\n self._supylabel = None\n\n self.stale = True\n\n # synonym for `clear`.\n def clf(self, keep_observers=False):\n \"\"\"\n [*Discouraged*] Alias for the `clear()` method.\n\n .. admonition:: Discouraged\n\n The use of ``clf()`` is discouraged. Use ``clear()`` instead.\n\n Parameters\n ----------\n keep_observers : bool, default: False\n Set *keep_observers* to True if, for example,\n a gui widget is tracking the Axes in the figure.\n \"\"\"\n return self.clear(keep_observers=keep_observers)\n\n # Note: the docstring below is modified with replace for the pyplot\n # version of this function because the method name differs (plt.figlegend)\n # the replacements are:\n # \" legend(\" -> \" figlegend(\" for the signatures\n # \"fig.legend(\" -> \"plt.figlegend\" for the code examples\n # \"ax.plot\" -> \"plt.plot\" for consistency in using pyplot when able\n @_docstring.dedent_interpd\n def legend(self, *args, **kwargs):\n \"\"\"\n Place a legend on the figure.\n\n Call signatures::\n\n legend()\n legend(handles, labels)\n legend(handles=handles)\n legend(labels)\n\n The call signatures correspond to the following different ways to use\n this method:\n\n **1. Automatic detection of elements to be shown in the legend**\n\n The elements to be added to the legend are automatically determined,\n when you do not pass in any extra arguments.\n\n In this case, the labels are taken from the artist. You can specify\n them either at artist creation or by calling the\n :meth:`~.Artist.set_label` method on the artist::\n\n ax.plot([1, 2, 3], label='Inline label')\n fig.legend()\n\n or::\n\n line, = ax.plot([1, 2, 3])\n line.set_label('Label via method')\n fig.legend()\n\n Specific lines can be excluded from the automatic legend element\n selection by defining a label starting with an underscore.\n This is default for all artists, so calling `.Figure.legend` without\n any arguments and without setting the labels manually will result in\n no legend being drawn.\n\n\n **2. Explicitly listing the artists and labels in the legend**\n\n For full control of which artists have a legend entry, it is possible\n to pass an iterable of legend artists followed by an iterable of\n legend labels respectively::\n\n fig.legend([line1, line2, line3], ['label1', 'label2', 'label3'])\n\n\n **3. Explicitly listing the artists in the legend**\n\n This is similar to 2, but the labels are taken from the artists'\n label properties. Example::\n\n line1, = ax1.plot([1, 2, 3], label='label1')\n line2, = ax2.plot([1, 2, 3], label='label2')\n fig.legend(handles=[line1, line2])\n\n\n **4. Labeling existing plot elements**\n\n .. 
admonition:: Discouraged\n\n This call signature is discouraged, because the relation between\n plot elements and labels is only implicit by their order and can\n easily be mixed up.\n\n To make a legend for all artists on all Axes, call this function with\n an iterable of strings, one for each legend item. For example::\n\n fig, (ax1, ax2) = plt.subplots(1, 2)\n ax1.plot([1, 3, 5], color='blue')\n ax2.plot([2, 4, 6], color='red')\n fig.legend(['the blues', 'the reds'])\n\n\n Parameters\n ----------\n handles : list of `.Artist`, optional\n A list of Artists (lines, patches) to be added to the legend.\n Use this together with *labels*, if you need full control on what\n is shown in the legend and the automatic mechanism described above\n is not sufficient.\n\n The length of handles and labels should be the same in this\n case. If they are not, they are truncated to the smaller length.\n\n labels : list of str, optional\n A list of labels to show next to the artists.\n Use this together with *handles*, if you need full control on what\n is shown in the legend and the automatic mechanism described above\n is not sufficient.\n\n Returns\n -------\n `~matplotlib.legend.Legend`\n\n Other Parameters\n ----------------\n %(_legend_kw_figure)s\n\n See Also\n --------\n .Axes.legend\n\n Notes\n -----\n Some artists are not supported by this function. See\n :ref:`legend_guide` for details.\n \"\"\"\n\n handles, labels, extra_args, kwargs = mlegend._parse_legend_args(\n self.axes,\n *args,\n **kwargs)\n # check for third arg\n if len(extra_args):\n # _api.warn_deprecated(\n # \"2.1\",\n # message=\"Figure.legend will accept no more than two \"\n # \"positional arguments in the future. Use \"\n # \"'fig.legend(handles, labels, loc=location)' \"\n # \"instead.\")\n # kwargs['loc'] = extra_args[0]\n # extra_args = extra_args[1:]\n pass\n transform = kwargs.pop('bbox_transform', self.transSubfigure)\n # explicitly set the bbox transform if the user hasn't.\n l = mlegend.Legend(self, handles, labels, *extra_args,\n bbox_transform=transform, **kwargs)\n self.legends.append(l)\n l._remove_method = self.legends.remove\n self.stale = True\n return l\n\n @_docstring.dedent_interpd\n def text(self, x, y, s, fontdict=None, **kwargs):\n \"\"\"\n Add text to figure.\n\n Parameters\n ----------\n x, y : float\n The position to place the text. By default, this is in figure\n coordinates, floats in [0, 1]. The coordinate system can be changed\n using the *transform* keyword.\n\n s : str\n The text string.\n\n fontdict : dict, optional\n A dictionary to override the default text properties. If not given,\n the defaults are determined by :rc:`font.*`. 
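# A minimal sketch of the figure-level legend implemented above; assumes a
# standard matplotlib install -- illustrative only, not taken from figure.py.
# With no arguments, labels are gathered from labeled artists on every axes.
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot([1, 2, 3], label='left line')
ax2.plot([3, 2, 1], label='right line')
fig.legend(loc='lower center', ncols=2)   # one legend covering both axes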
Properties passed as\n *kwargs* override the corresponding ones given in *fontdict*.\n\n Returns\n -------\n `~.text.Text`\n\n Other Parameters\n ----------------\n **kwargs : `~matplotlib.text.Text` properties\n Other miscellaneous text parameters.\n\n %(Text:kwdoc)s\n\n See Also\n --------\n .Axes.text\n .pyplot.text\n \"\"\"\n effective_kwargs = {\n 'transform': self.transSubfigure,\n **(fontdict if fontdict is not None else {}),\n **kwargs,\n }\n text = Text(x=x, y=y, text=s, **effective_kwargs)\n text.set_figure(self)\n text.stale_callback = _stale_figure_callback\n\n self.texts.append(text)\n text._remove_method = self.texts.remove\n self.stale = True\n return text\n\n @_docstring.dedent_interpd\n def colorbar(\n self, mappable, cax=None, ax=None, use_gridspec=True, **kwargs):\n \"\"\"\n Add a colorbar to a plot.\n\n Parameters\n ----------\n mappable\n The `matplotlib.cm.ScalarMappable` (i.e., `.AxesImage`,\n `.ContourSet`, etc.) described by this colorbar. This argument is\n mandatory for the `.Figure.colorbar` method but optional for the\n `.pyplot.colorbar` function, which sets the default to the current\n image.\n\n Note that one can create a `.ScalarMappable` \"on-the-fly\" to\n generate colorbars not attached to a previously drawn artist, e.g.\n ::\n\n fig.colorbar(cm.ScalarMappable(norm=norm, cmap=cmap), ax=ax)\n\n cax : `~matplotlib.axes.Axes`, optional\n Axes into which the colorbar will be drawn.\n\n ax : `~.axes.Axes` or iterable or `numpy.ndarray` of Axes, optional\n One or more parent axes from which space for a new colorbar axes\n will be stolen, if *cax* is None. This has no effect if *cax* is\n set.\n\n use_gridspec : bool, optional\n If *cax* is ``None``, a new *cax* is created as an instance of\n Axes. If *ax* is positioned with a subplotspec and *use_gridspec*\n is ``True``, then *cax* is also positioned with a subplotspec.\n\n Returns\n -------\n colorbar : `~matplotlib.colorbar.Colorbar`\n\n Other Parameters\n ----------------\n %(_make_axes_kw_doc)s\n %(_colormap_kw_doc)s\n\n Notes\n -----\n If *mappable* is a `~.contour.ContourSet`, its *extend* kwarg is\n included automatically.\n\n The *shrink* kwarg provides a simple way to scale the colorbar with\n respect to the axes. Note that if *cax* is specified, it determines the\n size of the colorbar, and *shrink* and *aspect* are ignored.\n\n For more precise control, you can manually specify the positions of the\n axes objects in which the mappable and the colorbar are drawn. In this\n case, do not use any of the axes properties kwargs.\n\n It is known that some vector graphics viewers (svg and pdf) render\n white gaps between segments of the colorbar. This is due to bugs in\n the viewers, not Matplotlib. As a workaround, the colorbar can be\n rendered with overlapping segments::\n\n cbar = colorbar()\n cbar.solids.set_edgecolor(\"face\")\n draw()\n\n However, this has negative consequences in other circumstances, e.g.\n with semi-transparent images (alpha < 1) and colorbar extensions;\n therefore, this workaround is not used by default (see issue #1188).\n \"\"\"\n\n if ax is None:\n ax = getattr(mappable, \"axes\", None)\n\n if (self.get_layout_engine() is not None and\n not self.get_layout_engine().colorbar_gridspec):\n use_gridspec = False\n if cax is None:\n if ax is None:\n raise ValueError(\n 'Unable to determine Axes to steal space for Colorbar. 
'\n 'Either provide the *cax* argument to use as the Axes for '\n 'the Colorbar, provide the *ax* argument to steal space '\n 'from it, or add *mappable* to an Axes.')\n current_ax = self.gca()\n if (use_gridspec\n and isinstance(ax, mpl.axes._base._AxesBase)\n and ax.get_subplotspec()):\n cax, kwargs = cbar.make_axes_gridspec(ax, **kwargs)\n else:\n cax, kwargs = cbar.make_axes(ax, **kwargs)\n # make_axes calls add_{axes,subplot} which changes gca; undo that.\n self.sca(current_ax)\n cax.grid(visible=False, which='both', axis='both')\n\n NON_COLORBAR_KEYS = [ # remove kws that cannot be passed to Colorbar\n 'fraction', 'pad', 'shrink', 'aspect', 'anchor', 'panchor']\n cb = cbar.Colorbar(cax, mappable, **{\n k: v for k, v in kwargs.items() if k not in NON_COLORBAR_KEYS})\n self.stale = True\n return cb\n\n def subplots_adjust(self, left=None, bottom=None, right=None, top=None,\n wspace=None, hspace=None):\n \"\"\"\n Adjust the subplot layout parameters.\n\n Unset parameters are left unmodified; initial values are given by\n :rc:`figure.subplot.[name]`.\n\n Parameters\n ----------\n left : float, optional\n The position of the left edge of the subplots,\n as a fraction of the figure width.\n right : float, optional\n The position of the right edge of the subplots,\n as a fraction of the figure width.\n bottom : float, optional\n The position of the bottom edge of the subplots,\n as a fraction of the figure height.\n top : float, optional\n The position of the top edge of the subplots,\n as a fraction of the figure height.\n wspace : float, optional\n The width of the padding between subplots,\n as a fraction of the average Axes width.\n hspace : float, optional\n The height of the padding between subplots,\n as a fraction of the average Axes height.\n \"\"\"\n if (self.get_layout_engine() is not None and\n not self.get_layout_engine().adjust_compatible):\n _api.warn_external(\n \"This figure was using a layout engine that is \"\n \"incompatible with subplots_adjust and/or tight_layout; \"\n \"not calling subplots_adjust.\")\n return\n self.subplotpars.update(left, bottom, right, top, wspace, hspace)\n for ax in self.axes:\n if ax.get_subplotspec() is not None:\n ax._set_position(ax.get_subplotspec().get_position(self))\n self.stale = True\n\n def align_xlabels(self, axs=None):\n \"\"\"\n Align the xlabels of subplots in the same subplot column if label\n alignment is being done automatically (i.e. the label position is\n not manually set).\n\n Alignment persists for draw events after this is called.\n\n If a label is on the bottom, it is aligned with labels on Axes that\n also have their label on the bottom and that have the same\n bottom-most subplot row. 
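# A minimal sketch of Figure.colorbar and Figure.subplots_adjust as implemented
# above; assumes a standard matplotlib install -- illustrative only, not taken
# from figure.py. Without *cax*, space for the colorbar is stolen from *ax*.
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
im = ax.imshow(np.random.default_rng(0).random((10, 10)))
fig.colorbar(im, ax=ax, shrink=0.8)        # steals space from *ax*
fig.subplots_adjust(left=0.1, right=0.8)   # warns and does nothing if an
                                           # incompatible layout engine is active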
If the label is on the top,\n it is aligned with labels on Axes with the same top-most row.\n\n Parameters\n ----------\n axs : list of `~matplotlib.axes.Axes`\n Optional list of (or `~numpy.ndarray`) `~matplotlib.axes.Axes`\n to align the xlabels.\n Default is to align all Axes on the figure.\n\n See Also\n --------\n matplotlib.figure.Figure.align_ylabels\n matplotlib.figure.Figure.align_labels\n\n Notes\n -----\n This assumes that ``axs`` are from the same `.GridSpec`, so that\n their `.SubplotSpec` positions correspond to figure positions.\n\n Examples\n --------\n Example with rotated xtick labels::\n\n fig, axs = plt.subplots(1, 2)\n for tick in axs[0].get_xticklabels():\n tick.set_rotation(55)\n axs[0].set_xlabel('XLabel 0')\n axs[1].set_xlabel('XLabel 1')\n fig.align_xlabels()\n \"\"\"\n if axs is None:\n axs = self.axes\n axs = [ax for ax in np.ravel(axs) if ax.get_subplotspec() is not None]\n for ax in axs:\n _log.debug(' Working on: %s', ax.get_xlabel())\n rowspan = ax.get_subplotspec().rowspan\n pos = ax.xaxis.get_label_position() # top or bottom\n # Search through other axes for label positions that are same as\n # this one and that share the appropriate row number.\n # Add to a grouper associated with each axes of siblings.\n # This list is inspected in `axis.draw` by\n # `axis._update_label_position`.\n for axc in axs:\n if axc.xaxis.get_label_position() == pos:\n rowspanc = axc.get_subplotspec().rowspan\n if (pos == 'top' and rowspan.start == rowspanc.start or\n pos == 'bottom' and rowspan.stop == rowspanc.stop):\n # grouper for groups of xlabels to align\n self._align_label_groups['x'].join(ax, axc)\n\n def align_ylabels(self, axs=None):\n \"\"\"\n Align the ylabels of subplots in the same subplot column if label\n alignment is being done automatically (i.e. the label position is\n not manually set).\n\n Alignment persists for draw events after this is called.\n\n If a label is on the left, it is aligned with labels on Axes that\n also have their label on the left and that have the same\n left-most subplot column. 
If the label is on the right,\n it is aligned with labels on Axes with the same right-most column.\n\n Parameters\n ----------\n axs : list of `~matplotlib.axes.Axes`\n Optional list (or `~numpy.ndarray`) of `~matplotlib.axes.Axes`\n to align the ylabels.\n Default is to align all Axes on the figure.\n\n See Also\n --------\n matplotlib.figure.Figure.align_xlabels\n matplotlib.figure.Figure.align_labels\n\n Notes\n -----\n This assumes that ``axs`` are from the same `.GridSpec`, so that\n their `.SubplotSpec` positions correspond to figure positions.\n\n Examples\n --------\n Example with large yticks labels::\n\n fig, axs = plt.subplots(2, 1)\n axs[0].plot(np.arange(0, 1000, 50))\n axs[0].set_ylabel('YLabel 0')\n axs[1].set_ylabel('YLabel 1')\n fig.align_ylabels()\n \"\"\"\n if axs is None:\n axs = self.axes\n axs = [ax for ax in np.ravel(axs) if ax.get_subplotspec() is not None]\n for ax in axs:\n _log.debug(' Working on: %s', ax.get_ylabel())\n colspan = ax.get_subplotspec().colspan\n pos = ax.yaxis.get_label_position() # left or right\n # Search through other axes for label positions that are same as\n # this one and that share the appropriate column number.\n # Add to a list associated with each axes of siblings.\n # This list is inspected in `axis.draw` by\n # `axis._update_label_position`.\n for axc in axs:\n if axc.yaxis.get_label_position() == pos:\n colspanc = axc.get_subplotspec().colspan\n if (pos == 'left' and colspan.start == colspanc.start or\n pos == 'right' and colspan.stop == colspanc.stop):\n # grouper for groups of ylabels to align\n self._align_label_groups['y'].join(ax, axc)\n\n def align_labels(self, axs=None):\n \"\"\"\n Align the xlabels and ylabels of subplots with the same subplots\n row or column (respectively) if label alignment is being\n done automatically (i.e. the label position is not manually set).\n\n Alignment persists for draw events after this is called.\n\n Parameters\n ----------\n axs : list of `~matplotlib.axes.Axes`\n Optional list (or `~numpy.ndarray`) of `~matplotlib.axes.Axes`\n to align the labels.\n Default is to align all Axes on the figure.\n\n See Also\n --------\n matplotlib.figure.Figure.align_xlabels\n\n matplotlib.figure.Figure.align_ylabels\n \"\"\"\n self.align_xlabels(axs=axs)\n self.align_ylabels(axs=axs)\n\n def add_gridspec(self, nrows=1, ncols=1, **kwargs):\n \"\"\"\n Return a `.GridSpec` that has this figure as a parent. 
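# A minimal sketch of Figure.align_labels documented above; assumes a standard
# matplotlib install -- illustrative only, not taken from figure.py. Labels in
# the same grid column/row are grouped so they line up at draw time.
import numpy as np
import matplotlib.pyplot as plt

fig, axs = plt.subplots(2, 1)
axs[0].plot(np.arange(0, 1_000_000, 1000))   # long y tick labels
axs[0].set_ylabel('counts')
axs[1].plot(np.linspace(0, 1, 10))
axs[1].set_ylabel('fraction')
axs[1].set_xlabel('sample')
fig.align_labels()                            # align_xlabels() + align_ylabels()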
This allows\n complex layout of Axes in the figure.\n\n Parameters\n ----------\n nrows : int, default: 1\n Number of rows in grid.\n\n ncols : int, default: 1\n Number of columns in grid.\n\n Returns\n -------\n `.GridSpec`\n\n Other Parameters\n ----------------\n **kwargs\n Keyword arguments are passed to `.GridSpec`.\n\n See Also\n --------\n matplotlib.pyplot.subplots\n\n Examples\n --------\n Adding a subplot that spans two rows::\n\n fig = plt.figure()\n gs = fig.add_gridspec(2, 2)\n ax1 = fig.add_subplot(gs[0, 0])\n ax2 = fig.add_subplot(gs[1, 0])\n # spans two rows:\n ax3 = fig.add_subplot(gs[:, 1])\n\n \"\"\"\n\n _ = kwargs.pop('figure', None) # pop in case user has added this...\n gs = GridSpec(nrows=nrows, ncols=ncols, figure=self, **kwargs)\n return gs\n\n def subfigures(self, nrows=1, ncols=1, squeeze=True,\n wspace=None, hspace=None,\n width_ratios=None, height_ratios=None,\n **kwargs):\n \"\"\"\n Add a set of subfigures to this figure or subfigure.\n\n A subfigure has the same artist methods as a figure, and is logically\n the same as a figure, but cannot print itself.\n See :doc:`/gallery/subplots_axes_and_figures/subfigures`.\n\n Parameters\n ----------\n nrows, ncols : int, default: 1\n Number of rows/columns of the subfigure grid.\n\n squeeze : bool, default: True\n If True, extra dimensions are squeezed out from the returned\n array of subfigures.\n\n wspace, hspace : float, default: None\n The amount of width/height reserved for space between subfigures,\n expressed as a fraction of the average subfigure width/height.\n If not given, the values will be inferred from a figure or\n rcParams when necessary.\n\n width_ratios : array-like of length *ncols*, optional\n Defines the relative widths of the columns. Each column gets a\n relative width of ``width_ratios[i] / sum(width_ratios)``.\n If not given, all columns will have the same width.\n\n height_ratios : array-like of length *nrows*, optional\n Defines the relative heights of the rows. Each row gets a\n relative height of ``height_ratios[i] / sum(height_ratios)``.\n If not given, all rows will have the same height.\n \"\"\"\n gs = GridSpec(nrows=nrows, ncols=ncols, figure=self,\n wspace=wspace, hspace=hspace,\n width_ratios=width_ratios,\n height_ratios=height_ratios)\n\n sfarr = np.empty((nrows, ncols), dtype=object)\n for i in range(ncols):\n for j in range(nrows):\n sfarr[j, i] = self.add_subfigure(gs[j, i], **kwargs)\n\n if squeeze:\n # Discarding unneeded dimensions that equal 1. 
If we only have one\n # subfigure, just return it instead of a 1-element array.\n return sfarr.item() if sfarr.size == 1 else sfarr.squeeze()\n else:\n # Returned axis array will be always 2-d, even if nrows=ncols=1.\n return sfarr\n\n def add_subfigure(self, subplotspec, **kwargs):\n \"\"\"\n Add a `.SubFigure` to the figure as part of a subplot arrangement.\n\n Parameters\n ----------\n subplotspec : `.gridspec.SubplotSpec`\n Defines the region in a parent gridspec where the subfigure will\n be placed.\n\n Returns\n -------\n `.SubFigure`\n\n Other Parameters\n ----------------\n **kwargs\n Are passed to the `.SubFigure` object.\n\n See Also\n --------\n .Figure.subfigures\n \"\"\"\n sf = SubFigure(self, subplotspec, **kwargs)\n self.subfigs += [sf]\n return sf\n\n def sca(self, a):\n \"\"\"Set the current Axes to be *a* and return *a*.\"\"\"\n self._axstack.bubble(a)\n self._axobservers.process(\"_axes_change_event\", self)\n return a\n\n def gca(self):\n \"\"\"\n Get the current Axes.\n\n If there is currently no Axes on this Figure, a new one is created\n using `.Figure.add_subplot`. (To test whether there is currently an\n Axes on a Figure, check whether ``figure.axes`` is empty. To test\n whether there is currently a Figure on the pyplot figure stack, check\n whether `.pyplot.get_fignums()` is empty.)\n \"\"\"\n ax = self._axstack.current()\n return ax if ax is not None else self.add_subplot()\n\n def _gci(self):\n # Helper for `~matplotlib.pyplot.gci`. Do not use elsewhere.\n \"\"\"\n Get the current colorable artist.\n\n Specifically, returns the current `.ScalarMappable` instance (`.Image`\n created by `imshow` or `figimage`, `.Collection` created by `pcolor` or\n `scatter`, etc.), or *None* if no such instance has been defined.\n\n The current image is an attribute of the current Axes, or the nearest\n earlier Axes in the current figure that contains an image.\n\n Notes\n -----\n Historically, the only colorable artists were images; hence the name\n ``gci`` (get current image).\n \"\"\"\n # Look first for an image in the current Axes.\n ax = self._axstack.current()\n if ax is None:\n return None\n im = ax._gci()\n if im is not None:\n return im\n # If there is no image in the current Axes, search for\n # one in a previously created Axes. Whether this makes\n # sense is debatable, but it is the documented behavior.\n for ax in reversed(self.axes):\n im = ax._gci()\n if im is not None:\n return im\n return None\n\n def _process_projection_requirements(\n self, *args, axes_class=None, polar=False, projection=None,\n **kwargs):\n \"\"\"\n Handle the args/kwargs to add_axes/add_subplot/gca, returning::\n\n (axes_proj_class, proj_class_kwargs)\n\n which can be used for new Axes initialization/identification.\n \"\"\"\n if axes_class is not None:\n if polar or projection is not None:\n raise ValueError(\n \"Cannot combine 'axes_class' and 'projection' or 'polar'\")\n projection_class = axes_class\n else:\n\n if polar:\n if projection is not None and projection != 'polar':\n raise ValueError(\n f\"polar={polar}, yet projection={projection!r}. 
\"\n \"Only one of these arguments should be supplied.\"\n )\n projection = 'polar'\n\n if isinstance(projection, str) or projection is None:\n projection_class = projections.get_projection_class(projection)\n elif hasattr(projection, '_as_mpl_axes'):\n projection_class, extra_kwargs = projection._as_mpl_axes()\n kwargs.update(**extra_kwargs)\n else:\n raise TypeError(\n f\"projection must be a string, None or implement a \"\n f\"_as_mpl_axes method, not {projection!r}\")\n return projection_class, kwargs\n\n def get_default_bbox_extra_artists(self):\n bbox_artists = [artist for artist in self.get_children()\n if (artist.get_visible() and artist.get_in_layout())]\n for ax in self.axes:\n if ax.get_visible():\n bbox_artists.extend(ax.get_default_bbox_extra_artists())\n return bbox_artists\n\n @_api.make_keyword_only(\"3.8\", \"bbox_extra_artists\")\n def get_tightbbox(self, renderer=None, bbox_extra_artists=None):\n \"\"\"\n Return a (tight) bounding box of the figure *in inches*.\n\n Note that `.FigureBase` differs from all other artists, which return\n their `.Bbox` in pixels.\n\n Artists that have ``artist.set_in_layout(False)`` are not included\n in the bbox.\n\n Parameters\n ----------\n renderer : `.RendererBase` subclass\n Renderer that will be used to draw the figures (i.e.\n ``fig.canvas.get_renderer()``)\n\n bbox_extra_artists : list of `.Artist` or ``None``\n List of artists to include in the tight bounding box. If\n ``None`` (default), then all artist children of each Axes are\n included in the tight bounding box.\n\n Returns\n -------\n `.BboxBase`\n containing the bounding box (in figure inches).\n \"\"\"\n\n if renderer is None:\n renderer = self.figure._get_renderer()\n\n bb = []\n if bbox_extra_artists is None:\n artists = self.get_default_bbox_extra_artists()\n else:\n artists = bbox_extra_artists\n\n for a in artists:\n bbox = a.get_tightbbox(renderer)\n if bbox is not None:\n bb.append(bbox)\n\n for ax in self.axes:\n if ax.get_visible():\n # some axes don't take the bbox_extra_artists kwarg so we\n # need this conditional....\n try:\n bbox = ax.get_tightbbox(\n renderer, bbox_extra_artists=bbox_extra_artists)\n except TypeError:\n bbox = ax.get_tightbbox(renderer)\n bb.append(bbox)\n bb = [b for b in bb\n if (np.isfinite(b.width) and np.isfinite(b.height)\n and (b.width != 0 or b.height != 0))]\n\n isfigure = hasattr(self, 'bbox_inches')\n if len(bb) == 0:\n if isfigure:\n return self.bbox_inches\n else:\n # subfigures do not have bbox_inches, but do have a bbox\n bb = [self.bbox]\n\n _bbox = Bbox.union(bb)\n\n if isfigure:\n # transform from pixels to inches...\n _bbox = TransformedBbox(_bbox, self.dpi_scale_trans.inverted())\n\n return _bbox\n\n @staticmethod\n def _norm_per_subplot_kw(per_subplot_kw):\n expanded = {}\n for k, v in per_subplot_kw.items():\n if isinstance(k, tuple):\n for sub_key in k:\n if sub_key in expanded:\n raise ValueError(\n f'The key {sub_key!r} appears multiple times.'\n )\n expanded[sub_key] = v\n else:\n if k in expanded:\n raise ValueError(\n f'The key {k!r} appears multiple times.'\n )\n expanded[k] = v\n return expanded\n\n @staticmethod\n def _normalize_grid_string(layout):\n if '\\n' not in layout:\n # single-line string\n return [list(ln) for ln in layout.split(';')]\n else:\n # multi-line string\n layout = inspect.cleandoc(layout)\n return [list(ln) for ln in layout.strip('\\n').split('\\n')]\n\n def subplot_mosaic(self, mosaic, *, sharex=False, sharey=False,\n width_ratios=None, height_ratios=None,\n empty_sentinel='.',\n 
subplot_kw=None, per_subplot_kw=None, gridspec_kw=None):\n \"\"\"\n Build a layout of Axes based on ASCII art or nested lists.\n\n This is a helper function to build complex GridSpec layouts visually.\n\n See :ref:`mosaic`\n for an example and full API documentation\n\n Parameters\n ----------\n mosaic : list of list of {hashable or nested} or str\n\n A visual layout of how you want your Axes to be arranged\n labeled as strings. For example ::\n\n x = [['A panel', 'A panel', 'edge'],\n ['C panel', '.', 'edge']]\n\n produces 4 Axes:\n\n - 'A panel' which is 1 row high and spans the first two columns\n - 'edge' which is 2 rows high and is on the right edge\n - 'C panel' which in 1 row and 1 column wide in the bottom left\n - a blank space 1 row and 1 column wide in the bottom center\n\n Any of the entries in the layout can be a list of lists\n of the same form to create nested layouts.\n\n If input is a str, then it can either be a multi-line string of\n the form ::\n\n '''\n AAE\n C.E\n '''\n\n where each character is a column and each line is a row. Or it\n can be a single-line string where rows are separated by ``;``::\n\n 'AB;CC'\n\n The string notation allows only single character Axes labels and\n does not support nesting but is very terse.\n\n The Axes identifiers may be `str` or a non-iterable hashable\n object (e.g. `tuple` s may not be used).\n\n sharex, sharey : bool, default: False\n If True, the x-axis (*sharex*) or y-axis (*sharey*) will be shared\n among all subplots. In that case, tick label visibility and axis\n units behave as for `subplots`. If False, each subplot's x- or\n y-axis will be independent.\n\n width_ratios : array-like of length *ncols*, optional\n Defines the relative widths of the columns. Each column gets a\n relative width of ``width_ratios[i] / sum(width_ratios)``.\n If not given, all columns will have the same width. Equivalent\n to ``gridspec_kw={'width_ratios': [...]}``. In the case of nested\n layouts, this argument applies only to the outer layout.\n\n height_ratios : array-like of length *nrows*, optional\n Defines the relative heights of the rows. Each row gets a\n relative height of ``height_ratios[i] / sum(height_ratios)``.\n If not given, all rows will have the same height. Equivalent\n to ``gridspec_kw={'height_ratios': [...]}``. In the case of nested\n layouts, this argument applies only to the outer layout.\n\n subplot_kw : dict, optional\n Dictionary with keywords passed to the `.Figure.add_subplot` call\n used to create each subplot. These values may be overridden by\n values in *per_subplot_kw*.\n\n per_subplot_kw : dict, optional\n A dictionary mapping the Axes identifiers or tuples of identifiers\n to a dictionary of keyword arguments to be passed to the\n `.Figure.add_subplot` call used to create each subplot. The values\n in these dictionaries have precedence over the values in\n *subplot_kw*.\n\n If *mosaic* is a string, and thus all keys are single characters,\n it is possible to use a single string instead of a tuple as keys;\n i.e. ``\"AB\"`` is equivalent to ``(\"A\", \"B\")``.\n\n .. versionadded:: 3.7\n\n gridspec_kw : dict, optional\n Dictionary with keywords passed to the `.GridSpec` constructor used\n to create the grid the subplots are placed on. In the case of\n nested layouts, this argument applies only to the outer layout.\n For more complex layouts, users should use `.Figure.subfigures`\n to create the nesting.\n\n empty_sentinel : object, optional\n Entry in the layout to mean \"leave this space empty\". 
Defaults\n to ``'.'``. Note, if *layout* is a string, it is processed via\n `inspect.cleandoc` to remove leading white space, which may\n interfere with using white-space as the empty sentinel.\n\n Returns\n -------\n dict[label, Axes]\n A dictionary mapping the labels to the Axes objects. The order of\n the axes is left-to-right and top-to-bottom of their position in the\n total layout.\n\n \"\"\"\n subplot_kw = subplot_kw or {}\n gridspec_kw = dict(gridspec_kw or {})\n per_subplot_kw = per_subplot_kw or {}\n\n if height_ratios is not None:\n if 'height_ratios' in gridspec_kw:\n raise ValueError(\"'height_ratios' must not be defined both as \"\n \"parameter and as key in 'gridspec_kw'\")\n gridspec_kw['height_ratios'] = height_ratios\n if width_ratios is not None:\n if 'width_ratios' in gridspec_kw:\n raise ValueError(\"'width_ratios' must not be defined both as \"\n \"parameter and as key in 'gridspec_kw'\")\n gridspec_kw['width_ratios'] = width_ratios\n\n # special-case string input\n if isinstance(mosaic, str):\n mosaic = self._normalize_grid_string(mosaic)\n per_subplot_kw = {\n tuple(k): v for k, v in per_subplot_kw.items()\n }\n\n per_subplot_kw = self._norm_per_subplot_kw(per_subplot_kw)\n\n # Only accept strict bools to allow a possible future API expansion.\n _api.check_isinstance(bool, sharex=sharex, sharey=sharey)\n\n def _make_array(inp):\n \"\"\"\n Convert input into 2D array\n\n We need to have this internal function rather than\n ``np.asarray(..., dtype=object)`` so that a list of lists\n of lists does not get converted to an array of dimension > 2.\n\n Returns\n -------\n 2D object array\n \"\"\"\n r0, *rest = inp\n if isinstance(r0, str):\n raise ValueError('List mosaic specification must be 2D')\n for j, r in enumerate(rest, start=1):\n if isinstance(r, str):\n raise ValueError('List mosaic specification must be 2D')\n if len(r0) != len(r):\n raise ValueError(\n \"All of the rows must be the same length, however \"\n f\"the first row ({r0!r}) has length {len(r0)} \"\n f\"and row {j} ({r!r}) has length {len(r)}.\"\n )\n out = np.zeros((len(inp), len(r0)), dtype=object)\n for j, r in enumerate(inp):\n for k, v in enumerate(r):\n out[j, k] = v\n return out\n\n def _identify_keys_and_nested(mosaic):\n \"\"\"\n Given a 2D object array, identify unique IDs and nested mosaics\n\n Parameters\n ----------\n mosaic : 2D object array\n\n Returns\n -------\n unique_ids : tuple\n The unique non-sub mosaic entries in this mosaic\n nested : dict[tuple[int, int], 2D object array]\n \"\"\"\n # make sure we preserve the user supplied order\n unique_ids = cbook._OrderedSet()\n nested = {}\n for j, row in enumerate(mosaic):\n for k, v in enumerate(row):\n if v == empty_sentinel:\n continue\n elif not cbook.is_scalar_or_string(v):\n nested[(j, k)] = _make_array(v)\n else:\n unique_ids.add(v)\n\n return tuple(unique_ids), nested\n\n def _do_layout(gs, mosaic, unique_ids, nested):\n \"\"\"\n Recursively do the mosaic.\n\n Parameters\n ----------\n gs : GridSpec\n mosaic : 2D object array\n The input converted to a 2D array for this level.\n unique_ids : tuple\n The identified scalar labels at this level of nesting.\n nested : dict[tuple[int, int]], 2D object array\n The identified nested mosaics, if any.\n\n Returns\n -------\n dict[label, Axes]\n A flat dict of all of the Axes created.\n \"\"\"\n output = dict()\n\n # we need to merge together the Axes at this level and the axes\n # in the (recursively) nested sub-mosaics so that we can add\n # them to the figure in the \"natural\" order if 
you were to\n # ravel in c-order all of the Axes that will be created\n #\n # This will stash the upper left index of each object (axes or\n # nested mosaic) at this level\n this_level = dict()\n\n # go through the unique keys,\n for name in unique_ids:\n # sort out where each axes starts/ends\n indx = np.argwhere(mosaic == name)\n start_row, start_col = np.min(indx, axis=0)\n end_row, end_col = np.max(indx, axis=0) + 1\n # and construct the slice object\n slc = (slice(start_row, end_row), slice(start_col, end_col))\n # some light error checking\n if (mosaic[slc] != name).any():\n raise ValueError(\n f\"While trying to layout\\n{mosaic!r}\\n\"\n f\"we found that the label {name!r} specifies a \"\n \"non-rectangular or non-contiguous area.\")\n # and stash this slice for later\n this_level[(start_row, start_col)] = (name, slc, 'axes')\n\n # do the same thing for the nested mosaics (simpler because these\n # cannot be spans yet!)\n for (j, k), nested_mosaic in nested.items():\n this_level[(j, k)] = (None, nested_mosaic, 'nested')\n\n # now go through the things in this level and add them\n # in order left-to-right top-to-bottom\n for key in sorted(this_level):\n name, arg, method = this_level[key]\n # we are doing some hokey function dispatch here based\n # on the 'method' string stashed above to sort out if this\n # element is an Axes or a nested mosaic.\n if method == 'axes':\n slc = arg\n # add a single axes\n if name in output:\n raise ValueError(f\"There are duplicate keys {name} \"\n f\"in the layout\\n{mosaic!r}\")\n ax = self.add_subplot(\n gs[slc], **{\n 'label': str(name),\n **subplot_kw,\n **per_subplot_kw.get(name, {})\n }\n )\n output[name] = ax\n elif method == 'nested':\n nested_mosaic = arg\n j, k = key\n # recursively add the nested mosaic\n rows, cols = nested_mosaic.shape\n nested_output = _do_layout(\n gs[j, k].subgridspec(rows, cols),\n nested_mosaic,\n *_identify_keys_and_nested(nested_mosaic)\n )\n overlap = set(output) & set(nested_output)\n if overlap:\n raise ValueError(\n f\"There are duplicate keys {overlap} \"\n f\"between the outer layout\\n{mosaic!r}\\n\"\n f\"and the nested layout\\n{nested_mosaic}\"\n )\n output.update(nested_output)\n else:\n raise RuntimeError(\"This should never happen\")\n return output\n\n mosaic = _make_array(mosaic)\n rows, cols = mosaic.shape\n gs = self.add_gridspec(rows, cols, **gridspec_kw)\n ret = _do_layout(gs, mosaic, *_identify_keys_and_nested(mosaic))\n ax0 = next(iter(ret.values()))\n for ax in ret.values():\n if sharex:\n ax.sharex(ax0)\n ax._label_outer_xaxis(check_patch=True)\n if sharey:\n ax.sharey(ax0)\n ax._label_outer_yaxis(check_patch=True)\n if extra := set(per_subplot_kw) - set(ret):\n raise ValueError(\n f\"The keys {extra} are in *per_subplot_kw* \"\n \"but not in the mosaic.\"\n )\n return ret\n\n def _set_artist_props(self, a):\n if a != self:\n a.set_figure(self)\n a.stale_callback = _stale_figure_callback\n a.set_transform(self.transSubfigure)\n\n\n@_docstring.interpd\nclass SubFigure(FigureBase):\n \"\"\"\n Logical figure that can be placed inside a figure.\n\n Typically instantiated using `.Figure.add_subfigure` or\n `.SubFigure.add_subfigure`, or `.SubFigure.subfigures`. 
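# A minimal sketch of Figure.subplot_mosaic as implemented above; assumes
# matplotlib >= 3.7 for *per_subplot_kw* -- illustrative only, not taken from
# figure.py. With a string mosaic, a multi-character per_subplot_kw key such
# as "AB" addresses the single-character labels "A" and "B" together.
import matplotlib.pyplot as plt

fig = plt.figure()
axd = fig.subplot_mosaic(
    """
    AB
    CC
    """,
    height_ratios=[1, 2],
    per_subplot_kw={"AB": {"xscale": "log"}, "C": {"facecolor": "0.9"}},
)
axd["C"].plot([1, 2, 3])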
A subfigure has\n the same methods as a figure except for those particularly tied to the size\n or dpi of the figure, and is confined to a prescribed region of the figure.\n For example the following puts two subfigures side-by-side::\n\n fig = plt.figure()\n sfigs = fig.subfigures(1, 2)\n axsL = sfigs[0].subplots(1, 2)\n axsR = sfigs[1].subplots(2, 1)\n\n See :doc:`/gallery/subplots_axes_and_figures/subfigures`\n \"\"\"\n callbacks = _api.deprecated(\n \"3.6\", alternative=(\"the 'resize_event' signal in \"\n \"Figure.canvas.callbacks\")\n )(property(lambda self: self._fig_callbacks))\n\n def __init__(self, parent, subplotspec, *,\n facecolor=None,\n edgecolor=None,\n linewidth=0.0,\n frameon=None,\n **kwargs):\n \"\"\"\n Parameters\n ----------\n parent : `.Figure` or `.SubFigure`\n Figure or subfigure that contains the SubFigure. SubFigures\n can be nested.\n\n subplotspec : `.gridspec.SubplotSpec`\n Defines the region in a parent gridspec where the subfigure will\n be placed.\n\n facecolor : default: ``\"none\"``\n The figure patch face color; transparent by default.\n\n edgecolor : default: :rc:`figure.edgecolor`\n The figure patch edge color.\n\n linewidth : float\n The linewidth of the frame (i.e. the edge linewidth of the figure\n patch).\n\n frameon : bool, default: :rc:`figure.frameon`\n If ``False``, suppress drawing the figure background patch.\n\n Other Parameters\n ----------------\n **kwargs : `.SubFigure` properties, optional\n\n %(SubFigure:kwdoc)s\n \"\"\"\n super().__init__(**kwargs)\n if facecolor is None:\n facecolor = \"none\"\n if edgecolor is None:\n edgecolor = mpl.rcParams['figure.edgecolor']\n if frameon is None:\n frameon = mpl.rcParams['figure.frameon']\n\n self._subplotspec = subplotspec\n self._parent = parent\n self.figure = parent.figure\n self._fig_callbacks = parent._fig_callbacks\n\n # subfigures use the parent axstack\n self._axstack = parent._axstack\n self.subplotpars = parent.subplotpars\n self.dpi_scale_trans = parent.dpi_scale_trans\n self._axobservers = parent._axobservers\n self.canvas = parent.canvas\n self.transFigure = parent.transFigure\n self.bbox_relative = None\n self._redo_transform_rel_fig()\n self.figbbox = self._parent.figbbox\n self.bbox = TransformedBbox(self.bbox_relative,\n self._parent.transSubfigure)\n self.transSubfigure = BboxTransformTo(self.bbox)\n\n self.patch = Rectangle(\n xy=(0, 0), width=1, height=1, visible=frameon,\n facecolor=facecolor, edgecolor=edgecolor, linewidth=linewidth,\n # Don't let the figure patch influence bbox calculation.\n in_layout=False, transform=self.transSubfigure)\n self._set_artist_props(self.patch)\n self.patch.set_antialiased(False)\n\n @property\n def dpi(self):\n return self._parent.dpi\n\n @dpi.setter\n def dpi(self, value):\n self._parent.dpi = value\n\n def get_dpi(self):\n \"\"\"\n Return the resolution of the parent figure in dots-per-inch as a float.\n \"\"\"\n return self._parent.dpi\n\n def set_dpi(self, val):\n \"\"\"\n Set the resolution of parent figure in dots-per-inch.\n\n Parameters\n ----------\n val : float\n \"\"\"\n self._parent.dpi = val\n self.stale = True\n\n def _get_renderer(self):\n return self._parent._get_renderer()\n\n def _redo_transform_rel_fig(self, bbox=None):\n \"\"\"\n Make the transSubfigure bbox relative to Figure transform.\n\n Parameters\n ----------\n bbox : bbox or None\n If not None, then the bbox is used for relative bounding box.\n Otherwise, it is calculated from the subplotspec.\n \"\"\"\n if bbox is not None:\n self.bbox_relative.p0 = bbox.p0\n 
self.bbox_relative.p1 = bbox.p1\n return\n # need to figure out *where* this subplotspec is.\n gs = self._subplotspec.get_gridspec()\n wr = np.asarray(gs.get_width_ratios())\n hr = np.asarray(gs.get_height_ratios())\n dx = wr[self._subplotspec.colspan].sum() / wr.sum()\n dy = hr[self._subplotspec.rowspan].sum() / hr.sum()\n x0 = wr[:self._subplotspec.colspan.start].sum() / wr.sum()\n y0 = 1 - hr[:self._subplotspec.rowspan.stop].sum() / hr.sum()\n if self.bbox_relative is None:\n self.bbox_relative = Bbox.from_bounds(x0, y0, dx, dy)\n else:\n self.bbox_relative.p0 = (x0, y0)\n self.bbox_relative.p1 = (x0 + dx, y0 + dy)\n\n def get_constrained_layout(self):\n \"\"\"\n Return whether constrained layout is being used.\n\n See :ref:`constrainedlayout_guide`.\n \"\"\"\n return self._parent.get_constrained_layout()\n\n def get_constrained_layout_pads(self, relative=False):\n \"\"\"\n Get padding for ``constrained_layout``.\n\n Returns a list of ``w_pad, h_pad`` in inches and\n ``wspace`` and ``hspace`` as fractions of the subplot.\n\n See :ref:`constrainedlayout_guide`.\n\n Parameters\n ----------\n relative : bool\n If `True`, then convert from inches to figure relative.\n \"\"\"\n return self._parent.get_constrained_layout_pads(relative=relative)\n\n def get_layout_engine(self):\n return self._parent.get_layout_engine()\n\n @property\n def axes(self):\n \"\"\"\n List of Axes in the SubFigure. You can access and modify the Axes\n in the SubFigure through this list.\n\n Modifying this list has no effect. Instead, use `~.SubFigure.add_axes`,\n `~.SubFigure.add_subplot` or `~.SubFigure.delaxes` to add or remove an\n Axes.\n\n Note: The `.SubFigure.axes` property and `~.SubFigure.get_axes` method\n are equivalent.\n \"\"\"\n return self._localaxes[:]\n\n get_axes = axes.fget\n\n def draw(self, renderer):\n # docstring inherited\n\n # draw the figure bounding box, perhaps none for white figure\n if not self.get_visible():\n return\n\n artists = self._get_draw_artists(renderer)\n\n try:\n renderer.open_group('subfigure', gid=self.get_gid())\n self.patch.draw(renderer)\n mimage._draw_list_compositing_images(\n renderer, self, artists, self.figure.suppressComposite)\n for sfig in self.subfigs:\n sfig.draw(renderer)\n renderer.close_group('subfigure')\n\n finally:\n self.stale = False\n\n\n@_docstring.interpd\nclass Figure(FigureBase):\n \"\"\"\n The top level container for all the plot elements.\n\n Attributes\n ----------\n patch\n The `.Rectangle` instance representing the figure background patch.\n\n suppressComposite\n For multiple images, the figure will make composite images\n depending on the renderer option_image_nocomposite function. 
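# A minimal sketch of Figure.subfigures / SubFigure described above; assumes a
# standard matplotlib install -- illustrative only, not taken from figure.py.
# Each SubFigure behaves like a figure confined to one cell of the parent grid,
# with its own patch, suptitle, and axes.
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(8, 4))
left, right = fig.subfigures(1, 2, width_ratios=[1, 2])
left.set_facecolor('0.9')
axs_left = left.subplots(2, 1)
axs_right = right.subplots(1, 2, sharey=True)
left.suptitle('left subfigure')
right.suptitle('right subfigure')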
If\n *suppressComposite* is a boolean, this will override the renderer.\n \"\"\"\n # Remove the self._fig_callbacks properties on figure and subfigure\n # after the deprecation expires.\n callbacks = _api.deprecated(\n \"3.6\", alternative=(\"the 'resize_event' signal in \"\n \"Figure.canvas.callbacks\")\n )(property(lambda self: self._fig_callbacks))\n\n def __str__(self):\n return \"Figure(%gx%g)\" % tuple(self.bbox.size)\n\n def __repr__(self):\n return \"<{clsname} size {h:g}x{w:g} with {naxes} Axes>\".format(\n clsname=self.__class__.__name__,\n h=self.bbox.size[0], w=self.bbox.size[1],\n naxes=len(self.axes),\n )\n\n def __init__(self,\n figsize=None,\n dpi=None,\n *,\n facecolor=None,\n edgecolor=None,\n linewidth=0.0,\n frameon=None,\n subplotpars=None, # rc figure.subplot.*\n tight_layout=None, # rc figure.autolayout\n constrained_layout=None, # rc figure.constrained_layout.use\n layout=None,\n **kwargs\n ):\n \"\"\"\n Parameters\n ----------\n figsize : 2-tuple of floats, default: :rc:`figure.figsize`\n Figure dimension ``(width, height)`` in inches.\n\n dpi : float, default: :rc:`figure.dpi`\n Dots per inch.\n\n facecolor : default: :rc:`figure.facecolor`\n The figure patch facecolor.\n\n edgecolor : default: :rc:`figure.edgecolor`\n The figure patch edge color.\n\n linewidth : float\n The linewidth of the frame (i.e. the edge linewidth of the figure\n patch).\n\n frameon : bool, default: :rc:`figure.frameon`\n If ``False``, suppress drawing the figure background patch.\n\n subplotpars : `SubplotParams`\n Subplot parameters. If not given, the default subplot\n parameters :rc:`figure.subplot.*` are used.\n\n tight_layout : bool or dict, default: :rc:`figure.autolayout`\n Whether to use the tight layout mechanism. See `.set_tight_layout`.\n\n .. admonition:: Discouraged\n\n The use of this parameter is discouraged. Please use\n ``layout='tight'`` instead for the common case of\n ``tight_layout=True`` and use `.set_tight_layout` otherwise.\n\n constrained_layout : bool, default: :rc:`figure.constrained_layout.use`\n This is equal to ``layout='constrained'``.\n\n .. admonition:: Discouraged\n\n The use of this parameter is discouraged. Please use\n ``layout='constrained'`` instead.\n\n layout : {'constrained', 'compressed', 'tight', 'none', `.LayoutEngine`, \\\nNone}, default: None\n The layout mechanism for positioning of plot elements to avoid\n overlapping Axes decorations (labels, ticks, etc). Note that\n layout managers can have significant performance penalties.\n\n - 'constrained': The constrained layout solver adjusts axes sizes\n to avoid overlapping axes decorations. Can handle complex plot\n layouts and colorbars, and is thus recommended.\n\n See :ref:`constrainedlayout_guide`\n for examples.\n\n - 'compressed': uses the same algorithm as 'constrained', but\n removes extra space between fixed-aspect-ratio Axes. Best for\n simple grids of axes.\n\n - 'tight': Use the tight layout mechanism. This is a relatively\n simple algorithm that adjusts the subplot parameters so that\n decorations do not overlap. See `.Figure.set_tight_layout` for\n further details.\n\n - 'none': Do not use a layout engine.\n\n - A `.LayoutEngine` instance. Builtin layout classes are\n `.ConstrainedLayoutEngine` and `.TightLayoutEngine`, more easily\n accessible by 'constrained' and 'tight'. 
Passing an instance\n allows third parties to provide their own layout engine.\n\n If not given, fall back to using the parameters *tight_layout* and\n *constrained_layout*, including their config defaults\n :rc:`figure.autolayout` and :rc:`figure.constrained_layout.use`.\n\n Other Parameters\n ----------------\n **kwargs : `.Figure` properties, optional\n\n %(Figure:kwdoc)s\n \"\"\"\n super().__init__(**kwargs)\n self._layout_engine = None\n\n if layout is not None:\n if (tight_layout is not None):\n _api.warn_external(\n \"The Figure parameters 'layout' and 'tight_layout' cannot \"\n \"be used together. Please use 'layout' only.\")\n if (constrained_layout is not None):\n _api.warn_external(\n \"The Figure parameters 'layout' and 'constrained_layout' \"\n \"cannot be used together. Please use 'layout' only.\")\n self.set_layout_engine(layout=layout)\n elif tight_layout is not None:\n if constrained_layout is not None:\n _api.warn_external(\n \"The Figure parameters 'tight_layout' and \"\n \"'constrained_layout' cannot be used together. Please use \"\n \"'layout' parameter\")\n self.set_layout_engine(layout='tight')\n if isinstance(tight_layout, dict):\n self.get_layout_engine().set(**tight_layout)\n elif constrained_layout is not None:\n if isinstance(constrained_layout, dict):\n self.set_layout_engine(layout='constrained')\n self.get_layout_engine().set(**constrained_layout)\n elif constrained_layout:\n self.set_layout_engine(layout='constrained')\n\n else:\n # everything is None, so use default:\n self.set_layout_engine(layout=layout)\n\n self._fig_callbacks = cbook.CallbackRegistry(signals=[\"dpi_changed\"])\n # Callbacks traditionally associated with the canvas (and exposed with\n # a proxy property), but that actually need to be on the figure for\n # pickling.\n self._canvas_callbacks = cbook.CallbackRegistry(\n signals=FigureCanvasBase.events)\n connect = self._canvas_callbacks._connect_picklable\n self._mouse_key_ids = [\n connect('key_press_event', backend_bases._key_handler),\n connect('key_release_event', backend_bases._key_handler),\n connect('key_release_event', backend_bases._key_handler),\n connect('button_press_event', backend_bases._mouse_handler),\n connect('button_release_event', backend_bases._mouse_handler),\n connect('scroll_event', backend_bases._mouse_handler),\n connect('motion_notify_event', backend_bases._mouse_handler),\n ]\n self._button_pick_id = connect('button_press_event', self.pick)\n self._scroll_pick_id = connect('scroll_event', self.pick)\n\n if figsize is None:\n figsize = mpl.rcParams['figure.figsize']\n if dpi is None:\n dpi = mpl.rcParams['figure.dpi']\n if facecolor is None:\n facecolor = mpl.rcParams['figure.facecolor']\n if edgecolor is None:\n edgecolor = mpl.rcParams['figure.edgecolor']\n if frameon is None:\n frameon = mpl.rcParams['figure.frameon']\n\n if not np.isfinite(figsize).all() or (np.array(figsize) < 0).any():\n raise ValueError('figure size must be positive finite not '\n f'{figsize}')\n self.bbox_inches = Bbox.from_bounds(0, 0, *figsize)\n\n self.dpi_scale_trans = Affine2D().scale(dpi)\n # do not use property as it will trigger\n self._dpi = dpi\n self.bbox = TransformedBbox(self.bbox_inches, self.dpi_scale_trans)\n self.figbbox = self.bbox\n self.transFigure = BboxTransformTo(self.bbox)\n self.transSubfigure = self.transFigure\n\n self.patch = Rectangle(\n xy=(0, 0), width=1, height=1, visible=frameon,\n facecolor=facecolor, edgecolor=edgecolor, linewidth=linewidth,\n # Don't let the figure patch influence bbox calculation.\n 
in_layout=False)\n self._set_artist_props(self.patch)\n self.patch.set_antialiased(False)\n\n FigureCanvasBase(self) # Set self.canvas.\n\n if subplotpars is None:\n subplotpars = SubplotParams()\n\n self.subplotpars = subplotpars\n\n self._axstack = _AxesStack() # track all figure axes and current axes\n self.clear()\n\n def pick(self, mouseevent):\n if not self.canvas.widgetlock.locked():\n super().pick(mouseevent)\n\n def _check_layout_engines_compat(self, old, new):\n \"\"\"\n Helper for set_layout engine\n\n If the figure has used the old engine and added a colorbar then the\n value of colorbar_gridspec must be the same on the new engine.\n \"\"\"\n if old is None or new is None:\n return True\n if old.colorbar_gridspec == new.colorbar_gridspec:\n return True\n # colorbar layout different, so check if any colorbars are on the\n # figure...\n for ax in self.axes:\n if hasattr(ax, '_colorbar'):\n # colorbars list themselves as a colorbar.\n return False\n return True\n\n def set_layout_engine(self, layout=None, **kwargs):\n \"\"\"\n Set the layout engine for this figure.\n\n Parameters\n ----------\n layout: {'constrained', 'compressed', 'tight', 'none'} or \\\n`LayoutEngine` or None\n\n - 'constrained' will use `~.ConstrainedLayoutEngine`\n - 'compressed' will also use `~.ConstrainedLayoutEngine`, but with\n a correction that attempts to make a good layout for fixed-aspect\n ratio Axes.\n - 'tight' uses `~.TightLayoutEngine`\n - 'none' removes layout engine.\n\n If `None`, the behavior is controlled by :rc:`figure.autolayout`\n (which if `True` behaves as if 'tight' was passed) and\n :rc:`figure.constrained_layout.use` (which if `True` behaves as if\n 'constrained' was passed). If both are `True`,\n :rc:`figure.autolayout` takes priority.\n\n Users and libraries can define their own layout engines and pass\n the instance directly as well.\n\n kwargs: dict\n The keyword arguments are passed to the layout engine to set things\n like padding and margin sizes. Only used if *layout* is a string.\n\n \"\"\"\n if layout is None:\n if mpl.rcParams['figure.autolayout']:\n layout = 'tight'\n elif mpl.rcParams['figure.constrained_layout.use']:\n layout = 'constrained'\n else:\n self._layout_engine = None\n return\n if layout == 'tight':\n new_layout_engine = TightLayoutEngine(**kwargs)\n elif layout == 'constrained':\n new_layout_engine = ConstrainedLayoutEngine(**kwargs)\n elif layout == 'compressed':\n new_layout_engine = ConstrainedLayoutEngine(compress=True,\n **kwargs)\n elif layout == 'none':\n if self._layout_engine is not None:\n new_layout_engine = PlaceHolderLayoutEngine(\n self._layout_engine.adjust_compatible,\n self._layout_engine.colorbar_gridspec\n )\n else:\n new_layout_engine = None\n elif isinstance(layout, LayoutEngine):\n new_layout_engine = layout\n else:\n raise ValueError(f\"Invalid value for 'layout': {layout!r}\")\n\n if self._check_layout_engines_compat(self._layout_engine,\n new_layout_engine):\n self._layout_engine = new_layout_engine\n else:\n raise RuntimeError('Colorbar layout of new layout engine not '\n 'compatible with old engine, and a colorbar '\n 'has been created. 
Engine not changed.')\n\n def get_layout_engine(self):\n return self._layout_engine\n\n # TODO: I'd like to dynamically add the _repr_html_ method\n # to the figure in the right context, but then IPython doesn't\n # use it, for some reason.\n\n def _repr_html_(self):\n # We can't use \"isinstance\" here, because then we'd end up importing\n # webagg unconditionally.\n if 'WebAgg' in type(self.canvas).__name__:\n from matplotlib.backends import backend_webagg\n return backend_webagg.ipython_inline_display(self)\n\n def show(self, warn=True):\n \"\"\"\n If using a GUI backend with pyplot, display the figure window.\n\n If the figure was not created using `~.pyplot.figure`, it will lack\n a `~.backend_bases.FigureManagerBase`, and this method will raise an\n AttributeError.\n\n .. warning::\n\n This does not manage an GUI event loop. Consequently, the figure\n may only be shown briefly or not shown at all if you or your\n environment are not managing an event loop.\n\n Use cases for `.Figure.show` include running this from a GUI\n application (where there is persistently an event loop running) or\n from a shell, like IPython, that install an input hook to allow the\n interactive shell to accept input while the figure is also being\n shown and interactive. Some, but not all, GUI toolkits will\n register an input hook on import. See :ref:`cp_integration` for\n more details.\n\n If you're in a shell without input hook integration or executing a\n python script, you should use `matplotlib.pyplot.show` with\n ``block=True`` instead, which takes care of starting and running\n the event loop for you.\n\n Parameters\n ----------\n warn : bool, default: True\n If ``True`` and we are not running headless (i.e. on Linux with an\n unset DISPLAY), issue warning when called on a non-GUI backend.\n\n \"\"\"\n if self.canvas.manager is None:\n raise AttributeError(\n \"Figure.show works only for figures managed by pyplot, \"\n \"normally created by pyplot.figure()\")\n try:\n self.canvas.manager.show()\n except NonGuiException as exc:\n if warn:\n _api.warn_external(str(exc))\n\n @property\n def axes(self):\n \"\"\"\n List of Axes in the Figure. You can access and modify the Axes in the\n Figure through this list.\n\n Do not modify the list itself. 
Instead, use `~Figure.add_axes`,\n `~.Figure.add_subplot` or `~.Figure.delaxes` to add or remove an Axes.\n\n Note: The `.Figure.axes` property and `~.Figure.get_axes` method are\n equivalent.\n \"\"\"\n return self._axstack.as_list()\n\n get_axes = axes.fget\n\n def _get_renderer(self):\n if hasattr(self.canvas, 'get_renderer'):\n return self.canvas.get_renderer()\n else:\n return _get_renderer(self)\n\n def _get_dpi(self):\n return self._dpi\n\n def _set_dpi(self, dpi, forward=True):\n \"\"\"\n Parameters\n ----------\n dpi : float\n\n forward : bool\n Passed on to `~.Figure.set_size_inches`\n \"\"\"\n if dpi == self._dpi:\n # We don't want to cause undue events in backends.\n return\n self._dpi = dpi\n self.dpi_scale_trans.clear().scale(dpi)\n w, h = self.get_size_inches()\n self.set_size_inches(w, h, forward=forward)\n self._fig_callbacks.process('dpi_changed', self)\n\n dpi = property(_get_dpi, _set_dpi, doc=\"The resolution in dots per inch.\")\n\n def get_tight_layout(self):\n \"\"\"Return whether `.tight_layout` is called when drawing.\"\"\"\n return isinstance(self.get_layout_engine(), TightLayoutEngine)\n\n @_api.deprecated(\"3.6\", alternative=\"set_layout_engine\",\n pending=True)\n def set_tight_layout(self, tight):\n \"\"\"\n [*Discouraged*] Set whether and how `.tight_layout` is called when\n drawing.\n\n .. admonition:: Discouraged\n\n This method is discouraged in favor of `~.set_layout_engine`.\n\n Parameters\n ----------\n tight : bool or dict with keys \"pad\", \"w_pad\", \"h_pad\", \"rect\" or None\n If a bool, sets whether to call `.tight_layout` upon drawing.\n If ``None``, use :rc:`figure.autolayout` instead.\n If a dict, pass it as kwargs to `.tight_layout`, overriding the\n default paddings.\n \"\"\"\n if tight is None:\n tight = mpl.rcParams['figure.autolayout']\n _tight = 'tight' if bool(tight) else 'none'\n _tight_parameters = tight if isinstance(tight, dict) else {}\n self.set_layout_engine(_tight, **_tight_parameters)\n self.stale = True\n\n def get_constrained_layout(self):\n \"\"\"\n Return whether constrained layout is being used.\n\n See :ref:`constrainedlayout_guide`.\n \"\"\"\n return isinstance(self.get_layout_engine(), ConstrainedLayoutEngine)\n\n @_api.deprecated(\"3.6\", alternative=\"set_layout_engine('constrained')\",\n pending=True)\n def set_constrained_layout(self, constrained):\n \"\"\"\n [*Discouraged*] Set whether ``constrained_layout`` is used upon\n drawing.\n\n If None, :rc:`figure.constrained_layout.use` value will be used.\n\n When providing a dict containing the keys ``w_pad``, ``h_pad``\n the default ``constrained_layout`` paddings will be\n overridden. These pads are in inches and default to 3.0/72.0.\n ``w_pad`` is the width padding and ``h_pad`` is the height padding.\n\n .. 
admonition:: Discouraged\n\n This method is discouraged in favor of `~.set_layout_engine`.\n\n Parameters\n ----------\n constrained : bool or dict or None\n \"\"\"\n if constrained is None:\n constrained = mpl.rcParams['figure.constrained_layout.use']\n _constrained = 'constrained' if bool(constrained) else 'none'\n _parameters = constrained if isinstance(constrained, dict) else {}\n self.set_layout_engine(_constrained, **_parameters)\n self.stale = True\n\n @_api.deprecated(\n \"3.6\", alternative=\"figure.get_layout_engine().set()\",\n pending=True)\n def set_constrained_layout_pads(self, **kwargs):\n \"\"\"\n Set padding for ``constrained_layout``.\n\n Tip: The parameters can be passed from a dictionary by using\n ``fig.set_constrained_layout(**pad_dict)``.\n\n See :ref:`constrainedlayout_guide`.\n\n Parameters\n ----------\n w_pad : float, default: :rc:`figure.constrained_layout.w_pad`\n Width padding in inches. This is the pad around Axes\n and is meant to make sure there is enough room for fonts to\n look good. Defaults to 3 pts = 0.04167 inches\n\n h_pad : float, default: :rc:`figure.constrained_layout.h_pad`\n Height padding in inches. Defaults to 3 pts.\n\n wspace : float, default: :rc:`figure.constrained_layout.wspace`\n Width padding between subplots, expressed as a fraction of the\n subplot width. The total padding ends up being w_pad + wspace.\n\n hspace : float, default: :rc:`figure.constrained_layout.hspace`\n Height padding between subplots, expressed as a fraction of the\n subplot width. The total padding ends up being h_pad + hspace.\n\n \"\"\"\n if isinstance(self.get_layout_engine(), ConstrainedLayoutEngine):\n self.get_layout_engine().set(**kwargs)\n\n @_api.deprecated(\"3.6\", alternative=\"fig.get_layout_engine().get()\",\n pending=True)\n def get_constrained_layout_pads(self, relative=False):\n \"\"\"\n Get padding for ``constrained_layout``.\n\n Returns a list of ``w_pad, h_pad`` in inches and\n ``wspace`` and ``hspace`` as fractions of the subplot.\n All values are None if ``constrained_layout`` is not used.\n\n See :ref:`constrainedlayout_guide`.\n\n Parameters\n ----------\n relative : bool\n If `True`, then convert from inches to figure relative.\n \"\"\"\n if not isinstance(self.get_layout_engine(), ConstrainedLayoutEngine):\n return None, None, None, None\n info = self.get_layout_engine().get_info()\n w_pad = info['w_pad']\n h_pad = info['h_pad']\n wspace = info['wspace']\n hspace = info['hspace']\n\n if relative and (w_pad is not None or h_pad is not None):\n renderer = self._get_renderer()\n dpi = renderer.dpi\n w_pad = w_pad * dpi / renderer.width\n h_pad = h_pad * dpi / renderer.height\n\n return w_pad, h_pad, wspace, hspace\n\n def set_canvas(self, canvas):\n \"\"\"\n Set the canvas that contains the figure\n\n Parameters\n ----------\n canvas : FigureCanvas\n \"\"\"\n self.canvas = canvas\n\n @_docstring.interpd\n def figimage(self, X, xo=0, yo=0, alpha=None, norm=None, cmap=None,\n vmin=None, vmax=None, origin=None, resize=False, **kwargs):\n \"\"\"\n Add a non-resampled image to the figure.\n\n The image is attached to the lower or upper left corner depending on\n *origin*.\n\n Parameters\n ----------\n X\n The image data. This is an array of one of the following shapes:\n\n - (M, N): an image with scalar data. Color-mapping is controlled\n by *cmap*, *norm*, *vmin*, and *vmax*.\n - (M, N, 3): an image with RGB values (0-1 float or 0-255 int).\n - (M, N, 4): an image with RGBA values (0-1 float or 0-255 int),\n i.e. 
including transparency.\n\n xo, yo : int\n The *x*/*y* image offset in pixels.\n\n alpha : None or float\n The alpha blending value.\n\n %(cmap_doc)s\n\n This parameter is ignored if *X* is RGB(A).\n\n %(norm_doc)s\n\n This parameter is ignored if *X* is RGB(A).\n\n %(vmin_vmax_doc)s\n\n This parameter is ignored if *X* is RGB(A).\n\n origin : {'upper', 'lower'}, default: :rc:`image.origin`\n Indicates where the [0, 0] index of the array is in the upper left\n or lower left corner of the axes.\n\n resize : bool\n If *True*, resize the figure to match the given image size.\n\n Returns\n -------\n `matplotlib.image.FigureImage`\n\n Other Parameters\n ----------------\n **kwargs\n Additional kwargs are `.Artist` kwargs passed on to `.FigureImage`.\n\n Notes\n -----\n figimage complements the Axes image (`~matplotlib.axes.Axes.imshow`)\n which will be resampled to fit the current Axes. If you want\n a resampled image to fill the entire figure, you can define an\n `~matplotlib.axes.Axes` with extent [0, 0, 1, 1].\n\n Examples\n --------\n ::\n\n f = plt.figure()\n nx = int(f.get_figwidth() * f.dpi)\n ny = int(f.get_figheight() * f.dpi)\n data = np.random.random((ny, nx))\n f.figimage(data)\n plt.show()\n \"\"\"\n if resize:\n dpi = self.get_dpi()\n figsize = [x / dpi for x in (X.shape[1], X.shape[0])]\n self.set_size_inches(figsize, forward=True)\n\n im = mimage.FigureImage(self, cmap=cmap, norm=norm,\n offsetx=xo, offsety=yo,\n origin=origin, **kwargs)\n im.stale_callback = _stale_figure_callback\n\n im.set_array(X)\n im.set_alpha(alpha)\n if norm is None:\n im.set_clim(vmin, vmax)\n self.images.append(im)\n im._remove_method = self.images.remove\n self.stale = True\n return im\n\n def set_size_inches(self, w, h=None, forward=True):\n \"\"\"\n Set the figure size in inches.\n\n Call signatures::\n\n fig.set_size_inches(w, h) # OR\n fig.set_size_inches((w, h))\n\n Parameters\n ----------\n w : (float, float) or float\n Width and height in inches (if height not specified as a separate\n argument) or width.\n h : float\n Height in inches.\n forward : bool, default: True\n If ``True``, the canvas size is automatically updated, e.g.,\n you can resize the figure window from the shell.\n\n See Also\n --------\n matplotlib.figure.Figure.get_size_inches\n matplotlib.figure.Figure.set_figwidth\n matplotlib.figure.Figure.set_figheight\n\n Notes\n -----\n To transform from pixels to inches divide by `Figure.dpi`.\n \"\"\"\n if h is None: # Got called with a single pair as argument.\n w, h = w\n size = np.array([w, h])\n if not np.isfinite(size).all() or (size < 0).any():\n raise ValueError(f'figure size must be positive finite not {size}')\n self.bbox_inches.p1 = size\n if forward:\n manager = self.canvas.manager\n if manager is not None:\n manager.resize(*(size * self.dpi).astype(int))\n self.stale = True\n\n def get_size_inches(self):\n \"\"\"\n Return the current size of the figure in inches.\n\n Returns\n -------\n ndarray\n The size (width, height) of the figure in inches.\n\n See Also\n --------\n matplotlib.figure.Figure.set_size_inches\n matplotlib.figure.Figure.get_figwidth\n matplotlib.figure.Figure.get_figheight\n\n Notes\n -----\n The size in pixels can be obtained by multiplying with `Figure.dpi`.\n \"\"\"\n return np.array(self.bbox_inches.p1)\n\n def get_figwidth(self):\n \"\"\"Return the figure width in inches.\"\"\"\n return self.bbox_inches.width\n\n def get_figheight(self):\n \"\"\"Return the figure height in inches.\"\"\"\n return self.bbox_inches.height\n\n def get_dpi(self):\n 
\"\"\"Return the resolution in dots per inch as a float.\"\"\"\n return self.dpi\n\n def set_dpi(self, val):\n \"\"\"\n Set the resolution of the figure in dots-per-inch.\n\n Parameters\n ----------\n val : float\n \"\"\"\n self.dpi = val\n self.stale = True\n\n def set_figwidth(self, val, forward=True):\n \"\"\"\n Set the width of the figure in inches.\n\n Parameters\n ----------\n val : float\n forward : bool\n See `set_size_inches`.\n\n See Also\n --------\n matplotlib.figure.Figure.set_figheight\n matplotlib.figure.Figure.set_size_inches\n \"\"\"\n self.set_size_inches(val, self.get_figheight(), forward=forward)\n\n def set_figheight(self, val, forward=True):\n \"\"\"\n Set the height of the figure in inches.\n\n Parameters\n ----------\n val : float\n forward : bool\n See `set_size_inches`.\n\n See Also\n --------\n matplotlib.figure.Figure.set_figwidth\n matplotlib.figure.Figure.set_size_inches\n \"\"\"\n self.set_size_inches(self.get_figwidth(), val, forward=forward)\n\n def clear(self, keep_observers=False):\n # docstring inherited\n super().clear(keep_observers=keep_observers)\n # FigureBase.clear does not clear toolbars, as\n # only Figure can have toolbars\n toolbar = self.canvas.toolbar\n if toolbar is not None:\n toolbar.update()\n\n @_finalize_rasterization\n @allow_rasterization\n def draw(self, renderer):\n # docstring inherited\n\n # draw the figure bounding box, perhaps none for white figure\n if not self.get_visible():\n return\n\n artists = self._get_draw_artists(renderer)\n try:\n renderer.open_group('figure', gid=self.get_gid())\n if self.axes and self.get_layout_engine() is not None:\n try:\n self.get_layout_engine().execute(self)\n except ValueError:\n pass\n # ValueError can occur when resizing a window.\n\n self.patch.draw(renderer)\n mimage._draw_list_compositing_images(\n renderer, self, artists, self.suppressComposite)\n\n for sfig in self.subfigs:\n sfig.draw(renderer)\n\n renderer.close_group('figure')\n finally:\n self.stale = False\n\n DrawEvent(\"draw_event\", self.canvas, renderer)._process()\n\n def draw_without_rendering(self):\n \"\"\"\n Draw the figure with no output. Useful to get the final size of\n artists that require a draw before their size is known (e.g. 
text).\n \"\"\"\n renderer = _get_renderer(self)\n with renderer._draw_disabled():\n self.draw(renderer)\n\n def draw_artist(self, a):\n \"\"\"\n Draw `.Artist` *a* only.\n \"\"\"\n a.draw(self.canvas.get_renderer())\n\n def __getstate__(self):\n state = super().__getstate__()\n\n # The canvas cannot currently be pickled, but this has the benefit\n # of meaning that a figure can be detached from one canvas, and\n # re-attached to another.\n state.pop(\"canvas\")\n\n # discard any changes to the dpi due to pixel ratio changes\n state[\"_dpi\"] = state.get('_original_dpi', state['_dpi'])\n\n # add version information to the state\n state['__mpl_version__'] = mpl.__version__\n\n # check whether the figure manager (if any) is registered with pyplot\n from matplotlib import _pylab_helpers\n if self.canvas.manager in _pylab_helpers.Gcf.figs.values():\n state['_restore_to_pylab'] = True\n return state\n\n def __setstate__(self, state):\n version = state.pop('__mpl_version__')\n restore_to_pylab = state.pop('_restore_to_pylab', False)\n\n if version != mpl.__version__:\n _api.warn_external(\n f\"This figure was saved with matplotlib version {version} and \"\n f\"is unlikely to function correctly.\")\n\n self.__dict__ = state\n\n # re-initialise some of the unstored state information\n FigureCanvasBase(self) # Set self.canvas.\n\n if restore_to_pylab:\n # lazy import to avoid circularity\n import matplotlib.pyplot as plt\n import matplotlib._pylab_helpers as pylab_helpers\n allnums = plt.get_fignums()\n num = max(allnums) + 1 if allnums else 1\n backend = plt._get_backend_mod()\n mgr = backend.new_figure_manager_given_figure(num, self)\n pylab_helpers.Gcf._set_new_active_manager(mgr)\n plt.draw_if_interactive()\n\n self.stale = True\n\n def add_axobserver(self, func):\n \"\"\"Whenever the Axes state change, ``func(self)`` will be called.\"\"\"\n # Connect a wrapper lambda and not func itself, to avoid it being\n # weakref-collected.\n self._axobservers.connect(\"_axes_change_event\", lambda arg: func(arg))\n\n def savefig(self, fname, *, transparent=None, **kwargs):\n \"\"\"\n Save the current figure.\n\n Call signature::\n\n savefig(fname, *, dpi='figure', format=None, metadata=None,\n bbox_inches=None, pad_inches=0.1,\n facecolor='auto', edgecolor='auto',\n backend=None, **kwargs\n )\n\n The available output formats depend on the backend being used.\n\n Parameters\n ----------\n fname : str or path-like or binary file-like\n A path, or a Python file-like object, or\n possibly some backend-dependent object such as\n `matplotlib.backends.backend_pdf.PdfPages`.\n\n If *format* is set, it determines the output format, and the file\n is saved as *fname*. Note that *fname* is used verbatim, and there\n is no attempt to make the extension, if any, of *fname* match\n *format*, and no extension is appended.\n\n If *format* is not set, then the format is inferred from the\n extension of *fname*, if there is one. If *format* is not\n set and *fname* has no extension, then the file is saved with\n :rc:`savefig.format` and the appropriate extension is appended to\n *fname*.\n\n Other Parameters\n ----------------\n dpi : float or 'figure', default: :rc:`savefig.dpi`\n The resolution in dots per inch. If 'figure', use the figure's\n dpi value.\n\n format : str\n The file format, e.g. 'png', 'pdf', 'svg', ... The behavior when\n this is unset is documented under *fname*.\n\n metadata : dict, optional\n Key/value pairs to store in the image metadata. 
The supported keys\n and defaults depend on the image format and backend:\n\n - 'png' with Agg backend: See the parameter ``metadata`` of\n `~.FigureCanvasAgg.print_png`.\n - 'pdf' with pdf backend: See the parameter ``metadata`` of\n `~.backend_pdf.PdfPages`.\n - 'svg' with svg backend: See the parameter ``metadata`` of\n `~.FigureCanvasSVG.print_svg`.\n - 'eps' and 'ps' with PS backend: Only 'Creator' is supported.\n\n Not supported for 'pgf', 'raw', and 'rgba' as those formats do not support\n embedding metadata.\n Does not currently support 'jpg', 'tiff', or 'webp', but may include\n embedding EXIF metadata in the future.\n\n bbox_inches : str or `.Bbox`, default: :rc:`savefig.bbox`\n Bounding box in inches: only the given portion of the figure is\n saved. If 'tight', try to figure out the tight bbox of the figure.\n\n pad_inches : float or 'layout', default: :rc:`savefig.pad_inches`\n Amount of padding in inches around the figure when bbox_inches is\n 'tight'. If 'layout' use the padding from the constrained or\n compressed layout engine; ignored if one of those engines is not in\n use.\n\n facecolor : color or 'auto', default: :rc:`savefig.facecolor`\n The facecolor of the figure. If 'auto', use the current figure\n facecolor.\n\n edgecolor : color or 'auto', default: :rc:`savefig.edgecolor`\n The edgecolor of the figure. If 'auto', use the current figure\n edgecolor.\n\n backend : str, optional\n Use a non-default backend to render the file, e.g. to render a\n png file with the \"cairo\" backend rather than the default \"agg\",\n or a pdf file with the \"pgf\" backend rather than the default\n \"pdf\". Note that the default backend is normally sufficient. See\n :ref:`the-builtin-backends` for a list of valid backends for each\n file format. Custom backends can be referenced as \"module://...\".\n\n orientation : {'landscape', 'portrait'}\n Currently only supported by the postscript backend.\n\n papertype : str\n One of 'letter', 'legal', 'executive', 'ledger', 'a0' through\n 'a10', 'b0' through 'b10'. 
Only supported for postscript\n output.\n\n transparent : bool\n If *True*, the Axes patches will all be transparent; the\n Figure patch will also be transparent unless *facecolor*\n and/or *edgecolor* are specified via kwargs.\n\n If *False* has no effect and the color of the Axes and\n Figure patches are unchanged (unless the Figure patch\n is specified via the *facecolor* and/or *edgecolor* keyword\n arguments in which case those colors are used).\n\n The transparency of these patches will be restored to their\n original values upon exit of this function.\n\n This is useful, for example, for displaying\n a plot on top of a colored background on a web page.\n\n bbox_extra_artists : list of `~matplotlib.artist.Artist`, optional\n A list of extra artists that will be considered when the\n tight bbox is calculated.\n\n pil_kwargs : dict, optional\n Additional keyword arguments that are passed to\n `PIL.Image.Image.save` when saving the figure.\n\n \"\"\"\n\n kwargs.setdefault('dpi', mpl.rcParams['savefig.dpi'])\n if transparent is None:\n transparent = mpl.rcParams['savefig.transparent']\n\n with ExitStack() as stack:\n if transparent:\n def _recursively_make_subfig_transparent(exit_stack, subfig):\n exit_stack.enter_context(\n subfig.patch._cm_set(\n facecolor=\"none\", edgecolor=\"none\"))\n for ax in subfig.axes:\n exit_stack.enter_context(\n ax.patch._cm_set(\n facecolor=\"none\", edgecolor=\"none\"))\n for sub_subfig in subfig.subfigs:\n _recursively_make_subfig_transparent(\n exit_stack, sub_subfig)\n\n def _recursively_make_axes_transparent(exit_stack, ax):\n exit_stack.enter_context(\n ax.patch._cm_set(facecolor=\"none\", edgecolor=\"none\"))\n for child_ax in ax.child_axes:\n exit_stack.enter_context(\n child_ax.patch._cm_set(\n facecolor=\"none\", edgecolor=\"none\"))\n for child_childax in ax.child_axes:\n _recursively_make_axes_transparent(\n exit_stack, child_childax)\n\n kwargs.setdefault('facecolor', 'none')\n kwargs.setdefault('edgecolor', 'none')\n # set subfigure to appear transparent in printed image\n for subfig in self.subfigs:\n _recursively_make_subfig_transparent(stack, subfig)\n # set axes to be transparent\n for ax in self.axes:\n _recursively_make_axes_transparent(stack, ax)\n self.canvas.print_figure(fname, **kwargs)\n\n def ginput(self, n=1, timeout=30, show_clicks=True,\n mouse_add=MouseButton.LEFT,\n mouse_pop=MouseButton.RIGHT,\n mouse_stop=MouseButton.MIDDLE):\n \"\"\"\n Blocking call to interact with a figure.\n\n Wait until the user clicks *n* times on the figure, and return the\n coordinates of each click in a list.\n\n There are three possible interactions:\n\n - Add a point.\n - Remove the most recently added point.\n - Stop the interaction and return the points added so far.\n\n The actions are assigned to mouse buttons via the arguments\n *mouse_add*, *mouse_pop* and *mouse_stop*.\n\n Parameters\n ----------\n n : int, default: 1\n Number of mouse clicks to accumulate. If negative, accumulate\n clicks until the input is terminated manually.\n timeout : float, default: 30 seconds\n Number of seconds to wait before timing out. 
If zero or negative\n will never time out.\n show_clicks : bool, default: True\n If True, show a red cross at the location of each click.\n mouse_add : `.MouseButton` or None, default: `.MouseButton.LEFT`\n Mouse button used to add points.\n mouse_pop : `.MouseButton` or None, default: `.MouseButton.RIGHT`\n Mouse button used to remove the most recently added point.\n mouse_stop : `.MouseButton` or None, default: `.MouseButton.MIDDLE`\n Mouse button used to stop input.\n\n Returns\n -------\n list of tuples\n A list of the clicked (x, y) coordinates.\n\n Notes\n -----\n The keyboard can also be used to select points in case your mouse\n does not have one or more of the buttons. The delete and backspace\n keys act like right-clicking (i.e., remove last point), the enter key\n terminates input and any other key (not already used by the window\n manager) selects a point.\n \"\"\"\n clicks = []\n marks = []\n\n def handler(event):\n is_button = event.name == \"button_press_event\"\n is_key = event.name == \"key_press_event\"\n # Quit (even if not in infinite mode; this is consistent with\n # MATLAB and sometimes quite useful, but will require the user to\n # test how many points were actually returned before using data).\n if (is_button and event.button == mouse_stop\n or is_key and event.key in [\"escape\", \"enter\"]):\n self.canvas.stop_event_loop()\n # Pop last click.\n elif (is_button and event.button == mouse_pop\n or is_key and event.key in [\"backspace\", \"delete\"]):\n if clicks:\n clicks.pop()\n if show_clicks:\n marks.pop().remove()\n self.canvas.draw()\n # Add new click.\n elif (is_button and event.button == mouse_add\n # On macOS/gtk, some keys return None.\n or is_key and event.key is not None):\n if event.inaxes:\n clicks.append((event.xdata, event.ydata))\n _log.info(\"input %i: %f, %f\",\n len(clicks), event.xdata, event.ydata)\n if show_clicks:\n line = mpl.lines.Line2D([event.xdata], [event.ydata],\n marker=\"+\", color=\"r\")\n event.inaxes.add_line(line)\n marks.append(line)\n self.canvas.draw()\n if len(clicks) == n and n > 0:\n self.canvas.stop_event_loop()\n\n _blocking_input.blocking_input_loop(\n self, [\"button_press_event\", \"key_press_event\"], timeout, handler)\n\n # Cleanup.\n for mark in marks:\n mark.remove()\n self.canvas.draw()\n\n return clicks\n\n def waitforbuttonpress(self, timeout=-1):\n \"\"\"\n Blocking call to interact with the figure.\n\n Wait for user input and return True if a key was pressed, False if a\n mouse button was pressed and None if no input was given within\n *timeout* seconds. Negative values deactivate *timeout*.\n \"\"\"\n event = None\n\n def handler(ev):\n nonlocal event\n event = ev\n self.canvas.stop_event_loop()\n\n _blocking_input.blocking_input_loop(\n self, [\"button_press_event\", \"key_press_event\"], timeout, handler)\n\n return None if event is None else event.name == \"key_press_event\"\n\n def tight_layout(self, *, pad=1.08, h_pad=None, w_pad=None, rect=None):\n \"\"\"\n Adjust the padding between and around subplots.\n\n To exclude an artist on the Axes from the bounding box calculation\n that determines the subplot parameters (i.e. 
legend, or annotation),\n set ``a.set_in_layout(False)`` for that artist.\n\n Parameters\n ----------\n pad : float, default: 1.08\n Padding between the figure edge and the edges of subplots,\n as a fraction of the font size.\n h_pad, w_pad : float, default: *pad*\n Padding (height/width) between edges of adjacent subplots,\n as a fraction of the font size.\n rect : tuple (left, bottom, right, top), default: (0, 0, 1, 1)\n A rectangle in normalized figure coordinates into which the whole\n subplots area (including labels) will fit.\n\n See Also\n --------\n .Figure.set_layout_engine\n .pyplot.tight_layout\n \"\"\"\n # note that here we do not permanently set the figures engine to\n # tight_layout but rather just perform the layout in place and remove\n # any previous engines.\n engine = TightLayoutEngine(pad=pad, h_pad=h_pad, w_pad=w_pad,\n rect=rect)\n try:\n previous_engine = self.get_layout_engine()\n self.set_layout_engine(engine)\n engine.execute(self)\n if not isinstance(previous_engine, TightLayoutEngine) \\\n and previous_engine is not None:\n _api.warn_external('The figure layout has changed to tight')\n finally:\n self.set_layout_engine('none')\n\n\ndef figaspect(arg):\n \"\"\"\n Calculate the width and height for a figure with a specified aspect ratio.\n\n While the height is taken from :rc:`figure.figsize`, the width is\n adjusted to match the desired aspect ratio. Additionally, it is ensured\n that the width is in the range [4., 16.] and the height is in the range\n [2., 16.]. If necessary, the default height is adjusted to ensure this.\n\n Parameters\n ----------\n arg : float or 2D array\n If a float, this defines the aspect ratio (i.e. the ratio height /\n width).\n In case of an array the aspect ratio is number of rows / number of\n columns, so that the array could be fitted in the figure undistorted.\n\n Returns\n -------\n width, height : float\n The figure size in inches.\n\n Notes\n -----\n If you want to create an Axes within the figure, that still preserves the\n aspect ratio, be sure to create it with equal width and height. See\n examples below.\n\n Thanks to Fernando Perez for this function.\n\n Examples\n --------\n Make a figure twice as tall as it is wide::\n\n w, h = figaspect(2.)\n fig = Figure(figsize=(w, h))\n ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])\n ax.imshow(A, **kwargs)\n\n Make a figure with the proper aspect for an array::\n\n A = rand(5, 3)\n w, h = figaspect(A)\n fig = Figure(figsize=(w, h))\n ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])\n ax.imshow(A, **kwargs)\n \"\"\"\n\n isarray = hasattr(arg, 'shape') and not np.isscalar(arg)\n\n # min/max sizes to respect when autoscaling. 
If John likes the idea, they\n # could become rc parameters, for now they're hardwired.\n figsize_min = np.array((4.0, 2.0)) # min length for width/height\n figsize_max = np.array((16.0, 16.0)) # max length for width/height\n\n # Extract the aspect ratio of the array\n if isarray:\n nr, nc = arg.shape[:2]\n arr_ratio = nr / nc\n else:\n arr_ratio = arg\n\n # Height of user figure defaults\n fig_height = mpl.rcParams['figure.figsize'][1]\n\n # New size for the figure, keeping the aspect ratio of the caller\n newsize = np.array((fig_height / arr_ratio, fig_height))\n\n # Sanity checks, don't drop either dimension below figsize_min\n newsize /= min(1.0, *(newsize / figsize_min))\n\n # Avoid humongous windows as well\n newsize /= max(1.0, *(newsize / figsize_max))\n\n # Finally, if we have a really funky aspect ratio, break it but respect\n # the min/max dimensions (we don't want figures 10 feet tall!)\n newsize = np.clip(newsize, figsize_min, figsize_max)\n return newsize\n", "lib/matplotlib/figure.pyi": "import os\n\nfrom matplotlib import backend_bases, projections\nfrom matplotlib.artist import Artist, allow_rasterization\nfrom matplotlib.axes import Axes, SubplotBase\nfrom matplotlib.backend_bases import (\n DrawEvent,\n FigureCanvasBase,\n MouseButton,\n MouseEvent,\n NonGuiException,\n RendererBase,\n)\nfrom matplotlib.colors import Colormap, Normalize\nfrom matplotlib.colorbar import Colorbar\nfrom matplotlib.cm import ScalarMappable\nfrom matplotlib.gridspec import GridSpec, SubplotSpec\nfrom matplotlib.image import _ImageBase\nfrom matplotlib.layout_engine import (\n ConstrainedLayoutEngine,\n LayoutEngine,\n PlaceHolderLayoutEngine,\n TightLayoutEngine,\n)\nfrom matplotlib.legend import Legend\nfrom matplotlib.lines import Line2D\nfrom matplotlib.patches import Rectangle, Patch\nfrom matplotlib.text import Text\nfrom matplotlib.transforms import (\n Affine2D,\n Bbox,\n BboxBase,\n BboxTransformTo,\n TransformedBbox,\n Transform,\n)\n\nimport numpy as np\nfrom numpy.typing import ArrayLike\n\nfrom collections.abc import Callable, Iterable\nfrom typing import Any, IO, Literal, overload\nfrom .typing import ColorType, HashableList\n\nclass SubplotParams:\n def __init__(\n self,\n left: float | None = ...,\n bottom: float | None = ...,\n right: float | None = ...,\n top: float | None = ...,\n wspace: float | None = ...,\n hspace: float | None = ...,\n ) -> None: ...\n left: float\n right: float\n bottom: float\n top: float\n wspace: float\n hspace: float\n def update(\n self,\n left: float | None = ...,\n bottom: float | None = ...,\n right: float | None = ...,\n top: float | None = ...,\n wspace: float | None = ...,\n hspace: float | None = ...,\n ) -> None: ...\n\nclass FigureBase(Artist):\n figure: FigureBase | None\n artists: list[Artist]\n lines: list[Line2D]\n patches: list[Patch]\n texts: list[Text]\n images: list[_ImageBase]\n legends: list[Legend]\n subfigs: list[SubFigure]\n stale: bool\n suppressComposite: bool | None\n def __init__(self, **kwargs) -> None: ...\n def autofmt_xdate(\n self,\n bottom: float = ...,\n rotation: int = ...,\n ha: Literal[\"left\", \"center\", \"right\"] = ...,\n which: Literal[\"major\", \"minor\", \"both\"] = ...,\n ) -> None: ...\n def get_children(self) -> list[Artist]: ...\n def contains(self, mouseevent: MouseEvent) -> tuple[bool, dict[Any, Any]]: ...\n def suptitle(self, t: str, **kwargs) -> Text: ...\n def supxlabel(self, t: str, **kwargs) -> Text: ...\n def supylabel(self, t: str, **kwargs) -> Text: ...\n def get_edgecolor(self) -> 
ColorType: ...\n def get_facecolor(self) -> ColorType: ...\n def get_frameon(self) -> bool: ...\n def set_linewidth(self, linewidth: float) -> None: ...\n def get_linewidth(self) -> float: ...\n def set_edgecolor(self, color: ColorType) -> None: ...\n def set_facecolor(self, color: ColorType) -> None: ...\n def set_frameon(self, b: bool) -> None: ...\n @property\n def frameon(self) -> bool: ...\n @frameon.setter\n def frameon(self, b: bool) -> None: ...\n def add_artist(self, artist: Artist, clip: bool = ...) -> Artist: ...\n @overload\n def add_axes(self, ax: Axes) -> Axes: ...\n @overload\n def add_axes(\n self,\n rect: tuple[float, float, float, float],\n projection: None | str = ...,\n polar: bool = ...,\n **kwargs\n ) -> Axes: ...\n\n # TODO: docstring indicates SubplotSpec a valid arg, but none of the listed signatures appear to be that\n @overload\n def add_subplot(\n self, nrows: int, ncols: int, index: int | tuple[int, int], **kwargs\n ) -> Axes: ...\n @overload\n def add_subplot(self, pos: int, **kwargs) -> Axes: ...\n @overload\n def add_subplot(self, ax: Axes, **kwargs) -> Axes: ...\n @overload\n def add_subplot(self, ax: SubplotSpec, **kwargs) -> Axes: ...\n @overload\n def add_subplot(self, **kwargs) -> Axes: ...\n @overload\n def subplots(\n self,\n nrows: int = ...,\n ncols: int = ...,\n *,\n sharex: bool | Literal[\"none\", \"all\", \"row\", \"col\"] = ...,\n sharey: bool | Literal[\"none\", \"all\", \"row\", \"col\"] = ...,\n squeeze: Literal[False],\n width_ratios: ArrayLike | None = ...,\n height_ratios: ArrayLike | None = ...,\n subplot_kw: dict[str, Any] | None = ...,\n gridspec_kw: dict[str, Any] | None = ...\n ) -> np.ndarray: ...\n @overload\n def subplots(\n self,\n nrows: int = ...,\n ncols: int = ...,\n *,\n sharex: bool | Literal[\"none\", \"all\", \"row\", \"col\"] = ...,\n sharey: bool | Literal[\"none\", \"all\", \"row\", \"col\"] = ...,\n squeeze: Literal[True] = ...,\n width_ratios: ArrayLike | None = ...,\n height_ratios: ArrayLike | None = ...,\n subplot_kw: dict[str, Any] | None = ...,\n gridspec_kw: dict[str, Any] | None = ...\n ) -> np.ndarray | SubplotBase | Axes: ...\n def delaxes(self, ax: Axes) -> None: ...\n def clear(self, keep_observers: bool = ...) -> None: ...\n def clf(self, keep_observers: bool = ...): ...\n\n @overload\n def legend(self) -> Legend: ...\n @overload\n def legend(self, handles: Iterable[Artist], labels: Iterable[str], **kwargs) -> Legend: ...\n @overload\n def legend(self, *, handles: Iterable[Artist], **kwargs) -> Legend: ...\n @overload\n def legend(self, labels: Iterable[str], **kwargs) -> Legend: ...\n @overload\n def legend(self, **kwargs) -> Legend: ...\n\n def text(\n self,\n x: float,\n y: float,\n s: str,\n fontdict: dict[str, Any] | None = ...,\n **kwargs\n ) -> Text: ...\n def colorbar(\n self,\n mappable: ScalarMappable,\n cax: Axes | None = ...,\n ax: Axes | Iterable[Axes] | None = ...,\n use_gridspec: bool = ...,\n **kwargs\n ) -> Colorbar: ...\n def subplots_adjust(\n self,\n left: float | None = ...,\n bottom: float | None = ...,\n right: float | None = ...,\n top: float | None = ...,\n wspace: float | None = ...,\n hspace: float | None = ...,\n ) -> None: ...\n def align_xlabels(self, axs: Iterable[Axes] | None = ...) -> None: ...\n def align_ylabels(self, axs: Iterable[Axes] | None = ...) -> None: ...\n def align_labels(self, axs: Iterable[Axes] | None = ...) 
-> None: ...\n def add_gridspec(self, nrows: int = ..., ncols: int = ..., **kwargs): ...\n @overload\n def subfigures(\n self,\n nrows: int = ...,\n ncols: int = ...,\n squeeze: Literal[False] = ...,\n wspace: float | None = ...,\n hspace: float | None = ...,\n width_ratios: ArrayLike | None = ...,\n height_ratios: ArrayLike | None = ...,\n **kwargs\n ) -> np.ndarray: ...\n @overload\n def subfigures(\n self,\n nrows: int = ...,\n ncols: int = ...,\n squeeze: Literal[True] = ...,\n wspace: float | None = ...,\n hspace: float | None = ...,\n width_ratios: ArrayLike | None = ...,\n height_ratios: ArrayLike | None = ...,\n **kwargs\n ) -> np.ndarray | SubFigure: ...\n def add_subfigure(self, subplotspec: SubplotSpec, **kwargs) -> SubFigure: ...\n def sca(self, a: Axes) -> Axes: ...\n def gca(self) -> Axes: ...\n def _gci(self) -> ScalarMappable | None: ...\n def _process_projection_requirements(\n self, *args, axes_class=None, polar=False, projection=None, **kwargs\n ): ...\n def get_default_bbox_extra_artists(self) -> list[Artist]: ...\n def get_tightbbox(\n self,\n renderer: RendererBase | None = ...,\n bbox_extra_artists: Iterable[Artist] | None = ...,\n ) -> Bbox: ...\n\n # Any in list of list is recursive list[list[Hashable | list[Hashable | ...]]] but that can't really be type checked\n def subplot_mosaic(\n self,\n mosaic: str | HashableList,\n *,\n sharex: bool = ...,\n sharey: bool = ...,\n width_ratios: ArrayLike | None = ...,\n height_ratios: ArrayLike | None = ...,\n empty_sentinel: Any = ...,\n subplot_kw: dict[str, Any] | None = ...,\n per_subplot_kw: dict[Any, dict[str, Any]] | None = ...,\n gridspec_kw: dict[str, Any] | None = ...\n ) -> dict[Any, Axes]: ...\n\nclass SubFigure(FigureBase):\n figure: FigureBase\n subplotpars: SubplotParams\n dpi_scale_trans: Affine2D\n canvas: FigureCanvasBase\n transFigure: Transform\n bbox_relative: Bbox\n figbbox: Bbox\n bbox: Bbox\n transSubfigure: Transform\n patch: Rectangle\n def __init__(\n self,\n parent: Figure | SubFigure,\n subplotspec: SubplotSpec,\n *,\n facecolor: ColorType | None = ...,\n edgecolor: ColorType | None = ...,\n linewidth: float = ...,\n frameon: bool | None = ...,\n **kwargs\n ) -> None: ...\n @property\n def dpi(self) -> float: ...\n @dpi.setter\n def dpi(self, value: float) -> None: ...\n def get_dpi(self) -> float: ...\n def set_dpi(self, val) -> None: ...\n def get_constrained_layout(self) -> bool: ...\n def get_constrained_layout_pads(\n self, relative: bool = ...\n ) -> tuple[float, float, float, float]: ...\n def get_layout_engine(self) -> LayoutEngine: ...\n @property # type: ignore[misc]\n def axes(self) -> list[Axes]: ... 
# type: ignore[override]\n def get_axes(self) -> list[Axes]: ...\n\nclass Figure(FigureBase):\n bbox_inches: Bbox\n dpi_scale_trans: Affine2D\n bbox: Bbox\n figbbox: Bbox\n transFigure: Transform\n transSubfigure: Transform\n patch: Rectangle\n subplotpars: SubplotParams\n def __init__(\n self,\n figsize: tuple[float, float] | None = ...,\n dpi: float | None = ...,\n *,\n facecolor: ColorType | None = ...,\n edgecolor: ColorType | None = ...,\n linewidth: float = ...,\n frameon: bool | None = ...,\n subplotpars: SubplotParams | None = ...,\n tight_layout: bool | dict[str, Any] | None = ...,\n constrained_layout: bool | dict[str, Any] | None = ...,\n layout: Literal[\"constrained\", \"compressed\", \"tight\"]\n | LayoutEngine\n | None = ...,\n **kwargs\n ) -> None: ...\n def pick(self, mouseevent: MouseEvent) -> None: ...\n def set_layout_engine(\n self,\n layout: Literal[\"constrained\", \"compressed\", \"tight\", \"none\"]\n | LayoutEngine\n | None = ...,\n **kwargs\n ) -> None: ...\n def get_layout_engine(self) -> LayoutEngine | None: ...\n def show(self, warn: bool = ...) -> None: ...\n @property # type: ignore[misc]\n def axes(self) -> list[Axes]: ... # type: ignore[override]\n def get_axes(self) -> list[Axes]: ...\n @property\n def dpi(self) -> float: ...\n @dpi.setter\n def dpi(self, dpi: float) -> None: ...\n def get_tight_layout(self) -> bool: ...\n def get_constrained_layout_pads(\n self, relative: bool = ...\n ) -> tuple[float, float, float, float]: ...\n def get_constrained_layout(self) -> bool: ...\n canvas: FigureCanvasBase\n def set_canvas(self, canvas: FigureCanvasBase) -> None: ...\n def figimage(\n self,\n X: ArrayLike,\n xo: int = ...,\n yo: int = ...,\n alpha: float | None = ...,\n norm: str | Normalize | None = ...,\n cmap: str | Colormap | None = ...,\n vmin: float | None = ...,\n vmax: float | None = ...,\n origin: Literal[\"upper\", \"lower\"] | None = ...,\n resize: bool = ...,\n **kwargs\n ): ...\n def set_size_inches(\n self, w: float | tuple[float, float], h: float | None = ..., forward: bool = ...\n ) -> None: ...\n def get_size_inches(self) -> np.ndarray: ...\n def get_figwidth(self) -> float: ...\n def get_figheight(self) -> float: ...\n def get_dpi(self) -> float: ...\n def set_dpi(self, val: float) -> None: ...\n def set_figwidth(self, val: float, forward: bool = ...) -> None: ...\n def set_figheight(self, val: float, forward: bool = ...) -> None: ...\n def clear(self, keep_observers: bool = ...) -> None: ...\n def draw_without_rendering(self) -> None: ...\n def draw_artist(self, a: Artist) -> None: ...\n def add_axobserver(self, func: Callable[[Figure], Any]): ...\n def savefig(\n self,\n fname: str | os.PathLike | IO,\n *,\n transparent: bool | None = ...,\n **kwargs\n ) -> None: ...\n def ginput(\n self,\n n: int = ...,\n timeout: float = ...,\n show_clicks: bool = ...,\n mouse_add: MouseButton = ...,\n mouse_pop: MouseButton = ...,\n mouse_stop: MouseButton = ...,\n ) -> list[tuple[int, int]]: ...\n def waitforbuttonpress(self, timeout: float = ...): ...\n def tight_layout(\n self,\n *,\n pad: float = ...,\n h_pad: float | None = ...,\n w_pad: float | None = ...,\n rect: tuple[float, float, float, float] | None = ...\n ) -> None: ...\n\ndef figaspect(arg: float | ArrayLike) -> tuple[float, float]: ...\n"}
|
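The `files` snapshot above reproduces `lib/matplotlib/figure.py` and `lib/matplotlib/figure.pyi` verbatim for this record; its docstrings cover, among other things, the `figaspect` helper and layout-engine selection through `Figure.set_layout_engine`. A minimal sketch of those two APIs, assuming a standard matplotlib installation and an off-screen Agg backend (illustrative only, not part of the record):

```python
# Illustrative only: exercises the figaspect helper and the layout-engine API
# documented in the figure.py source captured above. Assumes a standard
# matplotlib install; the Agg backend keeps it runnable without a display.
import matplotlib
matplotlib.use("Agg")

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.figure import figaspect

A = np.random.rand(5, 3)
w, h = figaspect(A)                   # figure size matching the array's aspect ratio
fig = plt.figure(figsize=(w, h))
fig.set_layout_engine("constrained")  # same path taken by Figure(layout="constrained")
ax = fig.add_subplot()
ax.imshow(A)
fig.savefig("figaspect_demo.png", dpi=150)
```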
diff --git a/doc/users/next_whats_new/get_suptitle.rst b/doc/users/next_whats_new/get_suptitle.rst
new file mode 100644
index 000000000000..b03ad10b1b4c
--- /dev/null
+++ b/doc/users/next_whats_new/get_suptitle.rst
@@ -0,0 +1,4 @@
+``Figure.get_suptitle()``, ``Figure.get_supxlabel()``, ``Figure.get_supylabel()``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+These methods return the strings set by ``Figure.suptitle()``, ``Figure.supxlabel()``
+and ``Figure.supylabel()`` respectively.
diff --git a/lib/matplotlib/figure.pyi b/lib/matplotlib/figure.pyi
index ee21892f32ac..f4c31506a2e1 100644
--- a/lib/matplotlib/figure.pyi
+++ b/lib/matplotlib/figure.pyi
@@ -90,8 +90,11 @@ class FigureBase(Artist):
def get_children(self) -> list[Artist]: ...
def contains(self, mouseevent: MouseEvent) -> tuple[bool, dict[Any, Any]]: ...
def suptitle(self, t: str, **kwargs) -> Text: ...
+ def get_suptitle(self) -> str: ...
def supxlabel(self, t: str, **kwargs) -> Text: ...
+ def get_supxlabel(self) -> str: ...
def supylabel(self, t: str, **kwargs) -> Text: ...
+ def get_supylabel(self) -> str: ...
def get_edgecolor(self) -> ColorType: ...
def get_facecolor(self) -> ColorType: ...
def get_frameon(self) -> bool: ...
|
{"lib/matplotlib/figure.py": [{"type": "function", "name": "FigureBase.get_suptitle", "lines": [391, 394], "signature": "def get_suptitle(self):", "doc": "Return the suptitle as string or an empty string if not set."}, {"type": "function", "name": "FigureBase.get_supxlabel", "lines": [406, 409], "signature": "def get_supxlabel(self):", "doc": "Return the supxlabel as string or an empty string if not set."}, {"type": "function", "name": "FigureBase.get_supylabel", "lines": [422, 425], "signature": "def get_supylabel(self):", "doc": "Return the supylabel as string or an empty string if not set."}]}
|
3.7
|
["lib/matplotlib/tests/test_figure.py::test_get_suptitle_supxlabel_supylabel"]
|
["lib/matplotlib/tests/test_figure.py::test_align_labels[png]", "lib/matplotlib/tests/test_figure.py::test_align_labels_stray_axes", "lib/matplotlib/tests/test_figure.py::test_figure_label", "lib/matplotlib/tests/test_figure.py::test_fignum_exists", "lib/matplotlib/tests/test_figure.py::test_clf_keyword", "lib/matplotlib/tests/test_figure.py::test_figure[png]", "lib/matplotlib/tests/test_figure.py::test_figure_legend[png]", "lib/matplotlib/tests/test_figure.py::test_gca", "lib/matplotlib/tests/test_figure.py::test_add_subplot_subclass", "lib/matplotlib/tests/test_figure.py::test_add_subplot_invalid", "lib/matplotlib/tests/test_figure.py::test_suptitle[png]", "lib/matplotlib/tests/test_figure.py::test_suptitle_fontproperties", "lib/matplotlib/tests/test_figure.py::test_suptitle_subfigures", "lib/matplotlib/tests/test_figure.py::test_alpha[png]", "lib/matplotlib/tests/test_figure.py::test_too_many_figures", "lib/matplotlib/tests/test_figure.py::test_iterability_axes_argument", "lib/matplotlib/tests/test_figure.py::test_set_fig_size", "lib/matplotlib/tests/test_figure.py::test_axes_remove", "lib/matplotlib/tests/test_figure.py::test_figaspect", "lib/matplotlib/tests/test_figure.py::test_autofmt_xdate[both]", "lib/matplotlib/tests/test_figure.py::test_autofmt_xdate[major]", "lib/matplotlib/tests/test_figure.py::test_autofmt_xdate[minor]", "lib/matplotlib/tests/test_figure.py::test_change_dpi", "lib/matplotlib/tests/test_figure.py::test_invalid_figure_size[1-nan]", "lib/matplotlib/tests/test_figure.py::test_invalid_figure_size[-1-1]", "lib/matplotlib/tests/test_figure.py::test_invalid_figure_size[inf-1]", "lib/matplotlib/tests/test_figure.py::test_invalid_figure_add_axes", "lib/matplotlib/tests/test_figure.py::test_subplots_shareax_loglabels", "lib/matplotlib/tests/test_figure.py::test_savefig", "lib/matplotlib/tests/test_figure.py::test_savefig_warns", "lib/matplotlib/tests/test_figure.py::test_savefig_backend", "lib/matplotlib/tests/test_figure.py::test_savefig_pixel_ratio[Agg]", "lib/matplotlib/tests/test_figure.py::test_savefig_pixel_ratio[Cairo]", "lib/matplotlib/tests/test_figure.py::test_savefig_preserve_layout_engine", "lib/matplotlib/tests/test_figure.py::test_savefig_locate_colorbar", "lib/matplotlib/tests/test_figure.py::test_savefig_transparent[png]", "lib/matplotlib/tests/test_figure.py::test_figure_repr", "lib/matplotlib/tests/test_figure.py::test_valid_layouts", "lib/matplotlib/tests/test_figure.py::test_invalid_layouts", "lib/matplotlib/tests/test_figure.py::test_tightlayout_autolayout_deconflict[png]", "lib/matplotlib/tests/test_figure.py::test_layout_change_warning[constrained]", "lib/matplotlib/tests/test_figure.py::test_layout_change_warning[compressed]", "lib/matplotlib/tests/test_figure.py::test_add_artist[png]", "lib/matplotlib/tests/test_figure.py::test_fspath[png]", "lib/matplotlib/tests/test_figure.py::test_fspath[pdf]", "lib/matplotlib/tests/test_figure.py::test_fspath[ps]", "lib/matplotlib/tests/test_figure.py::test_fspath[eps]", "lib/matplotlib/tests/test_figure.py::test_fspath[svg]", "lib/matplotlib/tests/test_figure.py::test_tightbbox", "lib/matplotlib/tests/test_figure.py::test_axes_removal", "lib/matplotlib/tests/test_figure.py::test_removed_axis", "lib/matplotlib/tests/test_figure.py::test_figure_clear[clear]", "lib/matplotlib/tests/test_figure.py::test_figure_clear[clf]", "lib/matplotlib/tests/test_figure.py::test_clf_not_redefined", "lib/matplotlib/tests/test_figure.py::test_picking_does_not_stale", 
"lib/matplotlib/tests/test_figure.py::test_add_subplot_twotuple", "lib/matplotlib/tests/test_figure.py::test_animated_with_canvas_change[png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_basic[x0-png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_basic[x1-png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_basic[x2-png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_basic[x3-png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_all_nested[png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_nested[png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_nested_tuple[png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_nested_width_ratios", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_nested_height_ratios", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_empty[x0-None-png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_empty[x1-SKIP-png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_empty[x2-0-png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_empty[x3-None-png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_empty[x4-SKIP-png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_empty[x5-0-png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_fail_list_of_str", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_subplot_kw[subplot_kw0-png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_subplot_kw[subplot_kw1-png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_subplot_kw[None-png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_per_subplot_kw[BC-png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_per_subplot_kw[multi_value1-png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_string_parser", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_per_subplot_kw_expander", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_extra_per_subplot_kw", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_single_str_input[AAA\\nBBB-png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_single_str_input[\\nAAA\\nBBB\\n-png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_single_str_input[ABC\\nDEF-png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_fail[x0-(?m)we", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_fail[x1-There", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_fail[AAA\\nc\\nBBB-All", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_fail[x3-All", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_hashable_keys[png]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_user_order[abc]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_user_order[cab]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_user_order[bca]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_user_order[cba]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_user_order[acb]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_user_order[bac]", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_nested_user_order", "lib/matplotlib/tests/test_figure.py::TestSubplotMosaic::test_share_all", "lib/matplotlib/tests/test_figure.py::test_reused_gridspec", 
"lib/matplotlib/tests/test_figure.py::test_subfigure[png]", "lib/matplotlib/tests/test_figure.py::test_subfigure_tightbbox", "lib/matplotlib/tests/test_figure.py::test_subfigure_dpi", "lib/matplotlib/tests/test_figure.py::test_subfigure_ss[png]", "lib/matplotlib/tests/test_figure.py::test_subfigure_double[png]", "lib/matplotlib/tests/test_figure.py::test_subfigure_spanning", "lib/matplotlib/tests/test_figure.py::test_subfigure_ticks", "lib/matplotlib/tests/test_figure.py::test_subfigure_scatter_size[png]", "lib/matplotlib/tests/test_figure.py::test_subfigure_pdf", "lib/matplotlib/tests/test_figure.py::test_add_subplot_kwargs", "lib/matplotlib/tests/test_figure.py::test_add_axes_kwargs", "lib/matplotlib/tests/test_figure.py::test_ginput", "lib/matplotlib/tests/test_figure.py::test_waitforbuttonpress", "lib/matplotlib/tests/test_figure.py::test_kwargs_pass", "lib/matplotlib/tests/test_figure.py::test_rcparams[png]", "lib/matplotlib/tests/test_figure.py::test_deepcopy", "lib/matplotlib/tests/test_figure.py::test_unpickle_with_device_pixel_ratio", "lib/matplotlib/tests/test_figure.py::test_gridspec_no_mutate_input", "lib/matplotlib/tests/test_figure.py::test_savefig_metadata[eps]", "lib/matplotlib/tests/test_figure.py::test_savefig_metadata[pdf]", "lib/matplotlib/tests/test_figure.py::test_savefig_metadata[png]", "lib/matplotlib/tests/test_figure.py::test_savefig_metadata[ps]", "lib/matplotlib/tests/test_figure.py::test_savefig_metadata[svg]", "lib/matplotlib/tests/test_figure.py::test_savefig_metadata[svgz]", "lib/matplotlib/tests/test_figure.py::test_savefig_metadata_error[jpeg]", "lib/matplotlib/tests/test_figure.py::test_savefig_metadata_error[jpg]", "lib/matplotlib/tests/test_figure.py::test_savefig_metadata_error[tif]", "lib/matplotlib/tests/test_figure.py::test_savefig_metadata_error[tiff]", "lib/matplotlib/tests/test_figure.py::test_savefig_metadata_error[webp]", "lib/matplotlib/tests/test_figure.py::test_savefig_metadata_error[raw]", "lib/matplotlib/tests/test_figure.py::test_savefig_metadata_error[rgba]"]
|
a4dca24d04f928a9e614db403c716237446140b2
|
{"first_commit_time": 1681507590.0, "pr_title": "Add Figure methods get_suptitle(), get_subxlabel(), get_supylabel()", "pr_body": "## PR Summary\r\n\r\nWe had no public API to optain these values. The API is modelled analogous to `Axes.get_title()` / `Axes.get_x/ylabel()`, i.e. returning a string (and not the Text object) and returning an empty string if no text has been set yet.\r\n\r\nSide remark. It's historic and unfortunate that the setters don't have the `set_` prefix (in contrast to e.g. `set_title()`). Nevertheless `get_*` is the only sane choice for the getters introduced here.", "pr_timeline": [], "issues": {}}
|
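The record above introduces `FigureBase.get_suptitle`, `get_supxlabel` and `get_supylabel`, each returning the string set by the corresponding setter, or an empty string if nothing was set. A minimal usage sketch, assuming a matplotlib build that already contains this change:

```python
# Minimal sketch of the getters added in the record above; assumes a matplotlib
# build that includes FigureBase.get_suptitle/get_supxlabel/get_supylabel.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
fig.suptitle("Overall title")
fig.supxlabel("shared x label")

print(fig.get_suptitle())   # "Overall title"
print(fig.get_supxlabel())  # "shared x label"
print(fig.get_supylabel())  # "" because no supylabel was set
```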
matplotlib/matplotlib
| 26293
|
https://github.com/matplotlib/matplotlib/pull/26293
|
matplotlib__matplotlib-26293
|
["26281", "0000"]
|
a4dca24d04f928a9e614db403c716237446140b2
|
diff --git a/doc/api/axes_api.rst b/doc/api/axes_api.rst
index 3457368fa51c..8b01a120da5b 100644
--- a/doc/api/axes_api.rst
+++ b/doc/api/axes_api.rst
@@ -335,6 +335,8 @@ Autoscaling and margins
Axes.use_sticky_edges
Axes.margins
+ Axes.get_xmargin
+ Axes.get_ymargin
Axes.set_xmargin
Axes.set_ymargin
diff --git a/doc/api/toolkits/mplot3d/axes3d.rst b/doc/api/toolkits/mplot3d/axes3d.rst
index f6d8e2529896..b581494e4883 100644
--- a/doc/api/toolkits/mplot3d/axes3d.rst
+++ b/doc/api/toolkits/mplot3d/axes3d.rst
@@ -137,6 +137,7 @@ Autoscaling and margins
:template: autosummary.rst
:nosignatures:
+ get_zmargin
set_zmargin
margins
autoscale
diff --git a/doc/users/next_whats_new/margin_getters.rst b/doc/users/next_whats_new/margin_getters.rst
new file mode 100644
index 000000000000..c43709a17d52
--- /dev/null
+++ b/doc/users/next_whats_new/margin_getters.rst
@@ -0,0 +1,4 @@
+Getters for xmargin, ymargin and zmargin
+----------------------------------------
+``.Axes.get_xmargin()``, ``.Axes.get_ymargin()`` and ``.Axes3D.get_zmargin()`` methods have been added to return
+the margin values set by ``.Axes.set_xmargin()``, ``.Axes.set_ymargin()`` and ``.Axes3D.set_zmargin()``, respectively.
diff --git a/lib/matplotlib/axes/_base.py b/lib/matplotlib/axes/_base.py
index 3796d9bbe508..ad06e6d552bc 100644
--- a/lib/matplotlib/axes/_base.py
+++ b/lib/matplotlib/axes/_base.py
@@ -2617,6 +2617,38 @@ def use_sticky_edges(self, b):
self._use_sticky_edges = bool(b)
# No effect until next autoscaling, which will mark the Axes as stale.
+ def get_xmargin(self):
+ """
+ Retrieve autoscaling margin of the x-axis.
+
+ .. versionadded:: 3.9
+
+ Returns
+ -------
+ xmargin : float
+
+ See Also
+ --------
+ matplotlib.axes.Axes.set_xmargin
+ """
+ return self._xmargin
+
+ def get_ymargin(self):
+ """
+ Retrieve autoscaling margin of the y-axis.
+
+ .. versionadded:: 3.9
+
+ Returns
+ -------
+ ymargin : float
+
+ See Also
+ --------
+ matplotlib.axes.Axes.set_ymargin
+ """
+ return self._ymargin
+
def set_xmargin(self, m):
"""
Set padding of X data limits prior to autoscaling.
diff --git a/lib/matplotlib/axes/_base.pyi b/lib/matplotlib/axes/_base.pyi
index d41ecae1803c..e3644585296d 100644
--- a/lib/matplotlib/axes/_base.pyi
+++ b/lib/matplotlib/axes/_base.pyi
@@ -242,6 +242,8 @@ class _AxesBase(martist.Artist):
def use_sticky_edges(self) -> bool: ...
@use_sticky_edges.setter
def use_sticky_edges(self, b: bool) -> None: ...
+ def get_xmargin(self) -> float: ...
+ def get_ymargin(self) -> float: ...
def set_xmargin(self, m: float) -> None: ...
def set_ymargin(self, m: float) -> None: ...
diff --git a/lib/mpl_toolkits/mplot3d/axes3d.py b/lib/mpl_toolkits/mplot3d/axes3d.py
index aeb6a66d2c98..e7abdc0767b5 100644
--- a/lib/mpl_toolkits/mplot3d/axes3d.py
+++ b/lib/mpl_toolkits/mplot3d/axes3d.py
@@ -513,6 +513,22 @@ def update_datalim(self, xys, **kwargs):
get_autoscalez_on = _axis_method_wrapper("zaxis", "_get_autoscale_on")
set_autoscalez_on = _axis_method_wrapper("zaxis", "_set_autoscale_on")
+ def get_zmargin(self):
+ """
+ Retrieve autoscaling margin of the z-axis.
+
+ .. versionadded:: 3.9
+
+ Returns
+ -------
+ zmargin : float
+
+ See Also
+ --------
+ mpl_toolkits.mplot3d.axes3d.Axes3D.set_zmargin
+ """
+ return self._zmargin
+
def set_zmargin(self, m):
"""
Set padding of Z data limits prior to autoscaling.
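A minimal usage sketch of the margin getters added by the patch above, not part of the record itself. It assumes a matplotlib build that includes this change (`Axes.get_xmargin`/`get_ymargin` and `Axes3D.get_zmargin`, slated for 3.9 per the docstrings); the values mirror those used in the accompanying tests.

```python
# Sketch: read back autoscaling margins via the getters added in the patch.
import matplotlib.pyplot as plt

fig = plt.figure()

ax2d = fig.add_subplot(121)
ax2d.margins(x=0.2, y=0.3)
print(ax2d.get_xmargin(), ax2d.get_ymargin())  # 0.2 0.3

ax3d = fig.add_subplot(122, projection="3d")
ax3d.margins(0.1, 0.2, 0.3)
print(ax3d.get_zmargin())                      # 0.3
```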
|
diff --git a/lib/matplotlib/tests/test_axes.py b/lib/matplotlib/tests/test_axes.py
index 0fcb2eb26cbb..6dea39a702fc 100644
--- a/lib/matplotlib/tests/test_axes.py
+++ b/lib/matplotlib/tests/test_axes.py
@@ -6153,6 +6153,14 @@ def test_margins():
ymax + (ymax - ymin) * 0.5)
+def test_margin_getters():
+ fig = plt.figure()
+ ax = fig.add_subplot()
+ ax.margins(0.2, 0.3)
+ assert ax.get_xmargin() == 0.2
+ assert ax.get_ymargin() == 0.3
+
+
def test_set_margin_updates_limits():
mpl.style.use("default")
fig, ax = plt.subplots()
diff --git a/lib/mpl_toolkits/mplot3d/tests/test_axes3d.py b/lib/mpl_toolkits/mplot3d/tests/test_axes3d.py
index 1f8764cbab9d..df9f2ae52fd7 100644
--- a/lib/mpl_toolkits/mplot3d/tests/test_axes3d.py
+++ b/lib/mpl_toolkits/mplot3d/tests/test_axes3d.py
@@ -1994,6 +1994,15 @@ def test_margins():
assert ax.margins() == (0, 0.1, 0)
+def test_margin_getters():
+ fig = plt.figure()
+ ax = fig.add_subplot(projection='3d')
+ ax.margins(0.1, 0.2, 0.3)
+ assert ax.get_xmargin() == 0.1
+ assert ax.get_ymargin() == 0.2
+ assert ax.get_zmargin() == 0.3
+
+
@pytest.mark.parametrize('err, args, kwargs, match', (
(ValueError, (-1,), {}, r'margin must be greater than -0\.5'),
(ValueError, (1, -1, 1), {}, r'margin must be greater than -0\.5'),
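Assuming a matplotlib source checkout with the patch and test patch above applied, the two new `test_margin_getters` tests could be run selectively, for example via pytest's Python entry point (a sketch, not part of the record):

```python
# Sketch: run only the tests introduced by the test patch above.
# Assumes the working directory is the root of a matplotlib checkout
# with the changes applied and pytest installed.
import pytest

pytest.main([
    "lib/matplotlib/tests/test_axes.py::test_margin_getters",
    "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_margin_getters",
    "-q",
])
```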
| 2023-07-12T06:31:49
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"doc/api/axes_api.rst": "*******************\n``matplotlib.axes``\n*******************\n\nThe `~.axes.Axes` class represents one (sub-)plot in a figure. It contains the\nplotted data, axis ticks, labels, title, legend, etc. Its methods are the main\ninterface for manipulating the plot.\n\n.. currentmodule:: matplotlib.axes\n\n.. contents:: Table of Contents\n :depth: 2\n :local:\n :backlinks: entry\n :class: multicol-toc\n\n.. automodule:: matplotlib.axes\n :no-members:\n :no-undoc-members:\n\nThe Axes class\n==============\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary_class_only.rst\n :nosignatures:\n\n Axes\n\nPlotting\n========\n\nBasic\n-----\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.plot\n Axes.errorbar\n Axes.scatter\n\n Axes.plot_date\n Axes.step\n\n Axes.loglog\n Axes.semilogx\n Axes.semilogy\n\n Axes.fill_between\n Axes.fill_betweenx\n\n Axes.bar\n Axes.barh\n Axes.bar_label\n\n Axes.stem\n Axes.eventplot\n\n Axes.pie\n\n Axes.stackplot\n\n\n Axes.broken_barh\n Axes.vlines\n Axes.hlines\n Axes.fill\n\nSpans\n-----\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.axhline\n Axes.axhspan\n Axes.axvline\n Axes.axvspan\n Axes.axline\n\nSpectral\n--------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.acorr\n Axes.angle_spectrum\n Axes.cohere\n Axes.csd\n Axes.magnitude_spectrum\n Axes.phase_spectrum\n Axes.psd\n Axes.specgram\n Axes.xcorr\n\nStatistics\n----------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.ecdf\n Axes.boxplot\n Axes.violinplot\n\n Axes.bxp\n Axes.violin\n\nBinned\n------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.hexbin\n Axes.hist\n Axes.hist2d\n Axes.stairs\n\nContours\n--------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.clabel\n Axes.contour\n Axes.contourf\n\n2D arrays\n---------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.imshow\n Axes.matshow\n Axes.pcolor\n Axes.pcolorfast\n Axes.pcolormesh\n Axes.spy\n\nUnstructured triangles\n----------------------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.tripcolor\n Axes.triplot\n Axes.tricontour\n Axes.tricontourf\n\n\nText and annotations\n--------------------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.annotate\n Axes.text\n Axes.table\n Axes.arrow\n Axes.inset_axes\n Axes.indicate_inset\n Axes.indicate_inset_zoom\n Axes.secondary_xaxis\n Axes.secondary_yaxis\n\n\nVector fields\n-------------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.barbs\n Axes.quiver\n Axes.quiverkey\n Axes.streamplot\n\n\nClearing\n========\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.cla\n Axes.clear\n\n\nAppearance\n==========\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n\n Axes.axis\n\n Axes.set_axis_off\n Axes.set_axis_on\n Axes.set_frame_on\n Axes.get_frame_on\n\n Axes.set_axisbelow\n Axes.get_axisbelow\n\n Axes.grid\n\n Axes.get_facecolor\n Axes.set_facecolor\n\n\nProperty cycle\n==============\n\n.. 
autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.set_prop_cycle\n\n\nAxis / limits\n=============\n\n.. For families of methods of the form {get,set}_{x,y}foo, try to list them in\n the order set_xfoo, get_xfoo, set_yfoo, get_yfoo\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.get_xaxis\n Axes.get_yaxis\n\nAxis limits and direction\n-------------------------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.invert_xaxis\n Axes.xaxis_inverted\n Axes.invert_yaxis\n Axes.yaxis_inverted\n\n Axes.set_xlim\n Axes.get_xlim\n Axes.set_ylim\n Axes.get_ylim\n\n Axes.update_datalim\n\n Axes.set_xbound\n Axes.get_xbound\n Axes.set_ybound\n Axes.get_ybound\n\nAxis labels, title, and legend\n------------------------------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.set_xlabel\n Axes.get_xlabel\n Axes.set_ylabel\n Axes.get_ylabel\n Axes.label_outer\n\n Axes.set_title\n Axes.get_title\n Axes.legend\n Axes.get_legend\n Axes.get_legend_handles_labels\n\nAxis scales\n-----------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.set_xscale\n Axes.get_xscale\n Axes.set_yscale\n Axes.get_yscale\n\nAutoscaling and margins\n-----------------------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.use_sticky_edges\n\n Axes.margins\n Axes.set_xmargin\n Axes.set_ymargin\n\n Axes.relim\n\n Axes.autoscale\n Axes.autoscale_view\n\n Axes.set_autoscale_on\n Axes.get_autoscale_on\n\n Axes.set_autoscalex_on\n Axes.get_autoscalex_on\n\n Axes.set_autoscaley_on\n Axes.get_autoscaley_on\n\nAspect ratio\n------------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.apply_aspect\n Axes.set_aspect\n Axes.get_aspect\n\n Axes.set_box_aspect\n Axes.get_box_aspect\n\n Axes.set_adjustable\n Axes.get_adjustable\n\nTicks and tick labels\n---------------------\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.set_xticks\n Axes.get_xticks\n\n Axes.set_xticklabels\n Axes.get_xticklabels\n Axes.get_xmajorticklabels\n Axes.get_xminorticklabels\n\n Axes.get_xgridlines\n Axes.get_xticklines\n\n Axes.xaxis_date\n\n Axes.set_yticks\n Axes.get_yticks\n\n Axes.set_yticklabels\n Axes.get_yticklabels\n Axes.get_ymajorticklabels\n Axes.get_yminorticklabels\n\n Axes.get_ygridlines\n Axes.get_yticklines\n\n Axes.yaxis_date\n\n Axes.minorticks_off\n Axes.minorticks_on\n\n Axes.ticklabel_format\n Axes.tick_params\n\n Axes.locator_params\n\n\nUnits\n=====\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.convert_xunits\n Axes.convert_yunits\n Axes.have_units\n\n\nAdding artists\n==============\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.add_artist\n Axes.add_child_axes\n Axes.add_collection\n Axes.add_container\n Axes.add_image\n Axes.add_line\n Axes.add_patch\n Axes.add_table\n\n\nTwinning and sharing\n====================\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.twinx\n Axes.twiny\n\n Axes.sharex\n Axes.sharey\n\n Axes.get_shared_x_axes\n Axes.get_shared_y_axes\n\n\nAxes position\n=============\n.. 
autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.get_anchor\n Axes.set_anchor\n\n Axes.get_axes_locator\n Axes.set_axes_locator\n\n Axes.get_subplotspec\n Axes.set_subplotspec\n\n Axes.reset_position\n\n Axes.get_position\n Axes.set_position\n\n\nAsync/event based\n=================\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.stale\n Axes.pchanged\n Axes.add_callback\n Axes.remove_callback\n\n\nInteractive\n===========\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n\n Axes.can_pan\n Axes.can_zoom\n\n Axes.get_navigate\n Axes.set_navigate\n Axes.get_navigate_mode\n Axes.set_navigate_mode\n\n Axes.start_pan\n Axes.drag_pan\n Axes.end_pan\n\n Axes.format_coord\n Axes.format_cursor_data\n Axes.format_xdata\n Axes.format_ydata\n\n Axes.mouseover\n Axes.in_axes\n\n Axes.contains\n Axes.contains_point\n\n Axes.get_cursor_data\n\nChildren\n========\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.get_children\n Axes.get_images\n Axes.get_lines\n Axes.findobj\n\n\nDrawing\n=======\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.draw\n Axes.draw_artist\n Axes.redraw_in_frame\n\n Axes.get_rasterization_zorder\n Axes.set_rasterization_zorder\n\n Axes.get_window_extent\n Axes.get_tightbbox\n\n\nProjection\n==========\n\nMethods used by `~matplotlib.axis.Axis` that must be overridden for\nnon-rectilinear Axes.\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.name\n Axes.get_xaxis_transform\n Axes.get_yaxis_transform\n Axes.get_data_ratio\n\n Axes.get_xaxis_text1_transform\n Axes.get_xaxis_text2_transform\n Axes.get_yaxis_text1_transform\n Axes.get_yaxis_text2_transform\n\n\nOther\n=====\n\n.. autosummary::\n :toctree: _as_gen\n :template: autosummary.rst\n :nosignatures:\n\n Axes.zorder\n Axes.get_default_bbox_extra_artists\n Axes.get_transformed_clip_path_and_affine\n Axes.has_data\n Axes.set\n\n.. autoclass:: matplotlib.axes.Axes.ArtistList\n", "doc/api/toolkits/mplot3d/axes3d.rst": "mpl\\_toolkits.mplot3d.axes3d.Axes3D\n===================================\n\n\n.. currentmodule:: mpl_toolkits.mplot3d.axes3d\n\n\n.. autoclass:: Axes3D\n :no-members:\n :no-undoc-members:\n :show-inheritance:\n\n\n.. currentmodule:: mpl_toolkits.mplot3d.axes3d.Axes3D\n\n\nPlotting\n--------\n\n.. autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n plot\n scatter\n bar\n bar3d\n\n plot_surface\n plot_wireframe\n plot_trisurf\n\n clabel\n contour\n tricontour\n contourf\n tricontourf\n\n quiver\n voxels\n errorbar\n stem\n\n\nText and annotations\n--------------------\n\n.. autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n text\n text2D\n\n\nClearing\n--------\n\n.. autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n clear\n\n\nAppearance\n----------\n\n.. autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n set_axis_off\n set_axis_on\n grid\n\n\nAxis\n----\n\nAxis limits and direction\n^^^^^^^^^^^^^^^^^^^^^^^^^\n\n.. autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n get_zaxis\n get_xlim\n get_ylim\n get_zlim\n set_zlim\n get_w_lims\n invert_zaxis\n zaxis_inverted\n get_zbound\n set_zbound\n\n\nAxis labels and title\n^^^^^^^^^^^^^^^^^^^^^\n\n.. 
autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n set_zlabel\n get_zlabel\n set_title\n\n\nAxis scales\n^^^^^^^^^^^\n\n.. autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n set_xscale\n set_yscale\n set_zscale\n get_zscale\n\n\nAutoscaling and margins\n^^^^^^^^^^^^^^^^^^^^^^^\n\n.. autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n set_zmargin\n margins\n autoscale\n autoscale_view\n set_autoscalez_on\n get_autoscalez_on\n auto_scale_xyz\n\n\nAspect ratio\n^^^^^^^^^^^^\n\n.. autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n set_aspect\n set_box_aspect\n apply_aspect\n\n\nTicks\n^^^^^\n\n.. autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n tick_params\n set_zticks\n get_zticks\n set_zticklabels\n get_zticklines\n get_zgridlines\n get_zminorticklabels\n get_zmajorticklabels\n zaxis_date\n\n\nUnits\n-----\n\n.. autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n convert_zunits\n\n\nAdding artists\n--------------\n\n.. autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n add_collection3d\n\n\nSharing\n-------\n\n.. autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n sharez\n shareview\n\n\nInteractive\n-----------\n\n.. autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n can_zoom\n can_pan\n disable_mouse_rotation\n mouse_init\n drag_pan\n format_zdata\n format_coord\n\n\nProjection and perspective\n--------------------------\n\n.. autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n view_init\n set_proj_type\n get_proj\n set_top_view\n\n\nDrawing\n-------\n\n.. autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n draw\n get_tightbbox\n\n\nAliases and deprecated methods\n------------------------------\n\n.. autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n set_zlim3d\n stem3D\n text3D\n tunit_cube\n tunit_edges\n unit_cube\n\n\nOther\n-----\n\n.. autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n get_axis_position\n add_contour_set\n add_contourf_set\n update_datalim\n\n\n.. currentmodule:: mpl_toolkits.mplot3d\n\nSample 3D data\n--------------\n\n.. autosummary::\n :toctree: ../../_as_gen\n :template: autosummary.rst\n :nosignatures:\n\n axes3d.get_test_data\n\n\n.. 
minigallery:: mpl_toolkits.mplot3d.axes3d.Axes3D\n :add-heading:\n", "doc/users/next_whats_new/margin_getters.rst": null, "lib/matplotlib/axes/_base.py": "from collections.abc import Iterable, Sequence\nfrom contextlib import ExitStack\nimport functools\nimport inspect\nimport logging\nfrom numbers import Real\nfrom operator import attrgetter\nimport types\n\nimport numpy as np\n\nimport matplotlib as mpl\nfrom matplotlib import _api, cbook, _docstring, offsetbox\nimport matplotlib.artist as martist\nimport matplotlib.axis as maxis\nfrom matplotlib.cbook import _OrderedSet, _check_1d, index_of\nimport matplotlib.collections as mcoll\nimport matplotlib.colors as mcolors\nimport matplotlib.font_manager as font_manager\nfrom matplotlib.gridspec import SubplotSpec\nimport matplotlib.image as mimage\nimport matplotlib.lines as mlines\nimport matplotlib.patches as mpatches\nfrom matplotlib.rcsetup import cycler, validate_axisbelow\nimport matplotlib.spines as mspines\nimport matplotlib.table as mtable\nimport matplotlib.text as mtext\nimport matplotlib.ticker as mticker\nimport matplotlib.transforms as mtransforms\n\n_log = logging.getLogger(__name__)\n\n\nclass _axis_method_wrapper:\n \"\"\"\n Helper to generate Axes methods wrapping Axis methods.\n\n After ::\n\n get_foo = _axis_method_wrapper(\"xaxis\", \"get_bar\")\n\n (in the body of a class) ``get_foo`` is a method that forwards it arguments\n to the ``get_bar`` method of the ``xaxis`` attribute, and gets its\n signature and docstring from ``Axis.get_bar``.\n\n The docstring of ``get_foo`` is built by replacing \"this Axis\" by \"the\n {attr_name}\" (i.e., \"the xaxis\", \"the yaxis\") in the wrapped method's\n dedented docstring; additional replacements can be given in *doc_sub*.\n \"\"\"\n\n def __init__(self, attr_name, method_name, *, doc_sub=None):\n self.attr_name = attr_name\n self.method_name = method_name\n # Immediately put the docstring in ``self.__doc__`` so that docstring\n # manipulations within the class body work as expected.\n doc = inspect.getdoc(getattr(maxis.Axis, method_name))\n self._missing_subs = []\n if doc:\n doc_sub = {\"this Axis\": f\"the {self.attr_name}\", **(doc_sub or {})}\n for k, v in doc_sub.items():\n if k not in doc: # Delay raising error until we know qualname.\n self._missing_subs.append(k)\n doc = doc.replace(k, v)\n self.__doc__ = doc\n\n def __set_name__(self, owner, name):\n # This is called at the end of the class body as\n # ``self.__set_name__(cls, name_under_which_self_is_assigned)``; we\n # rely on that to give the wrapper the correct __name__/__qualname__.\n get_method = attrgetter(f\"{self.attr_name}.{self.method_name}\")\n\n def wrapper(self, *args, **kwargs):\n return get_method(self)(*args, **kwargs)\n\n wrapper.__module__ = owner.__module__\n wrapper.__name__ = name\n wrapper.__qualname__ = f\"{owner.__qualname__}.{name}\"\n wrapper.__doc__ = self.__doc__\n # Manually copy the signature instead of using functools.wraps because\n # displaying the Axis method source when asking for the Axes method\n # source would be confusing.\n wrapper.__signature__ = inspect.signature(\n getattr(maxis.Axis, self.method_name))\n\n if self._missing_subs:\n raise ValueError(\n \"The definition of {} expected that the docstring of Axis.{} \"\n \"contains {!r} as substrings\".format(\n wrapper.__qualname__, self.method_name,\n \", \".join(map(repr, self._missing_subs))))\n\n setattr(owner, name, wrapper)\n\n\nclass _TransformedBoundsLocator:\n \"\"\"\n Axes locator for `.Axes.inset_axes` and similarly 
positioned Axes.\n\n The locator is a callable object used in `.Axes.set_aspect` to compute the\n Axes location depending on the renderer.\n \"\"\"\n\n def __init__(self, bounds, transform):\n \"\"\"\n *bounds* (a ``[l, b, w, h]`` rectangle) and *transform* together\n specify the position of the inset Axes.\n \"\"\"\n self._bounds = bounds\n self._transform = transform\n\n def __call__(self, ax, renderer):\n # Subtracting transSubfigure will typically rely on inverted(),\n # freezing the transform; thus, this needs to be delayed until draw\n # time as transSubfigure may otherwise change after this is evaluated.\n return mtransforms.TransformedBbox(\n mtransforms.Bbox.from_bounds(*self._bounds),\n self._transform - ax.figure.transSubfigure)\n\n\ndef _process_plot_format(fmt, *, ambiguous_fmt_datakey=False):\n \"\"\"\n Convert a MATLAB style color/line style format string to a (*linestyle*,\n *marker*, *color*) tuple.\n\n Example format strings include:\n\n * 'ko': black circles\n * '.b': blue dots\n * 'r--': red dashed lines\n * 'C2--': the third color in the color cycle, dashed lines\n\n The format is absolute in the sense that if a linestyle or marker is not\n defined in *fmt*, there is no line or marker. This is expressed by\n returning 'None' for the respective quantity.\n\n See Also\n --------\n matplotlib.Line2D.lineStyles, matplotlib.colors.cnames\n All possible styles and color format strings.\n \"\"\"\n\n linestyle = None\n marker = None\n color = None\n\n # Is fmt just a colorspec?\n try:\n color = mcolors.to_rgba(fmt)\n\n # We need to differentiate grayscale '1.0' from tri_down marker '1'\n try:\n fmtint = str(int(fmt))\n except ValueError:\n return linestyle, marker, color # Yes\n else:\n if fmt != fmtint:\n # user definitely doesn't want tri_down marker\n return linestyle, marker, color # Yes\n else:\n # ignore converted color\n color = None\n except ValueError:\n pass # No, not just a color.\n\n errfmt = (\"{!r} is neither a data key nor a valid format string ({})\"\n if ambiguous_fmt_datakey else\n \"{!r} is not a valid format string ({})\")\n\n i = 0\n while i < len(fmt):\n c = fmt[i]\n if fmt[i:i+2] in mlines.lineStyles: # First, the two-char styles.\n if linestyle is not None:\n raise ValueError(errfmt.format(fmt, \"two linestyle symbols\"))\n linestyle = fmt[i:i+2]\n i += 2\n elif c in mlines.lineStyles:\n if linestyle is not None:\n raise ValueError(errfmt.format(fmt, \"two linestyle symbols\"))\n linestyle = c\n i += 1\n elif c in mlines.lineMarkers:\n if marker is not None:\n raise ValueError(errfmt.format(fmt, \"two marker symbols\"))\n marker = c\n i += 1\n elif c in mcolors.get_named_colors_mapping():\n if color is not None:\n raise ValueError(errfmt.format(fmt, \"two color symbols\"))\n color = c\n i += 1\n elif c == 'C' and i < len(fmt) - 1:\n color_cycle_number = int(fmt[i + 1])\n color = mcolors.to_rgba(f\"C{color_cycle_number}\")\n i += 2\n else:\n raise ValueError(\n errfmt.format(fmt, f\"unrecognized character {c!r}\"))\n\n if linestyle is None and marker is None:\n linestyle = mpl.rcParams['lines.linestyle']\n if linestyle is None:\n linestyle = 'None'\n if marker is None:\n marker = 'None'\n\n return linestyle, marker, color\n\n\nclass _process_plot_var_args:\n \"\"\"\n Process variable length arguments to `~.Axes.plot`, to support ::\n\n plot(t, s)\n plot(t1, s1, t2, s2)\n plot(t1, s1, 'ko', t2, s2)\n plot(t1, s1, 'ko', t2, s2, 'r--', t3, e3)\n\n an arbitrary number of *x*, *y*, *fmt* are allowed\n \"\"\"\n\n def __init__(self, command='plot'):\n 
self.command = command\n self.set_prop_cycle(None)\n\n def set_prop_cycle(self, cycler):\n if cycler is None:\n cycler = mpl.rcParams['axes.prop_cycle']\n self._idx = 0\n self._cycler_items = [*cycler]\n self._prop_keys = cycler.keys # This should make a copy\n\n def __call__(self, axes, *args, data=None, **kwargs):\n axes._process_unit_info(kwargs=kwargs)\n\n for pos_only in \"xy\":\n if pos_only in kwargs:\n raise _api.kwarg_error(self.command, pos_only)\n\n if not args:\n return\n\n if data is None: # Process dict views\n args = [cbook.sanitize_sequence(a) for a in args]\n else: # Process the 'data' kwarg.\n replaced = [mpl._replacer(data, arg) for arg in args]\n if len(args) == 1:\n label_namer_idx = 0\n elif len(args) == 2: # Can be x, y or y, c.\n # Figure out what the second argument is.\n # 1) If the second argument cannot be a format shorthand, the\n # second argument is the label_namer.\n # 2) Otherwise (it could have been a format shorthand),\n # a) if we did perform a substitution, emit a warning, and\n # use it as label_namer.\n # b) otherwise, it is indeed a format shorthand; use the\n # first argument as label_namer.\n try:\n _process_plot_format(args[1])\n except ValueError: # case 1)\n label_namer_idx = 1\n else:\n if replaced[1] is not args[1]: # case 2a)\n _api.warn_external(\n f\"Second argument {args[1]!r} is ambiguous: could \"\n f\"be a format string but is in 'data'; using as \"\n f\"data. If it was intended as data, set the \"\n f\"format string to an empty string to suppress \"\n f\"this warning. If it was intended as a format \"\n f\"string, explicitly pass the x-values as well. \"\n f\"Alternatively, rename the entry in 'data'.\",\n RuntimeWarning)\n label_namer_idx = 1\n else: # case 2b)\n label_namer_idx = 0\n elif len(args) == 3:\n label_namer_idx = 1\n else:\n raise ValueError(\n \"Using arbitrary long args with data is not supported due \"\n \"to ambiguity of arguments; use multiple plotting calls \"\n \"instead\")\n if kwargs.get(\"label\") is None:\n kwargs[\"label\"] = mpl._label_from_arg(\n replaced[label_namer_idx], args[label_namer_idx])\n args = replaced\n ambiguous_fmt_datakey = data is not None and len(args) == 2\n\n if len(args) >= 4 and not cbook.is_scalar_or_string(\n kwargs.get(\"label\")):\n raise ValueError(\"plot() with multiple groups of data (i.e., \"\n \"pairs of x and y) does not support multiple \"\n \"labels\")\n\n # Repeatedly grab (x, y) or (x, y, format) from the front of args and\n # massage them into arguments to plot() or fill().\n\n while args:\n this, args = args[:2], args[2:]\n if args and isinstance(args[0], str):\n this += args[0],\n args = args[1:]\n yield from self._plot_args(\n axes, this, kwargs, ambiguous_fmt_datakey=ambiguous_fmt_datakey)\n\n def get_next_color(self):\n \"\"\"Return the next color in the cycle.\"\"\"\n if 'color' not in self._prop_keys:\n return 'k'\n c = self._cycler_items[self._idx]['color']\n self._idx = (self._idx + 1) % len(self._cycler_items)\n return c\n\n def _getdefaults(self, ignore, kw):\n \"\"\"\n If some keys in the property cycle (excluding those in the set\n *ignore*) are absent or set to None in the dict *kw*, return a copy\n of the next entry in the property cycle, excluding keys in *ignore*.\n Otherwise, don't advance the property cycle, and return an empty dict.\n \"\"\"\n prop_keys = self._prop_keys - ignore\n if any(kw.get(k, None) is None for k in prop_keys):\n # Need to copy this dictionary or else the next time around\n # in the cycle, the dictionary could be missing entries.\n 
default_dict = self._cycler_items[self._idx].copy()\n self._idx = (self._idx + 1) % len(self._cycler_items)\n for p in ignore:\n default_dict.pop(p, None)\n else:\n default_dict = {}\n return default_dict\n\n def _setdefaults(self, defaults, kw):\n \"\"\"\n Add to the dict *kw* the entries in the dict *default* that are absent\n or set to None in *kw*.\n \"\"\"\n for k in defaults:\n if kw.get(k, None) is None:\n kw[k] = defaults[k]\n\n def _makeline(self, axes, x, y, kw, kwargs):\n kw = {**kw, **kwargs} # Don't modify the original kw.\n default_dict = self._getdefaults(set(), kw)\n self._setdefaults(default_dict, kw)\n seg = mlines.Line2D(x, y, **kw)\n return seg, kw\n\n def _makefill(self, axes, x, y, kw, kwargs):\n # Polygon doesn't directly support unitized inputs.\n x = axes.convert_xunits(x)\n y = axes.convert_yunits(y)\n\n kw = kw.copy() # Don't modify the original kw.\n kwargs = kwargs.copy()\n\n # Ignore 'marker'-related properties as they aren't Polygon\n # properties, but they are Line2D properties, and so they are\n # likely to appear in the default cycler construction.\n # This is done here to the defaults dictionary as opposed to the\n # other two dictionaries because we do want to capture when a\n # *user* explicitly specifies a marker which should be an error.\n # We also want to prevent advancing the cycler if there are no\n # defaults needed after ignoring the given properties.\n ignores = {'marker', 'markersize', 'markeredgecolor',\n 'markerfacecolor', 'markeredgewidth'}\n # Also ignore anything provided by *kwargs*.\n for k, v in kwargs.items():\n if v is not None:\n ignores.add(k)\n\n # Only using the first dictionary to use as basis\n # for getting defaults for back-compat reasons.\n # Doing it with both seems to mess things up in\n # various places (probably due to logic bugs elsewhere).\n default_dict = self._getdefaults(ignores, kw)\n self._setdefaults(default_dict, kw)\n\n # Looks like we don't want \"color\" to be interpreted to\n # mean both facecolor and edgecolor for some reason.\n # So the \"kw\" dictionary is thrown out, and only its\n # 'color' value is kept and translated as a 'facecolor'.\n # This design should probably be revisited as it increases\n # complexity.\n facecolor = kw.get('color', None)\n\n # Throw out 'color' as it is now handled as a facecolor\n default_dict.pop('color', None)\n\n # To get other properties set from the cycler\n # modify the kwargs dictionary.\n self._setdefaults(default_dict, kwargs)\n\n seg = mpatches.Polygon(np.column_stack((x, y)),\n facecolor=facecolor,\n fill=kwargs.get('fill', True),\n closed=kw['closed'])\n seg.set(**kwargs)\n return seg, kwargs\n\n def _plot_args(self, axes, tup, kwargs, *,\n return_kwargs=False, ambiguous_fmt_datakey=False):\n \"\"\"\n Process the arguments of ``plot([x], y, [fmt], **kwargs)`` calls.\n\n This processes a single set of ([x], y, [fmt]) parameters; i.e. for\n ``plot(x, y, x2, y2)`` it will be called twice. Once for (x, y) and\n once for (x2, y2).\n\n x and y may be 2D and thus can still represent multiple datasets.\n\n For multiple datasets, if the keyword argument *label* is a list, this\n will unpack the list and assign the individual labels to the datasets.\n\n Parameters\n ----------\n tup : tuple\n A tuple of the positional parameters. 
This can be one of\n\n - (y,)\n - (x, y)\n - (y, fmt)\n - (x, y, fmt)\n\n kwargs : dict\n The keyword arguments passed to ``plot()``.\n\n return_kwargs : bool\n Whether to also return the effective keyword arguments after label\n unpacking as well.\n\n ambiguous_fmt_datakey : bool\n Whether the format string in *tup* could also have been a\n misspelled data key.\n\n Returns\n -------\n result\n If *return_kwargs* is false, a list of Artists representing the\n dataset(s).\n If *return_kwargs* is true, a list of (Artist, effective_kwargs)\n representing the dataset(s). See *return_kwargs*.\n The Artist is either `.Line2D` (if called from ``plot()``) or\n `.Polygon` otherwise.\n \"\"\"\n if len(tup) > 1 and isinstance(tup[-1], str):\n # xy is tup with fmt stripped (could still be (y,) only)\n *xy, fmt = tup\n linestyle, marker, color = _process_plot_format(\n fmt, ambiguous_fmt_datakey=ambiguous_fmt_datakey)\n elif len(tup) == 3:\n raise ValueError('third arg must be a format string')\n else:\n xy = tup\n linestyle, marker, color = None, None, None\n\n # Don't allow any None value; these would be up-converted to one\n # element array of None which causes problems downstream.\n if any(v is None for v in tup):\n raise ValueError(\"x, y, and format string must not be None\")\n\n kw = {}\n for prop_name, val in zip(('linestyle', 'marker', 'color'),\n (linestyle, marker, color)):\n if val is not None:\n # check for conflicts between fmt and kwargs\n if (fmt.lower() != 'none'\n and prop_name in kwargs\n and val != 'None'):\n # Technically ``plot(x, y, 'o', ls='--')`` is a conflict\n # because 'o' implicitly unsets the linestyle\n # (linestyle='None').\n # We'll gracefully not warn in this case because an\n # explicit set via kwargs can be seen as intention to\n # override an implicit unset.\n # Note: We don't val.lower() != 'none' because val is not\n # necessarily a string (can be a tuple for colors). This\n # is safe, because *val* comes from _process_plot_format()\n # which only returns 'None'.\n _api.warn_external(\n f\"{prop_name} is redundantly defined by the \"\n f\"'{prop_name}' keyword argument and the fmt string \"\n f'\"{fmt}\" (-> {prop_name}={val!r}). 
The keyword '\n f\"argument will take precedence.\")\n kw[prop_name] = val\n\n if len(xy) == 2:\n x = _check_1d(xy[0])\n y = _check_1d(xy[1])\n else:\n x, y = index_of(xy[-1])\n\n if axes.xaxis is not None:\n axes.xaxis.update_units(x)\n if axes.yaxis is not None:\n axes.yaxis.update_units(y)\n\n if x.shape[0] != y.shape[0]:\n raise ValueError(f\"x and y must have same first dimension, but \"\n f\"have shapes {x.shape} and {y.shape}\")\n if x.ndim > 2 or y.ndim > 2:\n raise ValueError(f\"x and y can be no greater than 2D, but have \"\n f\"shapes {x.shape} and {y.shape}\")\n if x.ndim == 1:\n x = x[:, np.newaxis]\n if y.ndim == 1:\n y = y[:, np.newaxis]\n\n if self.command == 'plot':\n make_artist = self._makeline\n else:\n kw['closed'] = kwargs.get('closed', True)\n make_artist = self._makefill\n\n ncx, ncy = x.shape[1], y.shape[1]\n if ncx > 1 and ncy > 1 and ncx != ncy:\n raise ValueError(f\"x has {ncx} columns but y has {ncy} columns\")\n if ncx == 0 or ncy == 0:\n return []\n\n label = kwargs.get('label')\n n_datasets = max(ncx, ncy)\n if n_datasets > 1 and not cbook.is_scalar_or_string(label):\n if len(label) != n_datasets:\n raise ValueError(f\"label must be scalar or have the same \"\n f\"length as the input data, but found \"\n f\"{len(label)} for {n_datasets} datasets.\")\n labels = label\n else:\n labels = [label] * n_datasets\n\n result = (make_artist(axes, x[:, j % ncx], y[:, j % ncy], kw,\n {**kwargs, 'label': label})\n for j, label in enumerate(labels))\n\n if return_kwargs:\n return list(result)\n else:\n return [l[0] for l in result]\n\n\n@_api.define_aliases({\"facecolor\": [\"fc\"]})\nclass _AxesBase(martist.Artist):\n name = \"rectilinear\"\n\n # axis names are the prefixes for the attributes that contain the\n # respective axis; e.g. 'x' <-> self.xaxis, containing an XAxis.\n # Note that PolarAxes uses these attributes as well, so that we have\n # 'x' <-> self.xaxis, containing a ThetaAxis. In particular we do not\n # have 'theta' in _axis_names.\n # In practice, this is ('x', 'y') for all 2D Axes and ('x', 'y', 'z')\n # for Axes3D.\n _axis_names = (\"x\", \"y\")\n _shared_axes = {name: cbook.Grouper() for name in _axis_names}\n _twinned_axes = cbook.Grouper()\n\n _subclass_uses_cla = False\n\n @property\n def _axis_map(self):\n \"\"\"A mapping of axis names, e.g. 'x', to `Axis` instances.\"\"\"\n return {name: getattr(self, f\"{name}axis\")\n for name in self._axis_names}\n\n def __str__(self):\n return \"{0}({1[0]:g},{1[1]:g};{1[2]:g}x{1[3]:g})\".format(\n type(self).__name__, self._position.bounds)\n\n def __init__(self, fig,\n *args,\n facecolor=None, # defaults to rc axes.facecolor\n frameon=True,\n sharex=None, # use Axes instance's xaxis info\n sharey=None, # use Axes instance's yaxis info\n label='',\n xscale=None,\n yscale=None,\n box_aspect=None,\n **kwargs\n ):\n \"\"\"\n Build an Axes in a figure.\n\n Parameters\n ----------\n fig : `~matplotlib.figure.Figure`\n The Axes is built in the `.Figure` *fig*.\n\n *args\n ``*args`` can be a single ``(left, bottom, width, height)``\n rectangle or a single `.Bbox`. This specifies the rectangle (in\n figure coordinates) where the Axes is positioned.\n\n ``*args`` can also consist of three numbers or a single three-digit\n number; in the latter case, the digits are considered as\n independent numbers. The numbers are interpreted as ``(nrows,\n ncols, index)``: ``(nrows, ncols)`` specifies the size of an array\n of subplots, and ``index`` is the 1-based index of the subplot\n being created. 
Finally, ``*args`` can also directly be a\n `.SubplotSpec` instance.\n\n sharex, sharey : `~matplotlib.axes.Axes`, optional\n The x- or y-`~.matplotlib.axis` is shared with the x- or y-axis in\n the input `~.axes.Axes`.\n\n frameon : bool, default: True\n Whether the Axes frame is visible.\n\n box_aspect : float, optional\n Set a fixed aspect for the Axes box, i.e. the ratio of height to\n width. See `~.axes.Axes.set_box_aspect` for details.\n\n **kwargs\n Other optional keyword arguments:\n\n %(Axes:kwdoc)s\n\n Returns\n -------\n `~.axes.Axes`\n The new `~.axes.Axes` object.\n \"\"\"\n\n super().__init__()\n if \"rect\" in kwargs:\n if args:\n raise TypeError(\n \"'rect' cannot be used together with positional arguments\")\n rect = kwargs.pop(\"rect\")\n _api.check_isinstance((mtransforms.Bbox, Iterable), rect=rect)\n args = (rect,)\n subplotspec = None\n if len(args) == 1 and isinstance(args[0], mtransforms.Bbox):\n self._position = args[0]\n elif len(args) == 1 and np.iterable(args[0]):\n self._position = mtransforms.Bbox.from_bounds(*args[0])\n else:\n self._position = self._originalPosition = mtransforms.Bbox.unit()\n subplotspec = SubplotSpec._from_subplot_args(fig, args)\n if self._position.width < 0 or self._position.height < 0:\n raise ValueError('Width and height specified must be non-negative')\n self._originalPosition = self._position.frozen()\n self.axes = self\n self._aspect = 'auto'\n self._adjustable = 'box'\n self._anchor = 'C'\n self._stale_viewlims = {name: False for name in self._axis_names}\n self._sharex = sharex\n self._sharey = sharey\n self.set_label(label)\n self.set_figure(fig)\n # The subplotspec needs to be set after the figure (so that\n # figure-level subplotpars are taken into account), but the figure\n # needs to be set after self._position is initialized.\n if subplotspec:\n self.set_subplotspec(subplotspec)\n else:\n self._subplotspec = None\n self.set_box_aspect(box_aspect)\n self._axes_locator = None # Optionally set via update(kwargs).\n\n self._children = []\n\n # placeholder for any colorbars added that use this Axes.\n # (see colorbar.py):\n self._colorbars = []\n self.spines = mspines.Spines.from_dict(self._gen_axes_spines())\n\n # this call may differ for non-sep axes, e.g., polar\n self._init_axis()\n if facecolor is None:\n facecolor = mpl.rcParams['axes.facecolor']\n self._facecolor = facecolor\n self._frameon = frameon\n self.set_axisbelow(mpl.rcParams['axes.axisbelow'])\n\n self._rasterization_zorder = None\n self.clear()\n\n # funcs used to format x and y - fall back on major formatters\n self.fmt_xdata = None\n self.fmt_ydata = None\n\n self.set_navigate(True)\n self.set_navigate_mode(None)\n\n if xscale:\n self.set_xscale(xscale)\n if yscale:\n self.set_yscale(yscale)\n\n self._internal_update(kwargs)\n\n for name, axis in self._axis_map.items():\n axis.callbacks._connect_picklable(\n 'units', self._unit_change_handler(name))\n\n rcParams = mpl.rcParams\n self.tick_params(\n top=rcParams['xtick.top'] and rcParams['xtick.minor.top'],\n bottom=rcParams['xtick.bottom'] and rcParams['xtick.minor.bottom'],\n labeltop=(rcParams['xtick.labeltop'] and\n rcParams['xtick.minor.top']),\n labelbottom=(rcParams['xtick.labelbottom'] and\n rcParams['xtick.minor.bottom']),\n left=rcParams['ytick.left'] and rcParams['ytick.minor.left'],\n right=rcParams['ytick.right'] and rcParams['ytick.minor.right'],\n labelleft=(rcParams['ytick.labelleft'] and\n rcParams['ytick.minor.left']),\n labelright=(rcParams['ytick.labelright'] and\n 
rcParams['ytick.minor.right']),\n which='minor')\n\n self.tick_params(\n top=rcParams['xtick.top'] and rcParams['xtick.major.top'],\n bottom=rcParams['xtick.bottom'] and rcParams['xtick.major.bottom'],\n labeltop=(rcParams['xtick.labeltop'] and\n rcParams['xtick.major.top']),\n labelbottom=(rcParams['xtick.labelbottom'] and\n rcParams['xtick.major.bottom']),\n left=rcParams['ytick.left'] and rcParams['ytick.major.left'],\n right=rcParams['ytick.right'] and rcParams['ytick.major.right'],\n labelleft=(rcParams['ytick.labelleft'] and\n rcParams['ytick.major.left']),\n labelright=(rcParams['ytick.labelright'] and\n rcParams['ytick.major.right']),\n which='major')\n\n def __init_subclass__(cls, **kwargs):\n parent_uses_cla = super(cls, cls)._subclass_uses_cla\n if 'cla' in cls.__dict__:\n _api.warn_deprecated(\n '3.6',\n pending=True,\n message=f'Overriding `Axes.cla` in {cls.__qualname__} is '\n 'pending deprecation in %(since)s and will be fully '\n 'deprecated in favor of `Axes.clear` in the future. '\n 'Please report '\n f'this to the {cls.__module__!r} author.')\n cls._subclass_uses_cla = 'cla' in cls.__dict__ or parent_uses_cla\n super().__init_subclass__(**kwargs)\n\n def __getstate__(self):\n state = super().__getstate__()\n # Prune the sharing & twinning info to only contain the current group.\n state[\"_shared_axes\"] = {\n name: self._shared_axes[name].get_siblings(self)\n for name in self._axis_names if self in self._shared_axes[name]}\n state[\"_twinned_axes\"] = (self._twinned_axes.get_siblings(self)\n if self in self._twinned_axes else None)\n return state\n\n def __setstate__(self, state):\n # Merge the grouping info back into the global groupers.\n shared_axes = state.pop(\"_shared_axes\")\n for name, shared_siblings in shared_axes.items():\n self._shared_axes[name].join(*shared_siblings)\n twinned_siblings = state.pop(\"_twinned_axes\")\n if twinned_siblings:\n self._twinned_axes.join(*twinned_siblings)\n self.__dict__ = state\n self._stale = True\n\n def __repr__(self):\n fields = []\n if self.get_label():\n fields += [f\"label={self.get_label()!r}\"]\n if hasattr(self, \"get_title\"):\n titles = {}\n for k in [\"left\", \"center\", \"right\"]:\n title = self.get_title(loc=k)\n if title:\n titles[k] = title\n if titles:\n fields += [f\"title={titles}\"]\n for name, axis in self._axis_map.items():\n if axis.get_label() and axis.get_label().get_text():\n fields += [f\"{name}label={axis.get_label().get_text()!r}\"]\n return f\"<{self.__class__.__name__}: \" + \", \".join(fields) + \">\"\n\n def get_subplotspec(self):\n \"\"\"Return the `.SubplotSpec` associated with the subplot, or None.\"\"\"\n return self._subplotspec\n\n def set_subplotspec(self, subplotspec):\n \"\"\"Set the `.SubplotSpec`. associated with the subplot.\"\"\"\n self._subplotspec = subplotspec\n self._set_position(subplotspec.get_position(self.figure))\n\n def get_gridspec(self):\n \"\"\"Return the `.GridSpec` associated with the subplot, or None.\"\"\"\n return self._subplotspec.get_gridspec() if self._subplotspec else None\n\n def get_window_extent(self, renderer=None):\n \"\"\"\n Return the Axes bounding box in display space.\n\n This bounding box does not include the spines, ticks, ticklabels,\n or other labels. 
For a bounding box including these elements use\n `~matplotlib.axes.Axes.get_tightbbox`.\n\n See Also\n --------\n matplotlib.axes.Axes.get_tightbbox\n matplotlib.axis.Axis.get_tightbbox\n matplotlib.spines.Spine.get_window_extent\n \"\"\"\n return self.bbox\n\n def _init_axis(self):\n # This is moved out of __init__ because non-separable axes don't use it\n self.xaxis = maxis.XAxis(self, clear=False)\n self.spines.bottom.register_axis(self.xaxis)\n self.spines.top.register_axis(self.xaxis)\n self.yaxis = maxis.YAxis(self, clear=False)\n self.spines.left.register_axis(self.yaxis)\n self.spines.right.register_axis(self.yaxis)\n\n def set_figure(self, fig):\n # docstring inherited\n super().set_figure(fig)\n\n self.bbox = mtransforms.TransformedBbox(self._position,\n fig.transSubfigure)\n # these will be updated later as data is added\n self.dataLim = mtransforms.Bbox.null()\n self._viewLim = mtransforms.Bbox.unit()\n self.transScale = mtransforms.TransformWrapper(\n mtransforms.IdentityTransform())\n\n self._set_lim_and_transforms()\n\n def _unstale_viewLim(self):\n # We should arrange to store this information once per share-group\n # instead of on every axis.\n need_scale = {\n name: any(ax._stale_viewlims[name]\n for ax in self._shared_axes[name].get_siblings(self))\n for name in self._axis_names}\n if any(need_scale.values()):\n for name in need_scale:\n for ax in self._shared_axes[name].get_siblings(self):\n ax._stale_viewlims[name] = False\n self.autoscale_view(**{f\"scale{name}\": scale\n for name, scale in need_scale.items()})\n\n @property\n def viewLim(self):\n self._unstale_viewLim()\n return self._viewLim\n\n def _request_autoscale_view(self, axis=\"all\", tight=None):\n \"\"\"\n Mark a single axis, or all of them, as stale wrt. autoscaling.\n\n No computation is performed until the next autoscaling; thus, separate\n calls to control individual axises incur negligible performance cost.\n\n Parameters\n ----------\n axis : str, default: \"all\"\n Either an element of ``self._axis_names``, or \"all\".\n tight : bool or None, default: None\n \"\"\"\n axis_names = _api.check_getitem(\n {**{k: [k] for k in self._axis_names}, \"all\": self._axis_names},\n axis=axis)\n for name in axis_names:\n self._stale_viewlims[name] = True\n if tight is not None:\n self._tight = tight\n\n def _set_lim_and_transforms(self):\n \"\"\"\n Set the *_xaxis_transform*, *_yaxis_transform*, *transScale*,\n *transData*, *transLimits* and *transAxes* transformations.\n\n .. note::\n\n This method is primarily used by rectilinear projections of the\n `~matplotlib.axes.Axes` class, and is meant to be overridden by\n new kinds of projection Axes that need different transformations\n and limits. 
(See `~matplotlib.projections.polar.PolarAxes` for an\n example.)\n \"\"\"\n self.transAxes = mtransforms.BboxTransformTo(self.bbox)\n\n # Transforms the x and y axis separately by a scale factor.\n # It is assumed that this part will have non-linear components\n # (e.g., for a log scale).\n self.transScale = mtransforms.TransformWrapper(\n mtransforms.IdentityTransform())\n\n # An affine transformation on the data, generally to limit the\n # range of the axes\n self.transLimits = mtransforms.BboxTransformFrom(\n mtransforms.TransformedBbox(self._viewLim, self.transScale))\n\n # The parentheses are important for efficiency here -- they\n # group the last two (which are usually affines) separately\n # from the first (which, with log-scaling can be non-affine).\n self.transData = self.transScale + (self.transLimits + self.transAxes)\n\n self._xaxis_transform = mtransforms.blended_transform_factory(\n self.transData, self.transAxes)\n self._yaxis_transform = mtransforms.blended_transform_factory(\n self.transAxes, self.transData)\n\n def get_xaxis_transform(self, which='grid'):\n \"\"\"\n Get the transformation used for drawing x-axis labels, ticks\n and gridlines. The x-direction is in data coordinates and the\n y-direction is in axis coordinates.\n\n .. note::\n\n This transformation is primarily used by the\n `~matplotlib.axis.Axis` class, and is meant to be\n overridden by new kinds of projections that may need to\n place axis elements in different locations.\n\n Parameters\n ----------\n which : {'grid', 'tick1', 'tick2'}\n \"\"\"\n if which == 'grid':\n return self._xaxis_transform\n elif which == 'tick1':\n # for cartesian projection, this is bottom spine\n return self.spines.bottom.get_spine_transform()\n elif which == 'tick2':\n # for cartesian projection, this is top spine\n return self.spines.top.get_spine_transform()\n else:\n raise ValueError(f'unknown value for which: {which!r}')\n\n def get_xaxis_text1_transform(self, pad_points):\n \"\"\"\n Returns\n -------\n transform : Transform\n The transform used for drawing x-axis labels, which will add\n *pad_points* of padding (in points) between the axis and the label.\n The x-direction is in data coordinates and the y-direction is in\n axis coordinates\n valign : {'center', 'top', 'bottom', 'baseline', 'center_baseline'}\n The text vertical alignment.\n halign : {'center', 'left', 'right'}\n The text horizontal alignment.\n\n Notes\n -----\n This transformation is primarily used by the `~matplotlib.axis.Axis`\n class, and is meant to be overridden by new kinds of projections that\n may need to place axis elements in different locations.\n \"\"\"\n labels_align = mpl.rcParams[\"xtick.alignment\"]\n return (self.get_xaxis_transform(which='tick1') +\n mtransforms.ScaledTranslation(0, -1 * pad_points / 72,\n self.figure.dpi_scale_trans),\n \"top\", labels_align)\n\n def get_xaxis_text2_transform(self, pad_points):\n \"\"\"\n Returns\n -------\n transform : Transform\n The transform used for drawing secondary x-axis labels, which will\n add *pad_points* of padding (in points) between the axis and the\n label. 
The x-direction is in data coordinates and the y-direction\n is in axis coordinates\n valign : {'center', 'top', 'bottom', 'baseline', 'center_baseline'}\n The text vertical alignment.\n halign : {'center', 'left', 'right'}\n The text horizontal alignment.\n\n Notes\n -----\n This transformation is primarily used by the `~matplotlib.axis.Axis`\n class, and is meant to be overridden by new kinds of projections that\n may need to place axis elements in different locations.\n \"\"\"\n labels_align = mpl.rcParams[\"xtick.alignment\"]\n return (self.get_xaxis_transform(which='tick2') +\n mtransforms.ScaledTranslation(0, pad_points / 72,\n self.figure.dpi_scale_trans),\n \"bottom\", labels_align)\n\n def get_yaxis_transform(self, which='grid'):\n \"\"\"\n Get the transformation used for drawing y-axis labels, ticks\n and gridlines. The x-direction is in axis coordinates and the\n y-direction is in data coordinates.\n\n .. note::\n\n This transformation is primarily used by the\n `~matplotlib.axis.Axis` class, and is meant to be\n overridden by new kinds of projections that may need to\n place axis elements in different locations.\n\n Parameters\n ----------\n which : {'grid', 'tick1', 'tick2'}\n \"\"\"\n if which == 'grid':\n return self._yaxis_transform\n elif which == 'tick1':\n # for cartesian projection, this is bottom spine\n return self.spines.left.get_spine_transform()\n elif which == 'tick2':\n # for cartesian projection, this is top spine\n return self.spines.right.get_spine_transform()\n else:\n raise ValueError(f'unknown value for which: {which!r}')\n\n def get_yaxis_text1_transform(self, pad_points):\n \"\"\"\n Returns\n -------\n transform : Transform\n The transform used for drawing y-axis labels, which will add\n *pad_points* of padding (in points) between the axis and the label.\n The x-direction is in axis coordinates and the y-direction is in\n data coordinates\n valign : {'center', 'top', 'bottom', 'baseline', 'center_baseline'}\n The text vertical alignment.\n halign : {'center', 'left', 'right'}\n The text horizontal alignment.\n\n Notes\n -----\n This transformation is primarily used by the `~matplotlib.axis.Axis`\n class, and is meant to be overridden by new kinds of projections that\n may need to place axis elements in different locations.\n \"\"\"\n labels_align = mpl.rcParams[\"ytick.alignment\"]\n return (self.get_yaxis_transform(which='tick1') +\n mtransforms.ScaledTranslation(-1 * pad_points / 72, 0,\n self.figure.dpi_scale_trans),\n labels_align, \"right\")\n\n def get_yaxis_text2_transform(self, pad_points):\n \"\"\"\n Returns\n -------\n transform : Transform\n The transform used for drawing secondart y-axis labels, which will\n add *pad_points* of padding (in points) between the axis and the\n label. 
The x-direction is in axis coordinates and the y-direction\n is in data coordinates\n valign : {'center', 'top', 'bottom', 'baseline', 'center_baseline'}\n The text vertical alignment.\n halign : {'center', 'left', 'right'}\n The text horizontal alignment.\n\n Notes\n -----\n This transformation is primarily used by the `~matplotlib.axis.Axis`\n class, and is meant to be overridden by new kinds of projections that\n may need to place axis elements in different locations.\n \"\"\"\n labels_align = mpl.rcParams[\"ytick.alignment\"]\n return (self.get_yaxis_transform(which='tick2') +\n mtransforms.ScaledTranslation(pad_points / 72, 0,\n self.figure.dpi_scale_trans),\n labels_align, \"left\")\n\n def _update_transScale(self):\n self.transScale.set(\n mtransforms.blended_transform_factory(\n self.xaxis.get_transform(), self.yaxis.get_transform()))\n\n def get_position(self, original=False):\n \"\"\"\n Return the position of the Axes within the figure as a `.Bbox`.\n\n Parameters\n ----------\n original : bool\n If ``True``, return the original position. Otherwise, return the\n active position. For an explanation of the positions see\n `.set_position`.\n\n Returns\n -------\n `.Bbox`\n\n \"\"\"\n if original:\n return self._originalPosition.frozen()\n else:\n locator = self.get_axes_locator()\n if not locator:\n self.apply_aspect()\n return self._position.frozen()\n\n def set_position(self, pos, which='both'):\n \"\"\"\n Set the Axes position.\n\n Axes have two position attributes. The 'original' position is the\n position allocated for the Axes. The 'active' position is the\n position the Axes is actually drawn at. These positions are usually\n the same unless a fixed aspect is set to the Axes. See\n `.Axes.set_aspect` for details.\n\n Parameters\n ----------\n pos : [left, bottom, width, height] or `~matplotlib.transforms.Bbox`\n The new position of the Axes in `.Figure` coordinates.\n\n which : {'both', 'active', 'original'}, default: 'both'\n Determines which position variables to change.\n\n See Also\n --------\n matplotlib.transforms.Bbox.from_bounds\n matplotlib.transforms.Bbox.from_extents\n \"\"\"\n self._set_position(pos, which=which)\n # because this is being called externally to the library we\n # don't let it be in the layout.\n self.set_in_layout(False)\n\n def _set_position(self, pos, which='both'):\n \"\"\"\n Private version of set_position.\n\n Call this internally to get the same functionality of `set_position`,\n but not to take the axis out of the constrained_layout hierarchy.\n \"\"\"\n if not isinstance(pos, mtransforms.BboxBase):\n pos = mtransforms.Bbox.from_bounds(*pos)\n for ax in self._twinned_axes.get_siblings(self):\n if which in ('both', 'active'):\n ax._position.set(pos)\n if which in ('both', 'original'):\n ax._originalPosition.set(pos)\n self.stale = True\n\n def reset_position(self):\n \"\"\"\n Reset the active position to the original position.\n\n This undoes changes to the active position (as defined in\n `.set_position`) which may have been performed to satisfy fixed-aspect\n constraints.\n \"\"\"\n for ax in self._twinned_axes.get_siblings(self):\n pos = ax.get_position(original=True)\n ax.set_position(pos, which='active')\n\n def set_axes_locator(self, locator):\n \"\"\"\n Set the Axes locator.\n\n Parameters\n ----------\n locator : Callable[[Axes, Renderer], Bbox]\n \"\"\"\n self._axes_locator = locator\n self.stale = True\n\n def get_axes_locator(self):\n \"\"\"\n Return the axes_locator.\n \"\"\"\n return self._axes_locator\n\n def 
_set_artist_props(self, a):\n \"\"\"Set the boilerplate props for artists added to Axes.\"\"\"\n a.set_figure(self.figure)\n if not a.is_transform_set():\n a.set_transform(self.transData)\n\n a.axes = self\n if a.get_mouseover():\n self._mouseover_set.add(a)\n\n def _gen_axes_patch(self):\n \"\"\"\n Returns\n -------\n Patch\n The patch used to draw the background of the Axes. It is also used\n as the clipping path for any data elements on the Axes.\n\n In the standard Axes, this is a rectangle, but in other projections\n it may not be.\n\n Notes\n -----\n Intended to be overridden by new projection types.\n \"\"\"\n return mpatches.Rectangle((0.0, 0.0), 1.0, 1.0)\n\n def _gen_axes_spines(self, locations=None, offset=0.0, units='inches'):\n \"\"\"\n Returns\n -------\n dict\n Mapping of spine names to `.Line2D` or `.Patch` instances that are\n used to draw Axes spines.\n\n In the standard Axes, spines are single line segments, but in other\n projections they may not be.\n\n Notes\n -----\n Intended to be overridden by new projection types.\n \"\"\"\n return {side: mspines.Spine.linear_spine(self, side)\n for side in ['left', 'right', 'bottom', 'top']}\n\n def sharex(self, other):\n \"\"\"\n Share the x-axis with *other*.\n\n This is equivalent to passing ``sharex=other`` when constructing the\n Axes, and cannot be used if the x-axis is already being shared with\n another Axes.\n \"\"\"\n _api.check_isinstance(_AxesBase, other=other)\n if self._sharex is not None and other is not self._sharex:\n raise ValueError(\"x-axis is already shared\")\n self._shared_axes[\"x\"].join(self, other)\n self._sharex = other\n self.xaxis.major = other.xaxis.major # Ticker instances holding\n self.xaxis.minor = other.xaxis.minor # locator and formatter.\n x0, x1 = other.get_xlim()\n self.set_xlim(x0, x1, emit=False, auto=other.get_autoscalex_on())\n self.xaxis._scale = other.xaxis._scale\n\n def sharey(self, other):\n \"\"\"\n Share the y-axis with *other*.\n\n This is equivalent to passing ``sharey=other`` when constructing the\n Axes, and cannot be used if the y-axis is already being shared with\n another Axes.\n \"\"\"\n _api.check_isinstance(_AxesBase, other=other)\n if self._sharey is not None and other is not self._sharey:\n raise ValueError(\"y-axis is already shared\")\n self._shared_axes[\"y\"].join(self, other)\n self._sharey = other\n self.yaxis.major = other.yaxis.major # Ticker instances holding\n self.yaxis.minor = other.yaxis.minor # locator and formatter.\n y0, y1 = other.get_ylim()\n self.set_ylim(y0, y1, emit=False, auto=other.get_autoscaley_on())\n self.yaxis._scale = other.yaxis._scale\n\n def __clear(self):\n \"\"\"Clear the Axes.\"\"\"\n # The actual implementation of clear() as long as clear() has to be\n # an adapter delegating to the correct implementation.\n # The implementation can move back into clear() when the\n # deprecation on cla() subclassing expires.\n\n # stash the current visibility state\n if hasattr(self, 'patch'):\n patch_visible = self.patch.get_visible()\n else:\n patch_visible = True\n\n xaxis_visible = self.xaxis.get_visible()\n yaxis_visible = self.yaxis.get_visible()\n\n for axis in self._axis_map.values():\n axis.clear() # Also resets the scale to linear.\n for spine in self.spines.values():\n spine._clear() # Use _clear to not clear Axis again\n\n self.ignore_existing_data_limits = True\n self.callbacks = cbook.CallbackRegistry(\n signals=[\"xlim_changed\", \"ylim_changed\", \"zlim_changed\"])\n\n # update the minor locator for x and y axis based on rcParams\n 
if mpl.rcParams['xtick.minor.visible']:\n self.xaxis.set_minor_locator(mticker.AutoMinorLocator())\n if mpl.rcParams['ytick.minor.visible']:\n self.yaxis.set_minor_locator(mticker.AutoMinorLocator())\n\n self._xmargin = mpl.rcParams['axes.xmargin']\n self._ymargin = mpl.rcParams['axes.ymargin']\n self._tight = None\n self._use_sticky_edges = True\n\n self._get_lines = _process_plot_var_args()\n self._get_patches_for_fill = _process_plot_var_args('fill')\n\n self._gridOn = mpl.rcParams['axes.grid']\n old_children, self._children = self._children, []\n for chld in old_children:\n chld.axes = chld.figure = None\n self._mouseover_set = _OrderedSet()\n self.child_axes = []\n self._current_image = None # strictly for pyplot via _sci, _gci\n self._projection_init = None # strictly for pyplot.subplot\n self.legend_ = None\n self.containers = []\n\n self.grid(False) # Disable grid on init to use rcParameter\n self.grid(self._gridOn, which=mpl.rcParams['axes.grid.which'],\n axis=mpl.rcParams['axes.grid.axis'])\n props = font_manager.FontProperties(\n size=mpl.rcParams['axes.titlesize'],\n weight=mpl.rcParams['axes.titleweight'])\n\n y = mpl.rcParams['axes.titley']\n if y is None:\n y = 1.0\n self._autotitlepos = True\n else:\n self._autotitlepos = False\n\n self.title = mtext.Text(\n x=0.5, y=y, text='',\n fontproperties=props,\n verticalalignment='baseline',\n horizontalalignment='center',\n )\n self._left_title = mtext.Text(\n x=0.0, y=y, text='',\n fontproperties=props.copy(),\n verticalalignment='baseline',\n horizontalalignment='left', )\n self._right_title = mtext.Text(\n x=1.0, y=y, text='',\n fontproperties=props.copy(),\n verticalalignment='baseline',\n horizontalalignment='right',\n )\n title_offset_points = mpl.rcParams['axes.titlepad']\n # refactor this out so it can be called in ax.set_title if\n # pad argument used...\n self._set_title_offset_trans(title_offset_points)\n\n for _title in (self.title, self._left_title, self._right_title):\n self._set_artist_props(_title)\n\n # The patch draws the background of the Axes. We want this to be below\n # the other artists. 
We use the frame to draw the edges so we are\n # setting the edgecolor to None.\n self.patch = self._gen_axes_patch()\n self.patch.set_figure(self.figure)\n self.patch.set_facecolor(self._facecolor)\n self.patch.set_edgecolor('none')\n self.patch.set_linewidth(0)\n self.patch.set_transform(self.transAxes)\n\n self.set_axis_on()\n\n self.xaxis.set_clip_path(self.patch)\n self.yaxis.set_clip_path(self.patch)\n\n if self._sharex is not None:\n self.xaxis.set_visible(xaxis_visible)\n self.patch.set_visible(patch_visible)\n if self._sharey is not None:\n self.yaxis.set_visible(yaxis_visible)\n self.patch.set_visible(patch_visible)\n\n # This comes last, as the call to _set_lim may trigger an autoscale (in\n # case of shared axes), requiring children to be already set up.\n for name, axis in self._axis_map.items():\n share = getattr(self, f\"_share{name}\")\n if share is not None:\n getattr(self, f\"share{name}\")(share)\n else:\n # Although the scale was set to linear as part of clear,\n # polar requires that _set_scale is called again\n if self.name == \"polar\":\n axis._set_scale(\"linear\")\n axis._set_lim(0, 1, auto=True)\n self._update_transScale()\n\n self.stale = True\n\n def clear(self):\n \"\"\"Clear the Axes.\"\"\"\n # Act as an alias, or as the superclass implementation depending on the\n # subclass implementation.\n if self._subclass_uses_cla:\n self.cla()\n else:\n self.__clear()\n\n def cla(self):\n \"\"\"Clear the Axes.\"\"\"\n # Act as an alias, or as the superclass implementation depending on the\n # subclass implementation.\n if self._subclass_uses_cla:\n self.__clear()\n else:\n self.clear()\n\n class ArtistList(Sequence):\n \"\"\"\n A sublist of Axes children based on their type.\n\n The type-specific children sublists were made immutable in Matplotlib\n 3.7. In the future these artist lists may be replaced by tuples. Use\n as if this is a tuple already.\n \"\"\"\n def __init__(self, axes, prop_name,\n valid_types=None, invalid_types=None):\n \"\"\"\n Parameters\n ----------\n axes : `~matplotlib.axes.Axes`\n The Axes from which this sublist will pull the children\n Artists.\n prop_name : str\n The property name used to access this sublist from the Axes;\n used to generate deprecation warnings.\n valid_types : list of type, optional\n A list of types that determine which children will be returned\n by this sublist. If specified, then the Artists in the sublist\n must be instances of any of these types. If unspecified, then\n any type of Artist is valid (unless limited by\n *invalid_types*.)\n invalid_types : tuple, optional\n A list of types that determine which children will *not* be\n returned by this sublist. If specified, then Artists in the\n sublist will never be an instance of these types. 
Otherwise, no\n types will be excluded.\n \"\"\"\n self._axes = axes\n self._prop_name = prop_name\n self._type_check = lambda artist: (\n (not valid_types or isinstance(artist, valid_types)) and\n (not invalid_types or not isinstance(artist, invalid_types))\n )\n\n def __repr__(self):\n return f'<Axes.ArtistList of {len(self)} {self._prop_name}>'\n\n def __len__(self):\n return sum(self._type_check(artist)\n for artist in self._axes._children)\n\n def __iter__(self):\n for artist in list(self._axes._children):\n if self._type_check(artist):\n yield artist\n\n def __getitem__(self, key):\n return [artist\n for artist in self._axes._children\n if self._type_check(artist)][key]\n\n def __add__(self, other):\n if isinstance(other, (list, _AxesBase.ArtistList)):\n return [*self, *other]\n if isinstance(other, (tuple, _AxesBase.ArtistList)):\n return (*self, *other)\n return NotImplemented\n\n def __radd__(self, other):\n if isinstance(other, list):\n return other + list(self)\n if isinstance(other, tuple):\n return other + tuple(self)\n return NotImplemented\n\n @property\n def artists(self):\n return self.ArtistList(self, 'artists', invalid_types=(\n mcoll.Collection, mimage.AxesImage, mlines.Line2D, mpatches.Patch,\n mtable.Table, mtext.Text))\n\n @property\n def collections(self):\n return self.ArtistList(self, 'collections',\n valid_types=mcoll.Collection)\n\n @property\n def images(self):\n return self.ArtistList(self, 'images', valid_types=mimage.AxesImage)\n\n @property\n def lines(self):\n return self.ArtistList(self, 'lines', valid_types=mlines.Line2D)\n\n @property\n def patches(self):\n return self.ArtistList(self, 'patches', valid_types=mpatches.Patch)\n\n @property\n def tables(self):\n return self.ArtistList(self, 'tables', valid_types=mtable.Table)\n\n @property\n def texts(self):\n return self.ArtistList(self, 'texts', valid_types=mtext.Text)\n\n def get_facecolor(self):\n \"\"\"Get the facecolor of the Axes.\"\"\"\n return self.patch.get_facecolor()\n\n def set_facecolor(self, color):\n \"\"\"\n Set the facecolor of the Axes.\n\n Parameters\n ----------\n color : color\n \"\"\"\n self._facecolor = color\n self.stale = True\n return self.patch.set_facecolor(color)\n\n def _set_title_offset_trans(self, title_offset_points):\n \"\"\"\n Set the offset for the title either from :rc:`axes.titlepad`\n or from set_title kwarg ``pad``.\n \"\"\"\n self.titleOffsetTrans = mtransforms.ScaledTranslation(\n 0.0, title_offset_points / 72,\n self.figure.dpi_scale_trans)\n for _title in (self.title, self._left_title, self._right_title):\n _title.set_transform(self.transAxes + self.titleOffsetTrans)\n _title.set_clip_box(None)\n\n def set_prop_cycle(self, *args, **kwargs):\n \"\"\"\n Set the property cycle of the Axes.\n\n The property cycle controls the style properties such as color,\n marker and linestyle of future plot commands. The style properties\n of data already added to the Axes are not modified.\n\n Call signatures::\n\n set_prop_cycle(cycler)\n set_prop_cycle(label=values[, label2=values2[, ...]])\n set_prop_cycle(label, values)\n\n Form 1 sets given `~cycler.Cycler` object.\n\n Form 2 creates a `~cycler.Cycler` which cycles over one or more\n properties simultaneously and set it as the property cycle of the\n Axes. If multiple properties are given, their value lists must have\n the same length. This is just a shortcut for explicitly creating a\n cycler and passing it to the function, i.e. 
it's short for\n ``set_prop_cycle(cycler(label=values label2=values2, ...))``.\n\n Form 3 creates a `~cycler.Cycler` for a single property and set it\n as the property cycle of the Axes. This form exists for compatibility\n with the original `cycler.cycler` interface. Its use is discouraged\n in favor of the kwarg form, i.e. ``set_prop_cycle(label=values)``.\n\n Parameters\n ----------\n cycler : Cycler\n Set the given Cycler. *None* resets to the cycle defined by the\n current style.\n\n label : str\n The property key. Must be a valid `.Artist` property.\n For example, 'color' or 'linestyle'. Aliases are allowed,\n such as 'c' for 'color' and 'lw' for 'linewidth'.\n\n values : iterable\n Finite-length iterable of the property values. These values\n are validated and will raise a ValueError if invalid.\n\n See Also\n --------\n matplotlib.rcsetup.cycler\n Convenience function for creating validated cyclers for properties.\n cycler.cycler\n The original function for creating unvalidated cyclers.\n\n Examples\n --------\n Setting the property cycle for a single property:\n\n >>> ax.set_prop_cycle(color=['red', 'green', 'blue'])\n\n Setting the property cycle for simultaneously cycling over multiple\n properties (e.g. red circle, green plus, blue cross):\n\n >>> ax.set_prop_cycle(color=['red', 'green', 'blue'],\n ... marker=['o', '+', 'x'])\n\n \"\"\"\n if args and kwargs:\n raise TypeError(\"Cannot supply both positional and keyword \"\n \"arguments to this method.\")\n # Can't do `args == (None,)` as that crashes cycler.\n if len(args) == 1 and args[0] is None:\n prop_cycle = None\n else:\n prop_cycle = cycler(*args, **kwargs)\n self._get_lines.set_prop_cycle(prop_cycle)\n self._get_patches_for_fill.set_prop_cycle(prop_cycle)\n\n def get_aspect(self):\n \"\"\"\n Return the aspect ratio of the axes scaling.\n\n This is either \"auto\" or a float giving the ratio of y/x-scale.\n \"\"\"\n return self._aspect\n\n def set_aspect(self, aspect, adjustable=None, anchor=None, share=False):\n \"\"\"\n Set the aspect ratio of the axes scaling, i.e. y/x-scale.\n\n Parameters\n ----------\n aspect : {'auto', 'equal'} or float\n Possible values:\n\n - 'auto': fill the position rectangle with data.\n - 'equal': same as ``aspect=1``, i.e. same scaling for x and y.\n - *float*: The displayed size of 1 unit in y-data coordinates will\n be *aspect* times the displayed size of 1 unit in x-data\n coordinates; e.g. for ``aspect=2`` a square in data coordinates\n will be rendered with a height of twice its width.\n\n adjustable : None or {'box', 'datalim'}, optional\n If not ``None``, this defines which parameter will be adjusted to\n meet the required aspect. See `.set_adjustable` for further\n details.\n\n anchor : None or str or (float, float), optional\n If not ``None``, this defines where the Axes will be drawn if there\n is extra space due to aspect constraints. 
The most common way\n to specify the anchor are abbreviations of cardinal directions:\n\n ===== =====================\n value description\n ===== =====================\n 'C' centered\n 'SW' lower left corner\n 'S' middle of bottom edge\n 'SE' lower right corner\n etc.\n ===== =====================\n\n See `~.Axes.set_anchor` for further details.\n\n share : bool, default: False\n If ``True``, apply the settings to all shared Axes.\n\n See Also\n --------\n matplotlib.axes.Axes.set_adjustable\n Set how the Axes adjusts to achieve the required aspect ratio.\n matplotlib.axes.Axes.set_anchor\n Set the position in case of extra space.\n \"\"\"\n if cbook._str_equal(aspect, 'equal'):\n aspect = 1\n if not cbook._str_equal(aspect, 'auto'):\n aspect = float(aspect) # raise ValueError if necessary\n if aspect <= 0 or not np.isfinite(aspect):\n raise ValueError(\"aspect must be finite and positive \")\n\n if share:\n axes = {sibling for name in self._axis_names\n for sibling in self._shared_axes[name].get_siblings(self)}\n else:\n axes = [self]\n\n for ax in axes:\n ax._aspect = aspect\n\n if adjustable is None:\n adjustable = self._adjustable\n self.set_adjustable(adjustable, share=share) # Handle sharing.\n\n if anchor is not None:\n self.set_anchor(anchor, share=share)\n self.stale = True\n\n def get_adjustable(self):\n \"\"\"\n Return whether the Axes will adjust its physical dimension ('box') or\n its data limits ('datalim') to achieve the desired aspect ratio.\n\n See Also\n --------\n matplotlib.axes.Axes.set_adjustable\n Set how the Axes adjusts to achieve the required aspect ratio.\n matplotlib.axes.Axes.set_aspect\n For a description of aspect handling.\n \"\"\"\n return self._adjustable\n\n def set_adjustable(self, adjustable, share=False):\n \"\"\"\n Set how the Axes adjusts to achieve the required aspect ratio.\n\n Parameters\n ----------\n adjustable : {'box', 'datalim'}\n If 'box', change the physical dimensions of the Axes.\n If 'datalim', change the ``x`` or ``y`` data limits.\n\n share : bool, default: False\n If ``True``, apply the settings to all shared Axes.\n\n See Also\n --------\n matplotlib.axes.Axes.set_aspect\n For a description of aspect handling.\n\n Notes\n -----\n Shared Axes (of which twinned Axes are a special case)\n impose restrictions on how aspect ratios can be imposed.\n For twinned Axes, use 'datalim'. For Axes that share both\n x and y, use 'box'. Otherwise, either 'datalim' or 'box'\n may be used. These limitations are partly a requirement\n to avoid over-specification, and partly a result of the\n particular implementation we are currently using, in\n which the adjustments for aspect ratios are done sequentially\n and independently on each Axes as it is drawn.\n \"\"\"\n _api.check_in_list([\"box\", \"datalim\"], adjustable=adjustable)\n if share:\n axs = {sibling for name in self._axis_names\n for sibling in self._shared_axes[name].get_siblings(self)}\n else:\n axs = [self]\n if (adjustable == \"datalim\"\n and any(getattr(ax.get_data_ratio, \"__func__\", None)\n != _AxesBase.get_data_ratio\n for ax in axs)):\n # Limits adjustment by apply_aspect assumes that the axes' aspect\n # ratio can be computed from the data limits and scales.\n raise ValueError(\"Cannot set Axes adjustable to 'datalim' for \"\n \"Axes which override 'get_data_ratio'\")\n for ax in axs:\n ax._adjustable = adjustable\n self.stale = True\n\n def get_box_aspect(self):\n \"\"\"\n Return the Axes box aspect, i.e. the ratio of height to width.\n\n The box aspect is ``None`` (i.e. 
chosen depending on the available\n figure space) unless explicitly specified.\n\n See Also\n --------\n matplotlib.axes.Axes.set_box_aspect\n for a description of box aspect.\n matplotlib.axes.Axes.set_aspect\n for a description of aspect handling.\n \"\"\"\n return self._box_aspect\n\n def set_box_aspect(self, aspect=None):\n \"\"\"\n Set the Axes box aspect, i.e. the ratio of height to width.\n\n This defines the aspect of the Axes in figure space and is not to be\n confused with the data aspect (see `~.Axes.set_aspect`).\n\n Parameters\n ----------\n aspect : float or None\n Changes the physical dimensions of the Axes, such that the ratio\n of the Axes height to the Axes width in physical units is equal to\n *aspect*. Defining a box aspect will change the *adjustable*\n property to 'datalim' (see `~.Axes.set_adjustable`).\n\n *None* will disable a fixed box aspect so that height and width\n of the Axes are chosen independently.\n\n See Also\n --------\n matplotlib.axes.Axes.set_aspect\n for a description of aspect handling.\n \"\"\"\n axs = {*self._twinned_axes.get_siblings(self),\n *self._twinned_axes.get_siblings(self)}\n\n if aspect is not None:\n aspect = float(aspect)\n # when box_aspect is set to other than ´None`,\n # adjustable must be \"datalim\"\n for ax in axs:\n ax.set_adjustable(\"datalim\")\n\n for ax in axs:\n ax._box_aspect = aspect\n ax.stale = True\n\n def get_anchor(self):\n \"\"\"\n Get the anchor location.\n\n See Also\n --------\n matplotlib.axes.Axes.set_anchor\n for a description of the anchor.\n matplotlib.axes.Axes.set_aspect\n for a description of aspect handling.\n \"\"\"\n return self._anchor\n\n def set_anchor(self, anchor, share=False):\n \"\"\"\n Define the anchor location.\n\n The actual drawing area (active position) of the Axes may be smaller\n than the Bbox (original position) when a fixed aspect is required. The\n anchor defines where the drawing area will be located within the\n available space.\n\n Parameters\n ----------\n anchor : (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...}\n Either an (*x*, *y*) pair of relative coordinates (0 is left or\n bottom, 1 is right or top), 'C' (center), or a cardinal direction\n ('SW', southwest, is bottom left, etc.). 
str inputs are shorthands\n for (*x*, *y*) coordinates, as shown in the following diagram::\n\n ┌─────────────────┬─────────────────┬─────────────────┐\n │ 'NW' (0.0, 1.0) │ 'N' (0.5, 1.0) │ 'NE' (1.0, 1.0) │\n ├─────────────────┼─────────────────┼─────────────────┤\n │ 'W' (0.0, 0.5) │ 'C' (0.5, 0.5) │ 'E' (1.0, 0.5) │\n ├─────────────────┼─────────────────┼─────────────────┤\n │ 'SW' (0.0, 0.0) │ 'S' (0.5, 0.0) │ 'SE' (1.0, 0.0) │\n └─────────────────┴─────────────────┴─────────────────┘\n\n share : bool, default: False\n If ``True``, apply the settings to all shared Axes.\n\n See Also\n --------\n matplotlib.axes.Axes.set_aspect\n for a description of aspect handling.\n \"\"\"\n if not (anchor in mtransforms.Bbox.coefs or len(anchor) == 2):\n raise ValueError('argument must be among %s' %\n ', '.join(mtransforms.Bbox.coefs))\n if share:\n axes = {sibling for name in self._axis_names\n for sibling in self._shared_axes[name].get_siblings(self)}\n else:\n axes = [self]\n for ax in axes:\n ax._anchor = anchor\n\n self.stale = True\n\n def get_data_ratio(self):\n \"\"\"\n Return the aspect ratio of the scaled data.\n\n Notes\n -----\n This method is intended to be overridden by new projection types.\n \"\"\"\n txmin, txmax = self.xaxis.get_transform().transform(self.get_xbound())\n tymin, tymax = self.yaxis.get_transform().transform(self.get_ybound())\n xsize = max(abs(txmax - txmin), 1e-30)\n ysize = max(abs(tymax - tymin), 1e-30)\n return ysize / xsize\n\n def apply_aspect(self, position=None):\n \"\"\"\n Adjust the Axes for a specified data aspect ratio.\n\n Depending on `.get_adjustable` this will modify either the\n Axes box (position) or the view limits. In the former case,\n `~matplotlib.axes.Axes.get_anchor` will affect the position.\n\n Parameters\n ----------\n position : None or .Bbox\n If not ``None``, this defines the position of the\n Axes within the figure as a Bbox. See `~.Axes.get_position`\n for further details.\n\n Notes\n -----\n This is called automatically when each Axes is drawn. 
You may need\n to call it yourself if you need to update the Axes position and/or\n view limits before the Figure is drawn.\n\n See Also\n --------\n matplotlib.axes.Axes.set_aspect\n For a description of aspect ratio handling.\n matplotlib.axes.Axes.set_adjustable\n Set how the Axes adjusts to achieve the required aspect ratio.\n matplotlib.axes.Axes.set_anchor\n Set the position in case of extra space.\n \"\"\"\n if position is None:\n position = self.get_position(original=True)\n\n aspect = self.get_aspect()\n\n if aspect == 'auto' and self._box_aspect is None:\n self._set_position(position, which='active')\n return\n\n trans = self.get_figure().transSubfigure\n bb = mtransforms.Bbox.unit().transformed(trans)\n # this is the physical aspect of the panel (or figure):\n fig_aspect = bb.height / bb.width\n\n if self._adjustable == 'box':\n if self in self._twinned_axes:\n raise RuntimeError(\"Adjustable 'box' is not allowed in a \"\n \"twinned Axes; use 'datalim' instead\")\n box_aspect = aspect * self.get_data_ratio()\n pb = position.frozen()\n pb1 = pb.shrunk_to_aspect(box_aspect, pb, fig_aspect)\n self._set_position(pb1.anchored(self.get_anchor(), pb), 'active')\n return\n\n # The following is only seen if self._adjustable == 'datalim'\n if self._box_aspect is not None:\n pb = position.frozen()\n pb1 = pb.shrunk_to_aspect(self._box_aspect, pb, fig_aspect)\n self._set_position(pb1.anchored(self.get_anchor(), pb), 'active')\n if aspect == \"auto\":\n return\n\n # reset active to original in case it had been changed by prior use\n # of 'box'\n if self._box_aspect is None:\n self._set_position(position, which='active')\n else:\n position = pb1.anchored(self.get_anchor(), pb)\n\n x_trf = self.xaxis.get_transform()\n y_trf = self.yaxis.get_transform()\n xmin, xmax = x_trf.transform(self.get_xbound())\n ymin, ymax = y_trf.transform(self.get_ybound())\n xsize = max(abs(xmax - xmin), 1e-30)\n ysize = max(abs(ymax - ymin), 1e-30)\n\n box_aspect = fig_aspect * (position.height / position.width)\n data_ratio = box_aspect / aspect\n\n y_expander = data_ratio * xsize / ysize - 1\n # If y_expander > 0, the dy/dx viewLim ratio needs to increase\n if abs(y_expander) < 0.005:\n return\n\n dL = self.dataLim\n x0, x1 = x_trf.transform(dL.intervalx)\n y0, y1 = y_trf.transform(dL.intervaly)\n xr = 1.05 * (x1 - x0)\n yr = 1.05 * (y1 - y0)\n\n xmarg = xsize - xr\n ymarg = ysize - yr\n Ysize = data_ratio * xsize\n Xsize = ysize / data_ratio\n Xmarg = Xsize - xr\n Ymarg = Ysize - yr\n # Setting these targets to, e.g., 0.05*xr does not seem to help.\n xm = 0\n ym = 0\n\n shared_x = self in self._shared_axes[\"x\"]\n shared_y = self in self._shared_axes[\"y\"]\n\n if shared_x and shared_y:\n raise RuntimeError(\"set_aspect(..., adjustable='datalim') or \"\n \"axis('equal') are not allowed when both axes \"\n \"are shared. 
Try set_aspect(..., \"\n \"adjustable='box').\")\n\n # If y is shared, then we are only allowed to change x, etc.\n if shared_y:\n adjust_y = False\n else:\n if xmarg > xm and ymarg > ym:\n adjy = ((Ymarg > 0 and y_expander < 0) or\n (Xmarg < 0 and y_expander > 0))\n else:\n adjy = y_expander > 0\n adjust_y = shared_x or adjy # (Ymarg > xmarg)\n\n if adjust_y:\n yc = 0.5 * (ymin + ymax)\n y0 = yc - Ysize / 2.0\n y1 = yc + Ysize / 2.0\n self.set_ybound(y_trf.inverted().transform([y0, y1]))\n else:\n xc = 0.5 * (xmin + xmax)\n x0 = xc - Xsize / 2.0\n x1 = xc + Xsize / 2.0\n self.set_xbound(x_trf.inverted().transform([x0, x1]))\n\n def axis(self, arg=None, /, *, emit=True, **kwargs):\n \"\"\"\n Convenience method to get or set some axis properties.\n\n Call signatures::\n\n xmin, xmax, ymin, ymax = axis()\n xmin, xmax, ymin, ymax = axis([xmin, xmax, ymin, ymax])\n xmin, xmax, ymin, ymax = axis(option)\n xmin, xmax, ymin, ymax = axis(**kwargs)\n\n Parameters\n ----------\n xmin, xmax, ymin, ymax : float, optional\n The axis limits to be set. This can also be achieved using ::\n\n ax.set(xlim=(xmin, xmax), ylim=(ymin, ymax))\n\n option : bool or str\n If a bool, turns axis lines and labels on or off. If a string,\n possible values are:\n\n ======== ==========================================================\n Value Description\n ======== ==========================================================\n 'on' Turn on axis lines and labels. Same as ``True``.\n 'off' Turn off axis lines and labels. Same as ``False``.\n 'equal' Set equal scaling (i.e., make circles circular) by\n changing axis limits. This is the same as\n ``ax.set_aspect('equal', adjustable='datalim')``.\n Explicit data limits may not be respected in this case.\n 'scaled' Set equal scaling (i.e., make circles circular) by\n changing dimensions of the plot box. This is the same as\n ``ax.set_aspect('equal', adjustable='box', anchor='C')``.\n Additionally, further autoscaling will be disabled.\n 'tight' Set limits just large enough to show all data, then\n disable further autoscaling.\n 'auto' Automatic scaling (fill plot box with data).\n 'image' 'scaled' with axis limits equal to data limits.\n 'square' Square plot; similar to 'scaled', but initially forcing\n ``xmax-xmin == ymax-ymin``.\n ======== ==========================================================\n\n emit : bool, default: True\n Whether observers are notified of the axis limit change.\n This option is passed on to `~.Axes.set_xlim` and\n `~.Axes.set_ylim`.\n\n Returns\n -------\n xmin, xmax, ymin, ymax : float\n The axis limits.\n\n See Also\n --------\n matplotlib.axes.Axes.set_xlim\n matplotlib.axes.Axes.set_ylim\n\n Notes\n -----\n For 3D axes, this method additionally takes *zmin*, *zmax* as\n parameters and likewise returns them.\n \"\"\"\n if isinstance(arg, (str, bool)):\n if arg is True:\n arg = 'on'\n if arg is False:\n arg = 'off'\n arg = arg.lower()\n if arg == 'on':\n self.set_axis_on()\n elif arg == 'off':\n self.set_axis_off()\n elif arg in [\n 'equal', 'tight', 'scaled', 'auto', 'image', 'square']:\n self.set_autoscale_on(True)\n self.set_aspect('auto')\n self.autoscale_view(tight=False)\n if arg == 'equal':\n self.set_aspect('equal', adjustable='datalim')\n elif arg == 'scaled':\n self.set_aspect('equal', adjustable='box', anchor='C')\n self.set_autoscale_on(False) # Req. 
by Mark Bakker\n elif arg == 'tight':\n self.autoscale_view(tight=True)\n self.set_autoscale_on(False)\n elif arg == 'image':\n self.autoscale_view(tight=True)\n self.set_autoscale_on(False)\n self.set_aspect('equal', adjustable='box', anchor='C')\n elif arg == 'square':\n self.set_aspect('equal', adjustable='box', anchor='C')\n self.set_autoscale_on(False)\n xlim = self.get_xlim()\n ylim = self.get_ylim()\n edge_size = max(np.diff(xlim), np.diff(ylim))[0]\n self.set_xlim([xlim[0], xlim[0] + edge_size],\n emit=emit, auto=False)\n self.set_ylim([ylim[0], ylim[0] + edge_size],\n emit=emit, auto=False)\n else:\n raise ValueError(f\"Unrecognized string {arg!r} to axis; \"\n \"try 'on' or 'off'\")\n else:\n if arg is not None:\n if len(arg) != 2*len(self._axis_names):\n raise TypeError(\n \"The first argument to axis() must be an iterable of the form \"\n \"[{}]\".format(\", \".join(\n f\"{name}min, {name}max\" for name in self._axis_names)))\n limits = {\n name: arg[2*i:2*(i+1)]\n for i, name in enumerate(self._axis_names)\n }\n else:\n limits = {}\n for name in self._axis_names:\n ax_min = kwargs.pop(f'{name}min', None)\n ax_max = kwargs.pop(f'{name}max', None)\n limits[name] = (ax_min, ax_max)\n for name, (ax_min, ax_max) in limits.items():\n ax_auto = (None # Keep autoscale state as is.\n if ax_min is None and ax_max is None\n else False) # Turn off autoscale.\n set_ax_lim = getattr(self, f'set_{name}lim')\n set_ax_lim(ax_min, ax_max, emit=emit, auto=ax_auto)\n if kwargs:\n raise _api.kwarg_error(\"axis\", kwargs)\n lims = ()\n for name in self._axis_names:\n get_ax_lim = getattr(self, f'get_{name}lim')\n lims += get_ax_lim()\n return lims\n\n def get_legend(self):\n \"\"\"Return the `.Legend` instance, or None if no legend is defined.\"\"\"\n return self.legend_\n\n def get_images(self):\n r\"\"\"Return a list of `.AxesImage`\\s contained by the Axes.\"\"\"\n return cbook.silent_list('AxesImage', self.images)\n\n def get_lines(self):\n \"\"\"Return a list of lines contained by the Axes.\"\"\"\n return cbook.silent_list('Line2D', self.lines)\n\n def get_xaxis(self):\n \"\"\"\n [*Discouraged*] Return the XAxis instance.\n\n .. admonition:: Discouraged\n\n The use of this function is discouraged. You should instead\n directly access the attribute ``ax.xaxis``.\n \"\"\"\n return self.xaxis\n\n def get_yaxis(self):\n \"\"\"\n [*Discouraged*] Return the YAxis instance.\n\n .. admonition:: Discouraged\n\n The use of this function is discouraged. You should instead\n directly access the attribute ``ax.yaxis``.\n \"\"\"\n return self.yaxis\n\n get_xgridlines = _axis_method_wrapper(\"xaxis\", \"get_gridlines\")\n get_xticklines = _axis_method_wrapper(\"xaxis\", \"get_ticklines\")\n get_ygridlines = _axis_method_wrapper(\"yaxis\", \"get_gridlines\")\n get_yticklines = _axis_method_wrapper(\"yaxis\", \"get_ticklines\")\n\n # Adding and tracking artists\n\n def _sci(self, im):\n \"\"\"\n Set the current image.\n\n This image will be the target of colormap functions like\n ``pyplot.viridis``, and other functions such as `~.pyplot.clim`. 
The\n current image is an attribute of the current Axes.\n \"\"\"\n _api.check_isinstance((mcoll.Collection, mimage.AxesImage), im=im)\n if im not in self._children:\n raise ValueError(\"Argument must be an image or collection in this Axes\")\n self._current_image = im\n\n def _gci(self):\n \"\"\"Helper for `~matplotlib.pyplot.gci`; do not use elsewhere.\"\"\"\n return self._current_image\n\n def has_data(self):\n \"\"\"\n Return whether any artists have been added to the Axes.\n\n This should not be used to determine whether the *dataLim*\n need to be updated, and may not actually be useful for\n anything.\n \"\"\"\n return any(isinstance(a, (mcoll.Collection, mimage.AxesImage,\n mlines.Line2D, mpatches.Patch))\n for a in self._children)\n\n def add_artist(self, a):\n \"\"\"\n Add an `.Artist` to the Axes; return the artist.\n\n Use `add_artist` only for artists for which there is no dedicated\n \"add\" method; and if necessary, use a method such as `update_datalim`\n to manually update the dataLim if the artist is to be included in\n autoscaling.\n\n If no ``transform`` has been specified when creating the artist (e.g.\n ``artist.get_transform() == None``) then the transform is set to\n ``ax.transData``.\n \"\"\"\n a.axes = self\n self._children.append(a)\n a._remove_method = self._children.remove\n self._set_artist_props(a)\n if a.get_clip_path() is None:\n a.set_clip_path(self.patch)\n self.stale = True\n return a\n\n def add_child_axes(self, ax):\n \"\"\"\n Add an `.AxesBase` to the Axes' children; return the child Axes.\n\n This is the lowlevel version. See `.axes.Axes.inset_axes`.\n \"\"\"\n\n # normally Axes have themselves as the Axes, but these need to have\n # their parent...\n # Need to bypass the getter...\n ax._axes = self\n ax.stale_callback = martist._stale_axes_callback\n\n self.child_axes.append(ax)\n ax._remove_method = self.child_axes.remove\n self.stale = True\n return ax\n\n def add_collection(self, collection, autolim=True):\n \"\"\"\n Add a `.Collection` to the Axes; return the collection.\n \"\"\"\n _api.check_isinstance(mcoll.Collection, collection=collection)\n if not collection.get_label():\n collection.set_label(f'_child{len(self._children)}')\n self._children.append(collection)\n collection._remove_method = self._children.remove\n self._set_artist_props(collection)\n\n if collection.get_clip_path() is None:\n collection.set_clip_path(self.patch)\n\n if autolim:\n # Make sure viewLim is not stale (mostly to match\n # pre-lazy-autoscale behavior, which is not really better).\n self._unstale_viewLim()\n datalim = collection.get_datalim(self.transData)\n points = datalim.get_points()\n if not np.isinf(datalim.minpos).all():\n # By definition, if minpos (minimum positive value) is set\n # (i.e., non-inf), then min(points) <= minpos <= max(points),\n # and minpos would be superfluous. 
However, we add minpos to\n # the call so that self.dataLim will update its own minpos.\n # This ensures that log scales see the correct minimum.\n points = np.concatenate([points, [datalim.minpos]])\n self.update_datalim(points)\n\n self.stale = True\n return collection\n\n def add_image(self, image):\n \"\"\"\n Add an `.AxesImage` to the Axes; return the image.\n \"\"\"\n _api.check_isinstance(mimage.AxesImage, image=image)\n self._set_artist_props(image)\n if not image.get_label():\n image.set_label(f'_child{len(self._children)}')\n self._children.append(image)\n image._remove_method = self._children.remove\n self.stale = True\n return image\n\n def _update_image_limits(self, image):\n xmin, xmax, ymin, ymax = image.get_extent()\n self.axes.update_datalim(((xmin, ymin), (xmax, ymax)))\n\n def add_line(self, line):\n \"\"\"\n Add a `.Line2D` to the Axes; return the line.\n \"\"\"\n _api.check_isinstance(mlines.Line2D, line=line)\n self._set_artist_props(line)\n if line.get_clip_path() is None:\n line.set_clip_path(self.patch)\n\n self._update_line_limits(line)\n if not line.get_label():\n line.set_label(f'_child{len(self._children)}')\n self._children.append(line)\n line._remove_method = self._children.remove\n self.stale = True\n return line\n\n def _add_text(self, txt):\n \"\"\"\n Add a `.Text` to the Axes; return the text.\n \"\"\"\n _api.check_isinstance(mtext.Text, txt=txt)\n self._set_artist_props(txt)\n self._children.append(txt)\n txt._remove_method = self._children.remove\n self.stale = True\n return txt\n\n def _update_line_limits(self, line):\n \"\"\"\n Figures out the data limit of the given line, updating self.dataLim.\n \"\"\"\n path = line.get_path()\n if path.vertices.size == 0:\n return\n\n line_trf = line.get_transform()\n\n if line_trf == self.transData:\n data_path = path\n elif any(line_trf.contains_branch_seperately(self.transData)):\n # Compute the transform from line coordinates to data coordinates.\n trf_to_data = line_trf - self.transData\n # If transData is affine we can use the cached non-affine component\n # of line's path (since the non-affine part of line_trf is\n # entirely encapsulated in trf_to_data).\n if self.transData.is_affine:\n line_trans_path = line._get_transformed_path()\n na_path, _ = line_trans_path.get_transformed_path_and_affine()\n data_path = trf_to_data.transform_path_affine(na_path)\n else:\n data_path = trf_to_data.transform_path(path)\n else:\n # For backwards compatibility we update the dataLim with the\n # coordinate range of the given path, even though the coordinate\n # systems are completely different. 
This may occur in situations\n # such as when ax.transAxes is passed through for absolute\n # positioning.\n data_path = path\n\n if not data_path.vertices.size:\n return\n\n updatex, updatey = line_trf.contains_branch_seperately(self.transData)\n if self.name != \"rectilinear\":\n # This block is mostly intended to handle axvline in polar plots,\n # for which updatey would otherwise be True.\n if updatex and line_trf == self.get_yaxis_transform():\n updatex = False\n if updatey and line_trf == self.get_xaxis_transform():\n updatey = False\n self.dataLim.update_from_path(data_path,\n self.ignore_existing_data_limits,\n updatex=updatex, updatey=updatey)\n self.ignore_existing_data_limits = False\n\n def add_patch(self, p):\n \"\"\"\n Add a `.Patch` to the Axes; return the patch.\n \"\"\"\n _api.check_isinstance(mpatches.Patch, p=p)\n self._set_artist_props(p)\n if p.get_clip_path() is None:\n p.set_clip_path(self.patch)\n self._update_patch_limits(p)\n self._children.append(p)\n p._remove_method = self._children.remove\n return p\n\n def _update_patch_limits(self, patch):\n \"\"\"Update the data limits for the given patch.\"\"\"\n # hist can add zero height Rectangles, which is useful to keep\n # the bins, counts and patches lined up, but it throws off log\n # scaling. We'll ignore rects with zero height or width in\n # the auto-scaling\n\n # cannot check for '==0' since unitized data may not compare to zero\n # issue #2150 - we update the limits if patch has non zero width\n # or height.\n if (isinstance(patch, mpatches.Rectangle) and\n ((not patch.get_width()) and (not patch.get_height()))):\n return\n p = patch.get_path()\n # Get all vertices on the path\n # Loop through each segment to get extrema for Bezier curve sections\n vertices = []\n for curve, code in p.iter_bezier(simplify=False):\n # Get distance along the curve of any extrema\n _, dzeros = curve.axis_aligned_extrema()\n # Calculate vertices of start, end and any extrema in between\n vertices.append(curve([0, *dzeros, 1]))\n\n if len(vertices):\n vertices = np.row_stack(vertices)\n\n patch_trf = patch.get_transform()\n updatex, updatey = patch_trf.contains_branch_seperately(self.transData)\n if not (updatex or updatey):\n return\n if self.name != \"rectilinear\":\n # As in _update_line_limits, but for axvspan.\n if updatex and patch_trf == self.get_yaxis_transform():\n updatex = False\n if updatey and patch_trf == self.get_xaxis_transform():\n updatey = False\n trf_to_data = patch_trf - self.transData\n xys = trf_to_data.transform(vertices)\n self.update_datalim(xys, updatex=updatex, updatey=updatey)\n\n def add_table(self, tab):\n \"\"\"\n Add a `.Table` to the Axes; return the table.\n \"\"\"\n _api.check_isinstance(mtable.Table, tab=tab)\n self._set_artist_props(tab)\n self._children.append(tab)\n if tab.get_clip_path() is None:\n tab.set_clip_path(self.patch)\n tab._remove_method = self._children.remove\n return tab\n\n def add_container(self, container):\n \"\"\"\n Add a `.Container` to the Axes' containers; return the container.\n \"\"\"\n label = container.get_label()\n if not label:\n container.set_label('_container%d' % len(self.containers))\n self.containers.append(container)\n container._remove_method = self.containers.remove\n return container\n\n def _unit_change_handler(self, axis_name, event=None):\n \"\"\"\n Process axis units changes: requests updates to data and view limits.\n \"\"\"\n if event is None: # Allow connecting `self._unit_change_handler(name)`\n return functools.partial(\n 
self._unit_change_handler, axis_name, event=object())\n _api.check_in_list(self._axis_map, axis_name=axis_name)\n for line in self.lines:\n line.recache_always()\n self.relim()\n self._request_autoscale_view(axis_name)\n\n def relim(self, visible_only=False):\n \"\"\"\n Recompute the data limits based on current artists.\n\n At present, `.Collection` instances are not supported.\n\n Parameters\n ----------\n visible_only : bool, default: False\n Whether to exclude invisible artists.\n \"\"\"\n # Collections are deliberately not supported (yet); see\n # the TODO note in artists.py.\n self.dataLim.ignore(True)\n self.dataLim.set_points(mtransforms.Bbox.null().get_points())\n self.ignore_existing_data_limits = True\n\n for artist in self._children:\n if not visible_only or artist.get_visible():\n if isinstance(artist, mlines.Line2D):\n self._update_line_limits(artist)\n elif isinstance(artist, mpatches.Patch):\n self._update_patch_limits(artist)\n elif isinstance(artist, mimage.AxesImage):\n self._update_image_limits(artist)\n\n def update_datalim(self, xys, updatex=True, updatey=True):\n \"\"\"\n Extend the `~.Axes.dataLim` Bbox to include the given points.\n\n If no data is set currently, the Bbox will ignore its limits and set\n the bound to be the bounds of the xydata (*xys*). Otherwise, it will\n compute the bounds of the union of its current data and the data in\n *xys*.\n\n Parameters\n ----------\n xys : 2D array-like\n The points to include in the data limits Bbox. This can be either\n a list of (x, y) tuples or a (N, 2) array.\n\n updatex, updatey : bool, default: True\n Whether to update the x/y limits.\n \"\"\"\n xys = np.asarray(xys)\n if not np.any(np.isfinite(xys)):\n return\n self.dataLim.update_from_data_xy(xys, self.ignore_existing_data_limits,\n updatex=updatex, updatey=updatey)\n self.ignore_existing_data_limits = False\n\n def _process_unit_info(self, datasets=None, kwargs=None, *, convert=True):\n \"\"\"\n Set axis units based on *datasets* and *kwargs*, and optionally apply\n unit conversions to *datasets*.\n\n Parameters\n ----------\n datasets : list\n List of (axis_name, dataset) pairs (where the axis name is defined\n as in `._axis_map`). Individual datasets can also be None\n (which gets passed through).\n kwargs : dict\n Other parameters from which unit info (i.e., the *xunits*,\n *yunits*, *zunits* (for 3D Axes), *runits* and *thetaunits* (for\n polar) entries) is popped, if present. Note that this dict is\n mutated in-place!\n convert : bool, default: True\n Whether to return the original datasets or the converted ones.\n\n Returns\n -------\n list\n Either the original datasets if *convert* is False, or the\n converted ones if *convert* is True (the default).\n \"\"\"\n # The API makes datasets a list of pairs rather than an axis_name to\n # dataset mapping because it is sometimes necessary to process multiple\n # datasets for a single axis, and concatenating them may be tricky\n # (e.g. 
if some are scalars, etc.).\n datasets = datasets or []\n kwargs = kwargs or {}\n axis_map = self._axis_map\n for axis_name, data in datasets:\n try:\n axis = axis_map[axis_name]\n except KeyError:\n raise ValueError(f\"Invalid axis name: {axis_name!r}\") from None\n # Update from data if axis is already set but no unit is set yet.\n if axis is not None and data is not None and not axis.have_units():\n axis.update_units(data)\n for axis_name, axis in axis_map.items():\n # Return if no axis is set.\n if axis is None:\n continue\n # Check for units in the kwargs, and if present update axis.\n units = kwargs.pop(f\"{axis_name}units\", axis.units)\n if self.name == \"polar\":\n # Special case: polar supports \"thetaunits\"/\"runits\".\n polar_units = {\"x\": \"thetaunits\", \"y\": \"runits\"}\n units = kwargs.pop(polar_units[axis_name], units)\n if units != axis.units and units is not None:\n axis.set_units(units)\n # If the units being set imply a different converter,\n # we need to update again.\n for dataset_axis_name, data in datasets:\n if dataset_axis_name == axis_name and data is not None:\n axis.update_units(data)\n return [axis_map[axis_name].convert_units(data)\n if convert and data is not None else data\n for axis_name, data in datasets]\n\n def in_axes(self, mouseevent):\n \"\"\"\n Return whether the given event (in display coords) is in the Axes.\n \"\"\"\n return self.patch.contains(mouseevent)[0]\n\n get_autoscalex_on = _axis_method_wrapper(\"xaxis\", \"_get_autoscale_on\")\n get_autoscaley_on = _axis_method_wrapper(\"yaxis\", \"_get_autoscale_on\")\n set_autoscalex_on = _axis_method_wrapper(\"xaxis\", \"_set_autoscale_on\")\n set_autoscaley_on = _axis_method_wrapper(\"yaxis\", \"_set_autoscale_on\")\n\n def get_autoscale_on(self):\n \"\"\"Return True if each axis is autoscaled, False otherwise.\"\"\"\n return all(axis._get_autoscale_on()\n for axis in self._axis_map.values())\n\n def set_autoscale_on(self, b):\n \"\"\"\n Set whether autoscaling is applied to each axis on the next draw or\n call to `.Axes.autoscale_view`.\n\n Parameters\n ----------\n b : bool\n \"\"\"\n for axis in self._axis_map.values():\n axis._set_autoscale_on(b)\n\n @property\n def use_sticky_edges(self):\n \"\"\"\n When autoscaling, whether to obey all `Artist.sticky_edges`.\n\n Default is ``True``.\n\n Setting this to ``False`` ensures that the specified margins\n will be applied, even if the plot includes an image, for\n example, which would otherwise force a view limit to coincide\n with its data limit.\n\n The changing this property does not change the plot until\n `autoscale` or `autoscale_view` is called.\n \"\"\"\n return self._use_sticky_edges\n\n @use_sticky_edges.setter\n def use_sticky_edges(self, b):\n self._use_sticky_edges = bool(b)\n # No effect until next autoscaling, which will mark the Axes as stale.\n\n def set_xmargin(self, m):\n \"\"\"\n Set padding of X data limits prior to autoscaling.\n\n *m* times the data interval will be added to each end of that interval\n before it is used in autoscaling. 
If *m* is negative, this will clip\n the data range instead of expanding it.\n\n For example, if your data is in the range [0, 2], a margin of 0.1 will\n result in a range [-0.2, 2.2]; a margin of -0.1 will result in a range\n of [0.2, 1.8].\n\n Parameters\n ----------\n m : float greater than -0.5\n \"\"\"\n if m <= -0.5:\n raise ValueError(\"margin must be greater than -0.5\")\n self._xmargin = m\n self._request_autoscale_view(\"x\")\n self.stale = True\n\n def set_ymargin(self, m):\n \"\"\"\n Set padding of Y data limits prior to autoscaling.\n\n *m* times the data interval will be added to each end of that interval\n before it is used in autoscaling. If *m* is negative, this will clip\n the data range instead of expanding it.\n\n For example, if your data is in the range [0, 2], a margin of 0.1 will\n result in a range [-0.2, 2.2]; a margin of -0.1 will result in a range\n of [0.2, 1.8].\n\n Parameters\n ----------\n m : float greater than -0.5\n \"\"\"\n if m <= -0.5:\n raise ValueError(\"margin must be greater than -0.5\")\n self._ymargin = m\n self._request_autoscale_view(\"y\")\n self.stale = True\n\n def margins(self, *margins, x=None, y=None, tight=True):\n \"\"\"\n Set or retrieve autoscaling margins.\n\n The padding added to each limit of the Axes is the *margin*\n times the data interval. All input parameters must be floats\n greater than -0.5. Passing both positional and keyword\n arguments is invalid and will raise a TypeError. If no\n arguments (positional or otherwise) are provided, the current\n margins will remain unchanged and simply be returned.\n\n Specifying any margin changes only the autoscaling; for example,\n if *xmargin* is not None, then *xmargin* times the X data\n interval will be added to each end of that interval before\n it is used in autoscaling.\n\n Parameters\n ----------\n *margins : float, optional\n If a single positional argument is provided, it specifies\n both margins of the x-axis and y-axis limits. If two\n positional arguments are provided, they will be interpreted\n as *xmargin*, *ymargin*. If setting the margin on a single\n axis is desired, use the keyword arguments described below.\n\n x, y : float, optional\n Specific margin values for the x-axis and y-axis,\n respectively. These cannot be used with positional\n arguments, but can be used individually to alter on e.g.,\n only the y-axis.\n\n tight : bool or None, default: True\n The *tight* parameter is passed to `~.axes.Axes.autoscale_view`,\n which is executed after a margin is changed; the default\n here is *True*, on the assumption that when margins are\n specified, no additional padding to match tick marks is\n usually desired. Setting *tight* to *None* preserves\n the previous setting.\n\n Returns\n -------\n xmargin, ymargin : float\n\n Notes\n -----\n If a previously used Axes method such as :meth:`pcolor` has set\n :attr:`use_sticky_edges` to `True`, only the limits not set by\n the \"sticky artists\" will be modified. 
To force all of the\n margins to be set, set :attr:`use_sticky_edges` to `False`\n before calling :meth:`margins`.\n \"\"\"\n\n if margins and (x is not None or y is not None):\n raise TypeError('Cannot pass both positional and keyword '\n 'arguments for x and/or y.')\n elif len(margins) == 1:\n x = y = margins[0]\n elif len(margins) == 2:\n x, y = margins\n elif margins:\n raise TypeError('Must pass a single positional argument for all '\n 'margins, or one for each margin (x, y).')\n\n if x is None and y is None:\n if tight is not True:\n _api.warn_external(f'ignoring tight={tight!r} in get mode')\n return self._xmargin, self._ymargin\n\n if tight is not None:\n self._tight = tight\n if x is not None:\n self.set_xmargin(x)\n if y is not None:\n self.set_ymargin(y)\n\n def set_rasterization_zorder(self, z):\n \"\"\"\n Set the zorder threshold for rasterization for vector graphics output.\n\n All artists with a zorder below the given value will be rasterized if\n they support rasterization.\n\n This setting is ignored for pixel-based output.\n\n See also :doc:`/gallery/misc/rasterization_demo`.\n\n Parameters\n ----------\n z : float or None\n The zorder below which artists are rasterized.\n If ``None`` rasterization based on zorder is deactivated.\n \"\"\"\n self._rasterization_zorder = z\n self.stale = True\n\n def get_rasterization_zorder(self):\n \"\"\"Return the zorder value below which artists will be rasterized.\"\"\"\n return self._rasterization_zorder\n\n def autoscale(self, enable=True, axis='both', tight=None):\n \"\"\"\n Autoscale the axis view to the data (toggle).\n\n Convenience method for simple axis view autoscaling.\n It turns autoscaling on or off, and then,\n if autoscaling for either axis is on, it performs\n the autoscaling on the specified axis or Axes.\n\n Parameters\n ----------\n enable : bool or None, default: True\n True turns autoscaling on, False turns it off.\n None leaves the autoscaling state unchanged.\n axis : {'both', 'x', 'y'}, default: 'both'\n The axis on which to operate. (For 3D Axes, *axis* can also be set\n to 'z', and 'both' refers to all three axes.)\n tight : bool or None, default: None\n If True, first set the margins to zero. Then, this argument is\n forwarded to `~.axes.Axes.autoscale_view` (regardless of\n its value); see the description of its behavior there.\n \"\"\"\n if enable is None:\n scalex = True\n scaley = True\n else:\n if axis in ['x', 'both']:\n self.set_autoscalex_on(bool(enable))\n scalex = self.get_autoscalex_on()\n else:\n scalex = False\n if axis in ['y', 'both']:\n self.set_autoscaley_on(bool(enable))\n scaley = self.get_autoscaley_on()\n else:\n scaley = False\n if tight and scalex:\n self._xmargin = 0\n if tight and scaley:\n self._ymargin = 0\n if scalex:\n self._request_autoscale_view(\"x\", tight=tight)\n if scaley:\n self._request_autoscale_view(\"y\", tight=tight)\n\n def autoscale_view(self, tight=None, scalex=True, scaley=True):\n \"\"\"\n Autoscale the view limits using the data limits.\n\n Parameters\n ----------\n tight : bool or None\n If *True*, only expand the axis limits using the margins. 
Note\n that unlike for `autoscale`, ``tight=True`` does *not* set the\n margins to zero.\n\n If *False* and :rc:`axes.autolimit_mode` is 'round_numbers', then\n after expansion by the margins, further expand the axis limits\n using the axis major locator.\n\n If None (the default), reuse the value set in the previous call to\n `autoscale_view` (the initial value is False, but the default style\n sets :rc:`axes.autolimit_mode` to 'data', in which case this\n behaves like True).\n\n scalex : bool, default: True\n Whether to autoscale the x-axis.\n\n scaley : bool, default: True\n Whether to autoscale the y-axis.\n\n Notes\n -----\n The autoscaling preserves any preexisting axis direction reversal.\n\n The data limits are not updated automatically when artist data are\n changed after the artist has been added to an Axes instance. In that\n case, use :meth:`matplotlib.axes.Axes.relim` prior to calling\n autoscale_view.\n\n If the views of the Axes are fixed, e.g. via `set_xlim`, they will\n not be changed by autoscale_view().\n See :meth:`matplotlib.axes.Axes.autoscale` for an alternative.\n \"\"\"\n if tight is not None:\n self._tight = bool(tight)\n\n x_stickies = y_stickies = np.array([])\n if self.use_sticky_edges:\n if self._xmargin and scalex and self.get_autoscalex_on():\n x_stickies = np.sort(np.concatenate([\n artist.sticky_edges.x\n for ax in self._shared_axes[\"x\"].get_siblings(self)\n for artist in ax.get_children()]))\n if self._ymargin and scaley and self.get_autoscaley_on():\n y_stickies = np.sort(np.concatenate([\n artist.sticky_edges.y\n for ax in self._shared_axes[\"y\"].get_siblings(self)\n for artist in ax.get_children()]))\n if self.get_xscale() == 'log':\n x_stickies = x_stickies[x_stickies > 0]\n if self.get_yscale() == 'log':\n y_stickies = y_stickies[y_stickies > 0]\n\n def handle_single_axis(\n scale, shared_axes, name, axis, margin, stickies, set_bound):\n\n if not (scale and axis._get_autoscale_on()):\n return # nothing to do...\n\n shared = shared_axes.get_siblings(self)\n # Base autoscaling on finite data limits when there is at least one\n # finite data limit among all the shared_axes and intervals.\n values = [val for ax in shared\n for val in getattr(ax.dataLim, f\"interval{name}\")\n if np.isfinite(val)]\n if values:\n x0, x1 = (min(values), max(values))\n elif getattr(self._viewLim, f\"mutated{name}\")():\n # No data, but explicit viewLims already set:\n # in mutatedx or mutatedy.\n return\n else:\n x0, x1 = (-np.inf, np.inf)\n # If x0 and x1 are nonfinite, get default limits from the locator.\n locator = axis.get_major_locator()\n x0, x1 = locator.nonsingular(x0, x1)\n # Find the minimum minpos for use in the margin calculation.\n minimum_minpos = min(\n getattr(ax.dataLim, f\"minpos{name}\") for ax in shared)\n\n # Prevent margin addition from crossing a sticky value. A small\n # tolerance must be added due to floating point issues with\n # streamplot; it is defined relative to x0, x1, x1-x0 but has\n # no absolute term (e.g. 
\"+1e-8\") to avoid issues when working with\n # datasets where all values are tiny (less than 1e-8).\n tol = 1e-5 * max(abs(x0), abs(x1), abs(x1 - x0))\n # Index of largest element < x0 + tol, if any.\n i0 = stickies.searchsorted(x0 + tol) - 1\n x0bound = stickies[i0] if i0 != -1 else None\n # Index of smallest element > x1 - tol, if any.\n i1 = stickies.searchsorted(x1 - tol)\n x1bound = stickies[i1] if i1 != len(stickies) else None\n\n # Add the margin in figure space and then transform back, to handle\n # non-linear scales.\n transform = axis.get_transform()\n inverse_trans = transform.inverted()\n x0, x1 = axis._scale.limit_range_for_scale(x0, x1, minimum_minpos)\n x0t, x1t = transform.transform([x0, x1])\n delta = (x1t - x0t) * margin\n if not np.isfinite(delta):\n delta = 0 # If a bound isn't finite, set margin to zero.\n x0, x1 = inverse_trans.transform([x0t - delta, x1t + delta])\n\n # Apply sticky bounds.\n if x0bound is not None:\n x0 = max(x0, x0bound)\n if x1bound is not None:\n x1 = min(x1, x1bound)\n\n if not self._tight:\n x0, x1 = locator.view_limits(x0, x1)\n set_bound(x0, x1)\n # End of definition of internal function 'handle_single_axis'.\n\n handle_single_axis(\n scalex, self._shared_axes[\"x\"], 'x', self.xaxis, self._xmargin,\n x_stickies, self.set_xbound)\n handle_single_axis(\n scaley, self._shared_axes[\"y\"], 'y', self.yaxis, self._ymargin,\n y_stickies, self.set_ybound)\n\n def _update_title_position(self, renderer):\n \"\"\"\n Update the title position based on the bounding box enclosing\n all the ticklabels and x-axis spine and xlabel...\n \"\"\"\n if self._autotitlepos is not None and not self._autotitlepos:\n _log.debug('title position was updated manually, not adjusting')\n return\n\n titles = (self.title, self._left_title, self._right_title)\n\n # Need to check all our twins too, and all the children as well.\n axs = self._twinned_axes.get_siblings(self) + self.child_axes\n for ax in self.child_axes: # Child positions must be updated first.\n locator = ax.get_axes_locator()\n ax.apply_aspect(locator(self, renderer) if locator else None)\n\n for title in titles:\n x, _ = title.get_position()\n # need to start again in case of window resizing\n title.set_position((x, 1.0))\n top = -np.inf\n for ax in axs:\n bb = None\n if (ax.xaxis.get_ticks_position() in ['top', 'unknown']\n or ax.xaxis.get_label_position() == 'top'):\n bb = ax.xaxis.get_tightbbox(renderer)\n if bb is None:\n if 'outline' in ax.spines:\n # Special case for colorbars:\n bb = ax.spines['outline'].get_window_extent()\n else:\n bb = ax.get_window_extent(renderer)\n top = max(top, bb.ymax)\n if title.get_text():\n ax.yaxis.get_tightbbox(renderer) # update offsetText\n if ax.yaxis.offsetText.get_text():\n bb = ax.yaxis.offsetText.get_tightbbox(renderer)\n if bb.intersection(title.get_tightbbox(renderer), bb):\n top = bb.ymax\n if top < 0:\n # the top of Axes is not even on the figure, so don't try and\n # automatically place it.\n _log.debug('top of Axes not in the figure, so title not moved')\n return\n if title.get_window_extent(renderer).ymin < top:\n _, y = self.transAxes.inverted().transform((0, top))\n title.set_position((x, y))\n # empirically, this doesn't always get the min to top,\n # so we need to adjust again.\n if title.get_window_extent(renderer).ymin < top:\n _, y = self.transAxes.inverted().transform(\n (0., 2 * top - title.get_window_extent(renderer).ymin))\n title.set_position((x, y))\n\n ymax = max(title.get_position()[1] for title in titles)\n for title in titles:\n # now line 
up all the titles at the highest baseline.\n x, _ = title.get_position()\n title.set_position((x, ymax))\n\n # Drawing\n @martist.allow_rasterization\n def draw(self, renderer):\n # docstring inherited\n if renderer is None:\n raise RuntimeError('No renderer defined')\n if not self.get_visible():\n return\n self._unstale_viewLim()\n\n renderer.open_group('axes', gid=self.get_gid())\n\n # prevent triggering call backs during the draw process\n self._stale = True\n\n # loop over self and child Axes...\n locator = self.get_axes_locator()\n self.apply_aspect(locator(self, renderer) if locator else None)\n\n artists = self.get_children()\n artists.remove(self.patch)\n\n # the frame draws the edges around the Axes patch -- we\n # decouple these so the patch can be in the background and the\n # frame in the foreground. Do this before drawing the axis\n # objects so that the spine has the opportunity to update them.\n if not (self.axison and self._frameon):\n for spine in self.spines.values():\n artists.remove(spine)\n\n self._update_title_position(renderer)\n\n if not self.axison:\n for _axis in self._axis_map.values():\n artists.remove(_axis)\n\n if not self.figure.canvas.is_saving():\n artists = [\n a for a in artists\n if not a.get_animated() or isinstance(a, mimage.AxesImage)]\n artists = sorted(artists, key=attrgetter('zorder'))\n\n # rasterize artists with negative zorder\n # if the minimum zorder is negative, start rasterization\n rasterization_zorder = self._rasterization_zorder\n\n if (rasterization_zorder is not None and\n artists and artists[0].zorder < rasterization_zorder):\n split_index = np.searchsorted(\n [art.zorder for art in artists],\n rasterization_zorder, side='right'\n )\n artists_rasterized = artists[:split_index]\n artists = artists[split_index:]\n else:\n artists_rasterized = []\n\n if self.axison and self._frameon:\n if artists_rasterized:\n artists_rasterized = [self.patch] + artists_rasterized\n else:\n artists = [self.patch] + artists\n\n if artists_rasterized:\n _draw_rasterized(self.figure, artists_rasterized, renderer)\n\n mimage._draw_list_compositing_images(\n renderer, self, artists, self.figure.suppressComposite)\n\n renderer.close_group('axes')\n self.stale = False\n\n def draw_artist(self, a):\n \"\"\"\n Efficiently redraw a single artist.\n \"\"\"\n a.draw(self.figure.canvas.get_renderer())\n\n def redraw_in_frame(self):\n \"\"\"\n Efficiently redraw Axes data, but not axis ticks, labels, etc.\n \"\"\"\n with ExitStack() as stack:\n for artist in [*self._axis_map.values(),\n self.title, self._left_title, self._right_title]:\n stack.enter_context(artist._cm_set(visible=False))\n self.draw(self.figure.canvas.get_renderer())\n\n # Axes rectangle characteristics\n\n def get_frame_on(self):\n \"\"\"Get whether the Axes rectangle patch is drawn.\"\"\"\n return self._frameon\n\n def set_frame_on(self, b):\n \"\"\"\n Set whether the Axes rectangle patch is drawn.\n\n Parameters\n ----------\n b : bool\n \"\"\"\n self._frameon = b\n self.stale = True\n\n def get_axisbelow(self):\n \"\"\"\n Get whether axis ticks and gridlines are above or below most artists.\n\n Returns\n -------\n bool or 'line'\n\n See Also\n --------\n set_axisbelow\n \"\"\"\n return self._axisbelow\n\n def set_axisbelow(self, b):\n \"\"\"\n Set whether axis ticks and gridlines are above or below most artists.\n\n This controls the zorder of the ticks and gridlines. 
For more\n information on the zorder see :doc:`/gallery/misc/zorder_demo`.\n\n Parameters\n ----------\n b : bool or 'line'\n Possible values:\n\n - *True* (zorder = 0.5): Ticks and gridlines are below all Artists.\n - 'line' (zorder = 1.5): Ticks and gridlines are above patches\n (e.g. rectangles, with default zorder = 1) but still below lines\n and markers (with their default zorder = 2).\n - *False* (zorder = 2.5): Ticks and gridlines are above patches\n and lines / markers.\n\n See Also\n --------\n get_axisbelow\n \"\"\"\n # Check that b is True, False or 'line'\n self._axisbelow = axisbelow = validate_axisbelow(b)\n zorder = {\n True: 0.5,\n 'line': 1.5,\n False: 2.5,\n }[axisbelow]\n for axis in self._axis_map.values():\n axis.set_zorder(zorder)\n self.stale = True\n\n @_docstring.dedent_interpd\n def grid(self, visible=None, which='major', axis='both', **kwargs):\n \"\"\"\n Configure the grid lines.\n\n Parameters\n ----------\n visible : bool or None, optional\n Whether to show the grid lines. If any *kwargs* are supplied, it\n is assumed you want the grid on and *visible* will be set to True.\n\n If *visible* is *None* and there are no *kwargs*, this toggles the\n visibility of the lines.\n\n which : {'major', 'minor', 'both'}, optional\n The grid lines to apply the changes on.\n\n axis : {'both', 'x', 'y'}, optional\n The axis to apply the changes on.\n\n **kwargs : `~matplotlib.lines.Line2D` properties\n Define the line properties of the grid, e.g.::\n\n grid(color='r', linestyle='-', linewidth=2)\n\n Valid keyword arguments are:\n\n %(Line2D:kwdoc)s\n\n Notes\n -----\n The axis is drawn as a unit, so the effective zorder for drawing the\n grid is determined by the zorder of each axis, not by the zorder of the\n `.Line2D` objects comprising the grid. Therefore, to set grid zorder,\n use `.set_axisbelow` or, for more control, call the\n `~.Artist.set_zorder` method of each axis.\n \"\"\"\n _api.check_in_list(['x', 'y', 'both'], axis=axis)\n if axis in ['x', 'both']:\n self.xaxis.grid(visible, which=which, **kwargs)\n if axis in ['y', 'both']:\n self.yaxis.grid(visible, which=which, **kwargs)\n\n def ticklabel_format(self, *, axis='both', style='', scilimits=None,\n useOffset=None, useLocale=None, useMathText=None):\n r\"\"\"\n Configure the `.ScalarFormatter` used by default for linear Axes.\n\n If a parameter is not set, the corresponding property of the formatter\n is left unchanged.\n\n Parameters\n ----------\n axis : {'x', 'y', 'both'}, default: 'both'\n The axis to configure. Only major ticks are affected.\n\n style : {'sci', 'scientific', 'plain'}\n Whether to use scientific notation.\n The formatter default is to use scientific notation.\n\n scilimits : pair of ints (m, n)\n Scientific notation is used only for numbers outside the range\n 10\\ :sup:`m` to 10\\ :sup:`n` (and only if the formatter is\n configured to use scientific notation at all). Use (0, 0) to\n include all numbers. Use (m, m) where m != 0 to fix the order of\n magnitude to 10\\ :sup:`m`.\n The formatter default is :rc:`axes.formatter.limits`.\n\n useOffset : bool or float\n If True, the offset is calculated as needed.\n If False, no offset is used.\n If a numeric value, it sets the offset.\n The formatter default is :rc:`axes.formatter.useoffset`.\n\n useLocale : bool\n Whether to format the number using the current locale or using the\n C (English) locale. This affects e.g. the decimal separator. 
The\n formatter default is :rc:`axes.formatter.use_locale`.\n\n useMathText : bool\n Render the offset and scientific notation in mathtext.\n The formatter default is :rc:`axes.formatter.use_mathtext`.\n\n Raises\n ------\n AttributeError\n If the current formatter is not a `.ScalarFormatter`.\n \"\"\"\n style = style.lower()\n axis = axis.lower()\n if scilimits is not None:\n try:\n m, n = scilimits\n m + n + 1 # check that both are numbers\n except (ValueError, TypeError) as err:\n raise ValueError(\"scilimits must be a sequence of 2 integers\"\n ) from err\n STYLES = {'sci': True, 'scientific': True, 'plain': False, '': None}\n is_sci_style = _api.check_getitem(STYLES, style=style)\n axis_map = {**{k: [v] for k, v in self._axis_map.items()},\n 'both': list(self._axis_map.values())}\n axises = _api.check_getitem(axis_map, axis=axis)\n try:\n for axis in axises:\n if is_sci_style is not None:\n axis.major.formatter.set_scientific(is_sci_style)\n if scilimits is not None:\n axis.major.formatter.set_powerlimits(scilimits)\n if useOffset is not None:\n axis.major.formatter.set_useOffset(useOffset)\n if useLocale is not None:\n axis.major.formatter.set_useLocale(useLocale)\n if useMathText is not None:\n axis.major.formatter.set_useMathText(useMathText)\n except AttributeError as err:\n raise AttributeError(\n \"This method only works with the ScalarFormatter\") from err\n\n def locator_params(self, axis='both', tight=None, **kwargs):\n \"\"\"\n Control behavior of major tick locators.\n\n Because the locator is involved in autoscaling, `~.Axes.autoscale_view`\n is called automatically after the parameters are changed.\n\n Parameters\n ----------\n axis : {'both', 'x', 'y'}, default: 'both'\n The axis on which to operate. (For 3D Axes, *axis* can also be\n set to 'z', and 'both' refers to all three axes.)\n tight : bool or None, optional\n Parameter passed to `~.Axes.autoscale_view`.\n Default is None, for no change.\n\n Other Parameters\n ----------------\n **kwargs\n Remaining keyword arguments are passed to directly to the\n ``set_params()`` method of the locator. Supported keywords depend\n on the type of the locator. See for example\n `~.ticker.MaxNLocator.set_params` for the `.ticker.MaxNLocator`\n used by default for linear.\n\n Examples\n --------\n When plotting small subplots, one might want to reduce the maximum\n number of ticks and use tight bounds, for example::\n\n ax.locator_params(tight=True, nbins=4)\n\n \"\"\"\n _api.check_in_list([*self._axis_names, \"both\"], axis=axis)\n for name in self._axis_names:\n if axis in [name, \"both\"]:\n loc = self._axis_map[name].get_major_locator()\n loc.set_params(**kwargs)\n self._request_autoscale_view(name, tight=tight)\n self.stale = True\n\n def tick_params(self, axis='both', **kwargs):\n \"\"\"\n Change the appearance of ticks, tick labels, and gridlines.\n\n Tick properties that are not explicitly set using the keyword\n arguments remain unchanged unless *reset* is True. 
For the current\n style settings, see `.Axis.get_tick_params`.\n\n Parameters\n ----------\n axis : {'x', 'y', 'both'}, default: 'both'\n The axis to which the parameters are applied.\n which : {'major', 'minor', 'both'}, default: 'major'\n The group of ticks to which the parameters are applied.\n reset : bool, default: False\n Whether to reset the ticks to defaults before updating them.\n\n Other Parameters\n ----------------\n direction : {'in', 'out', 'inout'}\n Puts ticks inside the Axes, outside the Axes, or both.\n length : float\n Tick length in points.\n width : float\n Tick width in points.\n color : color\n Tick color.\n pad : float\n Distance in points between tick and label.\n labelsize : float or str\n Tick label font size in points or as a string (e.g., 'large').\n labelcolor : color\n Tick label color.\n labelfontfamily : str\n Tick label font.\n colors : color\n Tick color and label color.\n zorder : float\n Tick and label zorder.\n bottom, top, left, right : bool\n Whether to draw the respective ticks.\n labelbottom, labeltop, labelleft, labelright : bool\n Whether to draw the respective tick labels.\n labelrotation : float\n Tick label rotation\n grid_color : color\n Gridline color.\n grid_alpha : float\n Transparency of gridlines: 0 (transparent) to 1 (opaque).\n grid_linewidth : float\n Width of gridlines in points.\n grid_linestyle : str\n Any valid `.Line2D` line style spec.\n\n Examples\n --------\n ::\n\n ax.tick_params(direction='out', length=6, width=2, colors='r',\n grid_color='r', grid_alpha=0.5)\n\n This will make all major ticks be red, pointing out of the box,\n and with dimensions 6 points by 2 points. Tick labels will\n also be red. Gridlines will be red and translucent.\n\n \"\"\"\n _api.check_in_list(['x', 'y', 'both'], axis=axis)\n if axis in ['x', 'both']:\n xkw = dict(kwargs)\n xkw.pop('left', None)\n xkw.pop('right', None)\n xkw.pop('labelleft', None)\n xkw.pop('labelright', None)\n self.xaxis.set_tick_params(**xkw)\n if axis in ['y', 'both']:\n ykw = dict(kwargs)\n ykw.pop('top', None)\n ykw.pop('bottom', None)\n ykw.pop('labeltop', None)\n ykw.pop('labelbottom', None)\n self.yaxis.set_tick_params(**ykw)\n\n def set_axis_off(self):\n \"\"\"\n Turn the x- and y-axis off.\n\n This affects the axis lines, ticks, ticklabels, grid and axis labels.\n \"\"\"\n self.axison = False\n self.stale = True\n\n def set_axis_on(self):\n \"\"\"\n Turn the x- and y-axis on.\n\n This affects the axis lines, ticks, ticklabels, grid and axis labels.\n \"\"\"\n self.axison = True\n self.stale = True\n\n # data limits, ticks, tick labels, and formatting\n\n def get_xlabel(self):\n \"\"\"\n Get the xlabel text string.\n \"\"\"\n label = self.xaxis.get_label()\n return label.get_text()\n\n def set_xlabel(self, xlabel, fontdict=None, labelpad=None, *,\n loc=None, **kwargs):\n \"\"\"\n Set the label for the x-axis.\n\n Parameters\n ----------\n xlabel : str\n The label text.\n\n labelpad : float, default: :rc:`axes.labelpad`\n Spacing in points from the Axes bounding box including ticks\n and tick labels. If None, the previous value is left as is.\n\n loc : {'left', 'center', 'right'}, default: :rc:`xaxis.labellocation`\n The label position. 
This is a high-level alternative for passing\n parameters *x* and *horizontalalignment*.\n\n Other Parameters\n ----------------\n **kwargs : `~matplotlib.text.Text` properties\n `.Text` properties control the appearance of the label.\n\n See Also\n --------\n text : Documents the properties supported by `.Text`.\n \"\"\"\n if labelpad is not None:\n self.xaxis.labelpad = labelpad\n protected_kw = ['x', 'horizontalalignment', 'ha']\n if {*kwargs} & {*protected_kw}:\n if loc is not None:\n raise TypeError(f\"Specifying 'loc' is disallowed when any of \"\n f\"its corresponding low level keyword \"\n f\"arguments ({protected_kw}) are also \"\n f\"supplied\")\n\n else:\n loc = (loc if loc is not None\n else mpl.rcParams['xaxis.labellocation'])\n _api.check_in_list(('left', 'center', 'right'), loc=loc)\n\n x = {\n 'left': 0,\n 'center': 0.5,\n 'right': 1,\n }[loc]\n kwargs.update(x=x, horizontalalignment=loc)\n\n return self.xaxis.set_label_text(xlabel, fontdict, **kwargs)\n\n def invert_xaxis(self):\n \"\"\"\n Invert the x-axis.\n\n See Also\n --------\n xaxis_inverted\n get_xlim, set_xlim\n get_xbound, set_xbound\n \"\"\"\n self.xaxis.set_inverted(not self.xaxis.get_inverted())\n\n xaxis_inverted = _axis_method_wrapper(\"xaxis\", \"get_inverted\")\n\n def get_xbound(self):\n \"\"\"\n Return the lower and upper x-axis bounds, in increasing order.\n\n See Also\n --------\n set_xbound\n get_xlim, set_xlim\n invert_xaxis, xaxis_inverted\n \"\"\"\n left, right = self.get_xlim()\n if left < right:\n return left, right\n else:\n return right, left\n\n def set_xbound(self, lower=None, upper=None):\n \"\"\"\n Set the lower and upper numerical bounds of the x-axis.\n\n This method will honor axis inversion regardless of parameter order.\n It will not change the autoscaling setting (`.get_autoscalex_on()`).\n\n Parameters\n ----------\n lower, upper : float or None\n The lower and upper bounds. If *None*, the respective axis bound\n is not modified.\n\n See Also\n --------\n get_xbound\n get_xlim, set_xlim\n invert_xaxis, xaxis_inverted\n \"\"\"\n if upper is None and np.iterable(lower):\n lower, upper = lower\n\n old_lower, old_upper = self.get_xbound()\n if lower is None:\n lower = old_lower\n if upper is None:\n upper = old_upper\n\n self.set_xlim(sorted((lower, upper),\n reverse=bool(self.xaxis_inverted())),\n auto=None)\n\n def get_xlim(self):\n \"\"\"\n Return the x-axis view limits.\n\n Returns\n -------\n left, right : (float, float)\n The current x-axis limits in data coordinates.\n\n See Also\n --------\n .Axes.set_xlim\n .Axes.set_xbound, .Axes.get_xbound\n .Axes.invert_xaxis, .Axes.xaxis_inverted\n\n Notes\n -----\n The x-axis may be inverted, in which case the *left* value will\n be greater than the *right* value.\n \"\"\"\n return tuple(self.viewLim.intervalx)\n\n def _validate_converted_limits(self, limit, convert):\n \"\"\"\n Raise ValueError if converted limits are non-finite.\n\n Note that this function also accepts None as a limit argument.\n\n Returns\n -------\n The limit value after call to convert(), or None if limit is None.\n \"\"\"\n if limit is not None:\n converted_limit = convert(limit)\n if (isinstance(converted_limit, Real)\n and not np.isfinite(converted_limit)):\n raise ValueError(\"Axis limits cannot be NaN or Inf\")\n return converted_limit\n\n def set_xlim(self, left=None, right=None, *, emit=True, auto=False,\n xmin=None, xmax=None):\n \"\"\"\n Set the x-axis view limits.\n\n Parameters\n ----------\n left : float, optional\n The left xlim in data coordinates. 
Passing *None* leaves the\n limit unchanged.\n\n The left and right xlims may also be passed as the tuple\n (*left*, *right*) as the first positional argument (or as\n the *left* keyword argument).\n\n .. ACCEPTS: (bottom: float, top: float)\n\n right : float, optional\n The right xlim in data coordinates. Passing *None* leaves the\n limit unchanged.\n\n emit : bool, default: True\n Whether to notify observers of limit change.\n\n auto : bool or None, default: False\n Whether to turn on autoscaling of the x-axis. True turns on,\n False turns off, None leaves unchanged.\n\n xmin, xmax : float, optional\n They are equivalent to left and right respectively, and it is an\n error to pass both *xmin* and *left* or *xmax* and *right*.\n\n Returns\n -------\n left, right : (float, float)\n The new x-axis limits in data coordinates.\n\n See Also\n --------\n get_xlim\n set_xbound, get_xbound\n invert_xaxis, xaxis_inverted\n\n Notes\n -----\n The *left* value may be greater than the *right* value, in which\n case the x-axis values will decrease from left to right.\n\n Examples\n --------\n >>> set_xlim(left, right)\n >>> set_xlim((left, right))\n >>> left, right = set_xlim(left, right)\n\n One limit may be left unchanged.\n\n >>> set_xlim(right=right_lim)\n\n Limits may be passed in reverse order to flip the direction of\n the x-axis. For example, suppose *x* represents the number of\n years before present. The x-axis limits might be set like the\n following so 5000 years ago is on the left of the plot and the\n present is on the right.\n\n >>> set_xlim(5000, 0)\n \"\"\"\n if right is None and np.iterable(left):\n left, right = left\n if xmin is not None:\n if left is not None:\n raise TypeError(\"Cannot pass both 'left' and 'xmin'\")\n left = xmin\n if xmax is not None:\n if right is not None:\n raise TypeError(\"Cannot pass both 'right' and 'xmax'\")\n right = xmax\n return self.xaxis._set_lim(left, right, emit=emit, auto=auto)\n\n get_xscale = _axis_method_wrapper(\"xaxis\", \"get_scale\")\n set_xscale = _axis_method_wrapper(\"xaxis\", \"_set_axes_scale\")\n get_xticks = _axis_method_wrapper(\"xaxis\", \"get_ticklocs\")\n set_xticks = _axis_method_wrapper(\"xaxis\", \"set_ticks\")\n get_xmajorticklabels = _axis_method_wrapper(\"xaxis\", \"get_majorticklabels\")\n get_xminorticklabels = _axis_method_wrapper(\"xaxis\", \"get_minorticklabels\")\n get_xticklabels = _axis_method_wrapper(\"xaxis\", \"get_ticklabels\")\n set_xticklabels = _axis_method_wrapper(\n \"xaxis\", \"set_ticklabels\",\n doc_sub={\"Axis.set_ticks\": \"Axes.set_xticks\"})\n\n def get_ylabel(self):\n \"\"\"\n Get the ylabel text string.\n \"\"\"\n label = self.yaxis.get_label()\n return label.get_text()\n\n def set_ylabel(self, ylabel, fontdict=None, labelpad=None, *,\n loc=None, **kwargs):\n \"\"\"\n Set the label for the y-axis.\n\n Parameters\n ----------\n ylabel : str\n The label text.\n\n labelpad : float, default: :rc:`axes.labelpad`\n Spacing in points from the Axes bounding box including ticks\n and tick labels. If None, the previous value is left as is.\n\n loc : {'bottom', 'center', 'top'}, default: :rc:`yaxis.labellocation`\n The label position. 
This is a high-level alternative for passing\n parameters *y* and *horizontalalignment*.\n\n Other Parameters\n ----------------\n **kwargs : `~matplotlib.text.Text` properties\n `.Text` properties control the appearance of the label.\n\n See Also\n --------\n text : Documents the properties supported by `.Text`.\n \"\"\"\n if labelpad is not None:\n self.yaxis.labelpad = labelpad\n protected_kw = ['y', 'horizontalalignment', 'ha']\n if {*kwargs} & {*protected_kw}:\n if loc is not None:\n raise TypeError(f\"Specifying 'loc' is disallowed when any of \"\n f\"its corresponding low level keyword \"\n f\"arguments ({protected_kw}) are also \"\n f\"supplied\")\n\n else:\n loc = (loc if loc is not None\n else mpl.rcParams['yaxis.labellocation'])\n _api.check_in_list(('bottom', 'center', 'top'), loc=loc)\n\n y, ha = {\n 'bottom': (0, 'left'),\n 'center': (0.5, 'center'),\n 'top': (1, 'right')\n }[loc]\n kwargs.update(y=y, horizontalalignment=ha)\n\n return self.yaxis.set_label_text(ylabel, fontdict, **kwargs)\n\n def invert_yaxis(self):\n \"\"\"\n Invert the y-axis.\n\n See Also\n --------\n yaxis_inverted\n get_ylim, set_ylim\n get_ybound, set_ybound\n \"\"\"\n self.yaxis.set_inverted(not self.yaxis.get_inverted())\n\n yaxis_inverted = _axis_method_wrapper(\"yaxis\", \"get_inverted\")\n\n def get_ybound(self):\n \"\"\"\n Return the lower and upper y-axis bounds, in increasing order.\n\n See Also\n --------\n set_ybound\n get_ylim, set_ylim\n invert_yaxis, yaxis_inverted\n \"\"\"\n bottom, top = self.get_ylim()\n if bottom < top:\n return bottom, top\n else:\n return top, bottom\n\n def set_ybound(self, lower=None, upper=None):\n \"\"\"\n Set the lower and upper numerical bounds of the y-axis.\n\n This method will honor axis inversion regardless of parameter order.\n It will not change the autoscaling setting (`.get_autoscaley_on()`).\n\n Parameters\n ----------\n lower, upper : float or None\n The lower and upper bounds. If *None*, the respective axis bound\n is not modified.\n\n See Also\n --------\n get_ybound\n get_ylim, set_ylim\n invert_yaxis, yaxis_inverted\n \"\"\"\n if upper is None and np.iterable(lower):\n lower, upper = lower\n\n old_lower, old_upper = self.get_ybound()\n if lower is None:\n lower = old_lower\n if upper is None:\n upper = old_upper\n\n self.set_ylim(sorted((lower, upper),\n reverse=bool(self.yaxis_inverted())),\n auto=None)\n\n def get_ylim(self):\n \"\"\"\n Return the y-axis view limits.\n\n Returns\n -------\n bottom, top : (float, float)\n The current y-axis limits in data coordinates.\n\n See Also\n --------\n .Axes.set_ylim\n .Axes.set_ybound, .Axes.get_ybound\n .Axes.invert_yaxis, .Axes.yaxis_inverted\n\n Notes\n -----\n The y-axis may be inverted, in which case the *bottom* value\n will be greater than the *top* value.\n \"\"\"\n return tuple(self.viewLim.intervaly)\n\n def set_ylim(self, bottom=None, top=None, *, emit=True, auto=False,\n ymin=None, ymax=None):\n \"\"\"\n Set the y-axis view limits.\n\n Parameters\n ----------\n bottom : float, optional\n The bottom ylim in data coordinates. Passing *None* leaves the\n limit unchanged.\n\n The bottom and top ylims may also be passed as the tuple\n (*bottom*, *top*) as the first positional argument (or as\n the *bottom* keyword argument).\n\n .. ACCEPTS: (bottom: float, top: float)\n\n top : float, optional\n The top ylim in data coordinates. 
Passing *None* leaves the\n limit unchanged.\n\n emit : bool, default: True\n Whether to notify observers of limit change.\n\n auto : bool or None, default: False\n Whether to turn on autoscaling of the y-axis. *True* turns on,\n *False* turns off, *None* leaves unchanged.\n\n ymin, ymax : float, optional\n They are equivalent to bottom and top respectively, and it is an\n error to pass both *ymin* and *bottom* or *ymax* and *top*.\n\n Returns\n -------\n bottom, top : (float, float)\n The new y-axis limits in data coordinates.\n\n See Also\n --------\n get_ylim\n set_ybound, get_ybound\n invert_yaxis, yaxis_inverted\n\n Notes\n -----\n The *bottom* value may be greater than the *top* value, in which\n case the y-axis values will decrease from *bottom* to *top*.\n\n Examples\n --------\n >>> set_ylim(bottom, top)\n >>> set_ylim((bottom, top))\n >>> bottom, top = set_ylim(bottom, top)\n\n One limit may be left unchanged.\n\n >>> set_ylim(top=top_lim)\n\n Limits may be passed in reverse order to flip the direction of\n the y-axis. For example, suppose ``y`` represents depth of the\n ocean in m. The y-axis limits might be set like the following\n so 5000 m depth is at the bottom of the plot and the surface,\n 0 m, is at the top.\n\n >>> set_ylim(5000, 0)\n \"\"\"\n if top is None and np.iterable(bottom):\n bottom, top = bottom\n if ymin is not None:\n if bottom is not None:\n raise TypeError(\"Cannot pass both 'bottom' and 'ymin'\")\n bottom = ymin\n if ymax is not None:\n if top is not None:\n raise TypeError(\"Cannot pass both 'top' and 'ymax'\")\n top = ymax\n return self.yaxis._set_lim(bottom, top, emit=emit, auto=auto)\n\n get_yscale = _axis_method_wrapper(\"yaxis\", \"get_scale\")\n set_yscale = _axis_method_wrapper(\"yaxis\", \"_set_axes_scale\")\n get_yticks = _axis_method_wrapper(\"yaxis\", \"get_ticklocs\")\n set_yticks = _axis_method_wrapper(\"yaxis\", \"set_ticks\")\n get_ymajorticklabels = _axis_method_wrapper(\"yaxis\", \"get_majorticklabels\")\n get_yminorticklabels = _axis_method_wrapper(\"yaxis\", \"get_minorticklabels\")\n get_yticklabels = _axis_method_wrapper(\"yaxis\", \"get_ticklabels\")\n set_yticklabels = _axis_method_wrapper(\n \"yaxis\", \"set_ticklabels\",\n doc_sub={\"Axis.set_ticks\": \"Axes.set_yticks\"})\n\n xaxis_date = _axis_method_wrapper(\"xaxis\", \"axis_date\")\n yaxis_date = _axis_method_wrapper(\"yaxis\", \"axis_date\")\n\n def format_xdata(self, x):\n \"\"\"\n Return *x* formatted as an x-value.\n\n This function will use the `.fmt_xdata` attribute if it is not None,\n else will fall back on the xaxis major formatter.\n \"\"\"\n return (self.fmt_xdata if self.fmt_xdata is not None\n else self.xaxis.get_major_formatter().format_data_short)(x)\n\n def format_ydata(self, y):\n \"\"\"\n Return *y* formatted as a y-value.\n\n This function will use the `.fmt_ydata` attribute if it is not None,\n else will fall back on the yaxis major formatter.\n \"\"\"\n return (self.fmt_ydata if self.fmt_ydata is not None\n else self.yaxis.get_major_formatter().format_data_short)(y)\n\n def format_coord(self, x, y):\n \"\"\"Return a format string formatting the *x*, *y* coordinates.\"\"\"\n return \"x={} y={}\".format(\n \"???\" if x is None else self.format_xdata(x),\n \"???\" if y is None else self.format_ydata(y),\n )\n\n def minorticks_on(self):\n \"\"\"\n Display minor ticks on the Axes.\n\n Displaying minor ticks may reduce performance; you may turn them off\n using `minorticks_off()` if drawing speed is a problem.\n \"\"\"\n for ax in (self.xaxis, self.yaxis):\n 
scale = ax.get_scale()\n if scale == 'log':\n s = ax._scale\n ax.set_minor_locator(mticker.LogLocator(s.base, s.subs))\n elif scale == 'symlog':\n s = ax._scale\n ax.set_minor_locator(\n mticker.SymmetricalLogLocator(s._transform, s.subs))\n else:\n ax.set_minor_locator(mticker.AutoMinorLocator())\n\n def minorticks_off(self):\n \"\"\"Remove minor ticks from the Axes.\"\"\"\n self.xaxis.set_minor_locator(mticker.NullLocator())\n self.yaxis.set_minor_locator(mticker.NullLocator())\n\n # Interactive manipulation\n\n def can_zoom(self):\n \"\"\"\n Return whether this Axes supports the zoom box button functionality.\n \"\"\"\n return True\n\n def can_pan(self):\n \"\"\"\n Return whether this Axes supports any pan/zoom button functionality.\n \"\"\"\n return True\n\n def get_navigate(self):\n \"\"\"\n Get whether the Axes responds to navigation commands.\n \"\"\"\n return self._navigate\n\n def set_navigate(self, b):\n \"\"\"\n Set whether the Axes responds to navigation toolbar commands.\n\n Parameters\n ----------\n b : bool\n \"\"\"\n self._navigate = b\n\n def get_navigate_mode(self):\n \"\"\"\n Get the navigation toolbar button status: 'PAN', 'ZOOM', or None.\n \"\"\"\n return self._navigate_mode\n\n def set_navigate_mode(self, b):\n \"\"\"\n Set the navigation toolbar button status.\n\n .. warning::\n This is not a user-API function.\n\n \"\"\"\n self._navigate_mode = b\n\n def _get_view(self):\n \"\"\"\n Save information required to reproduce the current view.\n\n This method is called before a view is changed, such as during a pan or zoom\n initiated by the user. It returns an opaque object that describes the current\n view, in a format compatible with :meth:`_set_view`.\n\n The default implementation saves the view limits and autoscaling state.\n Subclasses may override this as needed, as long as :meth:`_set_view` is also\n adjusted accordingly.\n \"\"\"\n return {\n \"xlim\": self.get_xlim(), \"autoscalex_on\": self.get_autoscalex_on(),\n \"ylim\": self.get_ylim(), \"autoscaley_on\": self.get_autoscaley_on(),\n }\n\n def _set_view(self, view):\n \"\"\"\n Apply a previously saved view.\n\n This method is called when restoring a view (with the return value of\n :meth:`_get_view` as argument), such as with the navigation buttons.\n\n Subclasses that override :meth:`_get_view` also need to override this method\n accordingly.\n \"\"\"\n self.set(**view)\n\n def _prepare_view_from_bbox(self, bbox, direction='in',\n mode=None, twinx=False, twiny=False):\n \"\"\"\n Helper function to prepare the new bounds from a bbox.\n\n This helper function returns the new x and y bounds from the zoom\n bbox. This a convenience method to abstract the bbox logic\n out of the base setter.\n \"\"\"\n if len(bbox) == 3:\n xp, yp, scl = bbox # Zooming code\n if scl == 0: # Should not happen\n scl = 1.\n if scl > 1:\n direction = 'in'\n else:\n direction = 'out'\n scl = 1/scl\n # get the limits of the axes\n (xmin, ymin), (xmax, ymax) = self.transData.transform(\n np.transpose([self.get_xlim(), self.get_ylim()]))\n # set the range\n xwidth = xmax - xmin\n ywidth = ymax - ymin\n xcen = (xmax + xmin)*.5\n ycen = (ymax + ymin)*.5\n xzc = (xp*(scl - 1) + xcen)/scl\n yzc = (yp*(scl - 1) + ycen)/scl\n bbox = [xzc - xwidth/2./scl, yzc - ywidth/2./scl,\n xzc + xwidth/2./scl, yzc + ywidth/2./scl]\n elif len(bbox) != 4:\n # should be len 3 or 4 but nothing else\n _api.warn_external(\n \"Warning in _set_view_from_bbox: bounding box is not a tuple \"\n \"of length 3 or 4. 
Ignoring the view change.\")\n return\n\n # Original limits.\n xmin0, xmax0 = self.get_xbound()\n ymin0, ymax0 = self.get_ybound()\n # The zoom box in screen coords.\n startx, starty, stopx, stopy = bbox\n # Convert to data coords.\n (startx, starty), (stopx, stopy) = self.transData.inverted().transform(\n [(startx, starty), (stopx, stopy)])\n # Clip to axes limits.\n xmin, xmax = np.clip(sorted([startx, stopx]), xmin0, xmax0)\n ymin, ymax = np.clip(sorted([starty, stopy]), ymin0, ymax0)\n # Don't double-zoom twinned axes or if zooming only the other axis.\n if twinx or mode == \"y\":\n xmin, xmax = xmin0, xmax0\n if twiny or mode == \"x\":\n ymin, ymax = ymin0, ymax0\n\n if direction == \"in\":\n new_xbound = xmin, xmax\n new_ybound = ymin, ymax\n\n elif direction == \"out\":\n x_trf = self.xaxis.get_transform()\n sxmin0, sxmax0, sxmin, sxmax = x_trf.transform(\n [xmin0, xmax0, xmin, xmax]) # To screen space.\n factor = (sxmax0 - sxmin0) / (sxmax - sxmin) # Unzoom factor.\n # Move original bounds away by\n # (factor) x (distance between unzoom box and Axes bbox).\n sxmin1 = sxmin0 - factor * (sxmin - sxmin0)\n sxmax1 = sxmax0 + factor * (sxmax0 - sxmax)\n # And back to data space.\n new_xbound = x_trf.inverted().transform([sxmin1, sxmax1])\n\n y_trf = self.yaxis.get_transform()\n symin0, symax0, symin, symax = y_trf.transform(\n [ymin0, ymax0, ymin, ymax])\n factor = (symax0 - symin0) / (symax - symin)\n symin1 = symin0 - factor * (symin - symin0)\n symax1 = symax0 + factor * (symax0 - symax)\n new_ybound = y_trf.inverted().transform([symin1, symax1])\n\n return new_xbound, new_ybound\n\n def _set_view_from_bbox(self, bbox, direction='in',\n mode=None, twinx=False, twiny=False):\n \"\"\"\n Update view from a selection bbox.\n\n .. note::\n\n Intended to be overridden by new projection types, but if not, the\n default implementation sets the view limits to the bbox directly.\n\n Parameters\n ----------\n bbox : 4-tuple or 3 tuple\n * If bbox is a 4 tuple, it is the selected bounding box limits,\n in *display* coordinates.\n * If bbox is a 3 tuple, it is an (xp, yp, scl) triple, where\n (xp, yp) is the center of zooming and scl the scale factor to\n zoom by.\n\n direction : str\n The direction to apply the bounding box.\n * `'in'` - The bounding box describes the view directly, i.e.,\n it zooms in.\n * `'out'` - The bounding box describes the size to make the\n existing view, i.e., it zooms out.\n\n mode : str or None\n The selection mode, whether to apply the bounding box in only the\n `'x'` direction, `'y'` direction or both (`None`).\n\n twinx : bool\n Whether this axis is twinned in the *x*-direction.\n\n twiny : bool\n Whether this axis is twinned in the *y*-direction.\n \"\"\"\n new_xbound, new_ybound = self._prepare_view_from_bbox(\n bbox, direction=direction, mode=mode, twinx=twinx, twiny=twiny)\n if not twinx and mode != \"y\":\n self.set_xbound(new_xbound)\n self.set_autoscalex_on(False)\n if not twiny and mode != \"x\":\n self.set_ybound(new_ybound)\n self.set_autoscaley_on(False)\n\n def start_pan(self, x, y, button):\n \"\"\"\n Called when a pan operation has started.\n\n Parameters\n ----------\n x, y : float\n The mouse coordinates in display coords.\n button : `.MouseButton`\n The pressed mouse button.\n\n Notes\n -----\n This is intended to be overridden by new projection types.\n \"\"\"\n self._pan_start = types.SimpleNamespace(\n lim=self.viewLim.frozen(),\n trans=self.transData.frozen(),\n trans_inverse=self.transData.inverted().frozen(),\n bbox=self.bbox.frozen(),\n 
x=x,\n y=y)\n\n def end_pan(self):\n \"\"\"\n Called when a pan operation completes (when the mouse button is up.)\n\n Notes\n -----\n This is intended to be overridden by new projection types.\n \"\"\"\n del self._pan_start\n\n def _get_pan_points(self, button, key, x, y):\n \"\"\"\n Helper function to return the new points after a pan.\n\n This helper function returns the points on the axis after a pan has\n occurred. This is a convenience method to abstract the pan logic\n out of the base setter.\n \"\"\"\n def format_deltas(key, dx, dy):\n if key == 'control':\n if abs(dx) > abs(dy):\n dy = dx\n else:\n dx = dy\n elif key == 'x':\n dy = 0\n elif key == 'y':\n dx = 0\n elif key == 'shift':\n if 2 * abs(dx) < abs(dy):\n dx = 0\n elif 2 * abs(dy) < abs(dx):\n dy = 0\n elif abs(dx) > abs(dy):\n dy = dy / abs(dy) * abs(dx)\n else:\n dx = dx / abs(dx) * abs(dy)\n return dx, dy\n\n p = self._pan_start\n dx = x - p.x\n dy = y - p.y\n if dx == dy == 0:\n return\n if button == 1:\n dx, dy = format_deltas(key, dx, dy)\n result = p.bbox.translated(-dx, -dy).transformed(p.trans_inverse)\n elif button == 3:\n try:\n dx = -dx / self.bbox.width\n dy = -dy / self.bbox.height\n dx, dy = format_deltas(key, dx, dy)\n if self.get_aspect() != 'auto':\n dx = dy = 0.5 * (dx + dy)\n alpha = np.power(10.0, (dx, dy))\n start = np.array([p.x, p.y])\n oldpoints = p.lim.transformed(p.trans)\n newpoints = start + alpha * (oldpoints - start)\n result = (mtransforms.Bbox(newpoints)\n .transformed(p.trans_inverse))\n except OverflowError:\n _api.warn_external('Overflow while panning')\n return\n else:\n return\n\n valid = np.isfinite(result.transformed(p.trans))\n points = result.get_points().astype(object)\n # Just ignore invalid limits (typically, underflow in log-scale).\n points[~valid] = None\n return points\n\n def drag_pan(self, button, key, x, y):\n \"\"\"\n Called when the mouse moves during a pan operation.\n\n Parameters\n ----------\n button : `.MouseButton`\n The pressed mouse button.\n key : str or None\n The pressed key, if any.\n x, y : float\n The mouse coordinates in display coords.\n\n Notes\n -----\n This is intended to be overridden by new projection types.\n \"\"\"\n points = self._get_pan_points(button, key, x, y)\n if points is not None:\n self.set_xlim(points[:, 0])\n self.set_ylim(points[:, 1])\n\n def get_children(self):\n # docstring inherited.\n return [\n *self._children,\n *self.spines.values(),\n *self._axis_map.values(),\n self.title, self._left_title, self._right_title,\n *self.child_axes,\n *([self.legend_] if self.legend_ is not None else []),\n self.patch,\n ]\n\n def contains(self, mouseevent):\n # docstring inherited.\n return self.patch.contains(mouseevent)\n\n def contains_point(self, point):\n \"\"\"\n Return whether *point* (pair of pixel coordinates) is inside the Axes\n patch.\n \"\"\"\n return self.patch.contains_point(point, radius=1.0)\n\n def get_default_bbox_extra_artists(self):\n \"\"\"\n Return a default list of artists that are used for the bounding box\n calculation.\n\n Artists are excluded either by not being visible or\n ``artist.set_in_layout(False)``.\n \"\"\"\n\n artists = self.get_children()\n\n for axis in self._axis_map.values():\n # axis tight bboxes are calculated separately inside\n # Axes.get_tightbbox() using for_layout_only=True\n artists.remove(axis)\n if not (self.axison and self._frameon):\n # don't do bbox on spines if frame not on.\n for spine in self.spines.values():\n artists.remove(spine)\n\n artists.remove(self.title)\n 
artists.remove(self._left_title)\n artists.remove(self._right_title)\n\n # always include types that do not internally implement clipping\n # to Axes. may have clip_on set to True and clip_box equivalent\n # to ax.bbox but then ignore these properties during draws.\n noclip = (_AxesBase, maxis.Axis,\n offsetbox.AnnotationBbox, offsetbox.OffsetBox)\n return [a for a in artists if a.get_visible() and a.get_in_layout()\n and (isinstance(a, noclip) or not a._fully_clipped_to_axes())]\n\n @_api.make_keyword_only(\"3.8\", \"call_axes_locator\")\n def get_tightbbox(self, renderer=None, call_axes_locator=True,\n bbox_extra_artists=None, *, for_layout_only=False):\n \"\"\"\n Return the tight bounding box of the Axes, including axis and their\n decorators (xlabel, title, etc).\n\n Artists that have ``artist.set_in_layout(False)`` are not included\n in the bbox.\n\n Parameters\n ----------\n renderer : `.RendererBase` subclass\n renderer that will be used to draw the figures (i.e.\n ``fig.canvas.get_renderer()``)\n\n bbox_extra_artists : list of `.Artist` or ``None``\n List of artists to include in the tight bounding box. If\n ``None`` (default), then all artist children of the Axes are\n included in the tight bounding box.\n\n call_axes_locator : bool, default: True\n If *call_axes_locator* is ``False``, it does not call the\n ``_axes_locator`` attribute, which is necessary to get the correct\n bounding box. ``call_axes_locator=False`` can be used if the\n caller is only interested in the relative size of the tightbbox\n compared to the Axes bbox.\n\n for_layout_only : default: False\n The bounding box will *not* include the x-extent of the title and\n the xlabel, or the y-extent of the ylabel.\n\n Returns\n -------\n `.BboxBase`\n Bounding box in figure pixel coordinates.\n\n See Also\n --------\n matplotlib.axes.Axes.get_window_extent\n matplotlib.axis.Axis.get_tightbbox\n matplotlib.spines.Spine.get_window_extent\n \"\"\"\n\n bb = []\n if renderer is None:\n renderer = self.figure._get_renderer()\n\n if not self.get_visible():\n return None\n\n locator = self.get_axes_locator()\n self.apply_aspect(\n locator(self, renderer) if locator and call_axes_locator else None)\n\n for axis in self._axis_map.values():\n if self.axison and axis.get_visible():\n ba = martist._get_tightbbox_for_layout_only(axis, renderer)\n if ba:\n bb.append(ba)\n self._update_title_position(renderer)\n axbbox = self.get_window_extent(renderer)\n bb.append(axbbox)\n\n for title in [self.title, self._left_title, self._right_title]:\n if title.get_visible():\n bt = title.get_window_extent(renderer)\n if for_layout_only and bt.width > 0:\n # make the title bbox 1 pixel wide so its width\n # is not accounted for in bbox calculations in\n # tight/constrained_layout\n bt.x0 = (bt.x0 + bt.x1) / 2 - 0.5\n bt.x1 = bt.x0 + 1.0\n bb.append(bt)\n\n bbox_artists = bbox_extra_artists\n if bbox_artists is None:\n bbox_artists = self.get_default_bbox_extra_artists()\n\n for a in bbox_artists:\n bbox = a.get_tightbbox(renderer)\n if (bbox is not None\n and 0 < bbox.width < np.inf\n and 0 < bbox.height < np.inf):\n bb.append(bbox)\n return mtransforms.Bbox.union(\n [b for b in bb if b.width != 0 or b.height != 0])\n\n def _make_twin_axes(self, *args, **kwargs):\n \"\"\"Make a twinx Axes of self. 
This is used for twinx and twiny.\"\"\"\n if 'sharex' in kwargs and 'sharey' in kwargs:\n # The following line is added in v2.2 to avoid breaking Seaborn,\n # which currently uses this internal API.\n if kwargs[\"sharex\"] is not self and kwargs[\"sharey\"] is not self:\n raise ValueError(\"Twinned Axes may share only one axis\")\n ss = self.get_subplotspec()\n if ss:\n twin = self.figure.add_subplot(ss, *args, **kwargs)\n else:\n twin = self.figure.add_axes(\n self.get_position(True), *args, **kwargs,\n axes_locator=_TransformedBoundsLocator(\n [0, 0, 1, 1], self.transAxes))\n self.set_adjustable('datalim')\n twin.set_adjustable('datalim')\n self._twinned_axes.join(self, twin)\n return twin\n\n def twinx(self):\n \"\"\"\n Create a twin Axes sharing the xaxis.\n\n Create a new Axes with an invisible x-axis and an independent\n y-axis positioned opposite to the original one (i.e. at right). The\n x-axis autoscale setting will be inherited from the original\n Axes. To ensure that the tick marks of both y-axes align, see\n `~matplotlib.ticker.LinearLocator`.\n\n Returns\n -------\n Axes\n The newly created Axes instance\n\n Notes\n -----\n For those who are 'picking' artists while using twinx, pick\n events are only called for the artists in the top-most Axes.\n \"\"\"\n ax2 = self._make_twin_axes(sharex=self)\n ax2.yaxis.tick_right()\n ax2.yaxis.set_label_position('right')\n ax2.yaxis.set_offset_position('right')\n ax2.set_autoscalex_on(self.get_autoscalex_on())\n self.yaxis.tick_left()\n ax2.xaxis.set_visible(False)\n ax2.patch.set_visible(False)\n ax2.xaxis.units = self.xaxis.units\n return ax2\n\n def twiny(self):\n \"\"\"\n Create a twin Axes sharing the yaxis.\n\n Create a new Axes with an invisible y-axis and an independent\n x-axis positioned opposite to the original one (i.e. at top). The\n y-axis autoscale setting will be inherited from the original Axes.\n To ensure that the tick marks of both x-axes align, see\n `~matplotlib.ticker.LinearLocator`.\n\n Returns\n -------\n Axes\n The newly created Axes instance\n\n Notes\n -----\n For those who are 'picking' artists while using twiny, pick\n events are only called for the artists in the top-most Axes.\n \"\"\"\n ax2 = self._make_twin_axes(sharey=self)\n ax2.xaxis.tick_top()\n ax2.xaxis.set_label_position('top')\n ax2.set_autoscaley_on(self.get_autoscaley_on())\n self.xaxis.tick_bottom()\n ax2.yaxis.set_visible(False)\n ax2.patch.set_visible(False)\n ax2.yaxis.units = self.yaxis.units\n return ax2\n\n def get_shared_x_axes(self):\n \"\"\"Return an immutable view on the shared x-axes Grouper.\"\"\"\n return cbook.GrouperView(self._shared_axes[\"x\"])\n\n def get_shared_y_axes(self):\n \"\"\"Return an immutable view on the shared y-axes Grouper.\"\"\"\n return cbook.GrouperView(self._shared_axes[\"y\"])\n\n def label_outer(self, remove_inner_ticks=False):\n \"\"\"\n Only show \"outer\" labels and tick labels.\n\n x-labels are only kept for subplots on the last row (or first row, if\n labels are on the top side); y-labels only for subplots on the first\n column (or last column, if labels are on the right side).\n\n Parameters\n ----------\n remove_inner_ticks : bool, default: False\n If True, remove the inner ticks as well (not only tick labels).\n\n .. 
versionadded:: 3.8\n \"\"\"\n self._label_outer_xaxis(skip_non_rectangular_axes=False,\n remove_inner_ticks=remove_inner_ticks)\n self._label_outer_yaxis(skip_non_rectangular_axes=False,\n remove_inner_ticks=remove_inner_ticks)\n\n def _label_outer_xaxis(self, *, skip_non_rectangular_axes,\n remove_inner_ticks=False):\n # see documentation in label_outer.\n if skip_non_rectangular_axes and not isinstance(self.patch,\n mpl.patches.Rectangle):\n return\n ss = self.get_subplotspec()\n if not ss:\n return\n label_position = self.xaxis.get_label_position()\n if not ss.is_first_row(): # Remove top label/ticklabels/offsettext.\n if label_position == \"top\":\n self.set_xlabel(\"\")\n top_kw = {'top': False} if remove_inner_ticks else {}\n self.xaxis.set_tick_params(\n which=\"both\", labeltop=False, **top_kw)\n if self.xaxis.offsetText.get_position()[1] == 1:\n self.xaxis.offsetText.set_visible(False)\n if not ss.is_last_row(): # Remove bottom label/ticklabels/offsettext.\n if label_position == \"bottom\":\n self.set_xlabel(\"\")\n bottom_kw = {'bottom': False} if remove_inner_ticks else {}\n self.xaxis.set_tick_params(\n which=\"both\", labelbottom=False, **bottom_kw)\n if self.xaxis.offsetText.get_position()[1] == 0:\n self.xaxis.offsetText.set_visible(False)\n\n def _label_outer_yaxis(self, *, skip_non_rectangular_axes,\n remove_inner_ticks=False):\n # see documentation in label_outer.\n if skip_non_rectangular_axes and not isinstance(self.patch,\n mpl.patches.Rectangle):\n return\n ss = self.get_subplotspec()\n if not ss:\n return\n label_position = self.yaxis.get_label_position()\n if not ss.is_first_col(): # Remove left label/ticklabels/offsettext.\n if label_position == \"left\":\n self.set_ylabel(\"\")\n left_kw = {'left': False} if remove_inner_ticks else {}\n self.yaxis.set_tick_params(\n which=\"both\", labelleft=False, **left_kw)\n if self.yaxis.offsetText.get_position()[0] == 0:\n self.yaxis.offsetText.set_visible(False)\n if not ss.is_last_col(): # Remove right label/ticklabels/offsettext.\n if label_position == \"right\":\n self.set_ylabel(\"\")\n right_kw = {'right': False} if remove_inner_ticks else {}\n self.yaxis.set_tick_params(\n which=\"both\", labelright=False, **right_kw)\n if self.yaxis.offsetText.get_position()[0] == 1:\n self.yaxis.offsetText.set_visible(False)\n\n\ndef _draw_rasterized(figure, artists, renderer):\n \"\"\"\n A helper function for rasterizing the list of artists.\n\n The bookkeeping to track if we are or are not in rasterizing mode\n with the mixed-mode backends is relatively complicated and is now\n handled in the matplotlib.artist.allow_rasterization decorator.\n\n This helper defines the absolute minimum methods and attributes on a\n shim class to be compatible with that decorator and then uses it to\n rasterize the list of artists.\n\n This is maybe too-clever, but allows us to re-use the same code that is\n used on normal artists to participate in the \"are we rasterizing\"\n accounting.\n\n Please do not use this outside of the \"rasterize below a given zorder\"\n functionality of Axes.\n\n Parameters\n ----------\n figure : matplotlib.figure.Figure\n The figure all of the artists belong to (not checked). We need this\n because we can at the figure level suppress composition and insert each\n rasterized artist as its own image.\n\n artists : List[matplotlib.artist.Artist]\n The list of Artists to be rasterized. 
These are assumed to all\n be in the same Figure.\n\n renderer : matplotlib.backendbases.RendererBase\n The currently active renderer\n\n Returns\n -------\n None\n\n \"\"\"\n class _MinimalArtist:\n def get_rasterized(self):\n return True\n\n def get_agg_filter(self):\n return None\n\n def __init__(self, figure, artists):\n self.figure = figure\n self.artists = artists\n\n @martist.allow_rasterization\n def draw(self, renderer):\n for a in self.artists:\n a.draw(renderer)\n\n return _MinimalArtist(figure, artists).draw(renderer)\n", "lib/matplotlib/axes/_base.pyi": "import matplotlib.artist as martist\n\nimport datetime\nfrom collections.abc import Callable, Iterable, Iterator, Sequence\nfrom matplotlib import cbook\nfrom matplotlib.artist import Artist\nfrom matplotlib.axis import XAxis, YAxis, Tick\nfrom matplotlib.backend_bases import RendererBase, MouseButton, MouseEvent\nfrom matplotlib.container import Container\nfrom matplotlib.collections import Collection\nfrom matplotlib.cm import ScalarMappable\nfrom matplotlib.legend import Legend\nfrom matplotlib.lines import Line2D\nfrom matplotlib.gridspec import SubplotSpec, GridSpec\nfrom matplotlib.figure import Figure\nfrom matplotlib.image import AxesImage\nfrom matplotlib.patches import Patch\nfrom matplotlib.scale import ScaleBase\nfrom matplotlib.spines import Spines\nfrom matplotlib.table import Table\nfrom matplotlib.text import Text\nfrom matplotlib.transforms import Transform, Bbox\n\nfrom cycler import Cycler\n\nimport numpy as np\nfrom numpy.typing import ArrayLike\nfrom typing import Any, Literal, overload\nfrom matplotlib.typing import ColorType\n\nclass _axis_method_wrapper:\n attr_name: str\n method_name: str\n __doc__: str\n def __init__(\n self, attr_name: str, method_name: str, *, doc_sub: dict[str, str] | None = ...\n ) -> None: ...\n def __set_name__(self, owner: Any, name: str) -> None: ...\n\nclass _AxesBase(martist.Artist):\n name: str\n patch: Patch\n spines: Spines\n fmt_xdata: Callable[[float], str] | None\n fmt_ydata: Callable[[float], str] | None\n xaxis: XAxis\n yaxis: YAxis\n bbox: Bbox\n dataLim: Bbox\n transAxes: Transform\n transScale: Transform\n transLimits: Transform\n transData: Transform\n ignore_existing_data_limits: bool\n axison: bool\n _projection_init: Any\n\n def __init__(\n self,\n fig: Figure,\n *args: tuple[float, float, float, float] | Bbox | int,\n facecolor: ColorType | None = ...,\n frameon: bool = ...,\n sharex: _AxesBase | None = ...,\n sharey: _AxesBase | None = ...,\n label: Any = ...,\n xscale: str | ScaleBase | None = ...,\n yscale: str | ScaleBase | None = ...,\n box_aspect: float | None = ...,\n **kwargs\n ) -> None: ...\n def get_subplotspec(self) -> SubplotSpec | None: ...\n def set_subplotspec(self, subplotspec: SubplotSpec) -> None: ...\n def get_gridspec(self) -> GridSpec | None: ...\n def set_figure(self, fig: Figure) -> None: ...\n @property\n def viewLim(self) -> Bbox: ...\n def get_xaxis_transform(\n self, which: Literal[\"grid\", \"tick1\", \"tick2\"] = ...\n ) -> Transform: ...\n def get_xaxis_text1_transform(\n self, pad_points: float\n ) -> tuple[\n Transform,\n Literal[\"center\", \"top\", \"bottom\", \"baseline\", \"center_baseline\"],\n Literal[\"center\", \"left\", \"right\"],\n ]: ...\n def get_xaxis_text2_transform(\n self, pad_points\n ) -> tuple[\n Transform,\n Literal[\"center\", \"top\", \"bottom\", \"baseline\", \"center_baseline\"],\n Literal[\"center\", \"left\", \"right\"],\n ]: ...\n def get_yaxis_transform(\n self, which: Literal[\"grid\", \"tick1\", 
\"tick2\"] = ...\n ) -> Transform: ...\n def get_yaxis_text1_transform(\n self, pad_points\n ) -> tuple[\n Transform,\n Literal[\"center\", \"top\", \"bottom\", \"baseline\", \"center_baseline\"],\n Literal[\"center\", \"left\", \"right\"],\n ]: ...\n def get_yaxis_text2_transform(\n self, pad_points\n ) -> tuple[\n Transform,\n Literal[\"center\", \"top\", \"bottom\", \"baseline\", \"center_baseline\"],\n Literal[\"center\", \"left\", \"right\"],\n ]: ...\n def get_position(self, original: bool = ...) -> Bbox: ...\n def set_position(\n self,\n pos: Bbox | tuple[float, float, float, float],\n which: Literal[\"both\", \"active\", \"original\"] = ...,\n ) -> None: ...\n def reset_position(self) -> None: ...\n def set_axes_locator(\n self, locator: Callable[[_AxesBase, RendererBase], Bbox]\n ) -> None: ...\n def get_axes_locator(self) -> Callable[[_AxesBase, RendererBase], Bbox]: ...\n def sharex(self, other: _AxesBase) -> None: ...\n def sharey(self, other: _AxesBase) -> None: ...\n def clear(self) -> None: ...\n def cla(self) -> None: ...\n\n # Could be made generic, but comments indicate it may be temporary anyway\n class ArtistList(Sequence[Artist]):\n def __init__(\n self,\n axes: _AxesBase,\n prop_name: str,\n valid_types: type | Iterable[type] | None = ...,\n invalid_types: type | Iterable[type] | None = ...,\n ) -> None: ...\n def __len__(self) -> int: ...\n def __iter__(self) -> Iterator[Artist]: ...\n @overload\n def __getitem__(self, key: int) -> Artist: ...\n @overload\n def __getitem__(self, key: slice) -> list[Artist]: ...\n\n @overload\n def __add__(self, other: _AxesBase.ArtistList) -> list[Artist]: ...\n @overload\n def __add__(self, other: list[Any]) -> list[Any]: ...\n @overload\n def __add__(self, other: tuple[Any]) -> tuple[Any]: ...\n\n @overload\n def __radd__(self, other: _AxesBase.ArtistList) -> list[Artist]: ...\n @overload\n def __radd__(self, other: list[Any]) -> list[Any]: ...\n @overload\n def __radd__(self, other: tuple[Any]) -> tuple[Any]: ...\n\n @property\n def artists(self) -> _AxesBase.ArtistList: ...\n @property\n def collections(self) -> _AxesBase.ArtistList: ...\n @property\n def images(self) -> _AxesBase.ArtistList: ...\n @property\n def lines(self) -> _AxesBase.ArtistList: ...\n @property\n def patches(self) -> _AxesBase.ArtistList: ...\n @property\n def tables(self) -> _AxesBase.ArtistList: ...\n @property\n def texts(self) -> _AxesBase.ArtistList: ...\n def get_facecolor(self) -> ColorType: ...\n def set_facecolor(self, color: ColorType | None) -> None: ...\n @overload\n def set_prop_cycle(self, cycler: Cycler) -> None: ...\n @overload\n def set_prop_cycle(self, label: str, values: Iterable[Any]) -> None: ...\n @overload\n def set_prop_cycle(self, **kwargs: Iterable[Any]) -> None: ...\n def get_aspect(self) -> float | Literal[\"auto\"]: ...\n def set_aspect(\n self,\n aspect: float | Literal[\"auto\", \"equal\"],\n adjustable: Literal[\"box\", \"datalim\"] | None = ...,\n anchor: str | tuple[float, float] | None = ...,\n share: bool = ...,\n ) -> None: ...\n def get_adjustable(self) -> Literal[\"box\", \"datalim\"]: ...\n def set_adjustable(\n self, adjustable: Literal[\"box\", \"datalim\"], share: bool = ...\n ) -> None: ...\n def get_box_aspect(self) -> float | None: ...\n def set_box_aspect(self, aspect: float | None = ...) 
-> None: ...\n def get_anchor(self) -> str | tuple[float, float]: ...\n def set_anchor(\n self, anchor: str | tuple[float, float], share: bool = ...\n ) -> None: ...\n def get_data_ratio(self) -> float: ...\n def apply_aspect(self, position: Bbox | None = ...) -> None: ...\n @overload\n def axis(\n self,\n arg: tuple[float, float, float, float] | bool | str | None = ...,\n /,\n *,\n emit: bool = ...\n ) -> tuple[float, float, float, float]: ...\n @overload\n def axis(\n self,\n *,\n emit: bool = ...,\n xmin: float | None = ...,\n xmax: float | None = ...,\n ymin: float | None = ...,\n ymax: float | None = ...\n ) -> tuple[float, float, float, float]: ...\n def get_legend(self) -> Legend: ...\n def get_images(self) -> list[AxesImage]: ...\n def get_lines(self) -> list[Line2D]: ...\n def get_xaxis(self) -> XAxis: ...\n def get_yaxis(self) -> YAxis: ...\n def has_data(self) -> bool: ...\n def add_artist(self, a: Artist) -> Artist: ...\n def add_child_axes(self, ax: _AxesBase) -> _AxesBase: ...\n def add_collection(\n self, collection: Collection, autolim: bool = ...\n ) -> Collection: ...\n def add_image(self, image: AxesImage) -> AxesImage: ...\n def add_line(self, line: Line2D) -> Line2D: ...\n def add_patch(self, p: Patch) -> Patch: ...\n def add_table(self, tab: Table) -> Table: ...\n def add_container(self, container: Container) -> Container: ...\n def relim(self, visible_only: bool = ...) -> None: ...\n def update_datalim(\n self, xys: ArrayLike, updatex: bool = ..., updatey: bool = ...\n ) -> None: ...\n def in_axes(self, mouseevent: MouseEvent) -> bool: ...\n def get_autoscale_on(self) -> bool: ...\n def set_autoscale_on(self, b: bool) -> None: ...\n @property\n def use_sticky_edges(self) -> bool: ...\n @use_sticky_edges.setter\n def use_sticky_edges(self, b: bool) -> None: ...\n def set_xmargin(self, m: float) -> None: ...\n def set_ymargin(self, m: float) -> None: ...\n\n # Probably could be made better with overloads\n def margins(\n self,\n *margins: float,\n x: float | None = ...,\n y: float | None = ...,\n tight: bool | None = ...\n ) -> tuple[float, float] | None: ...\n def set_rasterization_zorder(self, z: float | None) -> None: ...\n def get_rasterization_zorder(self) -> float | None: ...\n def autoscale(\n self,\n enable: bool = ...,\n axis: Literal[\"both\", \"x\", \"y\"] = ...,\n tight: bool | None = ...,\n ) -> None: ...\n def autoscale_view(\n self, tight: bool | None = ..., scalex: bool = ..., scaley: bool = ...\n ) -> None: ...\n def draw_artist(self, a: Artist) -> None: ...\n def redraw_in_frame(self) -> None: ...\n def get_frame_on(self) -> bool: ...\n def set_frame_on(self, b: bool) -> None: ...\n def get_axisbelow(self) -> bool | Literal[\"line\"]: ...\n def set_axisbelow(self, b: bool | Literal[\"line\"]) -> None: ...\n def grid(\n self,\n visible: bool | None = ...,\n which: Literal[\"major\", \"minor\", \"both\"] = ...,\n axis: Literal[\"both\", \"x\", \"y\"] = ...,\n **kwargs\n ) -> None: ...\n def ticklabel_format(\n self,\n *,\n axis: Literal[\"both\", \"x\", \"y\"] = ...,\n style: Literal[\"\", \"sci\", \"scientific\", \"plain\"] = ...,\n scilimits: tuple[int, int] | None = ...,\n useOffset: bool | float | None = ...,\n useLocale: bool | None = ...,\n useMathText: bool | None = ...\n ) -> None: ...\n def locator_params(\n self, axis: Literal[\"both\", \"x\", \"y\"] = ..., tight: bool | None = ..., **kwargs\n ) -> None: ...\n def tick_params(self, axis: Literal[\"both\", \"x\", \"y\"] = ..., **kwargs) -> None: ...\n def set_axis_off(self) -> None: ...\n def 
set_axis_on(self) -> None: ...\n def get_xlabel(self) -> str: ...\n def set_xlabel(\n self,\n xlabel: str,\n fontdict: dict[str, Any] | None = ...,\n labelpad: float | None = ...,\n *,\n loc: Literal[\"left\", \"center\", \"right\"] | None = ...,\n **kwargs\n ) -> Text: ...\n def invert_xaxis(self) -> None: ...\n def get_xbound(self) -> tuple[float, float]: ...\n def set_xbound(\n self, lower: float | None = ..., upper: float | None = ...\n ) -> None: ...\n def get_xlim(self) -> tuple[float, float]: ...\n def set_xlim(\n self,\n left: float | tuple[float, float] | None = ...,\n right: float | None = ...,\n *,\n emit: bool = ...,\n auto: bool | None = ...,\n xmin: float | None = ...,\n xmax: float | None = ...\n ) -> tuple[float, float]: ...\n def get_ylabel(self) -> str: ...\n def set_ylabel(\n self,\n ylabel: str,\n fontdict: dict[str, Any] | None = ...,\n labelpad: float | None = ...,\n *,\n loc: Literal[\"bottom\", \"center\", \"top\"] | None = ...,\n **kwargs\n ) -> Text: ...\n def invert_yaxis(self) -> None: ...\n def get_ybound(self) -> tuple[float, float]: ...\n def set_ybound(\n self, lower: float | None = ..., upper: float | None = ...\n ) -> None: ...\n def get_ylim(self) -> tuple[float, float]: ...\n def set_ylim(\n self,\n bottom: float | tuple[float, float] | None = ...,\n top: float | None = ...,\n *,\n emit: bool = ...,\n auto: bool | None = ...,\n ymin: float | None = ...,\n ymax: float | None = ...\n ) -> tuple[float, float]: ...\n def format_xdata(self, x: float) -> str: ...\n def format_ydata(self, y: float) -> str: ...\n def format_coord(self, x: float, y: float) -> str: ...\n def minorticks_on(self) -> None: ...\n def minorticks_off(self) -> None: ...\n def can_zoom(self) -> bool: ...\n def can_pan(self) -> bool: ...\n def get_navigate(self) -> bool: ...\n def set_navigate(self, b: bool) -> None: ...\n def get_navigate_mode(self) -> Literal[\"PAN\", \"ZOOM\"] | None: ...\n def set_navigate_mode(self, b: Literal[\"PAN\", \"ZOOM\"] | None) -> None: ...\n def start_pan(self, x: float, y: float, button: MouseButton) -> None: ...\n def end_pan(self) -> None: ...\n def drag_pan(\n self, button: MouseButton, key: str | None, x: float, y: float\n ) -> None: ...\n def get_children(self) -> list[Artist]: ...\n def contains_point(self, point: tuple[int, int]) -> bool: ...\n def get_default_bbox_extra_artists(self) -> list[Artist]: ...\n def get_tightbbox(\n self,\n renderer: RendererBase | None = ...,\n *,\n call_axes_locator: bool = ...,\n bbox_extra_artists: Sequence[Artist] | None = ...,\n for_layout_only: bool = ...\n ) -> Bbox | None: ...\n def twinx(self) -> _AxesBase: ...\n def twiny(self) -> _AxesBase: ...\n def get_shared_x_axes(self) -> cbook.GrouperView: ...\n def get_shared_y_axes(self) -> cbook.GrouperView: ...\n def label_outer(self, remove_inner_ticks: bool = ...) -> None: ...\n\n # The methods underneath this line are added via the `_axis_method_wrapper` class\n # Initially they are set to an object, but that object uses `__set_name__` to override\n # itself with a method modified from the Axis methods for the x or y Axis.\n # As such, they are typed according to the resultant method rather than as that object.\n\n def get_xgridlines(self) -> list[Line2D]: ...\n def get_xticklines(self, minor: bool = ...) -> list[Line2D]: ...\n def get_ygridlines(self) -> list[Line2D]: ...\n def get_yticklines(self, minor: bool = ...) 
-> list[Line2D]: ...\n def _sci(self, im: ScalarMappable) -> None: ...\n def get_autoscalex_on(self) -> bool: ...\n def get_autoscaley_on(self) -> bool: ...\n def set_autoscalex_on(self, b: bool) -> None: ...\n def set_autoscaley_on(self, b: bool) -> None: ...\n def xaxis_inverted(self) -> bool: ...\n def get_xscale(self) -> str: ...\n def set_xscale(self, value: str | ScaleBase, **kwargs) -> None: ...\n def get_xticks(self, *, minor: bool = ...) -> np.ndarray: ...\n def set_xticks(\n self,\n ticks: ArrayLike,\n labels: Iterable[str] | None = ...,\n *,\n minor: bool = ...,\n **kwargs\n ) -> list[Tick]: ...\n def get_xmajorticklabels(self) -> list[Text]: ...\n def get_xminorticklabels(self) -> list[Text]: ...\n def get_xticklabels(\n self, minor: bool = ..., which: Literal[\"major\", \"minor\", \"both\"] | None = ...\n ) -> list[Text]: ...\n def set_xticklabels(\n self,\n labels: Iterable[str | Text],\n *,\n minor: bool = ...,\n fontdict: dict[str, Any] | None = ...,\n **kwargs\n ) -> list[Text]: ...\n def yaxis_inverted(self) -> bool: ...\n def get_yscale(self) -> str: ...\n def set_yscale(self, value: str | ScaleBase, **kwargs) -> None: ...\n def get_yticks(self, *, minor: bool = ...) -> np.ndarray: ...\n def set_yticks(\n self,\n ticks: ArrayLike,\n labels: Iterable[str] | None = ...,\n *,\n minor: bool = ...,\n **kwargs\n ) -> list[Tick]: ...\n def get_ymajorticklabels(self) -> list[Text]: ...\n def get_yminorticklabels(self) -> list[Text]: ...\n def get_yticklabels(\n self, minor: bool = ..., which: Literal[\"major\", \"minor\", \"both\"] | None = ...\n ) -> list[Text]: ...\n def set_yticklabels(\n self,\n labels: Iterable[str | Text],\n *,\n minor: bool = ...,\n fontdict: dict[str, Any] | None = ...,\n **kwargs\n ) -> list[Text]: ...\n def xaxis_date(self, tz: str | datetime.tzinfo | None = ...) -> None: ...\n def yaxis_date(self, tz: str | datetime.tzinfo | None = ...) -> None: ...\n", "lib/mpl_toolkits/mplot3d/axes3d.py": "\"\"\"\naxes3d.py, original mplot3d version by John Porter\nCreated: 23 Sep 2005\n\nParts fixed by Reinier Heeres <[email protected]>\nMinor additions by Ben Axelrod <[email protected]>\nSignificant updates and revisions by Ben Root <[email protected]>\n\nModule containing Axes3D, an object which can plot 3D objects on a\n2D matplotlib figure.\n\"\"\"\n\nfrom collections import defaultdict\nimport functools\nimport itertools\nimport math\nimport textwrap\n\nimport numpy as np\n\nimport matplotlib as mpl\nfrom matplotlib import _api, cbook, _docstring, _preprocess_data\nimport matplotlib.artist as martist\nimport matplotlib.axes as maxes\nimport matplotlib.collections as mcoll\nimport matplotlib.colors as mcolors\nimport matplotlib.image as mimage\nimport matplotlib.lines as mlines\nimport matplotlib.patches as mpatches\nimport matplotlib.container as mcontainer\nimport matplotlib.transforms as mtransforms\nfrom matplotlib.axes import Axes\nfrom matplotlib.axes._base import _axis_method_wrapper, _process_plot_format\nfrom matplotlib.transforms import Bbox\nfrom matplotlib.tri._triangulation import Triangulation\n\nfrom . import art3d\nfrom . import proj3d\nfrom . import axis3d\n\n\n@_docstring.interpd\n@_api.define_aliases({\n \"xlim\": [\"xlim3d\"], \"ylim\": [\"ylim3d\"], \"zlim\": [\"zlim3d\"]})\nclass Axes3D(Axes):\n \"\"\"\n 3D Axes object.\n\n .. note::\n\n As a user, you do not instantiate Axes directly, but use Axes creation\n methods instead; e.g. 
from `.pyplot` or `.Figure`:\n `~.pyplot.subplots`, `~.pyplot.subplot_mosaic` or `.Figure.add_axes`.\n \"\"\"\n name = '3d'\n\n _axis_names = (\"x\", \"y\", \"z\")\n Axes._shared_axes[\"z\"] = cbook.Grouper()\n Axes._shared_axes[\"view\"] = cbook.Grouper()\n\n vvec = _api.deprecate_privatize_attribute(\"3.7\")\n eye = _api.deprecate_privatize_attribute(\"3.7\")\n sx = _api.deprecate_privatize_attribute(\"3.7\")\n sy = _api.deprecate_privatize_attribute(\"3.7\")\n\n def __init__(\n self, fig, rect=None, *args,\n elev=30, azim=-60, roll=0, sharez=None, proj_type='persp',\n box_aspect=None, computed_zorder=True, focal_length=None,\n shareview=None,\n **kwargs):\n \"\"\"\n Parameters\n ----------\n fig : Figure\n The parent figure.\n rect : tuple (left, bottom, width, height), default: None.\n The ``(left, bottom, width, height)`` axes position.\n elev : float, default: 30\n The elevation angle in degrees rotates the camera above and below\n the x-y plane, with a positive angle corresponding to a location\n above the plane.\n azim : float, default: -60\n The azimuthal angle in degrees rotates the camera about the z axis,\n with a positive angle corresponding to a right-handed rotation. In\n other words, a positive azimuth rotates the camera about the origin\n from its location along the +x axis towards the +y axis.\n roll : float, default: 0\n The roll angle in degrees rotates the camera about the viewing\n axis. A positive angle spins the camera clockwise, causing the\n scene to rotate counter-clockwise.\n sharez : Axes3D, optional\n Other Axes to share z-limits with.\n proj_type : {'persp', 'ortho'}\n The projection type, default 'persp'.\n box_aspect : 3-tuple of floats, default: None\n Changes the physical dimensions of the Axes3D, such that the ratio\n of the axis lengths in display units is x:y:z.\n If None, defaults to 4:4:3\n computed_zorder : bool, default: True\n If True, the draw order is computed based on the average position\n of the `.Artist`\\\\s along the view direction.\n Set to False if you want to manually control the order in which\n Artists are drawn on top of each other using their *zorder*\n attribute. This can be used for fine-tuning if the automatic order\n does not produce the desired result. Note however, that a manual\n zorder will only be correct for a limited view angle. If the figure\n is rotated by the user, it will look wrong from certain angles.\n focal_length : float, default: None\n For a projection type of 'persp', the focal length of the virtual\n camera. Must be > 0. If None, defaults to 1.\n For a projection type of 'ortho', must be set to either None\n or infinity (numpy.inf). 
If None, defaults to infinity.\n The focal length can be computed from a desired Field Of View via\n the equation: focal_length = 1/tan(FOV/2)\n shareview : Axes3D, optional\n Other Axes to share view angles with.\n\n **kwargs\n Other optional keyword arguments:\n\n %(Axes3D:kwdoc)s\n \"\"\"\n\n if rect is None:\n rect = [0.0, 0.0, 1.0, 1.0]\n\n self.initial_azim = azim\n self.initial_elev = elev\n self.initial_roll = roll\n self.set_proj_type(proj_type, focal_length)\n self.computed_zorder = computed_zorder\n\n self.xy_viewLim = Bbox.unit()\n self.zz_viewLim = Bbox.unit()\n self.xy_dataLim = Bbox.unit()\n # z-limits are encoded in the x-component of the Bbox, y is un-used\n self.zz_dataLim = Bbox.unit()\n\n # inhibit autoscale_view until the axes are defined\n # they can't be defined until Axes.__init__ has been called\n self.view_init(self.initial_elev, self.initial_azim, self.initial_roll)\n\n self._sharez = sharez\n if sharez is not None:\n self._shared_axes[\"z\"].join(self, sharez)\n self._adjustable = 'datalim'\n\n self._shareview = shareview\n if shareview is not None:\n self._shared_axes[\"view\"].join(self, shareview)\n\n if kwargs.pop('auto_add_to_figure', False):\n raise AttributeError(\n 'auto_add_to_figure is no longer supported for Axes3D. '\n 'Use fig.add_axes(ax) instead.'\n )\n\n super().__init__(\n fig, rect, frameon=True, box_aspect=box_aspect, *args, **kwargs\n )\n # Disable drawing of axes by base class\n super().set_axis_off()\n # Enable drawing of axes by Axes3D class\n self.set_axis_on()\n self.M = None\n self.invM = None\n\n # func used to format z -- fall back on major formatters\n self.fmt_zdata = None\n\n self.mouse_init()\n self.figure.canvas.callbacks._connect_picklable(\n 'motion_notify_event', self._on_move)\n self.figure.canvas.callbacks._connect_picklable(\n 'button_press_event', self._button_press)\n self.figure.canvas.callbacks._connect_picklable(\n 'button_release_event', self._button_release)\n self.set_top_view()\n\n self.patch.set_linewidth(0)\n # Calculate the pseudo-data width and height\n pseudo_bbox = self.transLimits.inverted().transform([(0, 0), (1, 1)])\n self._pseudo_w, self._pseudo_h = pseudo_bbox[1] - pseudo_bbox[0]\n\n # mplot3d currently manages its own spines and needs these turned off\n # for bounding box calculations\n self.spines[:].set_visible(False)\n\n def set_axis_off(self):\n self._axis3don = False\n self.stale = True\n\n def set_axis_on(self):\n self._axis3don = True\n self.stale = True\n\n def convert_zunits(self, z):\n \"\"\"\n For artists in an Axes, if the zaxis has units support,\n convert *z* using zaxis unit type\n \"\"\"\n return self.zaxis.convert_units(z)\n\n def set_top_view(self):\n # this happens to be the right view for the viewing coordinates\n # moved up and to the left slightly to fit labels and axes\n xdwl = 0.95 / self._dist\n xdw = 0.9 / self._dist\n ydwl = 0.95 / self._dist\n ydw = 0.9 / self._dist\n # Set the viewing pane.\n self.viewLim.intervalx = (-xdwl, xdw)\n self.viewLim.intervaly = (-ydwl, ydw)\n self.stale = True\n\n def _init_axis(self):\n \"\"\"Init 3D axes; overrides creation of regular X/Y axes.\"\"\"\n self.xaxis = axis3d.XAxis(self)\n self.yaxis = axis3d.YAxis(self)\n self.zaxis = axis3d.ZAxis(self)\n\n def get_zaxis(self):\n \"\"\"Return the ``ZAxis`` (`~.axis3d.Axis`) instance.\"\"\"\n return self.zaxis\n\n get_zgridlines = _axis_method_wrapper(\"zaxis\", \"get_gridlines\")\n get_zticklines = _axis_method_wrapper(\"zaxis\", \"get_ticklines\")\n\n @_api.deprecated(\"3.7\")\n def 
unit_cube(self, vals=None):\n return self._unit_cube(vals)\n\n def _unit_cube(self, vals=None):\n minx, maxx, miny, maxy, minz, maxz = vals or self.get_w_lims()\n return [(minx, miny, minz),\n (maxx, miny, minz),\n (maxx, maxy, minz),\n (minx, maxy, minz),\n (minx, miny, maxz),\n (maxx, miny, maxz),\n (maxx, maxy, maxz),\n (minx, maxy, maxz)]\n\n @_api.deprecated(\"3.7\")\n def tunit_cube(self, vals=None, M=None):\n return self._tunit_cube(vals, M)\n\n def _tunit_cube(self, vals=None, M=None):\n if M is None:\n M = self.M\n xyzs = self._unit_cube(vals)\n tcube = proj3d._proj_points(xyzs, M)\n return tcube\n\n @_api.deprecated(\"3.7\")\n def tunit_edges(self, vals=None, M=None):\n return self._tunit_edges(vals, M)\n\n def _tunit_edges(self, vals=None, M=None):\n tc = self._tunit_cube(vals, M)\n edges = [(tc[0], tc[1]),\n (tc[1], tc[2]),\n (tc[2], tc[3]),\n (tc[3], tc[0]),\n\n (tc[0], tc[4]),\n (tc[1], tc[5]),\n (tc[2], tc[6]),\n (tc[3], tc[7]),\n\n (tc[4], tc[5]),\n (tc[5], tc[6]),\n (tc[6], tc[7]),\n (tc[7], tc[4])]\n return edges\n\n def set_aspect(self, aspect, adjustable=None, anchor=None, share=False):\n \"\"\"\n Set the aspect ratios.\n\n Parameters\n ----------\n aspect : {'auto', 'equal', 'equalxy', 'equalxz', 'equalyz'}\n Possible values:\n\n ========= ==================================================\n value description\n ========= ==================================================\n 'auto' automatic; fill the position rectangle with data.\n 'equal' adapt all the axes to have equal aspect ratios.\n 'equalxy' adapt the x and y axes to have equal aspect ratios.\n 'equalxz' adapt the x and z axes to have equal aspect ratios.\n 'equalyz' adapt the y and z axes to have equal aspect ratios.\n ========= ==================================================\n\n adjustable : None or {'box', 'datalim'}, optional\n If not *None*, this defines which parameter will be adjusted to\n meet the required aspect. See `.set_adjustable` for further\n details.\n\n anchor : None or str or 2-tuple of float, optional\n If not *None*, this defines where the Axes will be drawn if there\n is extra space due to aspect constraints. 
The most common way to\n specify the anchor are abbreviations of cardinal directions:\n\n ===== =====================\n value description\n ===== =====================\n 'C' centered\n 'SW' lower left corner\n 'S' middle of bottom edge\n 'SE' lower right corner\n etc.\n ===== =====================\n\n See `~.Axes.set_anchor` for further details.\n\n share : bool, default: False\n If ``True``, apply the settings to all shared Axes.\n\n See Also\n --------\n mpl_toolkits.mplot3d.axes3d.Axes3D.set_box_aspect\n \"\"\"\n _api.check_in_list(('auto', 'equal', 'equalxy', 'equalyz', 'equalxz'),\n aspect=aspect)\n super().set_aspect(\n aspect='auto', adjustable=adjustable, anchor=anchor, share=share)\n self._aspect = aspect\n\n if aspect in ('equal', 'equalxy', 'equalxz', 'equalyz'):\n ax_indices = self._equal_aspect_axis_indices(aspect)\n\n view_intervals = np.array([self.xaxis.get_view_interval(),\n self.yaxis.get_view_interval(),\n self.zaxis.get_view_interval()])\n ptp = np.ptp(view_intervals, axis=1)\n if self._adjustable == 'datalim':\n mean = np.mean(view_intervals, axis=1)\n scale = max(ptp[ax_indices] / self._box_aspect[ax_indices])\n deltas = scale * self._box_aspect\n\n for i, set_lim in enumerate((self.set_xlim3d,\n self.set_ylim3d,\n self.set_zlim3d)):\n if i in ax_indices:\n set_lim(mean[i] - deltas[i]/2., mean[i] + deltas[i]/2.)\n else: # 'box'\n # Change the box aspect such that the ratio of the length of\n # the unmodified axis to the length of the diagonal\n # perpendicular to it remains unchanged.\n box_aspect = np.array(self._box_aspect)\n box_aspect[ax_indices] = ptp[ax_indices]\n remaining_ax_indices = {0, 1, 2}.difference(ax_indices)\n if remaining_ax_indices:\n remaining = remaining_ax_indices.pop()\n old_diag = np.linalg.norm(self._box_aspect[ax_indices])\n new_diag = np.linalg.norm(box_aspect[ax_indices])\n box_aspect[remaining] *= new_diag / old_diag\n self.set_box_aspect(box_aspect)\n\n def _equal_aspect_axis_indices(self, aspect):\n \"\"\"\n Get the indices for which of the x, y, z axes are constrained to have\n equal aspect ratios.\n\n Parameters\n ----------\n aspect : {'auto', 'equal', 'equalxy', 'equalxz', 'equalyz'}\n See descriptions in docstring for `.set_aspect()`.\n \"\"\"\n ax_indices = [] # aspect == 'auto'\n if aspect == 'equal':\n ax_indices = [0, 1, 2]\n elif aspect == 'equalxy':\n ax_indices = [0, 1]\n elif aspect == 'equalxz':\n ax_indices = [0, 2]\n elif aspect == 'equalyz':\n ax_indices = [1, 2]\n return ax_indices\n\n def set_box_aspect(self, aspect, *, zoom=1):\n \"\"\"\n Set the Axes box aspect.\n\n The box aspect is the ratio of height to width in display\n units for each face of the box when viewed perpendicular to\n that face. This is not to be confused with the data aspect (see\n `~.Axes3D.set_aspect`). The default ratios are 4:4:3 (x:y:z).\n\n To simulate having equal aspect in data space, set the box\n aspect to match your data range in each dimension.\n\n *zoom* controls the overall size of the Axes3D in the figure.\n\n Parameters\n ----------\n aspect : 3-tuple of floats or None\n Changes the physical dimensions of the Axes3D, such that the ratio\n of the axis lengths in display units is x:y:z.\n If None, defaults to (4, 4, 3).\n\n zoom : float, default: 1\n Control overall size of the Axes3D in the figure. 
Must be > 0.\n \"\"\"\n if zoom <= 0:\n raise ValueError(f'Argument zoom = {zoom} must be > 0')\n\n if aspect is None:\n aspect = np.asarray((4, 4, 3), dtype=float)\n else:\n aspect = np.asarray(aspect, dtype=float)\n _api.check_shape((3,), aspect=aspect)\n # default scale tuned to match the mpl32 appearance.\n aspect *= 1.8294640721620434 * zoom / np.linalg.norm(aspect)\n\n self._box_aspect = aspect\n self.stale = True\n\n def apply_aspect(self, position=None):\n if position is None:\n position = self.get_position(original=True)\n\n # in the superclass, we would go through and actually deal with axis\n # scales and box/datalim. Those are all irrelevant - all we need to do\n # is make sure our coordinate system is square.\n trans = self.get_figure().transSubfigure\n bb = mtransforms.Bbox.unit().transformed(trans)\n # this is the physical aspect of the panel (or figure):\n fig_aspect = bb.height / bb.width\n\n box_aspect = 1\n pb = position.frozen()\n pb1 = pb.shrunk_to_aspect(box_aspect, pb, fig_aspect)\n self._set_position(pb1.anchored(self.get_anchor(), pb), 'active')\n\n @martist.allow_rasterization\n def draw(self, renderer):\n if not self.get_visible():\n return\n self._unstale_viewLim()\n\n # draw the background patch\n self.patch.draw(renderer)\n self._frameon = False\n\n # first, set the aspect\n # this is duplicated from `axes._base._AxesBase.draw`\n # but must be called before any of the artist are drawn as\n # it adjusts the view limits and the size of the bounding box\n # of the Axes\n locator = self.get_axes_locator()\n self.apply_aspect(locator(self, renderer) if locator else None)\n\n # add the projection matrix to the renderer\n self.M = self.get_proj()\n self.invM = np.linalg.inv(self.M)\n\n collections_and_patches = (\n artist for artist in self._children\n if isinstance(artist, (mcoll.Collection, mpatches.Patch))\n and artist.get_visible())\n if self.computed_zorder:\n # Calculate projection of collections and patches and zorder\n # them. 
Make sure they are drawn above the grids.\n zorder_offset = max(axis.get_zorder()\n for axis in self._axis_map.values()) + 1\n collection_zorder = patch_zorder = zorder_offset\n\n for artist in sorted(collections_and_patches,\n key=lambda artist: artist.do_3d_projection(),\n reverse=True):\n if isinstance(artist, mcoll.Collection):\n artist.zorder = collection_zorder\n collection_zorder += 1\n elif isinstance(artist, mpatches.Patch):\n artist.zorder = patch_zorder\n patch_zorder += 1\n else:\n for artist in collections_and_patches:\n artist.do_3d_projection()\n\n if self._axis3don:\n # Draw panes first\n for axis in self._axis_map.values():\n axis.draw_pane(renderer)\n # Then gridlines\n for axis in self._axis_map.values():\n axis.draw_grid(renderer)\n # Then axes, labels, text, and ticks\n for axis in self._axis_map.values():\n axis.draw(renderer)\n\n # Then rest\n super().draw(renderer)\n\n def get_axis_position(self):\n vals = self.get_w_lims()\n tc = self._tunit_cube(vals, self.M)\n xhigh = tc[1][2] > tc[2][2]\n yhigh = tc[3][2] > tc[2][2]\n zhigh = tc[0][2] > tc[2][2]\n return xhigh, yhigh, zhigh\n\n def update_datalim(self, xys, **kwargs):\n \"\"\"\n Not implemented in `~mpl_toolkits.mplot3d.axes3d.Axes3D`.\n \"\"\"\n pass\n\n get_autoscalez_on = _axis_method_wrapper(\"zaxis\", \"_get_autoscale_on\")\n set_autoscalez_on = _axis_method_wrapper(\"zaxis\", \"_set_autoscale_on\")\n\n def set_zmargin(self, m):\n \"\"\"\n Set padding of Z data limits prior to autoscaling.\n\n *m* times the data interval will be added to each end of that interval\n before it is used in autoscaling. If *m* is negative, this will clip\n the data range instead of expanding it.\n\n For example, if your data is in the range [0, 2], a margin of 0.1 will\n result in a range [-0.2, 2.2]; a margin of -0.1 will result in a range\n of [0.2, 1.8].\n\n Parameters\n ----------\n m : float greater than -0.5\n \"\"\"\n if m <= -0.5:\n raise ValueError(\"margin must be greater than -0.5\")\n self._zmargin = m\n self._request_autoscale_view(\"z\")\n self.stale = True\n\n def margins(self, *margins, x=None, y=None, z=None, tight=True):\n \"\"\"\n Set or retrieve autoscaling margins.\n\n See `.Axes.margins` for full documentation. Because this function\n applies to 3D Axes, it also takes a *z* argument, and returns\n ``(xmargin, ymargin, zmargin)``.\n \"\"\"\n if margins and (x is not None or y is not None or z is not None):\n raise TypeError('Cannot pass both positional and keyword '\n 'arguments for x, y, and/or z.')\n elif len(margins) == 1:\n x = y = z = margins[0]\n elif len(margins) == 3:\n x, y, z = margins\n elif margins:\n raise TypeError('Must pass a single positional argument for all '\n 'margins, or one for each margin (x, y, z).')\n\n if x is None and y is None and z is None:\n if tight is not True:\n _api.warn_external(f'ignoring tight={tight!r} in get mode')\n return self._xmargin, self._ymargin, self._zmargin\n\n if x is not None:\n self.set_xmargin(x)\n if y is not None:\n self.set_ymargin(y)\n if z is not None:\n self.set_zmargin(z)\n\n self.autoscale_view(\n tight=tight, scalex=(x is not None), scaley=(y is not None),\n scalez=(z is not None)\n )\n\n def autoscale(self, enable=True, axis='both', tight=None):\n \"\"\"\n Convenience method for simple axis view autoscaling.\n\n See `.Axes.autoscale` for full documentation. 
Because this function\n applies to 3D Axes, *axis* can also be set to 'z', and setting *axis*\n to 'both' autoscales all three axes.\n \"\"\"\n if enable is None:\n scalex = True\n scaley = True\n scalez = True\n else:\n if axis in ['x', 'both']:\n self.set_autoscalex_on(bool(enable))\n scalex = self.get_autoscalex_on()\n else:\n scalex = False\n if axis in ['y', 'both']:\n self.set_autoscaley_on(bool(enable))\n scaley = self.get_autoscaley_on()\n else:\n scaley = False\n if axis in ['z', 'both']:\n self.set_autoscalez_on(bool(enable))\n scalez = self.get_autoscalez_on()\n else:\n scalez = False\n if scalex:\n self._request_autoscale_view(\"x\", tight=tight)\n if scaley:\n self._request_autoscale_view(\"y\", tight=tight)\n if scalez:\n self._request_autoscale_view(\"z\", tight=tight)\n\n def auto_scale_xyz(self, X, Y, Z=None, had_data=None):\n # This updates the bounding boxes as to keep a record as to what the\n # minimum sized rectangular volume holds the data.\n if np.shape(X) == np.shape(Y):\n self.xy_dataLim.update_from_data_xy(\n np.column_stack([np.ravel(X), np.ravel(Y)]), not had_data)\n else:\n self.xy_dataLim.update_from_data_x(X, not had_data)\n self.xy_dataLim.update_from_data_y(Y, not had_data)\n if Z is not None:\n self.zz_dataLim.update_from_data_x(Z, not had_data)\n # Let autoscale_view figure out how to use this data.\n self.autoscale_view()\n\n def autoscale_view(self, tight=None, scalex=True, scaley=True,\n scalez=True):\n \"\"\"\n Autoscale the view limits using the data limits.\n\n See `.Axes.autoscale_view` for full documentation. Because this\n function applies to 3D Axes, it also takes a *scalez* argument.\n \"\"\"\n # This method looks at the rectangular volume (see above)\n # of data and decides how to scale the view portal to fit it.\n if tight is None:\n _tight = self._tight\n if not _tight:\n # if image data only just use the datalim\n for artist in self._children:\n if isinstance(artist, mimage.AxesImage):\n _tight = True\n elif isinstance(artist, (mlines.Line2D, mpatches.Patch)):\n _tight = False\n break\n else:\n _tight = self._tight = bool(tight)\n\n if scalex and self.get_autoscalex_on():\n x0, x1 = self.xy_dataLim.intervalx\n xlocator = self.xaxis.get_major_locator()\n x0, x1 = xlocator.nonsingular(x0, x1)\n if self._xmargin > 0:\n delta = (x1 - x0) * self._xmargin\n x0 -= delta\n x1 += delta\n if not _tight:\n x0, x1 = xlocator.view_limits(x0, x1)\n self.set_xbound(x0, x1)\n\n if scaley and self.get_autoscaley_on():\n y0, y1 = self.xy_dataLim.intervaly\n ylocator = self.yaxis.get_major_locator()\n y0, y1 = ylocator.nonsingular(y0, y1)\n if self._ymargin > 0:\n delta = (y1 - y0) * self._ymargin\n y0 -= delta\n y1 += delta\n if not _tight:\n y0, y1 = ylocator.view_limits(y0, y1)\n self.set_ybound(y0, y1)\n\n if scalez and self.get_autoscalez_on():\n z0, z1 = self.zz_dataLim.intervalx\n zlocator = self.zaxis.get_major_locator()\n z0, z1 = zlocator.nonsingular(z0, z1)\n if self._zmargin > 0:\n delta = (z1 - z0) * self._zmargin\n z0 -= delta\n z1 += delta\n if not _tight:\n z0, z1 = zlocator.view_limits(z0, z1)\n self.set_zbound(z0, z1)\n\n def get_w_lims(self):\n \"\"\"Get 3D world limits.\"\"\"\n minx, maxx = self.get_xlim3d()\n miny, maxy = self.get_ylim3d()\n minz, maxz = self.get_zlim3d()\n return minx, maxx, miny, maxy, minz, maxz\n\n # set_xlim, set_ylim are directly inherited from base Axes.\n def set_zlim(self, bottom=None, top=None, *, emit=True, auto=False,\n zmin=None, zmax=None):\n \"\"\"\n Set 3D z limits.\n\n See `.Axes.set_ylim` for full 
documentation\n \"\"\"\n if top is None and np.iterable(bottom):\n bottom, top = bottom\n if zmin is not None:\n if bottom is not None:\n raise TypeError(\"Cannot pass both 'bottom' and 'zmin'\")\n bottom = zmin\n if zmax is not None:\n if top is not None:\n raise TypeError(\"Cannot pass both 'top' and 'zmax'\")\n top = zmax\n return self.zaxis._set_lim(bottom, top, emit=emit, auto=auto)\n\n set_xlim3d = maxes.Axes.set_xlim\n set_ylim3d = maxes.Axes.set_ylim\n set_zlim3d = set_zlim\n\n def get_xlim(self):\n # docstring inherited\n return tuple(self.xy_viewLim.intervalx)\n\n def get_ylim(self):\n # docstring inherited\n return tuple(self.xy_viewLim.intervaly)\n\n def get_zlim(self):\n \"\"\"\n Return the 3D z-axis view limits.\n\n Returns\n -------\n left, right : (float, float)\n The current z-axis limits in data coordinates.\n\n See Also\n --------\n set_zlim\n set_zbound, get_zbound\n invert_zaxis, zaxis_inverted\n\n Notes\n -----\n The z-axis may be inverted, in which case the *left* value will\n be greater than the *right* value.\n \"\"\"\n return tuple(self.zz_viewLim.intervalx)\n\n get_zscale = _axis_method_wrapper(\"zaxis\", \"get_scale\")\n\n # Redefine all three methods to overwrite their docstrings.\n set_xscale = _axis_method_wrapper(\"xaxis\", \"_set_axes_scale\")\n set_yscale = _axis_method_wrapper(\"yaxis\", \"_set_axes_scale\")\n set_zscale = _axis_method_wrapper(\"zaxis\", \"_set_axes_scale\")\n set_xscale.__doc__, set_yscale.__doc__, set_zscale.__doc__ = map(\n \"\"\"\n Set the {}-axis scale.\n\n Parameters\n ----------\n value : {{\"linear\"}}\n The axis scale type to apply. 3D axes currently only support\n linear scales; other scales yield nonsensical results.\n\n **kwargs\n Keyword arguments are nominally forwarded to the scale class, but\n none of them is applicable for linear scales.\n \"\"\".format,\n [\"x\", \"y\", \"z\"])\n\n get_zticks = _axis_method_wrapper(\"zaxis\", \"get_ticklocs\")\n set_zticks = _axis_method_wrapper(\"zaxis\", \"set_ticks\")\n get_zmajorticklabels = _axis_method_wrapper(\"zaxis\", \"get_majorticklabels\")\n get_zminorticklabels = _axis_method_wrapper(\"zaxis\", \"get_minorticklabels\")\n get_zticklabels = _axis_method_wrapper(\"zaxis\", \"get_ticklabels\")\n set_zticklabels = _axis_method_wrapper(\n \"zaxis\", \"set_ticklabels\",\n doc_sub={\"Axis.set_ticks\": \"Axes3D.set_zticks\"})\n\n zaxis_date = _axis_method_wrapper(\"zaxis\", \"axis_date\")\n if zaxis_date.__doc__:\n zaxis_date.__doc__ += textwrap.dedent(\"\"\"\n\n Notes\n -----\n This function is merely provided for completeness, but 3D axes do not\n support dates for ticks, and so this may not work as expected.\n \"\"\")\n\n def clabel(self, *args, **kwargs):\n \"\"\"Currently not implemented for 3D axes, and returns *None*.\"\"\"\n return None\n\n def view_init(self, elev=None, azim=None, roll=None, vertical_axis=\"z\",\n share=False):\n \"\"\"\n Set the elevation and azimuth of the axes in degrees (not radians).\n\n This can be used to rotate the axes programmatically.\n\n To look normal to the primary planes, the following elevation and\n azimuth angles can be used. 
A roll angle of 0, 90, 180, or 270 deg\n will rotate these views while keeping the axes at right angles.\n\n ========== ==== ====\n view plane elev azim\n ========== ==== ====\n XY 90 -90\n XZ 0 -90\n YZ 0 0\n -XY -90 90\n -XZ 0 90\n -YZ 0 180\n ========== ==== ====\n\n Parameters\n ----------\n elev : float, default: None\n The elevation angle in degrees rotates the camera above the plane\n pierced by the vertical axis, with a positive angle corresponding\n to a location above that plane. For example, with the default\n vertical axis of 'z', the elevation defines the angle of the camera\n location above the x-y plane.\n If None, then the initial value as specified in the `Axes3D`\n constructor is used.\n azim : float, default: None\n The azimuthal angle in degrees rotates the camera about the\n vertical axis, with a positive angle corresponding to a\n right-handed rotation. For example, with the default vertical axis\n of 'z', a positive azimuth rotates the camera about the origin from\n its location along the +x axis towards the +y axis.\n If None, then the initial value as specified in the `Axes3D`\n constructor is used.\n roll : float, default: None\n The roll angle in degrees rotates the camera about the viewing\n axis. A positive angle spins the camera clockwise, causing the\n scene to rotate counter-clockwise.\n If None, then the initial value as specified in the `Axes3D`\n constructor is used.\n vertical_axis : {\"z\", \"x\", \"y\"}, default: \"z\"\n The axis to align vertically. *azim* rotates about this axis.\n share : bool, default: False\n If ``True``, apply the settings to all Axes with shared views.\n \"\"\"\n\n self._dist = 10 # The camera distance from origin. Behaves like zoom\n\n if elev is None:\n elev = self.initial_elev\n if azim is None:\n azim = self.initial_azim\n if roll is None:\n roll = self.initial_roll\n vertical_axis = _api.check_getitem(\n dict(x=0, y=1, z=2), vertical_axis=vertical_axis\n )\n\n if share:\n axes = {sibling for sibling\n in self._shared_axes['view'].get_siblings(self)}\n else:\n axes = [self]\n\n for ax in axes:\n ax.elev = elev\n ax.azim = azim\n ax.roll = roll\n ax._vertical_axis = vertical_axis\n\n def set_proj_type(self, proj_type, focal_length=None):\n \"\"\"\n Set the projection type.\n\n Parameters\n ----------\n proj_type : {'persp', 'ortho'}\n The projection type.\n focal_length : float, default: None\n For a projection type of 'persp', the focal length of the virtual\n camera. Must be > 0. 
If None, defaults to 1.\n The focal length can be computed from a desired Field Of View via\n the equation: focal_length = 1/tan(FOV/2)\n \"\"\"\n _api.check_in_list(['persp', 'ortho'], proj_type=proj_type)\n if proj_type == 'persp':\n if focal_length is None:\n focal_length = 1\n elif focal_length <= 0:\n raise ValueError(f\"focal_length = {focal_length} must be \"\n \"greater than 0\")\n self._focal_length = focal_length\n else: # 'ortho':\n if focal_length not in (None, np.inf):\n raise ValueError(f\"focal_length = {focal_length} must be \"\n f\"None for proj_type = {proj_type}\")\n self._focal_length = np.inf\n\n def _roll_to_vertical(self, arr):\n \"\"\"Roll arrays to match the different vertical axis.\"\"\"\n return np.roll(arr, self._vertical_axis - 2)\n\n def get_proj(self):\n \"\"\"Create the projection matrix from the current viewing position.\"\"\"\n\n # Transform to uniform world coordinates 0-1, 0-1, 0-1\n box_aspect = self._roll_to_vertical(self._box_aspect)\n worldM = proj3d.world_transformation(\n *self.get_xlim3d(),\n *self.get_ylim3d(),\n *self.get_zlim3d(),\n pb_aspect=box_aspect,\n )\n\n # Look into the middle of the world coordinates:\n R = 0.5 * box_aspect\n\n # elev: elevation angle in the z plane.\n # azim: azimuth angle in the xy plane.\n # Coordinates for a point that rotates around the box of data.\n # p0, p1 corresponds to rotating the box only around the vertical axis.\n # p2 corresponds to rotating the box only around the horizontal axis.\n elev_rad = np.deg2rad(self.elev)\n azim_rad = np.deg2rad(self.azim)\n p0 = np.cos(elev_rad) * np.cos(azim_rad)\n p1 = np.cos(elev_rad) * np.sin(azim_rad)\n p2 = np.sin(elev_rad)\n\n # When changing vertical axis the coordinates changes as well.\n # Roll the values to get the same behaviour as the default:\n ps = self._roll_to_vertical([p0, p1, p2])\n\n # The coordinates for the eye viewing point. 
The eye is looking\n # towards the middle of the box of data from a distance:\n eye = R + self._dist * ps\n\n # vvec, self._vvec and self._eye are unused, remove when deprecated\n vvec = R - eye\n self._eye = eye\n self._vvec = vvec / np.linalg.norm(vvec)\n\n # Calculate the viewing axes for the eye position\n u, v, w = self._calc_view_axes(eye)\n self._view_u = u # _view_u is towards the right of the screen\n self._view_v = v # _view_v is towards the top of the screen\n self._view_w = w # _view_w is out of the screen\n\n # Generate the view and projection transformation matrices\n if self._focal_length == np.inf:\n # Orthographic projection\n viewM = proj3d._view_transformation_uvw(u, v, w, eye)\n projM = proj3d._ortho_transformation(-self._dist, self._dist)\n else:\n # Perspective projection\n # Scale the eye dist to compensate for the focal length zoom effect\n eye_focal = R + self._dist * ps * self._focal_length\n viewM = proj3d._view_transformation_uvw(u, v, w, eye_focal)\n projM = proj3d._persp_transformation(-self._dist,\n self._dist,\n self._focal_length)\n\n # Combine all the transformation matrices to get the final projection\n M0 = np.dot(viewM, worldM)\n M = np.dot(projM, M0)\n return M\n\n def mouse_init(self, rotate_btn=1, pan_btn=2, zoom_btn=3):\n \"\"\"\n Set the mouse buttons for 3D rotation and zooming.\n\n Parameters\n ----------\n rotate_btn : int or list of int, default: 1\n The mouse button or buttons to use for 3D rotation of the axes.\n pan_btn : int or list of int, default: 2\n The mouse button or buttons to use to pan the 3D axes.\n zoom_btn : int or list of int, default: 3\n The mouse button or buttons to use to zoom the 3D axes.\n \"\"\"\n self.button_pressed = None\n # coerce scalars into array-like, then convert into\n # a regular list to avoid comparisons against None\n # which breaks in recent versions of numpy.\n self._rotate_btn = np.atleast_1d(rotate_btn).tolist()\n self._pan_btn = np.atleast_1d(pan_btn).tolist()\n self._zoom_btn = np.atleast_1d(zoom_btn).tolist()\n\n def disable_mouse_rotation(self):\n \"\"\"Disable mouse buttons for 3D rotation, panning, and zooming.\"\"\"\n self.mouse_init(rotate_btn=[], pan_btn=[], zoom_btn=[])\n\n def can_zoom(self):\n # doc-string inherited\n return True\n\n def can_pan(self):\n # doc-string inherited\n return True\n\n def sharez(self, other):\n \"\"\"\n Share the z-axis with *other*.\n\n This is equivalent to passing ``sharez=other`` when constructing the\n Axes, and cannot be used if the z-axis is already being shared with\n another Axes.\n \"\"\"\n _api.check_isinstance(Axes3D, other=other)\n if self._sharez is not None and other is not self._sharez:\n raise ValueError(\"z-axis is already shared\")\n self._shared_axes[\"z\"].join(self, other)\n self._sharez = other\n self.zaxis.major = other.zaxis.major # Ticker instances holding\n self.zaxis.minor = other.zaxis.minor # locator and formatter.\n z0, z1 = other.get_zlim()\n self.set_zlim(z0, z1, emit=False, auto=other.get_autoscalez_on())\n self.zaxis._scale = other.zaxis._scale\n\n def shareview(self, other):\n \"\"\"\n Share the view angles with *other*.\n\n This is equivalent to passing ``shareview=other`` when\n constructing the Axes, and cannot be used if the view angles are\n already being shared with another Axes.\n \"\"\"\n _api.check_isinstance(Axes3D, other=other)\n if self._shareview is not None and other is not self._shareview:\n raise ValueError(\"view angles are already shared\")\n self._shared_axes[\"view\"].join(self, other)\n self._shareview = 
other\n vertical_axis = {0: \"x\", 1: \"y\", 2: \"z\"}[other._vertical_axis]\n self.view_init(elev=other.elev, azim=other.azim, roll=other.roll,\n vertical_axis=vertical_axis, share=True)\n\n def clear(self):\n # docstring inherited.\n super().clear()\n if self._focal_length == np.inf:\n self._zmargin = mpl.rcParams['axes.zmargin']\n else:\n self._zmargin = 0.\n self.grid(mpl.rcParams['axes3d.grid'])\n\n def _button_press(self, event):\n if event.inaxes == self:\n self.button_pressed = event.button\n self._sx, self._sy = event.xdata, event.ydata\n toolbar = self.figure.canvas.toolbar\n if toolbar and toolbar._nav_stack() is None:\n toolbar.push_current()\n\n def _button_release(self, event):\n self.button_pressed = None\n toolbar = self.figure.canvas.toolbar\n # backend_bases.release_zoom and backend_bases.release_pan call\n # push_current, so check the navigation mode so we don't call it twice\n if toolbar and self.get_navigate_mode() is None:\n toolbar.push_current()\n\n def _get_view(self):\n # docstring inherited\n return {\n \"xlim\": self.get_xlim(), \"autoscalex_on\": self.get_autoscalex_on(),\n \"ylim\": self.get_ylim(), \"autoscaley_on\": self.get_autoscaley_on(),\n \"zlim\": self.get_zlim(), \"autoscalez_on\": self.get_autoscalez_on(),\n }, (self.elev, self.azim, self.roll)\n\n def _set_view(self, view):\n # docstring inherited\n props, (elev, azim, roll) = view\n self.set(**props)\n self.elev = elev\n self.azim = azim\n self.roll = roll\n\n def format_zdata(self, z):\n \"\"\"\n Return *z* string formatted. This function will use the\n :attr:`fmt_zdata` attribute if it is callable, else will fall\n back on the zaxis major formatter\n \"\"\"\n try:\n return self.fmt_zdata(z)\n except (AttributeError, TypeError):\n func = self.zaxis.get_major_formatter().format_data_short\n val = func(z)\n return val\n\n def format_coord(self, xv, yv, renderer=None):\n \"\"\"\n Return a string giving the current view rotation angles, or the x, y, z\n coordinates of the point on the nearest axis pane underneath the mouse\n cursor, depending on the mouse button pressed.\n \"\"\"\n coords = ''\n\n if self.button_pressed in self._rotate_btn:\n # ignore xv and yv and display angles instead\n coords = self._rotation_coords()\n\n elif self.M is not None:\n coords = self._location_coords(xv, yv, renderer)\n\n return coords\n\n def _rotation_coords(self):\n \"\"\"\n Return the rotation angles as a string.\n \"\"\"\n norm_elev = art3d._norm_angle(self.elev)\n norm_azim = art3d._norm_angle(self.azim)\n norm_roll = art3d._norm_angle(self.roll)\n coords = (f\"elevation={norm_elev:.0f}\\N{DEGREE SIGN}, \"\n f\"azimuth={norm_azim:.0f}\\N{DEGREE SIGN}, \"\n f\"roll={norm_roll:.0f}\\N{DEGREE SIGN}\"\n ).replace(\"-\", \"\\N{MINUS SIGN}\")\n return coords\n\n def _location_coords(self, xv, yv, renderer):\n \"\"\"\n Return the location on the axis pane underneath the cursor as a string.\n \"\"\"\n p1, pane_idx = self._calc_coord(xv, yv, renderer)\n xs = self.format_xdata(p1[0])\n ys = self.format_ydata(p1[1])\n zs = self.format_zdata(p1[2])\n if pane_idx == 0:\n coords = f'x pane={xs}, y={ys}, z={zs}'\n elif pane_idx == 1:\n coords = f'x={xs}, y pane={ys}, z={zs}'\n elif pane_idx == 2:\n coords = f'x={xs}, y={ys}, z pane={zs}'\n return coords\n\n def _get_camera_loc(self):\n \"\"\"\n Returns the current camera location in data coordinates.\n \"\"\"\n cx, cy, cz, dx, dy, dz = self._get_w_centers_ranges()\n c = np.array([cx, cy, cz])\n r = np.array([dx, dy, dz])\n\n if self._focal_length == np.inf: # orthographic 
projection\n focal_length = 1e9 # large enough to be effectively infinite\n else: # perspective projection\n focal_length = self._focal_length\n eye = c + self._view_w * self._dist * r / self._box_aspect * focal_length\n return eye\n\n def _calc_coord(self, xv, yv, renderer=None):\n \"\"\"\n Given the 2D view coordinates, find the point on the nearest axis pane\n that lies directly below those coordinates. Returns a 3D point in data\n coordinates.\n \"\"\"\n if self._focal_length == np.inf: # orthographic projection\n zv = 1\n else: # perspective projection\n zv = -1 / self._focal_length\n\n # Convert point on view plane to data coordinates\n p1 = np.array(proj3d.inv_transform(xv, yv, zv, self.invM)).ravel()\n\n # Get the vector from the camera to the point on the view plane\n vec = self._get_camera_loc() - p1\n\n # Get the pane locations for each of the axes\n pane_locs = []\n for axis in self._axis_map.values():\n xys, loc = axis.active_pane(renderer)\n pane_locs.append(loc)\n\n # Find the distance to the nearest pane by projecting the view vector\n scales = np.zeros(3)\n for i in range(3):\n if vec[i] == 0:\n scales[i] = np.inf\n else:\n scales[i] = (p1[i] - pane_locs[i]) / vec[i]\n pane_idx = np.argmin(abs(scales))\n scale = scales[pane_idx]\n\n # Calculate the point on the closest pane\n p2 = p1 - scale*vec\n return p2, pane_idx\n\n def _on_move(self, event):\n \"\"\"\n Mouse moving.\n\n By default, button-1 rotates, button-2 pans, and button-3 zooms;\n these buttons can be modified via `mouse_init`.\n \"\"\"\n\n if not self.button_pressed:\n return\n\n if self.get_navigate_mode() is not None:\n # we don't want to rotate if we are zooming/panning\n # from the toolbar\n return\n\n if self.M is None:\n return\n\n x, y = event.xdata, event.ydata\n # In case the mouse is out of bounds.\n if x is None or event.inaxes != self:\n return\n\n dx, dy = x - self._sx, y - self._sy\n w = self._pseudo_w\n h = self._pseudo_h\n\n # Rotation\n if self.button_pressed in self._rotate_btn:\n # rotate viewing point\n # get the x and y pixel coords\n if dx == 0 and dy == 0:\n return\n\n roll = np.deg2rad(self.roll)\n delev = -(dy/h)*180*np.cos(roll) + (dx/w)*180*np.sin(roll)\n dazim = -(dy/h)*180*np.sin(roll) - (dx/w)*180*np.cos(roll)\n elev = self.elev + delev\n azim = self.azim + dazim\n self.view_init(elev=elev, azim=azim, roll=roll, share=True)\n self.stale = True\n\n # Pan\n elif self.button_pressed in self._pan_btn:\n # Start the pan event with pixel coordinates\n px, py = self.transData.transform([self._sx, self._sy])\n self.start_pan(px, py, 2)\n # pan view (takes pixel coordinate input)\n self.drag_pan(2, None, event.x, event.y)\n self.end_pan()\n\n # Zoom\n elif self.button_pressed in self._zoom_btn:\n # zoom view (dragging down zooms in)\n scale = h/(h - dy)\n self._scale_axis_limits(scale, scale, scale)\n\n # Store the event coordinates for the next time through.\n self._sx, self._sy = x, y\n # Always request a draw update at the end of interaction\n self.figure.canvas.draw_idle()\n\n def drag_pan(self, button, key, x, y):\n # docstring inherited\n\n # Get the coordinates from the move event\n p = self._pan_start\n (xdata, ydata), (xdata_start, ydata_start) = p.trans_inverse.transform(\n [(x, y), (p.x, p.y)])\n self._sx, self._sy = xdata, ydata\n # Calling start_pan() to set the x/y of this event as the starting\n # move location for the next event\n self.start_pan(x, y, button)\n du, dv = xdata - xdata_start, ydata - ydata_start\n dw = 0\n if key == 'x':\n dv = 0\n elif key == 'y':\n du = 0\n 
if du == 0 and dv == 0:\n return\n\n # Transform the pan from the view axes to the data axes\n R = np.array([self._view_u, self._view_v, self._view_w])\n R = -R / self._box_aspect * self._dist\n duvw_projected = R.T @ np.array([du, dv, dw])\n\n # Calculate pan distance\n minx, maxx, miny, maxy, minz, maxz = self.get_w_lims()\n dx = (maxx - minx) * duvw_projected[0]\n dy = (maxy - miny) * duvw_projected[1]\n dz = (maxz - minz) * duvw_projected[2]\n\n # Set the new axis limits\n self.set_xlim3d(minx + dx, maxx + dx)\n self.set_ylim3d(miny + dy, maxy + dy)\n self.set_zlim3d(minz + dz, maxz + dz)\n\n def _calc_view_axes(self, eye):\n \"\"\"\n Get the unit vectors for the viewing axes in data coordinates.\n `u` is towards the right of the screen\n `v` is towards the top of the screen\n `w` is out of the screen\n \"\"\"\n elev_rad = np.deg2rad(art3d._norm_angle(self.elev))\n roll_rad = np.deg2rad(art3d._norm_angle(self.roll))\n\n # Look into the middle of the world coordinates\n R = 0.5 * self._roll_to_vertical(self._box_aspect)\n\n # Define which axis should be vertical. A negative value\n # indicates the plot is upside down and therefore the values\n # have been reversed:\n V = np.zeros(3)\n V[self._vertical_axis] = -1 if abs(elev_rad) > np.pi/2 else 1\n\n u, v, w = proj3d._view_axes(eye, R, V, roll_rad)\n return u, v, w\n\n def _set_view_from_bbox(self, bbox, direction='in',\n mode=None, twinx=False, twiny=False):\n \"\"\"\n Zoom in or out of the bounding box.\n\n Will center the view in the center of the bounding box, and zoom by\n the ratio of the size of the bounding box to the size of the Axes3D.\n \"\"\"\n (start_x, start_y, stop_x, stop_y) = bbox\n if mode == 'x':\n start_y = self.bbox.min[1]\n stop_y = self.bbox.max[1]\n elif mode == 'y':\n start_x = self.bbox.min[0]\n stop_x = self.bbox.max[0]\n\n # Clip to bounding box limits\n start_x, stop_x = np.clip(sorted([start_x, stop_x]),\n self.bbox.min[0], self.bbox.max[0])\n start_y, stop_y = np.clip(sorted([start_y, stop_y]),\n self.bbox.min[1], self.bbox.max[1])\n\n # Move the center of the view to the center of the bbox\n zoom_center_x = (start_x + stop_x)/2\n zoom_center_y = (start_y + stop_y)/2\n\n ax_center_x = (self.bbox.max[0] + self.bbox.min[0])/2\n ax_center_y = (self.bbox.max[1] + self.bbox.min[1])/2\n\n self.start_pan(zoom_center_x, zoom_center_y, 2)\n self.drag_pan(2, None, ax_center_x, ax_center_y)\n self.end_pan()\n\n # Calculate zoom level\n dx = abs(start_x - stop_x)\n dy = abs(start_y - stop_y)\n scale_u = dx / (self.bbox.max[0] - self.bbox.min[0])\n scale_v = dy / (self.bbox.max[1] - self.bbox.min[1])\n\n # Keep aspect ratios equal\n scale = max(scale_u, scale_v)\n\n # Zoom out\n if direction == 'out':\n scale = 1 / scale\n\n self._zoom_data_limits(scale, scale, scale)\n\n def _zoom_data_limits(self, scale_u, scale_v, scale_w):\n \"\"\"\n Zoom in or out of a 3D plot.\n\n Will scale the data limits by the scale factors. 
These will be\n transformed to the x, y, z data axes based on the current view angles.\n A scale factor > 1 zooms out and a scale factor < 1 zooms in.\n\n For an axes that has had its aspect ratio set to 'equal', 'equalxy',\n 'equalyz', or 'equalxz', the relevant axes are constrained to zoom\n equally.\n\n Parameters\n ----------\n scale_u : float\n Scale factor for the u view axis (view screen horizontal).\n scale_v : float\n Scale factor for the v view axis (view screen vertical).\n scale_w : float\n Scale factor for the w view axis (view screen depth).\n \"\"\"\n scale = np.array([scale_u, scale_v, scale_w])\n\n # Only perform frame conversion if unequal scale factors\n if not np.allclose(scale, scale_u):\n # Convert the scale factors from the view frame to the data frame\n R = np.array([self._view_u, self._view_v, self._view_w])\n S = scale * np.eye(3)\n scale = np.linalg.norm(R.T @ S, axis=1)\n\n # Set the constrained scale factors to the factor closest to 1\n if self._aspect in ('equal', 'equalxy', 'equalxz', 'equalyz'):\n ax_idxs = self._equal_aspect_axis_indices(self._aspect)\n min_ax_idxs = np.argmin(np.abs(scale[ax_idxs] - 1))\n scale[ax_idxs] = scale[ax_idxs][min_ax_idxs]\n\n self._scale_axis_limits(scale[0], scale[1], scale[2])\n\n def _scale_axis_limits(self, scale_x, scale_y, scale_z):\n \"\"\"\n Keeping the center of the x, y, and z data axes fixed, scale their\n limits by scale factors. A scale factor > 1 zooms out and a scale\n factor < 1 zooms in.\n\n Parameters\n ----------\n scale_x : float\n Scale factor for the x data axis.\n scale_y : float\n Scale factor for the y data axis.\n scale_z : float\n Scale factor for the z data axis.\n \"\"\"\n # Get the axis centers and ranges\n cx, cy, cz, dx, dy, dz = self._get_w_centers_ranges()\n\n # Set the scaled axis limits\n self.set_xlim3d(cx - dx*scale_x/2, cx + dx*scale_x/2)\n self.set_ylim3d(cy - dy*scale_y/2, cy + dy*scale_y/2)\n self.set_zlim3d(cz - dz*scale_z/2, cz + dz*scale_z/2)\n\n def _get_w_centers_ranges(self):\n \"\"\"Get 3D world centers and axis ranges.\"\"\"\n # Calculate center of axis limits\n minx, maxx, miny, maxy, minz, maxz = self.get_w_lims()\n cx = (maxx + minx)/2\n cy = (maxy + miny)/2\n cz = (maxz + minz)/2\n\n # Calculate range of axis limits\n dx = (maxx - minx)\n dy = (maxy - miny)\n dz = (maxz - minz)\n return cx, cy, cz, dx, dy, dz\n\n def set_zlabel(self, zlabel, fontdict=None, labelpad=None, **kwargs):\n \"\"\"\n Set zlabel. See doc for `.set_ylabel` for description.\n \"\"\"\n if labelpad is not None:\n self.zaxis.labelpad = labelpad\n return self.zaxis.set_label_text(zlabel, fontdict, **kwargs)\n\n def get_zlabel(self):\n \"\"\"\n Get the z-label text string.\n \"\"\"\n label = self.zaxis.get_label()\n return label.get_text()\n\n # Axes rectangle characteristics\n\n # The frame_on methods are not available for 3D axes.\n # Python will raise a TypeError if they are called.\n get_frame_on = None\n set_frame_on = None\n\n def grid(self, visible=True, **kwargs):\n \"\"\"\n Set / unset 3D grid.\n\n .. note::\n\n Currently, this function does not behave the same as\n `.axes.Axes.grid`, but it is intended to eventually support that\n behavior.\n \"\"\"\n # TODO: Operate on each axes separately\n if len(kwargs):\n visible = True\n self._draw_grid = visible\n self.stale = True\n\n def tick_params(self, axis='both', **kwargs):\n \"\"\"\n Convenience method for changing the appearance of ticks and\n tick labels.\n\n See `.Axes.tick_params` for full documentation. 
Because this function\n applies to 3D Axes, *axis* can also be set to 'z', and setting *axis*\n to 'both' autoscales all three axes.\n\n Also, because of how Axes3D objects are drawn very differently\n from regular 2D axes, some of these settings may have\n ambiguous meaning. For simplicity, the 'z' axis will\n accept settings as if it was like the 'y' axis.\n\n .. note::\n Axes3D currently ignores some of these settings.\n \"\"\"\n _api.check_in_list(['x', 'y', 'z', 'both'], axis=axis)\n if axis in ['x', 'y', 'both']:\n super().tick_params(axis, **kwargs)\n if axis in ['z', 'both']:\n zkw = dict(kwargs)\n zkw.pop('top', None)\n zkw.pop('bottom', None)\n zkw.pop('labeltop', None)\n zkw.pop('labelbottom', None)\n self.zaxis.set_tick_params(**zkw)\n\n # data limits, ticks, tick labels, and formatting\n\n def invert_zaxis(self):\n \"\"\"\n Invert the z-axis.\n\n See Also\n --------\n zaxis_inverted\n get_zlim, set_zlim\n get_zbound, set_zbound\n \"\"\"\n bottom, top = self.get_zlim()\n self.set_zlim(top, bottom, auto=None)\n\n zaxis_inverted = _axis_method_wrapper(\"zaxis\", \"get_inverted\")\n\n def get_zbound(self):\n \"\"\"\n Return the lower and upper z-axis bounds, in increasing order.\n\n See Also\n --------\n set_zbound\n get_zlim, set_zlim\n invert_zaxis, zaxis_inverted\n \"\"\"\n bottom, top = self.get_zlim()\n if bottom < top:\n return bottom, top\n else:\n return top, bottom\n\n def set_zbound(self, lower=None, upper=None):\n \"\"\"\n Set the lower and upper numerical bounds of the z-axis.\n\n This method will honor axes inversion regardless of parameter order.\n It will not change the autoscaling setting (`.get_autoscalez_on()`).\n\n Parameters\n ----------\n lower, upper : float or None\n The lower and upper bounds. If *None*, the respective axis bound\n is not modified.\n\n See Also\n --------\n get_zbound\n get_zlim, set_zlim\n invert_zaxis, zaxis_inverted\n \"\"\"\n if upper is None and np.iterable(lower):\n lower, upper = lower\n\n old_lower, old_upper = self.get_zbound()\n if lower is None:\n lower = old_lower\n if upper is None:\n upper = old_upper\n\n self.set_zlim(sorted((lower, upper),\n reverse=bool(self.zaxis_inverted())),\n auto=None)\n\n def text(self, x, y, z, s, zdir=None, **kwargs):\n \"\"\"\n Add the text *s* to the 3D Axes at location *x*, *y*, *z* in data coordinates.\n\n Parameters\n ----------\n x, y, z : float\n The position to place the text.\n s : str\n The text.\n zdir : {'x', 'y', 'z', 3-tuple}, optional\n The direction to be used as the z-direction. 
Default: 'z'.\n See `.get_dir_vector` for a description of the values.\n **kwargs\n Other arguments are forwarded to `matplotlib.axes.Axes.text`.\n\n Returns\n -------\n `.Text3D`\n The created `.Text3D` instance.\n \"\"\"\n text = super().text(x, y, s, **kwargs)\n art3d.text_2d_to_3d(text, z, zdir)\n return text\n\n text3D = text\n text2D = Axes.text\n\n def plot(self, xs, ys, *args, zdir='z', **kwargs):\n \"\"\"\n Plot 2D or 3D data.\n\n Parameters\n ----------\n xs : 1D array-like\n x coordinates of vertices.\n ys : 1D array-like\n y coordinates of vertices.\n zs : float or 1D array-like\n z coordinates of vertices; either one for all points or one for\n each point.\n zdir : {'x', 'y', 'z'}, default: 'z'\n When plotting 2D data, the direction to use as z.\n **kwargs\n Other arguments are forwarded to `matplotlib.axes.Axes.plot`.\n \"\"\"\n had_data = self.has_data()\n\n # `zs` can be passed positionally or as keyword; checking whether\n # args[0] is a string matches the behavior of 2D `plot` (via\n # `_process_plot_var_args`).\n if args and not isinstance(args[0], str):\n zs, *args = args\n if 'zs' in kwargs:\n raise TypeError(\"plot() for multiple values for argument 'z'\")\n else:\n zs = kwargs.pop('zs', 0)\n\n # Match length\n zs = np.broadcast_to(zs, np.shape(xs))\n\n lines = super().plot(xs, ys, *args, **kwargs)\n for line in lines:\n art3d.line_2d_to_3d(line, zs=zs, zdir=zdir)\n\n xs, ys, zs = art3d.juggle_axes(xs, ys, zs, zdir)\n self.auto_scale_xyz(xs, ys, zs, had_data)\n return lines\n\n plot3D = plot\n\n def plot_surface(self, X, Y, Z, *, norm=None, vmin=None,\n vmax=None, lightsource=None, **kwargs):\n \"\"\"\n Create a surface plot.\n\n By default, it will be colored in shades of a solid color, but it also\n supports colormapping by supplying the *cmap* argument.\n\n .. note::\n\n The *rcount* and *ccount* kwargs, which both default to 50,\n determine the maximum number of samples used in each direction. If\n the input data is larger, it will be downsampled (by slicing) to\n these numbers of points.\n\n .. note::\n\n To maximize rendering speed consider setting *rstride* and *cstride*\n to divisors of the number of rows minus 1 and columns minus 1\n respectively. For example, given 51 rows rstride can be any of the\n divisors of 50.\n\n Similarly, a setting of *rstride* and *cstride* equal to 1 (or\n *rcount* and *ccount* equal the number of rows and columns) can use\n the optimized path.\n\n Parameters\n ----------\n X, Y, Z : 2D arrays\n Data values.\n\n rcount, ccount : int\n Maximum number of samples used in each direction. If the input\n data is larger, it will be downsampled (by slicing) to these\n numbers of points. Defaults to 50.\n\n rstride, cstride : int\n Downsampling stride in each direction. These arguments are\n mutually exclusive with *rcount* and *ccount*. If only one of\n *rstride* or *cstride* is set, the other defaults to 10.\n\n 'classic' mode uses a default of ``rstride = cstride = 10`` instead\n of the new default of ``rcount = ccount = 50``.\n\n color : color-like\n Color of the surface patches.\n\n cmap : Colormap\n Colormap of the surface patches.\n\n facecolors : array-like of colors.\n Colors of each individual patch.\n\n norm : Normalize\n Normalization for the colormap.\n\n vmin, vmax : float\n Bounds for the normalization.\n\n shade : bool, default: True\n Whether to shade the facecolors. 
Shading is always disabled when\n *cmap* is specified.\n\n lightsource : `~matplotlib.colors.LightSource`\n The lightsource to use when *shade* is True.\n\n **kwargs\n Other keyword arguments are forwarded to `.Poly3DCollection`.\n \"\"\"\n\n had_data = self.has_data()\n\n if Z.ndim != 2:\n raise ValueError(\"Argument Z must be 2-dimensional.\")\n\n Z = cbook._to_unmasked_float_array(Z)\n X, Y, Z = np.broadcast_arrays(X, Y, Z)\n rows, cols = Z.shape\n\n has_stride = 'rstride' in kwargs or 'cstride' in kwargs\n has_count = 'rcount' in kwargs or 'ccount' in kwargs\n\n if has_stride and has_count:\n raise ValueError(\"Cannot specify both stride and count arguments\")\n\n rstride = kwargs.pop('rstride', 10)\n cstride = kwargs.pop('cstride', 10)\n rcount = kwargs.pop('rcount', 50)\n ccount = kwargs.pop('ccount', 50)\n\n if mpl.rcParams['_internal.classic_mode']:\n # Strides have priority over counts in classic mode.\n # So, only compute strides from counts\n # if counts were explicitly given\n compute_strides = has_count\n else:\n # If the strides are provided then it has priority.\n # Otherwise, compute the strides from the counts.\n compute_strides = not has_stride\n\n if compute_strides:\n rstride = int(max(np.ceil(rows / rcount), 1))\n cstride = int(max(np.ceil(cols / ccount), 1))\n\n fcolors = kwargs.pop('facecolors', None)\n\n cmap = kwargs.get('cmap', None)\n shade = kwargs.pop('shade', cmap is None)\n if shade is None:\n raise ValueError(\"shade cannot be None.\")\n\n colset = [] # the sampled facecolor\n if (rows - 1) % rstride == 0 and \\\n (cols - 1) % cstride == 0 and \\\n fcolors is None:\n polys = np.stack(\n [cbook._array_patch_perimeters(a, rstride, cstride)\n for a in (X, Y, Z)],\n axis=-1)\n else:\n # evenly spaced, and including both endpoints\n row_inds = list(range(0, rows-1, rstride)) + [rows-1]\n col_inds = list(range(0, cols-1, cstride)) + [cols-1]\n\n polys = []\n for rs, rs_next in zip(row_inds[:-1], row_inds[1:]):\n for cs, cs_next in zip(col_inds[:-1], col_inds[1:]):\n ps = [\n # +1 ensures we share edges between polygons\n cbook._array_perimeter(a[rs:rs_next+1, cs:cs_next+1])\n for a in (X, Y, Z)\n ]\n # ps = np.stack(ps, axis=-1)\n ps = np.array(ps).T\n polys.append(ps)\n\n if fcolors is not None:\n colset.append(fcolors[rs][cs])\n\n # In cases where there are non-finite values in the data (possibly NaNs from\n # masked arrays), artifacts can be introduced. Here check whether such values\n # are present and remove them.\n if not isinstance(polys, np.ndarray) or not np.isfinite(polys).all():\n new_polys = []\n new_colset = []\n\n # Depending on fcolors, colset is either an empty list or has as\n # many elements as polys. 
In the former case new_colset results in\n # a list with None entries, that is discarded later.\n for p, col in itertools.zip_longest(polys, colset):\n new_poly = np.array(p)[np.isfinite(p).all(axis=1)]\n if len(new_poly):\n new_polys.append(new_poly)\n new_colset.append(col)\n\n # Replace previous polys and, if fcolors is not None, colset\n polys = new_polys\n if fcolors is not None:\n colset = new_colset\n\n # note that the striding causes some polygons to have more coordinates\n # than others\n\n if fcolors is not None:\n polyc = art3d.Poly3DCollection(\n polys, edgecolors=colset, facecolors=colset, shade=shade,\n lightsource=lightsource, **kwargs)\n elif cmap:\n polyc = art3d.Poly3DCollection(polys, **kwargs)\n # can't always vectorize, because polys might be jagged\n if isinstance(polys, np.ndarray):\n avg_z = polys[..., 2].mean(axis=-1)\n else:\n avg_z = np.array([ps[:, 2].mean() for ps in polys])\n polyc.set_array(avg_z)\n if vmin is not None or vmax is not None:\n polyc.set_clim(vmin, vmax)\n if norm is not None:\n polyc.set_norm(norm)\n else:\n color = kwargs.pop('color', None)\n if color is None:\n color = self._get_lines.get_next_color()\n color = np.array(mcolors.to_rgba(color))\n\n polyc = art3d.Poly3DCollection(\n polys, facecolors=color, shade=shade,\n lightsource=lightsource, **kwargs)\n\n self.add_collection(polyc)\n self.auto_scale_xyz(X, Y, Z, had_data)\n\n return polyc\n\n def plot_wireframe(self, X, Y, Z, **kwargs):\n \"\"\"\n Plot a 3D wireframe.\n\n .. note::\n\n The *rcount* and *ccount* kwargs, which both default to 50,\n determine the maximum number of samples used in each direction. If\n the input data is larger, it will be downsampled (by slicing) to\n these numbers of points.\n\n Parameters\n ----------\n X, Y, Z : 2D arrays\n Data values.\n\n rcount, ccount : int\n Maximum number of samples used in each direction. If the input\n data is larger, it will be downsampled (by slicing) to these\n numbers of points. Setting a count to zero causes the data to be\n not sampled in the corresponding direction, producing a 3D line\n plot rather than a wireframe plot. Defaults to 50.\n\n rstride, cstride : int\n Downsampling stride in each direction. These arguments are\n mutually exclusive with *rcount* and *ccount*. If only one of\n *rstride* or *cstride* is set, the other defaults to 1. 
Setting a\n stride to zero causes the data to be not sampled in the\n corresponding direction, producing a 3D line plot rather than a\n wireframe plot.\n\n 'classic' mode uses a default of ``rstride = cstride = 1`` instead\n of the new default of ``rcount = ccount = 50``.\n\n **kwargs\n Other keyword arguments are forwarded to `.Line3DCollection`.\n \"\"\"\n\n had_data = self.has_data()\n if Z.ndim != 2:\n raise ValueError(\"Argument Z must be 2-dimensional.\")\n # FIXME: Support masked arrays\n X, Y, Z = np.broadcast_arrays(X, Y, Z)\n rows, cols = Z.shape\n\n has_stride = 'rstride' in kwargs or 'cstride' in kwargs\n has_count = 'rcount' in kwargs or 'ccount' in kwargs\n\n if has_stride and has_count:\n raise ValueError(\"Cannot specify both stride and count arguments\")\n\n rstride = kwargs.pop('rstride', 1)\n cstride = kwargs.pop('cstride', 1)\n rcount = kwargs.pop('rcount', 50)\n ccount = kwargs.pop('ccount', 50)\n\n if mpl.rcParams['_internal.classic_mode']:\n # Strides have priority over counts in classic mode.\n # So, only compute strides from counts\n # if counts were explicitly given\n if has_count:\n rstride = int(max(np.ceil(rows / rcount), 1)) if rcount else 0\n cstride = int(max(np.ceil(cols / ccount), 1)) if ccount else 0\n else:\n # If the strides are provided then it has priority.\n # Otherwise, compute the strides from the counts.\n if not has_stride:\n rstride = int(max(np.ceil(rows / rcount), 1)) if rcount else 0\n cstride = int(max(np.ceil(cols / ccount), 1)) if ccount else 0\n\n # We want two sets of lines, one running along the \"rows\" of\n # Z and another set of lines running along the \"columns\" of Z.\n # This transpose will make it easy to obtain the columns.\n tX, tY, tZ = np.transpose(X), np.transpose(Y), np.transpose(Z)\n\n if rstride:\n rii = list(range(0, rows, rstride))\n # Add the last index only if needed\n if rows > 0 and rii[-1] != (rows - 1):\n rii += [rows-1]\n else:\n rii = []\n if cstride:\n cii = list(range(0, cols, cstride))\n # Add the last index only if needed\n if cols > 0 and cii[-1] != (cols - 1):\n cii += [cols-1]\n else:\n cii = []\n\n if rstride == 0 and cstride == 0:\n raise ValueError(\"Either rstride or cstride must be non zero\")\n\n # If the inputs were empty, then just\n # reset everything.\n if Z.size == 0:\n rii = []\n cii = []\n\n xlines = [X[i] for i in rii]\n ylines = [Y[i] for i in rii]\n zlines = [Z[i] for i in rii]\n\n txlines = [tX[i] for i in cii]\n tylines = [tY[i] for i in cii]\n tzlines = [tZ[i] for i in cii]\n\n lines = ([list(zip(xl, yl, zl))\n for xl, yl, zl in zip(xlines, ylines, zlines)]\n + [list(zip(xl, yl, zl))\n for xl, yl, zl in zip(txlines, tylines, tzlines)])\n\n linec = art3d.Line3DCollection(lines, **kwargs)\n self.add_collection(linec)\n self.auto_scale_xyz(X, Y, Z, had_data)\n\n return linec\n\n def plot_trisurf(self, *args, color=None, norm=None, vmin=None, vmax=None,\n lightsource=None, **kwargs):\n \"\"\"\n Plot a triangulated surface.\n\n The (optional) triangulation can be specified in one of two ways;\n either::\n\n plot_trisurf(triangulation, ...)\n\n where triangulation is a `~matplotlib.tri.Triangulation` object, or::\n\n plot_trisurf(X, Y, ...)\n plot_trisurf(X, Y, triangles, ...)\n plot_trisurf(X, Y, triangles=triangles, ...)\n\n in which case a Triangulation object will be created. 
See\n `.Triangulation` for an explanation of these possibilities.\n\n The remaining arguments are::\n\n plot_trisurf(..., Z)\n\n where *Z* is the array of values to contour, one per point\n in the triangulation.\n\n Parameters\n ----------\n X, Y, Z : array-like\n Data values as 1D arrays.\n color\n Color of the surface patches.\n cmap\n A colormap for the surface patches.\n norm : Normalize\n An instance of Normalize to map values to colors.\n vmin, vmax : float, default: None\n Minimum and maximum value to map.\n shade : bool, default: True\n Whether to shade the facecolors. Shading is always disabled when\n *cmap* is specified.\n lightsource : `~matplotlib.colors.LightSource`\n The lightsource to use when *shade* is True.\n **kwargs\n All other keyword arguments are passed on to\n :class:`~mpl_toolkits.mplot3d.art3d.Poly3DCollection`\n\n Examples\n --------\n .. plot:: gallery/mplot3d/trisurf3d.py\n .. plot:: gallery/mplot3d/trisurf3d_2.py\n \"\"\"\n\n had_data = self.has_data()\n\n # TODO: Support custom face colours\n if color is None:\n color = self._get_lines.get_next_color()\n color = np.array(mcolors.to_rgba(color))\n\n cmap = kwargs.get('cmap', None)\n shade = kwargs.pop('shade', cmap is None)\n\n tri, args, kwargs = \\\n Triangulation.get_from_args_and_kwargs(*args, **kwargs)\n try:\n z = kwargs.pop('Z')\n except KeyError:\n # We do this so Z doesn't get passed as an arg to PolyCollection\n z, *args = args\n z = np.asarray(z)\n\n triangles = tri.get_masked_triangles()\n xt = tri.x[triangles]\n yt = tri.y[triangles]\n zt = z[triangles]\n verts = np.stack((xt, yt, zt), axis=-1)\n\n if cmap:\n polyc = art3d.Poly3DCollection(verts, *args, **kwargs)\n # average over the three points of each triangle\n avg_z = verts[:, :, 2].mean(axis=1)\n polyc.set_array(avg_z)\n if vmin is not None or vmax is not None:\n polyc.set_clim(vmin, vmax)\n if norm is not None:\n polyc.set_norm(norm)\n else:\n polyc = art3d.Poly3DCollection(\n verts, *args, shade=shade, lightsource=lightsource,\n facecolors=color, **kwargs)\n\n self.add_collection(polyc)\n self.auto_scale_xyz(tri.x, tri.y, z, had_data)\n\n return polyc\n\n def _3d_extend_contour(self, cset, stride=5):\n \"\"\"\n Extend a contour in 3D by creating\n \"\"\"\n\n dz = (cset.levels[1] - cset.levels[0]) / 2\n polyverts = []\n colors = []\n for idx, level in enumerate(cset.levels):\n path = cset.get_paths()[idx]\n subpaths = [*path._iter_connected_components()]\n color = cset.get_edgecolor()[idx]\n top = art3d._paths_to_3d_segments(subpaths, level - dz)\n bot = art3d._paths_to_3d_segments(subpaths, level + dz)\n if not len(top[0]):\n continue\n nsteps = max(round(len(top[0]) / stride), 2)\n stepsize = (len(top[0]) - 1) / (nsteps - 1)\n polyverts.extend([\n (top[0][round(i * stepsize)], top[0][round((i + 1) * stepsize)],\n bot[0][round((i + 1) * stepsize)], bot[0][round(i * stepsize)])\n for i in range(round(nsteps) - 1)])\n colors.extend([color] * (round(nsteps) - 1))\n self.add_collection3d(art3d.Poly3DCollection(\n np.array(polyverts), # All polygons have 4 vertices, so vectorize.\n facecolors=colors, edgecolors=colors, shade=True))\n cset.remove()\n\n def add_contour_set(\n self, cset, extend3d=False, stride=5, zdir='z', offset=None):\n zdir = '-' + zdir\n if extend3d:\n self._3d_extend_contour(cset, stride)\n else:\n art3d.collection_2d_to_3d(\n cset, zs=offset if offset is not None else cset.levels, zdir=zdir)\n\n def add_contourf_set(self, cset, zdir='z', offset=None):\n self._add_contourf_set(cset, zdir=zdir, offset=offset)\n\n def 
_add_contourf_set(self, cset, zdir='z', offset=None):\n \"\"\"\n Returns\n -------\n levels : `numpy.ndarray`\n Levels at which the filled contours are added.\n \"\"\"\n zdir = '-' + zdir\n\n midpoints = cset.levels[:-1] + np.diff(cset.levels) / 2\n # Linearly interpolate to get levels for any extensions\n if cset._extend_min:\n min_level = cset.levels[0] - np.diff(cset.levels[:2]) / 2\n midpoints = np.insert(midpoints, 0, min_level)\n if cset._extend_max:\n max_level = cset.levels[-1] + np.diff(cset.levels[-2:]) / 2\n midpoints = np.append(midpoints, max_level)\n\n art3d.collection_2d_to_3d(\n cset, zs=offset if offset is not None else midpoints, zdir=zdir)\n return midpoints\n\n @_preprocess_data()\n def contour(self, X, Y, Z, *args,\n extend3d=False, stride=5, zdir='z', offset=None, **kwargs):\n \"\"\"\n Create a 3D contour plot.\n\n Parameters\n ----------\n X, Y, Z : array-like,\n Input data. See `.Axes.contour` for supported data shapes.\n extend3d : bool, default: False\n Whether to extend contour in 3D.\n stride : int\n Step size for extending contour.\n zdir : {'x', 'y', 'z'}, default: 'z'\n The direction to use.\n offset : float, optional\n If specified, plot a projection of the contour lines at this\n position in a plane normal to *zdir*.\n data : indexable object, optional\n DATA_PARAMETER_PLACEHOLDER\n\n *args, **kwargs\n Other arguments are forwarded to `matplotlib.axes.Axes.contour`.\n\n Returns\n -------\n matplotlib.contour.QuadContourSet\n \"\"\"\n had_data = self.has_data()\n\n jX, jY, jZ = art3d.rotate_axes(X, Y, Z, zdir)\n cset = super().contour(jX, jY, jZ, *args, **kwargs)\n self.add_contour_set(cset, extend3d, stride, zdir, offset)\n\n self.auto_scale_xyz(X, Y, Z, had_data)\n return cset\n\n contour3D = contour\n\n @_preprocess_data()\n def tricontour(self, *args,\n extend3d=False, stride=5, zdir='z', offset=None, **kwargs):\n \"\"\"\n Create a 3D contour plot.\n\n .. note::\n This method currently produces incorrect output due to a\n longstanding bug in 3D PolyCollection rendering.\n\n Parameters\n ----------\n X, Y, Z : array-like\n Input data. 
See `.Axes.tricontour` for supported data shapes.\n extend3d : bool, default: False\n Whether to extend contour in 3D.\n stride : int\n Step size for extending contour.\n zdir : {'x', 'y', 'z'}, default: 'z'\n The direction to use.\n offset : float, optional\n If specified, plot a projection of the contour lines at this\n position in a plane normal to *zdir*.\n data : indexable object, optional\n DATA_PARAMETER_PLACEHOLDER\n *args, **kwargs\n Other arguments are forwarded to `matplotlib.axes.Axes.tricontour`.\n\n Returns\n -------\n matplotlib.tri._tricontour.TriContourSet\n \"\"\"\n had_data = self.has_data()\n\n tri, args, kwargs = Triangulation.get_from_args_and_kwargs(\n *args, **kwargs)\n X = tri.x\n Y = tri.y\n if 'Z' in kwargs:\n Z = kwargs.pop('Z')\n else:\n # We do this so Z doesn't get passed as an arg to Axes.tricontour\n Z, *args = args\n\n jX, jY, jZ = art3d.rotate_axes(X, Y, Z, zdir)\n tri = Triangulation(jX, jY, tri.triangles, tri.mask)\n\n cset = super().tricontour(tri, jZ, *args, **kwargs)\n self.add_contour_set(cset, extend3d, stride, zdir, offset)\n\n self.auto_scale_xyz(X, Y, Z, had_data)\n return cset\n\n def _auto_scale_contourf(self, X, Y, Z, zdir, levels, had_data):\n # Autoscale in the zdir based on the levels added, which are\n # different from data range if any contour extensions are present\n dim_vals = {'x': X, 'y': Y, 'z': Z, zdir: levels}\n # Input data and levels have different sizes, but auto_scale_xyz\n # expected same-size input, so manually take min/max limits\n limits = [(np.nanmin(dim_vals[dim]), np.nanmax(dim_vals[dim]))\n for dim in ['x', 'y', 'z']]\n self.auto_scale_xyz(*limits, had_data)\n\n @_preprocess_data()\n def contourf(self, X, Y, Z, *args, zdir='z', offset=None, **kwargs):\n \"\"\"\n Create a 3D filled contour plot.\n\n Parameters\n ----------\n X, Y, Z : array-like\n Input data. See `.Axes.contourf` for supported data shapes.\n zdir : {'x', 'y', 'z'}, default: 'z'\n The direction to use.\n offset : float, optional\n If specified, plot a projection of the contour lines at this\n position in a plane normal to *zdir*.\n data : indexable object, optional\n DATA_PARAMETER_PLACEHOLDER\n *args, **kwargs\n Other arguments are forwarded to `matplotlib.axes.Axes.contourf`.\n\n Returns\n -------\n matplotlib.contour.QuadContourSet\n \"\"\"\n had_data = self.has_data()\n\n jX, jY, jZ = art3d.rotate_axes(X, Y, Z, zdir)\n cset = super().contourf(jX, jY, jZ, *args, **kwargs)\n levels = self._add_contourf_set(cset, zdir, offset)\n\n self._auto_scale_contourf(X, Y, Z, zdir, levels, had_data)\n return cset\n\n contourf3D = contourf\n\n @_preprocess_data()\n def tricontourf(self, *args, zdir='z', offset=None, **kwargs):\n \"\"\"\n Create a 3D filled contour plot.\n\n .. note::\n This method currently produces incorrect output due to a\n longstanding bug in 3D PolyCollection rendering.\n\n Parameters\n ----------\n X, Y, Z : array-like\n Input data. 
See `.Axes.tricontourf` for supported data shapes.\n zdir : {'x', 'y', 'z'}, default: 'z'\n The direction to use.\n offset : float, optional\n If specified, plot a projection of the contour lines at this\n position in a plane normal to zdir.\n data : indexable object, optional\n DATA_PARAMETER_PLACEHOLDER\n *args, **kwargs\n Other arguments are forwarded to\n `matplotlib.axes.Axes.tricontourf`.\n\n Returns\n -------\n matplotlib.tri._tricontour.TriContourSet\n \"\"\"\n had_data = self.has_data()\n\n tri, args, kwargs = Triangulation.get_from_args_and_kwargs(\n *args, **kwargs)\n X = tri.x\n Y = tri.y\n if 'Z' in kwargs:\n Z = kwargs.pop('Z')\n else:\n # We do this so Z doesn't get passed as an arg to Axes.tricontourf\n Z, *args = args\n\n jX, jY, jZ = art3d.rotate_axes(X, Y, Z, zdir)\n tri = Triangulation(jX, jY, tri.triangles, tri.mask)\n\n cset = super().tricontourf(tri, jZ, *args, **kwargs)\n levels = self._add_contourf_set(cset, zdir, offset)\n\n self._auto_scale_contourf(X, Y, Z, zdir, levels, had_data)\n return cset\n\n def add_collection3d(self, col, zs=0, zdir='z'):\n \"\"\"\n Add a 3D collection object to the plot.\n\n 2D collection types are converted to a 3D version by\n modifying the object and adding z coordinate information.\n\n Supported are:\n\n - PolyCollection\n - LineCollection\n - PatchCollection\n \"\"\"\n zvals = np.atleast_1d(zs)\n zsortval = (np.min(zvals) if zvals.size\n else 0) # FIXME: arbitrary default\n\n # FIXME: use issubclass() (although, then a 3D collection\n # object would also pass.) Maybe have a collection3d\n # abstract class to test for and exclude?\n if type(col) is mcoll.PolyCollection:\n art3d.poly_collection_2d_to_3d(col, zs=zs, zdir=zdir)\n col.set_sort_zpos(zsortval)\n elif type(col) is mcoll.LineCollection:\n art3d.line_collection_2d_to_3d(col, zs=zs, zdir=zdir)\n col.set_sort_zpos(zsortval)\n elif type(col) is mcoll.PatchCollection:\n art3d.patch_collection_2d_to_3d(col, zs=zs, zdir=zdir)\n col.set_sort_zpos(zsortval)\n\n collection = super().add_collection(col)\n return collection\n\n @_preprocess_data(replace_names=[\"xs\", \"ys\", \"zs\", \"s\",\n \"edgecolors\", \"c\", \"facecolor\",\n \"facecolors\", \"color\"])\n def scatter(self, xs, ys, zs=0, zdir='z', s=20, c=None, depthshade=True,\n *args, **kwargs):\n \"\"\"\n Create a scatter plot.\n\n Parameters\n ----------\n xs, ys : array-like\n The data positions.\n zs : float or array-like, default: 0\n The z-positions. Either an array of the same length as *xs* and\n *ys* or a single value to place all points in the same plane.\n zdir : {'x', 'y', 'z', '-x', '-y', '-z'}, default: 'z'\n The axis direction for the *zs*. This is useful when plotting 2D\n data on a 3D Axes. The data must be passed as *xs*, *ys*. Setting\n *zdir* to 'y' then plots the data to the x-z-plane.\n\n See also :doc:`/gallery/mplot3d/2dcollections3d`.\n\n s : float or array-like, default: 20\n The marker size in points**2. Either an array of the same length\n as *xs* and *ys* or a single value to make all markers the same\n size.\n c : color, sequence, or sequence of colors, optional\n The marker color. Possible values:\n\n - A single color format string.\n - A sequence of colors of length n.\n - A sequence of n numbers to be mapped to colors using *cmap* and\n *norm*.\n - A 2D array in which the rows are RGB or RGBA.\n\n For more details see the *c* argument of `~.axes.Axes.scatter`.\n depthshade : bool, default: True\n Whether to shade the scatter markers to give the appearance of\n depth. 
Each call to ``scatter()`` will perform its depthshading\n independently.\n data : indexable object, optional\n DATA_PARAMETER_PLACEHOLDER\n **kwargs\n All other keyword arguments are passed on to `~.axes.Axes.scatter`.\n\n Returns\n -------\n paths : `~matplotlib.collections.PathCollection`\n \"\"\"\n\n had_data = self.has_data()\n zs_orig = zs\n\n xs, ys, zs = np.broadcast_arrays(\n *[np.ravel(np.ma.filled(t, np.nan)) for t in [xs, ys, zs]])\n s = np.ma.ravel(s) # This doesn't have to match x, y in size.\n\n xs, ys, zs, s, c, color = cbook.delete_masked_points(\n xs, ys, zs, s, c, kwargs.get('color', None)\n )\n if kwargs.get('color', None):\n kwargs['color'] = color\n\n # For xs and ys, 2D scatter() will do the copying.\n if np.may_share_memory(zs_orig, zs): # Avoid unnecessary copies.\n zs = zs.copy()\n\n patches = super().scatter(xs, ys, s=s, c=c, *args, **kwargs)\n art3d.patch_collection_2d_to_3d(patches, zs=zs, zdir=zdir,\n depthshade=depthshade)\n\n if self._zmargin < 0.05 and xs.size > 0:\n self.set_zmargin(0.05)\n\n self.auto_scale_xyz(xs, ys, zs, had_data)\n\n return patches\n\n scatter3D = scatter\n\n @_preprocess_data()\n def bar(self, left, height, zs=0, zdir='z', *args, **kwargs):\n \"\"\"\n Add 2D bar(s).\n\n Parameters\n ----------\n left : 1D array-like\n The x coordinates of the left sides of the bars.\n height : 1D array-like\n The height of the bars.\n zs : float or 1D array-like\n Z coordinate of bars; if a single value is specified, it will be\n used for all bars.\n zdir : {'x', 'y', 'z'}, default: 'z'\n When plotting 2D data, the direction to use as z ('x', 'y' or 'z').\n data : indexable object, optional\n DATA_PARAMETER_PLACEHOLDER\n **kwargs\n Other keyword arguments are forwarded to\n `matplotlib.axes.Axes.bar`.\n\n Returns\n -------\n mpl_toolkits.mplot3d.art3d.Patch3DCollection\n \"\"\"\n had_data = self.has_data()\n\n patches = super().bar(left, height, *args, **kwargs)\n\n zs = np.broadcast_to(zs, len(left))\n\n verts = []\n verts_zs = []\n for p, z in zip(patches, zs):\n vs = art3d._get_patch_verts(p)\n verts += vs.tolist()\n verts_zs += [z] * len(vs)\n art3d.patch_2d_to_3d(p, z, zdir)\n if 'alpha' in kwargs:\n p.set_alpha(kwargs['alpha'])\n\n if len(verts) > 0:\n # the following has to be skipped if verts is empty\n # NOTE: Bugs could still occur if len(verts) > 0,\n # but the \"2nd dimension\" is empty.\n xs, ys = zip(*verts)\n else:\n xs, ys = [], []\n\n xs, ys, verts_zs = art3d.juggle_axes(xs, ys, verts_zs, zdir)\n self.auto_scale_xyz(xs, ys, verts_zs, had_data)\n\n return patches\n\n @_preprocess_data()\n def bar3d(self, x, y, z, dx, dy, dz, color=None,\n zsort='average', shade=True, lightsource=None, *args, **kwargs):\n \"\"\"\n Generate a 3D barplot.\n\n This method creates three-dimensional barplot where the width,\n depth, height, and color of the bars can all be uniquely set.\n\n Parameters\n ----------\n x, y, z : array-like\n The coordinates of the anchor point of the bars.\n\n dx, dy, dz : float or array-like\n The width, depth, and height of the bars, respectively.\n\n color : sequence of colors, optional\n The color of the bars can be specified globally or\n individually. 
This parameter can be:\n\n - A single color, to color all bars the same color.\n - An array of colors of length N bars, to color each bar\n independently.\n - An array of colors of length 6, to color the faces of the\n bars similarly.\n - An array of colors of length 6 * N bars, to color each face\n independently.\n\n When coloring the faces of the boxes specifically, this is\n the order of the coloring:\n\n 1. -Z (bottom of box)\n 2. +Z (top of box)\n 3. -Y\n 4. +Y\n 5. -X\n 6. +X\n\n zsort : str, optional\n The z-axis sorting scheme passed onto `~.art3d.Poly3DCollection`\n\n shade : bool, default: True\n When true, this shades the dark sides of the bars (relative\n to the plot's source of light).\n\n lightsource : `~matplotlib.colors.LightSource`\n The lightsource to use when *shade* is True.\n\n data : indexable object, optional\n DATA_PARAMETER_PLACEHOLDER\n\n **kwargs\n Any additional keyword arguments are passed onto\n `~.art3d.Poly3DCollection`.\n\n Returns\n -------\n collection : `~.art3d.Poly3DCollection`\n A collection of three-dimensional polygons representing the bars.\n \"\"\"\n\n had_data = self.has_data()\n\n x, y, z, dx, dy, dz = np.broadcast_arrays(\n np.atleast_1d(x), y, z, dx, dy, dz)\n minx = np.min(x)\n maxx = np.max(x + dx)\n miny = np.min(y)\n maxy = np.max(y + dy)\n minz = np.min(z)\n maxz = np.max(z + dz)\n\n # shape (6, 4, 3)\n # All faces are oriented facing outwards - when viewed from the\n # outside, their vertices are in a counterclockwise ordering.\n cuboid = np.array([\n # -z\n (\n (0, 0, 0),\n (0, 1, 0),\n (1, 1, 0),\n (1, 0, 0),\n ),\n # +z\n (\n (0, 0, 1),\n (1, 0, 1),\n (1, 1, 1),\n (0, 1, 1),\n ),\n # -y\n (\n (0, 0, 0),\n (1, 0, 0),\n (1, 0, 1),\n (0, 0, 1),\n ),\n # +y\n (\n (0, 1, 0),\n (0, 1, 1),\n (1, 1, 1),\n (1, 1, 0),\n ),\n # -x\n (\n (0, 0, 0),\n (0, 0, 1),\n (0, 1, 1),\n (0, 1, 0),\n ),\n # +x\n (\n (1, 0, 0),\n (1, 1, 0),\n (1, 1, 1),\n (1, 0, 1),\n ),\n ])\n\n # indexed by [bar, face, vertex, coord]\n polys = np.empty(x.shape + cuboid.shape)\n\n # handle each coordinate separately\n for i, p, dp in [(0, x, dx), (1, y, dy), (2, z, dz)]:\n p = p[..., np.newaxis, np.newaxis]\n dp = dp[..., np.newaxis, np.newaxis]\n polys[..., i] = p + dp * cuboid[..., i]\n\n # collapse the first two axes\n polys = polys.reshape((-1,) + polys.shape[2:])\n\n facecolors = []\n if color is None:\n color = [self._get_patches_for_fill.get_next_color()]\n\n color = list(mcolors.to_rgba_array(color))\n\n if len(color) == len(x):\n # bar colors specified, need to expand to number of faces\n for c in color:\n facecolors.extend([c] * 6)\n else:\n # a single color specified, or face colors specified explicitly\n facecolors = color\n if len(facecolors) < len(x):\n facecolors *= (6 * len(x))\n\n col = art3d.Poly3DCollection(polys,\n zsort=zsort,\n facecolors=facecolors,\n shade=shade,\n lightsource=lightsource,\n *args, **kwargs)\n self.add_collection(col)\n\n self.auto_scale_xyz((minx, maxx), (miny, maxy), (minz, maxz), had_data)\n\n return col\n\n def set_title(self, label, fontdict=None, loc='center', **kwargs):\n # docstring inherited\n ret = super().set_title(label, fontdict=fontdict, loc=loc, **kwargs)\n (x, y) = self.title.get_position()\n self.title.set_y(0.92 * y)\n return ret\n\n @_preprocess_data()\n def quiver(self, X, Y, Z, U, V, W, *,\n length=1, arrow_length_ratio=.3, pivot='tail', normalize=False,\n **kwargs):\n \"\"\"\n Plot a 3D field of arrows.\n\n The arguments can be array-like or scalars, so long as they can be\n broadcast together. 
The arguments can also be masked arrays. If an\n element in any of argument is masked, then that corresponding quiver\n element will not be plotted.\n\n Parameters\n ----------\n X, Y, Z : array-like\n The x, y and z coordinates of the arrow locations (default is\n tail of arrow; see *pivot* kwarg).\n\n U, V, W : array-like\n The x, y and z components of the arrow vectors.\n\n length : float, default: 1\n The length of each quiver.\n\n arrow_length_ratio : float, default: 0.3\n The ratio of the arrow head with respect to the quiver.\n\n pivot : {'tail', 'middle', 'tip'}, default: 'tail'\n The part of the arrow that is at the grid point; the arrow\n rotates about this point, hence the name *pivot*.\n\n normalize : bool, default: False\n Whether all arrows are normalized to have the same length, or keep\n the lengths defined by *u*, *v*, and *w*.\n\n data : indexable object, optional\n DATA_PARAMETER_PLACEHOLDER\n\n **kwargs\n Any additional keyword arguments are delegated to\n :class:`.Line3DCollection`\n \"\"\"\n\n def calc_arrows(UVW):\n # get unit direction vector perpendicular to (u, v, w)\n x = UVW[:, 0]\n y = UVW[:, 1]\n norm = np.linalg.norm(UVW[:, :2], axis=1)\n x_p = np.divide(y, norm, where=norm != 0, out=np.zeros_like(x))\n y_p = np.divide(-x, norm, where=norm != 0, out=np.ones_like(x))\n # compute the two arrowhead direction unit vectors\n rangle = math.radians(15)\n c = math.cos(rangle)\n s = math.sin(rangle)\n # construct the rotation matrices of shape (3, 3, n)\n r13 = y_p * s\n r32 = x_p * s\n r12 = x_p * y_p * (1 - c)\n Rpos = np.array(\n [[c + (x_p ** 2) * (1 - c), r12, r13],\n [r12, c + (y_p ** 2) * (1 - c), -r32],\n [-r13, r32, np.full_like(x_p, c)]])\n # opposite rotation negates all the sin terms\n Rneg = Rpos.copy()\n Rneg[[0, 1, 2, 2], [2, 2, 0, 1]] *= -1\n # Batch n (3, 3) x (3) matrix multiplications ((3, 3, n) x (n, 3)).\n Rpos_vecs = np.einsum(\"ij...,...j->...i\", Rpos, UVW)\n Rneg_vecs = np.einsum(\"ij...,...j->...i\", Rneg, UVW)\n # Stack into (n, 2, 3) result.\n return np.stack([Rpos_vecs, Rneg_vecs], axis=1)\n\n had_data = self.has_data()\n\n input_args = [X, Y, Z, U, V, W]\n\n # extract the masks, if any\n masks = [k.mask for k in input_args\n if isinstance(k, np.ma.MaskedArray)]\n # broadcast to match the shape\n bcast = np.broadcast_arrays(*input_args, *masks)\n input_args = bcast[:6]\n masks = bcast[6:]\n if masks:\n # combine the masks into one\n mask = functools.reduce(np.logical_or, masks)\n # put mask on and compress\n input_args = [np.ma.array(k, mask=mask).compressed()\n for k in input_args]\n else:\n input_args = [np.ravel(k) for k in input_args]\n\n if any(len(v) == 0 for v in input_args):\n # No quivers, so just make an empty collection and return early\n linec = art3d.Line3DCollection([], **kwargs)\n self.add_collection(linec)\n return linec\n\n shaft_dt = np.array([0., length], dtype=float)\n arrow_dt = shaft_dt * arrow_length_ratio\n\n _api.check_in_list(['tail', 'middle', 'tip'], pivot=pivot)\n if pivot == 'tail':\n shaft_dt -= length\n elif pivot == 'middle':\n shaft_dt -= length / 2\n\n XYZ = np.column_stack(input_args[:3])\n UVW = np.column_stack(input_args[3:]).astype(float)\n\n # Normalize rows of UVW\n norm = np.linalg.norm(UVW, axis=1)\n\n # If any row of UVW is all zeros, don't make a quiver for it\n mask = norm > 0\n XYZ = XYZ[mask]\n if normalize:\n UVW = UVW[mask] / norm[mask].reshape((-1, 1))\n else:\n UVW = UVW[mask]\n\n if len(XYZ) > 0:\n # compute the shaft lines all at once with an outer product\n shafts = (XYZ - 
np.multiply.outer(shaft_dt, UVW)).swapaxes(0, 1)\n # compute head direction vectors, n heads x 2 sides x 3 dimensions\n head_dirs = calc_arrows(UVW)\n # compute all head lines at once, starting from the shaft ends\n heads = shafts[:, :1] - np.multiply.outer(arrow_dt, head_dirs)\n # stack left and right head lines together\n heads = heads.reshape((len(arrow_dt), -1, 3))\n # transpose to get a list of lines\n heads = heads.swapaxes(0, 1)\n\n lines = [*shafts, *heads]\n else:\n lines = []\n\n linec = art3d.Line3DCollection(lines, **kwargs)\n self.add_collection(linec)\n\n self.auto_scale_xyz(XYZ[:, 0], XYZ[:, 1], XYZ[:, 2], had_data)\n\n return linec\n\n quiver3D = quiver\n\n def voxels(self, *args, facecolors=None, edgecolors=None, shade=True,\n lightsource=None, **kwargs):\n \"\"\"\n ax.voxels([x, y, z,] /, filled, facecolors=None, edgecolors=None, \\\n**kwargs)\n\n Plot a set of filled voxels\n\n All voxels are plotted as 1x1x1 cubes on the axis, with\n ``filled[0, 0, 0]`` placed with its lower corner at the origin.\n Occluded faces are not plotted.\n\n Parameters\n ----------\n filled : 3D np.array of bool\n A 3D array of values, with truthy values indicating which voxels\n to fill\n\n x, y, z : 3D np.array, optional\n The coordinates of the corners of the voxels. This should broadcast\n to a shape one larger in every dimension than the shape of\n *filled*. These can be used to plot non-cubic voxels.\n\n If not specified, defaults to increasing integers along each axis,\n like those returned by :func:`~numpy.indices`.\n As indicated by the ``/`` in the function signature, these\n arguments can only be passed positionally.\n\n facecolors, edgecolors : array-like, optional\n The color to draw the faces and edges of the voxels. Can only be\n passed as keyword arguments.\n These parameters can be:\n\n - A single color value, to color all voxels the same color. This\n can be either a string, or a 1D RGB/RGBA array\n - ``None``, the default, to use a single color for the faces, and\n the style default for the edges.\n - A 3D `~numpy.ndarray` of color names, with each item the color\n for the corresponding voxel. The size must match the voxels.\n - A 4D `~numpy.ndarray` of RGB/RGBA data, with the components\n along the last axis.\n\n shade : bool, default: True\n Whether to shade the facecolors.\n\n lightsource : `~matplotlib.colors.LightSource`\n The lightsource to use when *shade* is True.\n\n **kwargs\n Additional keyword arguments to pass onto\n `~mpl_toolkits.mplot3d.art3d.Poly3DCollection`.\n\n Returns\n -------\n faces : dict\n A dictionary indexed by coordinate, where ``faces[i, j, k]`` is a\n `.Poly3DCollection` of the faces drawn for the voxel\n ``filled[i, j, k]``. If no faces were drawn for a given voxel,\n either because it was not asked to be drawn, or it is fully\n occluded, then ``(i, j, k) not in faces``.\n\n Examples\n --------\n .. plot:: gallery/mplot3d/voxels.py\n .. plot:: gallery/mplot3d/voxels_rgb.py\n .. plot:: gallery/mplot3d/voxels_torus.py\n .. plot:: gallery/mplot3d/voxels_numpy_logo.py\n \"\"\"\n\n # work out which signature we should be using, and use it to parse\n # the arguments. 
Name must be voxels for the correct error message\n if len(args) >= 3:\n # underscores indicate position only\n def voxels(__x, __y, __z, filled, **kwargs):\n return (__x, __y, __z), filled, kwargs\n else:\n def voxels(filled, **kwargs):\n return None, filled, kwargs\n\n xyz, filled, kwargs = voxels(*args, **kwargs)\n\n # check dimensions\n if filled.ndim != 3:\n raise ValueError(\"Argument filled must be 3-dimensional\")\n size = np.array(filled.shape, dtype=np.intp)\n\n # check xyz coordinates, which are one larger than the filled shape\n coord_shape = tuple(size + 1)\n if xyz is None:\n x, y, z = np.indices(coord_shape)\n else:\n x, y, z = (np.broadcast_to(c, coord_shape) for c in xyz)\n\n def _broadcast_color_arg(color, name):\n if np.ndim(color) in (0, 1):\n # single color, like \"red\" or [1, 0, 0]\n return np.broadcast_to(color, filled.shape + np.shape(color))\n elif np.ndim(color) in (3, 4):\n # 3D array of strings, or 4D array with last axis rgb\n if np.shape(color)[:3] != filled.shape:\n raise ValueError(\n f\"When multidimensional, {name} must match the shape \"\n \"of filled\")\n return color\n else:\n raise ValueError(f\"Invalid {name} argument\")\n\n # broadcast and default on facecolors\n if facecolors is None:\n facecolors = self._get_patches_for_fill.get_next_color()\n facecolors = _broadcast_color_arg(facecolors, 'facecolors')\n\n # broadcast but no default on edgecolors\n edgecolors = _broadcast_color_arg(edgecolors, 'edgecolors')\n\n # scale to the full array, even if the data is only in the center\n self.auto_scale_xyz(x, y, z)\n\n # points lying on corners of a square\n square = np.array([\n [0, 0, 0],\n [1, 0, 0],\n [1, 1, 0],\n [0, 1, 0],\n ], dtype=np.intp)\n\n voxel_faces = defaultdict(list)\n\n def permutation_matrices(n):\n \"\"\"Generate cyclic permutation matrices.\"\"\"\n mat = np.eye(n, dtype=np.intp)\n for i in range(n):\n yield mat\n mat = np.roll(mat, 1, axis=0)\n\n # iterate over each of the YZ, ZX, and XY orientations, finding faces\n # to render\n for permute in permutation_matrices(3):\n # find the set of ranges to iterate over\n pc, qc, rc = permute.T.dot(size)\n pinds = np.arange(pc)\n qinds = np.arange(qc)\n rinds = np.arange(rc)\n\n square_rot_pos = square.dot(permute.T)\n square_rot_neg = square_rot_pos[::-1]\n\n # iterate within the current plane\n for p in pinds:\n for q in qinds:\n # iterate perpendicularly to the current plane, handling\n # boundaries. 
We only draw faces between a voxel and an\n # empty space, to avoid drawing internal faces.\n\n # draw lower faces\n p0 = permute.dot([p, q, 0])\n i0 = tuple(p0)\n if filled[i0]:\n voxel_faces[i0].append(p0 + square_rot_neg)\n\n # draw middle faces\n for r1, r2 in zip(rinds[:-1], rinds[1:]):\n p1 = permute.dot([p, q, r1])\n p2 = permute.dot([p, q, r2])\n\n i1 = tuple(p1)\n i2 = tuple(p2)\n\n if filled[i1] and not filled[i2]:\n voxel_faces[i1].append(p2 + square_rot_pos)\n elif not filled[i1] and filled[i2]:\n voxel_faces[i2].append(p2 + square_rot_neg)\n\n # draw upper faces\n pk = permute.dot([p, q, rc-1])\n pk2 = permute.dot([p, q, rc])\n ik = tuple(pk)\n if filled[ik]:\n voxel_faces[ik].append(pk2 + square_rot_pos)\n\n # iterate over the faces, and generate a Poly3DCollection for each\n # voxel\n polygons = {}\n for coord, faces_inds in voxel_faces.items():\n # convert indices into 3D positions\n if xyz is None:\n faces = faces_inds\n else:\n faces = []\n for face_inds in faces_inds:\n ind = face_inds[:, 0], face_inds[:, 1], face_inds[:, 2]\n face = np.empty(face_inds.shape)\n face[:, 0] = x[ind]\n face[:, 1] = y[ind]\n face[:, 2] = z[ind]\n faces.append(face)\n\n # shade the faces\n facecolor = facecolors[coord]\n edgecolor = edgecolors[coord]\n\n poly = art3d.Poly3DCollection(\n faces, facecolors=facecolor, edgecolors=edgecolor,\n shade=shade, lightsource=lightsource, **kwargs)\n self.add_collection3d(poly)\n polygons[coord] = poly\n\n return polygons\n\n @_preprocess_data(replace_names=[\"x\", \"y\", \"z\", \"xerr\", \"yerr\", \"zerr\"])\n def errorbar(self, x, y, z, zerr=None, yerr=None, xerr=None, fmt='',\n barsabove=False, errorevery=1, ecolor=None, elinewidth=None,\n capsize=None, capthick=None, xlolims=False, xuplims=False,\n ylolims=False, yuplims=False, zlolims=False, zuplims=False,\n **kwargs):\n \"\"\"\n Plot lines and/or markers with errorbars around them.\n\n *x*/*y*/*z* define the data locations, and *xerr*/*yerr*/*zerr* define\n the errorbar sizes. By default, this draws the data markers/lines as\n well the errorbars. Use fmt='none' to draw errorbars only.\n\n Parameters\n ----------\n x, y, z : float or array-like\n The data positions.\n\n xerr, yerr, zerr : float or array-like, shape (N,) or (2, N), optional\n The errorbar sizes:\n\n - scalar: Symmetric +/- values for all data points.\n - shape(N,): Symmetric +/-values for each data point.\n - shape(2, N): Separate - and + values for each bar. First row\n contains the lower errors, the second row contains the upper\n errors.\n - *None*: No errorbar.\n\n Note that all error arrays should have *positive* values.\n\n fmt : str, default: ''\n The format for the data points / data lines. See `.plot` for\n details.\n\n Use 'none' (case-insensitive) to plot errorbars without any data\n markers.\n\n ecolor : color, default: None\n The color of the errorbar lines. If None, use the color of the\n line connecting the markers.\n\n elinewidth : float, default: None\n The linewidth of the errorbar lines. If None, the linewidth of\n the current style is used.\n\n capsize : float, default: :rc:`errorbar.capsize`\n The length of the error bar caps in points.\n\n capthick : float, default: None\n An alias to the keyword argument *markeredgewidth* (a.k.a. *mew*).\n This setting is a more sensible name for the property that\n controls the thickness of the error bar cap in points. For\n backwards compatibility, if *mew* or *markeredgewidth* are given,\n then they will over-ride *capthick*. 
This may change in future\n releases.\n\n barsabove : bool, default: False\n If True, will plot the errorbars above the plot\n symbols. Default is below.\n\n xlolims, ylolims, zlolims : bool, default: False\n These arguments can be used to indicate that a value gives only\n lower limits. In that case a caret symbol is used to indicate\n this. *lims*-arguments may be scalars, or array-likes of the same\n length as the errors. To use limits with inverted axes,\n `~.Axes.set_xlim` or `~.Axes.set_ylim` must be called before\n `errorbar`. Note the tricky parameter names: setting e.g.\n *ylolims* to True means that the y-value is a *lower* limit of the\n True value, so, only an *upward*-pointing arrow will be drawn!\n\n xuplims, yuplims, zuplims : bool, default: False\n Same as above, but for controlling the upper limits.\n\n errorevery : int or (int, int), default: 1\n draws error bars on a subset of the data. *errorevery* =N draws\n error bars on the points (x[::N], y[::N], z[::N]).\n *errorevery* =(start, N) draws error bars on the points\n (x[start::N], y[start::N], z[start::N]). e.g. *errorevery* =(6, 3)\n adds error bars to the data at (x[6], x[9], x[12], x[15], ...).\n Used to avoid overlapping error bars when two series share x-axis\n values.\n\n Returns\n -------\n errlines : list\n List of `~mpl_toolkits.mplot3d.art3d.Line3DCollection` instances\n each containing an errorbar line.\n caplines : list\n List of `~mpl_toolkits.mplot3d.art3d.Line3D` instances each\n containing a capline object.\n limmarks : list\n List of `~mpl_toolkits.mplot3d.art3d.Line3D` instances each\n containing a marker with an upper or lower limit.\n\n Other Parameters\n ----------------\n data : indexable object, optional\n DATA_PARAMETER_PLACEHOLDER\n\n **kwargs\n All other keyword arguments for styling errorbar lines are passed\n `~mpl_toolkits.mplot3d.art3d.Line3DCollection`.\n\n Examples\n --------\n .. 
plot:: gallery/mplot3d/errorbar3d.py\n \"\"\"\n had_data = self.has_data()\n\n kwargs = cbook.normalize_kwargs(kwargs, mlines.Line2D)\n # Drop anything that comes in as None to use the default instead.\n kwargs = {k: v for k, v in kwargs.items() if v is not None}\n kwargs.setdefault('zorder', 2)\n\n self._process_unit_info([(\"x\", x), (\"y\", y), (\"z\", z)], kwargs,\n convert=False)\n\n # make sure all the args are iterable; use lists not arrays to\n # preserve units\n x = x if np.iterable(x) else [x]\n y = y if np.iterable(y) else [y]\n z = z if np.iterable(z) else [z]\n\n if not len(x) == len(y) == len(z):\n raise ValueError(\"'x', 'y', and 'z' must have the same size\")\n\n everymask = self._errorevery_to_mask(x, errorevery)\n\n label = kwargs.pop(\"label\", None)\n kwargs['label'] = '_nolegend_'\n\n # Create the main line and determine overall kwargs for child artists.\n # We avoid calling self.plot() directly, or self._get_lines(), because\n # that would call self._process_unit_info again, and do other indirect\n # data processing.\n (data_line, base_style), = self._get_lines._plot_args(\n self, (x, y) if fmt == '' else (x, y, fmt), kwargs, return_kwargs=True)\n art3d.line_2d_to_3d(data_line, zs=z)\n\n # Do this after creating `data_line` to avoid modifying `base_style`.\n if barsabove:\n data_line.set_zorder(kwargs['zorder'] - .1)\n else:\n data_line.set_zorder(kwargs['zorder'] + .1)\n\n # Add line to plot, or throw it away and use it to determine kwargs.\n if fmt.lower() != 'none':\n self.add_line(data_line)\n else:\n data_line = None\n # Remove alpha=0 color that _process_plot_format returns.\n base_style.pop('color')\n\n if 'color' not in base_style:\n base_style['color'] = 'C0'\n if ecolor is None:\n ecolor = base_style['color']\n\n # Eject any line-specific information from format string, as it's not\n # needed for bars or caps.\n for key in ['marker', 'markersize', 'markerfacecolor',\n 'markeredgewidth', 'markeredgecolor', 'markevery',\n 'linestyle', 'fillstyle', 'drawstyle', 'dash_capstyle',\n 'dash_joinstyle', 'solid_capstyle', 'solid_joinstyle']:\n base_style.pop(key, None)\n\n # Make the style dict for the line collections (the bars).\n eb_lines_style = {**base_style, 'color': ecolor}\n\n if elinewidth:\n eb_lines_style['linewidth'] = elinewidth\n elif 'linewidth' in kwargs:\n eb_lines_style['linewidth'] = kwargs['linewidth']\n\n for key in ('transform', 'alpha', 'zorder', 'rasterized'):\n if key in kwargs:\n eb_lines_style[key] = kwargs[key]\n\n # Make the style dict for caps (the \"hats\").\n eb_cap_style = {**base_style, 'linestyle': 'None'}\n if capsize is None:\n capsize = mpl.rcParams[\"errorbar.capsize\"]\n if capsize > 0:\n eb_cap_style['markersize'] = 2. 
* capsize\n if capthick is not None:\n eb_cap_style['markeredgewidth'] = capthick\n eb_cap_style['color'] = ecolor\n\n def _apply_mask(arrays, mask):\n # Return, for each array in *arrays*, the elements for which *mask*\n # is True, without using fancy indexing.\n return [[*itertools.compress(array, mask)] for array in arrays]\n\n def _extract_errs(err, data, lomask, himask):\n # For separate +/- error values we need to unpack err\n if len(err.shape) == 2:\n low_err, high_err = err\n else:\n low_err, high_err = err, err\n\n lows = np.where(lomask | ~everymask, data, data - low_err)\n highs = np.where(himask | ~everymask, data, data + high_err)\n\n return lows, highs\n\n # collect drawn items while looping over the three coordinates\n errlines, caplines, limmarks = [], [], []\n\n # list of endpoint coordinates, used for auto-scaling\n coorderrs = []\n\n # define the markers used for errorbar caps and limits below\n # the dictionary key is mapped by the `i_xyz` helper dictionary\n capmarker = {0: '|', 1: '|', 2: '_'}\n i_xyz = {'x': 0, 'y': 1, 'z': 2}\n\n # Calculate marker size from points to quiver length. Because these are\n # not markers, and 3D Axes do not use the normal transform stack, this\n # is a bit involved. Since the quiver arrows will change size as the\n # scene is rotated, they are given a standard size based on viewing\n # them directly in planar form.\n quiversize = eb_cap_style.get('markersize',\n mpl.rcParams['lines.markersize']) ** 2\n quiversize *= self.figure.dpi / 72\n quiversize = self.transAxes.inverted().transform([\n (0, 0), (quiversize, quiversize)])\n quiversize = np.mean(np.diff(quiversize, axis=0))\n # quiversize is now in Axes coordinates, and to convert back to data\n # coordinates, we need to run it through the inverse 3D transform. For\n # consistency, this uses a fixed elevation, azimuth, and roll.\n with cbook._setattr_cm(self, elev=0, azim=0, roll=0):\n invM = np.linalg.inv(self.get_proj())\n # elev=azim=roll=0 produces the Y-Z plane, so quiversize in 2D 'x' is\n # 'y' in 3D, hence the 1 index.\n quiversize = np.dot(invM, [quiversize, 0, 0, 0])[1]\n # Quivers use a fixed 15-degree arrow head, so scale up the length so\n # that the size corresponds to the base. 
In other words, this constant\n # corresponds to the equation tan(15) = (base / 2) / (arrow length).\n quiversize *= 1.8660254037844388\n eb_quiver_style = {**eb_cap_style,\n 'length': quiversize, 'arrow_length_ratio': 1}\n eb_quiver_style.pop('markersize', None)\n\n # loop over x-, y-, and z-direction and draw relevant elements\n for zdir, data, err, lolims, uplims in zip(\n ['x', 'y', 'z'], [x, y, z], [xerr, yerr, zerr],\n [xlolims, ylolims, zlolims], [xuplims, yuplims, zuplims]):\n\n dir_vector = art3d.get_dir_vector(zdir)\n i_zdir = i_xyz[zdir]\n\n if err is None:\n continue\n\n if not np.iterable(err):\n err = [err] * len(data)\n\n err = np.atleast_1d(err)\n\n # arrays fine here, they are booleans and hence not units\n lolims = np.broadcast_to(lolims, len(data)).astype(bool)\n uplims = np.broadcast_to(uplims, len(data)).astype(bool)\n\n # a nested list structure that expands to (xl,xh),(yl,yh),(zl,zh),\n # where x/y/z and l/h correspond to dimensions and low/high\n # positions of errorbars in a dimension we're looping over\n coorderr = [\n _extract_errs(err * dir_vector[i], coord, lolims, uplims)\n for i, coord in enumerate([x, y, z])]\n (xl, xh), (yl, yh), (zl, zh) = coorderr\n\n # draws capmarkers - flat caps orthogonal to the error bars\n nolims = ~(lolims | uplims)\n if nolims.any() and capsize > 0:\n lo_caps_xyz = _apply_mask([xl, yl, zl], nolims & everymask)\n hi_caps_xyz = _apply_mask([xh, yh, zh], nolims & everymask)\n\n # setting '_' for z-caps and '|' for x- and y-caps;\n # these markers will rotate as the viewing angle changes\n cap_lo = art3d.Line3D(*lo_caps_xyz, ls='',\n marker=capmarker[i_zdir],\n **eb_cap_style)\n cap_hi = art3d.Line3D(*hi_caps_xyz, ls='',\n marker=capmarker[i_zdir],\n **eb_cap_style)\n self.add_line(cap_lo)\n self.add_line(cap_hi)\n caplines.append(cap_lo)\n caplines.append(cap_hi)\n\n if lolims.any():\n xh0, yh0, zh0 = _apply_mask([xh, yh, zh], lolims & everymask)\n self.quiver(xh0, yh0, zh0, *dir_vector, **eb_quiver_style)\n if uplims.any():\n xl0, yl0, zl0 = _apply_mask([xl, yl, zl], uplims & everymask)\n self.quiver(xl0, yl0, zl0, *-dir_vector, **eb_quiver_style)\n\n errline = art3d.Line3DCollection(np.array(coorderr).T,\n **eb_lines_style)\n self.add_collection(errline)\n errlines.append(errline)\n coorderrs.append(coorderr)\n\n coorderrs = np.array(coorderrs)\n\n def _digout_minmax(err_arr, coord_label):\n return (np.nanmin(err_arr[:, i_xyz[coord_label], :, :]),\n np.nanmax(err_arr[:, i_xyz[coord_label], :, :]))\n\n minx, maxx = _digout_minmax(coorderrs, 'x')\n miny, maxy = _digout_minmax(coorderrs, 'y')\n minz, maxz = _digout_minmax(coorderrs, 'z')\n self.auto_scale_xyz((minx, maxx), (miny, maxy), (minz, maxz), had_data)\n\n # Adapting errorbar containers for 3d case, assuming z-axis points \"up\"\n errorbar_container = mcontainer.ErrorbarContainer(\n (data_line, tuple(caplines), tuple(errlines)),\n has_xerr=(xerr is not None or yerr is not None),\n has_yerr=(zerr is not None),\n label=label)\n self.containers.append(errorbar_container)\n\n return errlines, caplines, limmarks\n\n @_api.make_keyword_only(\"3.8\", \"call_axes_locator\")\n def get_tightbbox(self, renderer=None, call_axes_locator=True,\n bbox_extra_artists=None, *, for_layout_only=False):\n ret = super().get_tightbbox(renderer,\n call_axes_locator=call_axes_locator,\n bbox_extra_artists=bbox_extra_artists,\n for_layout_only=for_layout_only)\n batch = [ret]\n if self._axis3don:\n for axis in self._axis_map.values():\n if axis.get_visible():\n axis_bb = 
martist._get_tightbbox_for_layout_only(\n axis, renderer)\n if axis_bb:\n batch.append(axis_bb)\n return mtransforms.Bbox.union(batch)\n\n @_preprocess_data()\n def stem(self, x, y, z, *, linefmt='C0-', markerfmt='C0o', basefmt='C3-',\n bottom=0, label=None, orientation='z'):\n \"\"\"\n Create a 3D stem plot.\n\n A stem plot draws lines perpendicular to a baseline, and places markers\n at the heads. By default, the baseline is defined by *x* and *y*, and\n stems are drawn vertically from *bottom* to *z*.\n\n Parameters\n ----------\n x, y, z : array-like\n The positions of the heads of the stems. The stems are drawn along\n the *orientation*-direction from the baseline at *bottom* (in the\n *orientation*-coordinate) to the heads. By default, the *x* and *y*\n positions are used for the baseline and *z* for the head position,\n but this can be changed by *orientation*.\n\n linefmt : str, default: 'C0-'\n A string defining the properties of the vertical lines. Usually,\n this will be a color or a color and a linestyle:\n\n ========= =============\n Character Line Style\n ========= =============\n ``'-'`` solid line\n ``'--'`` dashed line\n ``'-.'`` dash-dot line\n ``':'`` dotted line\n ========= =============\n\n Note: While it is technically possible to specify valid formats\n other than color or color and linestyle (e.g. 'rx' or '-.'), this\n is beyond the intention of the method and will most likely not\n result in a reasonable plot.\n\n markerfmt : str, default: 'C0o'\n A string defining the properties of the markers at the stem heads.\n\n basefmt : str, default: 'C3-'\n A format string defining the properties of the baseline.\n\n bottom : float, default: 0\n The position of the baseline, in *orientation*-coordinates.\n\n label : str, default: None\n The label to use for the stems in legends.\n\n orientation : {'x', 'y', 'z'}, default: 'z'\n The direction along which stems are drawn.\n\n data : indexable object, optional\n DATA_PARAMETER_PLACEHOLDER\n\n Returns\n -------\n `.StemContainer`\n The container may be treated like a tuple\n (*markerline*, *stemlines*, *baseline*)\n\n Examples\n --------\n .. 
plot:: gallery/mplot3d/stem3d_demo.py\n \"\"\"\n\n from matplotlib.container import StemContainer\n\n had_data = self.has_data()\n\n _api.check_in_list(['x', 'y', 'z'], orientation=orientation)\n\n xlim = (np.min(x), np.max(x))\n ylim = (np.min(y), np.max(y))\n zlim = (np.min(z), np.max(z))\n\n # Determine the appropriate plane for the baseline and the direction of\n # stemlines based on the value of orientation.\n if orientation == 'x':\n basex, basexlim = y, ylim\n basey, baseylim = z, zlim\n lines = [[(bottom, thisy, thisz), (thisx, thisy, thisz)]\n for thisx, thisy, thisz in zip(x, y, z)]\n elif orientation == 'y':\n basex, basexlim = x, xlim\n basey, baseylim = z, zlim\n lines = [[(thisx, bottom, thisz), (thisx, thisy, thisz)]\n for thisx, thisy, thisz in zip(x, y, z)]\n else:\n basex, basexlim = x, xlim\n basey, baseylim = y, ylim\n lines = [[(thisx, thisy, bottom), (thisx, thisy, thisz)]\n for thisx, thisy, thisz in zip(x, y, z)]\n\n # Determine style for stem lines.\n linestyle, linemarker, linecolor = _process_plot_format(linefmt)\n if linestyle is None:\n linestyle = mpl.rcParams['lines.linestyle']\n\n # Plot everything in required order.\n baseline, = self.plot(basex, basey, basefmt, zs=bottom,\n zdir=orientation, label='_nolegend_')\n stemlines = art3d.Line3DCollection(\n lines, linestyles=linestyle, colors=linecolor, label='_nolegend_')\n self.add_collection(stemlines)\n markerline, = self.plot(x, y, z, markerfmt, label='_nolegend_')\n\n stem_container = StemContainer((markerline, stemlines, baseline),\n label=label)\n self.add_container(stem_container)\n\n jx, jy, jz = art3d.juggle_axes(basexlim, baseylim, [bottom, bottom],\n orientation)\n self.auto_scale_xyz([*jx, *xlim], [*jy, *ylim], [*jz, *zlim], had_data)\n\n return stem_container\n\n stem3D = stem\n\n\ndef get_test_data(delta=0.05):\n \"\"\"Return a tuple X, Y, Z with a test data set.\"\"\"\n x = y = np.arange(-3.0, 3.0, delta)\n X, Y = np.meshgrid(x, y)\n\n Z1 = np.exp(-(X**2 + Y**2) / 2) / (2 * np.pi)\n Z2 = (np.exp(-(((X - 1) / 1.5)**2 + ((Y - 1) / 0.5)**2) / 2) /\n (2 * np.pi * 0.5 * 1.5))\n Z = Z2 - Z1\n\n X = X * 10\n Y = Y * 10\n Z = Z * 500\n return X, Y, Z\n"}
|
diff --git a/doc/api/axes_api.rst b/doc/api/axes_api.rst
index 3457368fa51c..8b01a120da5b 100644
--- a/doc/api/axes_api.rst
+++ b/doc/api/axes_api.rst
@@ -335,6 +335,8 @@ Autoscaling and margins
Axes.use_sticky_edges
Axes.margins
+ Axes.get_xmargin
+ Axes.get_ymargin
Axes.set_xmargin
Axes.set_ymargin
diff --git a/doc/api/toolkits/mplot3d/axes3d.rst b/doc/api/toolkits/mplot3d/axes3d.rst
index f6d8e2529896..b581494e4883 100644
--- a/doc/api/toolkits/mplot3d/axes3d.rst
+++ b/doc/api/toolkits/mplot3d/axes3d.rst
@@ -137,6 +137,7 @@ Autoscaling and margins
:template: autosummary.rst
:nosignatures:
+ get_zmargin
set_zmargin
margins
autoscale
diff --git a/doc/users/next_whats_new/margin_getters.rst b/doc/users/next_whats_new/margin_getters.rst
new file mode 100644
index 000000000000..c43709a17d52
--- /dev/null
+++ b/doc/users/next_whats_new/margin_getters.rst
@@ -0,0 +1,4 @@
+Getters for xmargin, ymargin and zmargin
+----------------------------------------
+``.Axes.get_xmargin()``, ``.Axes.get_ymargin()`` and ``.Axes3D.get_zmargin()`` methods have been added to return
+the margin values set by ``.Axes.set_xmargin()``, ``.Axes.set_ymargin()`` and ``.Axes3D.set_zmargin()``, respectively.
diff --git a/lib/matplotlib/axes/_base.pyi b/lib/matplotlib/axes/_base.pyi
index d41ecae1803c..e3644585296d 100644
--- a/lib/matplotlib/axes/_base.pyi
+++ b/lib/matplotlib/axes/_base.pyi
@@ -242,6 +242,8 @@ class _AxesBase(martist.Artist):
def use_sticky_edges(self) -> bool: ...
@use_sticky_edges.setter
def use_sticky_edges(self, b: bool) -> None: ...
+ def get_xmargin(self) -> float: ...
+ def get_ymargin(self) -> float: ...
def set_xmargin(self, m: float) -> None: ...
def set_ymargin(self, m: float) -> None: ...
|
{"lib/matplotlib/axes/_base.py": [{"type": "function", "name": "_AxesBase.get_xmargin", "lines": [2620, 2634], "signature": "def get_xmargin(self):", "doc": "Retrieve autoscaling margin of the x-axis.\n\n.. versionadded:: 3.9\n\nReturns\n-------\nxmargin : float\n\nSee Also\n--------\nmatplotlib.axes.Axes.set_xmargin"}, {"type": "function", "name": "_AxesBase.get_ymargin", "lines": [2636, 2650], "signature": "def get_ymargin(self):", "doc": "Retrieve autoscaling margin of the y-axis.\n\n.. versionadded:: 3.9\n\nReturns\n-------\nymargin : float\n\nSee Also\n--------\nmatplotlib.axes.Axes.set_ymargin"}], "lib/mpl_toolkits/mplot3d/axes3d.py": [{"type": "function", "name": "Axes3D.get_zmargin", "lines": [516, 530], "signature": "def get_zmargin(self):", "doc": "Retrieve autoscaling margin of the z-axis.\n\n.. versionadded:: 3.9\n\nReturns\n-------\nzmargin : float\n\nSee Also\n--------\nmpl_toolkits.mplot3d.axes3d.Axes3D.set_zmargin"}]}
|
3.7
|
["lib/matplotlib/tests/test_axes.py::test_margin_getters", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_margin_getters"]
|
["lib/matplotlib/tests/test_axes.py::test_invisible_axes[png]", "lib/matplotlib/tests/test_axes.py::test_get_labels", "lib/matplotlib/tests/test_axes.py::test_repr", "lib/matplotlib/tests/test_axes.py::test_label_loc_vertical[png]", "lib/matplotlib/tests/test_axes.py::test_label_loc_horizontal[png]", "lib/matplotlib/tests/test_axes.py::test_label_loc_rc[png]", "lib/matplotlib/tests/test_axes.py::test_label_shift", "lib/matplotlib/tests/test_axes.py::test_acorr[png]", "lib/matplotlib/tests/test_axes.py::test_acorr_integers[png]", "lib/matplotlib/tests/test_axes.py::test_spy[png]", "lib/matplotlib/tests/test_axes.py::test_spy_invalid_kwargs", "lib/matplotlib/tests/test_axes.py::test_matshow[png]", "lib/matplotlib/tests/test_axes.py::test_formatter_ticker[png]", "lib/matplotlib/tests/test_axes.py::test_funcformatter_auto_formatter", "lib/matplotlib/tests/test_axes.py::test_strmethodformatter_auto_formatter", "lib/matplotlib/tests/test_axes.py::test_twin_axis_locators_formatters[png]", "lib/matplotlib/tests/test_axes.py::test_twinx_cla", "lib/matplotlib/tests/test_axes.py::test_twin_units[x]", "lib/matplotlib/tests/test_axes.py::test_twin_units[y]", "lib/matplotlib/tests/test_axes.py::test_twin_logscale[png-x]", "lib/matplotlib/tests/test_axes.py::test_twin_logscale[png-y]", "lib/matplotlib/tests/test_axes.py::test_twinx_axis_scales[png]", "lib/matplotlib/tests/test_axes.py::test_twin_inherit_autoscale_setting", "lib/matplotlib/tests/test_axes.py::test_inverted_cla", "lib/matplotlib/tests/test_axes.py::test_subclass_clear_cla", "lib/matplotlib/tests/test_axes.py::test_cla_not_redefined_internally", "lib/matplotlib/tests/test_axes.py::test_minorticks_on_rcParams_both[png]", "lib/matplotlib/tests/test_axes.py::test_autoscale_tiny_range[png]", "lib/matplotlib/tests/test_axes.py::test_autoscale_tight", "lib/matplotlib/tests/test_axes.py::test_autoscale_log_shared", "lib/matplotlib/tests/test_axes.py::test_use_sticky_edges", "lib/matplotlib/tests/test_axes.py::test_sticky_shared_axes[png]", "lib/matplotlib/tests/test_axes.py::test_nargs_stem", "lib/matplotlib/tests/test_axes.py::test_nargs_legend", "lib/matplotlib/tests/test_axes.py::test_nargs_pcolorfast", "lib/matplotlib/tests/test_axes.py::test_basic_annotate[png]", "lib/matplotlib/tests/test_axes.py::test_arrow_simple[png]", "lib/matplotlib/tests/test_axes.py::test_arrow_empty", "lib/matplotlib/tests/test_axes.py::test_arrow_in_view", "lib/matplotlib/tests/test_axes.py::test_annotate_default_arrow", "lib/matplotlib/tests/test_axes.py::test_annotate_signature", "lib/matplotlib/tests/test_axes.py::test_fill_units[png]", "lib/matplotlib/tests/test_axes.py::test_plot_format_kwarg_redundant", "lib/matplotlib/tests/test_axes.py::test_errorbar_dashes[png]", "lib/matplotlib/tests/test_axes.py::test_single_point[png]", "lib/matplotlib/tests/test_axes.py::test_single_date[png]", "lib/matplotlib/tests/test_axes.py::test_shaped_data[png]", "lib/matplotlib/tests/test_axes.py::test_structured_data", "lib/matplotlib/tests/test_axes.py::test_aitoff_proj[png]", "lib/matplotlib/tests/test_axes.py::test_axvspan_epoch[png]", "lib/matplotlib/tests/test_axes.py::test_axhspan_epoch[png]", "lib/matplotlib/tests/test_axes.py::test_hexbin_extent[png]", "lib/matplotlib/tests/test_axes.py::test_hexbin_empty[png]", "lib/matplotlib/tests/test_axes.py::test_hexbin_pickable", "lib/matplotlib/tests/test_axes.py::test_hexbin_log[png]", "lib/matplotlib/tests/test_axes.py::test_hexbin_linear[png]", "lib/matplotlib/tests/test_axes.py::test_hexbin_log_clim", 
"lib/matplotlib/tests/test_axes.py::test_hexbin_mincnt_behavior_upon_C_parameter[png]", "lib/matplotlib/tests/test_axes.py::test_inverted_limits", "lib/matplotlib/tests/test_axes.py::test_nonfinite_limits[png]", "lib/matplotlib/tests/test_axes.py::test_limits_empty_data[png-scatter]", "lib/matplotlib/tests/test_axes.py::test_limits_empty_data[png-plot]", "lib/matplotlib/tests/test_axes.py::test_limits_empty_data[png-fill_between]", "lib/matplotlib/tests/test_axes.py::test_imshow[png]", "lib/matplotlib/tests/test_axes.py::test_imshow_clip[png]", "lib/matplotlib/tests/test_axes.py::test_imshow_norm_vminvmax", "lib/matplotlib/tests/test_axes.py::test_polycollection_joinstyle[png]", "lib/matplotlib/tests/test_axes.py::test_fill_between_input[2d_x_input]", "lib/matplotlib/tests/test_axes.py::test_fill_between_input[2d_y1_input]", "lib/matplotlib/tests/test_axes.py::test_fill_between_input[2d_y2_input]", "lib/matplotlib/tests/test_axes.py::test_fill_betweenx_input[2d_y_input]", "lib/matplotlib/tests/test_axes.py::test_fill_betweenx_input[2d_x1_input]", "lib/matplotlib/tests/test_axes.py::test_fill_betweenx_input[2d_x2_input]", "lib/matplotlib/tests/test_axes.py::test_fill_between_interpolate[png]", "lib/matplotlib/tests/test_axes.py::test_fill_between_interpolate_decreasing[png]", "lib/matplotlib/tests/test_axes.py::test_fill_between_interpolate_nan[png]", "lib/matplotlib/tests/test_axes.py::test_pcolorargs_5205", "lib/matplotlib/tests/test_axes.py::test_pcolormesh[png]", "lib/matplotlib/tests/test_axes.py::test_pcolormesh_alpha[png]", "lib/matplotlib/tests/test_axes.py::test_pcolormesh_rgba[png-3-1]", "lib/matplotlib/tests/test_axes.py::test_pcolormesh_rgba[png-4-0.5]", "lib/matplotlib/tests/test_axes.py::test_pcolormesh_datetime_axis[png]", "lib/matplotlib/tests/test_axes.py::test_pcolor_datetime_axis[png]", "lib/matplotlib/tests/test_axes.py::test_pcolorargs", "lib/matplotlib/tests/test_axes.py::test_pcolorargs_with_read_only", "lib/matplotlib/tests/test_axes.py::test_pcolornearest[png]", "lib/matplotlib/tests/test_axes.py::test_pcolornearestunits[png]", "lib/matplotlib/tests/test_axes.py::test_pcolorflaterror", "lib/matplotlib/tests/test_axes.py::test_samesizepcolorflaterror", "lib/matplotlib/tests/test_axes.py::test_pcolorauto[png-False]", "lib/matplotlib/tests/test_axes.py::test_pcolorauto[png-True]", "lib/matplotlib/tests/test_axes.py::test_canonical[png]", "lib/matplotlib/tests/test_axes.py::test_arc_angles[png]", "lib/matplotlib/tests/test_axes.py::test_arc_ellipse[png]", "lib/matplotlib/tests/test_axes.py::test_marker_as_markerstyle", "lib/matplotlib/tests/test_axes.py::test_markevery[png]", "lib/matplotlib/tests/test_axes.py::test_markevery_line[png]", "lib/matplotlib/tests/test_axes.py::test_markevery_linear_scales[png]", "lib/matplotlib/tests/test_axes.py::test_markevery_linear_scales_zoomed[png]", "lib/matplotlib/tests/test_axes.py::test_markevery_log_scales[png]", "lib/matplotlib/tests/test_axes.py::test_markevery_polar[png]", "lib/matplotlib/tests/test_axes.py::test_markevery_linear_scales_nans[png]", "lib/matplotlib/tests/test_axes.py::test_marker_edges[png]", "lib/matplotlib/tests/test_axes.py::test_bar_tick_label_single[png]", "lib/matplotlib/tests/test_axes.py::test_nan_bar_values", "lib/matplotlib/tests/test_axes.py::test_bar_ticklabel_fail", "lib/matplotlib/tests/test_axes.py::test_bar_tick_label_multiple[png]", "lib/matplotlib/tests/test_axes.py::test_bar_tick_label_multiple_old_alignment[png]", "lib/matplotlib/tests/test_axes.py::test_bar_decimal_center[png]", 
"lib/matplotlib/tests/test_axes.py::test_barh_decimal_center[png]", "lib/matplotlib/tests/test_axes.py::test_bar_decimal_width[png]", "lib/matplotlib/tests/test_axes.py::test_barh_decimal_height[png]", "lib/matplotlib/tests/test_axes.py::test_bar_color_none_alpha", "lib/matplotlib/tests/test_axes.py::test_bar_edgecolor_none_alpha", "lib/matplotlib/tests/test_axes.py::test_barh_tick_label[png]", "lib/matplotlib/tests/test_axes.py::test_bar_timedelta", "lib/matplotlib/tests/test_axes.py::test_bar_datetime_start", "lib/matplotlib/tests/test_axes.py::test_boxplot_dates_pandas", "lib/matplotlib/tests/test_axes.py::test_boxplot_capwidths", "lib/matplotlib/tests/test_axes.py::test_pcolor_regression", "lib/matplotlib/tests/test_axes.py::test_bar_pandas", "lib/matplotlib/tests/test_axes.py::test_bar_pandas_indexed", "lib/matplotlib/tests/test_axes.py::test_bar_hatches[png]", "lib/matplotlib/tests/test_axes.py::test_bar_labels[x-1-x-expected_labels0-x]", "lib/matplotlib/tests/test_axes.py::test_bar_labels[x1-width1-label1-expected_labels1-_nolegend_]", "lib/matplotlib/tests/test_axes.py::test_bar_labels[x2-width2-label2-expected_labels2-_nolegend_]", "lib/matplotlib/tests/test_axes.py::test_bar_labels[x3-width3-bars-expected_labels3-bars]", "lib/matplotlib/tests/test_axes.py::test_bar_labels_length", "lib/matplotlib/tests/test_axes.py::test_pandas_minimal_plot", "lib/matplotlib/tests/test_axes.py::test_hist_log[png]", "lib/matplotlib/tests/test_axes.py::test_hist_log_2[png]", "lib/matplotlib/tests/test_axes.py::test_hist_log_barstacked", "lib/matplotlib/tests/test_axes.py::test_hist_bar_empty[png]", "lib/matplotlib/tests/test_axes.py::test_hist_float16", "lib/matplotlib/tests/test_axes.py::test_hist_step_empty[png]", "lib/matplotlib/tests/test_axes.py::test_hist_step_filled[png]", "lib/matplotlib/tests/test_axes.py::test_hist_density[png]", "lib/matplotlib/tests/test_axes.py::test_hist_unequal_bins_density", "lib/matplotlib/tests/test_axes.py::test_hist_datetime_datasets", "lib/matplotlib/tests/test_axes.py::test_hist_datetime_datasets_bins[date2num]", "lib/matplotlib/tests/test_axes.py::test_hist_datetime_datasets_bins[datetime.datetime]", "lib/matplotlib/tests/test_axes.py::test_hist_datetime_datasets_bins[np.datetime64]", "lib/matplotlib/tests/test_axes.py::test_hist_with_empty_input[data0-1]", "lib/matplotlib/tests/test_axes.py::test_hist_with_empty_input[data1-1]", "lib/matplotlib/tests/test_axes.py::test_hist_with_empty_input[data2-2]", "lib/matplotlib/tests/test_axes.py::test_hist_zorder[bar-1]", "lib/matplotlib/tests/test_axes.py::test_hist_zorder[step-2]", "lib/matplotlib/tests/test_axes.py::test_hist_zorder[stepfilled-1]", "lib/matplotlib/tests/test_axes.py::test_stairs[png]", "lib/matplotlib/tests/test_axes.py::test_stairs_fill[png]", "lib/matplotlib/tests/test_axes.py::test_stairs_update[png]", "lib/matplotlib/tests/test_axes.py::test_stairs_baseline_0[png]", "lib/matplotlib/tests/test_axes.py::test_stairs_empty", "lib/matplotlib/tests/test_axes.py::test_stairs_invalid_nan", "lib/matplotlib/tests/test_axes.py::test_stairs_invalid_mismatch", "lib/matplotlib/tests/test_axes.py::test_stairs_invalid_update", "lib/matplotlib/tests/test_axes.py::test_stairs_invalid_update2", "lib/matplotlib/tests/test_axes.py::test_stairs_options[png]", "lib/matplotlib/tests/test_axes.py::test_stairs_datetime[png]", "lib/matplotlib/tests/test_axes.py::test_stairs_edge_handling[png]", "lib/matplotlib/tests/test_axes.py::test_contour_hatching[png]", 
"lib/matplotlib/tests/test_axes.py::test_contour_colorbar[png]", "lib/matplotlib/tests/test_axes.py::test_hist2d[png]", "lib/matplotlib/tests/test_axes.py::test_hist2d_transpose[png]", "lib/matplotlib/tests/test_axes.py::test_hist2d_density", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_plot[png]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_marker[png]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_2D[png]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_decimal[png]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_color", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_color_warning[kwargs0]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_color_warning[kwargs1]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_color_warning[kwargs2]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_color_warning[kwargs3]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_unfilled", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_unfillable", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_size_arg_size", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_edgecolor_RGB", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_invalid_color[png]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_no_invalid_color[png]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_norm_vminvmax", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_single_point[png]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_different_shapes[png]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[0.5-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case1-conversion]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[red-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[none-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[None-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case5-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[jaune-conversion]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case7-conversion]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case8-conversion]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case9-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case10-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case11-shape]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case12-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case13-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case14-conversion]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case15-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case16-shape]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case17-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case18-shape]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case19-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case20-shape]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case21-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case22-shape]", 
"lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case23-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case24-shape]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case25-None]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case26-shape]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case27-conversion]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case28-conversion]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_c[c_case29-conversion]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_single_color_c[png]", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_linewidths", "lib/matplotlib/tests/test_axes.py::TestScatter::test_scatter_singular_plural_arguments", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args[params0-expected_result0]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args[params1-expected_result1]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args[params2-expected_result2]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args[params3-expected_result3]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args[params4-expected_result4]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs0-None]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs1-None]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs2-r]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs3-expected_edgecolors3]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs4-r]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs5-face]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs6-none]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs7-r]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs8-r]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs9-r]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_edgecolors[kwargs10-g]", "lib/matplotlib/tests/test_axes.py::test_parse_scatter_color_args_error", "lib/matplotlib/tests/test_axes.py::test_as_mpl_axes_api", "lib/matplotlib/tests/test_axes.py::test_pyplot_axes", "lib/matplotlib/tests/test_axes.py::test_log_scales", "lib/matplotlib/tests/test_axes.py::test_log_scales_no_data", "lib/matplotlib/tests/test_axes.py::test_log_scales_invalid", "lib/matplotlib/tests/test_axes.py::test_stackplot[png]", "lib/matplotlib/tests/test_axes.py::test_stackplot_baseline[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_baseline[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_rangewhis[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_percentilewhis[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_with_xlabels[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_horizontal[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_with_ylabels[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_patchartist[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_custompatchartist[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_customoutlier[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_showcustommean[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_custombox[png]", 
"lib/matplotlib/tests/test_axes.py::test_bxp_custommedian[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_customcap[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_customwhisker[png]", "lib/matplotlib/tests/test_axes.py::test_boxplot_median_bound_by_box[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_shownotches[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_nocaps[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_nobox[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_no_flier_stats[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_showmean[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_showmeanasline[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_scalarwidth[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_customwidths[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_custompositions[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_bad_widths", "lib/matplotlib/tests/test_axes.py::test_bxp_bad_positions", "lib/matplotlib/tests/test_axes.py::test_bxp_custom_capwidths[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_custom_capwidth[png]", "lib/matplotlib/tests/test_axes.py::test_bxp_bad_capwidths", "lib/matplotlib/tests/test_axes.py::test_boxplot[png]", "lib/matplotlib/tests/test_axes.py::test_boxplot_custom_capwidths[png]", "lib/matplotlib/tests/test_axes.py::test_boxplot_sym2[png]", "lib/matplotlib/tests/test_axes.py::test_boxplot_sym[png]", "lib/matplotlib/tests/test_axes.py::test_boxplot_autorange_whiskers[png]", "lib/matplotlib/tests/test_axes.py::test_boxplot_rc_parameters[png]", "lib/matplotlib/tests/test_axes.py::test_boxplot_with_CIarray[png]", "lib/matplotlib/tests/test_axes.py::test_boxplot_no_weird_whisker[png]", "lib/matplotlib/tests/test_axes.py::test_boxplot_bad_medians", "lib/matplotlib/tests/test_axes.py::test_boxplot_bad_ci", "lib/matplotlib/tests/test_axes.py::test_boxplot_zorder", "lib/matplotlib/tests/test_axes.py::test_boxplot_marker_behavior", "lib/matplotlib/tests/test_axes.py::test_boxplot_mod_artist_after_plotting[png]", "lib/matplotlib/tests/test_axes.py::test_vert_violinplot_baseline[png]", "lib/matplotlib/tests/test_axes.py::test_vert_violinplot_showmeans[png]", "lib/matplotlib/tests/test_axes.py::test_vert_violinplot_showextrema[png]", "lib/matplotlib/tests/test_axes.py::test_vert_violinplot_showmedians[png]", "lib/matplotlib/tests/test_axes.py::test_vert_violinplot_showall[png]", "lib/matplotlib/tests/test_axes.py::test_vert_violinplot_custompoints_10[png]", "lib/matplotlib/tests/test_axes.py::test_vert_violinplot_custompoints_200[png]", "lib/matplotlib/tests/test_axes.py::test_horiz_violinplot_baseline[png]", "lib/matplotlib/tests/test_axes.py::test_horiz_violinplot_showmedians[png]", "lib/matplotlib/tests/test_axes.py::test_horiz_violinplot_showmeans[png]", "lib/matplotlib/tests/test_axes.py::test_horiz_violinplot_showextrema[png]", "lib/matplotlib/tests/test_axes.py::test_horiz_violinplot_showall[png]", "lib/matplotlib/tests/test_axes.py::test_horiz_violinplot_custompoints_10[png]", "lib/matplotlib/tests/test_axes.py::test_horiz_violinplot_custompoints_200[png]", "lib/matplotlib/tests/test_axes.py::test_violinplot_bad_positions", "lib/matplotlib/tests/test_axes.py::test_violinplot_bad_widths", "lib/matplotlib/tests/test_axes.py::test_violinplot_bad_quantiles", "lib/matplotlib/tests/test_axes.py::test_violinplot_outofrange_quantiles", "lib/matplotlib/tests/test_axes.py::test_violinplot_single_list_quantiles[png]", "lib/matplotlib/tests/test_axes.py::test_violinplot_pandas_series[png]", 
"lib/matplotlib/tests/test_axes.py::test_manage_xticks", "lib/matplotlib/tests/test_axes.py::test_boxplot_not_single", "lib/matplotlib/tests/test_axes.py::test_tick_space_size_0", "lib/matplotlib/tests/test_axes.py::test_errorbar[png]", "lib/matplotlib/tests/test_axes.py::test_mixed_errorbar_polar_caps[png]", "lib/matplotlib/tests/test_axes.py::test_errorbar_colorcycle", "lib/matplotlib/tests/test_axes.py::test_errorbar_cycle_ecolor[png]", "lib/matplotlib/tests/test_axes.py::test_errorbar_shape", "lib/matplotlib/tests/test_axes.py::test_errorbar_limits[png]", "lib/matplotlib/tests/test_axes.py::test_errorbar_nonefmt", "lib/matplotlib/tests/test_axes.py::test_errorbar_line_specific_kwargs", "lib/matplotlib/tests/test_axes.py::test_errorbar_with_prop_cycle[png]", "lib/matplotlib/tests/test_axes.py::test_errorbar_every_invalid", "lib/matplotlib/tests/test_axes.py::test_xerr_yerr_not_negative", "lib/matplotlib/tests/test_axes.py::test_errorbar_every[png]", "lib/matplotlib/tests/test_axes.py::test_errorbar_linewidth_type[elinewidth0]", "lib/matplotlib/tests/test_axes.py::test_errorbar_linewidth_type[elinewidth1]", "lib/matplotlib/tests/test_axes.py::test_errorbar_linewidth_type[1]", "lib/matplotlib/tests/test_axes.py::test_errorbar_nan[png]", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_stepfilled[png]", "lib/matplotlib/tests/test_axes.py::test_hist_offset[png]", "lib/matplotlib/tests/test_axes.py::test_hist_step[png]", "lib/matplotlib/tests/test_axes.py::test_hist_step_horiz[png]", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_weighted[png]", "lib/matplotlib/tests/test_axes.py::test_stem[png]", "lib/matplotlib/tests/test_axes.py::test_stem_args", "lib/matplotlib/tests/test_axes.py::test_stem_markerfmt", "lib/matplotlib/tests/test_axes.py::test_stem_dates", "lib/matplotlib/tests/test_axes.py::test_stem_orientation[png]", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_stepfilled_alpha[png]", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_step[png]", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_density[png]", "lib/matplotlib/tests/test_axes.py::test_hist_step_bottom[png]", "lib/matplotlib/tests/test_axes.py::test_hist_stepfilled_geometry", "lib/matplotlib/tests/test_axes.py::test_hist_step_geometry", "lib/matplotlib/tests/test_axes.py::test_hist_stepfilled_bottom_geometry", "lib/matplotlib/tests/test_axes.py::test_hist_step_bottom_geometry", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_stepfilled_geometry", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_step_geometry", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_stepfilled_bottom_geometry", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_step_bottom_geometry", "lib/matplotlib/tests/test_axes.py::test_hist_stacked_bar[png]", "lib/matplotlib/tests/test_axes.py::test_hist_barstacked_bottom_unchanged", "lib/matplotlib/tests/test_axes.py::test_hist_emptydata", "lib/matplotlib/tests/test_axes.py::test_hist_labels", "lib/matplotlib/tests/test_axes.py::test_transparent_markers[png]", "lib/matplotlib/tests/test_axes.py::test_rgba_markers[png]", "lib/matplotlib/tests/test_axes.py::test_mollweide_grid[png]", "lib/matplotlib/tests/test_axes.py::test_mollweide_forward_inverse_closure", "lib/matplotlib/tests/test_axes.py::test_mollweide_inverse_forward_closure", "lib/matplotlib/tests/test_axes.py::test_alpha[png]", "lib/matplotlib/tests/test_axes.py::test_eventplot[png]", "lib/matplotlib/tests/test_axes.py::test_eventplot_defaults[png]", 
"lib/matplotlib/tests/test_axes.py::test_eventplot_colors[colors0]", "lib/matplotlib/tests/test_axes.py::test_eventplot_colors[colors1]", "lib/matplotlib/tests/test_axes.py::test_eventplot_colors[colors2]", "lib/matplotlib/tests/test_axes.py::test_eventplot_alpha", "lib/matplotlib/tests/test_axes.py::test_eventplot_problem_kwargs[png]", "lib/matplotlib/tests/test_axes.py::test_empty_eventplot", "lib/matplotlib/tests/test_axes.py::test_eventplot_orientation[None-data0]", "lib/matplotlib/tests/test_axes.py::test_eventplot_orientation[None-data1]", "lib/matplotlib/tests/test_axes.py::test_eventplot_orientation[None-data2]", "lib/matplotlib/tests/test_axes.py::test_eventplot_orientation[vertical-data0]", "lib/matplotlib/tests/test_axes.py::test_eventplot_orientation[vertical-data1]", "lib/matplotlib/tests/test_axes.py::test_eventplot_orientation[vertical-data2]", "lib/matplotlib/tests/test_axes.py::test_eventplot_orientation[horizontal-data0]", "lib/matplotlib/tests/test_axes.py::test_eventplot_orientation[horizontal-data1]", "lib/matplotlib/tests/test_axes.py::test_eventplot_orientation[horizontal-data2]", "lib/matplotlib/tests/test_axes.py::test_eventplot_units_list[png]", "lib/matplotlib/tests/test_axes.py::test_marker_styles[png]", "lib/matplotlib/tests/test_axes.py::test_markers_fillstyle_rcparams[png]", "lib/matplotlib/tests/test_axes.py::test_vertex_markers[png]", "lib/matplotlib/tests/test_axes.py::test_eb_line_zorder[png]", "lib/matplotlib/tests/test_axes.py::test_axline_loglog[png]", "lib/matplotlib/tests/test_axes.py::test_axline[png]", "lib/matplotlib/tests/test_axes.py::test_axline_transaxes[png]", "lib/matplotlib/tests/test_axes.py::test_axline_transaxes_panzoom[png]", "lib/matplotlib/tests/test_axes.py::test_axline_args", "lib/matplotlib/tests/test_axes.py::test_vlines[png]", "lib/matplotlib/tests/test_axes.py::test_vlines_default", "lib/matplotlib/tests/test_axes.py::test_hlines[png]", "lib/matplotlib/tests/test_axes.py::test_hlines_default", "lib/matplotlib/tests/test_axes.py::test_lines_with_colors[png-data0]", "lib/matplotlib/tests/test_axes.py::test_lines_with_colors[png-data1]", "lib/matplotlib/tests/test_axes.py::test_vlines_hlines_blended_transform[png]", "lib/matplotlib/tests/test_axes.py::test_step_linestyle[png]", "lib/matplotlib/tests/test_axes.py::test_mixed_collection[png]", "lib/matplotlib/tests/test_axes.py::test_subplot_key_hash", "lib/matplotlib/tests/test_axes.py::test_specgram[png]", "lib/matplotlib/tests/test_axes.py::test_specgram_magnitude[png]", "lib/matplotlib/tests/test_axes.py::test_specgram_angle[png]", "lib/matplotlib/tests/test_axes.py::test_specgram_fs_none", "lib/matplotlib/tests/test_axes.py::test_specgram_origin_rcparam[png]", "lib/matplotlib/tests/test_axes.py::test_specgram_origin_kwarg", "lib/matplotlib/tests/test_axes.py::test_psd_csd[png]", "lib/matplotlib/tests/test_axes.py::test_spectrum[png]", "lib/matplotlib/tests/test_axes.py::test_psd_csd_edge_cases", "lib/matplotlib/tests/test_axes.py::test_twin_remove[png]", "lib/matplotlib/tests/test_axes.py::test_twin_spines[png]", "lib/matplotlib/tests/test_axes.py::test_twin_spines_on_top[png]", "lib/matplotlib/tests/test_axes.py::test_rcparam_grid_minor[both-True-True]", "lib/matplotlib/tests/test_axes.py::test_rcparam_grid_minor[major-True-False]", "lib/matplotlib/tests/test_axes.py::test_rcparam_grid_minor[minor-False-True]", "lib/matplotlib/tests/test_axes.py::test_grid", "lib/matplotlib/tests/test_axes.py::test_reset_grid", "lib/matplotlib/tests/test_axes.py::test_reset_ticks[png]", 
"lib/matplotlib/tests/test_axes.py::test_vline_limit", "lib/matplotlib/tests/test_axes.py::test_axline_minmax[axvline-axhline-args0]", "lib/matplotlib/tests/test_axes.py::test_axline_minmax[axvspan-axhspan-args1]", "lib/matplotlib/tests/test_axes.py::test_empty_shared_subplots", "lib/matplotlib/tests/test_axes.py::test_shared_with_aspect_1", "lib/matplotlib/tests/test_axes.py::test_shared_with_aspect_2", "lib/matplotlib/tests/test_axes.py::test_shared_with_aspect_3", "lib/matplotlib/tests/test_axes.py::test_shared_aspect_error", "lib/matplotlib/tests/test_axes.py::test_axis_errors[TypeError-args0-kwargs0-axis\\\\(\\\\)", "lib/matplotlib/tests/test_axes.py::test_axis_errors[ValueError-args1-kwargs1-Unrecognized", "lib/matplotlib/tests/test_axes.py::test_axis_errors[TypeError-args2-kwargs2-The", "lib/matplotlib/tests/test_axes.py::test_axis_errors[TypeError-args3-kwargs3-axis\\\\(\\\\)", "lib/matplotlib/tests/test_axes.py::test_axis_method_errors", "lib/matplotlib/tests/test_axes.py::test_twin_with_aspect[x]", "lib/matplotlib/tests/test_axes.py::test_twin_with_aspect[y]", "lib/matplotlib/tests/test_axes.py::test_relim_visible_only", "lib/matplotlib/tests/test_axes.py::test_text_labelsize", "lib/matplotlib/tests/test_axes.py::test_pie_default[png]", "lib/matplotlib/tests/test_axes.py::test_pie_linewidth_0[png]", "lib/matplotlib/tests/test_axes.py::test_pie_center_radius[png]", "lib/matplotlib/tests/test_axes.py::test_pie_linewidth_2[png]", "lib/matplotlib/tests/test_axes.py::test_pie_ccw_true[png]", "lib/matplotlib/tests/test_axes.py::test_pie_frame_grid[png]", "lib/matplotlib/tests/test_axes.py::test_pie_rotatelabels_true[png]", "lib/matplotlib/tests/test_axes.py::test_pie_nolabel_but_legend[png]", "lib/matplotlib/tests/test_axes.py::test_pie_shadow[png]", "lib/matplotlib/tests/test_axes.py::test_pie_textprops", "lib/matplotlib/tests/test_axes.py::test_pie_get_negative_values", "lib/matplotlib/tests/test_axes.py::test_normalize_kwarg_pie", "lib/matplotlib/tests/test_axes.py::test_pie_hatch_single[png]", "lib/matplotlib/tests/test_axes.py::test_pie_hatch_multi[png]", "lib/matplotlib/tests/test_axes.py::test_set_get_ticklabels[png]", "lib/matplotlib/tests/test_axes.py::test_set_ticks_kwargs_raise_error_without_labels", "lib/matplotlib/tests/test_axes.py::test_set_ticks_with_labels[png]", "lib/matplotlib/tests/test_axes.py::test_xticks_bad_args", "lib/matplotlib/tests/test_axes.py::test_subsampled_ticklabels", "lib/matplotlib/tests/test_axes.py::test_mismatched_ticklabels", "lib/matplotlib/tests/test_axes.py::test_empty_ticks_fixed_loc", "lib/matplotlib/tests/test_axes.py::test_retain_tick_visibility[png]", "lib/matplotlib/tests/test_axes.py::test_warn_too_few_labels", "lib/matplotlib/tests/test_axes.py::test_tick_label_update", "lib/matplotlib/tests/test_axes.py::test_o_marker_path_snap[png]", "lib/matplotlib/tests/test_axes.py::test_margins", "lib/matplotlib/tests/test_axes.py::test_set_margin_updates_limits", "lib/matplotlib/tests/test_axes.py::test_margins_errors[ValueError-args0-kwargs0-margin", "lib/matplotlib/tests/test_axes.py::test_margins_errors[ValueError-args1-kwargs1-margin", "lib/matplotlib/tests/test_axes.py::test_margins_errors[ValueError-args2-kwargs2-margin", "lib/matplotlib/tests/test_axes.py::test_margins_errors[ValueError-args3-kwargs3-margin", "lib/matplotlib/tests/test_axes.py::test_margins_errors[TypeError-args4-kwargs4-Cannot", "lib/matplotlib/tests/test_axes.py::test_margins_errors[TypeError-args5-kwargs5-Cannot", 
"lib/matplotlib/tests/test_axes.py::test_margins_errors[TypeError-args6-kwargs6-Must", "lib/matplotlib/tests/test_axes.py::test_length_one_hist", "lib/matplotlib/tests/test_axes.py::test_set_xy_bound", "lib/matplotlib/tests/test_axes.py::test_pathological_hexbin", "lib/matplotlib/tests/test_axes.py::test_color_None", "lib/matplotlib/tests/test_axes.py::test_color_alias", "lib/matplotlib/tests/test_axes.py::test_numerical_hist_label", "lib/matplotlib/tests/test_axes.py::test_unicode_hist_label", "lib/matplotlib/tests/test_axes.py::test_move_offsetlabel", "lib/matplotlib/tests/test_axes.py::test_rc_spines[png]", "lib/matplotlib/tests/test_axes.py::test_rc_grid[png]", "lib/matplotlib/tests/test_axes.py::test_rc_tick", "lib/matplotlib/tests/test_axes.py::test_rc_major_minor_tick", "lib/matplotlib/tests/test_axes.py::test_square_plot", "lib/matplotlib/tests/test_axes.py::test_bad_plot_args", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data0-xy0-AxesImage]", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data0-xy1-AxesImage]", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data0-xy2-AxesImage]", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data0-xy3-PcolorImage]", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data0-xy4-QuadMesh]", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data1-xy0-AxesImage]", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data1-xy1-AxesImage]", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data1-xy2-AxesImage]", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data1-xy3-PcolorImage]", "lib/matplotlib/tests/test_axes.py::test_pcolorfast[data1-xy4-QuadMesh]", "lib/matplotlib/tests/test_axes.py::test_shared_scale", "lib/matplotlib/tests/test_axes.py::test_shared_bool", "lib/matplotlib/tests/test_axes.py::test_violin_point_mass", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs0]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs1]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs2]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs3]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs4]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs5]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs6]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs7]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs8]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs9]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs10]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs11]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs12]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs13]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs14]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs15]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs16]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs17]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs18]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs19]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs20]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs21]", 
"lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs22]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs23]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs24]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs25]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs26]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs27]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs28]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs29]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs30]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs31]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs32]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs33]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs34]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs35]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs36]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs37]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs38]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs39]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs40]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs41]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs42]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs43]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs44]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs45]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs46]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs47]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs48]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs49]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs50]", "lib/matplotlib/tests/test_axes.py::test_errorbar_inputs_shotgun[kwargs51]", "lib/matplotlib/tests/test_axes.py::test_dash_offset[png]", "lib/matplotlib/tests/test_axes.py::test_title_pad", "lib/matplotlib/tests/test_axes.py::test_title_location_roundtrip", "lib/matplotlib/tests/test_axes.py::test_title_location_shared[True]", "lib/matplotlib/tests/test_axes.py::test_title_location_shared[False]", "lib/matplotlib/tests/test_axes.py::test_loglog[png]", "lib/matplotlib/tests/test_axes.py::test_loglog_nonpos[png]", "lib/matplotlib/tests/test_axes.py::test_axes_margins", "lib/matplotlib/tests/test_axes.py::test_remove_shared_axes[gca-x]", "lib/matplotlib/tests/test_axes.py::test_remove_shared_axes[gca-y]", "lib/matplotlib/tests/test_axes.py::test_remove_shared_axes[subplots-x]", "lib/matplotlib/tests/test_axes.py::test_remove_shared_axes[subplots-y]", "lib/matplotlib/tests/test_axes.py::test_remove_shared_axes[subplots_shared-x]", "lib/matplotlib/tests/test_axes.py::test_remove_shared_axes[subplots_shared-y]", "lib/matplotlib/tests/test_axes.py::test_remove_shared_axes[add_axes-x]", "lib/matplotlib/tests/test_axes.py::test_remove_shared_axes[add_axes-y]", "lib/matplotlib/tests/test_axes.py::test_remove_shared_axes_relim", "lib/matplotlib/tests/test_axes.py::test_shared_axes_autoscale", 
"lib/matplotlib/tests/test_axes.py::test_adjust_numtick_aspect", "lib/matplotlib/tests/test_axes.py::test_auto_numticks", "lib/matplotlib/tests/test_axes.py::test_auto_numticks_log", "lib/matplotlib/tests/test_axes.py::test_broken_barh_empty", "lib/matplotlib/tests/test_axes.py::test_broken_barh_timedelta", "lib/matplotlib/tests/test_axes.py::test_pandas_pcolormesh", "lib/matplotlib/tests/test_axes.py::test_pandas_indexing_dates", "lib/matplotlib/tests/test_axes.py::test_pandas_errorbar_indexing", "lib/matplotlib/tests/test_axes.py::test_pandas_index_shape", "lib/matplotlib/tests/test_axes.py::test_pandas_indexing_hist", "lib/matplotlib/tests/test_axes.py::test_pandas_bar_align_center", "lib/matplotlib/tests/test_axes.py::test_axis_get_tick_params", "lib/matplotlib/tests/test_axes.py::test_axis_set_tick_params_labelsize_labelcolor", "lib/matplotlib/tests/test_axes.py::test_axes_tick_params_gridlines", "lib/matplotlib/tests/test_axes.py::test_axes_tick_params_ylabelside", "lib/matplotlib/tests/test_axes.py::test_axes_tick_params_xlabelside", "lib/matplotlib/tests/test_axes.py::test_none_kwargs", "lib/matplotlib/tests/test_axes.py::test_bar_uint8", "lib/matplotlib/tests/test_axes.py::test_date_timezone_x[png]", "lib/matplotlib/tests/test_axes.py::test_date_timezone_y[png]", "lib/matplotlib/tests/test_axes.py::test_date_timezone_x_and_y[png]", "lib/matplotlib/tests/test_axes.py::test_axisbelow[png]", "lib/matplotlib/tests/test_axes.py::test_titletwiny", "lib/matplotlib/tests/test_axes.py::test_titlesetpos", "lib/matplotlib/tests/test_axes.py::test_title_xticks_top", "lib/matplotlib/tests/test_axes.py::test_title_xticks_top_both", "lib/matplotlib/tests/test_axes.py::test_title_above_offset[left", "lib/matplotlib/tests/test_axes.py::test_title_above_offset[center", "lib/matplotlib/tests/test_axes.py::test_title_above_offset[both", "lib/matplotlib/tests/test_axes.py::test_title_no_move_off_page", "lib/matplotlib/tests/test_axes.py::test_offset_label_color", "lib/matplotlib/tests/test_axes.py::test_offset_text_visible", "lib/matplotlib/tests/test_axes.py::test_large_offset", "lib/matplotlib/tests/test_axes.py::test_barb_units", "lib/matplotlib/tests/test_axes.py::test_quiver_units", "lib/matplotlib/tests/test_axes.py::test_bar_color_cycle", "lib/matplotlib/tests/test_axes.py::test_tick_param_label_rotation", "lib/matplotlib/tests/test_axes.py::test_fillbetween_cycle", "lib/matplotlib/tests/test_axes.py::test_log_margins", "lib/matplotlib/tests/test_axes.py::test_color_length_mismatch", "lib/matplotlib/tests/test_axes.py::test_eventplot_legend", "lib/matplotlib/tests/test_axes.py::test_eventplot_errors[ValueError-args0-kwargs0-lineoffsets", "lib/matplotlib/tests/test_axes.py::test_eventplot_errors[ValueError-args1-kwargs1-linelengths", "lib/matplotlib/tests/test_axes.py::test_eventplot_errors[ValueError-args2-kwargs2-linewidths", "lib/matplotlib/tests/test_axes.py::test_eventplot_errors[ValueError-args3-kwargs3-linestyles", "lib/matplotlib/tests/test_axes.py::test_eventplot_errors[ValueError-args4-kwargs4-alpha", "lib/matplotlib/tests/test_axes.py::test_eventplot_errors[ValueError-args5-kwargs5-positions", "lib/matplotlib/tests/test_axes.py::test_eventplot_errors[ValueError-args6-kwargs6-lineoffsets", "lib/matplotlib/tests/test_axes.py::test_eventplot_errors[ValueError-args7-kwargs7-linelengths", "lib/matplotlib/tests/test_axes.py::test_eventplot_errors[ValueError-args8-kwargs8-linewidths", "lib/matplotlib/tests/test_axes.py::test_eventplot_errors[ValueError-args9-kwargs9-linestyles", 
"lib/matplotlib/tests/test_axes.py::test_eventplot_errors[ValueError-args10-kwargs10-alpha", "lib/matplotlib/tests/test_axes.py::test_eventplot_errors[ValueError-args11-kwargs11-colors", "lib/matplotlib/tests/test_axes.py::test_bar_broadcast_args", "lib/matplotlib/tests/test_axes.py::test_invalid_axis_limits", "lib/matplotlib/tests/test_axes.py::test_minorticks_on[symlog-symlog]", "lib/matplotlib/tests/test_axes.py::test_minorticks_on[symlog-log]", "lib/matplotlib/tests/test_axes.py::test_minorticks_on[log-symlog]", "lib/matplotlib/tests/test_axes.py::test_minorticks_on[log-log]", "lib/matplotlib/tests/test_axes.py::test_twinx_knows_limits", "lib/matplotlib/tests/test_axes.py::test_zero_linewidth", "lib/matplotlib/tests/test_axes.py::test_empty_errorbar_legend", "lib/matplotlib/tests/test_axes.py::test_plot_decimal[png]", "lib/matplotlib/tests/test_axes.py::test_markerfacecolor_none_alpha[png]", "lib/matplotlib/tests/test_axes.py::test_tick_padding_tightbbox", "lib/matplotlib/tests/test_axes.py::test_inset", "lib/matplotlib/tests/test_axes.py::test_zoom_inset", "lib/matplotlib/tests/test_axes.py::test_inset_polar[png]", "lib/matplotlib/tests/test_axes.py::test_inset_projection", "lib/matplotlib/tests/test_axes.py::test_inset_subclass", "lib/matplotlib/tests/test_axes.py::test_indicate_inset_inverted[False-False]", "lib/matplotlib/tests/test_axes.py::test_indicate_inset_inverted[False-True]", "lib/matplotlib/tests/test_axes.py::test_indicate_inset_inverted[True-False]", "lib/matplotlib/tests/test_axes.py::test_indicate_inset_inverted[True-True]", "lib/matplotlib/tests/test_axes.py::test_set_position", "lib/matplotlib/tests/test_axes.py::test_spines_properbbox_after_zoom", "lib/matplotlib/tests/test_axes.py::test_limits_after_scroll_zoom", "lib/matplotlib/tests/test_axes.py::test_gettightbbox_ignore_nan", "lib/matplotlib/tests/test_axes.py::test_scatter_series_non_zero_index", "lib/matplotlib/tests/test_axes.py::test_scatter_empty_data", "lib/matplotlib/tests/test_axes.py::test_annotate_across_transforms[png]", "lib/matplotlib/tests/test_axes.py::test_secondary_xy[png]", "lib/matplotlib/tests/test_axes.py::test_secondary_fail", "lib/matplotlib/tests/test_axes.py::test_secondary_resize", "lib/matplotlib/tests/test_axes.py::test_secondary_minorloc", "lib/matplotlib/tests/test_axes.py::test_secondary_formatter", "lib/matplotlib/tests/test_axes.py::test_secondary_repr", "lib/matplotlib/tests/test_axes.py::test_axis_options[png]", "lib/matplotlib/tests/test_axes.py::test_normal_axes", "lib/matplotlib/tests/test_axes.py::test_nodecorator", "lib/matplotlib/tests/test_axes.py::test_displaced_spine", "lib/matplotlib/tests/test_axes.py::test_tickdirs", "lib/matplotlib/tests/test_axes.py::test_minor_accountedfor", "lib/matplotlib/tests/test_axes.py::test_axis_bool_arguments[png]", "lib/matplotlib/tests/test_axes.py::test_axis_extent_arg", "lib/matplotlib/tests/test_axes.py::test_axis_extent_arg2", "lib/matplotlib/tests/test_axes.py::test_hist_auto_bins", "lib/matplotlib/tests/test_axes.py::test_hist_nan_data", "lib/matplotlib/tests/test_axes.py::test_hist_range_and_density", "lib/matplotlib/tests/test_axes.py::test_bar_errbar_zorder", "lib/matplotlib/tests/test_axes.py::test_set_ticks_inverted", "lib/matplotlib/tests/test_axes.py::test_aspect_nonlinear_adjustable_box", "lib/matplotlib/tests/test_axes.py::test_aspect_nonlinear_adjustable_datalim", "lib/matplotlib/tests/test_axes.py::test_box_aspect", "lib/matplotlib/tests/test_axes.py::test_box_aspect_custom_position", 
"lib/matplotlib/tests/test_axes.py::test_bbox_aspect_axes_init", "lib/matplotlib/tests/test_axes.py::test_set_aspect_negative", "lib/matplotlib/tests/test_axes.py::test_redraw_in_frame", "lib/matplotlib/tests/test_axes.py::test_invisible_axes_events", "lib/matplotlib/tests/test_axes.py::test_xtickcolor_is_not_markercolor", "lib/matplotlib/tests/test_axes.py::test_ytickcolor_is_not_markercolor", "lib/matplotlib/tests/test_axes.py::test_unautoscale[True-x]", "lib/matplotlib/tests/test_axes.py::test_unautoscale[True-y]", "lib/matplotlib/tests/test_axes.py::test_unautoscale[False-x]", "lib/matplotlib/tests/test_axes.py::test_unautoscale[False-y]", "lib/matplotlib/tests/test_axes.py::test_unautoscale[None-x]", "lib/matplotlib/tests/test_axes.py::test_unautoscale[None-y]", "lib/matplotlib/tests/test_axes.py::test_polar_interpolation_steps_variable_r[png]", "lib/matplotlib/tests/test_axes.py::test_autoscale_tiny_sticky", "lib/matplotlib/tests/test_axes.py::test_xtickcolor_is_not_xticklabelcolor", "lib/matplotlib/tests/test_axes.py::test_ytickcolor_is_not_yticklabelcolor", "lib/matplotlib/tests/test_axes.py::test_xaxis_offsetText_color", "lib/matplotlib/tests/test_axes.py::test_yaxis_offsetText_color", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[xx-small]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[x-small]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[small]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[medium]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[large]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[x-large]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[xx-large]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[larger]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[smaller]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[8]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[10]", "lib/matplotlib/tests/test_axes.py::test_relative_ticklabel_sizes[12]", "lib/matplotlib/tests/test_axes.py::test_multiplot_autoscale", "lib/matplotlib/tests/test_axes.py::test_sharing_does_not_link_positions", "lib/matplotlib/tests/test_axes.py::test_shared_axes_clear[png]", "lib/matplotlib/tests/test_axes.py::test_shared_axes_retick", "lib/matplotlib/tests/test_axes.py::test_ylabel_ha_with_position[left]", "lib/matplotlib/tests/test_axes.py::test_ylabel_ha_with_position[center]", "lib/matplotlib/tests/test_axes.py::test_ylabel_ha_with_position[right]", "lib/matplotlib/tests/test_axes.py::test_bar_label_location_vertical", "lib/matplotlib/tests/test_axes.py::test_bar_label_location_vertical_yinverted", "lib/matplotlib/tests/test_axes.py::test_bar_label_location_horizontal", "lib/matplotlib/tests/test_axes.py::test_bar_label_location_horizontal_yinverted", "lib/matplotlib/tests/test_axes.py::test_bar_label_location_horizontal_xinverted", "lib/matplotlib/tests/test_axes.py::test_bar_label_location_horizontal_xyinverted", "lib/matplotlib/tests/test_axes.py::test_bar_label_location_center", "lib/matplotlib/tests/test_axes.py::test_centered_bar_label_label_beyond_limits", "lib/matplotlib/tests/test_axes.py::test_bar_label_location_errorbars", "lib/matplotlib/tests/test_axes.py::test_bar_label_fmt[%.2f]", "lib/matplotlib/tests/test_axes.py::test_bar_label_fmt[{:.2f}]", "lib/matplotlib/tests/test_axes.py::test_bar_label_fmt[format]", 
"lib/matplotlib/tests/test_axes.py::test_bar_label_fmt_error", "lib/matplotlib/tests/test_axes.py::test_bar_label_labels", "lib/matplotlib/tests/test_axes.py::test_bar_label_nan_ydata", "lib/matplotlib/tests/test_axes.py::test_bar_label_nan_ydata_inverted", "lib/matplotlib/tests/test_axes.py::test_nan_barlabels", "lib/matplotlib/tests/test_axes.py::test_patch_bounds", "lib/matplotlib/tests/test_axes.py::test_warn_ignored_scatter_kwargs", "lib/matplotlib/tests/test_axes.py::test_artist_sublists", "lib/matplotlib/tests/test_axes.py::test_empty_line_plots", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[None-f-'f'", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[None-o+-'o\\\\+'", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[None-:--':-'", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[None-rk-'rk'", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[None-:o-r-':o-r'", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[data1-f-'f'", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[data1-o+-'o\\\\+'", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[data1-:--':-'", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[data1-rk-'rk'", "lib/matplotlib/tests/test_axes.py::test_plot_format_errors[data1-:o-r-':o-r'", "lib/matplotlib/tests/test_axes.py::test_plot_format", "lib/matplotlib/tests/test_axes.py::test_automatic_legend", "lib/matplotlib/tests/test_axes.py::test_plot_errors", "lib/matplotlib/tests/test_axes.py::test_clim", "lib/matplotlib/tests/test_axes.py::test_bezier_autoscale", "lib/matplotlib/tests/test_axes.py::test_small_autoscale", "lib/matplotlib/tests/test_axes.py::test_get_xticklabel", "lib/matplotlib/tests/test_axes.py::test_bar_leading_nan", "lib/matplotlib/tests/test_axes.py::test_bar_all_nan[png]", "lib/matplotlib/tests/test_axes.py::test_extent_units[png]", "lib/matplotlib/tests/test_axes.py::test_cla_clears_children_axes_and_fig", "lib/matplotlib/tests/test_axes.py::test_scatter_color_repr_error", "lib/matplotlib/tests/test_axes.py::test_zorder_and_explicit_rasterization", "lib/matplotlib/tests/test_axes.py::test_preset_clip_paths[png]", "lib/matplotlib/tests/test_axes.py::test_rc_axes_label_formatting", "lib/matplotlib/tests/test_axes.py::test_ecdf[png]", "lib/matplotlib/tests/test_axes.py::test_ecdf_invalid", "lib/matplotlib/tests/test_axes.py::test_fill_between_axes_limits", "lib/matplotlib/tests/test_axes.py::test_tick_param_labelfont", "lib/matplotlib/tests/test_axes.py::test_set_secondary_axis_color", "lib/matplotlib/tests/test_axes.py::test_xylim_changed_shared", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_invisible_axes[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_grid_off[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_invisible_ticks_axis[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_axis_positions[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_aspects[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_aspects_adjust_box[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_axes3d_repr", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_axes3d_primary_views[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_bar3d[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_bar3d_colors", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_bar3d_shaded[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_bar3d_notshaded[png]", 
"lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_bar3d_lightsource", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_contour3d[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_contour3d_extend3d[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_contourf3d[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_contourf3d_fill[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_contourf3d_extend[png-both-levels0]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_contourf3d_extend[png-min-levels1]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_contourf3d_extend[png-max-levels2]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_tricontour[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_contour3d_1d_input", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_lines3d[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_plot_scalar[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_invalid_line_data", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_mixedsubplots[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_tight_layout_text[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_scatter3d[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_scatter3d_color[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_scatter3d_linewidth[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_scatter3d_linewidth_modification[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_scatter3d_modification[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_scatter3d_sorting[png-True]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_scatter3d_sorting[png-False]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_marker_draw_order_data_reversed[png--50]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_marker_draw_order_data_reversed[png-130]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_marker_draw_order_view_rotated[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_plot_3d_from_2d[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_surface3d[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_surface3d_label_offset_tick_position[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_surface3d_shaded[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_surface3d_masked[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_plot_surface_None_arg[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_surface3d_masked_strides[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_text3d[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_text3d_modification[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_trisurf3d[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_trisurf3d_shaded[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_wireframe3d[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_wireframe3dzerocstride[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_wireframe3dzerorstride[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_wireframe3dzerostrideraises", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_mixedsamplesraises", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_quiver3d[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_quiver3d_empty[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_quiver3d_masked[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_patch_modification", 
"lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_patch_collection_modification[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_poly3dcollection_verts_validation", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_poly3dcollection_closed[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_poly_collection_2d_to_3d_empty", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_poly3dcollection_alpha[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_add_collection3d_zs_array[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_add_collection3d_zs_scalar[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_axes3d_labelpad[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_axes3d_cla[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_axes3d_rotated[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_plotsurface_1d_raises", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_proj_transform", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_proj_axes_cube[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_proj_axes_cube_ortho[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_world", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_autoscale", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_unautoscale[True-x]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_unautoscale[True-y]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_unautoscale[True-z]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_unautoscale[False-x]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_unautoscale[False-y]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_unautoscale[False-z]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_unautoscale[None-x]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_unautoscale[None-y]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_unautoscale[None-z]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_axes3d_focal_length_checks", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_axes3d_focal_length[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_axes3d_ortho[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_axes3d_isometric[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_invalid_axes_limits[set_xlim3d-left-inf]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_invalid_axes_limits[set_xlim3d-left-nan]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_invalid_axes_limits[set_xlim3d-right-inf]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_invalid_axes_limits[set_xlim3d-right-nan]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_invalid_axes_limits[set_ylim3d-bottom-inf]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_invalid_axes_limits[set_ylim3d-bottom-nan]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_invalid_axes_limits[set_ylim3d-top-inf]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_invalid_axes_limits[set_ylim3d-top-nan]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_invalid_axes_limits[set_zlim3d-bottom-inf]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_invalid_axes_limits[set_zlim3d-bottom-nan]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_invalid_axes_limits[set_zlim3d-top-inf]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_invalid_axes_limits[set_zlim3d-top-nan]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::TestVoxels::test_simple[png]", 
"lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::TestVoxels::test_edge_style[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::TestVoxels::test_named_colors[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::TestVoxels::test_rgb_data[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::TestVoxels::test_alpha[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::TestVoxels::test_xyz[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::TestVoxels::test_calling_conventions", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_line3d_set_get_data_3d", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_inverted[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_inverted_cla", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_ax3d_tickcolour", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_ticklabel_format[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_quiver3D_smoke[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_minor_ticks[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_errorbar3d_errorevery[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_errorbar3d[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_stem3d[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_equal_box_aspect[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_colorbar_pos", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_inverted_zaxis", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_set_zlim", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_shared_view[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_shared_axes_retick", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_pan", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_toolbar_zoom_pan[zoom-1-None-expected0]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_toolbar_zoom_pan[zoom-1-x-expected1]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_toolbar_zoom_pan[zoom-1-y-expected2]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_toolbar_zoom_pan[zoom-3-None-expected3]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_toolbar_zoom_pan[pan-1-None-expected4]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_toolbar_zoom_pan[pan-1-x-expected5]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_toolbar_zoom_pan[pan-1-y-expected6]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_scalarmap_update[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_subfigure_simple", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_computed_zorder[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_format_coord", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_get_axis_position", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_margins", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_margins_errors[ValueError-args0-kwargs0-margin", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_margins_errors[ValueError-args1-kwargs1-margin", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_margins_errors[ValueError-args2-kwargs2-margin", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_margins_errors[ValueError-args3-kwargs3-margin", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_margins_errors[ValueError-args4-kwargs4-margin", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_margins_errors[ValueError-args5-kwargs5-margin", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_margins_errors[TypeError-args6-kwargs6-Cannot", 
"lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_margins_errors[TypeError-args7-kwargs7-Cannot", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_margins_errors[TypeError-args8-kwargs8-Cannot", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_margins_errors[TypeError-args9-kwargs9-Must", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_text_3d[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_draw_single_lines_from_Nx1", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_pathpatch_3d[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_scatter_spiral[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_Poly3DCollection_get_facecolor", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_Poly3DCollection_get_edgecolor", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_view_init_vertical_axis[z-proj_expected0-axis_lines_expected0-tickdirs_expected0]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_view_init_vertical_axis[y-proj_expected1-axis_lines_expected1-tickdirs_expected1]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_view_init_vertical_axis[x-proj_expected2-axis_lines_expected2-tickdirs_expected2]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_arc_pathpatch[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_panecolor_rcparams[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_mutating_input_arrays_y_and_z[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_scatter_masked_color", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_surface3d_zsort_inf[png]", "lib/mpl_toolkits/mplot3d/tests/test_axes3d.py::test_Poly3DCollection_init_value_error"]
|
a4dca24d04f928a9e614db403c716237446140b2
|
{"first_commit_time": 1689142968.0, "pr_title": "Added get_xmargin(), get_ymargin() and get_zmargin() and tests.", "pr_body": "## PR summary\r\nCloses #26281 by adding `get_xmargin()`, `get_ymargin()` for` _AxesBase` and `get_zmargin()` for `Axes3D`, as well as tests for them. \r\n## PR checklist\r\n<!-- Please mark any checkboxes that do not apply to this PR as [N/A].-->\r\n\r\n- [x] \"closes #0000\" is in the body of the PR description to [link the related issue](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue)\r\n- [x] new and changed code is [tested](https://matplotlib.org/devdocs/devel/testing.html)\r\n- [N/A] *Plotting related* features are demonstrated in an [example](https://matplotlib.org/devdocs/devel/documenting_mpl.html#writing-examples-and-tutorials)\r\n- [ ] *New Features* and *API Changes* are noted with a [directive and release note](https://matplotlib.org/devdocs/devel/coding_guide.html#new-features-and-api-changes)\r\n- [ ] Documentation complies with [general](https://matplotlib.org/devdocs/devel/documenting_mpl.html#writing-rest-pages) and [docstring](https://matplotlib.org/devdocs/devel/documenting_mpl.html#writing-docstrings) guidelines\r\n\r\n<!--\r\nThank you so much for your PR! To help us review your contribution, please\r\nconsider the following points:\r\n\r\n- A development guide is available at https://matplotlib.org/devdocs/devel/index.html.\r\n\r\n- Help with git and github is available at https://matplotlib.org/devdocs/devel/development_workflow.html\r\n\r\n- Create a separate branch for your changes and open the PR from this branch. Please avoid working on `main`.\r\n\r\n- The PR title should summarize the changes, for example \"Raise ValueError on\r\n non-numeric input to set_xlim\". Avoid non-descriptive titles such as\r\n \"Addresses issue #8576\".\r\n\r\n- The summary should provide at least 1-2 sentences describing the pull request\r\n in detail (Why is this change required? What problem does it solve?) and\r\n link to any relevant issues.\r\n\r\n- If you are contributing fixes to docstrings, please pay attention to\r\n https://matplotlib.org/stable/devel/documenting_mpl.html#formatting-conventions. In particular,\r\n note the difference between using single backquotes, double backquotes, and\r\n asterisks in the markup.\r\n\r\nWe understand that PRs can sometimes be overwhelming, especially as the\r\nreviews start coming in. Please let us know if the reviews are unclear or\r\nthe recommended next step seems overly demanding, if you would like help in\r\naddressing a reviewer's comments, or if you have been waiting too long to hear\r\nback on your PR.\r\n-->\r\n", "pr_timeline": [{"time": 1691239786.0, "comment": "Hi @turnipseason is this one waiting for review? If so, please hit the \"Ready for review\" button. PRs marked as draft tend to be skipped over."}, {"time": 1691734816.0, "comment": "Thanks for the update @turnipseason - looks good.\r\n\r\nI just learned (on gitter) that the new methods also need adding here so that they appear in the documentation.\r\nhttps://github.com/matplotlib/matplotlib/blob/main/doc/api/axes_api.rst?plain=1\r\nhttps://github.com/matplotlib/matplotlib/blob/main/doc/api/toolkits/mplot3d/axes3d.rst?plain=1\r\n\r\nFor most classes this is automatic and the methods appear in alphabetical order. For Axes we add them manually so we can group similar things together."}, {"time": 1692902323.0, "comment": "Thank you so much for your persistence, this looks good to go. 
Would you like one of us to squash merge or would you like to try rebasing down to 1 commit? Either choice is fine, and we can help if you'd like to do the latter. "}, {"time": 1692910482.0, "comment": "> Thank you so much for your persistence, this looks good to go. Would you like one of us to squash merge or would you like to try rebasing down to 1 commit? Either choice is fine, and we can help if you'd like to do the latter.\r\n\r\nGlad I could help! I'm a little bit afraid to screw things up with git\ud83d\ude05. So I think it's better if you squash merge it. I'll read more on interactive rebasing (that's what would be used, right?) and try it some other time. "}, {"time": 1692914160.0, "comment": "> I'll read more on interactive rebasing (that's what would be used, right?\n\nOr regular rebasing, yup! "}], "issues": {"26281": {"issue_title": "[ENH]: Add get_xmargin, get_ymargin, get_zmargin axes methods", "issue_body": "### Problem\n\nCurrently, I think the only public API to retrieve the margins settings on an Axes is ax.margins(), which has a somewhat peculiar API (basically inherited from pyplot/matlab); adding get_xmargin/get_ymargin/get_zmargin Axes methods would be nice (there's already the corresponding setters).\r\n\r\nBonus points for moving the private _xmargin/_ymargin/_zmargin to the Axis instances, adding Axis.{get,set}_margin, and using axis_method_wrapper, though I haven't checked it'll actually work.\n\n### Proposed solution\n\n_No response_", "issue_timeline": [{"time": 1689123358.0, "comment": "Hi! \r\nDoes zmargin exist? I couldn't find it -- both the public docs and the code in _base.py only have xmargin and ymargin. Am I missing something?"}, {"time": 1689137939.0, "comment": "It is in Axes3D."}]}}}
|
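The matplotlib record above tracks issue #26281, which asks for public `get_xmargin`/`get_ymargin`/`get_zmargin` accessors to complement the existing setters. The merged patch itself is not reproduced in this excerpt, so the sketch below is only an illustration, under that assumption, of what such trivial getters over the already-existing private `_xmargin`/`_ymargin` attributes typically look like; the class and attribute defaults here are hypothetical.

```python
# Illustrative sketch only -- NOT the actual matplotlib patch (which is not
# shown in this record). Issue #26281 asks for public getters mirroring the
# private _xmargin/_ymargin attributes that margins()/set_xmargin() already set.
class AxesMarginSketch:
    def __init__(self, xmargin: float = 0.05, ymargin: float = 0.05):
        self._xmargin = xmargin  # padding fraction applied to the x data limits
        self._ymargin = ymargin  # padding fraction applied to the y data limits

    def get_xmargin(self) -> float:
        """Return the x-axis autoscaling margin (a plain float)."""
        return self._xmargin

    def get_ymargin(self) -> float:
        """Return the y-axis autoscaling margin (a plain float)."""
        return self._ymargin


# The getters simply echo whatever the corresponding setters stored.
ax = AxesMarginSketch(xmargin=0.1)
assert ax.get_xmargin() == 0.1
```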
open-mmlab/mmengine
| 609
|
https://github.com/open-mmlab/mmengine/pull/609
|
open-mmlab__mmengine-609
|
[]
|
bc37c838d424b3d9ce5cbd4859540cdc03204026
|
diff --git a/docs/en/api/utils.rst b/docs/en/api/utils.rst
index 92466352a5..681e15d2c0 100644
--- a/docs/en/api/utils.rst
+++ b/docs/en/api/utils.rst
@@ -109,6 +109,7 @@ Miscellaneous
to_ntuple
check_prerequisites
deprecated_api_warning
+ deprecated_function
has_method
is_method_overridden
import_modules_from_strings
diff --git a/docs/zh_cn/api/utils.rst b/docs/zh_cn/api/utils.rst
index 92466352a5..681e15d2c0 100644
--- a/docs/zh_cn/api/utils.rst
+++ b/docs/zh_cn/api/utils.rst
@@ -109,6 +109,7 @@ Miscellaneous
to_ntuple
check_prerequisites
deprecated_api_warning
+ deprecated_function
has_method
is_method_overridden
import_modules_from_strings
diff --git a/mmengine/utils/misc.py b/mmengine/utils/misc.py
index cdb24d8c52..d7fa1fb39b 100644
--- a/mmengine/utils/misc.py
+++ b/mmengine/utils/misc.py
@@ -2,7 +2,10 @@
import collections.abc
import functools
import itertools
+import logging
+import re
import subprocess
+import textwrap
import warnings
from collections import abc
from importlib import import_module
@@ -387,3 +390,72 @@ def has_method(obj: object, method: str) -> bool:
bool: True if the object has the method else False.
"""
return hasattr(obj, method) and callable(getattr(obj, method))
+
+
+def deprecated_function(since: str, removed_in: str,
+ instructions: str) -> Callable:
+ """Marks functions as deprecated.
+
+ Throw a warning when a deprecated function is called, and add a note in the
+ docstring. Modified from https://github.com/pytorch/pytorch/blob/master/torch/onnx/_deprecation.py
+
+ Args:
+ since (str): The version when the function was first deprecated.
+ removed_in (str): The version when the function will be removed.
+ instructions (str): The action users should take.
+
+ Returns:
+ Callable: A new function, which will be deprecated soon.
+ """ # noqa: E501
+ from mmengine import print_log
+
+ def decorator(function):
+
+ @functools.wraps(function)
+ def wrapper(*args, **kwargs):
+ print_log(
+ f"'{function.__module__}.{function.__name__}' "
+ f'is deprecated in version {since} and will be '
+ f'removed in version {removed_in}. Please {instructions}.',
+ logger='current',
+ level=logging.WARNING,
+ )
+ return function(*args, **kwargs)
+
+ indent = ' '
+ # Add a deprecation note to the docstring.
+ docstring = function.__doc__ or ''
+ # Add a note to the docstring.
+ deprecation_note = textwrap.dedent(f"""\
+ .. deprecated:: {since}
+ Deprecated and will be removed in version {removed_in}.
+ Please {instructions}.
+ """)
+ # Split docstring at first occurrence of newline
+ pattern = '\n\n'
+ summary_and_body = re.split(pattern, docstring, 1)
+
+ if len(summary_and_body) > 1:
+ summary, body = summary_and_body
+ body = textwrap.indent(textwrap.dedent(body), indent)
+ summary = '\n'.join(
+ [textwrap.dedent(string) for string in summary.split('\n')])
+ summary = textwrap.indent(summary, prefix=indent)
+ # Dedent the body. We cannot do this with the presence of the
+ # summary because the body contains leading whitespaces when the
+ # summary does not.
+ new_docstring_parts = [
+ deprecation_note, '\n\n', summary, '\n\n', body
+ ]
+ else:
+ summary = summary_and_body[0]
+ summary = '\n'.join(
+ [textwrap.dedent(string) for string in summary.split('\n')])
+ summary = textwrap.indent(summary, prefix=indent)
+ new_docstring_parts = [deprecation_note, '\n\n', summary]
+
+ wrapper.__doc__ = ''.join(new_docstring_parts)
+
+ return wrapper
+
+ return decorator
|
diff --git a/tests/test_utils/test_misc.py b/tests/test_utils/test_misc.py
new file mode 100644
index 0000000000..95d7a006bd
--- /dev/null
+++ b/tests/test_utils/test_misc.py
@@ -0,0 +1,285 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+import pytest
+
+from mmengine import MMLogger
+# yapf: disable
+from mmengine.utils.misc import (concat_list, deprecated_api_warning,
+ deprecated_function, has_method,
+ import_modules_from_strings, is_list_of,
+ is_method_overridden, is_seq_of, is_tuple_of,
+ iter_cast, list_cast, requires_executable,
+ requires_package, slice_list, to_1tuple,
+ to_2tuple, to_3tuple, to_4tuple, to_ntuple,
+ tuple_cast)
+
+# yapf: enable
+
+
+def test_to_ntuple():
+ single_number = 2
+ assert to_1tuple(single_number) == (single_number, )
+ assert to_2tuple(single_number) == (single_number, single_number)
+ assert to_3tuple(single_number) == (single_number, single_number,
+ single_number)
+ assert to_4tuple(single_number) == (single_number, single_number,
+ single_number, single_number)
+ assert to_ntuple(5)(single_number) == (single_number, single_number,
+ single_number, single_number,
+ single_number)
+ assert to_ntuple(6)(single_number) == (single_number, single_number,
+ single_number, single_number,
+ single_number, single_number)
+
+
+def test_iter_cast():
+ assert list_cast([1, 2, 3], int) == [1, 2, 3]
+ assert list_cast(['1.1', 2, '3'], float) == [1.1, 2.0, 3.0]
+ assert list_cast([1, 2, 3], str) == ['1', '2', '3']
+ assert tuple_cast((1, 2, 3), str) == ('1', '2', '3')
+ assert next(iter_cast([1, 2, 3], str)) == '1'
+ with pytest.raises(TypeError):
+ iter_cast([1, 2, 3], '')
+ with pytest.raises(TypeError):
+ iter_cast(1, str)
+
+
+def test_is_seq_of():
+ assert is_seq_of([1.0, 2.0, 3.0], float)
+ assert is_seq_of([(1, ), (2, ), (3, )], tuple)
+ assert is_seq_of((1.0, 2.0, 3.0), float)
+ assert is_list_of([1.0, 2.0, 3.0], float)
+ assert not is_seq_of((1.0, 2.0, 3.0), float, seq_type=list)
+ assert not is_tuple_of([1.0, 2.0, 3.0], float)
+ assert not is_seq_of([1.0, 2, 3], int)
+ assert not is_seq_of((1.0, 2, 3), int)
+
+
+def test_slice_list():
+ in_list = [1, 2, 3, 4, 5, 6]
+ assert slice_list(in_list, [1, 2, 3]) == [[1], [2, 3], [4, 5, 6]]
+ assert slice_list(in_list, [len(in_list)]) == [in_list]
+ with pytest.raises(TypeError):
+ slice_list(in_list, 2.0)
+ with pytest.raises(ValueError):
+ slice_list(in_list, [1, 2])
+
+
+def test_concat_list():
+ assert concat_list([[1, 2]]) == [1, 2]
+ assert concat_list([[1, 2], [3, 4, 5], [6]]) == [1, 2, 3, 4, 5, 6]
+
+
+def test_requires_package(capsys):
+
+ @requires_package('nnn')
+ def func_a():
+ pass
+
+ @requires_package(['numpy', 'n1', 'n2'])
+ def func_b():
+ pass
+
+ @requires_package('numpy')
+ def func_c():
+ return 1
+
+ with pytest.raises(RuntimeError):
+ func_a()
+ out, _ = capsys.readouterr()
+ assert out == ('Prerequisites "nnn" are required in method "func_a" but '
+ 'not found, please install them first.\n')
+
+ with pytest.raises(RuntimeError):
+ func_b()
+ out, _ = capsys.readouterr()
+ assert out == (
+ 'Prerequisites "n1, n2" are required in method "func_b" but not found,'
+ ' please install them first.\n')
+
+ assert func_c() == 1
+
+
+def test_requires_executable(capsys):
+
+ @requires_executable('nnn')
+ def func_a():
+ pass
+
+ @requires_executable(['ls', 'n1', 'n2'])
+ def func_b():
+ pass
+
+ @requires_executable('mv')
+ def func_c():
+ return 1
+
+ with pytest.raises(RuntimeError):
+ func_a()
+ out, _ = capsys.readouterr()
+ assert out == ('Prerequisites "nnn" are required in method "func_a" but '
+ 'not found, please install them first.\n')
+
+ with pytest.raises(RuntimeError):
+ func_b()
+ out, _ = capsys.readouterr()
+ assert out == (
+ 'Prerequisites "n1, n2" are required in method "func_b" but not found,'
+ ' please install them first.\n')
+
+ assert func_c() == 1
+
+
+def test_import_modules_from_strings():
+ # multiple imports
+ import os.path as osp_
+ import sys as sys_
+ osp, sys = import_modules_from_strings(['os.path', 'sys'])
+ assert osp == osp_
+ assert sys == sys_
+
+ # single imports
+ osp = import_modules_from_strings('os.path')
+ assert osp == osp_
+ # No imports
+ assert import_modules_from_strings(None) is None
+ assert import_modules_from_strings([]) is None
+ assert import_modules_from_strings('') is None
+ # Unsupported types
+ with pytest.raises(TypeError):
+ import_modules_from_strings(1)
+ with pytest.raises(TypeError):
+ import_modules_from_strings([1])
+ # Failed imports
+ with pytest.raises(ImportError):
+ import_modules_from_strings('_not_implemented_module')
+ with pytest.warns(UserWarning):
+ imported = import_modules_from_strings(
+ '_not_implemented_module', allow_failed_imports=True)
+ assert imported is None
+ with pytest.warns(UserWarning):
+ imported = import_modules_from_strings(['os.path', '_not_implemented'],
+ allow_failed_imports=True)
+ assert imported[0] == osp
+ assert imported[1] is None
+
+
+def test_is_method_overridden():
+
+ class Base:
+
+ def foo1():
+ pass
+
+ def foo2():
+ pass
+
+ class Sub(Base):
+
+ def foo1():
+ pass
+
+ # test passing sub class directly
+ assert is_method_overridden('foo1', Base, Sub)
+ assert not is_method_overridden('foo2', Base, Sub)
+
+ # test passing instance of sub class
+ sub_instance = Sub()
+ assert is_method_overridden('foo1', Base, sub_instance)
+ assert not is_method_overridden('foo2', Base, sub_instance)
+
+ # base_class should be a class, not instance
+ base_instance = Base()
+ with pytest.raises(AssertionError):
+ is_method_overridden('foo1', base_instance, sub_instance)
+
+
+def test_has_method():
+
+ class Foo:
+
+ def __init__(self, name):
+ self.name = name
+
+ def print_name(self):
+ print(self.name)
+
+ foo = Foo('foo')
+ assert not has_method(foo, 'name')
+ assert has_method(foo, 'print_name')
+
+
+def test_deprecated_api_warning():
+
+ @deprecated_api_warning(name_dict=dict(old_key='new_key'))
+ def dummy_func(new_key=1):
+ return new_key
+
+ # replace `old_key` to `new_key`
+ assert dummy_func(old_key=2) == 2
+
+ # The expected behavior is to replace the
+ # deprecated key `old_key` to `new_key`,
+ # but got them in the arguments at the same time
+ with pytest.raises(AssertionError):
+ dummy_func(old_key=1, new_key=2)
+
+
+def test_deprecated_function():
+
+ @deprecated_function('0.2.0', '0.3.0', 'toy instruction')
+ def deprecated_demo(arg1: int, arg2: int) -> tuple:
+ """This is a long summary. This is a long summary. This is a long
+ summary. This is a long summary.
+
+ Args:
+ arg1 (int): Long description with a line break. Long description
+ with a line break.
+ arg2 (int): short description.
+
+ Returns:
+ Long description without a line break. Long description without
+ a line break.
+ """
+
+ return arg1, arg2
+
+ MMLogger.get_instance('test_deprecated_function')
+ deprecated_demo(1, 2)
+ # out, _ = capsys.readouterr()
+ # assert "'test_misc.deprecated_demo' is deprecated" in out
+ assert (1, 2) == deprecated_demo(1, 2)
+
+ expected_docstring = \
+ """.. deprecated:: 0.2.0
+ Deprecated and will be removed in version 0.3.0.
+ Please toy instruction.
+
+
+ This is a long summary. This is a long summary. This is a long
+ summary. This is a long summary.
+
+ Args:
+ arg1 (int): Long description with a line break. Long description
+ with a line break.
+ arg2 (int): short description.
+
+ Returns:
+ Long description without a line break. Long description without
+ a line break.
+ """ # noqa: E122
+ assert expected_docstring.strip(' ') == deprecated_demo.__doc__
+ MMLogger._instance_dict.clear()
+
+ # Test with short summary without args.
+ @deprecated_function('0.2.0', '0.3.0', 'toy instruction')
+ def deprecated_demo1():
+ """Short summary."""
+
+ expected_docstring = \
+ """.. deprecated:: 0.2.0
+ Deprecated and will be removed in version 0.3.0.
+ Please toy instruction.
+
+
+ Short summary.""" # noqa: E122
+ assert expected_docstring.strip(' ') == deprecated_demo1.__doc__
| 2022-10-14T07:40:30
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"docs/en/api/utils.rst": ".. role:: hidden\n :class: hidden-section\n\nmmengine.utils\n===================================\n\n.. contents:: mmengine.utils\n :depth: 2\n :local:\n :backlinks: top\n\n.. currentmodule:: mmengine.utils\n\nManager\n----------------\n\n.. autosummary::\n :toctree: generated\n :nosignatures:\n :template: classtemplate.rst\n\n ManagerMeta\n ManagerMixin\n\nPath\n----------------\n\n.. autosummary::\n :toctree: generated\n :nosignatures:\n\n check_file_exist\n fopen\n is_abs\n is_filepath\n mkdir_or_exist\n scandir\n symlink\n\nPackage\n----------------\n\n.. autosummary::\n :toctree: generated\n :nosignatures:\n\n call_command\n install_package\n get_installed_path\n is_installed\n\nVersion\n----------------\n\n.. autosummary::\n :toctree: generated\n :nosignatures:\n\n digit_version\n get_git_hash\n\nProgress Bar\n----------------\n\n.. autosummary::\n :toctree: generated\n :nosignatures:\n :template: classtemplate.rst\n\n ProgressBar\n\n.. autosummary::\n :toctree: generated\n :nosignatures:\n\n track_iter_progress\n track_parallel_progress\n track_progress\n\n\nMiscellaneous\n----------------\n\n.. autosummary::\n :toctree: generated\n :nosignatures:\n :template: classtemplate.rst\n\n Timer\n TimerError\n\n.. autosummary::\n :toctree: generated\n :nosignatures:\n\n is_list_of\n is_tuple_of\n is_seq_of\n is_str\n iter_cast\n list_cast\n tuple_cast\n concat_list\n slice_list\n to_1tuple\n to_2tuple\n to_3tuple\n to_4tuple\n to_ntuple\n check_prerequisites\n deprecated_api_warning\n has_method\n is_method_overridden\n import_modules_from_strings\n requires_executable\n requires_package\n check_time\n", "docs/zh_cn/api/utils.rst": ".. role:: hidden\n :class: hidden-section\n\nmmengine.utils\n===================================\n\n.. contents:: mmengine.utils\n :depth: 2\n :local:\n :backlinks: top\n\n.. currentmodule:: mmengine.utils\n\nManager\n----------------\n\n.. autosummary::\n :toctree: generated\n :nosignatures:\n :template: classtemplate.rst\n\n ManagerMeta\n ManagerMixin\n\nPath\n----------------\n\n.. autosummary::\n :toctree: generated\n :nosignatures:\n\n check_file_exist\n fopen\n is_abs\n is_filepath\n mkdir_or_exist\n scandir\n symlink\n\nPackage\n----------------\n\n.. autosummary::\n :toctree: generated\n :nosignatures:\n\n call_command\n install_package\n get_installed_path\n is_installed\n\nVersion\n----------------\n\n.. autosummary::\n :toctree: generated\n :nosignatures:\n\n digit_version\n get_git_hash\n\nProgress Bar\n----------------\n\n.. autosummary::\n :toctree: generated\n :nosignatures:\n :template: classtemplate.rst\n\n ProgressBar\n\n.. autosummary::\n :toctree: generated\n :nosignatures:\n\n track_iter_progress\n track_parallel_progress\n track_progress\n\n\nMiscellaneous\n----------------\n\n.. autosummary::\n :toctree: generated\n :nosignatures:\n :template: classtemplate.rst\n\n Timer\n TimerError\n\n.. autosummary::\n :toctree: generated\n :nosignatures:\n\n is_list_of\n is_tuple_of\n is_seq_of\n is_str\n iter_cast\n list_cast\n tuple_cast\n concat_list\n slice_list\n to_1tuple\n to_2tuple\n to_3tuple\n to_4tuple\n to_ntuple\n check_prerequisites\n deprecated_api_warning\n has_method\n is_method_overridden\n import_modules_from_strings\n requires_executable\n requires_package\n check_time\n", "mmengine/utils/misc.py": "# Copyright (c) OpenMMLab. 
All rights reserved.\nimport collections.abc\nimport functools\nimport itertools\nimport subprocess\nimport warnings\nfrom collections import abc\nfrom importlib import import_module\nfrom inspect import getfullargspec\nfrom itertools import repeat\nfrom typing import Any, Callable, Optional, Type, Union\n\n\n# From PyTorch internals\ndef _ntuple(n):\n\n def parse(x):\n if isinstance(x, collections.abc.Iterable):\n return x\n return tuple(repeat(x, n))\n\n return parse\n\n\nto_1tuple = _ntuple(1)\nto_2tuple = _ntuple(2)\nto_3tuple = _ntuple(3)\nto_4tuple = _ntuple(4)\nto_ntuple = _ntuple\n\n\ndef is_str(x):\n \"\"\"Whether the input is an string instance.\n\n Note: This method is deprecated since python 2 is no longer supported.\n \"\"\"\n return isinstance(x, str)\n\n\ndef import_modules_from_strings(imports, allow_failed_imports=False):\n \"\"\"Import modules from the given list of strings.\n\n Args:\n imports (list | str | None): The given module names to be imported.\n allow_failed_imports (bool): If True, the failed imports will return\n None. Otherwise, an ImportError is raise. Default: False.\n\n Returns:\n list[module] | module | None: The imported modules.\n\n Examples:\n >>> osp, sys = import_modules_from_strings(\n ... ['os.path', 'sys'])\n >>> import os.path as osp_\n >>> import sys as sys_\n >>> assert osp == osp_\n >>> assert sys == sys_\n \"\"\"\n if not imports:\n return\n single_import = False\n if isinstance(imports, str):\n single_import = True\n imports = [imports]\n if not isinstance(imports, list):\n raise TypeError(\n f'custom_imports must be a list but got type {type(imports)}')\n imported = []\n for imp in imports:\n if not isinstance(imp, str):\n raise TypeError(\n f'{imp} is of type {type(imp)} and cannot be imported.')\n try:\n imported_tmp = import_module(imp)\n except ImportError:\n if allow_failed_imports:\n warnings.warn(f'{imp} failed to import and is ignored.',\n UserWarning)\n imported_tmp = None\n else:\n raise ImportError(f'Failed to import {imp}')\n imported.append(imported_tmp)\n if single_import:\n imported = imported[0]\n return imported\n\n\ndef iter_cast(inputs, dst_type, return_type=None):\n \"\"\"Cast elements of an iterable object into some type.\n\n Args:\n inputs (Iterable): The input object.\n dst_type (type): Destination type.\n return_type (type, optional): If specified, the output object will be\n converted to this type, otherwise an iterator.\n\n Returns:\n iterator or specified type: The converted object.\n \"\"\"\n if not isinstance(inputs, abc.Iterable):\n raise TypeError('inputs must be an iterable object')\n if not isinstance(dst_type, type):\n raise TypeError('\"dst_type\" must be a valid type')\n\n out_iterable = map(dst_type, inputs)\n\n if return_type is None:\n return out_iterable\n else:\n return return_type(out_iterable)\n\n\ndef list_cast(inputs, dst_type):\n \"\"\"Cast elements of an iterable object into a list of some type.\n\n A partial method of :func:`iter_cast`.\n \"\"\"\n return iter_cast(inputs, dst_type, return_type=list)\n\n\ndef tuple_cast(inputs, dst_type):\n \"\"\"Cast elements of an iterable object into a tuple of some type.\n\n A partial method of :func:`iter_cast`.\n \"\"\"\n return iter_cast(inputs, dst_type, return_type=tuple)\n\n\ndef is_seq_of(seq: Any,\n expected_type: Union[Type, tuple],\n seq_type: Type = None) -> bool:\n \"\"\"Check whether it is a sequence of some type.\n\n Args:\n seq (Sequence): The sequence to be checked.\n expected_type (type or tuple): Expected type of sequence items.\n seq_type 
(type, optional): Expected sequence type. Defaults to None.\n\n Returns:\n bool: Return True if ``seq`` is valid else False.\n\n Examples:\n >>> from mmengine.utils import is_seq_of\n >>> seq = ['a', 'b', 'c']\n >>> is_seq_of(seq, str)\n True\n >>> is_seq_of(seq, int)\n False\n \"\"\"\n if seq_type is None:\n exp_seq_type = abc.Sequence\n else:\n assert isinstance(seq_type, type)\n exp_seq_type = seq_type\n if not isinstance(seq, exp_seq_type):\n return False\n for item in seq:\n if not isinstance(item, expected_type):\n return False\n return True\n\n\ndef is_list_of(seq, expected_type):\n \"\"\"Check whether it is a list of some type.\n\n A partial method of :func:`is_seq_of`.\n \"\"\"\n return is_seq_of(seq, expected_type, seq_type=list)\n\n\ndef is_tuple_of(seq, expected_type):\n \"\"\"Check whether it is a tuple of some type.\n\n A partial method of :func:`is_seq_of`.\n \"\"\"\n return is_seq_of(seq, expected_type, seq_type=tuple)\n\n\ndef slice_list(in_list, lens):\n \"\"\"Slice a list into several sub lists by a list of given length.\n\n Args:\n in_list (list): The list to be sliced.\n lens(int or list): The expected length of each out list.\n\n Returns:\n list: A list of sliced list.\n \"\"\"\n if isinstance(lens, int):\n assert len(in_list) % lens == 0\n lens = [lens] * int(len(in_list) / lens)\n if not isinstance(lens, list):\n raise TypeError('\"indices\" must be an integer or a list of integers')\n elif sum(lens) != len(in_list):\n raise ValueError('sum of lens and list length does not '\n f'match: {sum(lens)} != {len(in_list)}')\n out_list = []\n idx = 0\n for i in range(len(lens)):\n out_list.append(in_list[idx:idx + lens[i]])\n idx += lens[i]\n return out_list\n\n\ndef concat_list(in_list):\n \"\"\"Concatenate a list of list into a single list.\n\n Args:\n in_list (list): The list of list to be merged.\n\n Returns:\n list: The concatenated flat list.\n \"\"\"\n return list(itertools.chain(*in_list))\n\n\ndef check_prerequisites(\n prerequisites,\n checker,\n msg_tmpl='Prerequisites \"{}\" are required in method \"{}\" but not '\n 'found, please install them first.'): # yapf: disable\n \"\"\"A decorator factory to check if prerequisites are satisfied.\n\n Args:\n prerequisites (str of list[str]): Prerequisites to be checked.\n checker (callable): The checker method that returns True if a\n prerequisite is meet, False otherwise.\n msg_tmpl (str): The message template with two variables.\n\n Returns:\n decorator: A specific decorator.\n \"\"\"\n\n def wrap(func):\n\n @functools.wraps(func)\n def wrapped_func(*args, **kwargs):\n requirements = [prerequisites] if isinstance(\n prerequisites, str) else prerequisites\n missing = []\n for item in requirements:\n if not checker(item):\n missing.append(item)\n if missing:\n print(msg_tmpl.format(', '.join(missing), func.__name__))\n raise RuntimeError('Prerequisites not meet.')\n else:\n return func(*args, **kwargs)\n\n return wrapped_func\n\n return wrap\n\n\ndef _check_py_package(package):\n try:\n import_module(package)\n except ImportError:\n return False\n else:\n return True\n\n\ndef _check_executable(cmd):\n if subprocess.call(f'which {cmd}', shell=True) != 0:\n return False\n else:\n return True\n\n\ndef requires_package(prerequisites):\n \"\"\"A decorator to check if some python packages are installed.\n\n Example:\n >>> @requires_package('numpy')\n >>> func(arg1, args):\n >>> return numpy.zeros(1)\n array([0.])\n >>> @requires_package(['numpy', 'non_package'])\n >>> func(arg1, args):\n >>> return numpy.zeros(1)\n 
ImportError\n \"\"\"\n return check_prerequisites(prerequisites, checker=_check_py_package)\n\n\ndef requires_executable(prerequisites):\n \"\"\"A decorator to check if some executable files are installed.\n\n Example:\n >>> @requires_executable('ffmpeg')\n >>> func(arg1, args):\n >>> print(1)\n 1\n \"\"\"\n return check_prerequisites(prerequisites, checker=_check_executable)\n\n\ndef deprecated_api_warning(name_dict: dict,\n cls_name: Optional[str] = None) -> Callable:\n \"\"\"A decorator to check if some arguments are deprecate and try to replace\n deprecate src_arg_name to dst_arg_name.\n\n Args:\n name_dict(dict):\n key (str): Deprecate argument names.\n val (str): Expected argument names.\n\n Returns:\n func: New function.\n \"\"\"\n\n def api_warning_wrapper(old_func):\n\n @functools.wraps(old_func)\n def new_func(*args, **kwargs):\n # get the arg spec of the decorated method\n args_info = getfullargspec(old_func)\n # get name of the function\n func_name = old_func.__name__\n if cls_name is not None:\n func_name = f'{cls_name}.{func_name}'\n if args:\n arg_names = args_info.args[:len(args)]\n for src_arg_name, dst_arg_name in name_dict.items():\n if src_arg_name in arg_names:\n warnings.warn(\n f'\"{src_arg_name}\" is deprecated in '\n f'`{func_name}`, please use \"{dst_arg_name}\" '\n 'instead', DeprecationWarning)\n arg_names[arg_names.index(src_arg_name)] = dst_arg_name\n if kwargs:\n for src_arg_name, dst_arg_name in name_dict.items():\n if src_arg_name in kwargs:\n assert dst_arg_name not in kwargs, (\n f'The expected behavior is to replace '\n f'the deprecated key `{src_arg_name}` to '\n f'new key `{dst_arg_name}`, but got them '\n f'in the arguments at the same time, which '\n f'is confusing. `{src_arg_name} will be '\n f'deprecated in the future, please '\n f'use `{dst_arg_name}` instead.')\n\n warnings.warn(\n f'\"{src_arg_name}\" is deprecated in '\n f'`{func_name}`, please use \"{dst_arg_name}\" '\n 'instead', DeprecationWarning)\n kwargs[dst_arg_name] = kwargs.pop(src_arg_name)\n\n # apply converted arguments to the decorated method\n output = old_func(*args, **kwargs)\n return output\n\n return new_func\n\n return api_warning_wrapper\n\n\ndef is_method_overridden(method: str, base_class: type,\n derived_class: Union[type, Any]) -> bool:\n \"\"\"Check if a method of base class is overridden in derived class.\n\n Args:\n method (str): the method name to check.\n base_class (type): the class of the base class.\n derived_class (type | Any): the class or instance of the derived class.\n \"\"\"\n assert isinstance(base_class, type), \\\n \"base_class doesn't accept instance, Please pass class instead.\"\n\n if not isinstance(derived_class, type):\n derived_class = derived_class.__class__\n\n base_method = getattr(base_class, method)\n derived_method = getattr(derived_class, method)\n return derived_method != base_method\n\n\ndef has_method(obj: object, method: str) -> bool:\n \"\"\"Check whether the object has a method.\n\n Args:\n method (str): The method name to check.\n obj (object): The object to check.\n\n Returns:\n bool: True if the object has the method else False.\n \"\"\"\n return hasattr(obj, method) and callable(getattr(obj, method))\n"}
|
diff --git a/docs/en/api/utils.rst b/docs/en/api/utils.rst
index 92466352a5..681e15d2c0 100644
--- a/docs/en/api/utils.rst
+++ b/docs/en/api/utils.rst
@@ -109,6 +109,7 @@ Miscellaneous
to_ntuple
check_prerequisites
deprecated_api_warning
+ deprecated_function
has_method
is_method_overridden
import_modules_from_strings
diff --git a/docs/zh_cn/api/utils.rst b/docs/zh_cn/api/utils.rst
index 92466352a5..681e15d2c0 100644
--- a/docs/zh_cn/api/utils.rst
+++ b/docs/zh_cn/api/utils.rst
@@ -109,6 +109,7 @@ Miscellaneous
to_ntuple
check_prerequisites
deprecated_api_warning
+ deprecated_function
has_method
is_method_overridden
import_modules_from_strings
|
{"mmengine/utils/misc.py": [{"type": "function", "name": "deprecated_function", "lines": [395, 461], "signature": "def deprecated_function(since: str, removed_in: str, instructions: str) -> Callable:", "doc": "Marks functions as deprecated.\n\nThrow a warning when a deprecated function is called, and add a note in the\ndocstring. Modified from https://github.com/pytorch/pytorch/blob/master/torch/onnx/_deprecation.py\n\nArgs:\n since (str): The version when the function was first deprecated.\n removed_in (str): The version when the function will be removed.\n instructions (str): The action users should take.\n\nReturns:\n Callable: A new function, which will be deprecated soon."}, {"type": "function", "name": "deprecated_function.decorator", "lines": [412, 459], "signature": "def decorator(function): @functools.wraps(function)", "doc": ""}, {"type": "function", "name": "deprecated_function.decorator.wrapper", "lines": [415, 423], "signature": "def wrapper(*args, **kwargs):", "doc": ""}]}
| null |
["tests/test_utils/test_misc.py::test_to_ntuple", "tests/test_utils/test_misc.py::test_iter_cast", "tests/test_utils/test_misc.py::test_is_seq_of", "tests/test_utils/test_misc.py::test_slice_list", "tests/test_utils/test_misc.py::test_concat_list", "tests/test_utils/test_misc.py::test_requires_package", "tests/test_utils/test_misc.py::test_requires_executable", "tests/test_utils/test_misc.py::test_import_modules_from_strings", "tests/test_utils/test_misc.py::test_is_method_overridden", "tests/test_utils/test_misc.py::test_has_method", "tests/test_utils/test_misc.py::test_deprecated_api_warning", "tests/test_utils/test_misc.py::test_deprecated_function"]
|
[]
|
53474ef1ba0b166508c231fa525b55b580adf20f
|
{"first_commit_time": 1665733154.0, "pr_title": "[Enhancement] Add a function to mark the deprecated function.", "pr_body": "Thanks for your contribution and we appreciate it a lot. The following instructions would make your pull request more healthy and more easily get feedback. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.\r\n\r\n## Motivation\r\n\r\nAdd a function to mark the deprecated function (copied from https://github.com/pytorch/pytorch/blob/master/torch/onnx/_deprecation.py).\r\n\r\n## Modification\r\n\r\nPlease briefly describe what modification is made in this PR.\r\n\r\n## BC-breaking (Optional)\r\n\r\nDoes the modification introduce changes that break the backward-compatibility of the downstream repos?\r\nIf so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.\r\n\r\n## Use cases (Optional)\r\n\r\nIf this PR introduces a new feature, it is better to list some use cases here, and update the documentation.\r\n\r\n## Checklist\r\n\r\n1. Pre-commit or other linting tools are used to fix the potential lint issues.\r\n2. The modification is covered by complete unit tests. If not, please add more unit test to ensure the correctness.\r\n3. If the modification has potential influence on downstream projects, this PR should be tested with downstream projects, like MMDet or MMCls.\r\n4. The documentation has been modified accordingly, like docstring or example tutorials.\r\n", "pr_timeline": [{"time": 1665737990.0, "comment": "Please add unit test for this modification."}, {"time": 1666587418.0, "comment": "# [Codecov](https://codecov.io/gh/open-mmlab/mmengine/pull/609?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=open-mmlab) Report\n> :exclamation: No coverage uploaded for pull request base (`main@276ca24`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=open-mmlab#section-missing-base-commit).\n> Patch has no changes to coverable lines.\n\n> :exclamation: Current head 3b64aca differs from pull request most recent head 85f747d. Consider uploading reports for the commit 85f747d to get more accurate results\n\n<details><summary>Additional details and impacted files</summary>\n\n\n```diff\n@@ Coverage Diff @@\n## main #609 +/- ##\n=======================================\n Coverage ? 78.49% \n=======================================\n Files ? 126 \n Lines ? 9132 \n Branches ? 1816 \n=======================================\n Hits ? 7168 \n Misses ? 1652 \n Partials ? 312 \n```\n\n| Flag | Coverage \u0394 | |\n|---|---|---|\n| unittests | `78.49% <0.00%> (?)` | |\n\nFlags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=open-mmlab#carryforward-flags-in-the-pull-request-comment) to find out more.\n\n\nHelp us with your feedback. Take ten seconds to tell us [how you rate us](https://about.codecov.io/nps?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=open-mmlab). Have a feature suggestion? 
[Share it here.](https://app.codecov.io/gh/feedback/?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=open-mmlab)\n\n</details>\n\n[:umbrella: View full report at Codecov](https://codecov.io/gh/open-mmlab/mmengine/pull/609?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=open-mmlab). \n:loudspeaker: Do you have feedback about the report comment? [Let us know in this issue](https://about.codecov.io/codecov-pr-comment-feedback/?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=open-mmlab).\n"}], "issues": {}}
|
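The mmengine record above adds a `deprecated_function` decorator; both the implementation and its unit test are shown in the record's patch fields. The sketch below is a minimal usage example that mirrors the shipped test rather than inventing new behaviour: `old_demo` is a hypothetical function name, and, as in the test, an `MMLogger` instance is created first so that `print_log(..., logger='current')` has a logger to write to.

```python
# Usage sketch for the decorator added in the mmengine patch above; it follows
# the repository's own test. `old_demo` is a hypothetical example function.
from mmengine import MMLogger
from mmengine.utils.misc import deprecated_function

MMLogger.get_instance('deprecation_demo')  # the unit test does the same before calling


@deprecated_function(since='0.2.0', removed_in='0.3.0',
                     instructions='use the replacement API instead')
def old_demo(a, b):
    """Add two numbers."""
    return a + b


print(old_demo(1, 2))    # logs a WARNING ("... is deprecated in version 0.2.0 ...") and returns 3
print(old_demo.__doc__)  # the docstring now starts with ".. deprecated:: 0.2.0"
```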
pgmpy/pgmpy
| 1,753
|
https://github.com/pgmpy/pgmpy/pull/1753
|
pgmpy__pgmpy-1753
|
[]
|
7ed0659107c9b3768208d17890a28778001320e9
|
diff --git a/pgmpy/utils/__init__.py b/pgmpy/utils/__init__.py
index 7803135af..adf8bb667 100644
--- a/pgmpy/utils/__init__.py
+++ b/pgmpy/utils/__init__.py
@@ -1,9 +1,8 @@
-from .mathext import cartesian, sample_discrete
-from .state_name import StateNameMixin
from .check_functions import _check_1d_array_object, _check_length_equal
+from .mathext import cartesian, sample_discrete
from .optimizer import optimize, pinverse
-from .utils import get_example_model
-
+from .state_name import StateNameMixin
+from .utils import discretize, get_example_model
__all__ = [
"cartesian",
@@ -14,4 +13,5 @@
"optimize",
"pinverse",
"get_example_model",
+ "discretize",
]
diff --git a/pgmpy/utils/utils.py b/pgmpy/utils/utils.py
index 7d53e1927..35b465a8c 100644
--- a/pgmpy/utils/utils.py
+++ b/pgmpy/utils/utils.py
@@ -1,5 +1,7 @@
import gzip
+import pandas as pd
+
try:
from importlib.resources import files
except:
@@ -112,3 +114,64 @@ def get_example_model(model):
content = f.read()
reader = BIFReader(string=content.decode("utf-8"), n_jobs=1)
return reader.get_model()
+
+
+def discretize(data, cardinality, labels=dict(), method="rounding"):
+ """
+ Discretizes a given continuous dataset.
+
+ Parameters
+ ----------
+ data: pandas.DataFrame
+ The dataset to discretize. All columns must have continuous values.
+
+ cardinality: dict
+ A dictionary of the form (str: int) representing the number of bins
+ to create for each of the variables.
+
+ labels: dict (default: None)
+ A dictionary of the form (str: list) representing the label names for
+ each variable in the discretized dataframe.
+
+ method: rounding or quantile
+ If rounding, equal width bins are created and data is discretized into these bins. Refer pandas.cut for more details.
+ If quantile, creates bins such that each bin has an equal number of datapoints. Refer pandas.qcut for more details.
+
+ Examples
+ --------
+ >>> import numpy as np
+ >>> from pgmpy.utils import discretize
+ >>> rng = np.random.default_rng(42)
+ >>> X = rng.standard_normal(1000)
+ >>> Y = 0.2 * X + rng.standard_normal(1000)
+ >>> Z = 0.4 * X + 0.5 * Y + rng.standard_normal(1000)
+ >>> df = pd.DataFrame({"X": X, "Y": Y, "Z": Z})
+ >>> df_disc = discretize(df, cardinality={'X': 3, 'Y': 3, 'Z': 3}, labels={'X': ['low', 'mid', 'high'], 'Y': ['low', 'mid', 'high'], 'Z': ['low', 'mid', 'high']})
+ >>> df_disc.head()
+ X Y Z
+ 0 mid mid mid
+ 1 mid mid low
+ 2 mid mid mid
+ 3 high mid mid
+ 4 low mid low
+
+ Returns
+ -------
+ pandas.DataFrame: A discretized dataframe.
+ """
+ df_copy = data.copy()
+ if method == "rounding":
+ for column in data.columns:
+ df_copy[column] = pd.cut(
+ df_copy[column],
+ bins=cardinality[column],
+ include_lowest=True,
+ labels=labels.get(column),
+ )
+ elif method == "quantile":
+ for column in data.columns:
+ df_copy[column] = pd.qcut(
+ df_copy[column], q=cardinality[column], labels=labels.get(column)
+ )
+
+ return df_copy
|
diff --git a/pgmpy/tests/test_utils/test_utils.py b/pgmpy/tests/test_utils/test_utils.py
index 505d14de1..1ab6e78a0 100644
--- a/pgmpy/tests/test_utils/test_utils.py
+++ b/pgmpy/tests/test_utils/test_utils.py
@@ -1,9 +1,11 @@
-import unittest
import random
+import unittest
+import numpy as np
+import pandas as pd
from tqdm.auto import tqdm
-from pgmpy.utils import get_example_model
+from pgmpy.utils import discretize, get_example_model
class TestDAGCreation(unittest.TestCase):
@@ -40,3 +42,28 @@ def test_get_example_model(self):
for model in tqdm(choices):
m = get_example_model(model=model)
del m
+
+
+class TestDiscretization(unittest.TestCase):
+ def setUp(self):
+ rng = np.random.default_rng(42)
+ X = rng.standard_normal(1000)
+ Y = 0.2 * X + rng.standard_normal(1000)
+ Z = 0.4 * X + 0.5 * Y + rng.standard_normal(1000)
+
+ self.data = pd.DataFrame({"X": X, "Y": Y, "Z": Z})
+
+ def test_rounding_disc(self):
+ df_disc = discretize(
+ data=self.data, cardinality={"X": 5, "Y": 4, "Z": 3}, method="rounding"
+ )
+ self.assertEqual(df_disc["X"].nunique(), 5)
+ self.assertEqual(df_disc["Y"].nunique(), 4)
+ self.assertEqual(df_disc["Z"].nunique(), 3)
+
+ df_disc = discretize(
+ data=self.data, cardinality={"X": 5, "Y": 4, "Z": 3}, method="quantile"
+ )
+ self.assertEqual(df_disc["X"].nunique(), 5)
+ self.assertEqual(df_disc["Y"].nunique(), 4)
+ self.assertEqual(df_disc["Z"].nunique(), 3)
| 2024-04-30T06:51:14
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"pgmpy/utils/__init__.py": "from .mathext import cartesian, sample_discrete\nfrom .state_name import StateNameMixin\nfrom .check_functions import _check_1d_array_object, _check_length_equal\nfrom .optimizer import optimize, pinverse\nfrom .utils import get_example_model\n\n\n__all__ = [\n \"cartesian\",\n \"sample_discrete\",\n \"StateNameMixin\",\n \"_check_1d_array_object\",\n \"_check_length_equal\",\n \"optimize\",\n \"pinverse\",\n \"get_example_model\",\n]\n", "pgmpy/utils/utils.py": "import gzip\n\ntry:\n from importlib.resources import files\nexcept:\n # For python 3.8 and lower\n from importlib_resources import files\n\n\ndef get_example_model(model):\n \"\"\"\n Fetches the specified model from bnlearn repository and returns a\n pgmpy.model instance.\n\n Parameter\n ---------\n model: str\n Any model from bnlearn repository (http://www.bnlearn.com/bnrepository).\n\n Discrete Bayesian Network Options:\n Small Networks:\n 1. asia\n 2. cancer\n 3. earthquake\n 4. sachs\n 5. survey\n Medium Networks:\n 1. alarm\n 2. barley\n 3. child\n 4. insurance\n 5. mildew\n 6. water\n Large Networks:\n 1. hailfinder\n 2. hepar2\n 3. win95pts\n Very Large Networks:\n 1. andes\n 2. diabetes\n 3. link\n 4. munin1\n 5. munin2\n 6. munin3\n 7. munin4\n 8. pathfinder\n 9. pigs\n 10. munin\n Gaussian Bayesian Network Options:\n 1. ecoli70\n 2. magic-niab\n 3. magic-irri\n 4. arth150\n Conditional Linear Gaussian Bayesian Network Options:\n 1. sangiovese\n 2. mehra\n\n Example\n -------\n >>> from pgmpy.data import get_example_model\n >>> model = get_example_model(model='asia')\n >>> model\n\n Returns\n -------\n pgmpy.models instance: An instance of one of the model classes in pgmpy.models\n depending on the type of dataset.\n \"\"\"\n from pgmpy.readwrite import BIFReader\n\n filenames = {\n \"asia\": \"utils/example_models/asia.bif.gz\",\n \"cancer\": \"utils/example_models/cancer.bif.gz\",\n \"earthquake\": \"utils/example_models/earthquake.bif.gz\",\n \"sachs\": \"utils/example_models/sachs.bif.gz\",\n \"survey\": \"utils/example_models/survey.bif.gz\",\n \"alarm\": \"utils/example_models/alarm.bif.gz\",\n \"barley\": \"utils/example_models/barley.bif.gz\",\n \"child\": \"utils/example_models/child.bif.gz\",\n \"insurance\": \"utils/example_models/insurance.bif.gz\",\n \"mildew\": \"utils/example_models/mildew.bif.gz\",\n \"water\": \"utils/example_models/water.bif.gz\",\n \"hailfinder\": \"utils/example_models/hailfinder.bif.gz\",\n \"hepar2\": \"utils/example_models/hepar2.bif.gz\",\n \"win95pts\": \"utils/example_models/win95pts.bif.gz\",\n \"andes\": \"utils/example_models/andes.bif.gz\",\n \"diabetes\": \"utils/example_models/diabetes.bif.gz\",\n \"link\": \"utils/example_models/link.bif.gz\",\n \"munin1\": \"utils/example_models/munin1.bif.gz\",\n \"munin2\": \"utils/example_models/munin2.bif.gz\",\n \"munin3\": \"utils/example_models/munin3.bif.gz\",\n \"munin4\": \"utils/example_models/munin4.bif.gz\",\n \"pathfinder\": \"utils/example_models/pathfinder.bif.gz\",\n \"pigs\": \"utils/example_models/pigs.bif.gz\",\n \"munin\": \"utils/example_models/munin.bif.gz\",\n \"ecoli70\": \"\",\n \"magic-niab\": \"\",\n \"magic-irri\": \"\",\n \"arth150\": \"\",\n \"sangiovese\": \"\",\n \"mehra\": \"\",\n }\n\n if model not in filenames.keys():\n raise ValueError(\"dataset should be one of the options\")\n if filenames[model] == \"\":\n raise NotImplementedError(\"The specified dataset isn't available.\")\n\n path = filenames[model]\n ref = files(\"pgmpy\") / path\n with gzip.open(ref) as f:\n 
content = f.read()\n reader = BIFReader(string=content.decode(\"utf-8\"), n_jobs=1)\n return reader.get_model()\n"}
|
{"pgmpy/utils/utils.py": [{"type": "function", "name": "discretize", "lines": [119, 177], "signature": "def discretize(data, cardinality, labels=dict(), method=\"rounding\"):", "doc": "Discretizes a given continuous dataset.\n\nParameters\n----------\ndata: pandas.DataFrame\n The dataset to discretize. All columns must have continuous values.\n\ncardinality: dict\n A dictionary of the form (str: int) representing the number of bins\n to create for each of the variables.\n\nlabels: dict (default: None)\n A dictionary of the form (str: list) representing the label names for\n each variable in the discretized dataframe.\n\nmethod: rounding or quantile\n If rounding, equal width bins are created and data is discretized into these bins. Refer pandas.cut for more details.\n If quantile, creates bins such that each bin has an equal number of datapoints. Refer pandas.qcut for more details.\n\nExamples\n--------\n>>> import numpy as np\n>>> from pgmpy.utils import discretize\n>>> rng = np.random.default_rng(42)\n>>> X = rng.standard_normal(1000)\n>>> Y = 0.2 * X + rng.standard_normal(1000)\n>>> Z = 0.4 * X + 0.5 * Y + rng.standard_normal(1000)\n>>> df = pd.DataFrame({\"X\": X, \"Y\": Y, \"Z\": Z})\n>>> df_disc = discretize(df, cardinality={'X': 3, 'Y': 3, 'Z': 3}, labels={'X': ['low', 'mid', 'high'], 'Y': ['low', 'mid', 'high'], 'Z': ['low', 'mid', 'high']})\n>>> df_disc.head()\n X Y Z\n0 mid mid mid\n1 mid mid low\n2 mid mid mid\n3 high mid mid\n4 low mid low\n\nReturns\n-------\npandas.DataFrame: A discretized dataframe."}]}
| null |
["pgmpy/tests/test_utils/test_utils.py::TestDAGCreation::test_get_example_model", "pgmpy/tests/test_utils/test_utils.py::TestDiscretization::test_rounding_disc"]
|
[]
|
cf8d0f12e2e5be62b01ff8fded85f3f64eab1e84
|
{"first_commit_time": 1714459855.0, "pr_title": "Adds a function for discretization", "pr_body": "### Your checklist for this pull request\r\nPlease review the [guidelines for contributing](CONTRIBUTING.md) to this repository.\r\n\r\n- [ ] Make sure you are requesting to **pull a topic/feature/bugfix branch** (right side). Don't request your master!\r\n- [ ] Make sure you are making a pull request against the **dev branch** (left side). Also you should start *your branch* off *our dev*.\r\n- [ ] Check the commit's or even all commits' message styles matches our requested structure.\r\n\r\n### Issue number(s) that this pull request fixes\r\n- Ref #1752 \r\n### List of changes to the codebase in this pull request\r\n- \r\n-\r\n-", "pr_timeline": [], "issues": {}}
|
|
pgmpy/pgmpy
| 1,797
|
https://github.com/pgmpy/pgmpy/pull/1797
|
pgmpy__pgmpy-1797
|
[]
|
336c144b95aa21718e1898930934eb63474d1caf
|
diff --git a/pgmpy/utils/__init__.py b/pgmpy/utils/__init__.py
index adf8bb667..22d901b77 100644
--- a/pgmpy/utils/__init__.py
+++ b/pgmpy/utils/__init__.py
@@ -2,7 +2,7 @@
from .mathext import cartesian, sample_discrete
from .optimizer import optimize, pinverse
from .state_name import StateNameMixin
-from .utils import discretize, get_example_model
+from .utils import discretize, get_example_model, llm_pairwise_orient
__all__ = [
"cartesian",
@@ -14,4 +14,5 @@
"pinverse",
"get_example_model",
"discretize",
+ "llm_pairwise_orient",
]
diff --git a/pgmpy/utils/utils.py b/pgmpy/utils/utils.py
index 35b465a8c..f568f7a18 100644
--- a/pgmpy/utils/utils.py
+++ b/pgmpy/utils/utils.py
@@ -1,5 +1,7 @@
import gzip
+import os
+import google.generativeai as genai
import pandas as pd
try:
@@ -175,3 +177,57 @@ def discretize(data, cardinality, labels=dict(), method="rounding"):
)
return df_copy
+
+
+def llm_pairwise_orient(
+ x, y, descriptions, domain=None, llm_model="gemini-1.5-flash", **kwargs
+):
+ """
+ Asks a Large Language Model (LLM) for the orientation of an edge between `x` and `y`.
+
+ Parameters
+ ----------
+ x: str
+ The first variable's name
+
+ y: str
+ The second variable's name
+
+ description: dict
+ A dict of the form {variable: description} containing text description of the variables.
+
+ domain: str
+ The domain of the variables. The LLM is prompted to be an expert in the domain.
+
+ llm: str (default: gemini)
+ The LLM to use. Currently only Google's gemini is supported.
+ """
+ if llm_model.startswith("gemini"):
+ if "GEMINI_API_KEY" not in os.environ:
+ raise ValueError(
+ "Please set GEMINI_API_KEY environment variable with the API key to use"
+ )
+
+ genai.configure(api_key=os.environ["GEMINI_API_KEY"])
+ model = genai.GenerativeModel(model_name=llm_model)
+
+ prompt = f""" You are an expert in {domain}. You are given two variables with the following descriptions:
+ <A>: {descriptions[x]}
+ <B>: {descriptions[y]}
+
+ Which of the following two options is the most likely causal direction between them:
+ 1. <A> causes <B>
+ 2. <B> causes <A>
+
+ Return a single letter answer between the choices above. I do not need the reasoning behind it. Do not add any formatting in the answer.
+ """
+ response = model.generate_content([prompt])
+ response_txt = response.text.strip().lower().replace("*", "")
+ if response_txt in ("a", "1"):
+ return (x, y)
+ elif response_txt in ("b", "2"):
+ return (y, x)
+ else:
+ raise ValueError(
+ "Results from the LLM are unclear. Try calling the function again."
+ )
diff --git a/requirements/runtime.txt b/requirements/runtime.txt
index c91b5f241..b4f6f5af6 100644
--- a/requirements/runtime.txt
+++ b/requirements/runtime.txt
@@ -10,3 +10,4 @@ tqdm>=4.64
joblib>=1.2
opt_einsum>=3.3
xgboost>=2.0.3
+google-generativeai>=0.7.1
|
diff --git a/pgmpy/tests/test_utils/test_utils.py b/pgmpy/tests/test_utils/test_utils.py
index 1ab6e78a0..38b1041dd 100644
--- a/pgmpy/tests/test_utils/test_utils.py
+++ b/pgmpy/tests/test_utils/test_utils.py
@@ -1,11 +1,13 @@
+import os
import random
import unittest
import numpy as np
import pandas as pd
+import pytest
from tqdm.auto import tqdm
-from pgmpy.utils import discretize, get_example_model
+from pgmpy.utils import discretize, get_example_model, llm_pairwise_orient
class TestDAGCreation(unittest.TestCase):
@@ -67,3 +69,36 @@ def test_rounding_disc(self):
self.assertEqual(df_disc["X"].nunique(), 5)
self.assertEqual(df_disc["Y"].nunique(), 4)
self.assertEqual(df_disc["Z"].nunique(), 3)
+
+
+class TestPairwiseOrientation(unittest.TestCase):
+ @pytest.mark.skipif(
+ "GEMINI_API_KEY" not in os.environ, reason="Gemini API key is not set"
+ )
+ def test_llm(self):
+ descriptions = {
+ "Age": "The age of a person",
+ "Workclass": "The workplace where the person is employed such as Private industry, or self employed",
+ "Education": "The highest level of education the person has finished",
+ "MaritalStatus": "The marital status of the person",
+ "Occupation": "The kind of job the person does. For example, sales, craft repair, clerical",
+ "Relationship": "The relationship status of the person",
+ "Race": "The ethnicity of the person",
+ "Sex": "The sex or gender of the person",
+ "HoursPerWeek": "The number of hours per week the person works",
+ "NativeCountry": "The native country of the person",
+ "Income": "The income i.e. amount of money the person makes",
+ }
+
+ self.assertEqual(
+ llm_pairwise_orient(
+ x="Age", y="Income", descriptions=descriptions, domain="Social Sciences"
+ ),
+ ("Age", "Income"),
+ )
+ self.assertEqual(
+ llm_pairwise_orient(
+ x="Income", y="Age", descriptions=descriptions, domain="Social Sciences"
+ ),
+ ("Age", "Income"),
+ )
| 2024-07-05T15:20:32
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"pgmpy/utils/__init__.py": "from .check_functions import _check_1d_array_object, _check_length_equal\nfrom .mathext import cartesian, sample_discrete\nfrom .optimizer import optimize, pinverse\nfrom .state_name import StateNameMixin\nfrom .utils import discretize, get_example_model\n\n__all__ = [\n \"cartesian\",\n \"sample_discrete\",\n \"StateNameMixin\",\n \"_check_1d_array_object\",\n \"_check_length_equal\",\n \"optimize\",\n \"pinverse\",\n \"get_example_model\",\n \"discretize\",\n]\n", "pgmpy/utils/utils.py": "import gzip\n\nimport pandas as pd\n\ntry:\n from importlib.resources import files\nexcept:\n # For python 3.8 and lower\n from importlib_resources import files\n\n\ndef get_example_model(model):\n \"\"\"\n Fetches the specified model from bnlearn repository and returns a\n pgmpy.model instance.\n\n Parameter\n ---------\n model: str\n Any model from bnlearn repository (http://www.bnlearn.com/bnrepository).\n\n Discrete Bayesian Network Options:\n Small Networks:\n 1. asia\n 2. cancer\n 3. earthquake\n 4. sachs\n 5. survey\n Medium Networks:\n 1. alarm\n 2. barley\n 3. child\n 4. insurance\n 5. mildew\n 6. water\n Large Networks:\n 1. hailfinder\n 2. hepar2\n 3. win95pts\n Very Large Networks:\n 1. andes\n 2. diabetes\n 3. link\n 4. munin1\n 5. munin2\n 6. munin3\n 7. munin4\n 8. pathfinder\n 9. pigs\n 10. munin\n Gaussian Bayesian Network Options:\n 1. ecoli70\n 2. magic-niab\n 3. magic-irri\n 4. arth150\n Conditional Linear Gaussian Bayesian Network Options:\n 1. sangiovese\n 2. mehra\n\n Example\n -------\n >>> from pgmpy.data import get_example_model\n >>> model = get_example_model(model='asia')\n >>> model\n\n Returns\n -------\n pgmpy.models instance: An instance of one of the model classes in pgmpy.models\n depending on the type of dataset.\n \"\"\"\n from pgmpy.readwrite import BIFReader\n\n filenames = {\n \"asia\": \"utils/example_models/asia.bif.gz\",\n \"cancer\": \"utils/example_models/cancer.bif.gz\",\n \"earthquake\": \"utils/example_models/earthquake.bif.gz\",\n \"sachs\": \"utils/example_models/sachs.bif.gz\",\n \"survey\": \"utils/example_models/survey.bif.gz\",\n \"alarm\": \"utils/example_models/alarm.bif.gz\",\n \"barley\": \"utils/example_models/barley.bif.gz\",\n \"child\": \"utils/example_models/child.bif.gz\",\n \"insurance\": \"utils/example_models/insurance.bif.gz\",\n \"mildew\": \"utils/example_models/mildew.bif.gz\",\n \"water\": \"utils/example_models/water.bif.gz\",\n \"hailfinder\": \"utils/example_models/hailfinder.bif.gz\",\n \"hepar2\": \"utils/example_models/hepar2.bif.gz\",\n \"win95pts\": \"utils/example_models/win95pts.bif.gz\",\n \"andes\": \"utils/example_models/andes.bif.gz\",\n \"diabetes\": \"utils/example_models/diabetes.bif.gz\",\n \"link\": \"utils/example_models/link.bif.gz\",\n \"munin1\": \"utils/example_models/munin1.bif.gz\",\n \"munin2\": \"utils/example_models/munin2.bif.gz\",\n \"munin3\": \"utils/example_models/munin3.bif.gz\",\n \"munin4\": \"utils/example_models/munin4.bif.gz\",\n \"pathfinder\": \"utils/example_models/pathfinder.bif.gz\",\n \"pigs\": \"utils/example_models/pigs.bif.gz\",\n \"munin\": \"utils/example_models/munin.bif.gz\",\n \"ecoli70\": \"\",\n \"magic-niab\": \"\",\n \"magic-irri\": \"\",\n \"arth150\": \"\",\n \"sangiovese\": \"\",\n \"mehra\": \"\",\n }\n\n if model not in filenames.keys():\n raise ValueError(\"dataset should be one of the options\")\n if filenames[model] == \"\":\n raise NotImplementedError(\"The specified dataset isn't available.\")\n\n path = filenames[model]\n ref = 
files(\"pgmpy\") / path\n with gzip.open(ref) as f:\n content = f.read()\n reader = BIFReader(string=content.decode(\"utf-8\"), n_jobs=1)\n return reader.get_model()\n\n\ndef discretize(data, cardinality, labels=dict(), method=\"rounding\"):\n \"\"\"\n Discretizes a given continuous dataset.\n\n Parameters\n ----------\n data: pandas.DataFrame\n The dataset to discretize. All columns must have continuous values.\n\n cardinality: dict\n A dictionary of the form (str: int) representing the number of bins\n to create for each of the variables.\n\n labels: dict (default: None)\n A dictionary of the form (str: list) representing the label names for\n each variable in the discretized dataframe.\n\n method: rounding or quantile\n If rounding, equal width bins are created and data is discretized into these bins. Refer pandas.cut for more details.\n If quantile, creates bins such that each bin has an equal number of datapoints. Refer pandas.qcut for more details.\n\n Examples\n --------\n >>> import numpy as np\n >>> from pgmpy.utils import discretize\n >>> rng = np.random.default_rng(42)\n >>> X = rng.standard_normal(1000)\n >>> Y = 0.2 * X + rng.standard_normal(1000)\n >>> Z = 0.4 * X + 0.5 * Y + rng.standard_normal(1000)\n >>> df = pd.DataFrame({\"X\": X, \"Y\": Y, \"Z\": Z})\n >>> df_disc = discretize(df, cardinality={'X': 3, 'Y': 3, 'Z': 3}, labels={'X': ['low', 'mid', 'high'], 'Y': ['low', 'mid', 'high'], 'Z': ['low', 'mid', 'high']})\n >>> df_disc.head()\n X Y Z\n 0 mid mid mid\n 1 mid mid low\n 2 mid mid mid\n 3 high mid mid\n 4 low mid low\n\n Returns\n -------\n pandas.DataFrame: A discretized dataframe.\n \"\"\"\n df_copy = data.copy()\n if method == \"rounding\":\n for column in data.columns:\n df_copy[column] = pd.cut(\n df_copy[column],\n bins=cardinality[column],\n include_lowest=True,\n labels=labels.get(column),\n )\n elif method == \"quantile\":\n for column in data.columns:\n df_copy[column] = pd.qcut(\n df_copy[column], q=cardinality[column], labels=labels.get(column)\n )\n\n return df_copy\n", "requirements/runtime.txt": "networkx>=3.0\nnumpy==1.26\nscipy>=1.10\nscikit-learn>=1.2\npandas>=1.5\npyparsing>=3.0\ntorch>=1.13\nstatsmodels>=0.13\ntqdm>=4.64\njoblib>=1.2\nopt_einsum>=3.3\nxgboost>=2.0.3\n"}
|
diff --git a/requirements/runtime.txt b/requirements/runtime.txt
index c91b5f241..b4f6f5af6 100644
--- a/requirements/runtime.txt
+++ b/requirements/runtime.txt
@@ -10,3 +10,4 @@ tqdm>=4.64
joblib>=1.2
opt_einsum>=3.3
xgboost>=2.0.3
+google-generativeai>=0.7.1
|
{"pgmpy/utils/utils.py": [{"type": "function", "name": "llm_pairwise_orient", "lines": [182, 232], "signature": "def llm_pairwise_orient( x, y, descriptions, domain=None, llm_model=\"gemini-1.5-flash\", **kwargs ):", "doc": "Asks a Large Language Model (LLM) for the orientation of an edge between `x` and `y`.\n\nParameters\n----------\nx: str\n The first variable's name\n\ny: str\n The second variable's name\n\ndescription: dict\n A dict of the form {variable: description} containing text description of the variables.\n\ndomain: str\n The domain of the variables. The LLM is prompted to be an expert in the domain.\n\nllm: str (default: gemini)\n The LLM to use. Currently only Google's gemini is supported."}]}
| null |
["pgmpy/tests/test_utils/test_utils.py::TestDAGCreation::test_get_example_model", "pgmpy/tests/test_utils/test_utils.py::TestDiscretization::test_rounding_disc"]
|
[]
|
cf8d0f12e2e5be62b01ff8fded85f3f64eab1e84
|
{"first_commit_time": 1720192812.0, "pr_title": "Adds an LLM based pairwise orientation method", "pr_body": "### Your checklist for this pull request\r\nPlease review the [guidelines for contributing](CONTRIBUTING.md) to this repository.\r\n\r\n- [ ] Make sure you are requesting to **pull a topic/feature/bugfix branch** (right side). Don't request your master!\r\n- [ ] Make sure you are making a pull request against the **dev branch** (left side). Also you should start *your branch* off *our dev*.\r\n- [ ] Check the commit's or even all commits' message styles matches our requested structure.\r\n\r\n### Issue number(s) that this pull request fixes\r\n- Fixes #\r\n\r\n### List of changes to the codebase in this pull request\r\n- \r\n-\r\n-", "pr_timeline": [], "issues": {}}
|
prometheus/client_python
| 302
|
https://github.com/prometheus/client_python/pull/302
|
prometheus__client_python-302
|
[]
|
10c8eb4a83069877dc89695fbad236a88e2094f9
|
diff --git a/prometheus_client/core.py b/prometheus_client/core.py
index da3034d0..cb1e7c5b 100644
--- a/prometheus_client/core.py
+++ b/prometheus_client/core.py
@@ -185,7 +185,7 @@ def get_sample_value(self, name, labels=None):
REGISTRY = CollectorRegistry(auto_describe=True)
'''The default registry.'''
-_METRIC_TYPES = ('counter', 'gauge', 'summary', 'histogram',
+_METRIC_TYPES = ('counter', 'gauge', 'summary', 'histogram',
'gaugehistogram', 'unknown', 'info', 'stateset')
@@ -378,8 +378,8 @@ def add_metric(self, labels, buckets, sum_value, timestamp=None):
exemplar = None
if len(b) == 3:
exemplar = b[2]
- self.samples.append(Sample(self.name + '_bucket',
- dict(list(zip(self._labelnames, labels)) + [('le', bucket)]),
+ self.samples.append(Sample(self.name + '_bucket',
+ dict(list(zip(self._labelnames, labels)) + [('le', bucket)]),
value, timestamp, exemplar))
# +Inf is last and provides the count value.
self.samples.append(Sample(self.name + '_count', dict(zip(self._labelnames, labels)), buckets[-1][1], timestamp))
@@ -411,7 +411,7 @@ def add_metric(self, labels, buckets, timestamp=None):
'''
for bucket, value in buckets:
self.samples.append(Sample(
- self.name + '_bucket',
+ self.name + '_bucket',
dict(list(zip(self._labelnames, labels)) + [('le', bucket)]),
value, timestamp))
@@ -438,7 +438,7 @@ def add_metric(self, labels, value, timestamp=None):
labels: A list of label values
value: A dict of labels
'''
- self.samples.append(Sample(self.name + '_info',
+ self.samples.append(Sample(self.name + '_info',
dict(dict(zip(self._labelnames, labels)), **value), 1, timestamp))
@@ -586,6 +586,13 @@ def close(self):
self._f = None
+def _mmap_key(metric_name, name, labelnames, labelvalues):
+ """Format a key for use in the mmap file."""
+ # ensure labels are in consistent order for identity
+ labels = dict(zip(labelnames, labelvalues))
+ return json.dumps([metric_name, name, labels], sort_keys=True)
+
+
def _MultiProcessValue(_pidFunc=os.getpid):
files = {}
values = []
@@ -618,7 +625,7 @@ def __reset(self):
'{0}_{1}.db'.format(file_prefix, pid['value']))
files[file_prefix] = _MmapedDict(filename)
self._file = files[file_prefix]
- self._key = json.dumps((metric_name, name, labelnames, labelvalues))
+ self._key = _mmap_key(metric_name, name, labelnames, labelvalues)
self._value = self._file.read_value(self._key)
def __check_for_pid_change(self):
@@ -1143,7 +1150,7 @@ class Enum(object):
Example usage:
from prometheus_client import Enum
- e = Enum('task_state', 'Description of enum',
+ e = Enum('task_state', 'Description of enum',
states=['starting', 'running', 'stopped'])
e.state('running')
diff --git a/prometheus_client/multiprocess.py b/prometheus_client/multiprocess.py
index 7dd74b2b..55213153 100644
--- a/prometheus_client/multiprocess.py
+++ b/prometheus_client/multiprocess.py
@@ -23,13 +23,24 @@ def __init__(self, registry, path=None):
registry.register(self)
def collect(self):
+ files = glob.glob(os.path.join(self._path, '*.db'))
+ return self.merge(files, accumulate=True)
+
+ def merge(self, files, accumulate=True):
+ """Merge metrics from given mmap files.
+
+ By default, histograms are accumulated, as per prometheus wire format.
+ But if writing the merged data back to mmap files, use
+ accumulate=False to avoid compound accumulation.
+ """
metrics = {}
- for f in glob.glob(os.path.join(self._path, '*.db')):
+ for f in files:
parts = os.path.basename(f).split('_')
typ = parts[0]
d = core._MmapedDict(f, read_mode=True)
for key, value in d.read_all_values():
- metric_name, name, labelnames, labelvalues = json.loads(key)
+ metric_name, name, labels = json.loads(key)
+ labels_key = tuple(sorted(labels.items()))
metric = metrics.get(metric_name)
if metric is None:
@@ -39,10 +50,10 @@ def collect(self):
if typ == 'gauge':
pid = parts[2][:-3]
metric._multiprocess_mode = parts[1]
- metric.add_sample(name, tuple(zip(labelnames, labelvalues)) + (('pid', pid), ), value)
+ metric.add_sample(name, labels_key + (('pid', pid), ), value)
else:
# The duplicates and labels are fixed in the next for.
- metric.add_sample(name, tuple(zip(labelnames, labelvalues)), value)
+ metric.add_sample(name, labels_key, value)
d.close()
for metric in metrics.values():
@@ -86,9 +97,17 @@ def collect(self):
for labels, values in buckets.items():
acc = 0.0
for bucket, value in sorted(values.items()):
- acc += value
- samples[(metric.name + '_bucket', labels + (('le', core._floatToGoString(bucket)), ))] = acc
- samples[(metric.name + '_count', labels)] = acc
+ sample_key = (
+ metric.name + '_bucket',
+ labels + (('le', core._floatToGoString(bucket)), ),
+ )
+ if accumulate:
+ acc += value
+ samples[sample_key] = acc
+ else:
+ samples[sample_key] = value
+ if accumulate:
+ samples[(metric.name + '_count', labels)] = acc
# Convert to correct sample format.
metric.samples = [core.Sample(name, dict(labels), value) for (name, labels), value in samples.items()]
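A short sketch of the behaviour this patch introduces, based only on the diff above: multiprocess mmap keys are now a JSON-encoded `[metric_name, sample_name, {labels}]` with sorted label keys, and `MultiProcessCollector.merge()` can skip histogram accumulation so merged data can be written back to mmap files without double counting. The metrics directory via `prometheus_multiproc_dir` and the sample metric names are assumptions for the example.

```python
# Sketch based on the diff above; paths and metric names are illustrative.
import glob
import json
import os

from prometheus_client.core import CollectorRegistry
from prometheus_client.multiprocess import MultiProcessCollector

# The key format written to the .db files (mirrors `_mmap_key` in the patch):
# a JSON list of [metric_name, sample_name, labels-dict] with sorted keys.
key = json.dumps(["requests", "requests_total", {"method": "GET"}], sort_keys=True)
print(key)

# Merge a fixed set of files without accumulating histogram buckets,
# e.g. before compacting them back into a single mmap file.
path = os.environ["prometheus_multiproc_dir"]
registry = CollectorRegistry()
collector = MultiProcessCollector(registry, path)
files = glob.glob(os.path.join(path, "*.db"))
for metric in collector.merge(files, accumulate=False):
    for sample in metric.samples:
        print(sample.name, sample.labels, sample.value)
```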
|
diff --git a/tests/test_multiprocess.py b/tests/test_multiprocess.py
index ca84913f..501cb62f 100644
--- a/tests/test_multiprocess.py
+++ b/tests/test_multiprocess.py
@@ -1,9 +1,17 @@
from __future__ import unicode_literals
+import glob
import os
import shutil
+import sys
import tempfile
-import unittest
+
+if sys.version_info < (2, 7):
+ # We need the skip decorators from unittest2 on Python 2.6.
+ import unittest2 as unittest
+else:
+ import unittest
+
from prometheus_client import core
from prometheus_client.core import (
@@ -11,6 +19,7 @@
Counter,
Gauge,
Histogram,
+ Sample,
Summary,
)
from prometheus_client.multiprocess import (
@@ -25,7 +34,7 @@ def setUp(self):
os.environ['prometheus_multiproc_dir'] = self.tempdir
core._ValueClass = core._MultiProcessValue(lambda: 123)
self.registry = CollectorRegistry()
- MultiProcessCollector(self.registry, self.tempdir)
+ self.collector = MultiProcessCollector(self.registry, self.tempdir)
def tearDown(self):
del os.environ['prometheus_multiproc_dir']
@@ -137,6 +146,113 @@ def test_counter_across_forks(self):
self.assertEqual(3, self.registry.get_sample_value('c_total'))
self.assertEqual(1, c1._value.get())
+ @unittest.skipIf(sys.version_info < (2, 7), "Test requires Python 2.7+.")
+ def test_collect(self):
+ pid = 0
+ core._ValueClass = core._MultiProcessValue(lambda: pid)
+ labels = dict((i, i) for i in 'abcd')
+
+ def add_label(key, value):
+ l = labels.copy()
+ l[key] = value
+ return l
+
+ c = Counter('c', 'help', labelnames=labels.keys(), registry=None)
+ g = Gauge('g', 'help', labelnames=labels.keys(), registry=None)
+ h = Histogram('h', 'help', labelnames=labels.keys(), registry=None)
+
+ c.labels(**labels).inc(1)
+ g.labels(**labels).set(1)
+ h.labels(**labels).observe(1)
+
+ pid = 1
+
+ c.labels(**labels).inc(1)
+ g.labels(**labels).set(1)
+ h.labels(**labels).observe(5)
+
+ metrics = dict((m.name, m) for m in self.collector.collect())
+
+ self.assertEqual(
+ metrics['c'].samples, [Sample('c_total', labels, 2.0)]
+ )
+ metrics['g'].samples.sort(key=lambda x: x[1]['pid'])
+ self.assertEqual(metrics['g'].samples, [
+ Sample('g', add_label('pid', '0'), 1.0),
+ Sample('g', add_label('pid', '1'), 1.0),
+ ])
+
+ metrics['h'].samples.sort(
+ key=lambda x: (x[0], float(x[1].get('le', 0)))
+ )
+ expected_histogram = [
+ Sample('h_bucket', add_label('le', '0.005'), 0.0),
+ Sample('h_bucket', add_label('le', '0.01'), 0.0),
+ Sample('h_bucket', add_label('le', '0.025'), 0.0),
+ Sample('h_bucket', add_label('le', '0.05'), 0.0),
+ Sample('h_bucket', add_label('le', '0.075'), 0.0),
+ Sample('h_bucket', add_label('le', '0.1'), 0.0),
+ Sample('h_bucket', add_label('le', '0.25'), 0.0),
+ Sample('h_bucket', add_label('le', '0.5'), 0.0),
+ Sample('h_bucket', add_label('le', '0.75'), 0.0),
+ Sample('h_bucket', add_label('le', '1.0'), 1.0),
+ Sample('h_bucket', add_label('le', '2.5'), 1.0),
+ Sample('h_bucket', add_label('le', '5.0'), 2.0),
+ Sample('h_bucket', add_label('le', '7.5'), 2.0),
+ Sample('h_bucket', add_label('le', '10.0'), 2.0),
+ Sample('h_bucket', add_label('le', '+Inf'), 2.0),
+ Sample('h_count', labels, 2.0),
+ Sample('h_sum', labels, 6.0),
+ ]
+
+ self.assertEqual(metrics['h'].samples, expected_histogram)
+
+ @unittest.skipIf(sys.version_info < (2, 7), "Test requires Python 2.7+.")
+ def test_merge_no_accumulate(self):
+ pid = 0
+ core._ValueClass = core._MultiProcessValue(lambda: pid)
+ labels = dict((i, i) for i in 'abcd')
+
+ def add_label(key, value):
+ l = labels.copy()
+ l[key] = value
+ return l
+
+ h = Histogram('h', 'help', labelnames=labels.keys(), registry=None)
+ h.labels(**labels).observe(1)
+ pid = 1
+ h.labels(**labels).observe(5)
+
+ path = os.path.join(os.environ['prometheus_multiproc_dir'], '*.db')
+ files = glob.glob(path)
+ metrics = dict(
+ (m.name, m) for m in self.collector.merge(files, accumulate=False)
+ )
+
+ metrics['h'].samples.sort(
+ key=lambda x: (x[0], float(x[1].get('le', 0)))
+ )
+ expected_histogram = [
+ Sample('h_bucket', add_label('le', '0.005'), 0.0),
+ Sample('h_bucket', add_label('le', '0.01'), 0.0),
+ Sample('h_bucket', add_label('le', '0.025'), 0.0),
+ Sample('h_bucket', add_label('le', '0.05'), 0.0),
+ Sample('h_bucket', add_label('le', '0.075'), 0.0),
+ Sample('h_bucket', add_label('le', '0.1'), 0.0),
+ Sample('h_bucket', add_label('le', '0.25'), 0.0),
+ Sample('h_bucket', add_label('le', '0.5'), 0.0),
+ Sample('h_bucket', add_label('le', '0.75'), 0.0),
+ Sample('h_bucket', add_label('le', '1.0'), 1.0),
+ Sample('h_bucket', add_label('le', '2.5'), 0.0),
+ Sample('h_bucket', add_label('le', '5.0'), 1.0),
+ Sample('h_bucket', add_label('le', '7.5'), 0.0),
+ Sample('h_bucket', add_label('le', '10.0'), 0.0),
+ Sample('h_bucket', add_label('le', '+Inf'), 0.0),
+ Sample('h_sum', labels, 6.0),
+ ]
+
+ self.assertEqual(metrics['h'].samples, expected_histogram)
+
class TestMmapedDict(unittest.TestCase):
def setUp(self):
| 2018-09-05T11:48:04
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"prometheus_client/core.py": "#!/usr/bin/python\n\nfrom __future__ import unicode_literals\n\nimport copy\nimport json\nimport math\nimport mmap\nimport os\nimport re\nimport struct\nimport sys\nimport time\nimport types\n\nfrom threading import Lock\nfrom timeit import default_timer\nfrom collections import namedtuple\n\nfrom .decorator import decorate\n\n\nif sys.version_info > (3,):\n unicode = str\n\n_METRIC_NAME_RE = re.compile(r'^[a-zA-Z_:][a-zA-Z0-9_:]*$')\n_METRIC_LABEL_NAME_RE = re.compile(r'^[a-zA-Z_][a-zA-Z0-9_]*$')\n_RESERVED_METRIC_LABEL_NAME_RE = re.compile(r'^__.*$')\n_INF = float(\"inf\")\n_MINUS_INF = float(\"-inf\")\n_INITIAL_MMAP_SIZE = 1 << 20\n\n_pack_integer = struct.Struct(b'i').pack_into\n_pack_double = struct.Struct(b'd').pack_into\n_unpack_integer = struct.Struct(b'i').unpack_from\n_unpack_double = struct.Struct(b'd').unpack_from\n\n# Timestamp and exemplar are optional.\n# Value can be an int or a float.\n# Timestamp can be a float containing a unixtime in seconds,\n# a Timestamp object, or None.\n# Exemplar can be an Exemplar object, or None.\nSample = namedtuple('Sample', ['name', 'labels', 'value', 'timestamp', 'exemplar'])\nSample.__new__.__defaults__ = (None, None)\n\n\nclass Timestamp(object):\n '''A nanosecond-resolution timestamp.'''\n def __init__(self, sec, nsec):\n if nsec < 0 or nsec >= 1e9:\n raise ValueError(\"Invalid value for nanoseconds in Timestamp: {}\".format(nsec))\n self.sec = int(sec)\n self.nsec = int(nsec)\n\n def __str__(self):\n return \"{0}.{1:09d}\".format(self.sec, self.nsec)\n\n def __repr__(self):\n return \"Timestamp({0}, {1})\".format(self.sec, self.nsec)\n\n def __float__(self):\n return float(self.sec) + float(self.nsec) / 1e9\n\n def __eq__(self, other):\n return type(self) == type(other) and self.sec == other.sec and self.nsec == other.nsec\n\n\nExemplar = namedtuple('Exemplar', ['labels', 'value', 'timestamp'])\nExemplar.__new__.__defaults__ = (None, )\n\n\nclass CollectorRegistry(object):\n '''Metric collector registry.\n\n Collectors must have a no-argument method 'collect' that returns a list of\n Metric objects. 
The returned metrics should be consistent with the Prometheus\n exposition formats.\n '''\n def __init__(self, auto_describe=False):\n self._collector_to_names = {}\n self._names_to_collectors = {}\n self._auto_describe = auto_describe\n self._lock = Lock()\n\n def register(self, collector):\n '''Add a collector to the registry.'''\n with self._lock:\n names = self._get_names(collector)\n duplicates = set(self._names_to_collectors).intersection(names)\n if duplicates:\n raise ValueError(\n 'Duplicated timeseries in CollectorRegistry: {0}'.format(\n duplicates))\n for name in names:\n self._names_to_collectors[name] = collector\n self._collector_to_names[collector] = names\n\n def unregister(self, collector):\n '''Remove a collector from the registry.'''\n with self._lock:\n for name in self._collector_to_names[collector]:\n del self._names_to_collectors[name]\n del self._collector_to_names[collector]\n\n def _get_names(self, collector):\n '''Get names of timeseries the collector produces.'''\n desc_func = None\n # If there's a describe function, use it.\n try:\n desc_func = collector.describe\n except AttributeError:\n pass\n # Otherwise, if auto describe is enabled use the collect function.\n if not desc_func and self._auto_describe:\n desc_func = collector.collect\n\n if not desc_func:\n return []\n\n result = []\n type_suffixes = {\n 'counter': ['_total', '_created'],\n 'summary': ['', '_sum', '_count', '_created'],\n 'histogram': ['_bucket', '_sum', '_count', '_created'],\n 'info': ['_info'],\n }\n for metric in desc_func():\n for suffix in type_suffixes.get(metric.type, ['']):\n result.append(metric.name + suffix)\n return result\n\n def collect(self):\n '''Yields metrics from the collectors in the registry.'''\n collectors = None\n with self._lock:\n collectors = copy.copy(self._collector_to_names)\n for collector in collectors:\n for metric in collector.collect():\n yield metric\n\n def restricted_registry(self, names):\n '''Returns object that only collects some metrics.\n\n Returns an object which upon collect() will return\n only samples with the given names.\n\n Intended usage is:\n generate_latest(REGISTRY.restricted_registry(['a_timeseries']))\n\n Experimental.'''\n names = set(names)\n collectors = set()\n with self._lock:\n for name in names:\n if name in self._names_to_collectors:\n collectors.add(self._names_to_collectors[name])\n metrics = []\n for collector in collectors:\n for metric in collector.collect():\n samples = [s for s in metric.samples if s[0] in names]\n if samples:\n m = Metric(metric.name, metric.documentation, metric.type)\n m.samples = samples\n metrics.append(m)\n\n class RestrictedRegistry(object):\n def collect(self):\n return metrics\n return RestrictedRegistry()\n\n def get_sample_value(self, name, labels=None):\n '''Returns the sample value, or None if not found.\n\n This is inefficient, and intended only for use in unittests.\n '''\n if labels is None:\n labels = {}\n for metric in self.collect():\n for s in metric.samples:\n if s.name == name and s.labels == labels:\n return s.value\n return None\n\n\nREGISTRY = CollectorRegistry(auto_describe=True)\n'''The default registry.'''\n\n_METRIC_TYPES = ('counter', 'gauge', 'summary', 'histogram', \n 'gaugehistogram', 'unknown', 'info', 'stateset')\n\n\nclass Metric(object):\n '''A single metric family and its samples.\n\n This is intended only for internal use by the instrumentation client.\n\n Custom collectors should use GaugeMetricFamily, CounterMetricFamily\n and SummaryMetricFamily instead.\n '''\n 
def __init__(self, name, documentation, typ, unit=''):\n if unit and not name.endswith(\"_\" + unit):\n name += \"_\" + unit\n if not _METRIC_NAME_RE.match(name):\n raise ValueError('Invalid metric name: ' + name)\n self.name = name\n self.documentation = documentation\n self.unit = unit\n if typ == 'untyped':\n typ = 'unknown'\n if typ not in _METRIC_TYPES:\n raise ValueError('Invalid metric type: ' + typ)\n self.type = typ\n if unit:\n if not name.endswith('_' + unit):\n raise ValueError('Metric name does not end with unit: ' + name)\n self.unit = unit\n self.samples = []\n\n def add_sample(self, name, labels, value, timestamp=None, exemplar=None):\n '''Add a sample to the metric.\n\n Internal-only, do not use.'''\n self.samples.append(Sample(name, labels, value, timestamp, exemplar))\n\n def __eq__(self, other):\n return (isinstance(other, Metric) and\n self.name == other.name and\n self.documentation == other.documentation and\n self.type == other.type and\n self.unit == other.unit and\n self.samples == other.samples)\n\n def __repr__(self):\n return \"Metric(%s, %s, %s, %s, %s)\" % (self.name, self.documentation,\n self.type, self.unit, self.samples)\n\n\nclass UnknownMetricFamily(Metric):\n '''A single unknwon metric and its samples.\n For use by custom collectors.\n '''\n def __init__(self, name, documentation, value=None, labels=None, unit=''):\n Metric.__init__(self, name, documentation, 'unknown', unit)\n if labels is not None and value is not None:\n raise ValueError('Can only specify at most one of value and labels.')\n if labels is None:\n labels = []\n self._labelnames = tuple(labels)\n if value is not None:\n self.add_metric([], value)\n\n def add_metric(self, labels, value, timestamp=None):\n '''Add a metric to the metric family.\n Args:\n labels: A list of label values\n value: The value of the metric.\n '''\n self.samples.append(Sample(self.name, dict(zip(self._labelnames, labels)), value, timestamp))\n\n# For backward compatibility.\nUntypedMetricFamily = UnknownMetricFamily\n\nclass CounterMetricFamily(Metric):\n '''A single counter and its samples.\n\n For use by custom collectors.\n '''\n def __init__(self, name, documentation, value=None, labels=None, created=None, unit=''):\n # Glue code for pre-OpenMetrics metrics.\n if name.endswith('_total'):\n name = name[:-6]\n Metric.__init__(self, name, documentation, 'counter', unit)\n if labels is not None and value is not None:\n raise ValueError('Can only specify at most one of value and labels.')\n if labels is None:\n labels = []\n self._labelnames = tuple(labels)\n if value is not None:\n self.add_metric([], value, created)\n\n def add_metric(self, labels, value, created=None, timestamp=None):\n '''Add a metric to the metric family.\n\n Args:\n labels: A list of label values\n value: The value of the metric\n created: Optional unix timestamp the child was created at.\n '''\n self.samples.append(Sample(self.name + '_total', dict(zip(self._labelnames, labels)), value, timestamp))\n if created is not None:\n self.samples.append(Sample(self.name + '_created', dict(zip(self._labelnames, labels)), created, timestamp))\n\n\nclass GaugeMetricFamily(Metric):\n '''A single gauge and its samples.\n\n For use by custom collectors.\n '''\n def __init__(self, name, documentation, value=None, labels=None, unit=''):\n Metric.__init__(self, name, documentation, 'gauge', unit)\n if labels is not None and value is not None:\n raise ValueError('Can only specify at most one of value and labels.')\n if labels is None:\n labels = []\n 
self._labelnames = tuple(labels)\n if value is not None:\n self.add_metric([], value)\n\n def add_metric(self, labels, value, timestamp=None):\n '''Add a metric to the metric family.\n\n Args:\n labels: A list of label values\n value: A float\n '''\n self.samples.append(Sample(self.name, dict(zip(self._labelnames, labels)), value, timestamp))\n\n\nclass SummaryMetricFamily(Metric):\n '''A single summary and its samples.\n\n For use by custom collectors.\n '''\n def __init__(self, name, documentation, count_value=None, sum_value=None, labels=None, unit=''):\n Metric.__init__(self, name, documentation, 'summary', unit)\n if (sum_value is None) != (count_value is None):\n raise ValueError('count_value and sum_value must be provided together.')\n if labels is not None and count_value is not None:\n raise ValueError('Can only specify at most one of value and labels.')\n if labels is None:\n labels = []\n self._labelnames = tuple(labels)\n if count_value is not None:\n self.add_metric([], count_value, sum_value)\n\n def add_metric(self, labels, count_value, sum_value, timestamp=None):\n '''Add a metric to the metric family.\n\n Args:\n labels: A list of label values\n count_value: The count value of the metric.\n sum_value: The sum value of the metric.\n '''\n self.samples.append(Sample(self.name + '_count', dict(zip(self._labelnames, labels)), count_value, timestamp))\n self.samples.append(Sample(self.name + '_sum', dict(zip(self._labelnames, labels)), sum_value, timestamp))\n\n\nclass HistogramMetricFamily(Metric):\n '''A single histogram and its samples.\n\n For use by custom collectors.\n '''\n def __init__(self, name, documentation, buckets=None, sum_value=None, labels=None, unit=''):\n Metric.__init__(self, name, documentation, 'histogram', unit)\n if (sum_value is None) != (buckets is None):\n raise ValueError('buckets and sum_value must be provided together.')\n if labels is not None and buckets is not None:\n raise ValueError('Can only specify at most one of buckets and labels.')\n if labels is None:\n labels = []\n self._labelnames = tuple(labels)\n if buckets is not None:\n self.add_metric([], buckets, sum_value)\n\n def add_metric(self, labels, buckets, sum_value, timestamp=None):\n '''Add a metric to the metric family.\n\n Args:\n labels: A list of label values\n buckets: A list of lists.\n Each inner list can be a pair of bucket name and value,\n or a triple of bucket name, value, and exemplar.\n The buckets must be sorted, and +Inf present.\n sum_value: The sum value of the metric.\n '''\n for b in buckets:\n bucket, value = b[:2]\n exemplar = None\n if len(b) == 3:\n exemplar = b[2]\n self.samples.append(Sample(self.name + '_bucket', \n dict(list(zip(self._labelnames, labels)) + [('le', bucket)]), \n value, timestamp, exemplar))\n # +Inf is last and provides the count value.\n self.samples.append(Sample(self.name + '_count', dict(zip(self._labelnames, labels)), buckets[-1][1], timestamp))\n self.samples.append(Sample(self.name + '_sum', dict(zip(self._labelnames, labels)), sum_value, timestamp))\n\n\nclass GaugeHistogramMetricFamily(Metric):\n '''A single gauge histogram and its samples.\n\n For use by custom collectors.\n '''\n def __init__(self, name, documentation, buckets=None, labels=None, unit=''):\n Metric.__init__(self, name, documentation, 'gaugehistogram', unit)\n if labels is not None and buckets is not None:\n raise ValueError('Can only specify at most one of buckets and labels.')\n if labels is None:\n labels = []\n self._labelnames = tuple(labels)\n if buckets is 
not None:\n self.add_metric([], buckets)\n\n def add_metric(self, labels, buckets, timestamp=None):\n '''Add a metric to the metric family.\n\n Args:\n labels: A list of label values\n buckets: A list of pairs of bucket names and values.\n The buckets must be sorted, and +Inf present.\n '''\n for bucket, value in buckets:\n self.samples.append(Sample(\n self.name + '_bucket', \n dict(list(zip(self._labelnames, labels)) + [('le', bucket)]),\n value, timestamp))\n\n\nclass InfoMetricFamily(Metric):\n '''A single info and its samples.\n\n For use by custom collectors.\n '''\n def __init__(self, name, documentation, value=None, labels=None):\n Metric.__init__(self, name, documentation, 'info')\n if labels is not None and value is not None:\n raise ValueError('Can only specify at most one of value and labels.')\n if labels is None:\n labels = []\n self._labelnames = tuple(labels)\n if value is not None:\n self.add_metric([], value)\n\n def add_metric(self, labels, value, timestamp=None):\n '''Add a metric to the metric family.\n\n Args:\n labels: A list of label values\n value: A dict of labels\n '''\n self.samples.append(Sample(self.name + '_info', \n dict(dict(zip(self._labelnames, labels)), **value), 1, timestamp))\n\n\nclass StateSetMetricFamily(Metric):\n '''A single stateset and its samples.\n\n For use by custom collectors.\n '''\n def __init__(self, name, documentation, value=None, labels=None):\n Metric.__init__(self, name, documentation, 'stateset')\n if labels is not None and value is not None:\n raise ValueError('Can only specify at most one of value and labels.')\n if labels is None:\n labels = []\n self._labelnames = tuple(labels)\n if value is not None:\n self.add_metric([], value)\n\n def add_metric(self, labels, value, timestamp=None):\n '''Add a metric to the metric family.\n\n Args:\n labels: A list of label values\n value: A dict of string state names to booleans\n '''\n labels = tuple(labels)\n for state, enabled in value.items():\n v = (1 if enabled else 0)\n self.samples.append(Sample(self.name,\n dict(zip(self._labelnames + (self.name,), labels + (state,))), v, timestamp))\n\n\nclass _MutexValue(object):\n '''A float protected by a mutex.'''\n\n _multiprocess = False\n\n def __init__(self, typ, metric_name, name, labelnames, labelvalues, **kwargs):\n self._value = 0.0\n self._lock = Lock()\n\n def inc(self, amount):\n with self._lock:\n self._value += amount\n\n def set(self, value):\n with self._lock:\n self._value = value\n\n def get(self):\n with self._lock:\n return self._value\n\n\nclass _MmapedDict(object):\n \"\"\"A dict of doubles, backed by an mmapped file.\n\n The file starts with a 4 byte int, indicating how much of it is used.\n Then 4 bytes of padding.\n There's then a number of entries, consisting of a 4 byte int which is the\n size of the next field, a utf-8 encoded string key, padding to a 8 byte\n alignment, and then a 8 byte float which is the value.\n\n Not thread safe.\n \"\"\"\n def __init__(self, filename, read_mode=False):\n self._f = open(filename, 'a+b')\n if os.fstat(self._f.fileno()).st_size == 0:\n self._f.truncate(_INITIAL_MMAP_SIZE)\n self._capacity = os.fstat(self._f.fileno()).st_size\n self._m = mmap.mmap(self._f.fileno(), self._capacity)\n\n self._positions = {}\n self._used = _unpack_integer(self._m, 0)[0]\n if self._used == 0:\n self._used = 8\n _pack_integer(self._m, 0, self._used)\n else:\n if not read_mode:\n for key, _, pos in self._read_all_values():\n self._positions[key] = pos\n\n def _init_value(self, key):\n \"\"\"Initialize a 
value. Lock must be held by caller.\"\"\"\n encoded = key.encode('utf-8')\n # Pad to be 8-byte aligned.\n padded = encoded + (b' ' * (8 - (len(encoded) + 4) % 8))\n value = struct.pack('i{0}sd'.format(len(padded)).encode(), len(encoded), padded, 0.0)\n while self._used + len(value) > self._capacity:\n self._capacity *= 2\n self._f.truncate(self._capacity)\n self._m = mmap.mmap(self._f.fileno(), self._capacity)\n self._m[self._used:self._used + len(value)] = value\n\n # Update how much space we've used.\n self._used += len(value)\n _pack_integer(self._m, 0, self._used)\n self._positions[key] = self._used - 8\n\n def _read_all_values(self):\n \"\"\"Yield (key, value, pos). No locking is performed.\"\"\"\n\n pos = 8\n\n # cache variables to local ones and prevent attributes lookup\n # on every loop iteration\n used = self._used\n data = self._m\n unpack_from = struct.unpack_from\n\n while pos < used:\n encoded_len = _unpack_integer(data, pos)[0]\n pos += 4\n encoded = unpack_from(('%ss' % encoded_len).encode(), data, pos)[0]\n padded_len = encoded_len + (8 - (encoded_len + 4) % 8)\n pos += padded_len\n value = _unpack_double(data, pos)[0]\n yield encoded.decode('utf-8'), value, pos\n pos += 8\n\n def read_all_values(self):\n \"\"\"Yield (key, value, pos). No locking is performed.\"\"\"\n for k, v, _ in self._read_all_values():\n yield k, v\n\n def read_value(self, key):\n if key not in self._positions:\n self._init_value(key)\n pos = self._positions[key]\n # We assume that reading from an 8 byte aligned value is atomic\n return _unpack_double(self._m, pos)[0]\n\n def write_value(self, key, value):\n if key not in self._positions:\n self._init_value(key)\n pos = self._positions[key]\n # We assume that writing to an 8 byte aligned value is atomic\n _pack_double(self._m, pos, value)\n\n def close(self):\n if self._f:\n self._m.close()\n self._m = None\n self._f.close()\n self._f = None\n\n\ndef _MultiProcessValue(_pidFunc=os.getpid):\n files = {}\n values = []\n pid = {'value': _pidFunc()}\n # Use a single global lock when in multi-processing mode\n # as we presume this means there is no threading going on.\n # This avoids the need to also have mutexes in __MmapDict.\n lock = Lock()\n\n class _MmapedValue(object):\n '''A float protected by a mutex backed by a per-process mmaped file.'''\n\n _multiprocess = True\n\n def __init__(self, typ, metric_name, name, labelnames, labelvalues, multiprocess_mode='', **kwargs):\n self._params = typ, metric_name, name, labelnames, labelvalues, multiprocess_mode\n with lock:\n self.__reset()\n values.append(self)\n\n def __reset(self):\n typ, metric_name, name, labelnames, labelvalues, multiprocess_mode = self._params\n if typ == 'gauge':\n file_prefix = typ + '_' + multiprocess_mode\n else:\n file_prefix = typ\n if file_prefix not in files:\n filename = os.path.join(\n os.environ['prometheus_multiproc_dir'],\n '{0}_{1}.db'.format(file_prefix, pid['value']))\n files[file_prefix] = _MmapedDict(filename)\n self._file = files[file_prefix]\n self._key = json.dumps((metric_name, name, labelnames, labelvalues))\n self._value = self._file.read_value(self._key)\n\n def __check_for_pid_change(self):\n actual_pid = _pidFunc()\n if pid['value'] != actual_pid:\n pid['value'] = actual_pid\n # There has been a fork(), reset all the values.\n for f in files.values():\n f.close()\n files.clear()\n for value in values:\n value.__reset()\n\n def inc(self, amount):\n with lock:\n self.__check_for_pid_change()\n self._value += amount\n self._file.write_value(self._key, 
self._value)\n\n def set(self, value):\n with lock:\n self.__check_for_pid_change()\n self._value = value\n self._file.write_value(self._key, self._value)\n\n def get(self):\n with lock:\n self.__check_for_pid_change()\n return self._value\n\n return _MmapedValue\n\n\n# Should we enable multi-process mode?\n# This needs to be chosen before the first metric is constructed,\n# and as that may be in some arbitrary library the user/admin has\n# no control over we use an environment variable.\nif 'prometheus_multiproc_dir' in os.environ:\n _ValueClass = _MultiProcessValue()\nelse:\n _ValueClass = _MutexValue\n\n\nclass _LabelWrapper(object):\n '''Handles labels for the wrapped metric.'''\n def __init__(self, wrappedClass, name, labelnames, **kwargs):\n self._wrappedClass = wrappedClass\n self._type = wrappedClass._type\n self._name = name\n self._labelnames = labelnames\n self._kwargs = kwargs\n self._lock = Lock()\n self._metrics = {}\n\n for l in labelnames:\n if l.startswith('__'):\n raise ValueError('Invalid label metric name: ' + l)\n\n def labels(self, *labelvalues, **labelkwargs):\n '''Return the child for the given labelset.\n\n All metrics can have labels, allowing grouping of related time series.\n Taking a counter as an example:\n\n from prometheus_client import Counter\n\n c = Counter('my_requests_total', 'HTTP Failures', ['method', 'endpoint'])\n c.labels('get', '/').inc()\n c.labels('post', '/submit').inc()\n\n Labels can also be provided as keyword arguments:\n\n from prometheus_client import Counter\n\n c = Counter('my_requests_total', 'HTTP Failures', ['method', 'endpoint'])\n c.labels(method='get', endpoint='/').inc()\n c.labels(method='post', endpoint='/submit').inc()\n\n See the best practices on [naming](http://prometheus.io/docs/practices/naming/)\n and [labels](http://prometheus.io/docs/practices/instrumentation/#use-labels).\n '''\n if labelvalues and labelkwargs:\n raise ValueError(\"Can't pass both *args and **kwargs\")\n\n if labelkwargs:\n if sorted(labelkwargs) != sorted(self._labelnames):\n raise ValueError('Incorrect label names')\n labelvalues = tuple(unicode(labelkwargs[l]) for l in self._labelnames)\n else:\n if len(labelvalues) != len(self._labelnames):\n raise ValueError('Incorrect label count')\n labelvalues = tuple(unicode(l) for l in labelvalues)\n with self._lock:\n if labelvalues not in self._metrics:\n self._metrics[labelvalues] = self._wrappedClass(self._name, self._labelnames, labelvalues, **self._kwargs)\n return self._metrics[labelvalues]\n\n def remove(self, *labelvalues):\n '''Remove the given labelset from the metric.'''\n if len(labelvalues) != len(self._labelnames):\n raise ValueError('Incorrect label count')\n labelvalues = tuple(unicode(l) for l in labelvalues)\n with self._lock:\n del self._metrics[labelvalues]\n\n def _samples(self):\n with self._lock:\n metrics = self._metrics.copy()\n for labels, metric in metrics.items():\n series_labels = list(zip(self._labelnames, labels))\n for suffix, sample_labels, value in metric._samples():\n yield (suffix, dict(series_labels + list(sample_labels.items())), value)\n\n\ndef _MetricWrapper(cls):\n '''Provides common functionality for metrics.'''\n def init(name, documentation, labelnames=(), namespace='', subsystem='', unit='', registry=REGISTRY, **kwargs):\n full_name = ''\n if namespace:\n full_name += namespace + '_'\n if subsystem:\n full_name += subsystem + '_'\n full_name += name\n\n if unit and not full_name.endswith(\"_\" + unit):\n full_name += \"_\" + unit\n if unit and cls._type in 
('info', 'stateset'):\n raise ValueError('Metric name is of a type that cannot have a unit: ' + full_name)\n\n if cls._type == 'counter' and full_name.endswith('_total'):\n full_name = full_name[:-6] # Munge to OpenMetrics.\n\n if labelnames:\n labelnames = tuple(labelnames)\n for l in labelnames:\n if not _METRIC_LABEL_NAME_RE.match(l):\n raise ValueError('Invalid label metric name: ' + l)\n if _RESERVED_METRIC_LABEL_NAME_RE.match(l):\n raise ValueError('Reserved label metric name: ' + l)\n if l in cls._reserved_labelnames:\n raise ValueError('Reserved label metric name: ' + l)\n collector = _LabelWrapper(cls, full_name, labelnames, **kwargs)\n else:\n collector = cls(full_name, (), (), **kwargs)\n\n if not _METRIC_NAME_RE.match(full_name):\n raise ValueError('Invalid metric name: ' + full_name)\n\n def describe():\n return [Metric(full_name, documentation, cls._type)]\n collector.describe = describe\n\n def collect():\n metric = Metric(full_name, documentation, cls._type, unit)\n for suffix, labels, value in collector._samples():\n metric.add_sample(full_name + suffix, labels, value)\n return [metric]\n collector.collect = collect\n\n if registry:\n registry.register(collector)\n return collector\n\n init.__wrapped__ = cls\n return init\n\n\n@_MetricWrapper\nclass Counter(object):\n '''A Counter tracks counts of events or running totals.\n\n Example use cases for Counters:\n - Number of requests processed\n - Number of items that were inserted into a queue\n - Total amount of data that a system has processed\n\n Counters can only go up (and be reset when the process restarts). If your use case can go down,\n you should use a Gauge instead.\n\n An example for a Counter:\n\n from prometheus_client import Counter\n\n c = Counter('my_failures_total', 'Description of counter')\n c.inc() # Increment by 1\n c.inc(1.6) # Increment by given value\n\n There are utilities to count exceptions raised:\n\n @c.count_exceptions()\n def f():\n pass\n\n with c.count_exceptions():\n pass\n\n # Count only one type of exception\n with c.count_exceptions(ValueError):\n pass\n '''\n _type = 'counter'\n _reserved_labelnames = []\n\n def __init__(self, name, labelnames, labelvalues):\n if name.endswith('_total'):\n name = name[:-6]\n self._value = _ValueClass(self._type, name, name + '_total', labelnames, labelvalues)\n self._created = time.time()\n\n def inc(self, amount=1):\n '''Increment counter by the given amount.'''\n if amount < 0:\n raise ValueError('Counters can only be incremented by non-negative amounts.')\n self._value.inc(amount)\n\n def count_exceptions(self, exception=Exception):\n '''Count exceptions in a block of code or function.\n\n Can be used as a function decorator or context manager.\n Increments the counter when an exception of the given\n type is raised up out of the code.\n '''\n return _ExceptionCounter(self, exception)\n\n def _samples(self):\n return (('_total', {}, self._value.get()),\n ('_created', {}, self._created))\n\n\n@_MetricWrapper\nclass Gauge(object):\n '''Gauge metric, to report instantaneous values.\n\n Examples of Gauges include:\n - Inprogress requests\n - Number of items in a queue\n - Free memory\n - Total memory\n - Temperature\n\n Gauges can go both up and down.\n\n from prometheus_client import Gauge\n\n g = Gauge('my_inprogress_requests', 'Description of gauge')\n g.inc() # Increment by 1\n g.dec(10) # Decrement by given value\n g.set(4.2) # Set to a given value\n\n There are utilities for common use cases:\n\n g.set_to_current_time() # Set to current 
unixtime\n\n # Increment when entered, decrement when exited.\n @g.track_inprogress()\n def f():\n pass\n\n with g.track_inprogress():\n pass\n\n A Gauge can also take its value from a callback:\n\n d = Gauge('data_objects', 'Number of objects')\n my_dict = {}\n d.set_function(lambda: len(my_dict))\n '''\n _type = 'gauge'\n _reserved_labelnames = []\n _MULTIPROC_MODES = frozenset(('min', 'max', 'livesum', 'liveall', 'all'))\n\n def __init__(self, name, labelnames, labelvalues, multiprocess_mode='all'):\n if (_ValueClass._multiprocess and\n multiprocess_mode not in self._MULTIPROC_MODES):\n raise ValueError('Invalid multiprocess mode: ' + multiprocess_mode)\n self._value = _ValueClass(\n self._type, name, name, labelnames, labelvalues,\n multiprocess_mode=multiprocess_mode)\n\n def inc(self, amount=1):\n '''Increment gauge by the given amount.'''\n self._value.inc(amount)\n\n def dec(self, amount=1):\n '''Decrement gauge by the given amount.'''\n self._value.inc(-amount)\n\n def set(self, value):\n '''Set gauge to the given value.'''\n self._value.set(float(value))\n\n def set_to_current_time(self):\n '''Set gauge to the current unixtime.'''\n self.set(time.time())\n\n def track_inprogress(self):\n '''Track inprogress blocks of code or functions.\n\n Can be used as a function decorator or context manager.\n Increments the gauge when the code is entered,\n and decrements when it is exited.\n '''\n return _InprogressTracker(self)\n\n def time(self):\n '''Time a block of code or function, and set the duration in seconds.\n\n Can be used as a function decorator or context manager.\n '''\n return _Timer(self.set)\n\n def set_function(self, f):\n '''Call the provided function to return the Gauge value.\n\n The function must return a float, and may be called from\n multiple threads. 
All other methods of the Gauge become NOOPs.\n '''\n def samples(self):\n return (('', {}, float(f())), )\n self._samples = types.MethodType(samples, self)\n\n def _samples(self):\n return (('', {}, self._value.get()), )\n\n\n@_MetricWrapper\nclass Summary(object):\n '''A Summary tracks the size and number of events.\n\n Example use cases for Summaries:\n - Response latency\n - Request size\n\n Example for a Summary:\n\n from prometheus_client import Summary\n\n s = Summary('request_size_bytes', 'Request size (bytes)')\n s.observe(512) # Observe 512 (bytes)\n\n Example for a Summary using time:\n\n from prometheus_client import Summary\n\n REQUEST_TIME = Summary('response_latency_seconds', 'Response latency (seconds)')\n\n @REQUEST_TIME.time()\n def create_response(request):\n \"\"\"A dummy function\"\"\"\n time.sleep(1)\n\n Example for using the same Summary object as a context manager:\n\n with REQUEST_TIME.time():\n pass # Logic to be timed\n '''\n _type = 'summary'\n _reserved_labelnames = ['quantile']\n\n def __init__(self, name, labelnames, labelvalues):\n self._count = _ValueClass(self._type, name, name + '_count', labelnames, labelvalues)\n self._sum = _ValueClass(self._type, name, name + '_sum', labelnames, labelvalues)\n self._created = time.time()\n\n def observe(self, amount):\n '''Observe the given amount.'''\n self._count.inc(1)\n self._sum.inc(amount)\n\n def time(self):\n '''Time a block of code or function, and observe the duration in seconds.\n\n Can be used as a function decorator or context manager.\n '''\n return _Timer(self.observe)\n\n def _samples(self):\n return (\n ('_count', {}, self._count.get()),\n ('_sum', {}, self._sum.get()),\n ('_created', {}, self._created))\n\n\ndef _floatToGoString(d):\n if d == _INF:\n return '+Inf'\n elif d == _MINUS_INF:\n return '-Inf'\n elif math.isnan(d):\n return 'NaN'\n else:\n return repr(float(d))\n\n\n@_MetricWrapper\nclass Histogram(object):\n '''A Histogram tracks the size and number of events in buckets.\n\n You can use Histograms for aggregatable calculation of quantiles.\n\n Example use cases:\n - Response latency\n - Request size\n\n Example for a Histogram:\n\n from prometheus_client import Histogram\n\n h = Histogram('request_size_bytes', 'Request size (bytes)')\n h.observe(512) # Observe 512 (bytes)\n\n Example for a Histogram using time:\n\n from prometheus_client import Histogram\n\n REQUEST_TIME = Histogram('response_latency_seconds', 'Response latency (seconds)')\n\n @REQUEST_TIME.time()\n def create_response(request):\n \"\"\"A dummy function\"\"\"\n time.sleep(1)\n\n Example of using the same Histogram object as a context manager:\n\n with REQUEST_TIME.time():\n pass # Logic to be timed\n\n The default buckets are intended to cover a typical web/rpc request from milliseconds to seconds.\n They can be overridden by passing `buckets` keyword argument to `Histogram`.\n '''\n _type = 'histogram'\n _reserved_labelnames = ['le']\n\n def __init__(self, name, labelnames, labelvalues, buckets=(.005, .01, .025, .05, .075, .1, .25, .5, .75, 1.0, 2.5, 5.0, 7.5, 10.0, _INF)):\n self._created = time.time()\n self._sum = _ValueClass(self._type, name, name + '_sum', labelnames, labelvalues)\n buckets = [float(b) for b in buckets]\n if buckets != sorted(buckets):\n # This is probably an error on the part of the user,\n # so raise rather than sorting for them.\n raise ValueError('Buckets not in sorted order')\n if buckets and buckets[-1] != _INF:\n buckets.append(_INF)\n if len(buckets) < 2:\n raise ValueError('Must have at least 
two buckets')\n self._upper_bounds = buckets\n self._buckets = []\n bucket_labelnames = labelnames + ('le',)\n for b in buckets:\n self._buckets.append(_ValueClass(self._type, name, name + '_bucket', bucket_labelnames, labelvalues + (_floatToGoString(b),)))\n\n def observe(self, amount):\n '''Observe the given amount.'''\n self._sum.inc(amount)\n for i, bound in enumerate(self._upper_bounds):\n if amount <= bound:\n self._buckets[i].inc(1)\n break\n\n def time(self):\n '''Time a block of code or function, and observe the duration in seconds.\n\n Can be used as a function decorator or context manager.\n '''\n return _Timer(self.observe)\n\n def _samples(self):\n samples = []\n acc = 0\n for i, bound in enumerate(self._upper_bounds):\n acc += self._buckets[i].get()\n samples.append(('_bucket', {'le': _floatToGoString(bound)}, acc))\n samples.append(('_count', {}, acc))\n samples.append(('_sum', {}, self._sum.get()))\n samples.append(('_created', {}, self._created))\n return tuple(samples)\n\n\n@_MetricWrapper\nclass Info(object):\n '''Info metric, key-value pairs.\n\n Examples of Info include:\n - Build information\n - Version information\n - Potential target metadata\n\n Example usage:\n from prometheus_client import Info\n\n i = Info('my_build', 'Description of info')\n i.info({'version': '1.2.3', 'buildhost': 'foo@bar'})\n\n Info metrics do not work in multiprocess mode.\n '''\n _type = 'info'\n _reserved_labelnames = []\n\n def __init__(self, name, labelnames, labelvalues):\n self._labelnames = set(labelnames)\n self._lock = Lock()\n self._value = {}\n\n def info(self, val):\n '''Set info metric.'''\n if self._labelnames.intersection(val.keys()):\n raise ValueError('Overlapping labels for Info metric, metric: %s child: %s' % (\n self._labelnames, val))\n with self._lock:\n self._value = dict(val)\n\n\n def _samples(self):\n with self._lock:\n return (('_info', self._value, 1.0,), )\n\n\n@_MetricWrapper\nclass Enum(object):\n '''Enum metric, which of a set of states is true.\n\n Example usage:\n from prometheus_client import Enum\n\n e = Enum('task_state', 'Description of enum', \n states=['starting', 'running', 'stopped'])\n e.state('running')\n\n The first listed state will be the default.\n Enum metrics do not work in multiprocess mode.\n '''\n _type = 'stateset'\n _reserved_labelnames = []\n\n def __init__(self, name, labelnames, labelvalues, states=None):\n if name in labelnames:\n raise ValueError('Overlapping labels for Enum metric: %s' % (name,))\n if not states:\n raise ValueError('No states provided for Enum metric: %s' % (name,))\n self._name = name\n self._states = states\n self._value = 0\n self._lock = Lock()\n\n def state(self, state):\n '''Set enum metric state.'''\n with self._lock:\n self._value = self._states.index(state)\n\n def _samples(self):\n with self._lock:\n return [('', {self._name: s}, 1 if i == self._value else 0,)\n for i, s in enumerate(self._states)]\n\n\nclass _ExceptionCounter(object):\n def __init__(self, counter, exception):\n self._counter = counter\n self._exception = exception\n\n def __enter__(self):\n pass\n\n def __exit__(self, typ, value, traceback):\n if isinstance(value, self._exception):\n self._counter.inc()\n\n def __call__(self, f):\n def wrapped(func, *args, **kwargs):\n with self:\n return func(*args, **kwargs)\n return decorate(f, wrapped)\n\n\nclass _InprogressTracker(object):\n def __init__(self, gauge):\n self._gauge = gauge\n\n def __enter__(self):\n self._gauge.inc()\n\n def __exit__(self, typ, value, traceback):\n 
self._gauge.dec()\n\n def __call__(self, f):\n def wrapped(func, *args, **kwargs):\n with self:\n return func(*args, **kwargs)\n return decorate(f, wrapped)\n\n\nclass _Timer(object):\n def __init__(self, callback):\n self._callback = callback\n\n def _new_timer(self):\n return self.__class__(self._callback)\n\n def __enter__(self):\n self._start = default_timer()\n\n def __exit__(self, typ, value, traceback):\n # Time can go backwards.\n duration = max(default_timer() - self._start, 0)\n self._callback(duration)\n\n def __call__(self, f):\n def wrapped(func, *args, **kwargs):\n # Obtaining new instance of timer every time\n # ensures thread safety and reentrancy.\n with self._new_timer():\n return func(*args, **kwargs)\n return decorate(f, wrapped)\n", "prometheus_client/multiprocess.py": "#!/usr/bin/python\n\nfrom __future__ import unicode_literals\n\nfrom collections import defaultdict\n\nimport glob\nimport json\nimport os\n\nfrom . import core\n\n\nclass MultiProcessCollector(object):\n \"\"\"Collector for files for multi-process mode.\"\"\"\n def __init__(self, registry, path=None):\n if path is None:\n path = os.environ.get('prometheus_multiproc_dir')\n if not path or not os.path.isdir(path):\n raise ValueError('env prometheus_multiproc_dir is not set or not a directory')\n self._path = path\n if registry:\n registry.register(self)\n\n def collect(self):\n metrics = {}\n for f in glob.glob(os.path.join(self._path, '*.db')):\n parts = os.path.basename(f).split('_')\n typ = parts[0]\n d = core._MmapedDict(f, read_mode=True)\n for key, value in d.read_all_values():\n metric_name, name, labelnames, labelvalues = json.loads(key)\n\n metric = metrics.get(metric_name)\n if metric is None:\n metric = core.Metric(metric_name, 'Multiprocess metric', typ)\n metrics[metric_name] = metric\n\n if typ == 'gauge':\n pid = parts[2][:-3]\n metric._multiprocess_mode = parts[1]\n metric.add_sample(name, tuple(zip(labelnames, labelvalues)) + (('pid', pid), ), value)\n else:\n # The duplicates and labels are fixed in the next for.\n metric.add_sample(name, tuple(zip(labelnames, labelvalues)), value)\n d.close()\n\n for metric in metrics.values():\n samples = defaultdict(float)\n buckets = {}\n for s in metric.samples:\n name, labels, value = s.name, s.labels, s.value\n if metric.type == 'gauge':\n without_pid = tuple(l for l in labels if l[0] != 'pid')\n if metric._multiprocess_mode == 'min':\n current = samples.setdefault((name, without_pid), value)\n if value < current:\n samples[(s.name, without_pid)] = value\n elif metric._multiprocess_mode == 'max':\n current = samples.setdefault((name, without_pid), value)\n if value > current:\n samples[(s.name, without_pid)] = value\n elif metric._multiprocess_mode == 'livesum':\n samples[(name, without_pid)] += value\n else: # all/liveall\n samples[(name, labels)] = value\n\n elif metric.type == 'histogram':\n bucket = tuple(float(l[1]) for l in labels if l[0] == 'le')\n if bucket:\n # _bucket\n without_le = tuple(l for l in labels if l[0] != 'le')\n buckets.setdefault(without_le, {})\n buckets[without_le].setdefault(bucket[0], 0.0)\n buckets[without_le][bucket[0]] += value\n else:\n # _sum/_count\n samples[(s.name, labels)] += value\n\n else:\n # Counter and Summary.\n samples[(s.name, labels)] += value\n\n # Accumulate bucket values.\n if metric.type == 'histogram':\n for labels, values in buckets.items():\n acc = 0.0\n for bucket, value in sorted(values.items()):\n acc += value\n samples[(metric.name + '_bucket', labels + (('le', 
core._floatToGoString(bucket)), ))] = acc\n samples[(metric.name + '_count', labels)] = acc\n\n # Convert to correct sample format.\n metric.samples = [core.Sample(name, dict(labels), value) for (name, labels), value in samples.items()]\n return metrics.values()\n\n\ndef mark_process_dead(pid, path=None):\n \"\"\"Do bookkeeping for when one process dies in a multi-process setup.\"\"\"\n if path is None:\n path = os.environ.get('prometheus_multiproc_dir')\n for f in glob.glob(os.path.join(path, 'gauge_livesum_{0}.db'.format(pid))):\n os.remove(f)\n for f in glob.glob(os.path.join(path, 'gauge_liveall_{0}.db'.format(pid))):\n os.remove(f)\n"}
|
{"prometheus_client/core.py": [{"type": "function", "name": "_mmap_key", "lines": [589, 593], "signature": "def _mmap_key(metric_name, name, labelnames, labelvalues):", "doc": "Format a key for use in the mmap file."}], "prometheus_client/multiprocess.py": [{"type": "function", "name": "MultiProcessCollector.merge", "lines": [29, 114], "signature": "def merge(self, files, accumulate=True):", "doc": "Merge metrics from given mmap files.\n\nBy default, histograms are accumulated, as per prometheus wire format.\nBut if writing the merged data back to mmap files, use\naccumulate=False to avoid compound accumulation."}]}
| null |
["tests/test_multiprocess.py::TestMultiProcess::test_merge_no_accumulate"]
|
["tests/test_multiprocess.py::TestMultiProcess::test_collect", "tests/test_multiprocess.py::TestMultiProcess::test_counter_across_forks", "tests/test_multiprocess.py::TestMultiProcess::test_counter_adds", "tests/test_multiprocess.py::TestMultiProcess::test_gauge_all", "tests/test_multiprocess.py::TestMultiProcess::test_gauge_liveall", "tests/test_multiprocess.py::TestMultiProcess::test_gauge_livesum", "tests/test_multiprocess.py::TestMultiProcess::test_gauge_max", "tests/test_multiprocess.py::TestMultiProcess::test_gauge_min", "tests/test_multiprocess.py::TestMultiProcess::test_histogram_adds", "tests/test_multiprocess.py::TestMultiProcess::test_namespace_subsystem", "tests/test_multiprocess.py::TestMultiProcess::test_summary_adds", "tests/test_multiprocess.py::TestMmapedDict::test_expansion", "tests/test_multiprocess.py::TestMmapedDict::test_multi_expansion", "tests/test_multiprocess.py::TestMmapedDict::test_process_restart", "tests/test_multiprocess.py::TestUnsetEnv::test_file_syncpath", "tests/test_multiprocess.py::TestUnsetEnv::test_unset_syncdir_env"]
|
09a5ae30602a7a81f6174dae4ba08b93ee7feed2
|
{"first_commit_time": 1536147437.0, "pr_title": "Refactor MultiProcessCollector.collect() to allow for arbitrary merging.", "pr_body": "Factors out a merge() method from the previous collect() method, which\r\nis parameterized, and thus can be used for arbitrary merging of samples.\r\nFor motivation, see discussion in issue #275 around merging dead worker's\r\ndata into a single mmaped file.\r\n\r\nThis basically allows us to parameterize the files to be merged, and\r\nalso whether to accumulate histograms or not. Accumulation is on by\r\ndefault, as that is what the prometheus format expects. But it can now\r\nbe disabled, which allows merged values to be correctly written back to\r\nan mmaped file. For the same reason, the order of labels is preserved\r\nvia OrderedDict.", "pr_timeline": [{"time": 1536150349.0, "comment": "You've test failures, and need to add a DCO."}, {"time": 1536338545.0, "comment": "Can you resolve the conflict? I just merged a large reworking of the library to support openmetrics."}, {"time": 1536340228.0, "comment": "Rebased on latest master"}, {"time": 1536341475.0, "comment": "Thanks!"}, {"time": 1536344225.0, "comment": "Awesome, thanks :)"}, {"time": 1538472601.0, "comment": "Thanks @bloodearnest for this feature.\r\n\r\n@brian-brazil : any idea when you might release the next version including this ?"}, {"time": 1538473248.0, "comment": "I'm waiting to get all the openmetrics stuff in, and then release."}], "issues": {}}
|
|
pvlib/pvlib-python
| 1,017
|
https://github.com/pvlib/pvlib-python/pull/1017
|
pvlib__pvlib-python-1017
|
[]
|
49da0318256d8b46f90d4b29a7023de680f8410b
|
diff --git a/docs/examples/plot_passias_diffuse_shading.py b/docs/examples/plot_passias_diffuse_shading.py
new file mode 100644
index 0000000000..989e977fdb
--- /dev/null
+++ b/docs/examples/plot_passias_diffuse_shading.py
@@ -0,0 +1,84 @@
+"""
+Diffuse Self-Shading
+====================
+
+Modeling the reduction in diffuse irradiance caused by row-to-row diffuse
+shading.
+"""
+
+# %%
+# The term "self-shading" usually refers to adjacent rows blocking direct
+# irradiance and casting shadows on each other. However, the concept also
+# applies to diffuse irradiance because rows block a portion of the sky
+# dome even when the sun is high in the sky. The irradiance loss fraction
+# depends on how tightly the rows are packed and where on the module the
+# loss is evaluated -- a point near the top edge of a module will see
+# more of the sky than a point near the bottom edge.
+#
+# This example uses the approach presented by Passias and Källbäck in [1]_
+# and recreates two figures from that paper using
+# :py:func:`pvlib.shading.masking_angle_passias` and
+# :py:func:`pvlib.shading.sky_diffuse_passias`.
+#
+# References
+# ----------
+# .. [1] D. Passias and B. Källbäck, "Shading effects in rows of solar cell
+# panels", Solar Cells, Volume 11, Pages 281-291. 1984.
+# DOI: 10.1016/0379-6787(84)90017-6
+
+from pvlib import shading, irradiance
+import matplotlib.pyplot as plt
+import numpy as np
+
+# %%
+# First we'll recreate Figure 4, showing how the average masking angle varies
+# with array tilt and array packing. The masking angle of a given point on a
+# module is the angle from horizontal to the next row's top edge and represents
+# the portion of the sky dome blocked by the next row. Because it changes
+# from the bottom to the top of a module, the average across the module is
+# calculated. In [1]_, ``k`` refers to the ratio of row pitch to row slant
+# height (i.e. 1 / GCR).
+
+surface_tilt = np.arange(0, 90, 0.5)
+
+plt.figure()
+for k in [1, 1.5, 2, 2.5, 3, 4, 5, 7, 10]:
+ gcr = 1/k
+ psi = shading.masking_angle_passias(surface_tilt, gcr)
+ plt.plot(surface_tilt, psi, label='k={}'.format(k))
+
+plt.xlabel('Inclination angle [degrees]')
+plt.ylabel('Average masking angle [degrees]')
+plt.legend()
+plt.show()
+
+# %%
+# So as the array is packed tighter (decreasing ``k``), the average masking
+# angle increases.
+#
+# Next we'll recreate Figure 5. Note that the y-axis here is the ratio of
+# diffuse plane of array irradiance (after accounting for shading) to diffuse
+# horizontal irradiance. This means that the deviation from 100% is due to the
+# combination of self-shading and the fact that being at a tilt blocks off
+# the portion of the sky behind the row. The first effect is modeled with
+# :py:func:`pvlib.shading.sky_diffuse_passias` and the second with
+# :py:func:`pvlib.irradiance.isotropic`.
+
+plt.figure()
+for k in [1, 1.5, 2, 10]:
+ gcr = 1/k
+ psi = shading.masking_angle_passias(surface_tilt, gcr)
+ shading_loss = shading.sky_diffuse_passias(psi)
+ transposition_ratio = irradiance.isotropic(surface_tilt, dhi=1.0)
+ relative_diffuse = transposition_ratio * (1-shading_loss) * 100 # %
+ plt.plot(surface_tilt, relative_diffuse, label='k={}'.format(k))
+
+plt.xlabel('Inclination angle [degrees]')
+plt.ylabel('Relative diffuse irradiance [%]')
+plt.ylim(0, 105)
+plt.legend()
+plt.show()
+
+# %%
+# As ``k`` decreases, GCR increases, so self-shading loss increases and
+# collected diffuse irradiance decreases.
diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index dd7c618646..a7087edad0 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -354,6 +354,12 @@ Effects on PV System Output
soiling.hsu
soiling.kimber
+.. autosummary::
+ :toctree: generated/
+
+ shading.masking_angle
+ shading.masking_angle_passias
+ shading.sky_diffuse_passias
Tracking
diff --git a/docs/sphinx/source/whatsnew/v0.8.0.rst b/docs/sphinx/source/whatsnew/v0.8.0.rst
index 2aa1ebfb67..1fd41f232f 100644
--- a/docs/sphinx/source/whatsnew/v0.8.0.rst
+++ b/docs/sphinx/source/whatsnew/v0.8.0.rst
@@ -38,6 +38,10 @@ Enhancements
* Add :py:func:`pvlib.iam.marion_diffuse` and
:py:func:`pvlib.iam.marion_integrate` to calculate IAM values for
diffuse irradiance. (:pull:`984`)
+* Add :py:func:`pvlib.shading.sky_diffuse_passias`,
+ :py:func:`pvlib.shading.masking_angle_passias`, and
+ :py:func:`pvlib.shading.masking_angle` to model diffuse shading loss.
+ (:pull:`1017`)
* Add :py:func:`pvlib.inverter.fit_sandia` that fits the Sandia inverter model
to a set of inverter efficiency curves. (:pull:`1011`)
@@ -75,6 +79,7 @@ Documentation
* Add a transposition gain example to the gallery. (:pull:`979`)
* Add a gallery example of calculating diffuse IAM using
:py:func:`pvlib.iam.marion_diffuse`. (:pull:`984`)
+* Add a gallery example of modeling diffuse shading loss. (:pull:`1017`)
* Add minigalleries to API reference pages. (:pull:`991`)
Requirements
diff --git a/pvlib/shading.py b/pvlib/shading.py
new file mode 100644
index 0000000000..9479eb1739
--- /dev/null
+++ b/pvlib/shading.py
@@ -0,0 +1,193 @@
+"""
+The ``shading`` module contains functions that model module shading and the
+associated effects on PV module output
+"""
+
+import numpy as np
+import pandas as pd
+from pvlib.tools import sind, cosd
+
+
+def masking_angle(surface_tilt, gcr, slant_height):
+ """
+ The elevation angle below which diffuse irradiance is blocked.
+
+ The ``slant_height`` parameter determines how far up the module's surface to
+ evaluate the masking angle. The lower the point, the steeper the masking
+ angle [1]_. SAM uses a "worst-case" approach where the masking angle
+ is calculated for the bottom of the array (i.e. ``slant_height=0``) [2]_.
+
+ Parameters
+ ----------
+ surface_tilt : numeric
+ Panel tilt from horizontal [degrees].
+
+ gcr : float
+ The ground coverage ratio of the array [unitless].
+
+ slant_height : numeric
+ The distance up the module's slant height to evaluate the masking
+ angle, as a fraction [0-1] of the module slant height [unitless].
+
+ Returns
+ -------
+ mask_angle : numeric
+ Angle from horizontal where diffuse light is blocked by the
+ preceding row [degrees].
+
+ See Also
+ --------
+ masking_angle_passias
+ sky_diffuse_passias
+
+ References
+ ----------
+ .. [1] D. Passias and B. Källbäck, "Shading effects in rows of solar cell
+ panels", Solar Cells, Volume 11, Pages 281-291. 1984.
+ DOI: 10.1016/0379-6787(84)90017-6
+ .. [2] Gilman, P. et al., (2018). "SAM Photovoltaic Model Technical
+ Reference Update", NREL Technical Report NREL/TP-6A20-67399.
+ Available at https://www.nrel.gov/docs/fy18osti/67399.pdf
+ """
+ # The original equation (8 in [1]) requires pitch and collector width,
+ # but it's easy to non-dimensionalize it to make it a function of GCR
+ # by factoring out B from the argument to arctan.
+ numerator = (1 - slant_height) * sind(surface_tilt)
+ denominator = 1/gcr - (1 - slant_height) * cosd(surface_tilt)
+ phi = np.arctan(numerator / denominator)
+ return np.degrees(phi)
+
+
+def masking_angle_passias(surface_tilt, gcr):
+ r"""
+ The average masking angle over the slant height of a row.
+
+ The masking angle is the angle from horizontal where the sky dome is
+ blocked by the row in front. The masking angle is larger near the lower
+ edge of a row than near the upper edge. This function calculates the
+ average masking angle as described in [1]_.
+
+ Parameters
+ ----------
+ surface_tilt : numeric
+ Panel tilt from horizontal [degrees].
+
+ gcr : float
+ The ground coverage ratio of the array [unitless].
+
+ Returns
+ -------
+ mask_angle : numeric
+ Average angle from horizontal where diffuse light is blocked by the
+ preceding row [degrees].
+
+ See Also
+ --------
+ masking_angle
+ sky_diffuse_passias
+
+ Notes
+ -----
+ The pvlib-python authors believe that Eqn. 9 in [1]_ is incorrect.
+ Here we use an independent equation. First, Eqn. 8 is non-dimensionalized
+ (recasting in terms of GCR):
+
+ .. math::
+
+ \psi(z') = \arctan \left [
+ \frac{(1 - z') \sin \beta}
+ {\mathrm{GCR}^{-1} + (z' - 1) \cos \beta}
+ \right ]
+
+ Where :math:`GCR = B/C` and :math:`z' = z/B`. The average masking angle
+ :math:`\overline{\psi} = \int_0^1 \psi(z') \mathrm{d}z'` is then
+ evaluated symbolically using Maxima (using :math:`X = 1/\mathrm{GCR}`):
+
+ .. code-block:: none
+
+ load(scifac) /* for the gcfac function */
+ assume(X>0, cos(beta)>0, cos(beta)-X<0); /* X is 1/GCR */
+ gcfac(integrate(atan((1-z)*sin(beta)/(X+(z-1)*cos(beta))), z, 0, 1))
+
+ This yields the equation implemented by this function:
+
+ .. math::
+
+ \overline{\psi} = \
+ &-\frac{X}{2} \sin\beta \log | 2 X \cos\beta - (X^2 + 1)| \\
+ &+ (X \cos\beta - 1) \arctan \frac{X \cos\beta - 1}{X \sin\beta} \\
+ &+ (1 - X \cos\beta) \arctan \frac{\cos\beta}{\sin\beta} \\
+ &+ X \log X \sin\beta
+
+ The pvlib-python authors have validated this equation against numerical
+ integration of :math:`\overline{\psi} = \int_0^1 \psi(z') \mathrm{d}z'`.
+
+ References
+ ----------
+ .. [1] D. Passias and B. Källbäck, "Shading effects in rows of solar cell
+ panels", Solar Cells, Volume 11, Pages 281-291. 1984.
+ DOI: 10.1016/0379-6787(84)90017-6
+ """
+ # wrap it in an array so that division by zero is handled well
+ beta = np.radians(np.array(surface_tilt))
+ sin_b = np.sin(beta)
+ cos_b = np.cos(beta)
+ X = 1/gcr
+
+ with np.errstate(divide='ignore', invalid='ignore'): # ignore beta=0
+ term1 = -X * sin_b * np.log(np.abs(2 * X * cos_b - (X**2 + 1))) / 2
+ term2 = (X * cos_b - 1) * np.arctan((X * cos_b - 1) / (X * sin_b))
+ term3 = (1 - X * cos_b) * np.arctan(cos_b / sin_b)
+ term4 = X * np.log(X) * sin_b
+
+ psi_avg = term1 + term2 + term3 + term4
+ # when beta=0, divide by zero makes psi_avg NaN. replace with 0:
+ psi_avg = np.where(np.isfinite(psi_avg), psi_avg, 0)
+
+ if isinstance(surface_tilt, pd.Series):
+ psi_avg = pd.Series(psi_avg, index=surface_tilt.index)
+
+ return np.degrees(psi_avg)
+
+
+def sky_diffuse_passias(masking_angle):
+ r"""
+ The diffuse irradiance loss caused by row-to-row sky diffuse shading.
+
+ Even when the sun is high in the sky, a row's view of the sky dome will
+ be partially blocked by the row in front. This causes a reduction in the
+ diffuse irradiance incident on the module. The reduction depends on the
+ masking angle, the elevation angle from a point on the shaded module to
+ the top of the shading row. In [1]_ the masking angle is calculated as
+ the average across the module height. SAM assumes the "worst-case" loss
+ where the masking angle is calculated for the bottom of the array [2]_.
+
+ This function, as in [1]_, makes the assumption that sky diffuse
+ irradiance is isotropic.
+
+ Parameters
+ ----------
+ masking_angle : numeric
+ The elevation angle below which diffuse irradiance is blocked
+ [degrees].
+
+ Returns
+ -------
+ derate : numeric
+ The fraction [0-1] of blocked sky diffuse irradiance.
+
+ See Also
+ --------
+ masking_angle
+ masking_angle_passias
+
+ References
+ ----------
+ .. [1] D. Passias and B. Källbäck, "Shading effects in rows of solar cell
+ panels", Solar Cells, Volume 11, Pages 281-291. 1984.
+ DOI: 10.1016/0379-6787(84)90017-6
+ .. [2] Gilman, P. et al., (2018). "SAM Photovoltaic Model Technical
+ Reference Update", NREL Technical Report NREL/TP-6A20-67399.
+ Available at https://www.nrel.gov/docs/fy18osti/67399.pdf
+ """
+ return 1 - cosd(masking_angle/2)**2
|
diff --git a/pvlib/tests/test_shading.py b/pvlib/tests/test_shading.py
new file mode 100644
index 0000000000..8a9fd46a69
--- /dev/null
+++ b/pvlib/tests/test_shading.py
@@ -0,0 +1,71 @@
+import numpy as np
+import pandas as pd
+
+from pandas.testing import assert_series_equal
+import pytest
+
+from pvlib import shading
+
+
[email protected]
+def surface_tilt():
+ idx = pd.date_range('2019-01-01', freq='h', periods=3)
+ return pd.Series([0, 20, 90], index=idx)
+
+
[email protected]
+def masking_angle(surface_tilt):
+ # masking angles for the surface_tilt fixture,
+ # assuming GCR=0.5 and height=0.25
+ return pd.Series([0.0, 11.20223712, 20.55604522], index=surface_tilt.index)
+
+
[email protected]
+def average_masking_angle(surface_tilt):
+ # average masking angles for the surface_tilt fixture, assuming GCR=0.5
+ return pd.Series([0.0, 7.20980655, 13.779867461], index=surface_tilt.index)
+
+
[email protected]
+def shading_loss(surface_tilt):
+ # diffuse shading loss values for the average_masking_angle fixture
+ return pd.Series([0, 0.00395338, 0.01439098], index=surface_tilt.index)
+
+
+def test_masking_angle_series(surface_tilt, masking_angle):
+ # series inputs and outputs
+ masking_angle_actual = shading.masking_angle(surface_tilt, 0.5, 0.25)
+ assert_series_equal(masking_angle_actual, masking_angle)
+
+
+def test_masking_angle_scalar(surface_tilt, masking_angle):
+ # scalar inputs and outputs, including zero
+ for tilt, angle in zip(surface_tilt, masking_angle):
+ masking_angle_actual = shading.masking_angle(tilt, 0.5, 0.25)
+ assert np.isclose(masking_angle_actual, angle)
+
+
+def test_masking_angle_passias_series(surface_tilt, average_masking_angle):
+ # pandas series inputs and outputs
+ masking_angle_actual = shading.masking_angle_passias(surface_tilt, 0.5)
+ assert_series_equal(masking_angle_actual, average_masking_angle)
+
+
+def test_masking_angle_passias_scalar(surface_tilt, average_masking_angle):
+ # scalar inputs and outputs, including zero
+ for tilt, angle in zip(surface_tilt, average_masking_angle):
+ masking_angle_actual = shading.masking_angle_passias(tilt, 0.5)
+ assert np.isclose(masking_angle_actual, angle)
+
+
+def test_sky_diffuse_passias_series(average_masking_angle, shading_loss):
+ # pandas series inputs and outputs
+ actual_loss = shading.sky_diffuse_passias(average_masking_angle)
+ assert_series_equal(shading_loss, actual_loss)
+
+
+def test_sky_diffuse_passias_scalar(average_masking_angle, shading_loss):
+ # scalar inputs and outputs
+ for angle, loss in zip(average_masking_angle, shading_loss):
+ actual_loss = shading.sky_diffuse_passias(angle)
+ assert np.isclose(loss, actual_loss)
| 2020-08-04T03:27:48
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"docs/examples/plot_passias_diffuse_shading.py": null, "docs/sphinx/source/api.rst": ".. currentmodule:: pvlib\n\n#############\nAPI reference\n#############\n\n\nClasses\n=======\n\npvlib-python provides a collection of classes for users that prefer\nobject-oriented programming. These classes can help users keep track of\ndata in a more organized way, and can help to simplify the modeling\nprocess. The classes do not add any functionality beyond the procedural\ncode. Most of the object methods are simple wrappers around the\ncorresponding procedural code.\n\n.. autosummary::\n :toctree: generated/\n\n location.Location\n pvsystem.PVSystem\n tracking.SingleAxisTracker\n modelchain.ModelChain\n pvsystem.LocalizedPVSystem\n tracking.LocalizedSingleAxisTracker\n\n\nSolar Position\n==============\n\nFunctions and methods for calculating solar position.\n\nThe :py:meth:`location.Location.get_solarposition` method and the\n:py:func:`solarposition.get_solarposition` function with default\nparameters are fast and accurate. We recommend using these functions\nunless you know that you need a different function.\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.get_solarposition\n solarposition.get_solarposition\n solarposition.spa_python\n solarposition.ephemeris\n solarposition.pyephem\n solarposition.spa_c\n\n\nAdditional functions for quantities closely related to solar position.\n\n.. autosummary::\n :toctree: generated/\n\n solarposition.calc_time\n solarposition.pyephem_earthsun_distance\n solarposition.nrel_earthsun_distance\n spa.calculate_deltat\n\n\nFunctions for calculating sunrise, sunset and transit times.\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.get_sun_rise_set_transit\n solarposition.sun_rise_set_transit_ephem\n solarposition.sun_rise_set_transit_spa\n solarposition.sun_rise_set_transit_geometric\n\n\nThe spa module contains the implementation of the built-in NREL SPA\nalgorithm.\n\n.. autosummary::\n :toctree: generated/\n\n spa\n\nCorrelations and analytical expressions for low precision solar position\ncalculations.\n\n.. autosummary::\n :toctree: generated/\n\n solarposition.solar_zenith_analytical\n solarposition.solar_azimuth_analytical\n solarposition.declination_spencer71\n solarposition.declination_cooper69\n solarposition.equation_of_time_spencer71\n solarposition.equation_of_time_pvcdrom\n solarposition.hour_angle\n solarposition.sun_rise_set_transit_geometric\n\n\nClear sky\n=========\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.get_clearsky\n clearsky.ineichen\n clearsky.lookup_linke_turbidity\n clearsky.simplified_solis\n clearsky.haurwitz\n clearsky.detect_clearsky\n clearsky.bird\n\n\nAirmass and atmospheric models\n==============================\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.get_airmass\n atmosphere.get_absolute_airmass\n atmosphere.get_relative_airmass\n atmosphere.pres2alt\n atmosphere.alt2pres\n atmosphere.gueymard94_pw\n atmosphere.first_solar_spectral_correction\n atmosphere.bird_hulstrom80_aod_bb\n atmosphere.kasten96_lt\n atmosphere.angstrom_aod_at_lambda\n atmosphere.angstrom_alpha\n\n\nIrradiance\n==========\n\nMethods for irradiance calculations\n-----------------------------------\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.PVSystem.get_irradiance\n pvsystem.PVSystem.get_aoi\n tracking.SingleAxisTracker.get_irradiance\n\nDecomposing and combining irradiance\n------------------------------------\n\n.. 
autosummary::\n :toctree: generated/\n\n irradiance.get_extra_radiation\n irradiance.aoi\n irradiance.aoi_projection\n irradiance.poa_horizontal_ratio\n irradiance.beam_component\n irradiance.poa_components\n irradiance.get_ground_diffuse\n irradiance.dni\n\nTransposition models\n--------------------\n\n.. autosummary::\n :toctree: generated/\n\n irradiance.get_total_irradiance\n irradiance.get_sky_diffuse\n irradiance.isotropic\n irradiance.perez\n irradiance.haydavies\n irradiance.klucher\n irradiance.reindl\n irradiance.king\n\n.. _dniestmodels:\n\nDNI estimation models\n---------------------\n\n.. autosummary::\n :toctree: generated/\n\n irradiance.disc\n irradiance.dirint\n irradiance.dirindex\n irradiance.erbs\n irradiance.liujordan\n irradiance.gti_dirint\n\nClearness index models\n----------------------\n\n.. autosummary::\n :toctree: generated/\n\n irradiance.clearness_index\n irradiance.clearness_index_zenith_independent\n irradiance.clearsky_index\n\n\nPV Modeling\n===========\n\nClasses\n-------\n\nThe :py:class:`~pvsystem.PVSystem` class provides many methods that\nwrap the functions listed below. See its documentation for details.\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.PVSystem\n pvsystem.LocalizedPVSystem\n\nIncident angle modifiers\n------------------------\n\n.. autosummary::\n :toctree: generated/\n\n iam.physical\n iam.ashrae\n iam.martin_ruiz\n iam.martin_ruiz_diffuse\n iam.sapm\n iam.interp\n iam.marion_diffuse\n iam.marion_integrate\n\nPV temperature models\n---------------------\n\n.. autosummary::\n :toctree: generated/\n\n temperature.sapm_cell\n temperature.sapm_module\n temperature.sapm_cell_from_module\n temperature.pvsyst_cell\n temperature.faiman\n\nSingle diode models\n-------------------\n\nFunctions relevant for single diode models.\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.calcparams_cec\n pvsystem.calcparams_desoto\n pvsystem.calcparams_pvsyst\n pvsystem.i_from_v\n pvsystem.singlediode\n pvsystem.v_from_i\n pvsystem.max_power_point\n\nLow-level functions for solving the single diode equation.\n\n.. autosummary::\n :toctree: generated/\n\n singlediode.estimate_voc\n singlediode.bishop88\n singlediode.bishop88_i_from_v\n singlediode.bishop88_v_from_i\n singlediode.bishop88_mpp\n\nFunctions for fitting diode models\n\n.. autosummary::\n :toctree: generated/\n\n ivtools.fit_sde_sandia\n ivtools.fit_sdm_cec_sam\n ivtools.fit_sdm_desoto\n\nInverter models (DC to AC conversion)\n-------------------------------------\n\n.. autosummary::\n :toctree: generated/\n\n inverter.sandia\n inverter.adr\n inverter.pvwatts\n\nFunctions for fitting inverter models\n\n.. autosummary::\n :toctree: generated/\n\n inverter.fit_sandia\n\n\nPV System Models\n----------------\n\nSandia array performance model (SAPM)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.sapm\n pvsystem.sapm_effective_irradiance\n pvsystem.sapm_spectral_loss\n pvsystem.sapm_aoi_loss\n inverter.sandia\n temperature.sapm_cell\n\nPvsyst model\n^^^^^^^^^^^^\n\n.. autosummary::\n :toctree: generated/\n\n temperature.pvsyst_cell\n pvsystem.calcparams_pvsyst\n pvsystem.singlediode\n\nPVWatts model\n^^^^^^^^^^^^^\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.pvwatts_dc\n inverter.pvwatts\n pvsystem.pvwatts_losses\n\nOther\n-----\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.retrieve_sam\n pvsystem.scale_voltage_current_power\n\n\nEffects on PV System Output\n===========================\n\n.. 
autosummary::\n :toctree: generated/\n\n snow.coverage_nrel\n snow.fully_covered_nrel\n snow.dc_loss_nrel\n\n.. autosummary::\n :toctree: generated/\n\n soiling.hsu\n soiling.kimber\n\n\n\nTracking\n========\n\nSingleAxisTracker\n-----------------\n\nThe :py:class:`~tracking.SingleAxisTracker` inherits from\n:py:class:`~pvsystem.PVSystem`.\n\n.. autosummary::\n :toctree: generated/\n\n tracking.SingleAxisTracker\n tracking.SingleAxisTracker.singleaxis\n tracking.SingleAxisTracker.get_irradiance\n tracking.SingleAxisTracker.localize\n tracking.LocalizedSingleAxisTracker\n\nFunctions\n---------\n\n.. autosummary::\n :toctree: generated/\n\n tracking.singleaxis\n\n\n.. _iotools:\n\nIO Tools\n========\n\nFunctions for reading and writing data from a variety of file formats\nrelevant to solar energy modeling.\n\n.. autosummary::\n :toctree: generated/\n\n iotools.read_tmy2\n iotools.read_tmy3\n iotools.read_epw\n iotools.parse_epw\n iotools.read_srml\n iotools.read_srml_month_from_solardat\n iotools.read_surfrad\n iotools.read_midc\n iotools.read_midc_raw_data_from_nrel\n iotools.read_ecmwf_macc\n iotools.get_ecmwf_macc\n iotools.read_crn\n iotools.read_solrad\n iotools.get_psm3\n iotools.read_psm3\n iotools.parse_psm3\n iotools.get_pvgis_tmy\n iotools.read_pvgis_tmy\n\nA :py:class:`~pvlib.location.Location` object may be created from metadata\nin some files.\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.from_tmy\n location.Location.from_epw\n\n\nForecasting\n===========\n\nForecast models\n---------------\n\n.. autosummary::\n :toctree: generated/\n\n forecast.GFS\n forecast.NAM\n forecast.RAP\n forecast.HRRR\n forecast.HRRR_ESRL\n forecast.NDFD\n\nGetting data\n------------\n\n.. autosummary::\n :toctree: generated/\n\n forecast.ForecastModel.get_data\n forecast.ForecastModel.get_processed_data\n\nProcessing data\n---------------\n\n.. autosummary::\n :toctree: generated/\n\n forecast.ForecastModel.process_data\n forecast.ForecastModel.rename\n forecast.ForecastModel.cloud_cover_to_ghi_linear\n forecast.ForecastModel.cloud_cover_to_irradiance_clearsky_scaling\n forecast.ForecastModel.cloud_cover_to_transmittance_linear\n forecast.ForecastModel.cloud_cover_to_irradiance_liujordan\n forecast.ForecastModel.cloud_cover_to_irradiance\n forecast.ForecastModel.kelvin_to_celsius\n forecast.ForecastModel.isobaric_to_ambient_temperature\n forecast.ForecastModel.uv_to_speed\n forecast.ForecastModel.gust_to_speed\n\nIO support\n----------\n\nThese are public for now, but use at your own risk.\n\n.. autosummary::\n :toctree: generated/\n\n forecast.ForecastModel.set_dataset\n forecast.ForecastModel.set_query_latlon\n forecast.ForecastModel.set_location\n forecast.ForecastModel.set_time\n\n\nModelChain\n==========\n\nCreating a ModelChain object.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain\n\nRunning\n-------\n\nRunning a ModelChain.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.run_model\n modelchain.ModelChain.complete_irradiance\n modelchain.ModelChain.prepare_inputs\n\nAttributes\n----------\n\nSimple ModelChain attributes:\n\n``system, location, clearsky_model, transposition_model,\nsolar_position_method, airmass_model``\n\nProperties\n----------\n\nModelChain properties that are aliases for your specific modeling functions.\n\n.. 
autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.orientation_strategy\n modelchain.ModelChain.dc_model\n modelchain.ModelChain.ac_model\n modelchain.ModelChain.aoi_model\n modelchain.ModelChain.spectral_model\n modelchain.ModelChain.temperature_model\n modelchain.ModelChain.losses_model\n modelchain.ModelChain.effective_irradiance_model\n\nModel definitions\n-----------------\n\nModelChain model definitions.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.sapm\n modelchain.ModelChain.cec\n modelchain.ModelChain.desoto\n modelchain.ModelChain.pvsyst\n modelchain.ModelChain.pvwatts_dc\n modelchain.ModelChain.snlinverter\n modelchain.ModelChain.adrinverter\n modelchain.ModelChain.pvwatts_inverter\n modelchain.ModelChain.ashrae_aoi_loss\n modelchain.ModelChain.physical_aoi_loss\n modelchain.ModelChain.sapm_aoi_loss\n modelchain.ModelChain.no_aoi_loss\n modelchain.ModelChain.first_solar_spectral_loss\n modelchain.ModelChain.sapm_spectral_loss\n modelchain.ModelChain.no_spectral_loss\n modelchain.ModelChain.sapm_temp\n modelchain.ModelChain.pvsyst_temp\n modelchain.ModelChain.faiman_temp\n modelchain.ModelChain.pvwatts_losses\n modelchain.ModelChain.no_extra_losses\n\nInference methods\n-----------------\n\nMethods that automatically determine which models should be used based\non the information in the associated :py:class:`~pvsystem.PVSystem` object.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.infer_dc_model\n modelchain.ModelChain.infer_ac_model\n modelchain.ModelChain.infer_aoi_model\n modelchain.ModelChain.infer_spectral_model\n modelchain.ModelChain.infer_temperature_model\n modelchain.ModelChain.infer_losses_model\n\nFunctions\n---------\n\nFunctions for power modeling.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.basic_chain\n modelchain.get_orientation\n\n\nBifacial\n========\n\nMethods for calculating back surface irradiance\n\n.. autosummary::\n :toctree: generated/\n\n bifacial.pvfactors_timeseries\n\n\nScaling\n=======\n\nMethods for manipulating irradiance for temporal or spatial considerations\n\n.. autosummary::\n :toctree: generated/\n\n scaling.wvm\n", "docs/sphinx/source/whatsnew/v0.8.0.rst": ".. _whatsnew_0800:\n\nv0.8.0 (Month day, year)\n-------------------------\n\nAPI Changes with Deprecations\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n* Moved functions related to inverters from ``pvsystem.py`` to ``inverter.py``.\n Functions are renamed to follow a more consistent pattern, as follows (:pull:`886`):\n - :py:func:`pvlib.pvsystem.snlinverter` is now :py:func:`pvlib.inverter.sandia`\n - :py:func:`pvlib.pvsystem.pvwatts_ac` is now :py:func:`pvlib.inverter.pvwatts`\n - :py:func:`pvlib.pvsystem.adrinverter` is now :py:func:`pvlib.inverter.adr`\n* Argument ``ac_model`` for :py:class:`pvlib.modelchain.ModelChain` now accepts\n ``'sandia'``, ``'pvwatts'`` and ``'adr'`` for the inverter models. (:pull:`886`)\n\nAPI Changes\n~~~~~~~~~~~\n* Removed ``run_parallel_calculations`` and ``n_workers_for_parallel_calcs``\n from :py:func:`pvlib.bifacial.pvfactors_timeseries` inputs (:issue:`902`) (:pull:`934`)\n* :py:func:`pvlib.iotools.read_tmy3` can now only read local data files because\n the NREL RREDC server hosting the TMY3 dataset has been retired. 
For\n fetching TMY data from NREL servers, :py:func:`pvlib.iotools.get_psm3` is\n now recommended to retrieve newer PSM3 data over the older TMY3 data.\n (:issue:`996`) (:pull:`1004`)\n* The tkinter-based file selection dialog has been removed from\n :py:func:`pvlib.iotools.read_tmy2` and :py:func:`pvlib.iotools.read_tmy3`;\n the filepath is now a required parameter. (:pull:`1004`)\n* Removed ``systemdef`` function from ``pvsystem.py``. This function was not\n used in pvlib and its output was not directly compatible with any pvlib\n function. See :py:func:`pvlib.iotools.read_tmy2`,\n :py:func:`pvlib.iotools.read_tmy3`, :py:meth:`pvlib.location.Location.from_tmy`, and\n :py:class:`pvlib.pvsystem.LocalizedPVSystem` for alternatives. (:issue:`965`)\n (:pull:`1008`)\n\nEnhancements\n~~~~~~~~~~~~\n* Update :func:`~pvlib.bifacial.pvfactors_timeseries` to run with ``pvfactors`` v1.4.1 (:issue:`902`)(:pull:`934`)\n* Add :py:func:`pvlib.iam.marion_diffuse` and\n :py:func:`pvlib.iam.marion_integrate` to calculate IAM values for\n diffuse irradiance. (:pull:`984`)\n* Add :py:func:`pvlib.inverter.fit_sandia` that fits the Sandia inverter model\n to a set of inverter efficiency curves. (:pull:`1011`)\n\nBug fixes\n~~~~~~~~~\n* Fixed unit and default value errors in :py:func:`pvlib.soiling.hsu`. (:pull:`XXX`)\n\nTesting\n~~~~~~~\n* Decorator :py:func:`pvlib.conftest.fail_on_pvlib_version` can now be\n applied to functions that require args or kwargs. (:pull:`973`)\n* Test added for :py:class:`pvlib.modelchain.ModelChain` to confirm ValueError when\n ``ac_model`` is an invalid string. (:pull:`886`)\n* Add minimum requirements configuration to Azure Pipelines build.\n (:pull:`1006`)\n* Update the `data/test_psm3_tmy-2017.csv` datafile to match the updated\n NSRDB data. (:issue:`1005`, :pull:`1007`)\n* Add wrappers around the pandas assert_X_equal functions to accommodate the\n changed API and default precision thresholds in pandas 1.1.0\n (:issue:`1018`, :pull:`1021`)\n\nDocumentation\n~~~~~~~~~~~~~\n* Improved formatting and content of docstrings in :py:mod:`pvlib.atmosphere`.\n (:pull:`969`)\n* Fix LaTeX rendering in :py:func:`pvlib.singlediode.bishop88`. (:pull:`967`)\n* Clarify units for heat loss factors in\n :py:func:`pvlib.temperature.pvsyst_cell` and\n :py:func:`pvlib.temperature.faiman`. (:pull:`960`)\n* Add make.bat so that docs can be built on Windows without ``make`` installed.\n (:issue:`978`, :pull:`981`)\n* Add instructions to build the documentation. (:pull:`982`)\n* Corrected key names for :py:func:`pvlib.inverter.sandia`. (:issue:`976`,\n :pull:`886`)\n* Add a transposition gain example to the gallery. (:pull:`979`)\n* Add a gallery example of calculating diffuse IAM using\n :py:func:`pvlib.iam.marion_diffuse`. (:pull:`984`)\n* Add minigalleries to API reference pages. (:pull:`991`)\n\nRequirements\n~~~~~~~~~~~~\n* Minimum pandas version increased to v0.22.0, released Dec 31, 2017. (:pull:`1003`)\n\nContributors\n~~~~~~~~~~~~\n* Cliff Hansen (:ghuser:`cwhanse`)\n* Kevin Anderson (:ghuser:`kanderso-nrel`)\n* Mark Mikofski (:ghuser:`mikofski`)\n* Joshua S. Stein (:ghuser:`jsstein`)\n* Marc A. Anoma (:ghuser:`anomam`)\n* Will Holmgren (:ghuser:`wholmgren`)\n", "pvlib/shading.py": null}
|
diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index dd7c618646..a7087edad0 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -354,6 +354,12 @@ Effects on PV System Output
soiling.hsu
soiling.kimber
+.. autosummary::
+ :toctree: generated/
+
+ shading.masking_angle
+ shading.masking_angle_passias
+ shading.sky_diffuse_passias
Tracking
diff --git a/docs/sphinx/source/whatsnew/v0.8.0.rst b/docs/sphinx/source/whatsnew/v0.8.0.rst
index 2aa1ebfb67..1fd41f232f 100644
--- a/docs/sphinx/source/whatsnew/v0.8.0.rst
+++ b/docs/sphinx/source/whatsnew/v0.8.0.rst
@@ -38,6 +38,10 @@ Enhancements
* Add :py:func:`pvlib.iam.marion_diffuse` and
:py:func:`pvlib.iam.marion_integrate` to calculate IAM values for
diffuse irradiance. (:pull:`984`)
+* Add :py:func:`pvlib.shading.sky_diffuse_passias`,
+ :py:func:`pvlib.shading.masking_angle_passias`, and
+ :py:func:`pvlib.shading.masking_angle` to model diffuse shading loss.
+ (:pull:`1017`)
* Add :py:func:`pvlib.inverter.fit_sandia` that fits the Sandia inverter model
to a set of inverter efficiency curves. (:pull:`1011`)
@@ -75,6 +79,7 @@ Documentation
* Add a transposition gain example to the gallery. (:pull:`979`)
* Add a gallery example of calculating diffuse IAM using
:py:func:`pvlib.iam.marion_diffuse`. (:pull:`984`)
+* Add a gallery example of modeling diffuse shading loss. (:pull:`1017`)
* Add minigalleries to API reference pages. (:pull:`991`)
Requirements
|
{"pvlib/shading.py": [{"type": "function", "name": "masking_angle", "lines": [11, 58], "signature": "def masking_angle(surface_tilt, gcr, slant_height):", "doc": "The elevation angle below which diffuse irradiance is blocked.\n\nThe ``height`` parameter determines how far up the module's surface to\nevaluate the masking angle. The lower the point, the steeper the masking\nangle [1]_. SAM uses a \"worst-case\" approach where the masking angle\nis calculated for the bottom of the array (i.e. ``slant_height=0``) [2]_.\n\nParameters\n----------\nsurface_tilt : numeric\n Panel tilt from horizontal [degrees].\n\ngcr : float\n The ground coverage ratio of the array [unitless].\n\nslant_height : numeric\n The distance up the module's slant height to evaluate the masking\n angle, as a fraction [0-1] of the module slant height [unitless].\n\nReturns\n-------\nmask_angle : numeric\n Angle from horizontal where diffuse light is blocked by the\n preceding row [degrees].\n\nSee Also\n--------\nmasking_angle_passias\nsky_diffuse_passias\n\nReferences\n----------\n.. [1] D. Passias and B. Källbäck, \"Shading effects in rows of solar cell\n panels\", Solar Cells, Volume 11, Pages 281-291. 1984.\n DOI: 10.1016/0379-6787(84)90017-6\n.. [2] Gilman, P. et al., (2018). \"SAM Photovoltaic Model Technical\n Reference Update\", NREL Technical Report NREL/TP-6A20-67399.\n Available at https://www.nrel.gov/docs/fy18osti/67399.pdf"}, {"type": "function", "name": "masking_angle_passias", "lines": [61, 150], "signature": "def masking_angle_passias(surface_tilt, gcr):", "doc": "The average masking angle over the slant height of a row.\n\nThe masking angle is the angle from horizontal where the sky dome is\nblocked by the row in front. The masking angle is larger near the lower\nedge of a row than near the upper edge. This function calculates the\naverage masking angle as described in [1]_.\n\nParameters\n----------\nsurface_tilt : numeric\n Panel tilt from horizontal [degrees].\n\ngcr : float\n The ground coverage ratio of the array [unitless].\n\nReturns\n----------\nmask_angle : numeric\n Average angle from horizontal where diffuse light is blocked by the\n preceding row [degrees].\n\nSee Also\n--------\nmasking_angle\nsky_diffuse_passias\n\nNotes\n-----\nThe pvlib-python authors believe that Eqn. 9 in [1]_ is incorrect.\nHere we use an independent equation. First, Eqn. 8 is non-dimensionalized\n(recasting in terms of GCR):\n\n.. math::\n\n \\psi(z') = \\arctan \\left [\n \\frac{(1 - z') \\sin \\beta}\n {\\mathrm{GCR}^{-1} + (z' - 1) \\cos \\beta}\n \\right ]\n\nWhere :math:`GCR = B/C` and :math:`z' = z/B`. The average masking angle\n:math:`\\overline{\\psi} = \\int_0^1 \\psi(z') \\mathrm{d}z'` is then\nevaluated symbolically using Maxima (using :math:`X = 1/\\mathrm{GCR}`):\n\n.. code-block:: none\n\n load(scifac) /* for the gcfac function */\n assume(X>0, cos(beta)>0, cos(beta)-X<0); /* X is 1/GCR */\n gcfac(integrate(atan((1-z)*sin(beta)/(X+(z-1)*cos(beta))), z, 0, 1))\n\nThis yields the equation implemented by this function:\n\n.. math::\n\n \\overline{\\psi} = \\\n &-\\frac{X}{2} \\sin\\beta \\log | 2 X \\cos\\beta - (X^2 + 1)| \\\\\n &+ (X \\cos\\beta - 1) \\arctan \\frac{X \\cos\\beta - 1}{X \\sin\\beta} \\\\\n &+ (1 - X \\cos\\beta) \\arctan \\frac{\\cos\\beta}{\\sin\\beta} \\\\\n &+ X \\log X \\sin\\beta\n\nThe pvlib-python authors have validated this equation against numerical\nintegration of :math:`\\overline{\\psi} = \\int_0^1 \\psi(z') \\mathrm{d}z'`.\n\nReferences\n----------\n.. [1] D. Passias and B. 
Källbäck, \"Shading effects in rows of solar cell\n panels\", Solar Cells, Volume 11, Pages 281-291. 1984.\n DOI: 10.1016/0379-6787(84)90017-6"}, {"type": "function", "name": "sky_diffuse_passias", "lines": [153, 193], "signature": "def sky_diffuse_passias(masking_angle):", "doc": "The diffuse irradiance loss caused by row-to-row sky diffuse shading.\n\nEven when the sun is high in the sky, a row's view of the sky dome will\nbe partially blocked by the row in front. This causes a reduction in the\ndiffuse irradiance incident on the module. The reduction depends on the\nmasking angle, the elevation angle from a point on the shaded module to\nthe top of the shading row. In [1]_ the masking angle is calculated as\nthe average across the module height. SAM assumes the \"worst-case\" loss\nwhere the masking angle is calculated for the bottom of the array [2]_.\n\nThis function, as in [1]_, makes the assumption that sky diffuse\nirradiance is isotropic.\n\nParameters\n----------\nmasking_angle : numeric\n The elevation angle below which diffuse irradiance is blocked\n [degrees].\n\nReturns\n-------\nderate : numeric\n The fraction [0-1] of blocked sky diffuse irradiance.\n\nSee Also\n--------\nmasking_angle\nmasking_angle_passias\n\nReferences\n----------\n.. [1] D. Passias and B. Källbäck, \"Shading effects in rows of solar cell\n panels\", Solar Cells, Volume 11, Pages 281-291. 1984.\n DOI: 10.1016/0379-6787(84)90017-6\n.. [2] Gilman, P. et al., (2018). \"SAM Photovoltaic Model Technical\n Reference Update\", NREL Technical Report NREL/TP-6A20-67399.\n Available at https://www.nrel.gov/docs/fy18osti/67399.pdf"}]}
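To make the averaged masking-angle expression quoted in the docstring above more concrete, here is a small numerical cross-check, assuming only NumPy: it evaluates the non-dimensionalized psi(z') from the quoted Eqn. 8 on a grid and averages it over the slant height. It is a verification sketch, not the pvlib implementation.

```python
import numpy as np

def average_masking_angle_numeric(surface_tilt_deg, gcr, n=100_001):
    """Numerically average psi(z') = arctan[(1 - z') sin(beta) /
    (1/GCR + (z' - 1) cos(beta))] over z' in [0, 1]; returns degrees."""
    beta = np.radians(surface_tilt_deg)
    z = np.linspace(0.0, 1.0, n)
    psi = np.arctan2((1 - z) * np.sin(beta),
                     1.0 / gcr + (z - 1) * np.cos(beta))
    return np.degrees(np.trapz(psi, z))

# Example: 20-degree tilt, GCR = 0.5; should agree closely with the
# closed-form masking_angle_passias result.
print(average_masking_angle_numeric(20.0, 0.5))
```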
|
0.7
|
["pvlib/tests/test_shading.py::test_masking_angle_series", "pvlib/tests/test_shading.py::test_masking_angle_scalar", "pvlib/tests/test_shading.py::test_masking_angle_passias_series", "pvlib/tests/test_shading.py::test_masking_angle_passias_scalar", "pvlib/tests/test_shading.py::test_sky_diffuse_passias_series", "pvlib/tests/test_shading.py::test_sky_diffuse_passias_scalar"]
|
[]
|
aa1635bcb40dc83f82e9fd72158670c235bfe99b
|
{"first_commit_time": 1596510520.0, "pr_title": "Add diffuse self-shading functions", "pr_body": " - ~Closes #xxxx~\r\n - [x] I am familiar with the [contributing guidelines](https://pvlib-python.readthedocs.io/en/latest/contributing.html)\r\n - [x] Tests added\r\n - [x] Updates entries to [`docs/sphinx/source/api.rst`](https://github.com/pvlib/pvlib-python/blob/master/docs/sphinx/source/api.rst) for API changes.\r\n - [x] Adds description and name entries in the appropriate \"what's new\" file in [`docs/sphinx/source/whatsnew`](https://github.com/pvlib/pvlib-python/tree/master/docs/sphinx/source/whatsnew) for all changes. Includes link to the GitHub Issue with `` :issue:`num` `` or this Pull Request with `` :pull:`num` ``. Includes contributor name and/or GitHub username (link with `` :ghuser:`user` ``).\r\n - [x] New code is fully documented. Includes [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) compliant docstrings, examples, and comments where necessary.\r\n - [x] Pull request is nearly complete and ready for detailed review.\r\n - [x] Maintainer: Appropriate GitHub Labels and Milestone are assigned to the Pull Request and linked Issue.\r\n\r\nThis implements a modified version of the diffuse self-shading model in Passias and Kallbach (1984) \"SHADING EFFECTS IN ROWS OF SOLAR CELL PANELS\", which I believe has an error in the equation for the average masking angle (Eq 9). Here is a gist comparing the paper's equation with the one I derived, along with a numerical integration for comparison: https://gist.github.com/kanderso-nrel/2c6c3a1853338cdef5b4bbc67092ccc8. Note that I implemented two versions of the paper's equation to account for an ambiguous minus sign.\r\n\r\nIf anyone has a Mathematica license and can produce a cleaner version of that equation, I'd be happy to replace it -- that quick and dirty Maxima script in the docstring is about as much as I can do with symbolic solvers.", "pr_timeline": [{"time": 1596547107.0, "comment": "Test failures appear to be unrelated, a change in pandas: #1018 "}, {"time": 1597932148.0, "comment": "@wholmgren @mikofski any objections to merge?"}, {"time": 1597942973.0, "comment": "LGTM too. I made one comment on a trig identity that might be useful, but probably not essential. This is a great addition. Thanks Kevin!"}], "issues": {}}
|
pvlib/pvlib-python
| 1,045
|
https://github.com/pvlib/pvlib-python/pull/1045
|
pvlib__pvlib-python-1045
|
[]
|
f8b9c04c13228ae74fa3be1cfb7e03ed4cf4eaa5
|
diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index 077a5e121d..2e882e9a46 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -236,7 +236,10 @@ PV temperature models
temperature.pvsyst_cell
temperature.faiman
temperature.fuentes
+ temperature.ross
pvsystem.PVSystem.sapm_celltemp
+ pvsystem.PVSystem.pvsyst_celltemp
+ pvsystem.PVSystem.faiman_celltemp
Temperature Model Parameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/docs/sphinx/source/whatsnew/v0.8.1.rst b/docs/sphinx/source/whatsnew/v0.8.1.rst
index 61eb026205..9b6e2a3800 100644
--- a/docs/sphinx/source/whatsnew/v0.8.1.rst
+++ b/docs/sphinx/source/whatsnew/v0.8.1.rst
@@ -13,6 +13,8 @@ Deprecations
Enhancements
~~~~~~~~~~~~
+* Added :py:func:`pvlib.temperature.ross` for cell temperature modeling using
+ only NOCT. (:pull:`1045`)
Bug fixes
@@ -36,3 +38,5 @@ Requirements
Contributors
~~~~~~~~~~~~
* Kevin Anderson (:ghuser:`kanderso-nrel`)
+* Will Holmgren (:ghuser:`wholmgren`)
+* Cliff Hansen (:ghuser:`cwhanse`)
diff --git a/pvlib/temperature.py b/pvlib/temperature.py
index 1dd32fecf8..1f27180a9a 100644
--- a/pvlib/temperature.py
+++ b/pvlib/temperature.py
@@ -377,9 +377,10 @@ def pvsyst_cell(poa_global, temp_air, wind_speed=1.0, u_c=29.0, u_v=0.0,
def faiman(poa_global, temp_air, wind_speed=1.0, u0=25.0, u1=6.84):
r'''
- Calculate cell or module temperature using the Faiman model. The Faiman
- model uses an empirical heat loss factor model [1]_ and is adopted in the
- IEC 61853 standards [2]_ and [3]_.
+ Calculate cell or module temperature using the Faiman model.
+
+ The Faiman model uses an empirical heat loss factor model [1]_ and is
+ adopted in the IEC 61853 standards [2]_ and [3]_.
Usage of this model in the IEC 61853 standard does not distinguish
between cell and module temperature.
@@ -443,6 +444,53 @@ def faiman(poa_global, temp_air, wind_speed=1.0, u0=25.0, u1=6.84):
return temp_air + temp_difference
+def ross(poa_global, temp_air, noct):
+ r'''
+ Calculate cell temperature using the Ross model.
+
+ The Ross model [1]_ assumes the difference between cell temperature
+ and ambient temperature is proportional to the plane of array irradiance,
+ and assumes wind speed of 1 m/s. The model implicitly assumes steady or
+ slowly changing irradiance conditions.
+
+ Parameters
+ ----------
+ poa_global : numeric
+ Total incident irradiance. [W/m^2]
+
+ temp_air : numeric
+ Ambient dry bulb temperature. [C]
+
+ noct : numeric
+ Nominal operating cell temperature [C], determined at conditions of
+ 800 W/m^2 irradiance, 20 C ambient air temperature and 1 m/s wind.
+
+ Returns
+ -------
+ cell_temperature : numeric
+ Cell temperature. [C]
+
+ Notes
+ -----
+ The Ross model for cell temperature :math:`T_{C}` is given in [1]_ as
+
+ .. math::
+
+ T_{C} = T_{a} + \frac{NOCT - 20}{80} S
+
+ where :math:`S` is the plane of array irradiance in :math:`mW/{cm}^2`.
+ This function expects irradiance in :math:`W/m^2`.
+
+ References
+ ----------
+ .. [1] Ross, R. G. Jr., (1981). "Design Techniques for Flat-Plate
+ Photovoltaic Arrays". 15th IEEE Photovoltaic Specialist Conference,
+ Orlando, FL.
+ '''
+ # factor of 0.1 converts irradiance from W/m2 to mW/cm2
+ return temp_air + (noct - 20.) / 80. * poa_global * 0.1
+
+
def _fuentes_hconv(tave, windmod, tinoct, temp_delta, xlen, tilt,
check_reynold):
# Calculate the convective coefficient as in Fuentes 1987 -- a mixture of
|
diff --git a/pvlib/tests/test_temperature.py b/pvlib/tests/test_temperature.py
index 411adcfcca..f8ea3a8bc1 100644
--- a/pvlib/tests/test_temperature.py
+++ b/pvlib/tests/test_temperature.py
@@ -124,6 +124,14 @@ def test_faiman_ndarray():
assert_allclose(expected, result, 3)
+def test_ross():
+ result = temperature.ross(np.array([1000., 600., 1000.]),
+ np.array([20., 40., 60.]),
+ np.array([40., 100., 20.]))
+ expected = np.array([45., 100., 60.])
+ assert_allclose(expected, result)
+
+
def test_faiman_series():
times = pd.date_range(start="2015-01-01", end="2015-01-02", freq="12H")
temps = pd.Series([0, 10, 5], index=times)
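For readers checking the expected values in `test_ross` above by hand, the sketch below mirrors the one-line model added in this patch (the 0.1 factor converts irradiance from W/m^2 to the mW/cm^2 expected by the Ross equation); it is a standalone illustration, not an independent model.

```python
import numpy as np

def ross_cell_temperature(poa_global, temp_air, noct):
    """Ross model: T_c = T_a + (NOCT - 20)/80 * S, with S in mW/cm^2."""
    return temp_air + (noct - 20.0) / 80.0 * poa_global * 0.1

# Reproduces the expected array asserted in test_ross: [ 45. 100.  60.]
print(ross_cell_temperature(np.array([1000., 600., 1000.]),
                            np.array([20., 40., 60.]),
                            np.array([40., 100., 20.])))
```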
| 2020-09-04T16:52:57
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"docs/sphinx/source/api.rst": ".. currentmodule:: pvlib\n\n#############\nAPI reference\n#############\n\n\nClasses\n=======\n\npvlib-python provides a collection of classes for users that prefer\nobject-oriented programming. These classes can help users keep track of\ndata in a more organized way, and can help to simplify the modeling\nprocess. The classes do not add any functionality beyond the procedural\ncode. Most of the object methods are simple wrappers around the\ncorresponding procedural code.\n\n.. autosummary::\n :toctree: generated/\n\n location.Location\n pvsystem.PVSystem\n tracking.SingleAxisTracker\n modelchain.ModelChain\n\n\nSolar Position\n==============\n\nFunctions and methods for calculating solar position.\n\nThe :py:meth:`location.Location.get_solarposition` method and the\n:py:func:`solarposition.get_solarposition` function with default\nparameters are fast and accurate. We recommend using these functions\nunless you know that you need a different function.\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.get_solarposition\n solarposition.get_solarposition\n solarposition.spa_python\n solarposition.ephemeris\n solarposition.pyephem\n solarposition.spa_c\n\n\nAdditional functions for quantities closely related to solar position.\n\n.. autosummary::\n :toctree: generated/\n\n solarposition.calc_time\n solarposition.pyephem_earthsun_distance\n solarposition.nrel_earthsun_distance\n spa.calculate_deltat\n\n\nFunctions for calculating sunrise, sunset and transit times.\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.get_sun_rise_set_transit\n solarposition.sun_rise_set_transit_ephem\n solarposition.sun_rise_set_transit_spa\n solarposition.sun_rise_set_transit_geometric\n\n\nThe spa module contains the implementation of the built-in NREL SPA\nalgorithm.\n\n.. autosummary::\n :toctree: generated/\n\n spa\n\nCorrelations and analytical expressions for low precision solar position\ncalculations.\n\n.. autosummary::\n :toctree: generated/\n\n solarposition.solar_zenith_analytical\n solarposition.solar_azimuth_analytical\n solarposition.declination_spencer71\n solarposition.declination_cooper69\n solarposition.equation_of_time_spencer71\n solarposition.equation_of_time_pvcdrom\n solarposition.hour_angle\n solarposition.sun_rise_set_transit_geometric\n\n\nClear sky\n=========\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.get_clearsky\n clearsky.ineichen\n clearsky.lookup_linke_turbidity\n clearsky.simplified_solis\n clearsky.haurwitz\n clearsky.detect_clearsky\n clearsky.bird\n\n\nAirmass and atmospheric models\n==============================\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.get_airmass\n atmosphere.get_absolute_airmass\n atmosphere.get_relative_airmass\n atmosphere.pres2alt\n atmosphere.alt2pres\n atmosphere.gueymard94_pw\n atmosphere.first_solar_spectral_correction\n atmosphere.bird_hulstrom80_aod_bb\n atmosphere.kasten96_lt\n atmosphere.angstrom_aod_at_lambda\n atmosphere.angstrom_alpha\n\n\nIrradiance\n==========\n\nMethods for irradiance calculations\n-----------------------------------\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.PVSystem.get_irradiance\n pvsystem.PVSystem.get_aoi\n pvsystem.PVSystem.get_iam\n tracking.SingleAxisTracker.get_irradiance\n\nDecomposing and combining irradiance\n------------------------------------\n\n.. 
autosummary::\n :toctree: generated/\n\n irradiance.get_extra_radiation\n irradiance.aoi\n irradiance.aoi_projection\n irradiance.poa_horizontal_ratio\n irradiance.beam_component\n irradiance.poa_components\n irradiance.get_ground_diffuse\n irradiance.dni\n\nTransposition models\n--------------------\n\n.. autosummary::\n :toctree: generated/\n\n irradiance.get_total_irradiance\n irradiance.get_sky_diffuse\n irradiance.isotropic\n irradiance.perez\n irradiance.haydavies\n irradiance.klucher\n irradiance.reindl\n irradiance.king\n\n.. _dniestmodels:\n\nDNI estimation models\n---------------------\n\n.. autosummary::\n :toctree: generated/\n\n irradiance.disc\n irradiance.dirint\n irradiance.dirindex\n irradiance.erbs\n irradiance.liujordan\n irradiance.gti_dirint\n\nClearness index models\n----------------------\n\n.. autosummary::\n :toctree: generated/\n\n irradiance.clearness_index\n irradiance.clearness_index_zenith_independent\n irradiance.clearsky_index\n\n\nPV Modeling\n===========\n\nClasses\n-------\n\nThe :py:class:`~pvsystem.PVSystem` class provides many methods that\nwrap the functions listed below. See its documentation for details.\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.PVSystem\n\nIncident angle modifiers\n------------------------\n\n.. autosummary::\n :toctree: generated/\n\n iam.physical\n iam.ashrae\n iam.martin_ruiz\n iam.martin_ruiz_diffuse\n iam.sapm\n iam.interp\n iam.marion_diffuse\n iam.marion_integrate\n\nPV temperature models\n---------------------\n\n.. autosummary::\n :toctree: generated/\n\n temperature.sapm_cell\n temperature.sapm_module\n temperature.sapm_cell_from_module\n temperature.pvsyst_cell\n temperature.faiman\n temperature.fuentes\n pvsystem.PVSystem.sapm_celltemp\n\nTemperature Model Parameters\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n.. currentmodule:: pvlib.temperature\n.. autodata:: TEMPERATURE_MODEL_PARAMETERS\n :annotation:\n\n.. currentmodule:: pvlib\n\nSingle diode models\n-------------------\n\nFunctions relevant for single diode models.\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.calcparams_cec\n pvsystem.calcparams_desoto\n pvsystem.calcparams_pvsyst\n pvsystem.i_from_v\n pvsystem.singlediode\n pvsystem.v_from_i\n pvsystem.max_power_point\n\nLow-level functions for solving the single diode equation.\n\n.. autosummary::\n :toctree: generated/\n\n singlediode.estimate_voc\n singlediode.bishop88\n singlediode.bishop88_i_from_v\n singlediode.bishop88_v_from_i\n singlediode.bishop88_mpp\n\nFunctions for fitting diode models\n\n.. autosummary::\n :toctree: generated/\n\n ivtools.sde.fit_sandia_simple\n ivtools.sdm.fit_cec_sam\n ivtools.sdm.fit_desoto\n\nInverter models (DC to AC conversion)\n-------------------------------------\n\n.. autosummary::\n :toctree: generated/\n\n inverter.sandia\n inverter.adr\n inverter.pvwatts\n\nFunctions for fitting inverter models\n\n.. autosummary::\n :toctree: generated/\n\n inverter.fit_sandia\n\n\nPV System Models\n----------------\n\nSandia array performance model (SAPM)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.sapm\n pvsystem.sapm_effective_irradiance\n pvsystem.sapm_spectral_loss\n inverter.sandia\n temperature.sapm_cell\n\nPvsyst model\n^^^^^^^^^^^^\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.calcparams_pvsyst\n temperature.pvsyst_cell\n pvsystem.calcparams_pvsyst\n pvsystem.singlediode\n\nPVWatts model\n^^^^^^^^^^^^^\n\n.. 
autosummary::\n :toctree: generated/\n\n pvsystem.pvwatts_dc\n inverter.pvwatts\n pvsystem.pvwatts_losses\n\nEstimating PV model parameters\n------------------------------\n\nFunctions for fitting single diode models\n\n.. autosummary::\n :toctree: generated/\n\n ivtools.sdm.fit_cec_sam\n ivtools.sdm.fit_desoto\n ivtools.sdm.fit_pvsyst_sandia\n ivtools.sdm.fit_desoto_sandia\n\nFunctions for fitting the single diode equation\n\n.. autosummary::\n :toctree: generated/\n\n ivtools.sde.fit_sandia_simple\n\nUtilities for working with IV curve data\n\n.. autosummary::\n :toctree: generated/\n\n ivtools.utils.rectify_iv_curve\n\nOther\n-----\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.retrieve_sam\n pvsystem.scale_voltage_current_power\n\n\nEffects on PV System Output\n===========================\n\nLoss models\n-----------\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.combine_loss_factors\n\nSnow\n----\n\n.. autosummary::\n :toctree: generated/\n\n snow.coverage_nrel\n snow.fully_covered_nrel\n snow.dc_loss_nrel\n\nSoiling\n-------\n\n.. autosummary::\n :toctree: generated/\n\n soiling.hsu\n soiling.kimber\n\nShading\n-------\n\n.. autosummary::\n :toctree: generated/\n\n shading.masking_angle\n shading.masking_angle_passias\n shading.sky_diffuse_passias\n\n\nTracking\n========\n\nSingleAxisTracker\n-----------------\n\nThe :py:class:`~tracking.SingleAxisTracker` inherits from\n:py:class:`~pvsystem.PVSystem`.\n\n.. autosummary::\n :toctree: generated/\n\n tracking.SingleAxisTracker\n tracking.SingleAxisTracker.singleaxis\n tracking.SingleAxisTracker.get_irradiance\n\nFunctions\n---------\n\n.. autosummary::\n :toctree: generated/\n\n tracking.singleaxis\n tracking.calc_axis_tilt\n tracking.calc_cross_axis_tilt\n\n\n.. _iotools:\n\nIO Tools\n========\n\nFunctions for reading and writing data from a variety of file formats\nrelevant to solar energy modeling.\n\n.. autosummary::\n :toctree: generated/\n\n iotools.read_tmy2\n iotools.read_tmy3\n iotools.read_epw\n iotools.parse_epw\n iotools.read_srml\n iotools.read_srml_month_from_solardat\n iotools.read_surfrad\n iotools.read_midc\n iotools.read_midc_raw_data_from_nrel\n iotools.read_ecmwf_macc\n iotools.get_ecmwf_macc\n iotools.read_crn\n iotools.read_solrad\n iotools.get_psm3\n iotools.read_psm3\n iotools.parse_psm3\n iotools.get_pvgis_tmy\n iotools.read_pvgis_tmy\n\nA :py:class:`~pvlib.location.Location` object may be created from metadata\nin some files.\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.from_tmy\n location.Location.from_epw\n\n\nForecasting\n===========\n\nForecast models\n---------------\n\n.. autosummary::\n :toctree: generated/\n\n forecast.GFS\n forecast.NAM\n forecast.RAP\n forecast.HRRR\n forecast.HRRR_ESRL\n forecast.NDFD\n\nGetting data\n------------\n\n.. autosummary::\n :toctree: generated/\n\n forecast.ForecastModel.get_data\n forecast.ForecastModel.get_processed_data\n\nProcessing data\n---------------\n\n.. 
autosummary::\n :toctree: generated/\n\n forecast.ForecastModel.process_data\n forecast.ForecastModel.rename\n forecast.ForecastModel.cloud_cover_to_ghi_linear\n forecast.ForecastModel.cloud_cover_to_irradiance_clearsky_scaling\n forecast.ForecastModel.cloud_cover_to_transmittance_linear\n forecast.ForecastModel.cloud_cover_to_irradiance_liujordan\n forecast.ForecastModel.cloud_cover_to_irradiance\n forecast.ForecastModel.kelvin_to_celsius\n forecast.ForecastModel.isobaric_to_ambient_temperature\n forecast.ForecastModel.uv_to_speed\n forecast.ForecastModel.gust_to_speed\n\nIO support\n----------\n\nThese are public for now, but use at your own risk.\n\n.. autosummary::\n :toctree: generated/\n\n forecast.ForecastModel.set_dataset\n forecast.ForecastModel.set_query_latlon\n forecast.ForecastModel.set_location\n forecast.ForecastModel.set_time\n\n\nModelChain\n==========\n\nCreating a ModelChain object.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain\n modelchain.ModelChain.with_pvwatts\n modelchain.ModelChain.with_sapm\n\nRunning\n-------\n\nA ModelChain can be run from a number of starting points, depending on the\ninput data available.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.run_model\n modelchain.ModelChain.run_model_from_poa\n modelchain.ModelChain.run_model_from_effective_irradiance\n\nFunctions to assist with setting up ModelChains to run\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.complete_irradiance\n modelchain.ModelChain.prepare_inputs\n modelchain.ModelChain.prepare_inputs_from_poa\n\n\nAttributes\n----------\n\nSimple ModelChain attributes:\n\n``system, location, clearsky_model, transposition_model,\nsolar_position_method, airmass_model``\n\nProperties\n----------\n\nModelChain properties that are aliases for your specific modeling functions.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.orientation_strategy\n modelchain.ModelChain.dc_model\n modelchain.ModelChain.ac_model\n modelchain.ModelChain.aoi_model\n modelchain.ModelChain.spectral_model\n modelchain.ModelChain.temperature_model\n modelchain.ModelChain.losses_model\n modelchain.ModelChain.effective_irradiance_model\n\nModel definitions\n-----------------\n\nModelChain model definitions.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.sapm\n modelchain.ModelChain.cec\n modelchain.ModelChain.desoto\n modelchain.ModelChain.pvsyst\n modelchain.ModelChain.pvwatts_dc\n modelchain.ModelChain.snlinverter\n modelchain.ModelChain.adrinverter\n modelchain.ModelChain.pvwatts_inverter\n modelchain.ModelChain.ashrae_aoi_loss\n modelchain.ModelChain.physical_aoi_loss\n modelchain.ModelChain.sapm_aoi_loss\n modelchain.ModelChain.no_aoi_loss\n modelchain.ModelChain.first_solar_spectral_loss\n modelchain.ModelChain.sapm_spectral_loss\n modelchain.ModelChain.no_spectral_loss\n modelchain.ModelChain.sapm_temp\n modelchain.ModelChain.pvsyst_temp\n modelchain.ModelChain.faiman_temp\n modelchain.ModelChain.pvwatts_losses\n modelchain.ModelChain.no_extra_losses\n\nInference methods\n-----------------\n\nMethods that automatically determine which models should be used based\non the information in the associated :py:class:`~pvsystem.PVSystem` object.\n\n.. 
autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.infer_dc_model\n modelchain.ModelChain.infer_ac_model\n modelchain.ModelChain.infer_aoi_model\n modelchain.ModelChain.infer_spectral_model\n modelchain.ModelChain.infer_temperature_model\n modelchain.ModelChain.infer_losses_model\n\nFunctions\n---------\n\nFunctions for power modeling.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.basic_chain\n modelchain.get_orientation\n\n\nBifacial\n========\n\nMethods for calculating back surface irradiance\n\n.. autosummary::\n :toctree: generated/\n\n bifacial.pvfactors_timeseries\n\n\nScaling\n=======\n\nMethods for manipulating irradiance for temporal or spatial considerations\n\n.. autosummary::\n :toctree: generated/\n\n scaling.wvm\n", "docs/sphinx/source/whatsnew/v0.8.1.rst": ".. _whatsnew_0810:\n\nv0.8.1 (MONTH DAY YEAR)\n-----------------------\n\nBreaking changes\n~~~~~~~~~~~~~~~~\n\n\nDeprecations\n~~~~~~~~~~~~\n\n\nEnhancements\n~~~~~~~~~~~~\n\n\nBug fixes\n~~~~~~~~~\n* Fix issue with :py:func:`pvlib.temperature.fuentes` with timezone-aware\n inputs. (:issue:`1071`, :pull:`1072`)\n\nTesting\n~~~~~~~\n* Add airspeed velocity performance testing configuration and a few benchmarks.\n (:issue:`419`, :pull:`1049`, :pull:`1059`)\n\nDocumentation\n~~~~~~~~~~~~~\n* Add gallery example about backtracking on sloped terrain. (:pull:`1077`)\n\nRequirements\n~~~~~~~~~~~~\n\n\nContributors\n~~~~~~~~~~~~\n* Kevin Anderson (:ghuser:`kanderso-nrel`)\n", "pvlib/temperature.py": "\"\"\"\nThe ``temperature`` module contains functions for modeling temperature of\nPV modules and cells.\n\"\"\"\n\nimport numpy as np\nimport pandas as pd\nfrom pvlib.tools import sind\n\nTEMPERATURE_MODEL_PARAMETERS = {\n 'sapm': {\n 'open_rack_glass_glass': {'a': -3.47, 'b': -.0594, 'deltaT': 3},\n 'close_mount_glass_glass': {'a': -2.98, 'b': -.0471, 'deltaT': 1},\n 'open_rack_glass_polymer': {'a': -3.56, 'b': -.0750, 'deltaT': 3},\n 'insulated_back_glass_polymer': {'a': -2.81, 'b': -.0455, 'deltaT': 0},\n },\n 'pvsyst': {'freestanding': {'u_c': 29.0, 'u_v': 0},\n 'insulated': {'u_c': 15.0, 'u_v': 0}}\n}\n\"\"\"Dictionary of temperature parameters organized by model.\n\nThere are keys for each model at the top level. Currently there are two models,\n``'sapm'`` for the Sandia Array Performance Model, and ``'pvsyst'``. Each model\nhas a dictionary of configurations; a value is itself a dictionary containing\nmodel parameters. Retrieve parameters by indexing the model and configuration\nby name. 
Note: the keys are lower-cased and case sensitive.\n\nExample\n-------\nRetrieve the open rack glass-polymer configuration for SAPM::\n\n from pvlib.temperature import TEMPERATURE_MODEL_PARAMETERS\n temperature_model_parameters = (\n TEMPERATURE_MODEL_PARAMETERS['sapm']['open_rack_glass_polymer'])\n # {'a': -3.56, 'b': -0.075, 'deltaT': 3}\n\"\"\"\n\n\ndef _temperature_model_params(model, parameter_set):\n try:\n params = TEMPERATURE_MODEL_PARAMETERS[model]\n return params[parameter_set]\n except KeyError:\n msg = ('{} is not a named set of parameters for the {} cell'\n ' temperature model.'\n ' See pvlib.temperature.TEMPERATURE_MODEL_PARAMETERS'\n ' for names'.format(parameter_set, model))\n raise KeyError(msg)\n\n\ndef sapm_cell(poa_global, temp_air, wind_speed, a, b, deltaT,\n irrad_ref=1000):\n r'''\n Calculate cell temperature per the Sandia Array Performance Model.\n\n See [1]_ for details on the Sandia Array Performance Model.\n\n Parameters\n ----------\n poa_global : numeric\n Total incident irradiance [W/m^2].\n\n temp_air : numeric\n Ambient dry bulb temperature [C].\n\n wind_speed : numeric\n Wind speed at a height of 10 meters [m/s].\n\n a : float\n Parameter :math:`a` in :eq:`sapm1`.\n\n b : float\n Parameter :math:`b` in :eq:`sapm1`.\n\n deltaT : float\n Parameter :math:`\\Delta T` in :eq:`sapm2` [C].\n\n irrad_ref : float, default 1000\n Reference irradiance, parameter :math:`E_{0}` in\n :eq:`sapm2` [W/m^2].\n\n Returns\n -------\n numeric, values in degrees C.\n\n Notes\n -----\n The model for cell temperature :math:`T_{C}` is given by a pair of\n equations (Eq. 11 and 12 in [1]_).\n\n .. math::\n :label: sapm1\n\n T_{m} = E \\times \\exp (a + b \\times WS) + T_{a}\n\n .. math::\n :label: sapm2\n\n T_{C} = T_{m} + \\frac{E}{E_{0}} \\Delta T\n\n The module back surface temperature :math:`T_{m}` is implemented in\n :py:func:`~pvlib.temperature.sapm_module`.\n\n Inputs to the model are plane-of-array irradiance :math:`E` (W/m2) and\n ambient air temperature :math:`T_{a}` (C). Model parameters depend both on\n the module construction and its mounting. Parameter sets are provided in\n [1]_ for representative modules and mounting, and are coded for convenience\n in :data:`~pvlib.temperature.TEMPERATURE_MODEL_PARAMETERS`.\n\n +---------------+----------------+-------+---------+---------------------+\n | Module | Mounting | a | b | :math:`\\Delta T [C]`|\n +===============+================+=======+=========+=====================+\n | glass/glass | open rack | -3.47 | -0.0594 | 3 |\n +---------------+----------------+-------+---------+---------------------+\n | glass/glass | close roof | -2.98 | -0.0471 | 1 |\n +---------------+----------------+-------+---------+---------------------+\n | glass/polymer | open rack | -3.56 | -0.075 | 3 |\n +---------------+----------------+-------+---------+---------------------+\n | glass/polymer | insulated back | -2.81 | -0.0455 | 0 |\n +---------------+----------------+-------+---------+---------------------+\n\n References\n ----------\n .. [1] King, D. 
et al, 2004, \"Sandia Photovoltaic Array Performance\n Model\", SAND Report 3535, Sandia National Laboratories, Albuquerque,\n NM.\n\n See also\n --------\n sapm_cell_from_module\n sapm_module\n\n Examples\n --------\n >>> from pvlib.temperature import sapm_cell, TEMPERATURE_MODEL_PARAMETERS\n >>> params = TEMPERATURE_MODEL_PARAMETERS['sapm']['open_rack_glass_glass']\n >>> sapm_cell(1000, 10, 0, **params)\n 44.11703066106086\n '''\n module_temperature = sapm_module(poa_global, temp_air, wind_speed,\n a, b)\n return sapm_cell_from_module(module_temperature, poa_global, deltaT,\n irrad_ref)\n\n\ndef sapm_module(poa_global, temp_air, wind_speed, a, b):\n r'''\n Calculate module back surface temperature per the Sandia Array\n Performance Model.\n\n See [1]_ for details on the Sandia Array Performance Model.\n\n Parameters\n ----------\n poa_global : numeric\n Total incident irradiance [W/m^2].\n\n temp_air : numeric\n Ambient dry bulb temperature [C].\n\n wind_speed : numeric\n Wind speed at a height of 10 meters [m/s].\n\n a : float\n Parameter :math:`a` in :eq:`sapm1mod`.\n\n b : float\n Parameter :math:`b` in :eq:`sapm1mod`.\n\n Returns\n -------\n numeric, values in degrees C.\n\n Notes\n -----\n The model for module temperature :math:`T_{m}` is given by Eq. 11 in [1]_.\n\n .. math::\n :label: sapm1mod\n\n T_{m} = E \\times \\exp (a + b \\times WS) + T_{a}\n\n Inputs to the model are plane-of-array irradiance :math:`E` (W/m2) and\n ambient air temperature :math:`T_{a}` (C). Model outputs are surface\n temperature at the back of the module :math:`T_{m}` and cell temperature\n :math:`T_{C}`. Model parameters depend both on the module construction and\n its mounting. Parameter sets are provided in [1]_ for representative\n modules and mounting, and are coded for convenience in\n :data:`~pvlib.temperature.TEMPERATURE_MODEL_PARAMETERS`.\n\n +---------------+----------------+-------+---------+---------------------+\n | Module | Mounting | a | b | :math:`\\Delta T [C]`|\n +===============+================+=======+=========+=====================+\n | glass/glass | open rack | -3.47 | -0.0594 | 3 |\n +---------------+----------------+-------+---------+---------------------+\n | glass/glass | close roof | -2.98 | -0.0471 | 1 |\n +---------------+----------------+-------+---------+---------------------+\n | glass/polymer | open rack | -3.56 | -0.075 | 3 |\n +---------------+----------------+-------+---------+---------------------+\n | glass/polymer | insulated back | -2.81 | -0.0455 | 0 |\n +---------------+----------------+-------+---------+---------------------+\n\n References\n ----------\n .. [1] King, D. 
et al, 2004, \"Sandia Photovoltaic Array Performance\n Model\", SAND Report 3535, Sandia National Laboratories, Albuquerque,\n NM.\n\n See also\n --------\n sapm_cell\n sapm_cell_from_module\n '''\n return poa_global * np.exp(a + b * wind_speed) + temp_air\n\n\ndef sapm_cell_from_module(module_temperature, poa_global, deltaT,\n irrad_ref=1000):\n r'''\n Calculate cell temperature from module temperature using the Sandia Array\n Performance Model.\n\n See [1]_ for details on the Sandia Array Performance Model.\n\n Parameters\n ----------\n module_temperature : numeric\n Temperature of back of module surface [C].\n\n poa_global : numeric\n Total incident irradiance [W/m^2].\n\n deltaT : float\n Parameter :math:`\\Delta T` in :eq:`sapm2_cell_from_mod` [C].\n\n irrad_ref : float, default 1000\n Reference irradiance, parameter :math:`E_{0}` in\n :eq:`sapm2` [W/m^2].\n\n Returns\n -------\n numeric, values in degrees C.\n\n Notes\n -----\n The model for cell temperature :math:`T_{C}` is given by Eq. 12 in [1]_.\n\n .. math::\n :label: sapm2_cell_from_mod\n\n T_{C} = T_{m} + \\frac{E}{E_{0}} \\Delta T\n\n The module back surface temperature :math:`T_{m}` is implemented in\n :py:func:`~pvlib.temperature.sapm_module`.\n\n Model parameters depend both on the module construction and its mounting.\n Parameter sets are provided in [1]_ for representative modules and\n mounting, and are coded for convenience in\n :data:`~pvlib.temperature.TEMPERATURE_MODEL_PARAMETERS`.\n\n +---------------+----------------+-------+---------+---------------------+\n | Module | Mounting | a | b | :math:`\\Delta T [C]`|\n +===============+================+=======+=========+=====================+\n | glass/glass | open rack | -3.47 | -0.0594 | 3 |\n +---------------+----------------+-------+---------+---------------------+\n | glass/glass | close roof | -2.98 | -0.0471 | 1 |\n +---------------+----------------+-------+---------+---------------------+\n | glass/polymer | open rack | -3.56 | -0.075 | 3 |\n +---------------+----------------+-------+---------+---------------------+\n | glass/polymer | insulated back | -2.81 | -0.0455 | 0 |\n +---------------+----------------+-------+---------+---------------------+\n\n References\n ----------\n .. [1] King, D. et al, 2004, \"Sandia Photovoltaic Array Performance\n Model\", SAND Report 3535, Sandia National Laboratories, Albuquerque,\n NM.\n\n See also\n --------\n sapm_cell\n sapm_module\n '''\n return module_temperature + (poa_global / irrad_ref) * deltaT\n\n\ndef pvsyst_cell(poa_global, temp_air, wind_speed=1.0, u_c=29.0, u_v=0.0,\n eta_m=0.1, alpha_absorption=0.9):\n r\"\"\"\n Calculate cell temperature using an empirical heat loss factor model\n as implemented in PVsyst.\n\n Parameters\n ----------\n poa_global : numeric\n Total incident irradiance [W/m^2].\n\n temp_air : numeric\n Ambient dry bulb temperature [C].\n\n wind_speed : numeric, default 1.0\n Wind speed in m/s measured at the same height for which the wind loss\n factor was determined. The default value 1.0 m/2 is the wind\n speed at module height used to determine NOCT. [m/s]\n\n u_c : float, default 29.0\n Combined heat loss factor coefficient. The default value is\n representative of freestanding modules with the rear surfaces exposed\n to open air (e.g., rack mounted). Parameter :math:`U_{c}` in\n :eq:`pvsyst`.\n :math:`\\left[\\frac{\\text{W}/{\\text{m}^2}}{\\text{C}}\\right]`\n\n u_v : float, default 0.0\n Combined heat loss factor influenced by wind. 
Parameter :math:`U_{v}`\n in :eq:`pvsyst`.\n :math:`\\left[ \\frac{\\text{W}/\\text{m}^2}{\\text{C}\\ \\left( \\text{m/s} \\right)} \\right]`\n\n eta_m : numeric, default 0.1\n Module external efficiency as a fraction, i.e., DC power / poa_global.\n Parameter :math:`\\eta_{m}` in :eq:`pvsyst`.\n\n alpha_absorption : numeric, default 0.9\n Absorption coefficient. Parameter :math:`\\alpha` in :eq:`pvsyst`.\n\n Returns\n -------\n numeric, values in degrees Celsius\n\n Notes\n -----\n The Pvsyst model for cell temperature :math:`T_{C}` is given by\n\n .. math::\n :label: pvsyst\n\n T_{C} = T_{a} + \\frac{\\alpha E (1 - \\eta_{m})}{U_{c} + U_{v} \\times WS}\n\n Inputs to the model are plane-of-array irradiance :math:`E` (W/m2), ambient\n air temperature :math:`T_{a}` (C) and wind speed :math:`WS` (m/s). Model\n output is cell temperature :math:`T_{C}`. Model parameters depend both on\n the module construction and its mounting. Parameters are provided in\n [1]_ for open (freestanding) and close (insulated) mounting configurations,\n , and are coded for convenience in\n :data:`~pvlib.temperature.TEMPERATURE_MODEL_PARAMETERS`. The heat loss\n factors provided represent the combined effect of convection, radiation and\n conduction, and their values are experimentally determined.\n\n +--------------+---------------+---------------+\n | Mounting | :math:`U_{c}` | :math:`U_{v}` |\n +==============+===============+===============+\n | freestanding | 29.0 | 0.0 |\n +--------------+---------------+---------------+\n | insulated | 15.0 | 0.0 |\n +--------------+---------------+---------------+\n\n References\n ----------\n .. [1] \"PVsyst 6 Help\", Files.pvsyst.com, 2018. [Online]. Available:\n http://files.pvsyst.com/help/index.html. [Accessed: 10- Dec- 2018].\n\n .. [2] Faiman, D. (2008). \"Assessing the outdoor operating temperature of\n photovoltaic modules.\" Progress in Photovoltaics 16(4): 307-315.\n\n Examples\n --------\n >>> from pvlib.temperature import pvsyst_cell, TEMPERATURE_MODEL_PARAMETERS\n >>> params = TEMPERATURE_MODEL_PARAMETERS['pvsyst']['freestanding']\n >>> pvsyst_cell(1000, 10, **params)\n 37.93103448275862\n \"\"\"\n\n total_loss_factor = u_c + u_v * wind_speed\n heat_input = poa_global * alpha_absorption * (1 - eta_m)\n temp_difference = heat_input / total_loss_factor\n return temp_air + temp_difference\n\n\ndef faiman(poa_global, temp_air, wind_speed=1.0, u0=25.0, u1=6.84):\n r'''\n Calculate cell or module temperature using the Faiman model. The Faiman\n model uses an empirical heat loss factor model [1]_ and is adopted in the\n IEC 61853 standards [2]_ and [3]_.\n\n Usage of this model in the IEC 61853 standard does not distinguish\n between cell and module temperature.\n\n Parameters\n ----------\n poa_global : numeric\n Total incident irradiance [W/m^2].\n\n temp_air : numeric\n Ambient dry bulb temperature [C].\n\n wind_speed : numeric, default 1.0\n Wind speed in m/s measured at the same height for which the wind loss\n factor was determined. The default value 1.0 m/s is the wind\n speed at module height used to determine NOCT. [m/s]\n\n u0 : numeric, default 25.0\n Combined heat loss factor coefficient. The default value is one\n determined by Faiman for 7 silicon modules.\n :math:`\\left[\\frac{\\text{W}/{\\text{m}^2}}{\\text{C}}\\right]`\n\n u1 : numeric, default 6.84\n Combined heat loss factor influenced by wind. 
The default value is one\n determined by Faiman for 7 silicon modules.\n :math:`\\left[ \\frac{\\text{W}/\\text{m}^2}{\\text{C}\\ \\left( \\text{m/s} \\right)} \\right]`\n\n Returns\n -------\n numeric, values in degrees Celsius\n\n Notes\n -----\n All arguments may be scalars or vectors. If multiple arguments\n are vectors they must be the same length.\n\n References\n ----------\n .. [1] Faiman, D. (2008). \"Assessing the outdoor operating temperature of\n photovoltaic modules.\" Progress in Photovoltaics 16(4): 307-315.\n\n .. [2] \"IEC 61853-2 Photovoltaic (PV) module performance testing and energy\n rating - Part 2: Spectral responsivity, incidence angle and module\n operating temperature measurements\". IEC, Geneva, 2018.\n\n .. [3] \"IEC 61853-3 Photovoltaic (PV) module performance testing and energy\n rating - Part 3: Energy rating of PV modules\". IEC, Geneva, 2018.\n\n '''\n # Contributed by Anton Driesse (@adriesse), PV Performance Labs. Dec., 2019\n\n # The following lines may seem odd since u0 & u1 are probably scalar,\n # but it serves an indirect and easy way of allowing lists and\n # tuples for the other function arguments.\n u0 = np.asanyarray(u0)\n u1 = np.asanyarray(u1)\n\n total_loss_factor = u0 + u1 * wind_speed\n heat_input = poa_global\n temp_difference = heat_input / total_loss_factor\n return temp_air + temp_difference\n\n\ndef _fuentes_hconv(tave, windmod, tinoct, temp_delta, xlen, tilt,\n check_reynold):\n # Calculate the convective coefficient as in Fuentes 1987 -- a mixture of\n # free, laminar, and turbulent convection.\n densair = 0.003484 * 101325.0 / tave # density\n visair = 0.24237e-6 * tave**0.76 / densair # kinematic viscosity\n condair = 2.1695e-4 * tave**0.84 # thermal conductivity\n reynold = windmod * xlen / visair\n # the boundary between laminar and turbulent is modeled as an abrupt\n # change at Re = 1.2e5:\n if check_reynold and reynold > 1.2e5:\n # turbulent convection\n hforce = 0.0282 / reynold**0.2 * densair * windmod * 1007 / 0.71**0.4\n else:\n # laminar convection\n hforce = 0.8600 / reynold**0.5 * densair * windmod * 1007 / 0.71**0.67\n # free convection via Grashof number\n # NB: Fuentes hardwires sind(tilt) as 0.5 for tilt=30\n grashof = 9.8 / tave * temp_delta * xlen**3 / visair**2 * sind(tilt)\n # product of Nusselt number and (k/l)\n hfree = 0.21 * (grashof * 0.71)**0.32 * condair / xlen\n # combine free and forced components\n hconv = (hfree**3 + hforce**3)**(1/3)\n return hconv\n\n\ndef _hydraulic_diameter(width, height):\n # calculate the hydraulic diameter of a rectangle\n return 2 * (width * height) / (width + height)\n\n\ndef fuentes(poa_global, temp_air, wind_speed, noct_installed, module_height=5,\n wind_height=9.144, emissivity=0.84, absorption=0.83,\n surface_tilt=30, module_width=0.31579, module_length=1.2):\n \"\"\"\n Calculate cell or module temperature using the Fuentes model.\n\n The Fuentes model is a first-principles heat transfer energy balance\n model [1]_ that is used in PVWatts for cell temperature modeling [2]_.\n\n Parameters\n ----------\n poa_global : pandas Series\n Total incident irradiance [W/m^2]\n\n temp_air : pandas Series\n Ambient dry bulb temperature [C]\n\n wind_speed : pandas Series\n Wind speed [m/s]\n\n noct_installed : float\n The \"installed\" nominal operating cell temperature as defined in [1]_.\n PVWatts assumes this value to be 45 C for rack-mounted arrays and\n 49 C for roof mount systems with restricted air flow around the\n module. 
[C]\n\n module_height : float, default 5.0\n The height above ground of the center of the module. The PVWatts\n default is 5.0 [m]\n\n wind_height : float, default 9.144\n The height above ground at which ``wind_speed`` is measured. The\n PVWatts defauls is 9.144 [m]\n\n emissivity : float, default 0.84\n The effectiveness of the module at radiating thermal energy. [unitless]\n\n absorption : float, default 0.83\n The fraction of incident irradiance that is converted to thermal\n energy in the module. [unitless]\n\n surface_tilt : float, default 30\n Module tilt from horizontal. If not provided, the default value\n of 30 degrees from [1]_ and [2]_ is used. [degrees]\n\n module_width : float, default 0.31579\n Module width. The default value of 0.31579 meters in combination with\n the default `module_length` gives a hydraulic diameter of 0.5 as\n assumed in [1]_ and [2]_. [m]\n\n module_length : float, default 1.2\n Module length. The default value of 1.2 meters in combination with\n the default `module_width` gives a hydraulic diameter of 0.5 as\n assumed in [1]_ and [2]_. [m]\n\n Returns\n -------\n temperature_cell : pandas Series\n The modeled cell temperature [C]\n\n Notes\n -----\n This function returns slightly different values from PVWatts at night\n and just after dawn. This is because the SAM SSC assumes that module\n temperature equals ambient temperature when irradiance is zero so it can\n skip the heat balance calculation at night.\n\n References\n ----------\n .. [1] Fuentes, M. K., 1987, \"A Simplifed Thermal Model for Flat-Plate\n Photovoltaic Arrays\", SAND85-0330, Sandia National Laboratories,\n Albuquerque NM.\n http://prod.sandia.gov/techlib/access-control.cgi/1985/850330.pdf\n .. [2] Dobos, A. P., 2014, \"PVWatts Version 5 Manual\", NREL/TP-6A20-62641,\n National Renewable Energy Laboratory, Golden CO.\n doi:10.2172/1158421.\n \"\"\"\n # ported from the FORTRAN77 code provided in Appendix A of Fuentes 1987;\n # nearly all variable names are kept the same for ease of comparison.\n\n boltz = 5.669e-8\n emiss = emissivity\n absorp = absorption\n xlen = _hydraulic_diameter(module_width, module_length)\n # cap0 has units of [J / (m^2 K)], equal to mass per unit area times\n # specific heat of the module.\n cap0 = 11000\n tinoct = noct_installed + 273.15\n\n # convective coefficient of top surface of module at NOCT\n windmod = 1.0\n tave = (tinoct + 293.15) / 2\n hconv = _fuentes_hconv(tave, windmod, tinoct, tinoct - 293.15, xlen,\n surface_tilt, False)\n\n # determine the ground temperature ratio and the ratio of the total\n # convection to the top side convection\n hground = emiss * boltz * (tinoct**2 + 293.15**2) * (tinoct + 293.15)\n backrat = (\n absorp * 800.0\n - emiss * boltz * (tinoct**4 - 282.21**4)\n - hconv * (tinoct - 293.15)\n ) / ((hground + hconv) * (tinoct - 293.15))\n tground = (tinoct**4 - backrat * (tinoct**4 - 293.15**4))**0.25\n tground = np.clip(tground, 293.15, tinoct)\n\n tgrat = (tground - 293.15) / (tinoct - 293.15)\n convrat = (absorp * 800 - emiss * boltz * (\n 2 * tinoct**4 - 282.21**4 - tground**4)) / (hconv * (tinoct - 293.15))\n\n # adjust the capacitance (thermal mass) of the module based on the INOCT.\n # It is a function of INOCT because high INOCT implies thermal coupling\n # with the racking (e.g. 
roofmount), so the thermal mass is increased.\n # `cap` has units J/(m^2 C) -- see Table 3, Equations 26 & 27\n cap = cap0\n if tinoct > 321.15:\n cap = cap * (1 + (tinoct - 321.15) / 12)\n\n # iterate through timeseries inputs\n sun0 = 0\n tmod0 = 293.15\n\n # n.b. the way Fuentes calculates the first timedelta makes it seem like\n # the value doesn't matter -- rather than recreate it here, just assume\n # it's the same as the second timedelta:\n timedelta_seconds = poa_global.index.to_series().diff().dt.total_seconds()\n timedelta_hours = timedelta_seconds / 3600\n timedelta_hours.iloc[0] = timedelta_hours.iloc[1]\n\n tamb_array = temp_air + 273.15\n sun_array = poa_global * absorp\n\n # Two of the calculations are easily vectorized, so precalculate them:\n # sky temperature -- Equation 24\n tsky_array = 0.68 * (0.0552 * tamb_array**1.5) + 0.32 * tamb_array\n # wind speed at module height -- Equation 22\n # not sure why the 1e-4 factor is included -- maybe the equations don't\n # behave well if wind == 0?\n windmod_array = wind_speed * (module_height/wind_height)**0.2 + 1e-4\n\n tmod0 = 293.15\n tmod_array = np.zeros_like(poa_global)\n\n iterator = zip(tamb_array, sun_array, windmod_array, tsky_array,\n timedelta_hours)\n for i, (tamb, sun, windmod, tsky, dtime) in enumerate(iterator):\n # solve the heat transfer equation, iterating because the heat loss\n # terms depend on tmod. NB Fuentes doesn't show that 10 iterations is\n # sufficient for convergence.\n tmod = tmod0\n for j in range(10):\n # overall convective coefficient\n tave = (tmod + tamb) / 2\n hconv = convrat * _fuentes_hconv(tave, windmod, tinoct,\n abs(tmod-tamb), xlen,\n surface_tilt, True)\n # sky radiation coefficient (Equation 3)\n hsky = emiss * boltz * (tmod**2 + tsky**2) * (tmod + tsky)\n # ground radiation coeffieicient (Equation 4)\n tground = tamb + tgrat * (tmod - tamb)\n hground = emiss * boltz * (tmod**2 + tground**2) * (tmod + tground)\n # thermal lag -- Equation 8\n eigen = - (hconv + hsky + hground) / cap * dtime * 3600\n # not sure why this check is done, maybe as a speed optimization?\n if eigen > -10:\n ex = np.exp(eigen)\n else:\n ex = 0\n # Equation 7 -- note that `sun` and `sun0` already account for\n # absorption (alpha)\n tmod = tmod0 * ex + (\n (1 - ex) * (\n hconv * tamb\n + hsky * tsky\n + hground * tground\n + sun0\n + (sun - sun0) / eigen\n ) + sun - sun0\n ) / (hconv + hsky + hground)\n tmod_array[i] = tmod\n tmod0 = tmod\n sun0 = sun\n\n return pd.Series(tmod_array - 273.15, index=poa_global.index, name='tmod')\n"}
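The temperature.py source embedded in the files field above documents the SAPM cell temperature equations (T_m = E·exp(a + b·WS) + T_a and T_C = T_m + (E/E_0)·ΔT) together with a worked docstring example. A minimal standalone sketch of that arithmetic, using the open_rack_glass_glass parameters quoted there, reproduces the docstring value without importing pvlib; it is an illustration only, not a replacement for pvlib.temperature.sapm_cell.

```python
# Sketch of the SAPM cell temperature equations quoted in the docstrings
# above (Eq. 11 and 12 of King et al. 2004). Parameters are the
# open_rack_glass_glass set from TEMPERATURE_MODEL_PARAMETERS.
import math

def sapm_cell_sketch(poa_global, temp_air, wind_speed,
                     a=-3.47, b=-0.0594, deltaT=3.0, irrad_ref=1000.0):
    # Module back-surface temperature: T_m = E * exp(a + b * WS) + T_a
    t_module = poa_global * math.exp(a + b * wind_speed) + temp_air
    # Cell temperature: T_C = T_m + (E / E_0) * deltaT
    return t_module + (poa_global / irrad_ref) * deltaT

# Matches the docstring example sapm_cell(1000, 10, 0, **params) ~= 44.117 C
print(round(sapm_cell_sketch(1000.0, 10.0, 0.0), 3))
```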
|
diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index 077a5e121d..2e882e9a46 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -236,7 +236,10 @@ PV temperature models
temperature.pvsyst_cell
temperature.faiman
temperature.fuentes
+ temperature.ross
pvsystem.PVSystem.sapm_celltemp
+ pvsystem.PVSystem.pvsyst_celltemp
+ pvsystem.PVSystem.faiman_celltemp
Temperature Model Parameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/docs/sphinx/source/whatsnew/v0.8.1.rst b/docs/sphinx/source/whatsnew/v0.8.1.rst
index 61eb026205..9b6e2a3800 100644
--- a/docs/sphinx/source/whatsnew/v0.8.1.rst
+++ b/docs/sphinx/source/whatsnew/v0.8.1.rst
@@ -13,6 +13,8 @@ Deprecations
Enhancements
~~~~~~~~~~~~
+* Added :py:func:`pvlib.temperature.ross` for cell temperature modeling using
+ only NOCT. (:pull:`1045`)
Bug fixes
@@ -36,3 +38,5 @@ Requirements
Contributors
~~~~~~~~~~~~
* Kevin Anderson (:ghuser:`kanderso-nrel`)
+* Will Holmgren (:ghuser:`wholmgren`)
+* Cliff Hansen (:ghuser:`cwhanse`)
|
{"pvlib/temperature.py": [{"type": "function", "name": "ross", "lines": [447, 491], "signature": "def ross(poa_global, temp_air, noct):", "doc": "Calculate cell temperature using the Ross model.\n\nThe Ross model [1]_ assumes the difference between cell temperature\nand ambient temperature is proportional to the plane of array irradiance,\nand assumes wind speed of 1 m/s. The model implicitly assumes steady or\nslowly changing irradiance conditions.\n\nParameters\n----------\npoa_global : numeric\n Total incident irradiance. [W/m^2]\n\ntemp_air : numeric\n Ambient dry bulb temperature. [C]\n\nnoct : numeric\n Nominal operating cell temperature [C], determined at conditions of\n 800 W/m^2 irradiance, 20 C ambient air temperature and 1 m/s wind.\n\nReturns\n-------\ncell_temperature : numeric\n Cell temperature. [C]\n\nNotes\n-----\nThe Ross model for cell temperature :math:`T_{C}` is given in [1]_ as\n\n.. math::\n\n T_{C} = T_{a} + \\frac{NOCT - 20}{80} S\n\nwhere :math:`S` is the plane of array irradiance in :math:`mW/{cm}^2`.\nThis function expects irradiance in :math:`W/m^2`.\n\nReferences\n----------\n.. [1] Ross, R. G. Jr., (1981). \"Design Techniques for Flat-Plate\n Photovoltaic Arrays\". 15th IEEE Photovoltaic Specialist Conference,\n Orlando, FL."}]}
|
0.7
|
["pvlib/tests/test_temperature.py::test_ross"]
|
["pvlib/tests/test_temperature.py::test_sapm_cell", "pvlib/tests/test_temperature.py::test_sapm_module", "pvlib/tests/test_temperature.py::test_sapm_cell_from_module", "pvlib/tests/test_temperature.py::test_sapm_ndarray", "pvlib/tests/test_temperature.py::test_sapm_series", "pvlib/tests/test_temperature.py::test_pvsyst_cell_default", "pvlib/tests/test_temperature.py::test_pvsyst_cell_kwargs", "pvlib/tests/test_temperature.py::test_pvsyst_cell_ndarray", "pvlib/tests/test_temperature.py::test_pvsyst_cell_series", "pvlib/tests/test_temperature.py::test_faiman_default", "pvlib/tests/test_temperature.py::test_faiman_kwargs", "pvlib/tests/test_temperature.py::test_faiman_list", "pvlib/tests/test_temperature.py::test_faiman_ndarray", "pvlib/tests/test_temperature.py::test_faiman_series", "pvlib/tests/test_temperature.py::test__temperature_model_params", "pvlib/tests/test_temperature.py::test_fuentes[pvwatts_8760_rackmount.csv-45]", "pvlib/tests/test_temperature.py::test_fuentes[pvwatts_8760_roofmount.csv-49]", "pvlib/tests/test_temperature.py::test_fuentes_timezone[None]", "pvlib/tests/test_temperature.py::test_fuentes_timezone[Etc/GMT+5]"]
|
aa1635bcb40dc83f82e9fd72158670c235bfe99b
|
{"first_commit_time": 1599238234.0, "pr_title": "add Ross temperature model", "pr_body": " - [x] I am familiar with the [contributing guidelines](https://pvlib-python.readthedocs.io/en/latest/contributing.html)\r\n - [x] Tests added\r\n - [x] Updates entries to [`docs/sphinx/source/api.rst`](https://github.com/pvlib/pvlib-python/blob/master/docs/sphinx/source/api.rst) for API changes.\r\n - [x] Adds description and name entries in the appropriate \"what's new\" file in [`docs/sphinx/source/whatsnew`](https://github.com/pvlib/pvlib-python/tree/master/docs/sphinx/source/whatsnew) for all changes. Includes link to the GitHub Issue with `` :issue:`num` `` or this Pull Request with `` :pull:`num` ``. Includes contributor name and/or GitHub username (link with `` :ghuser:`user` ``).\r\n - [x] New code is fully documented. Includes [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) compliant docstrings, examples, and comments where necessary.\r\n - [x] Pull request is nearly complete and ready for detailed review.\r\n - [x] Maintainer: Appropriate GitHub Labels and Milestone are assigned to the Pull Request and linked Issue.\r\n\r\nA simple cell temperature model using only NOCT as a parameter.", "pr_timeline": [{"time": 1600893971.0, "comment": "Should this really be called a \"Ross\" model? It's just basic heat loss and NOCT conditions:\r\n\r\n```\r\ntotal_loss_factor = 800 / (noct - 20)\r\nheat_input = poa_global\r\ntemp_difference = heat_input / total_loss_factor\r\nreturn temp_air + temp_difference\r\n```\r\nI wouldn't say this approach ignores the effect of wind but rather that it tacitly assumes a wind speed of 1 m/s.\r\n"}, {"time": 1600895570.0, "comment": "> Should this really be called a \"Ross\" model? It's just basic heat loss and NOCT conditions:\r\n\r\nAgree that's what it is, but it needs a name, and others have cited it as Ross' model (author of the reference).\r\n\r\n> I wouldn't say this approach ignores the effect of wind but rather that it tacitly assumes a wind speed of 1 m/s.\r\n\r\ngood point."}, {"time": 1602265103.0, "comment": "thanks @cwhanse!"}], "issues": {}}
|
pvlib/pvlib-python
| 1,177
|
https://github.com/pvlib/pvlib-python/pull/1177
|
pvlib__pvlib-python-1177
|
[]
|
8b98768818ee5ad85d9479877533651a2e9dc2cd
|
diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index 8805d199a4..7ed5ee941d 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -238,6 +238,7 @@ PV temperature models
temperature.faiman
temperature.fuentes
temperature.ross
+ temperature.noct_sam
pvsystem.PVSystem.sapm_celltemp
pvsystem.PVSystem.pvsyst_celltemp
pvsystem.PVSystem.faiman_celltemp
diff --git a/docs/sphinx/source/whatsnew/v0.9.0.rst b/docs/sphinx/source/whatsnew/v0.9.0.rst
index 81e7a0c60b..9abc4da039 100644
--- a/docs/sphinx/source/whatsnew/v0.9.0.rst
+++ b/docs/sphinx/source/whatsnew/v0.9.0.rst
@@ -96,6 +96,8 @@ Enhancements
* :py:meth:`~pvlib.pvsystem.PVSystem.get_ac` is added to calculate AC power
from DC power. Use parameter ``model`` to specify which inverter model to use.
(:pull:`1147`, :issue:`998`, :pull:`1150`)
+* Added :py:func:`~pvlib.temperature.noct_sam`, a cell temperature model
+ implemented in SAM (:pull:`1177`)
Bug fixes
~~~~~~~~~
diff --git a/pvlib/temperature.py b/pvlib/temperature.py
index 03871143e8..7ff063f64d 100644
--- a/pvlib/temperature.py
+++ b/pvlib/temperature.py
@@ -706,3 +706,109 @@ def fuentes(poa_global, temp_air, wind_speed, noct_installed, module_height=5,
sun0 = sun
return pd.Series(tmod_array - 273.15, index=poa_global.index, name='tmod')
+
+
+def _adj_for_mounting_standoff(x):
+ # supports noct cell temperature function. Except for x > 3.5, the SAM code
+ # and documentation aren't clear on the precise intervals. The choice of
+ # < or <= here is pvlib's.
+ return np.piecewise(x, [x <= 0, (x > 0) & (x < 0.5),
+ (x >= 0.5) & (x < 1.5), (x >= 1.5) & (x < 2.5),
+ (x >= 2.5) & (x <= 3.5), x > 3.5],
+ [0., 18., 11., 6., 2., 0.])
+
+
+def noct_sam(poa_global, temp_air, wind_speed, noct, eta_m_ref,
+ effective_irradiance=None, transmittance_absorptance=0.9,
+ array_height=1, mount_standoff=4):
+ r'''
+ Cell temperature model from the System Advisor Model (SAM).
+
+ The model is described in [1]_, Section 10.6.
+
+ Parameters
+ ----------
+ poa_global : numeric
+ Total incident irradiance. [W/m^2]
+
+ temp_air : numeric
+ Ambient dry bulb temperature. [C]
+
+ wind_speed : numeric
+ Wind speed in m/s measured at the same height for which the wind loss
+ factor was determined. The default value 1.0 m/s is the wind
+ speed at module height used to determine NOCT. [m/s]
+
+ noct : float
+ Nominal operating cell temperature [C], determined at conditions of
+ 800 W/m^2 irradiance, 20 C ambient air temperature and 1 m/s wind.
+
+ eta_m_ref : float
+ Module external efficiency [unitless] at reference conditions of
+ 1000 W/m^2 and 20C. Calculate as
+ :math:`\eta_{m} = \frac{V_{mp} I_{mp}}{A \times 1000 W/m^2}`
+ where A is module area [m^2].
+
+ effective_irradiance : numeric, default None.
+ The irradiance that is converted to photocurrent. If None,
+ assumed equal to poa_global. [W/m^2]
+
+ transmittance_absorptance : numeric, default 0.9
+ Coefficient for combined transmittance and absorptance effects.
+ [unitless]
+
+ array_height : int, default 1
+ Height of array above ground in stories (one story is about 3m). Must
+ be either 1 or 2. For systems elevated less than one story, use 1.
+ If system is elevated more than two stories, use 2.
+
+ mount_standoff : numeric, default 4
+ Distance between array mounting and mounting surface. Use default
+ if system is ground-mounted. [inches]
+
+ Returns
+ -------
+ cell_temperature : numeric
+ Cell temperature. [C]
+
+ Raises
+ ------
+ ValueError
+ If array_height is an invalid value (must be 1 or 2).
+
+ References
+ ----------
+ .. [1] Gilman, P., Dobos, A., DiOrio, N., Freeman, J., Janzou, S.,
+ Ryberg, D., 2018, "SAM Photovoltaic Model Technical Reference
+ Update", National Renewable Energy Laboratory Report
+ NREL/TP-6A20-67399.
+ '''
+ # in [1] the denominator for irr_ratio isn't precisely clear. From
+ # reproducing output of the SAM function noct_celltemp_t, we determined
+ # that:
+ # - G_total (SAM) is broadband plane-of-array irradiance before
+ # reflections. Equivalent to pvlib variable poa_global
+ # - Geff_total (SAM) is POA irradiance after reflections and
+ # adjustment for spectrum. Equivalent to effective_irradiance
+ if effective_irradiance is None:
+ irr_ratio = 1.
+ else:
+ irr_ratio = effective_irradiance / poa_global
+
+ if array_height == 1:
+ wind_adj = 0.51 * wind_speed
+ elif array_height == 2:
+ wind_adj = 0.61 * wind_speed
+ else:
+ raise ValueError(
+ f'array_height must be 1 or 2, {array_height} was given')
+
+ noct_adj = noct + _adj_for_mounting_standoff(mount_standoff)
+ tau_alpha = transmittance_absorptance * irr_ratio
+
+ # [1] Eq. 10.37 isn't clear on exactly what "G" is. SAM SSC code uses
+ # poa_global where G appears
+ cell_temp_init = poa_global / 800. * (noct_adj - 20.)
+ heat_loss = 1 - eta_m_ref / tau_alpha
+ wind_loss = 9.5 / (5.7 + 3.8 * wind_adj)
+ return temp_air + cell_temp_init * heat_loss * wind_loss
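The diff above adds pvlib.temperature.noct_sam. A short usage sketch, assuming the patch has been applied to a pvlib checkout, with inputs mirroring the accompanying test in the test_patch below:

```python
# Usage sketch for the noct_sam function introduced by the patch above,
# assuming the patch is applied to a pvlib checkout. Inputs mirror the
# accompanying test: 1000 W/m^2 POA, 25 C ambient, 1 m/s wind, NOCT 45 C,
# reference module efficiency 0.2; array_height and mount_standoff keep
# their defaults (1 story, 4 inches).
from pvlib import temperature

t_cell = temperature.noct_sam(poa_global=1000., temp_air=25., wind_speed=1.,
                              noct=45., eta_m_ref=0.2)
print(t_cell)  # ~55.23 C, the value expected by test_noct_sam
```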
|
diff --git a/pvlib/tests/test_temperature.py b/pvlib/tests/test_temperature.py
index f8ea3a8bc1..1ce2e69a47 100644
--- a/pvlib/tests/test_temperature.py
+++ b/pvlib/tests/test_temperature.py
@@ -5,7 +5,7 @@
from conftest import DATA_DIR, assert_series_equal
from numpy.testing import assert_allclose
-from pvlib import temperature
+from pvlib import temperature, tools
@pytest.fixture
@@ -212,3 +212,76 @@ def test_fuentes_timezone(tz):
assert_series_equal(out, pd.Series([47.85, 50.85, 50.85], index=index,
name='tmod'))
+
+
+def test_noct_sam():
+ poa_global, temp_air, wind_speed, noct, eta_m_ref = (1000., 25., 1., 45.,
+ 0.2)
+ expected = 55.230790492
+ result = temperature.noct_sam(poa_global, temp_air, wind_speed, noct,
+ eta_m_ref)
+ assert_allclose(result, expected)
+ # test with different types
+ result = temperature.noct_sam(np.array(poa_global), np.array(temp_air),
+ np.array(wind_speed), np.array(noct),
+ np.array(eta_m_ref))
+ assert_allclose(result, expected)
+ dr = pd.date_range(start='2020-01-01 12:00:00', end='2020-01-01 13:00:00',
+ freq='1H')
+ result = temperature.noct_sam(pd.Series(index=dr, data=poa_global),
+ pd.Series(index=dr, data=temp_air),
+ pd.Series(index=dr, data=wind_speed),
+ pd.Series(index=dr, data=noct),
+ eta_m_ref)
+ assert_series_equal(result, pd.Series(index=dr, data=expected))
+
+
+def test_noct_sam_against_sam():
+ # test is constructed to reproduce output from SAM v2020.11.29.
+ # SAM calculation is the default Detailed PV System model (CEC diode model,
+ # NOCT cell temperature model), with the only change being the soiling
+ # loss is set to 0. Weather input is TMY3 for Phoenix AZ.
+ # Values are taken from the Jan 1 12:00:00 timestamp.
+ poa_total, temp_air, wind_speed, noct, eta_m_ref = (
+ 860.673, 25, 3, 46.4, 0.20551)
+ poa_total_after_refl = 851.458 # from SAM output
+ # compute effective irradiance
+ # spectral loss coefficients fixed in lib_cec6par.cpp
+ a = np.flipud([0.918093, 0.086257, -0.024459, 0.002816, -0.000126])
+ # reproduce SAM air mass calculation
+ zen = 56.4284
+ elev = 358
+ air_mass = 1. / (tools.cosd(zen) + 0.5057 * (96.080 - zen)**-1.634)
+ air_mass *= np.exp(-0.0001184 * elev)
+ f1 = np.polyval(a, air_mass)
+ effective_irradiance = f1 * poa_total_after_refl
+ transmittance_absorptance = 0.9
+ array_height = 1
+ mount_standoff = 4.0
+ result = temperature.noct_sam(poa_total, temp_air, wind_speed, noct,
+ eta_m_ref, effective_irradiance,
+ transmittance_absorptance, array_height,
+ mount_standoff)
+ expected = 43.0655
+ # rtol from limited SAM output precision
+ assert_allclose(result, expected, rtol=1e-5)
+
+
+def test_noct_sam_options():
+ poa_global, temp_air, wind_speed, noct, eta_m_ref = (1000., 25., 1., 45.,
+ 0.2)
+ effective_irradiance = 1100.
+ transmittance_absorbtance = 0.8
+ array_height = 2
+ mount_standoff = 2.0
+ result = temperature.noct_sam(poa_global, temp_air, wind_speed, noct,
+ eta_m_ref, effective_irradiance,
+ transmittance_absorbtance, array_height,
+ mount_standoff)
+ expected = 60.477703576
+ assert_allclose(result, expected)
+
+
+def test_noct_sam_errors():
+ with pytest.raises(ValueError):
+ temperature.noct_sam(1000., 25., 1., 34., 0.2, array_height=3)
| 2021-02-25T22:32:29
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
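The readmes field above carries the FEA-Bench README, which notes that the published release at microsoft/FEA-Bench holds only the essential attributes and that the full dataset is rebuilt locally with feabench.get_dataset. As a small illustrative sketch (the split name "test" is an assumption), the published metadata can be inspected with the Hugging Face datasets library:

```python
# Sketch only: peek at the published FEA-Bench release mentioned in the
# README above. The split name "test" is an assumption; patches and file
# contents still have to be rebuilt locally via feabench.get_dataset.
from datasets import load_dataset

ds = load_dataset("microsoft/FEA-Bench", split="test")
print(ds.column_names)
print(ds[0]["instance_id"])
```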
|
{"docs/sphinx/source/api.rst": ".. currentmodule:: pvlib\n\n#############\nAPI reference\n#############\n\n\nClasses\n=======\n\npvlib-python provides a collection of classes for users that prefer\nobject-oriented programming. These classes can help users keep track of\ndata in a more organized way, and can help to simplify the modeling\nprocess. The classes do not add any functionality beyond the procedural\ncode. Most of the object methods are simple wrappers around the\ncorresponding procedural code.\n\n.. autosummary::\n :toctree: generated/\n\n location.Location\n pvsystem.PVSystem\n pvsystem.Array\n tracking.SingleAxisTracker\n modelchain.ModelChain\n modelchain.ModelChainResult\n\nSolar Position\n==============\n\nFunctions and methods for calculating solar position.\n\nThe :py:meth:`location.Location.get_solarposition` method and the\n:py:func:`solarposition.get_solarposition` function with default\nparameters are fast and accurate. We recommend using these functions\nunless you know that you need a different function.\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.get_solarposition\n solarposition.get_solarposition\n solarposition.spa_python\n solarposition.ephemeris\n solarposition.pyephem\n solarposition.spa_c\n\n\nAdditional functions for quantities closely related to solar position.\n\n.. autosummary::\n :toctree: generated/\n\n solarposition.calc_time\n solarposition.pyephem_earthsun_distance\n solarposition.nrel_earthsun_distance\n spa.calculate_deltat\n\n\nFunctions for calculating sunrise, sunset and transit times.\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.get_sun_rise_set_transit\n solarposition.sun_rise_set_transit_ephem\n solarposition.sun_rise_set_transit_spa\n solarposition.sun_rise_set_transit_geometric\n\n\nThe spa module contains the implementation of the built-in NREL SPA\nalgorithm.\n\n.. autosummary::\n :toctree: generated/\n\n spa\n\nCorrelations and analytical expressions for low precision solar position\ncalculations.\n\n.. autosummary::\n :toctree: generated/\n\n solarposition.solar_zenith_analytical\n solarposition.solar_azimuth_analytical\n solarposition.declination_spencer71\n solarposition.declination_cooper69\n solarposition.equation_of_time_spencer71\n solarposition.equation_of_time_pvcdrom\n solarposition.hour_angle\n solarposition.sun_rise_set_transit_geometric\n\n\nClear sky\n=========\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.get_clearsky\n clearsky.ineichen\n clearsky.lookup_linke_turbidity\n clearsky.simplified_solis\n clearsky.haurwitz\n clearsky.detect_clearsky\n clearsky.bird\n\n\nAirmass and atmospheric models\n==============================\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.get_airmass\n atmosphere.get_absolute_airmass\n atmosphere.get_relative_airmass\n atmosphere.pres2alt\n atmosphere.alt2pres\n atmosphere.gueymard94_pw\n atmosphere.first_solar_spectral_correction\n atmosphere.bird_hulstrom80_aod_bb\n atmosphere.kasten96_lt\n atmosphere.angstrom_aod_at_lambda\n atmosphere.angstrom_alpha\n\n\nIrradiance\n==========\n\nMethods for irradiance calculations\n-----------------------------------\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.PVSystem.get_irradiance\n pvsystem.PVSystem.get_aoi\n pvsystem.PVSystem.get_iam\n tracking.SingleAxisTracker.get_irradiance\n\nDecomposing and combining irradiance\n------------------------------------\n\n.. 
autosummary::\n :toctree: generated/\n\n irradiance.get_extra_radiation\n irradiance.aoi\n irradiance.aoi_projection\n irradiance.poa_horizontal_ratio\n irradiance.beam_component\n irradiance.poa_components\n irradiance.get_ground_diffuse\n irradiance.dni\n\nTransposition models\n--------------------\n\n.. autosummary::\n :toctree: generated/\n\n irradiance.get_total_irradiance\n irradiance.get_sky_diffuse\n irradiance.isotropic\n irradiance.perez\n irradiance.haydavies\n irradiance.klucher\n irradiance.reindl\n irradiance.king\n\n.. _dniestmodels:\n\nDNI estimation models\n---------------------\n\n.. autosummary::\n :toctree: generated/\n\n irradiance.disc\n irradiance.dirint\n irradiance.dirindex\n irradiance.erbs\n irradiance.campbell_norman\n irradiance.gti_dirint\n\nClearness index models\n----------------------\n\n.. autosummary::\n :toctree: generated/\n\n irradiance.clearness_index\n irradiance.clearness_index_zenith_independent\n irradiance.clearsky_index\n\n\nPV Modeling\n===========\n\nClasses\n-------\n\nThe :py:class:`~pvsystem.PVSystem` class provides many methods that\nwrap the functions listed below. See its documentation for details.\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.PVSystem\n\nIncident angle modifiers\n------------------------\n\n.. autosummary::\n :toctree: generated/\n\n iam.physical\n iam.ashrae\n iam.martin_ruiz\n iam.martin_ruiz_diffuse\n iam.sapm\n iam.interp\n iam.marion_diffuse\n iam.marion_integrate\n\nPV temperature models\n---------------------\n\n.. autosummary::\n :toctree: generated/\n\n temperature.sapm_cell\n temperature.sapm_module\n temperature.sapm_cell_from_module\n temperature.pvsyst_cell\n temperature.faiman\n temperature.fuentes\n temperature.ross\n pvsystem.PVSystem.sapm_celltemp\n pvsystem.PVSystem.pvsyst_celltemp\n pvsystem.PVSystem.faiman_celltemp\n\nTemperature Model Parameters\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n.. currentmodule:: pvlib.temperature\n.. autodata:: TEMPERATURE_MODEL_PARAMETERS\n :annotation:\n\n.. currentmodule:: pvlib\n\nSingle diode models\n-------------------\n\nFunctions relevant for single diode models.\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.calcparams_cec\n pvsystem.calcparams_desoto\n pvsystem.calcparams_pvsyst\n pvsystem.i_from_v\n pvsystem.singlediode\n pvsystem.v_from_i\n pvsystem.max_power_point\n\nLow-level functions for solving the single diode equation.\n\n.. autosummary::\n :toctree: generated/\n\n singlediode.estimate_voc\n singlediode.bishop88\n singlediode.bishop88_i_from_v\n singlediode.bishop88_v_from_i\n singlediode.bishop88_mpp\n\nFunctions for fitting diode models\n\n.. autosummary::\n :toctree: generated/\n\n ivtools.sde.fit_sandia_simple\n ivtools.sdm.fit_cec_sam\n ivtools.sdm.fit_desoto\n\nInverter models (DC to AC conversion)\n-------------------------------------\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.PVSystem.get_ac\n inverter.sandia\n inverter.sandia_multi\n inverter.adr\n inverter.pvwatts\n inverter.pvwatts_multi\n\nFunctions for fitting inverter models\n\n.. autosummary::\n :toctree: generated/\n\n inverter.fit_sandia\n\n\nPV System Models\n----------------\n\nSandia array performance model (SAPM)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.sapm\n pvsystem.sapm_effective_irradiance\n pvsystem.sapm_spectral_loss\n inverter.sandia\n temperature.sapm_cell\n\nPvsyst model\n^^^^^^^^^^^^\n\n.. 
autosummary::\n :toctree: generated/\n\n temperature.pvsyst_cell\n pvsystem.calcparams_pvsyst\n pvsystem.singlediode\n\nPVWatts model\n^^^^^^^^^^^^^\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.pvwatts_dc\n inverter.pvwatts\n pvsystem.pvwatts_losses\n\nEstimating PV model parameters\n------------------------------\n\nFunctions for fitting single diode models\n\n.. autosummary::\n :toctree: generated/\n\n ivtools.sdm.fit_cec_sam\n ivtools.sdm.fit_desoto\n ivtools.sdm.fit_pvsyst_sandia\n ivtools.sdm.fit_desoto_sandia\n\nFunctions for fitting the single diode equation\n\n.. autosummary::\n :toctree: generated/\n\n ivtools.sde.fit_sandia_simple\n\nUtilities for working with IV curve data\n\n.. autosummary::\n :toctree: generated/\n\n ivtools.utils.rectify_iv_curve\n\nOther\n-----\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.retrieve_sam\n pvsystem.scale_voltage_current_power\n\n\nEffects on PV System Output\n===========================\n\nLoss models\n-----------\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.combine_loss_factors\n\nSnow\n----\n\n.. autosummary::\n :toctree: generated/\n\n snow.coverage_nrel\n snow.fully_covered_nrel\n snow.dc_loss_nrel\n\nSoiling\n-------\n\n.. autosummary::\n :toctree: generated/\n\n soiling.hsu\n soiling.kimber\n\nShading\n-------\n\n.. autosummary::\n :toctree: generated/\n\n shading.masking_angle\n shading.masking_angle_passias\n shading.sky_diffuse_passias\n\nSpectrum\n--------\n\n.. autosummary::\n :toctree: generated/\n\n spectrum.spectrl2\n\nTracking\n========\n\nSingleAxisTracker\n-----------------\n\nThe :py:class:`~tracking.SingleAxisTracker` inherits from\n:py:class:`~pvsystem.PVSystem`.\n\n.. autosummary::\n :toctree: generated/\n\n tracking.SingleAxisTracker\n tracking.SingleAxisTracker.singleaxis\n tracking.SingleAxisTracker.get_irradiance\n\nFunctions\n---------\n\n.. autosummary::\n :toctree: generated/\n\n tracking.singleaxis\n tracking.calc_axis_tilt\n tracking.calc_cross_axis_tilt\n\n\n.. _iotools:\n\nIO Tools\n========\n\nFunctions for reading and writing data from a variety of file formats\nrelevant to solar energy modeling.\n\n.. autosummary::\n :toctree: generated/\n\n iotools.read_tmy2\n iotools.read_tmy3\n iotools.read_epw\n iotools.parse_epw\n iotools.read_srml\n iotools.read_srml_month_from_solardat\n iotools.read_surfrad\n iotools.read_midc\n iotools.read_midc_raw_data_from_nrel\n iotools.read_ecmwf_macc\n iotools.get_ecmwf_macc\n iotools.read_crn\n iotools.read_solrad\n iotools.get_psm3\n iotools.read_psm3\n iotools.parse_psm3\n iotools.get_pvgis_tmy\n iotools.read_pvgis_tmy\n iotools.read_bsrn\n\nA :py:class:`~pvlib.location.Location` object may be created from metadata\nin some files.\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.from_tmy\n location.Location.from_epw\n\n\nForecasting\n===========\n\nForecast models\n---------------\n\n.. autosummary::\n :toctree: generated/\n\n forecast.GFS\n forecast.NAM\n forecast.RAP\n forecast.HRRR\n forecast.HRRR_ESRL\n forecast.NDFD\n\nGetting data\n------------\n\n.. autosummary::\n :toctree: generated/\n\n forecast.ForecastModel.get_data\n forecast.ForecastModel.get_processed_data\n\nProcessing data\n---------------\n\n.. 
autosummary::\n :toctree: generated/\n\n forecast.ForecastModel.process_data\n forecast.ForecastModel.rename\n forecast.ForecastModel.cloud_cover_to_ghi_linear\n forecast.ForecastModel.cloud_cover_to_irradiance_clearsky_scaling\n forecast.ForecastModel.cloud_cover_to_transmittance_linear\n forecast.ForecastModel.cloud_cover_to_irradiance_campbell_norman\n forecast.ForecastModel.cloud_cover_to_irradiance\n forecast.ForecastModel.kelvin_to_celsius\n forecast.ForecastModel.isobaric_to_ambient_temperature\n forecast.ForecastModel.uv_to_speed\n forecast.ForecastModel.gust_to_speed\n\nIO support\n----------\n\nThese are public for now, but use at your own risk.\n\n.. autosummary::\n :toctree: generated/\n\n forecast.ForecastModel.set_dataset\n forecast.ForecastModel.set_query_latlon\n forecast.ForecastModel.set_location\n forecast.ForecastModel.set_time\n\n\nModelChain\n==========\n\nCreating a ModelChain object.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain\n modelchain.ModelChain.with_pvwatts\n modelchain.ModelChain.with_sapm\n\nRunning\n-------\n\nA ModelChain can be run from a number of starting points, depending on the\ninput data available.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.run_model\n modelchain.ModelChain.run_model_from_poa\n modelchain.ModelChain.run_model_from_effective_irradiance\n\nFunctions to assist with setting up ModelChains to run\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.complete_irradiance\n modelchain.ModelChain.prepare_inputs\n modelchain.ModelChain.prepare_inputs_from_poa\n\nResults\n-------\n\nOutput from the running the ModelChain is stored in the\n:py:attr:`modelchain.ModelChain.results` attribute. For more\ninformation see :py:class:`modelchain.ModelChainResult`.\n\nAttributes\n----------\n\nSimple ModelChain attributes:\n\n``system, location, clearsky_model, transposition_model,\nsolar_position_method, airmass_model``\n\nProperties\n----------\n\nModelChain properties that are aliases for your specific modeling functions.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.orientation_strategy\n modelchain.ModelChain.dc_model\n modelchain.ModelChain.ac_model\n modelchain.ModelChain.aoi_model\n modelchain.ModelChain.spectral_model\n modelchain.ModelChain.temperature_model\n modelchain.ModelChain.losses_model\n modelchain.ModelChain.effective_irradiance_model\n\nModel definitions\n-----------------\n\nModelChain model definitions.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.sapm\n modelchain.ModelChain.cec\n modelchain.ModelChain.desoto\n modelchain.ModelChain.pvsyst\n modelchain.ModelChain.pvwatts_dc\n modelchain.ModelChain.sandia_inverter\n modelchain.ModelChain.adr_inverter\n modelchain.ModelChain.pvwatts_inverter\n modelchain.ModelChain.ashrae_aoi_loss\n modelchain.ModelChain.physical_aoi_loss\n modelchain.ModelChain.sapm_aoi_loss\n modelchain.ModelChain.no_aoi_loss\n modelchain.ModelChain.first_solar_spectral_loss\n modelchain.ModelChain.sapm_spectral_loss\n modelchain.ModelChain.no_spectral_loss\n modelchain.ModelChain.sapm_temp\n modelchain.ModelChain.pvsyst_temp\n modelchain.ModelChain.faiman_temp\n modelchain.ModelChain.fuentes_temp\n modelchain.ModelChain.pvwatts_losses\n modelchain.ModelChain.no_extra_losses\n\nInference methods\n-----------------\n\nMethods that automatically determine which models should be used based\non the information in the associated :py:class:`~pvsystem.PVSystem` object.\n\n.. 
autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.infer_dc_model\n modelchain.ModelChain.infer_ac_model\n modelchain.ModelChain.infer_aoi_model\n modelchain.ModelChain.infer_spectral_model\n modelchain.ModelChain.infer_temperature_model\n modelchain.ModelChain.infer_losses_model\n\nFunctions\n---------\n\nFunctions for power modeling.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.basic_chain\n modelchain.get_orientation\n\n\nBifacial\n========\n\nMethods for calculating back surface irradiance\n\n.. autosummary::\n :toctree: generated/\n\n bifacial.pvfactors_timeseries\n\n\nScaling\n=======\n\nMethods for manipulating irradiance for temporal or spatial considerations\n\n.. autosummary::\n :toctree: generated/\n\n scaling.wvm\n", "docs/sphinx/source/whatsnew/v0.9.0.rst": ".. _whatsnew_0900:\n\nv0.9.0 (MONTH DAY YEAR)\n-----------------------\n\nBreaking changes\n~~~~~~~~~~~~~~~~\n* Moved functions related to inverters from ``pvsystem.py`` to ``inverter.py``.\n Functions are renamed to follow a more consistent pattern, as follows (:pull:`886`, :pull:`1136`):\n\n - ``pvlib.pvsystem.snlinverter`` is now :py:func:`pvlib.inverter.sandia`\n - ``pvlib.pvsystem.pvwatts_ac`` is now :py:func:`pvlib.inverter.pvwatts`\n - ``pvlib.pvsystem.adrinverter`` is now :py:func:`pvlib.inverter.adr`\n\n* Argument ``ac_model`` for :py:class:`pvlib.modelchain.ModelChain` now accepts\n ``'sandia'``, ``'pvwatts'`` and ``'adr'`` for the inverter models. (:pull:`886`, :pull:`1136`)\n\n* Calling :py:meth:`pvlib.pvsystem.PVSystem.sapm_celltemp` without setting\n ``PVSystem.temperature_model_parameters``,\n or a valid combination of ``PVsystem.module_type`` and ``PVsystem.racking_model``, will\n now raise an exception. (:issue:`1030`, :pull:`1033`, :pull:`1136`)\n\n* Deprecated arbitrary keyword arguments for\n :py:class:`pvlib.location.Location`, :py:class:`pvlib.pvsystem.PVSystem`,\n :py:class:`pvlib.tracking.SingleAxisTracker`, and\n :py:class:`pvlib.modelchain.ModelChain`. Supplying arbitrary keyword\n to these objects result in TypeErrors in v0.9. (:issue:`1029`, :pull:`1053`, :pull:`1136`)\n\n* ``pvlib.pvsystem.LocalizedPVSystem`` and ``pvlib.pvsystem.LocalizedSingleAxisTracker``\n have been removed. Use\n :py:class:`pvlib.location.Location`, :py:class:`pvlib.pvsystem.PVSystem`,\n :py:class:`pvlib.tracking.SingleAxisTracker`, and\n :py:class:`pvlib.modelchain.ModelChain` instead.\n (:issue:`1029`, :pull:`1034`, :pull:`1053`, :pull:`1136`)\n\n* ``irradiance.liujordan`` and ``ForecastModel.cloud_cover_to_irradiance_liujordan``\n have been removed. (:pull:`1136`)\n\n* ``ModelChain.snlinverter`` changed to ``ModelChain.sandia_inverter``.\n ``ModelChain.adrinverter`` changed to ``ModelChain.adr_inverter``.\n (:pull:`1150`)\n\n\nDeprecations\n~~~~~~~~~~~~\n* The following ``ModelChain`` attributes are deprecated. They have been moved\n to the :py:class:`~pvlib.modelchain.ModelChainResult` class that is\n accessible via ``ModelChain.results``:\n\n * ``ModelChain.ac``\n * ``ModelChain.airmass``\n * ``ModelChain.aoi``\n * ``ModelChain.aoi_modifier``\n * ``ModelChain.cell_temperature``\n * ``ModelChain.dc``\n * ``ModelChain.diode_params``\n * ``ModelChain.effective_irradiance``\n * ``ModelChain.solar_position``\n * ``ModelChain.spectral_modifier``\n * ``ModelChain.total_irrad``\n * ``ModelChain.tracking``\n\nEnhancements\n~~~~~~~~~~~~\n* Add :func:`~pvlib.iotools.read_bsrn` for reading BSRN solar radiation data\n files. 
(:pull:`1145`, :issue:`1015`)\n* In :py:class:`~pvlib.modelchain.ModelChain`, attributes which contain\n output of models are now collected into ``ModelChain.results``.\n (:pull:`1076`, :issue:`1067`)\n* Added :py:class:`~pvlib.pvsystem.Array` class to represent an array of\n modules separately from a :py:class:`~pvlib.pvsystem.PVSystem`.\n (:pull:`1076`, :issue:`1067`)\n* Added capability for modeling a PV system with multiple arrays in\n :py:class:`~pvlib.pvsystem.PVSystem`. Updates the ``PVSystem`` API\n to operate on and return tuples where each element of the tuple corresponds\n to the input or output for a specific ``Array``. (:pull:`1076`,\n :issue:`1067`)\n* Support for systems with multiple ``Arrays`` added to\n :py:class:`~pvlib.modelchain.ModelChain`. This includes substantial API\n enhancements for accepting different weather input for each ``Array`` in the\n system. (:pull:`1076`, :issue:`1067`)\n* Support for :py:func:`~pvlib.inverter.sandia_multi` and\n :py:func:`~pvlib.inverter.pvwatts_multi` added to\n :py:class:`~pvlib.pvsystem.PVSystem` and\n :py:class:`~pvlib.modelchain.ModelChain` (as ``ac_model='sandia'``\n and ``ac_model='pvwatts'``).\n (:pull:`1076`, :issue:`1067`, :pull:`1132`, :issue:`1117`, :pull:`1150`)\n* :py:class:`~pvlib.modelchain.ModelChain` 'run_model' methods now\n automatically switch to using ``'effective_irradiance'`` (if available) for\n cell temperature models, when ``'poa_global'`` is not provided in input\n weather or calculated from input weather data.\n* :py:meth:`~pvlib.modelchain.ModelChain.pvwatts_dc` now scales the DC power\n by ``pvsystem.PVSystem.modules_per_strings`` and\n ``pvsystem.PVSystem.strings_per_inverter``. Note that both attributes still\n default to 1. (:pull:`1138`)\n* :py:meth:`~pvlib.pvsystem.PVSystem.get_ac` is added to calculate AC power\n from DC power. Use parameter ``model`` to specify which inverter model to use.\n (:pull:`1147`, :issue:`998`, :pull:`1150`)\n\nBug fixes\n~~~~~~~~~\n* Pass weather data to solar position calculations in\n :py:meth:`~pvlib.modelchain.ModelChain.prepare_inputs_from_poa`.\n (:issue:`1065`, :pull:`1140`)\n* Reindl model fixed to generate sky_diffuse=0 when GHI=0.\n (:issue:`1153`, :pull:`1154`)\n\nTesting\n~~~~~~~\n\nDocumentation\n~~~~~~~~~~~~~\n\n* Update intro tutorial to highlight the use of historical meteorological data\n and to make the procedural and object oriented results match exactly.\n* Update documentation links in :py:func:`pvlib.iotools.get_psm3`\n\nRequirements\n~~~~~~~~~~~~\n* ``dataclasses`` is required for python 3.6\n\nContributors\n~~~~~~~~~~~~\n* Will Holmgren (:ghuser:`wholmgren`)\n* Cliff Hansen (:ghuser:`cwhanse`)\n* Will Vining (:ghuser:`wfvining`)\n* Anton Driesse (:ghuser:`adriesse`)\n* Mark Mikofski (:ghuser:`mikofski`)\n* Nate Croft (:ghuser:`ncroft-b4`)\n* Kevin Anderson (:ghuser:`kanderso-nrel`)\n* Adam R. 
Jensen (:ghuser:`AdamRJensen`)\n* Joshua Stein (:ghuser:`jsstein`)\n* Tony Lorenzo (:ghuser:`alorenzo175`)\n", "pvlib/temperature.py": "\"\"\"\nThe ``temperature`` module contains functions for modeling temperature of\nPV modules and cells.\n\"\"\"\n\nimport numpy as np\nimport pandas as pd\nfrom pvlib.tools import sind\n\nTEMPERATURE_MODEL_PARAMETERS = {\n 'sapm': {\n 'open_rack_glass_glass': {'a': -3.47, 'b': -.0594, 'deltaT': 3},\n 'close_mount_glass_glass': {'a': -2.98, 'b': -.0471, 'deltaT': 1},\n 'open_rack_glass_polymer': {'a': -3.56, 'b': -.0750, 'deltaT': 3},\n 'insulated_back_glass_polymer': {'a': -2.81, 'b': -.0455, 'deltaT': 0},\n },\n 'pvsyst': {'freestanding': {'u_c': 29.0, 'u_v': 0},\n 'insulated': {'u_c': 15.0, 'u_v': 0}}\n}\n\"\"\"Dictionary of temperature parameters organized by model.\n\nThere are keys for each model at the top level. Currently there are two models,\n``'sapm'`` for the Sandia Array Performance Model, and ``'pvsyst'``. Each model\nhas a dictionary of configurations; a value is itself a dictionary containing\nmodel parameters. Retrieve parameters by indexing the model and configuration\nby name. Note: the keys are lower-cased and case sensitive.\n\nExample\n-------\nRetrieve the open rack glass-polymer configuration for SAPM::\n\n from pvlib.temperature import TEMPERATURE_MODEL_PARAMETERS\n temperature_model_parameters = (\n TEMPERATURE_MODEL_PARAMETERS['sapm']['open_rack_glass_polymer'])\n # {'a': -3.56, 'b': -0.075, 'deltaT': 3}\n\"\"\"\n\n\ndef _temperature_model_params(model, parameter_set):\n try:\n params = TEMPERATURE_MODEL_PARAMETERS[model]\n return params[parameter_set]\n except KeyError:\n msg = ('{} is not a named set of parameters for the {} cell'\n ' temperature model.'\n ' See pvlib.temperature.TEMPERATURE_MODEL_PARAMETERS'\n ' for names'.format(parameter_set, model))\n raise KeyError(msg)\n\n\ndef sapm_cell(poa_global, temp_air, wind_speed, a, b, deltaT,\n irrad_ref=1000.):\n r'''\n Calculate cell temperature per the Sandia Array Performance Model.\n\n See [1]_ for details on the Sandia Array Performance Model.\n\n Parameters\n ----------\n poa_global : numeric\n Total incident irradiance [W/m^2].\n\n temp_air : numeric\n Ambient dry bulb temperature [C].\n\n wind_speed : numeric\n Wind speed at a height of 10 meters [m/s].\n\n a : float\n Parameter :math:`a` in :eq:`sapm1`.\n\n b : float\n Parameter :math:`b` in :eq:`sapm1`.\n\n deltaT : float\n Parameter :math:`\\Delta T` in :eq:`sapm2` [C].\n\n irrad_ref : float, default 1000\n Reference irradiance, parameter :math:`E_{0}` in\n :eq:`sapm2` [W/m^2].\n\n Returns\n -------\n numeric, values in degrees C.\n\n Notes\n -----\n The model for cell temperature :math:`T_{C}` is given by a pair of\n equations (Eq. 11 and 12 in [1]_).\n\n .. math::\n :label: sapm1\n\n T_{m} = E \\times \\exp (a + b \\times WS) + T_{a}\n\n .. math::\n :label: sapm2\n\n T_{C} = T_{m} + \\frac{E}{E_{0}} \\Delta T\n\n The module back surface temperature :math:`T_{m}` is implemented in\n :py:func:`~pvlib.temperature.sapm_module`.\n\n Inputs to the model are plane-of-array irradiance :math:`E` (W/m2) and\n ambient air temperature :math:`T_{a}` (C). Model parameters depend both on\n the module construction and its mounting. 
Parameter sets are provided in\n [1]_ for representative modules and mounting, and are coded for convenience\n in :data:`~pvlib.temperature.TEMPERATURE_MODEL_PARAMETERS`.\n\n +---------------+----------------+-------+---------+---------------------+\n | Module | Mounting | a | b | :math:`\\Delta T [C]`|\n +===============+================+=======+=========+=====================+\n | glass/glass | open rack | -3.47 | -0.0594 | 3 |\n +---------------+----------------+-------+---------+---------------------+\n | glass/glass | close roof | -2.98 | -0.0471 | 1 |\n +---------------+----------------+-------+---------+---------------------+\n | glass/polymer | open rack | -3.56 | -0.075 | 3 |\n +---------------+----------------+-------+---------+---------------------+\n | glass/polymer | insulated back | -2.81 | -0.0455 | 0 |\n +---------------+----------------+-------+---------+---------------------+\n\n References\n ----------\n .. [1] King, D. et al, 2004, \"Sandia Photovoltaic Array Performance\n Model\", SAND Report 3535, Sandia National Laboratories, Albuquerque,\n NM.\n\n See also\n --------\n sapm_cell_from_module\n sapm_module\n\n Examples\n --------\n >>> from pvlib.temperature import sapm_cell, TEMPERATURE_MODEL_PARAMETERS\n >>> params = TEMPERATURE_MODEL_PARAMETERS['sapm']['open_rack_glass_glass']\n >>> sapm_cell(1000, 10, 0, **params)\n 44.11703066106086\n '''\n module_temperature = sapm_module(poa_global, temp_air, wind_speed,\n a, b)\n return sapm_cell_from_module(module_temperature, poa_global, deltaT,\n irrad_ref)\n\n\ndef sapm_module(poa_global, temp_air, wind_speed, a, b):\n r'''\n Calculate module back surface temperature per the Sandia Array\n Performance Model.\n\n See [1]_ for details on the Sandia Array Performance Model.\n\n Parameters\n ----------\n poa_global : numeric\n Total incident irradiance [W/m^2].\n\n temp_air : numeric\n Ambient dry bulb temperature [C].\n\n wind_speed : numeric\n Wind speed at a height of 10 meters [m/s].\n\n a : float\n Parameter :math:`a` in :eq:`sapm1mod`.\n\n b : float\n Parameter :math:`b` in :eq:`sapm1mod`.\n\n Returns\n -------\n numeric, values in degrees C.\n\n Notes\n -----\n The model for module temperature :math:`T_{m}` is given by Eq. 11 in [1]_.\n\n .. math::\n :label: sapm1mod\n\n T_{m} = E \\times \\exp (a + b \\times WS) + T_{a}\n\n Inputs to the model are plane-of-array irradiance :math:`E` (W/m2) and\n ambient air temperature :math:`T_{a}` (C). Model outputs are surface\n temperature at the back of the module :math:`T_{m}` and cell temperature\n :math:`T_{C}`. Model parameters depend both on the module construction and\n its mounting. 
Parameter sets are provided in [1]_ for representative\n modules and mounting, and are coded for convenience in\n :data:`~pvlib.temperature.TEMPERATURE_MODEL_PARAMETERS`.\n\n +---------------+----------------+-------+---------+---------------------+\n | Module | Mounting | a | b | :math:`\\Delta T [C]`|\n +===============+================+=======+=========+=====================+\n | glass/glass | open rack | -3.47 | -0.0594 | 3 |\n +---------------+----------------+-------+---------+---------------------+\n | glass/glass | close roof | -2.98 | -0.0471 | 1 |\n +---------------+----------------+-------+---------+---------------------+\n | glass/polymer | open rack | -3.56 | -0.075 | 3 |\n +---------------+----------------+-------+---------+---------------------+\n | glass/polymer | insulated back | -2.81 | -0.0455 | 0 |\n +---------------+----------------+-------+---------+---------------------+\n\n References\n ----------\n .. [1] King, D. et al, 2004, \"Sandia Photovoltaic Array Performance\n Model\", SAND Report 3535, Sandia National Laboratories, Albuquerque,\n NM.\n\n See also\n --------\n sapm_cell\n sapm_cell_from_module\n '''\n return poa_global * np.exp(a + b * wind_speed) + temp_air\n\n\ndef sapm_cell_from_module(module_temperature, poa_global, deltaT,\n irrad_ref=1000.):\n r'''\n Calculate cell temperature from module temperature using the Sandia Array\n Performance Model.\n\n See [1]_ for details on the Sandia Array Performance Model.\n\n Parameters\n ----------\n module_temperature : numeric\n Temperature of back of module surface [C].\n\n poa_global : numeric\n Total incident irradiance [W/m^2].\n\n deltaT : float\n Parameter :math:`\\Delta T` in :eq:`sapm2_cell_from_mod` [C].\n\n irrad_ref : float, default 1000\n Reference irradiance, parameter :math:`E_{0}` in\n :eq:`sapm2` [W/m^2].\n\n Returns\n -------\n numeric, values in degrees C.\n\n Notes\n -----\n The model for cell temperature :math:`T_{C}` is given by Eq. 12 in [1]_.\n\n .. math::\n :label: sapm2_cell_from_mod\n\n T_{C} = T_{m} + \\frac{E}{E_{0}} \\Delta T\n\n The module back surface temperature :math:`T_{m}` is implemented in\n :py:func:`~pvlib.temperature.sapm_module`.\n\n Model parameters depend both on the module construction and its mounting.\n Parameter sets are provided in [1]_ for representative modules and\n mounting, and are coded for convenience in\n :data:`~pvlib.temperature.TEMPERATURE_MODEL_PARAMETERS`.\n\n +---------------+----------------+-------+---------+---------------------+\n | Module | Mounting | a | b | :math:`\\Delta T [C]`|\n +===============+================+=======+=========+=====================+\n | glass/glass | open rack | -3.47 | -0.0594 | 3 |\n +---------------+----------------+-------+---------+---------------------+\n | glass/glass | close roof | -2.98 | -0.0471 | 1 |\n +---------------+----------------+-------+---------+---------------------+\n | glass/polymer | open rack | -3.56 | -0.075 | 3 |\n +---------------+----------------+-------+---------+---------------------+\n | glass/polymer | insulated back | -2.81 | -0.0455 | 0 |\n +---------------+----------------+-------+---------+---------------------+\n\n References\n ----------\n .. [1] King, D. 
et al, 2004, \"Sandia Photovoltaic Array Performance\n Model\", SAND Report 3535, Sandia National Laboratories, Albuquerque,\n NM.\n\n See also\n --------\n sapm_cell\n sapm_module\n '''\n return module_temperature + (poa_global / irrad_ref) * deltaT\n\n\ndef pvsyst_cell(poa_global, temp_air, wind_speed=1.0, u_c=29.0, u_v=0.0,\n eta_m=0.1, alpha_absorption=0.9):\n r\"\"\"\n Calculate cell temperature using an empirical heat loss factor model\n as implemented in PVsyst.\n\n Parameters\n ----------\n poa_global : numeric\n Total incident irradiance [W/m^2].\n\n temp_air : numeric\n Ambient dry bulb temperature [C].\n\n wind_speed : numeric, default 1.0\n Wind speed in m/s measured at the same height for which the wind loss\n factor was determined. The default value 1.0 m/2 is the wind\n speed at module height used to determine NOCT. [m/s]\n\n u_c : float, default 29.0\n Combined heat loss factor coefficient. The default value is\n representative of freestanding modules with the rear surfaces exposed\n to open air (e.g., rack mounted). Parameter :math:`U_{c}` in\n :eq:`pvsyst`.\n :math:`\\left[\\frac{\\text{W}/{\\text{m}^2}}{\\text{C}}\\right]`\n\n u_v : float, default 0.0\n Combined heat loss factor influenced by wind. Parameter :math:`U_{v}`\n in :eq:`pvsyst`.\n :math:`\\left[ \\frac{\\text{W}/\\text{m}^2}{\\text{C}\\ \\left( \\text{m/s} \\right)} \\right]`\n\n eta_m : numeric, default 0.1\n Module external efficiency as a fraction, i.e., DC power / poa_global.\n Parameter :math:`\\eta_{m}` in :eq:`pvsyst`.\n\n alpha_absorption : numeric, default 0.9\n Absorption coefficient. Parameter :math:`\\alpha` in :eq:`pvsyst`.\n\n Returns\n -------\n numeric, values in degrees Celsius\n\n Notes\n -----\n The Pvsyst model for cell temperature :math:`T_{C}` is given by\n\n .. math::\n :label: pvsyst\n\n T_{C} = T_{a} + \\frac{\\alpha E (1 - \\eta_{m})}{U_{c} + U_{v} \\times WS}\n\n Inputs to the model are plane-of-array irradiance :math:`E` (W/m2), ambient\n air temperature :math:`T_{a}` (C) and wind speed :math:`WS` (m/s). Model\n output is cell temperature :math:`T_{C}`. Model parameters depend both on\n the module construction and its mounting. Parameters are provided in\n [1]_ for open (freestanding) and close (insulated) mounting configurations,\n , and are coded for convenience in\n :data:`~pvlib.temperature.TEMPERATURE_MODEL_PARAMETERS`. The heat loss\n factors provided represent the combined effect of convection, radiation and\n conduction, and their values are experimentally determined.\n\n +--------------+---------------+---------------+\n | Mounting | :math:`U_{c}` | :math:`U_{v}` |\n +==============+===============+===============+\n | freestanding | 29.0 | 0.0 |\n +--------------+---------------+---------------+\n | insulated | 15.0 | 0.0 |\n +--------------+---------------+---------------+\n\n References\n ----------\n .. [1] \"PVsyst 6 Help\", Files.pvsyst.com, 2018. [Online]. Available:\n http://files.pvsyst.com/help/index.html. [Accessed: 10- Dec- 2018].\n\n .. [2] Faiman, D. (2008). 
\"Assessing the outdoor operating temperature of\n photovoltaic modules.\" Progress in Photovoltaics 16(4): 307-315.\n\n Examples\n --------\n >>> from pvlib.temperature import pvsyst_cell, TEMPERATURE_MODEL_PARAMETERS\n >>> params = TEMPERATURE_MODEL_PARAMETERS['pvsyst']['freestanding']\n >>> pvsyst_cell(1000, 10, **params)\n 37.93103448275862\n \"\"\"\n\n total_loss_factor = u_c + u_v * wind_speed\n heat_input = poa_global * alpha_absorption * (1 - eta_m)\n temp_difference = heat_input / total_loss_factor\n return temp_air + temp_difference\n\n\ndef faiman(poa_global, temp_air, wind_speed=1.0, u0=25.0, u1=6.84):\n r'''\n Calculate cell or module temperature using the Faiman model.\n\n The Faiman model uses an empirical heat loss factor model [1]_ and is\n adopted in the IEC 61853 standards [2]_ and [3]_.\n\n Usage of this model in the IEC 61853 standard does not distinguish\n between cell and module temperature.\n\n Parameters\n ----------\n poa_global : numeric\n Total incident irradiance [W/m^2].\n\n temp_air : numeric\n Ambient dry bulb temperature [C].\n\n wind_speed : numeric, default 1.0\n Wind speed in m/s measured at the same height for which the wind loss\n factor was determined. The default value 1.0 m/s is the wind\n speed at module height used to determine NOCT. [m/s]\n\n u0 : numeric, default 25.0\n Combined heat loss factor coefficient. The default value is one\n determined by Faiman for 7 silicon modules.\n :math:`\\left[\\frac{\\text{W}/{\\text{m}^2}}{\\text{C}}\\right]`\n\n u1 : numeric, default 6.84\n Combined heat loss factor influenced by wind. The default value is one\n determined by Faiman for 7 silicon modules.\n :math:`\\left[ \\frac{\\text{W}/\\text{m}^2}{\\text{C}\\ \\left( \\text{m/s} \\right)} \\right]`\n\n Returns\n -------\n numeric, values in degrees Celsius\n\n Notes\n -----\n All arguments may be scalars or vectors. If multiple arguments\n are vectors they must be the same length.\n\n References\n ----------\n .. [1] Faiman, D. (2008). \"Assessing the outdoor operating temperature of\n photovoltaic modules.\" Progress in Photovoltaics 16(4): 307-315.\n\n .. [2] \"IEC 61853-2 Photovoltaic (PV) module performance testing and energy\n rating - Part 2: Spectral responsivity, incidence angle and module\n operating temperature measurements\". IEC, Geneva, 2018.\n\n .. [3] \"IEC 61853-3 Photovoltaic (PV) module performance testing and energy\n rating - Part 3: Energy rating of PV modules\". IEC, Geneva, 2018.\n\n '''\n # Contributed by Anton Driesse (@adriesse), PV Performance Labs. Dec., 2019\n\n # The following lines may seem odd since u0 & u1 are probably scalar,\n # but it serves an indirect and easy way of allowing lists and\n # tuples for the other function arguments.\n u0 = np.asanyarray(u0)\n u1 = np.asanyarray(u1)\n\n total_loss_factor = u0 + u1 * wind_speed\n heat_input = poa_global\n temp_difference = heat_input / total_loss_factor\n return temp_air + temp_difference\n\n\ndef ross(poa_global, temp_air, noct):\n r'''\n Calculate cell temperature using the Ross model.\n\n The Ross model [1]_ assumes the difference between cell temperature\n and ambient temperature is proportional to the plane of array irradiance,\n and assumes wind speed of 1 m/s. The model implicitly assumes steady or\n slowly changing irradiance conditions.\n\n Parameters\n ----------\n poa_global : numeric\n Total incident irradiance. [W/m^2]\n\n temp_air : numeric\n Ambient dry bulb temperature. 
[C]\n\n noct : numeric\n Nominal operating cell temperature [C], determined at conditions of\n 800 W/m^2 irradiance, 20 C ambient air temperature and 1 m/s wind.\n\n Returns\n -------\n cell_temperature : numeric\n Cell temperature. [C]\n\n Notes\n -----\n The Ross model for cell temperature :math:`T_{C}` is given in [1]_ as\n\n .. math::\n\n T_{C} = T_{a} + \\frac{NOCT - 20}{80} S\n\n where :math:`S` is the plane of array irradiance in :math:`mW/{cm}^2`.\n This function expects irradiance in :math:`W/m^2`.\n\n References\n ----------\n .. [1] Ross, R. G. Jr., (1981). \"Design Techniques for Flat-Plate\n Photovoltaic Arrays\". 15th IEEE Photovoltaic Specialist Conference,\n Orlando, FL.\n '''\n # factor of 0.1 converts irradiance from W/m2 to mW/cm2\n return temp_air + (noct - 20.) / 80. * poa_global * 0.1\n\n\ndef _fuentes_hconv(tave, windmod, tinoct, temp_delta, xlen, tilt,\n check_reynold):\n # Calculate the convective coefficient as in Fuentes 1987 -- a mixture of\n # free, laminar, and turbulent convection.\n densair = 0.003484 * 101325.0 / tave # density\n visair = 0.24237e-6 * tave**0.76 / densair # kinematic viscosity\n condair = 2.1695e-4 * tave**0.84 # thermal conductivity\n reynold = windmod * xlen / visair\n # the boundary between laminar and turbulent is modeled as an abrupt\n # change at Re = 1.2e5:\n if check_reynold and reynold > 1.2e5:\n # turbulent convection\n hforce = 0.0282 / reynold**0.2 * densair * windmod * 1007 / 0.71**0.4\n else:\n # laminar convection\n hforce = 0.8600 / reynold**0.5 * densair * windmod * 1007 / 0.71**0.67\n # free convection via Grashof number\n # NB: Fuentes hardwires sind(tilt) as 0.5 for tilt=30\n grashof = 9.8 / tave * temp_delta * xlen**3 / visair**2 * sind(tilt)\n # product of Nusselt number and (k/l)\n hfree = 0.21 * (grashof * 0.71)**0.32 * condair / xlen\n # combine free and forced components\n hconv = (hfree**3 + hforce**3)**(1/3)\n return hconv\n\n\ndef _hydraulic_diameter(width, height):\n # calculate the hydraulic diameter of a rectangle\n return 2 * (width * height) / (width + height)\n\n\ndef fuentes(poa_global, temp_air, wind_speed, noct_installed, module_height=5,\n wind_height=9.144, emissivity=0.84, absorption=0.83,\n surface_tilt=30, module_width=0.31579, module_length=1.2):\n \"\"\"\n Calculate cell or module temperature using the Fuentes model.\n\n The Fuentes model is a first-principles heat transfer energy balance\n model [1]_ that is used in PVWatts for cell temperature modeling [2]_.\n\n Parameters\n ----------\n poa_global : pandas Series\n Total incident irradiance [W/m^2]\n\n temp_air : pandas Series\n Ambient dry bulb temperature [C]\n\n wind_speed : pandas Series\n Wind speed [m/s]\n\n noct_installed : float\n The \"installed\" nominal operating cell temperature as defined in [1]_.\n PVWatts assumes this value to be 45 C for rack-mounted arrays and\n 49 C for roof mount systems with restricted air flow around the\n module. [C]\n\n module_height : float, default 5.0\n The height above ground of the center of the module. The PVWatts\n default is 5.0 [m]\n\n wind_height : float, default 9.144\n The height above ground at which ``wind_speed`` is measured. The\n PVWatts defauls is 9.144 [m]\n\n emissivity : float, default 0.84\n The effectiveness of the module at radiating thermal energy. [unitless]\n\n absorption : float, default 0.83\n The fraction of incident irradiance that is converted to thermal\n energy in the module. [unitless]\n\n surface_tilt : float, default 30\n Module tilt from horizontal. 
If not provided, the default value\n of 30 degrees from [1]_ and [2]_ is used. [degrees]\n\n module_width : float, default 0.31579\n Module width. The default value of 0.31579 meters in combination with\n the default `module_length` gives a hydraulic diameter of 0.5 as\n assumed in [1]_ and [2]_. [m]\n\n module_length : float, default 1.2\n Module length. The default value of 1.2 meters in combination with\n the default `module_width` gives a hydraulic diameter of 0.5 as\n assumed in [1]_ and [2]_. [m]\n\n Returns\n -------\n temperature_cell : pandas Series\n The modeled cell temperature [C]\n\n Notes\n -----\n This function returns slightly different values from PVWatts at night\n and just after dawn. This is because the SAM SSC assumes that module\n temperature equals ambient temperature when irradiance is zero so it can\n skip the heat balance calculation at night.\n\n References\n ----------\n .. [1] Fuentes, M. K., 1987, \"A Simplifed Thermal Model for Flat-Plate\n Photovoltaic Arrays\", SAND85-0330, Sandia National Laboratories,\n Albuquerque NM.\n http://prod.sandia.gov/techlib/access-control.cgi/1985/850330.pdf\n .. [2] Dobos, A. P., 2014, \"PVWatts Version 5 Manual\", NREL/TP-6A20-62641,\n National Renewable Energy Laboratory, Golden CO.\n doi:10.2172/1158421.\n \"\"\"\n # ported from the FORTRAN77 code provided in Appendix A of Fuentes 1987;\n # nearly all variable names are kept the same for ease of comparison.\n\n boltz = 5.669e-8\n emiss = emissivity\n absorp = absorption\n xlen = _hydraulic_diameter(module_width, module_length)\n # cap0 has units of [J / (m^2 K)], equal to mass per unit area times\n # specific heat of the module.\n cap0 = 11000\n tinoct = noct_installed + 273.15\n\n # convective coefficient of top surface of module at NOCT\n windmod = 1.0\n tave = (tinoct + 293.15) / 2\n hconv = _fuentes_hconv(tave, windmod, tinoct, tinoct - 293.15, xlen,\n surface_tilt, False)\n\n # determine the ground temperature ratio and the ratio of the total\n # convection to the top side convection\n hground = emiss * boltz * (tinoct**2 + 293.15**2) * (tinoct + 293.15)\n backrat = (\n absorp * 800.0\n - emiss * boltz * (tinoct**4 - 282.21**4)\n - hconv * (tinoct - 293.15)\n ) / ((hground + hconv) * (tinoct - 293.15))\n tground = (tinoct**4 - backrat * (tinoct**4 - 293.15**4))**0.25\n tground = np.clip(tground, 293.15, tinoct)\n\n tgrat = (tground - 293.15) / (tinoct - 293.15)\n convrat = (absorp * 800 - emiss * boltz * (\n 2 * tinoct**4 - 282.21**4 - tground**4)) / (hconv * (tinoct - 293.15))\n\n # adjust the capacitance (thermal mass) of the module based on the INOCT.\n # It is a function of INOCT because high INOCT implies thermal coupling\n # with the racking (e.g. roofmount), so the thermal mass is increased.\n # `cap` has units J/(m^2 C) -- see Table 3, Equations 26 & 27\n cap = cap0\n if tinoct > 321.15:\n cap = cap * (1 + (tinoct - 321.15) / 12)\n\n # iterate through timeseries inputs\n sun0 = 0\n tmod0 = 293.15\n\n # n.b. 
the way Fuentes calculates the first timedelta makes it seem like\n # the value doesn't matter -- rather than recreate it here, just assume\n # it's the same as the second timedelta:\n timedelta_seconds = poa_global.index.to_series().diff().dt.total_seconds()\n timedelta_hours = timedelta_seconds / 3600\n timedelta_hours.iloc[0] = timedelta_hours.iloc[1]\n\n tamb_array = temp_air + 273.15\n sun_array = poa_global * absorp\n\n # Two of the calculations are easily vectorized, so precalculate them:\n # sky temperature -- Equation 24\n tsky_array = 0.68 * (0.0552 * tamb_array**1.5) + 0.32 * tamb_array\n # wind speed at module height -- Equation 22\n # not sure why the 1e-4 factor is included -- maybe the equations don't\n # behave well if wind == 0?\n windmod_array = wind_speed * (module_height/wind_height)**0.2 + 1e-4\n\n tmod0 = 293.15\n tmod_array = np.zeros_like(poa_global)\n\n iterator = zip(tamb_array, sun_array, windmod_array, tsky_array,\n timedelta_hours)\n for i, (tamb, sun, windmod, tsky, dtime) in enumerate(iterator):\n # solve the heat transfer equation, iterating because the heat loss\n # terms depend on tmod. NB Fuentes doesn't show that 10 iterations is\n # sufficient for convergence.\n tmod = tmod0\n for j in range(10):\n # overall convective coefficient\n tave = (tmod + tamb) / 2\n hconv = convrat * _fuentes_hconv(tave, windmod, tinoct,\n abs(tmod-tamb), xlen,\n surface_tilt, True)\n # sky radiation coefficient (Equation 3)\n hsky = emiss * boltz * (tmod**2 + tsky**2) * (tmod + tsky)\n # ground radiation coeffieicient (Equation 4)\n tground = tamb + tgrat * (tmod - tamb)\n hground = emiss * boltz * (tmod**2 + tground**2) * (tmod + tground)\n # thermal lag -- Equation 8\n eigen = - (hconv + hsky + hground) / cap * dtime * 3600\n # not sure why this check is done, maybe as a speed optimization?\n if eigen > -10:\n ex = np.exp(eigen)\n else:\n ex = 0\n # Equation 7 -- note that `sun` and `sun0` already account for\n # absorption (alpha)\n tmod = tmod0 * ex + (\n (1 - ex) * (\n hconv * tamb\n + hsky * tsky\n + hground * tground\n + sun0\n + (sun - sun0) / eigen\n ) + sun - sun0\n ) / (hconv + hsky + hground)\n tmod_array[i] = tmod\n tmod0 = tmod\n sun0 = sun\n\n return pd.Series(tmod_array - 273.15, index=poa_global.index, name='tmod')\n"}
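As a quick illustration of the fuentes model documented in the file contents above, here is a minimal sketch with made-up hourly inputs; the timestamps and numeric values are illustrative assumptions, not data from this repository:

import pandas as pd
from pvlib import temperature

# illustrative hourly inputs; fuentes expects pandas Series with a DatetimeIndex
times = pd.date_range("2019-06-01 06:00", periods=4, freq="1H")
poa_global = pd.Series([100.0, 400.0, 700.0, 900.0], index=times)
temp_air = pd.Series([15.0, 18.0, 22.0, 25.0], index=times)
wind_speed = pd.Series([1.0, 2.0, 2.0, 3.0], index=times)

# 45 C is the PVWatts assumption for rack-mounted arrays
tcell = temperature.fuentes(poa_global, temp_air, wind_speed, noct_installed=45)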
|
diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index 8805d199a4..7ed5ee941d 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -238,6 +238,7 @@ PV temperature models
temperature.faiman
temperature.fuentes
temperature.ross
+ temperature.noct_sam
pvsystem.PVSystem.sapm_celltemp
pvsystem.PVSystem.pvsyst_celltemp
pvsystem.PVSystem.faiman_celltemp
diff --git a/docs/sphinx/source/whatsnew/v0.9.0.rst b/docs/sphinx/source/whatsnew/v0.9.0.rst
index 81e7a0c60b..9abc4da039 100644
--- a/docs/sphinx/source/whatsnew/v0.9.0.rst
+++ b/docs/sphinx/source/whatsnew/v0.9.0.rst
@@ -96,6 +96,8 @@ Enhancements
* :py:meth:`~pvlib.pvsystem.PVSystem.get_ac` is added to calculate AC power
from DC power. Use parameter ``model`` to specify which inverter model to use.
(:pull:`1147`, :issue:`998`, :pull:`1150`)
+* Added :py:func:`~pvlib.temperature.noct_sam`, a cell temperature model
+  implemented in SAM. (:pull:`1177`)
Bug fixes
~~~~~~~~~
|
{"pvlib/temperature.py": [{"type": "function", "name": "_adj_for_mounting_standoff", "lines": [711, 718], "signature": "def _adj_for_mounting_standoff(x):", "doc": ""}, {"type": "function", "name": "noct_sam", "lines": [721, 814], "signature": "def noct_sam(poa_global, temp_air, wind_speed, noct, eta_m_ref, effective_irradiance=None, transmittance_absorptance=0.9, array_height=1, mount_standoff=4):", "doc": "Cell temperature model from the System Advisor Model (SAM).\n\nThe model is described in [1]_, Section 10.6.\n\nParameters\n----------\npoa_global : numeric\n Total incident irradiance. [W/m^2]\n\ntemp_air : numeric\n Ambient dry bulb temperature. [C]\n\nwind_speed : numeric\n Wind speed in m/s measured at the same height for which the wind loss\n factor was determined. The default value 1.0 m/s is the wind\n speed at module height used to determine NOCT. [m/s]\n\nnoct : float\n Nominal operating cell temperature [C], determined at conditions of\n 800 W/m^2 irradiance, 20 C ambient air temperature and 1 m/s wind.\n\neta_m_ref : float\n Module external efficiency [unitless] at reference conditions of\n 1000 W/m^2 and 20C. Calculate as\n :math:`\\eta_{m} = \\frac{V_{mp} I_{mp}}{A \\times 1000 W/m^2}`\n where A is module area [m^2].\n\neffective_irradiance : numeric, default None.\n The irradiance that is converted to photocurrent. If None,\n assumed equal to poa_global. [W/m^2]\n\ntransmittance_absorptance : numeric, default 0.9\n Coefficient for combined transmittance and absorptance effects.\n [unitless]\n\narray_height : int, default 1\n Height of array above ground in stories (one story is about 3m). Must\n be either 1 or 2. For systems elevated less than one story, use 1.\n If system is elevated more than two stories, use 2.\n\nmount_standoff : numeric, default 4\n Distance between array mounting and mounting surface. Use default\n if system is ground-mounted. [inches]\n\nReturns\n-------\ncell_temperature : numeric\n Cell temperature. [C]\n\nRaises\n------\nValueError\n If array_height is an invalid value (must be 1 or 2).\n\nReferences\n----------\n.. [1] Gilman, P., Dobos, A., DiOrio, N., Freeman, J., Janzou, S.,\n Ryberg, D., 2018, \"SAM Photovoltaic Model Technical Reference\n Update\", National Renewable Energy Laboratory Report\n NREL/TP-6A20-67399."}]}
|
0.8
|
["pvlib/tests/test_temperature.py::test_noct_sam", "pvlib/tests/test_temperature.py::test_noct_sam_against_sam", "pvlib/tests/test_temperature.py::test_noct_sam_options", "pvlib/tests/test_temperature.py::test_noct_sam_errors"]
|
["pvlib/tests/test_temperature.py::test_sapm_cell", "pvlib/tests/test_temperature.py::test_sapm_module", "pvlib/tests/test_temperature.py::test_sapm_cell_from_module", "pvlib/tests/test_temperature.py::test_sapm_ndarray", "pvlib/tests/test_temperature.py::test_sapm_series", "pvlib/tests/test_temperature.py::test_pvsyst_cell_default", "pvlib/tests/test_temperature.py::test_pvsyst_cell_kwargs", "pvlib/tests/test_temperature.py::test_pvsyst_cell_ndarray", "pvlib/tests/test_temperature.py::test_pvsyst_cell_series", "pvlib/tests/test_temperature.py::test_faiman_default", "pvlib/tests/test_temperature.py::test_faiman_kwargs", "pvlib/tests/test_temperature.py::test_faiman_list", "pvlib/tests/test_temperature.py::test_faiman_ndarray", "pvlib/tests/test_temperature.py::test_ross", "pvlib/tests/test_temperature.py::test_faiman_series", "pvlib/tests/test_temperature.py::test__temperature_model_params", "pvlib/tests/test_temperature.py::test_fuentes[pvwatts_8760_rackmount.csv-45]", "pvlib/tests/test_temperature.py::test_fuentes[pvwatts_8760_roofmount.csv-49]", "pvlib/tests/test_temperature.py::test_fuentes_timezone[None]", "pvlib/tests/test_temperature.py::test_fuentes_timezone[Etc/GMT+5]"]
|
311781d2380997044da0e484dc90aa146a74ca44
|
{"first_commit_time": 1614287349.0, "pr_title": "NOCT cell temperature function", "pr_body": " - ~~[ ] Closes #xxxx~~\r\n - [x] I am familiar with the [contributing guidelines](https://pvlib-python.readthedocs.io/en/latest/contributing.html)\r\n - [x] Tests added\r\n - [x] Updates entries to [`docs/sphinx/source/api.rst`](https://github.com/pvlib/pvlib-python/blob/master/docs/sphinx/source/api.rst) for API changes.\r\n - [x] Adds description and name entries in the appropriate \"what's new\" file in [`docs/sphinx/source/whatsnew`](https://github.com/pvlib/pvlib-python/tree/master/docs/sphinx/source/whatsnew) for all changes. Includes link to the GitHub Issue with `` :issue:`num` `` or this Pull Request with `` :pull:`num` ``. Includes contributor name and/or GitHub username (link with `` :ghuser:`user` ``).\r\n - [x] New code is fully documented. Includes [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) compliant docstrings, examples, and comments where necessary.\r\n - [x] Pull request is nearly complete and ready for detailed review.\r\n - [x] Maintainer: Appropriate GitHub Labels and Milestone are assigned to the Pull Request and linked Issue.\r\n\r\nImplement cell temperature function that is the default in SAM when the CEC module model is chosen.\r\n", "pr_timeline": [{"time": 1614631627.0, "comment": "Ready for review. check failure is codecov, but all of `temperature.py` is covered."}, {"time": 1614640452.0, "comment": "> Mostly looks good. Are you planning a follow up issue/pr for integration into `PVSystem` and `ModelChain`?\r\n\r\nYes, I wanted to get consensus on the function first."}, {"time": 1614704693.0, "comment": "> I'm only partially able to reconcile the standoff adjustment values in SAM Tech Ref Eq. 10.36 with their supposed source, Fuentes Table 4. \r\n\r\nI think we should regard SAM as the source for standoff adjustment values. \r\n\r\nAbout \"G\" vs. \"G0\" in the SAM document: the term \"G\" is redefined twice in Chapter 10 (Eq. 10.13 and 10.22 are POA broadband, Eq. 10.25 and 10.26 (and 7.8) are effective irradiance absorbed by the cells. I don't read C well, but it looked to me that `G_total` in the [SAM function](https://github.com/NREL/ssc/blob/055e2847ea1f4a81b4a0062beb972ec45e77d185/shared/lib_cec6par.cpp#L160) is POA irradiance, not effective irradiance. \r\n\r\nIt would be desirable if this temperature model could be used without requiring both `poa_global` and `effective_irradiance`, or a third quantity falling in between which isn't present in pvlib: absorbed irradiance, POA after reflection but before spectral adjustment, this quantity is 'S' in De Soto's papers.\r\n\r\nThe SAM implementation is more involved that what is described in Duffie and Beckman. One way forward would be to implement `noct_duffie_beckman` rather than `noct_sam`. In D&B:\r\n- 'G' is POA irradiance\r\n- the transmittance/absorptance term isn't adjusted with `effective_irradiance`\r\n- wind speed isn't adjusted for standoff"}, {"time": 1614812255.0, "comment": "I spent some time trying to get SAM to give me back cell temperatures, without success. That quantity isn't available in the time series output, and the relevant function (`noct_celltemp_t`) isn't directly accessible through PySAM. 
I was able to get a 'tcell' value using PySAM's `Pv6parmod` object and of course it's not what I expected to see from the inputs I provided.\r\n\r\nAt this point, my recommendation is to fall back to implementing the model in Duffie and Beckman, and adding the SAM model in the future (as a different function) when we're clear exactly what the SAM model does. @kanderso-nrel? @wholmgren?\r\n\r\nThe driver for this PR is the Solar Performance Insight project, where it would be nice to have a pre-packaged \"SAM\" model chain.\r\n\r\n"}, {"time": 1614812936.0, "comment": "No preference from me on the way best way forward. For getting cell temps from SAM: it is available in the UI, but maybe not under a name you expect (Subarray 1 Cell Temperature), see screenshots below. You can grab numerical values from the `Data tables` tab. I think I've pulled it directly from SSC before, but that was before the days of PySAM so I don't know how to do it there. \r\n\r\n<details>\r\n <summary>Click to expand!</summary>\r\n\r\n\r\n\r\n---\r\n\r\n\r\n\r\n\r\n</details>"}, {"time": 1614982073.0, "comment": "@kanderso-nrel thanks for those pointers to SAM output. I am able to reproduce the Subarray 1 Cell Temperature values. To reproduce, I equated the following terms used in the [SAM code](https://github.com/NREL/ssc/blob/055e2847ea1f4a81b4a0062beb972ec45e77d185/shared/lib_cec6par.cpp#L214) with pvlib variables:\r\n- `Geff_total` is equivalent to pvlib's `effective_irradiance`,\r\n- `G_total` is equivalent to pvlib's `poa_global` (plane of array before reflections or spectrum adjustment).\r\n\r\nI will put comments in the pvlib code to clarify why we assign these variables and write one test that reproduces the SAM value."}, {"time": 1615147048.0, "comment": "Failed check is codecov. This is ready to go, I think."}], "issues": {}}
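The variable mapping worked out in the discussion above (SAM's G_total corresponds to pvlib's poa_global, and SAM's Geff_total to effective_irradiance) translates to a call like the following sketch; the numbers are illustrative only:

from pvlib import temperature

t_cell = temperature.noct_sam(
    poa_global=1000.0,           # SAM's G_total: POA before reflection/spectral losses
    temp_air=25.0,
    wind_speed=2.0,
    noct=45.0,
    eta_m_ref=0.2,
    effective_irradiance=950.0,  # SAM's Geff_total: irradiance converted to photocurrent
)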
|
pvlib/pvlib-python
| 1,179
|
https://github.com/pvlib/pvlib-python/pull/1179
|
pvlib__pvlib-python-1179
|
[]
|
bee329bd926c308a3e07e9099999fa37634a2f88
|
diff --git a/docs/examples/irradiance-decomposition/README.rst b/docs/examples/irradiance-decomposition/README.rst
new file mode 100644
index 0000000000..95f18c9330
--- /dev/null
+++ b/docs/examples/irradiance-decomposition/README.rst
@@ -0,0 +1,3 @@
+Irradiance Decomposition
+------------------------
+
diff --git a/docs/examples/irradiance-decomposition/plot_diffuse_fraction.py b/docs/examples/irradiance-decomposition/plot_diffuse_fraction.py
new file mode 100644
index 0000000000..dbd0406cf6
--- /dev/null
+++ b/docs/examples/irradiance-decomposition/plot_diffuse_fraction.py
@@ -0,0 +1,219 @@
+"""
+Diffuse Fraction Estimation
+===========================
+
+Comparison of diffuse fraction estimation methods used to derive direct and
+diffuse components from measured global horizontal irradiance.
+"""
+
+# %%
+# This example demonstrates how to use diffuse fraction estimation methods to
+# obtain direct and diffuse components from measured global horizontal
+# irradiance (GHI). Irradiance sensors such as pyranometers typically only
+# measure GHI. pvlib provides several functions that can be used to separate
+# GHI into the diffuse and direct components. The separate components are
+# needed to estimate the total irradiance on a tilted surface.
+
+import pathlib
+from matplotlib import pyplot as plt
+import pandas as pd
+from pvlib.iotools import read_tmy3
+from pvlib.solarposition import get_solarposition
+from pvlib import irradiance
+import pvlib
+
+# For this example we use the Greensboro, North Carolina, TMY3 file, which is
+# in the pvlib data directory. TMY3 files are made from the median months of
+# years of data measured from 1990 to 2010. Therefore we change the
+# timestamps to a common year, 1990.
+DATA_DIR = pathlib.Path(pvlib.__file__).parent / 'data'
+greensboro, metadata = read_tmy3(DATA_DIR / '723170TYA.CSV', coerce_year=1990)
+
+# Many of the diffuse fraction estimation methods require the "true" zenith, so
+# we calculate the solar positions for 1990 at Greensboro, NC.
+# NOTE: TMY3 file timestamps indicate the end of the hour, so shift indices
+# back 30 minutes to calculate solar position at the center of the interval
+solpos = get_solarposition(
+ greensboro.index.shift(freq="-30T"), latitude=metadata['latitude'],
+ longitude=metadata['longitude'], altitude=metadata['altitude'],
+ pressure=greensboro.Pressure*100, # convert from millibar to Pa
+ temperature=greensboro.DryBulb)
+solpos.index = greensboro.index # reset index to end of the hour
+
+# %%
+# pvlib Decomposition Functions
+# -----------------------------
+# Methods for separating GHI into diffuse and direct components include:
+# `DISC`_, `DIRINT`_, `Erbs`_, and `Boland`_.
+
+# %%
+# DISC
+# ----
+#
+# DISC :py:func:`~pvlib.irradiance.disc` is an empirical correlation developed
+# at SERI (now NREL) in 1987. The direct normal irradiance (DNI) is related to
+# clearness index (kt) by two polynomials split at kt = 0.6, then combined with
+# an exponential relation with airmass.
+
+out_disc = irradiance.disc(
+ greensboro.GHI, solpos.zenith, greensboro.index, greensboro.Pressure*100)
+# use "complete sum" AKA "closure" equations: DHI = GHI - DNI * cos(zenith)
+df_disc = irradiance.complete_irradiance(
+ solar_zenith=solpos.apparent_zenith, ghi=greensboro.GHI, dni=out_disc.dni,
+ dhi=None)
+out_disc = out_disc.rename(columns={'dni': 'dni_disc'})
+out_disc['dhi_disc'] = df_disc.dhi
+
+# %%
+# DIRINT
+# ------
+#
+# DIRINT :py:func:`~pvlib.irradiance.dirint` is a modification of DISC
+# developed by Richard Perez and Pierre Ineichen in 1992.
+
+dni_dirint = irradiance.dirint(
+ greensboro.GHI, solpos.zenith, greensboro.index, greensboro.Pressure*100,
+ temp_dew=greensboro.DewPoint)
+# use "complete sum" AKA "closure" equation: DHI = GHI - DNI * cos(zenith)
+df_dirint = irradiance.complete_irradiance(
+ solar_zenith=solpos.apparent_zenith, ghi=greensboro.GHI, dni=dni_dirint,
+ dhi=None)
+out_dirint = pd.DataFrame(
+ {'dni_dirint': dni_dirint, 'dhi_dirint': df_dirint.dhi},
+ index=greensboro.index)
+
+# %%
+# Erbs
+# ----
+#
+# The Erbs method, :py:func:`~pvlib.irradiance.erbs`, developed by Daryl
+# Gregory Erbs at the University of Wisconsin in 1982, is a piecewise
+# correlation that splits kt into 3 regions: linear for kt <= 0.22, a 4th
+# order polynomial between 0.22 < kt <= 0.8, and a horizontal line for
+# kt > 0.8.
+
+out_erbs = irradiance.erbs(greensboro.GHI, solpos.zenith, greensboro.index)
+out_erbs = out_erbs.rename(columns={'dni': 'dni_erbs', 'dhi': 'dhi_erbs'})
+
+# %%
+# Boland
+# ------
+#
+# The Boland method, :py:func:`~pvlib.irradiance.boland`, is a single logistic
+# exponential correlation that is continuously differentiable and bounded
+# between zero and one.
+
+out_boland = irradiance.boland(greensboro.GHI, solpos.zenith, greensboro.index)
+out_boland = out_boland.rename(
+ columns={'dni': 'dni_boland', 'dhi': 'dhi_boland'})
+
+# %%
+# Comparison Plots
+# ----------------
+# In the plots below we compare the four decomposition models to the TMY3 file
+# for Greensboro, North Carolina. We also compare the clearness index, kt, with
+# GHI normalized by a reference irradiance, E0 = 1000 [W/m^2], to highlight
+# spikes caused when cosine of zenith approaches zero, particularly at sunset.
+#
+# First we combine the dataframes for the decomposition models and the TMY3
+# file together to make plotting easier.
+
+dni_renames = {
+ 'DNI': 'TMY3', 'dni_disc': 'DISC', 'dni_dirint': 'DIRINT',
+ 'dni_erbs': 'Erbs', 'dni_boland': 'Boland'}
+dni = [
+ greensboro.DNI, out_disc.dni_disc, out_dirint.dni_dirint,
+ out_erbs.dni_erbs, out_boland.dni_boland]
+dni = pd.concat(dni, axis=1).rename(columns=dni_renames)
+dhi_renames = {
+ 'DHI': 'TMY3', 'dhi_disc': 'DISC', 'dhi_dirint': 'DIRINT',
+ 'dhi_erbs': 'Erbs', 'dhi_boland': 'Boland'}
+dhi = [
+ greensboro.DHI, out_disc.dhi_disc, out_dirint.dhi_dirint,
+ out_erbs.dhi_erbs, out_boland.dhi_boland]
+dhi = pd.concat(dhi, axis=1).rename(columns=dhi_renames)
+ghi_kt = pd.concat([greensboro.GHI/1000.0, out_erbs.kt], axis=1)
+
+# %%
+# Winter
+# ++++++
+# Finally, let's plot them for a few winter days and compare
+
+JAN04, JAN07 = '1990-01-04 00:00:00-05:00', '1990-01-07 23:59:59-05:00'
+f, ax = plt.subplots(3, 1, figsize=(8, 10), sharex=True)
+dni[JAN04:JAN07].plot(ax=ax[0])
+ax[0].grid(which="both")
+ax[0].set_ylabel('DNI $[W/m^2]$')
+ax[0].set_title('Comparison of Diffuse Fraction Estimation Methods')
+dhi[JAN04:JAN07].plot(ax=ax[1])
+ax[1].grid(which="both")
+ax[1].set_ylabel('DHI $[W/m^2]$')
+ghi_kt[JAN04:JAN07].plot(ax=ax[2])
+ax[2].grid(which='both')
+ax[2].set_ylabel(r'$\frac{GHI}{E0}, k_t$')
+f.tight_layout()
+
+# %%
+# Spring
+# ++++++
+# And a few spring days ...
+
+APR04, APR07 = '1990-04-04 00:00:00-05:00', '1990-04-07 23:59:59-05:00'
+f, ax = plt.subplots(3, 1, figsize=(8, 10), sharex=True)
+dni[APR04:APR07].plot(ax=ax[0])
+ax[0].grid(which="both")
+ax[0].set_ylabel('DNI $[W/m^2]$')
+ax[0].set_title('Comparison of Diffuse Fraction Estimation Methods')
+dhi[APR04:APR07].plot(ax=ax[1])
+ax[1].grid(which="both")
+ax[1].set_ylabel('DHI $[W/m^2]$')
+ghi_kt[APR04:APR07].plot(ax=ax[2])
+ax[2].grid(which='both')
+ax[2].set_ylabel(r'$\frac{GHI}{E0}, k_t$')
+f.tight_layout()
+
+# %%
+# Summer
+# ++++++
+# And a few summer days to finish off the seasons.
+
+JUL04, JUL07 = '1990-07-04 00:00:00-05:00', '1990-07-07 23:59:59-05:00'
+f, ax = plt.subplots(3, 1, figsize=(8, 10), sharex=True)
+dni[JUL04:JUL07].plot(ax=ax[0])
+ax[0].grid(which="both")
+ax[0].set_ylabel('DNI $[W/m^2]$')
+ax[0].set_title('Comparison of Diffuse Fraction Estimation Methods')
+dhi[JUL04:JUL07].plot(ax=ax[1])
+ax[1].grid(which="both")
+ax[1].set_ylabel('DHI $[W/m^2]$')
+ghi_kt[JUL04:JUL07].plot(ax=ax[2])
+ax[2].grid(which='both')
+ax[2].set_ylabel(r'$\frac{GHI}{E0}, k_t$')
+f.tight_layout()
+
+# %%
+# Conclusion
+# ----------
+# This example compares several decomposition models to a TMY3 file for
+# Greensboro, North Carolina. However, DNI and DHI in TMY3 files are themselves
+# the output of models (either METSTAT or SUNY), and so differences between
+# *e.g.* DISC output and the TMY3 file shouldn't be regarded as errors, and
+# it's not a reasonable expectation to assume that the four models should
+# reproduce the TMY3 values. We refer those interested to the `TMY3`_ and
+# `NSRDB`_ user manuals.
+#
+# The Erbs and Boland models are correlations based only on the clearness
+# index kt, which is the ratio of GHI to the horizontal component of the
+# extra-terrestrial irradiance. At low sun elevation (zenith near 90 degrees),
+# especially near sunset, kt can explode because the denominator
+# (extra-terrestrial irradiance) approaches zero. In pvlib this behavior is
+# moderated by ``min_cos_zenith`` and ``max_clearness_index`` which each have
+# reasonable defaults. Even so, near sunset there are still spikes in kt and
+# DNI from Erbs and Boland for Jan. 5th & 7th, April 4th, 5th, & 7th, and July
+# 6th & 7th.
+#
+# By contrast, the DISC and DIRINT methods estimate DNI first by means of
+# correlations, which include additional variables such as airmass. These
+# methods seem to reduce DNI spikes over 1000 [W/m^2].
+#
+# .. _TMY3: https://www.nrel.gov/docs/fy08osti/43156.pdf
+# .. _NSRDB: https://www.nrel.gov/docs/fy12osti/54824.pdf
diff --git a/docs/sphinx/source/reference/irradiance/decomposition.rst b/docs/sphinx/source/reference/irradiance/decomposition.rst
index 2b89a272d7..f0d1495889 100644
--- a/docs/sphinx/source/reference/irradiance/decomposition.rst
+++ b/docs/sphinx/source/reference/irradiance/decomposition.rst
@@ -12,5 +12,6 @@ DNI estimation models
irradiance.dirint
irradiance.dirindex
irradiance.erbs
+ irradiance.boland
irradiance.campbell_norman
irradiance.gti_dirint
diff --git a/docs/sphinx/source/whatsnew/v0.9.5.rst b/docs/sphinx/source/whatsnew/v0.9.5.rst
index fc2afa3321..c5d246310f 100644
--- a/docs/sphinx/source/whatsnew/v0.9.5.rst
+++ b/docs/sphinx/source/whatsnew/v0.9.5.rst
@@ -20,6 +20,10 @@ Enhancements
:py:func:`pvlib.snow.loss_townsend` (:issue:`1636`, :pull:`1653`)
* Added optional ``n_ar`` parameter to :py:func:`pvlib.iam.physical` to
support an anti-reflective coating. (:issue:`1501`, :pull:`1616`)
+* :py:func:`~pvlib.irradiance.boland` is another diffuse fraction (DF)
+  estimation method. It is similar to Erbs but uses a single logistic
+  exponential correlation between DF and clearness index, kt, that is
+  continuously differentiable and bounded between zero and one. (:pull:`1179`)
* Add ``model='gueymard2003'``, the airmass model used for REST and REST2,
to :py:func:`~pvlib.atmosphere.get_relative_airmass`. (:pull:`1655`)
diff --git a/pvlib/irradiance.py b/pvlib/irradiance.py
index f6d166d44c..79ce3c3904 100644
--- a/pvlib/irradiance.py
+++ b/pvlib/irradiance.py
@@ -2265,6 +2265,110 @@ def erbs(ghi, zenith, datetime_or_doy, min_cos_zenith=0.065, max_zenith=87):
return data
+def boland(ghi, solar_zenith, datetime_or_doy, a_coeff=8.645, b_coeff=0.613,
+ min_cos_zenith=0.065, max_zenith=87):
+ r"""
+ Estimate DNI and DHI from GHI using the Boland clearness index model.
+
+ The Boland model [1]_, [2]_ estimates the diffuse fraction, DF, from global
+ horizontal irradiance, GHI, through an empirical relationship between DF
+ and the clearness index, :math:`k_t`, the ratio of GHI to horizontal
+ extraterrestrial irradiance.
+
+ .. math::
+
+ \mathit{DF} = \frac{1}{1 + \exp\left(a \left(k_t - b\right)\right)}
+
+
+ Parameters
+ ----------
+ ghi: numeric
+ Global horizontal irradiance. [W/m^2]
+ solar_zenith: numeric
+ True (not refraction-corrected) zenith angles in decimal degrees.
+ datetime_or_doy : numeric, pandas.DatetimeIndex
+ Day of year or array of days of year e.g.
+ pd.DatetimeIndex.dayofyear, or pd.DatetimeIndex.
+ a_coeff : float, default 8.645
+ Logistic curve fit coefficient.
+ b_coeff : float, default 0.613
+ Logistic curve fit coefficient.
+ min_cos_zenith : numeric, default 0.065
+ Minimum value of cos(zenith) to allow when calculating global
+ clearness index :math:`k_t`. Equivalent to zenith = 86.273 degrees.
+ max_zenith : numeric, default 87
+ Maximum value of zenith to allow in DNI calculation. DNI will be
+ set to 0 for times with zenith values greater than `max_zenith`.
+
+ Returns
+ -------
+ data : OrderedDict or DataFrame
+ Contains the following keys/columns:
+
+ * ``dni``: the modeled direct normal irradiance in W/m^2.
+ * ``dhi``: the modeled diffuse horizontal irradiance in
+ W/m^2.
+ * ``kt``: Ratio of global to extraterrestrial irradiance
+ on a horizontal plane.
+
+ References
+ ----------
+ .. [1] J. Boland, B. Ridley (2008) Models of Diffuse Solar Fraction. In:
+ Badescu V. (eds) Modeling Solar Radiation at the Earth’s Surface.
+ Springer, Berlin, Heidelberg. :doi:`10.1007/978-3-540-77455-6_8`
+ .. [2] John Boland, Lynne Scott, and Mark Luther, Modelling the diffuse
+ fraction of global solar radiation on a horizontal surface,
+ Environmetrics 12(2), pp 103-116, 2001,
+ :doi:`10.1002/1099-095X(200103)12:2%3C103::AID-ENV447%3E3.0.CO;2-2`
+
+ See also
+ --------
+ dirint
+ disc
+ erbs
+
+ Notes
+ -----
+ Boland diffuse fraction differs from other decomposition algorithms by use
+ of a logistic function to fit the entire range of clearness index,
+ :math:`k_t`. Parameters ``a_coeff`` and ``b_coeff`` are reported in [2]_
+ for different time intervals:
+
+ * 15-minute: ``a = 8.645`` and ``b = 0.613``
+ * 1-hour: ``a = 7.997`` and ``b = 0.586``
+ """
+
+ dni_extra = get_extra_radiation(datetime_or_doy)
+
+ kt = clearness_index(
+ ghi, solar_zenith, dni_extra, min_cos_zenith=min_cos_zenith,
+ max_clearness_index=1)
+
+ # Boland equation
+ df = 1.0 / (1.0 + np.exp(a_coeff * (kt - b_coeff)))
+ # NOTE: [2] has different coefficients, for different time intervals
+ # 15-min: df = 1 / (1 + exp(8.645 * (kt - 0.613)))
+ # 1-hour: df = 1 / (1 + exp(7.997 * (kt - 0.586)))
+
+ dhi = df * ghi
+
+ dni = (ghi - dhi) / tools.cosd(solar_zenith)
+ bad_values = (solar_zenith > max_zenith) | (ghi < 0) | (dni < 0)
+ dni = np.where(bad_values, 0, dni)
+ # ensure that closure relationship remains valid
+ dhi = np.where(bad_values, ghi, dhi)
+
+ data = OrderedDict()
+ data['dni'] = dni
+ data['dhi'] = dhi
+ data['kt'] = kt
+
+ if isinstance(datetime_or_doy, pd.DatetimeIndex):
+ data = pd.DataFrame(data, index=datetime_or_doy)
+
+ return data
+
+
def campbell_norman(zenith, transmittance, pressure=101325.0,
dni_extra=1367.0):
'''
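A hedged usage sketch for the function introduced in the hunk above; it assumes a pvlib build that already contains ``irradiance.boland`` and reuses inputs similar to the unit test further down.

```python
import pandas as pd
from pvlib import irradiance

index = pd.DatetimeIndex(["20190101"] * 2)
ghi = pd.Series([50.0, 1000.0], index=index)     # W/m^2
zenith = pd.Series([85.0, 10.0], index=index)    # true solar zenith, degrees

out = irradiance.boland(ghi, zenith, index)      # DataFrame with dni, dhi, kt
print(out.round(1))
```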
diff --git a/readthedocs.yml b/readthedocs.yml
index fb2d1374bb..dde255335c 100644
--- a/readthedocs.yml
+++ b/readthedocs.yml
@@ -1,7 +1,26 @@
+# .readthedocs.yaml
+# Read the Docs configuration file
+# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
+
+# Required
+version: 2
+
+build:
+ os: ubuntu-20.04
+ tools:
+ python: "3.7"
+ jobs:
+ # fetch the full history so that setuptools_scm can determine the
+ # correct version string for long PRs with many commits
+ post_checkout:
+ - git fetch --unshallow
+
python:
- version: 3
- # only use the packages specified in setup.py
- use_system_site_packages: false
- pip_install: true
- extra_requirements:
- - doc
\ No newline at end of file
+ # only use the packages specified in setup.py
+ system_packages: false
+
+ install:
+ - method: pip
+ path: .
+ extra_requirements:
+ - doc
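Aside, and an assumption rather than something the configuration above requires: if ``setuptools_scm`` is installed locally, the version string that Read the Docs will resolve after the ``git fetch --unshallow`` step can be checked from a full (non-shallow) clone of the repository.

```python
# Run from the root of a non-shallow git checkout.
from setuptools_scm import get_version

print(get_version())  # e.g. a PEP 440 dev version derived from git history
```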
|
diff --git a/pvlib/tests/test_irradiance.py b/pvlib/tests/test_irradiance.py
index ae3a7ec88a..cffdd23e40 100644
--- a/pvlib/tests/test_irradiance.py
+++ b/pvlib/tests/test_irradiance.py
@@ -804,6 +804,22 @@ def test_erbs():
assert_frame_equal(np.round(out, 0), np.round(expected, 0))
+def test_boland():
+ index = pd.DatetimeIndex(['20190101']*3 + ['20190620'])
+ ghi = pd.Series([0, 50, 1000, 1000], index=index)
+ zenith = pd.Series([120, 85, 10, 10], index=index)
+ expected = pd.DataFrame(np.array(
+ [[0.0, 0.0, 0.0],
+ [81.9448546, 42.8580353, 0.405723511],
+ [723.764990, 287.230626, 0.718132729],
+ [805.020419, 207.209650, 0.768214312]]),
+ columns=['dni', 'dhi', 'kt'], index=index)
+
+ out = irradiance.boland(ghi, zenith, index)
+
+ assert np.allclose(out, expected)
+
+
def test_erbs_min_cos_zenith_max_zenith():
# map out behavior under difficult conditions with various
# limiting kwargs settings
| 2021-02-26T04:41:51
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"docs/examples/irradiance-decomposition/README.rst": null, "docs/examples/irradiance-decomposition/plot_diffuse_fraction.py": null, "docs/sphinx/source/reference/irradiance/decomposition.rst": ".. currentmodule:: pvlib\n\n.. _dniestmodels:\n\nDNI estimation models\n---------------------\n\n.. autosummary::\n :toctree: ../generated/\n\n irradiance.disc\n irradiance.dirint\n irradiance.dirindex\n irradiance.erbs\n irradiance.campbell_norman\n irradiance.gti_dirint\n", "docs/sphinx/source/whatsnew/v0.9.5.rst": ".. _whatsnew_0950:\n\n\nv0.9.5 (anticipated March 2023)\n-------------------------------\n\nStarting with this version, new releases are no longer distributed through\nthe ``pvlib`` `conda channel <https://anaconda.org/pvlib/pvlib>`_. We recommend\n``conda`` users install from the ``conda-forge`` channel instead (see\n:ref:`installation`).\n\n\nDeprecations\n~~~~~~~~~~~~\n\n\nEnhancements\n~~~~~~~~~~~~\n* Added the optional `string_factor` parameter to\n :py:func:`pvlib.snow.loss_townsend` (:issue:`1636`, :pull:`1653`)\n* Added optional ``n_ar`` parameter to :py:func:`pvlib.iam.physical` to\n support an anti-reflective coating. (:issue:`1501`, :pull:`1616`)\n* Add ``model='gueymard2003'``, the airmass model used for REST and REST2,\n to :py:func:`~pvlib.atmosphere.get_relative_airmass`. (:pull:`1655`)\n\n\n* Added an optional ``model`` parameter to\n :py:func:`pvlib.bifacial.infinite_sheds.get_irradiance` and\n :py:func:`pvlib.bifacial.infinite_sheds.get_irradiance_poa`\n to enable use of the hay-davies sky diffuse irradiance model\n instead of the default isotropic model. (:pull:`1668`)\n\nBug fixes\n~~~~~~~~~\n* Added a limit to :py:func:`pvlib.snow.loss_townsend` to guard against\n incorrect loss results for systems that are near the ground (:issue:`1636`,\n :pull:`1653`)\n* Fixed incorrect mapping of requested parameters names when using the ``get_psm3``\n function. Also fixed the random reordering of the dataframe columns.\n (:issue:`1629`, :issue:`1647`, :pull:`1648`)\n* When using ``utc_time_range`` with :py:func:`pvlib.iotools.read_ecmwf_macc`,\n the time index subset is now selected with ``nearest`` instead of ``before``\n and ``after`` for consistency with ``cftime>=1.6.0``. (:issue:`1609`, :pull:`1656`)\n* :py:func:`~pvlib.ivtools.sdm.pvsyst_temperature_coeff` no longer raises\n a scipy deprecation warning (and is slightly more accurate) (:issue:`1644`, :pull:`1674`)\n\n\nTesting\n~~~~~~~\n* Added Python 3.11 to test suite. (:pull:`1582`)\n* Updated PSM3 test data files to match the new version 3.2.2 data returned\n by the PSM3 API (:issue:`1591`, :pull:`1652`)\n\n\nDocumentation\n~~~~~~~~~~~~~\n* Remove LGTM.com integration. (:issue:`1550`, :pull:`1651`)\n\nBenchmarking\n~~~~~~~~~~~~~\n* Added benchmarks for :py:mod:`pvlib.bifacial.infinite_sheds` (:pull:`1627`)\n\nRequirements\n~~~~~~~~~~~~\n* Removed unnecessary ``nose`` test requirement (:pull:`1637`)\n\nContributors\n~~~~~~~~~~~~\n* Kevin Anderson (:ghuser:`kanderso-nrel`)\n* Will Holmgren (:ghuser:`wholmgren`)\n* Cliff Hansen (:ghuser:`cwhanse`)\n* Adam R. 
Jensen (:ghuser:`adamrjensen`)\n* Pratham Chauhan (:ghuser:`ooprathamm`)\n* Karel De Brabandere (:ghuser:`kdebrab`)\n* Mark Mikofski (:ghuser:`mikofski`)\n* Anton Driesse (:ghuser:`adriesse`)\n* Michael Deceglie (:ghuser:`mdeceglie`)\n* Saurabh Aneja (:ghuser:`spaneja`)\n* John Moseley (:ghuser:`johnMoseleyArray`)\n", "pvlib/irradiance.py": "\"\"\"\nThe ``irradiance`` module contains functions for modeling global\nhorizontal irradiance, direct normal irradiance, diffuse horizontal\nirradiance, and total irradiance under various conditions.\n\"\"\"\n\nimport datetime\nfrom collections import OrderedDict\nfrom functools import partial\n\nimport numpy as np\nimport pandas as pd\n\nfrom pvlib import atmosphere, solarposition, tools\nimport pvlib # used to avoid dni name collision in complete_irradiance\n\n\n# see References section of get_ground_diffuse function\nSURFACE_ALBEDOS = {'urban': 0.18,\n 'grass': 0.20,\n 'fresh grass': 0.26,\n 'soil': 0.17,\n 'sand': 0.40,\n 'snow': 0.65,\n 'fresh snow': 0.75,\n 'asphalt': 0.12,\n 'concrete': 0.30,\n 'aluminum': 0.85,\n 'copper': 0.74,\n 'fresh steel': 0.35,\n 'dirty steel': 0.08,\n 'sea': 0.06}\n\n\ndef get_extra_radiation(datetime_or_doy, solar_constant=1366.1,\n method='spencer', epoch_year=2014, **kwargs):\n \"\"\"\n Determine extraterrestrial radiation from day of year.\n\n Parameters\n ----------\n datetime_or_doy : numeric, array, date, datetime, Timestamp, DatetimeIndex\n Day of year, array of days of year, or datetime-like object\n\n solar_constant : float, default 1366.1\n The solar constant.\n\n method : string, default 'spencer'\n The method by which the ET radiation should be calculated.\n Options include ``'pyephem', 'spencer', 'asce', 'nrel'``.\n\n epoch_year : int, default 2014\n The year in which a day of year input will be calculated. Only\n applies to day of year input used with the pyephem or nrel\n methods.\n\n kwargs :\n Passed to solarposition.nrel_earthsun_distance\n\n Returns\n -------\n dni_extra : float, array, or Series\n The extraterrestrial radiation present in watts per square meter\n on a surface which is normal to the sun. Pandas Timestamp and\n DatetimeIndex inputs will yield a Pandas TimeSeries. All other\n inputs will yield a float or an array of floats.\n\n References\n ----------\n .. [1] M. Reno, C. Hansen, and J. Stein, \"Global Horizontal Irradiance\n Clear Sky Models: Implementation and Analysis\", Sandia National\n Laboratories, SAND2012-2389, 2012.\n\n .. [2] <http://solardat.uoregon.edu/SolarRadiationBasics.html>, Eqs.\n SR1 and SR2\n\n .. [3] Partridge, G. W. and Platt, C. M. R. 1976. Radiative Processes\n in Meteorology and Climatology.\n\n .. [4] Duffie, J. A. and Beckman, W. A. 1991. Solar Engineering of\n Thermal Processes, 2nd edn. J. Wiley and Sons, New York.\n\n .. [5] ASCE, 2005. The ASCE Standardized Reference Evapotranspiration\n Equation, Environmental and Water Resources Institute of the American\n Civil Engineers, Ed. R. G. 
Allen et al.\n \"\"\"\n\n to_doy, to_datetimeindex, to_output = \\\n _handle_extra_radiation_types(datetime_or_doy, epoch_year)\n\n # consider putting asce and spencer methods in their own functions\n method = method.lower()\n if method == 'asce':\n B = solarposition._calculate_simple_day_angle(to_doy(datetime_or_doy),\n offset=0)\n RoverR0sqrd = 1 + 0.033 * np.cos(B)\n elif method == 'spencer':\n B = solarposition._calculate_simple_day_angle(to_doy(datetime_or_doy))\n RoverR0sqrd = (1.00011 + 0.034221 * np.cos(B) + 0.00128 * np.sin(B) +\n 0.000719 * np.cos(2 * B) + 7.7e-05 * np.sin(2 * B))\n elif method == 'pyephem':\n times = to_datetimeindex(datetime_or_doy)\n RoverR0sqrd = solarposition.pyephem_earthsun_distance(times) ** (-2)\n elif method == 'nrel':\n times = to_datetimeindex(datetime_or_doy)\n RoverR0sqrd = \\\n solarposition.nrel_earthsun_distance(times, **kwargs) ** (-2)\n else:\n raise ValueError('Invalid method: %s', method)\n\n Ea = solar_constant * RoverR0sqrd\n\n Ea = to_output(Ea)\n\n return Ea\n\n\ndef _handle_extra_radiation_types(datetime_or_doy, epoch_year):\n # This block will set the functions that can be used to convert the\n # inputs to either day of year or pandas DatetimeIndex, and the\n # functions that will yield the appropriate output type. It's\n # complicated because there are many day-of-year-like input types,\n # and the different algorithms need different types. Maybe you have\n # a better way to do it.\n if isinstance(datetime_or_doy, pd.DatetimeIndex):\n to_doy = tools._pandas_to_doy # won't be evaluated unless necessary\n def to_datetimeindex(x): return x # noqa: E306\n to_output = partial(pd.Series, index=datetime_or_doy)\n elif isinstance(datetime_or_doy, pd.Timestamp):\n to_doy = tools._pandas_to_doy\n to_datetimeindex = \\\n tools._datetimelike_scalar_to_datetimeindex\n to_output = tools._scalar_out\n elif isinstance(datetime_or_doy,\n (datetime.date, datetime.datetime, np.datetime64)):\n to_doy = tools._datetimelike_scalar_to_doy\n to_datetimeindex = \\\n tools._datetimelike_scalar_to_datetimeindex\n to_output = tools._scalar_out\n elif np.isscalar(datetime_or_doy): # ints and floats of various types\n def to_doy(x): return x # noqa: E306\n to_datetimeindex = partial(tools._doy_to_datetimeindex,\n epoch_year=epoch_year)\n to_output = tools._scalar_out\n else: # assume that we have an array-like object of doy\n def to_doy(x): return x # noqa: E306\n to_datetimeindex = partial(tools._doy_to_datetimeindex,\n epoch_year=epoch_year)\n to_output = tools._array_out\n\n return to_doy, to_datetimeindex, to_output\n\n\ndef aoi_projection(surface_tilt, surface_azimuth, solar_zenith, solar_azimuth):\n \"\"\"\n Calculates the dot product of the sun position unit vector and the surface\n normal unit vector; in other words, the cosine of the angle of incidence.\n\n Usage note: When the sun is behind the surface the value returned is\n negative. 
For many uses negative values must be set to zero.\n\n Input all angles in degrees.\n\n Parameters\n ----------\n surface_tilt : numeric\n Panel tilt from horizontal.\n surface_azimuth : numeric\n Panel azimuth from north.\n solar_zenith : numeric\n Solar zenith angle.\n solar_azimuth : numeric\n Solar azimuth angle.\n\n Returns\n -------\n projection : numeric\n Dot product of panel normal and solar angle.\n \"\"\"\n\n projection = (\n tools.cosd(surface_tilt) * tools.cosd(solar_zenith) +\n tools.sind(surface_tilt) * tools.sind(solar_zenith) *\n tools.cosd(solar_azimuth - surface_azimuth))\n\n # GH 1185\n projection = np.clip(projection, -1, 1)\n\n try:\n projection.name = 'aoi_projection'\n except AttributeError:\n pass\n\n return projection\n\n\ndef aoi(surface_tilt, surface_azimuth, solar_zenith, solar_azimuth):\n \"\"\"\n Calculates the angle of incidence of the solar vector on a surface.\n This is the angle between the solar vector and the surface normal.\n\n Input all angles in degrees.\n\n Parameters\n ----------\n surface_tilt : numeric\n Panel tilt from horizontal.\n surface_azimuth : numeric\n Panel azimuth from north.\n solar_zenith : numeric\n Solar zenith angle.\n solar_azimuth : numeric\n Solar azimuth angle.\n\n Returns\n -------\n aoi : numeric\n Angle of incidence in degrees.\n \"\"\"\n\n projection = aoi_projection(surface_tilt, surface_azimuth,\n solar_zenith, solar_azimuth)\n aoi_value = np.rad2deg(np.arccos(projection))\n\n try:\n aoi_value.name = 'aoi'\n except AttributeError:\n pass\n\n return aoi_value\n\n\ndef poa_horizontal_ratio(surface_tilt, surface_azimuth,\n solar_zenith, solar_azimuth):\n \"\"\"\n Calculates the ratio of the beam components of the plane of array\n irradiance and the horizontal irradiance.\n\n Input all angles in degrees.\n\n Parameters\n ----------\n surface_tilt : numeric\n Panel tilt from horizontal.\n surface_azimuth : numeric\n Panel azimuth from north.\n solar_zenith : numeric\n Solar zenith angle.\n solar_azimuth : numeric\n Solar azimuth angle.\n\n Returns\n -------\n ratio : numeric\n Ratio of the plane of array irradiance to the horizontal plane\n irradiance\n \"\"\"\n\n cos_poa_zen = aoi_projection(surface_tilt, surface_azimuth,\n solar_zenith, solar_azimuth)\n\n cos_solar_zenith = tools.cosd(solar_zenith)\n\n # ratio of tilted and horizontal beam irradiance\n ratio = cos_poa_zen / cos_solar_zenith\n\n try:\n ratio.name = 'poa_ratio'\n except AttributeError:\n pass\n\n return ratio\n\n\ndef beam_component(surface_tilt, surface_azimuth, solar_zenith, solar_azimuth,\n dni):\n \"\"\"\n Calculates the beam component of the plane of array irradiance.\n\n Parameters\n ----------\n surface_tilt : numeric\n Panel tilt from horizontal.\n surface_azimuth : numeric\n Panel azimuth from north.\n solar_zenith : numeric\n Solar zenith angle.\n solar_azimuth : numeric\n Solar azimuth angle.\n dni : numeric\n Direct Normal Irradiance\n\n Returns\n -------\n beam : numeric\n Beam component\n \"\"\"\n beam = dni * aoi_projection(surface_tilt, surface_azimuth,\n solar_zenith, solar_azimuth)\n beam = np.maximum(beam, 0)\n\n return beam\n\n\ndef get_total_irradiance(surface_tilt, surface_azimuth,\n solar_zenith, solar_azimuth,\n dni, ghi, dhi, dni_extra=None, airmass=None,\n albedo=0.25, surface_type=None,\n model='isotropic',\n model_perez='allsitescomposite1990'):\n r\"\"\"\n Determine total in-plane irradiance and its beam, sky diffuse and ground\n reflected components, using the specified sky diffuse irradiance model.\n\n .. 
math::\n\n I_{tot} = I_{beam} + I_{sky diffuse} + I_{ground}\n\n Sky diffuse models include:\n * isotropic (default)\n * klucher\n * haydavies\n * reindl\n * king\n * perez\n\n Parameters\n ----------\n surface_tilt : numeric\n Panel tilt from horizontal. [degree]\n surface_azimuth : numeric\n Panel azimuth from north. [degree]\n solar_zenith : numeric\n Solar zenith angle. [degree]\n solar_azimuth : numeric\n Solar azimuth angle. [degree]\n dni : numeric\n Direct Normal Irradiance. [W/m2]\n ghi : numeric\n Global horizontal irradiance. [W/m2]\n dhi : numeric\n Diffuse horizontal irradiance. [W/m2]\n dni_extra : None or numeric, default None\n Extraterrestrial direct normal irradiance. [W/m2]\n airmass : None or numeric, default None\n Relative airmass (not adjusted for pressure). [unitless]\n albedo : numeric, default 0.25\n Ground surface albedo. [unitless]\n surface_type : None or str, default None\n Surface type. See :py:func:`~pvlib.irradiance.get_ground_diffuse` for\n the list of accepted values.\n model : str, default 'isotropic'\n Irradiance model. Can be one of ``'isotropic'``, ``'klucher'``,\n ``'haydavies'``, ``'reindl'``, ``'king'``, ``'perez'``.\n model_perez : str, default 'allsitescomposite1990'\n Used only if ``model='perez'``. See :py:func:`~pvlib.irradiance.perez`.\n\n Returns\n -------\n total_irrad : OrderedDict or DataFrame\n Contains keys/columns ``'poa_global', 'poa_direct', 'poa_diffuse',\n 'poa_sky_diffuse', 'poa_ground_diffuse'``.\n\n Notes\n -----\n Models ``'haydavies'``, ``'reindl'``, or ``'perez'`` require\n ``'dni_extra'``. Values can be calculated using\n :py:func:`~pvlib.irradiance.get_extra_radiation`.\n\n The ``'perez'`` model requires relative airmass (``airmass``) as input. If\n ``airmass`` is not provided, it is calculated using the defaults in\n :py:func:`~pvlib.atmosphere.get_relative_airmass`.\n \"\"\"\n\n poa_sky_diffuse = get_sky_diffuse(\n surface_tilt, surface_azimuth, solar_zenith, solar_azimuth,\n dni, ghi, dhi, dni_extra=dni_extra, airmass=airmass, model=model,\n model_perez=model_perez)\n\n poa_ground_diffuse = get_ground_diffuse(surface_tilt, ghi, albedo,\n surface_type)\n aoi_ = aoi(surface_tilt, surface_azimuth, solar_zenith, solar_azimuth)\n irrads = poa_components(aoi_, dni, poa_sky_diffuse, poa_ground_diffuse)\n return irrads\n\n\ndef get_sky_diffuse(surface_tilt, surface_azimuth,\n solar_zenith, solar_azimuth,\n dni, ghi, dhi, dni_extra=None, airmass=None,\n model='isotropic',\n model_perez='allsitescomposite1990'):\n r\"\"\"\n Determine in-plane sky diffuse irradiance component\n using the specified sky diffuse irradiance model.\n\n Sky diffuse models include:\n * isotropic (default)\n * klucher\n * haydavies\n * reindl\n * king\n * perez\n\n Parameters\n ----------\n surface_tilt : numeric\n Panel tilt from horizontal. [degree]\n surface_azimuth : numeric\n Panel azimuth from north. [degree]\n solar_zenith : numeric\n Solar zenith angle. [degree]\n solar_azimuth : numeric\n Solar azimuth angle. [degree]\n dni : numeric\n Direct Normal Irradiance. [W/m2]\n ghi : numeric\n Global horizontal irradiance. [W/m2]\n dhi : numeric\n Diffuse horizontal irradiance. [W/m2]\n dni_extra : None or numeric, default None\n Extraterrestrial direct normal irradiance. [W/m2]\n airmass : None or numeric, default None\n Relative airmass (not adjusted for pressure). [unitless]\n model : str, default 'isotropic'\n Irradiance model. 
Can be one of ``'isotropic'``, ``'klucher'``,\n ``'haydavies'``, ``'reindl'``, ``'king'``, ``'perez'``.\n model_perez : str, default 'allsitescomposite1990'\n Used only if ``model='perez'``. See :py:func:`~pvlib.irradiance.perez`.\n\n Returns\n -------\n poa_sky_diffuse : numeric\n Sky diffuse irradiance in the plane of array. [W/m2]\n\n Raises\n ------\n ValueError\n If model is one of ``'haydavies'``, ``'reindl'``, or ``'perez'`` and\n ``dni_extra`` is ``None``.\n\n Notes\n -----\n Models ``'haydavies'``, ``'reindl'``, and ``'perez``` require 'dni_extra'.\n Values can be calculated using\n :py:func:`~pvlib.irradiance.get_extra_radiation`.\n\n The ``'perez'`` model requires relative airmass (``airmass``) as input. If\n ``airmass`` is not provided, it is calculated using the defaults in\n :py:func:`~pvlib.atmosphere.get_relative_airmass`.\n \"\"\"\n\n model = model.lower()\n\n if (model in {'haydavies', 'reindl', 'perez'}) and (dni_extra is None):\n raise ValueError(f'dni_extra is required for model {model}')\n\n if model == 'isotropic':\n sky = isotropic(surface_tilt, dhi)\n elif model == 'klucher':\n sky = klucher(surface_tilt, surface_azimuth, dhi, ghi,\n solar_zenith, solar_azimuth)\n elif model == 'haydavies':\n sky = haydavies(surface_tilt, surface_azimuth, dhi, dni, dni_extra,\n solar_zenith, solar_azimuth)\n elif model == 'reindl':\n sky = reindl(surface_tilt, surface_azimuth, dhi, dni, ghi, dni_extra,\n solar_zenith, solar_azimuth)\n elif model == 'king':\n sky = king(surface_tilt, dhi, ghi, solar_zenith)\n elif model == 'perez':\n if airmass is None:\n airmass = atmosphere.get_relative_airmass(solar_zenith)\n sky = perez(surface_tilt, surface_azimuth, dhi, dni, dni_extra,\n solar_zenith, solar_azimuth, airmass,\n model=model_perez)\n else:\n raise ValueError(f'invalid model selection {model}')\n\n return sky\n\n\ndef poa_components(aoi, dni, poa_sky_diffuse, poa_ground_diffuse):\n r'''\n Determine in-plane irradiance components.\n\n Combines DNI with sky diffuse and ground-reflected irradiance to calculate\n total, direct and diffuse irradiance components in the plane of array.\n\n Parameters\n ----------\n aoi : numeric\n Angle of incidence of solar rays with respect to the module\n surface, from :func:`aoi`.\n\n dni : numeric\n Direct normal irradiance (W/m^2), as measured from a TMY file or\n calculated with a clearsky model.\n\n poa_sky_diffuse : numeric\n Diffuse irradiance (W/m^2) in the plane of the modules, as\n calculated by a diffuse irradiance translation function\n\n poa_ground_diffuse : numeric\n Ground reflected irradiance (W/m^2) in the plane of the modules,\n as calculated by an albedo model (eg. 
:func:`grounddiffuse`)\n\n Returns\n -------\n irrads : OrderedDict or DataFrame\n Contains the following keys:\n\n * ``poa_global`` : Total in-plane irradiance (W/m^2)\n * ``poa_direct`` : Total in-plane beam irradiance (W/m^2)\n * ``poa_diffuse`` : Total in-plane diffuse irradiance (W/m^2)\n * ``poa_sky_diffuse`` : In-plane diffuse irradiance from sky (W/m^2)\n * ``poa_ground_diffuse`` : In-plane diffuse irradiance from ground\n (W/m^2)\n\n Notes\n ------\n Negative beam irradiation due to aoi :math:`> 90^{\\circ}` or AOI\n :math:`< 0^{\\circ}` is set to zero.\n '''\n\n poa_direct = np.maximum(dni * np.cos(np.radians(aoi)), 0)\n poa_diffuse = poa_sky_diffuse + poa_ground_diffuse\n poa_global = poa_direct + poa_diffuse\n\n irrads = OrderedDict()\n irrads['poa_global'] = poa_global\n irrads['poa_direct'] = poa_direct\n irrads['poa_diffuse'] = poa_diffuse\n irrads['poa_sky_diffuse'] = poa_sky_diffuse\n irrads['poa_ground_diffuse'] = poa_ground_diffuse\n\n if isinstance(poa_direct, pd.Series):\n irrads = pd.DataFrame(irrads)\n\n return irrads\n\n\ndef get_ground_diffuse(surface_tilt, ghi, albedo=.25, surface_type=None):\n '''\n Estimate diffuse irradiance from ground reflections given\n irradiance, albedo, and surface tilt.\n\n Function to determine the portion of irradiance on a tilted surface\n due to ground reflections. Any of the inputs may be DataFrames or\n scalars.\n\n Parameters\n ----------\n surface_tilt : numeric\n Surface tilt angles in decimal degrees. Tilt must be >=0 and\n <=180. The tilt angle is defined as degrees from horizontal\n (e.g. surface facing up = 0, surface facing horizon = 90).\n\n ghi : numeric\n Global horizontal irradiance. [W/m^2]\n\n albedo : numeric, default 0.25\n Ground reflectance, typically 0.1-0.4 for surfaces on Earth\n (land), may increase over snow, ice, etc. May also be known as\n the reflection coefficient. Must be >=0 and <=1. Will be\n overridden if surface_type is supplied.\n\n surface_type: None or string, default None\n If not None, overrides albedo. String can be one of 'urban',\n 'grass', 'fresh grass', 'snow', 'fresh snow', 'asphalt', 'concrete',\n 'aluminum', 'copper', 'fresh steel', 'dirty steel', 'sea'.\n\n Returns\n -------\n grounddiffuse : numeric\n Ground reflected irradiance. [W/m^2]\n\n\n References\n ----------\n .. [1] Loutzenhiser P.G. et. al. \"Empirical validation of models to compute\n solar irradiance on inclined surfaces for building energy simulation\"\n 2007, Solar Energy vol. 81. pp. 254-267.\n\n The calculation is the last term of equations 3, 4, 7, 8, 10, 11, and 12.\n\n .. [2] albedos from:\n http://files.pvsyst.com/help/albedo.htm\n and\n http://en.wikipedia.org/wiki/Albedo\n and\n https://doi.org/10.1175/1520-0469(1972)029<0959:AOTSS>2.0.CO;2\n '''\n\n if surface_type is not None:\n albedo = SURFACE_ALBEDOS[surface_type]\n\n diffuse_irrad = ghi * albedo * (1 - np.cos(np.radians(surface_tilt))) * 0.5\n\n try:\n diffuse_irrad.name = 'diffuse_ground'\n except AttributeError:\n pass\n\n return diffuse_irrad\n\n\ndef isotropic(surface_tilt, dhi):\n r'''\n Determine diffuse irradiance from the sky on a tilted surface using\n the isotropic sky model.\n\n .. math::\n\n I_{d} = DHI \\frac{1 + \\cos\\beta}{2}\n\n Hottel and Woertz's model treats the sky as a uniform source of\n diffuse irradiance. 
Thus the diffuse irradiance from the sky (ground\n reflected irradiance is not included in this algorithm) on a tilted\n surface can be found from the diffuse horizontal irradiance and the\n tilt angle of the surface.\n\n Parameters\n ----------\n surface_tilt : numeric\n Surface tilt angle in decimal degrees. Tilt must be >=0 and\n <=180. The tilt angle is defined as degrees from horizontal\n (e.g. surface facing up = 0, surface facing horizon = 90)\n\n dhi : numeric\n Diffuse horizontal irradiance in W/m^2. DHI must be >=0.\n\n Returns\n -------\n diffuse : numeric\n The sky diffuse component of the solar radiation.\n\n References\n ----------\n .. [1] Loutzenhiser P.G. et. al. \"Empirical validation of models to\n compute solar irradiance on inclined surfaces for building energy\n simulation\" 2007, Solar Energy vol. 81. pp. 254-267\n\n .. [2] Hottel, H.C., Woertz, B.B., 1942. Evaluation of flat-plate solar\n heat collector. Trans. ASME 64, 91.\n '''\n\n sky_diffuse = dhi * (1 + tools.cosd(surface_tilt)) * 0.5\n\n return sky_diffuse\n\n\ndef klucher(surface_tilt, surface_azimuth, dhi, ghi, solar_zenith,\n solar_azimuth):\n r'''\n Determine diffuse irradiance from the sky on a tilted surface\n using Klucher's 1979 model\n\n .. math::\n\n I_{d} = DHI \\frac{1 + \\cos\\beta}{2} (1 + F' \\sin^3(\\beta/2))\n (1 + F' \\cos^2\\theta\\sin^3\\theta_z)\n\n where\n\n .. math::\n\n F' = 1 - (I_{d0} / GHI)^2\n\n Klucher's 1979 model determines the diffuse irradiance from the sky\n (ground reflected irradiance is not included in this algorithm) on a\n tilted surface using the surface tilt angle, surface azimuth angle,\n diffuse horizontal irradiance, direct normal irradiance, global\n horizontal irradiance, extraterrestrial irradiance, sun zenith\n angle, and sun azimuth angle.\n\n Parameters\n ----------\n surface_tilt : numeric\n Surface tilt angles in decimal degrees. surface_tilt must be >=0\n and <=180. The tilt angle is defined as degrees from horizontal\n (e.g. surface facing up = 0, surface facing horizon = 90)\n\n surface_azimuth : numeric\n Surface azimuth angles in decimal degrees. surface_azimuth must\n be >=0 and <=360. The Azimuth convention is defined as degrees\n east of north (e.g. North = 0, South=180 East = 90, West = 270).\n\n dhi : numeric\n Diffuse horizontal irradiance in W/m^2. DHI must be >=0.\n\n ghi : numeric\n Global irradiance in W/m^2. DNI must be >=0.\n\n solar_zenith : numeric\n Apparent (refraction-corrected) zenith angles in decimal\n degrees. solar_zenith must be >=0 and <=180.\n\n solar_azimuth : numeric\n Sun azimuth angles in decimal degrees. solar_azimuth must be >=0\n and <=360. The Azimuth convention is defined as degrees east of\n north (e.g. North = 0, East = 90, West = 270).\n\n Returns\n -------\n diffuse : numeric\n The sky diffuse component of the solar radiation.\n\n References\n ----------\n .. [1] Loutzenhiser P.G. et. al. \"Empirical validation of models to compute\n solar irradiance on inclined surfaces for building energy simulation\"\n 2007, Solar Energy vol. 81. pp. 254-267\n\n .. [2] Klucher, T.M., 1979. Evaluation of models to predict insolation on\n tilted surfaces. 
Solar Energy 23 (2), 111-114.\n '''\n\n # zenith angle with respect to panel normal.\n cos_tt = aoi_projection(surface_tilt, surface_azimuth,\n solar_zenith, solar_azimuth)\n cos_tt = np.maximum(cos_tt, 0) # GH 526\n\n # silence warning from 0 / 0\n with np.errstate(invalid='ignore'):\n F = 1 - ((dhi / ghi) ** 2)\n\n try:\n # fails with single point input\n F.fillna(0, inplace=True)\n except AttributeError:\n F = np.where(np.isnan(F), 0, F)\n\n term1 = 0.5 * (1 + tools.cosd(surface_tilt))\n term2 = 1 + F * (tools.sind(0.5 * surface_tilt) ** 3)\n term3 = 1 + F * (cos_tt ** 2) * (tools.sind(solar_zenith) ** 3)\n\n sky_diffuse = dhi * term1 * term2 * term3\n\n return sky_diffuse\n\n\ndef haydavies(surface_tilt, surface_azimuth, dhi, dni, dni_extra,\n solar_zenith=None, solar_azimuth=None, projection_ratio=None,\n return_components=False):\n r'''\n Determine diffuse irradiance from the sky on a tilted surface using\n Hay & Davies' 1980 model\n\n .. math::\n I_{d} = DHI ( A R_b + (1 - A) (\\frac{1 + \\cos\\beta}{2}) )\n\n Hay and Davies' 1980 model determines the diffuse irradiance from\n the sky (ground reflected irradiance is not included in this\n algorithm) on a tilted surface using the surface tilt angle, surface\n azimuth angle, diffuse horizontal irradiance, direct normal\n irradiance, extraterrestrial irradiance, sun zenith angle, and sun\n azimuth angle.\n\n Parameters\n ----------\n surface_tilt : numeric\n Surface tilt angles in decimal degrees. The tilt angle is\n defined as degrees from horizontal (e.g. surface facing up = 0,\n surface facing horizon = 90)\n\n surface_azimuth : numeric\n Surface azimuth angles in decimal degrees. The azimuth\n convention is defined as degrees east of north (e.g. North=0,\n South=180, East=90, West=270).\n\n dhi : numeric\n Diffuse horizontal irradiance in W/m^2.\n\n dni : numeric\n Direct normal irradiance in W/m^2.\n\n dni_extra : numeric\n Extraterrestrial normal irradiance in W/m^2.\n\n solar_zenith : None or numeric, default None\n Solar apparent (refraction-corrected) zenith angles in decimal\n degrees. Must supply ``solar_zenith`` and ``solar_azimuth`` or\n supply ``projection_ratio``.\n\n solar_azimuth : None or numeric, default None\n Solar azimuth angles in decimal degrees. Must supply\n ``solar_zenith`` and ``solar_azimuth`` or supply\n ``projection_ratio``.\n\n projection_ratio : None or numeric, default None\n Ratio of angle of incidence projection to solar zenith angle\n projection. Must supply ``solar_zenith`` and ``solar_azimuth``\n or supply ``projection_ratio``.\n\n return_components : bool, default False\n Flag used to decide whether to return the calculated diffuse components\n or not.\n\n Returns\n --------\n numeric, OrderedDict, or DataFrame\n Return type controlled by `return_components` argument.\n If ``return_components=False``, `sky_diffuse` is returned.\n If ``return_components=True``, `diffuse_components` is returned.\n\n sky_diffuse : numeric\n The sky diffuse component of the solar radiation on a tilted\n surface.\n\n diffuse_components : OrderedDict (array input) or DataFrame (Series input)\n Keys/columns are:\n * sky_diffuse: Total sky diffuse\n * isotropic\n * circumsolar\n * horizon\n\n Notes\n ------\n When supplying ``projection_ratio``, consider constraining its values\n when zenith angle approaches 90 degrees or angle of incidence\n projection is negative. See code for details.\n\n References\n -----------\n .. [1] Loutzenhiser P.G. et. al. 
\"Empirical validation of models to\n compute solar irradiance on inclined surfaces for building energy\n simulation\" 2007, Solar Energy vol. 81. pp. 254-267\n\n .. [2] Hay, J.E., Davies, J.A., 1980. Calculations of the solar\n radiation incident on an inclined surface. In: Hay, J.E., Won, T.K.\n (Eds.), Proc. of First Canadian Solar Radiation Data Workshop, 59.\n Ministry of Supply and Services, Canada.\n '''\n\n # if necessary, calculate ratio of titled and horizontal beam irradiance\n if projection_ratio is None:\n cos_tt = aoi_projection(surface_tilt, surface_azimuth,\n solar_zenith, solar_azimuth)\n cos_tt = np.maximum(cos_tt, 0) # GH 526\n cos_solar_zenith = tools.cosd(solar_zenith)\n Rb = cos_tt / np.maximum(cos_solar_zenith, 0.01745) # GH 432\n else:\n Rb = projection_ratio\n\n # Anisotropy Index\n AI = dni / dni_extra\n\n # these are the () and [] sub-terms of the second term of eqn 7\n term1 = 1 - AI\n term2 = 0.5 * (1 + tools.cosd(surface_tilt))\n\n poa_isotropic = np.maximum(dhi * term1 * term2, 0)\n poa_circumsolar = np.maximum(dhi * (AI * Rb), 0)\n sky_diffuse = poa_isotropic + poa_circumsolar\n\n if return_components:\n diffuse_components = OrderedDict()\n diffuse_components['sky_diffuse'] = sky_diffuse\n\n # Calculate the individual components\n diffuse_components['isotropic'] = poa_isotropic\n diffuse_components['circumsolar'] = poa_circumsolar\n diffuse_components['horizon'] = np.where(\n np.isnan(diffuse_components['isotropic']), np.nan, 0.)\n\n if isinstance(sky_diffuse, pd.Series):\n diffuse_components = pd.DataFrame(diffuse_components)\n return diffuse_components\n else:\n return sky_diffuse\n\n\ndef reindl(surface_tilt, surface_azimuth, dhi, dni, ghi, dni_extra,\n solar_zenith, solar_azimuth):\n r'''\n Determine diffuse irradiance from the sky on a tilted surface using\n Reindl's 1990 model\n\n .. math::\n\n I_{d} = DHI (A R_b + (1 - A) (\\frac{1 + \\cos\\beta}{2})\n (1 + \\sqrt{\\frac{I_{hb}}{I_h}} \\sin^3(\\beta/2)) )\n\n Reindl's 1990 model determines the diffuse irradiance from the sky\n (ground reflected irradiance is not included in this algorithm) on a\n tilted surface using the surface tilt angle, surface azimuth angle,\n diffuse horizontal irradiance, direct normal irradiance, global\n horizontal irradiance, extraterrestrial irradiance, sun zenith\n angle, and sun azimuth angle.\n\n Parameters\n ----------\n surface_tilt : numeric\n Surface tilt angles in decimal degrees. The tilt angle is\n defined as degrees from horizontal (e.g. surface facing up = 0,\n surface facing horizon = 90)\n\n surface_azimuth : numeric\n Surface azimuth angles in decimal degrees. The azimuth\n convention is defined as degrees east of north (e.g. North = 0,\n South=180 East = 90, West = 270).\n\n dhi : numeric\n diffuse horizontal irradiance in W/m^2.\n\n dni : numeric\n direct normal irradiance in W/m^2.\n\n ghi: numeric\n Global irradiance in W/m^2.\n\n dni_extra : numeric\n Extraterrestrial normal irradiance in W/m^2.\n\n solar_zenith : numeric\n Apparent (refraction-corrected) zenith angles in decimal degrees.\n\n solar_azimuth : numeric\n Sun azimuth angles in decimal degrees. The azimuth convention is\n defined as degrees east of north (e.g. North = 0, East = 90,\n West = 270).\n\n Returns\n -------\n poa_sky_diffuse : numeric\n The sky diffuse component of the solar radiation.\n\n Notes\n -----\n The poa_sky_diffuse calculation is generated from the Loutzenhiser et al.\n (2007) paper, equation 8. 
Note that I have removed the beam and ground\n reflectance portion of the equation and this generates ONLY the diffuse\n radiation from the sky and circumsolar, so the form of the equation\n varies slightly from equation 8.\n\n References\n ----------\n .. [1] Loutzenhiser P.G. et. al. \"Empirical validation of models to\n compute solar irradiance on inclined surfaces for building energy\n simulation\" 2007, Solar Energy vol. 81. pp. 254-267\n\n .. [2] Reindl, D.T., Beckmann, W.A., Duffie, J.A., 1990a. Diffuse\n fraction correlations. Solar Energy 45(1), 1-7.\n\n .. [3] Reindl, D.T., Beckmann, W.A., Duffie, J.A., 1990b. Evaluation of\n hourly tilted surface radiation models. Solar Energy 45(1), 9-17.\n '''\n\n cos_tt = aoi_projection(surface_tilt, surface_azimuth,\n solar_zenith, solar_azimuth)\n cos_tt = np.maximum(cos_tt, 0) # GH 526\n\n # do not apply cos(zen) limit here (needed for HB below)\n cos_solar_zenith = tools.cosd(solar_zenith)\n\n # ratio of titled and horizontal beam irradiance\n Rb = cos_tt / np.maximum(cos_solar_zenith, 0.01745) # GH 432\n\n # Anisotropy Index\n AI = dni / dni_extra\n\n # DNI projected onto horizontal\n HB = dni * cos_solar_zenith\n HB = np.maximum(HB, 0)\n\n # these are the () and [] sub-terms of the second term of eqn 8\n term1 = 1 - AI\n term2 = 0.5 * (1 + tools.cosd(surface_tilt))\n with np.errstate(invalid='ignore', divide='ignore'):\n hb_to_ghi = np.where(ghi == 0, 0, np.divide(HB, ghi))\n term3 = 1 + np.sqrt(hb_to_ghi) * (tools.sind(0.5 * surface_tilt)**3)\n sky_diffuse = dhi * (AI * Rb + term1 * term2 * term3)\n sky_diffuse = np.maximum(sky_diffuse, 0)\n\n return sky_diffuse\n\n\ndef king(surface_tilt, dhi, ghi, solar_zenith):\n '''\n Determine diffuse irradiance from the sky on a tilted surface using\n the King model.\n\n King's model determines the diffuse irradiance from the sky (ground\n reflected irradiance is not included in this algorithm) on a tilted\n surface using the surface tilt angle, diffuse horizontal irradiance,\n global horizontal irradiance, and sun zenith angle. Note that this\n model is not well documented and has not been published in any\n fashion (as of January 2012).\n\n Parameters\n ----------\n surface_tilt : numeric\n Surface tilt angles in decimal degrees. The tilt angle is\n defined as degrees from horizontal (e.g. surface facing up = 0,\n surface facing horizon = 90)\n\n dhi : numeric\n Diffuse horizontal irradiance in W/m^2.\n\n ghi : numeric\n Global horizontal irradiance in W/m^2.\n\n solar_zenith : numeric\n Apparent (refraction-corrected) zenith angles in decimal degrees.\n\n Returns\n --------\n poa_sky_diffuse : numeric\n The diffuse component of the solar radiation.\n '''\n\n sky_diffuse = (dhi * (1 + tools.cosd(surface_tilt)) / 2 + ghi *\n (0.012 * solar_zenith - 0.04) *\n (1 - tools.cosd(surface_tilt)) / 2)\n sky_diffuse = np.maximum(sky_diffuse, 0)\n\n return sky_diffuse\n\n\ndef perez(surface_tilt, surface_azimuth, dhi, dni, dni_extra,\n solar_zenith, solar_azimuth, airmass,\n model='allsitescomposite1990', return_components=False):\n '''\n Determine diffuse irradiance from the sky on a tilted surface using\n one of the Perez models.\n\n Perez models determine the diffuse irradiance from the sky (ground\n reflected irradiance is not included in this algorithm) on a tilted\n surface using the surface tilt angle, surface azimuth angle, diffuse\n horizontal irradiance, direct normal irradiance, extraterrestrial\n irradiance, sun zenith angle, sun azimuth angle, and relative (not\n pressure-corrected) airmass. 
Optionally a selector may be used to\n use any of Perez's model coefficient sets.\n\n Parameters\n ----------\n surface_tilt : numeric\n Surface tilt angles in decimal degrees. surface_tilt must be >=0\n and <=180. The tilt angle is defined as degrees from horizontal\n (e.g. surface facing up = 0, surface facing horizon = 90)\n\n surface_azimuth : numeric\n Surface azimuth angles in decimal degrees. surface_azimuth must\n be >=0 and <=360. The azimuth convention is defined as degrees\n east of north (e.g. North = 0, South=180 East = 90, West = 270).\n\n dhi : numeric\n Diffuse horizontal irradiance in W/m^2. DHI must be >=0.\n\n dni : numeric\n Direct normal irradiance in W/m^2. DNI must be >=0.\n\n dni_extra : numeric\n Extraterrestrial normal irradiance in W/m^2.\n\n solar_zenith : numeric\n apparent (refraction-corrected) zenith angles in decimal\n degrees. solar_zenith must be >=0 and <=180.\n\n solar_azimuth : numeric\n Sun azimuth angles in decimal degrees. solar_azimuth must be >=0\n and <=360. The azimuth convention is defined as degrees east of\n north (e.g. North = 0, East = 90, West = 270).\n\n airmass : numeric\n Relative (not pressure-corrected) airmass values. If AM is a\n DataFrame it must be of the same size as all other DataFrame\n inputs. AM must be >=0 (careful using the 1/sec(z) model of AM\n generation)\n\n model : string (optional, default='allsitescomposite1990')\n A string which selects the desired set of Perez coefficients. If\n model is not provided as an input, the default, '1990' will be\n used. All possible model selections are:\n\n * '1990'\n * 'allsitescomposite1990' (same as '1990')\n * 'allsitescomposite1988'\n * 'sandiacomposite1988'\n * 'usacomposite1988'\n * 'france1988'\n * 'phoenix1988'\n * 'elmonte1988'\n * 'osage1988'\n * 'albuquerque1988'\n * 'capecanaveral1988'\n * 'albany1988'\n\n return_components: bool (optional, default=False)\n Flag used to decide whether to return the calculated diffuse components\n or not.\n\n Returns\n --------\n numeric, OrderedDict, or DataFrame\n Return type controlled by `return_components` argument.\n If ``return_components=False``, `sky_diffuse` is returned.\n If ``return_components=True``, `diffuse_components` is returned.\n\n sky_diffuse : numeric\n The sky diffuse component of the solar radiation on a tilted\n surface.\n\n diffuse_components : OrderedDict (array input) or DataFrame (Series input)\n Keys/columns are:\n * sky_diffuse: Total sky diffuse\n * isotropic\n * circumsolar\n * horizon\n\n\n References\n ----------\n .. [1] Loutzenhiser P.G. et. al. \"Empirical validation of models to\n compute solar irradiance on inclined surfaces for building energy\n simulation\" 2007, Solar Energy vol. 81. pp. 254-267\n\n .. [2] Perez, R., Seals, R., Ineichen, P., Stewart, R., Menicucci, D.,\n 1987. A new simplified version of the Perez diffuse irradiance model\n for tilted surfaces. Solar Energy 39(3), 221-232.\n\n .. [3] Perez, R., Ineichen, P., Seals, R., Michalsky, J., Stewart, R.,\n 1990. Modeling daylight availability and irradiance components from\n direct and global irradiance. Solar Energy 44 (5), 271-289.\n\n .. [4] Perez, R. et. al 1988. \"The Development and Verification of the\n Perez Diffuse Radiation Model\". 
SAND88-7030\n '''\n\n kappa = 1.041 # for solar_zenith in radians\n z = np.radians(solar_zenith) # convert to radians\n\n # delta is the sky's \"brightness\"\n delta = dhi * airmass / dni_extra\n\n # epsilon is the sky's \"clearness\"\n with np.errstate(invalid='ignore'):\n eps = ((dhi + dni) / dhi + kappa * (z ** 3)) / (1 + kappa * (z ** 3))\n\n # numpy indexing below will not work with a Series\n if isinstance(eps, pd.Series):\n eps = eps.values\n\n # Perez et al define clearness bins according to the following\n # rules. 1 = overcast ... 8 = clear (these names really only make\n # sense for small zenith angles, but...) these values will\n # eventually be used as indicies for coeffecient look ups\n ebin = np.digitize(eps, (0., 1.065, 1.23, 1.5, 1.95, 2.8, 4.5, 6.2))\n ebin = np.array(ebin) # GH 642\n ebin[np.isnan(eps)] = 0\n\n # correct for 0 indexing in coeffecient lookup\n # later, ebin = -1 will yield nan coefficients\n ebin -= 1\n\n # The various possible sets of Perez coefficients are contained\n # in a subfunction to clean up the code.\n F1c, F2c = _get_perez_coefficients(model)\n\n # results in invalid eps (ebin = -1) being mapped to nans\n nans = np.array([np.nan, np.nan, np.nan])\n F1c = np.vstack((F1c, nans))\n F2c = np.vstack((F2c, nans))\n\n F1 = (F1c[ebin, 0] + F1c[ebin, 1] * delta + F1c[ebin, 2] * z)\n F1 = np.maximum(F1, 0)\n\n F2 = (F2c[ebin, 0] + F2c[ebin, 1] * delta + F2c[ebin, 2] * z)\n\n A = aoi_projection(surface_tilt, surface_azimuth,\n solar_zenith, solar_azimuth)\n A = np.maximum(A, 0)\n\n B = tools.cosd(solar_zenith)\n B = np.maximum(B, tools.cosd(85))\n\n # Calculate Diffuse POA from sky dome\n term1 = 0.5 * (1 - F1) * (1 + tools.cosd(surface_tilt))\n term2 = F1 * A / B\n term3 = F2 * tools.sind(surface_tilt)\n\n sky_diffuse = np.maximum(dhi * (term1 + term2 + term3), 0)\n\n # we've preserved the input type until now, so don't ruin it!\n if isinstance(sky_diffuse, pd.Series):\n sky_diffuse[np.isnan(airmass)] = 0\n else:\n sky_diffuse = np.where(np.isnan(airmass), 0, sky_diffuse)\n\n if return_components:\n diffuse_components = OrderedDict()\n diffuse_components['sky_diffuse'] = sky_diffuse\n\n # Calculate the different components\n diffuse_components['isotropic'] = dhi * term1\n diffuse_components['circumsolar'] = dhi * term2\n diffuse_components['horizon'] = dhi * term3\n\n # Set values of components to 0 when sky_diffuse is 0\n mask = sky_diffuse == 0\n if isinstance(sky_diffuse, pd.Series):\n diffuse_components = pd.DataFrame(diffuse_components)\n diffuse_components.loc[mask] = 0\n else:\n diffuse_components = {k: np.where(mask, 0, v) for k, v in\n diffuse_components.items()}\n return diffuse_components\n else:\n return sky_diffuse\n\n\ndef clearsky_index(ghi, clearsky_ghi, max_clearsky_index=2.0):\n \"\"\"\n Calculate the clearsky index.\n\n The clearsky index is the ratio of global to clearsky global irradiance.\n Negative and non-finite clearsky index values will be truncated to zero.\n\n Parameters\n ----------\n ghi : numeric\n Global horizontal irradiance in W/m^2.\n\n clearsky_ghi : numeric\n Modeled clearsky GHI\n\n max_clearsky_index : numeric, default 2.0\n Maximum value of the clearsky index. 
The default, 2.0, allows\n for over-irradiance events typically seen in sub-hourly data.\n\n Returns\n -------\n clearsky_index : numeric\n Clearsky index\n \"\"\"\n clearsky_index = ghi / clearsky_ghi\n # set +inf, -inf, and nans to zero\n clearsky_index = np.where(~np.isfinite(clearsky_index), 0,\n clearsky_index)\n # but preserve nans in the input arrays\n input_is_nan = ~np.isfinite(ghi) | ~np.isfinite(clearsky_ghi)\n clearsky_index = np.where(input_is_nan, np.nan, clearsky_index)\n\n clearsky_index = np.maximum(clearsky_index, 0)\n clearsky_index = np.minimum(clearsky_index, max_clearsky_index)\n\n # preserve input type\n if isinstance(ghi, pd.Series):\n clearsky_index = pd.Series(clearsky_index, index=ghi.index)\n\n return clearsky_index\n\n\ndef clearness_index(ghi, solar_zenith, extra_radiation, min_cos_zenith=0.065,\n max_clearness_index=2.0):\n \"\"\"\n Calculate the clearness index.\n\n The clearness index is the ratio of global to extraterrestrial\n irradiance on a horizontal plane [1]_.\n\n Parameters\n ----------\n ghi : numeric\n Global horizontal irradiance in W/m^2.\n\n solar_zenith : numeric\n True (not refraction-corrected) solar zenith angle in decimal\n degrees.\n\n extra_radiation : numeric\n Irradiance incident at the top of the atmosphere\n\n min_cos_zenith : numeric, default 0.065\n Minimum value of cos(zenith) to allow when calculating global\n clearness index `kt`. Equivalent to zenith = 86.273 degrees.\n\n max_clearness_index : numeric, default 2.0\n Maximum value of the clearness index. The default, 2.0, allows\n for over-irradiance events typically seen in sub-hourly data.\n NREL's SRRL Fortran code used 0.82 for hourly data.\n\n Returns\n -------\n kt : numeric\n Clearness index\n\n References\n ----------\n .. [1] Maxwell, E. L., \"A Quasi-Physical Model for Converting Hourly\n Global Horizontal to Direct Normal Insolation\", Technical\n Report No. SERI/TR-215-3087, Golden, CO: Solar Energy Research\n Institute, 1987.\n \"\"\"\n cos_zenith = tools.cosd(solar_zenith)\n I0h = extra_radiation * np.maximum(cos_zenith, min_cos_zenith)\n # consider adding\n # with np.errstate(invalid='ignore', divide='ignore'):\n # to kt calculation, but perhaps it's good to allow these\n # warnings to the users that override min_cos_zenith\n kt = ghi / I0h\n kt = np.maximum(kt, 0)\n kt = np.minimum(kt, max_clearness_index)\n return kt\n\n\ndef clearness_index_zenith_independent(clearness_index, airmass,\n max_clearness_index=2.0):\n \"\"\"\n Calculate the zenith angle independent clearness index.\n\n See [1]_ for details.\n\n Parameters\n ----------\n clearness_index : numeric\n Ratio of global to extraterrestrial irradiance on a horizontal\n plane\n\n airmass : numeric\n Airmass\n\n max_clearness_index : numeric, default 2.0\n Maximum value of the clearness index. The default, 2.0, allows\n for over-irradiance events typically seen in sub-hourly data.\n NREL's SRRL Fortran code used 0.82 for hourly data.\n\n Returns\n -------\n kt_prime : numeric\n Zenith independent clearness index\n\n References\n ----------\n .. [1] Perez, R., P. Ineichen, E. Maxwell, R. Seals and A. Zelenka,\n (1992). \"Dynamic Global-to-Direct Irradiance Conversion Models\".\n ASHRAE Transactions-Research Series, pp. 
354-369\n \"\"\"\n # Perez eqn 1\n kt_prime = clearness_index / _kt_kt_prime_factor(airmass)\n kt_prime = np.maximum(kt_prime, 0)\n kt_prime = np.minimum(kt_prime, max_clearness_index)\n return kt_prime\n\n\ndef _kt_kt_prime_factor(airmass):\n \"\"\"\n Calculate the conversion factor between kt and kt prime.\n Function is useful because DIRINT and GTI-DIRINT both use this.\n \"\"\"\n # consider adding\n # airmass = np.maximum(airmass, 12) # GH 450\n return 1.031 * np.exp(-1.4 / (0.9 + 9.4 / airmass)) + 0.1\n\n\ndef disc(ghi, solar_zenith, datetime_or_doy, pressure=101325,\n min_cos_zenith=0.065, max_zenith=87, max_airmass=12):\n \"\"\"\n Estimate Direct Normal Irradiance from Global Horizontal Irradiance\n using the DISC model.\n\n The DISC algorithm converts global horizontal irradiance to direct\n normal irradiance through empirical relationships between the global\n and direct clearness indices.\n\n The pvlib implementation limits the clearness index to 1.\n\n The original report describing the DISC model [1]_ uses the\n relative airmass rather than the absolute (pressure-corrected)\n airmass. However, the NREL implementation of the DISC model [2]_\n uses absolute airmass. PVLib Matlab also uses the absolute airmass.\n pvlib python defaults to absolute airmass, but the relative airmass\n can be used by supplying `pressure=None`.\n\n Parameters\n ----------\n ghi : numeric\n Global horizontal irradiance in W/m^2.\n\n solar_zenith : numeric\n True (not refraction-corrected) solar zenith angles in decimal\n degrees.\n\n datetime_or_doy : int, float, array, pd.DatetimeIndex\n Day of year or array of days of year e.g.\n pd.DatetimeIndex.dayofyear, or pd.DatetimeIndex.\n\n pressure : None or numeric, default 101325\n Site pressure in Pascal. If None, relative airmass is used\n instead of absolute (pressure-corrected) airmass.\n\n min_cos_zenith : numeric, default 0.065\n Minimum value of cos(zenith) to allow when calculating global\n clearness index `kt`. Equivalent to zenith = 86.273 degrees.\n\n max_zenith : numeric, default 87\n Maximum value of zenith to allow in DNI calculation. DNI will be\n set to 0 for times with zenith values greater than `max_zenith`.\n\n max_airmass : numeric, default 12\n Maximum value of the airmass to allow in Kn calculation.\n Default value (12) comes from range over which Kn was fit\n to airmass in the original paper.\n\n Returns\n -------\n output : OrderedDict or DataFrame\n Contains the following keys:\n\n * ``dni``: The modeled direct normal irradiance\n in W/m^2 provided by the\n Direct Insolation Simulation Code (DISC) model.\n * ``kt``: Ratio of global to extraterrestrial\n irradiance on a horizontal plane.\n * ``airmass``: Airmass\n\n References\n ----------\n .. [1] Maxwell, E. L., \"A Quasi-Physical Model for Converting Hourly\n Global Horizontal to Direct Normal Insolation\", Technical\n Report No. SERI/TR-215-3087, Golden, CO: Solar Energy Research\n Institute, 1987.\n\n .. [2] Maxwell, E. 
\"DISC Model\", Excel Worksheet.\n https://www.nrel.gov/grid/solar-resource/disc.html\n\n See Also\n --------\n dirint\n \"\"\"\n\n # this is the I0 calculation from the reference\n # SSC uses solar constant = 1367.0 (checked 2018 08 15)\n I0 = get_extra_radiation(datetime_or_doy, 1370., 'spencer')\n\n kt = clearness_index(ghi, solar_zenith, I0, min_cos_zenith=min_cos_zenith,\n max_clearness_index=1)\n\n am = atmosphere.get_relative_airmass(solar_zenith, model='kasten1966')\n if pressure is not None:\n am = atmosphere.get_absolute_airmass(am, pressure)\n\n Kn, am = _disc_kn(kt, am, max_airmass=max_airmass)\n dni = Kn * I0\n\n bad_values = (solar_zenith > max_zenith) | (ghi < 0) | (dni < 0)\n dni = np.where(bad_values, 0, dni)\n\n output = OrderedDict()\n output['dni'] = dni\n output['kt'] = kt\n output['airmass'] = am\n\n if isinstance(datetime_or_doy, pd.DatetimeIndex):\n output = pd.DataFrame(output, index=datetime_or_doy)\n\n return output\n\n\ndef _disc_kn(clearness_index, airmass, max_airmass=12):\n \"\"\"\n Calculate Kn for `disc`\n\n Parameters\n ----------\n clearness_index : numeric\n airmass : numeric\n max_airmass : float\n airmass > max_airmass is set to max_airmass before being used\n in calculating Kn.\n\n Returns\n -------\n Kn : numeric\n am : numeric\n airmass used in the calculation of Kn. am <= max_airmass.\n \"\"\"\n # short names for equations\n kt = clearness_index\n am = airmass\n\n am = np.minimum(am, max_airmass) # GH 450\n\n # powers of kt will be used repeatedly, so compute only once\n kt2 = kt * kt # about the same as kt ** 2\n kt3 = kt2 * kt # 5-10x faster than kt ** 3\n\n bools = (kt <= 0.6)\n a = np.where(bools,\n 0.512 - 1.56*kt + 2.286*kt2 - 2.222*kt3,\n -5.743 + 21.77*kt - 27.49*kt2 + 11.56*kt3)\n b = np.where(bools,\n 0.37 + 0.962*kt,\n 41.4 - 118.5*kt + 66.05*kt2 + 31.9*kt3)\n c = np.where(bools,\n -0.28 + 0.932*kt - 2.048*kt2,\n -47.01 + 184.2*kt - 222.0*kt2 + 73.81*kt3)\n\n delta_kn = a + b * np.exp(c*am)\n\n Knc = 0.866 - 0.122*am + 0.0121*am**2 - 0.000653*am**3 + 1.4e-05*am**4\n Kn = Knc - delta_kn\n return Kn, am\n\n\ndef dirint(ghi, solar_zenith, times, pressure=101325., use_delta_kt_prime=True,\n temp_dew=None, min_cos_zenith=0.065, max_zenith=87):\n \"\"\"\n Determine DNI from GHI using the DIRINT modification of the DISC\n model.\n\n Implements the modified DISC model known as \"DIRINT\" introduced in\n [1]_. DIRINT predicts direct normal irradiance (DNI) from measured\n global horizontal irradiance (GHI). DIRINT improves upon the DISC\n model by using time-series GHI data and dew point temperature\n information. The effectiveness of the DIRINT model improves with\n each piece of information provided.\n\n The pvlib implementation limits the clearness index to 1.\n\n Parameters\n ----------\n ghi : array-like\n Global horizontal irradiance in W/m^2.\n\n solar_zenith : array-like\n True (not refraction-corrected) solar_zenith angles in decimal\n degrees.\n\n times : DatetimeIndex\n\n pressure : float or array-like, default 101325.0\n The site pressure in Pascal. Pressure may be measured or an\n average pressure may be calculated from site altitude.\n\n use_delta_kt_prime : bool, default True\n If True, indicates that the stability index delta_kt_prime is\n included in the model. The stability index adjusts the estimated\n DNI in response to dynamics in the time series of GHI. It is\n recommended that delta_kt_prime is not used if the time between\n GHI points is 1.5 hours or greater. 
If use_delta_kt_prime=True,\n input data must be Series.\n\n temp_dew : None, float, or array-like, default None\n Surface dew point temperatures, in degrees C. Values of temp_dew\n may be numeric or NaN. Any single time period point with a\n temp_dew=NaN does not have dew point improvements applied. If\n temp_dew is not provided, then dew point improvements are not\n applied.\n\n min_cos_zenith : numeric, default 0.065\n Minimum value of cos(zenith) to allow when calculating global\n clearness index `kt`. Equivalent to zenith = 86.273 degrees.\n\n max_zenith : numeric, default 87\n Maximum value of zenith to allow in DNI calculation. DNI will be\n set to 0 for times with zenith values greater than `max_zenith`.\n\n Returns\n -------\n dni : array-like\n The modeled direct normal irradiance in W/m^2 provided by the\n DIRINT model.\n\n Notes\n -----\n DIRINT model requires time series data (ie. one of the inputs must\n be a vector of length > 2).\n\n References\n ----------\n .. [1] Perez, R., P. Ineichen, E. Maxwell, R. Seals and A. Zelenka,\n (1992). \"Dynamic Global-to-Direct Irradiance Conversion Models\".\n ASHRAE Transactions-Research Series, pp. 354-369\n\n .. [2] Maxwell, E. L., \"A Quasi-Physical Model for Converting Hourly\n Global Horizontal to Direct Normal Insolation\", Technical Report No.\n SERI/TR-215-3087, Golden, CO: Solar Energy Research Institute, 1987.\n \"\"\"\n\n disc_out = disc(ghi, solar_zenith, times, pressure=pressure,\n min_cos_zenith=min_cos_zenith, max_zenith=max_zenith)\n airmass = disc_out['airmass']\n kt = disc_out['kt']\n\n kt_prime = clearness_index_zenith_independent(\n kt, airmass, max_clearness_index=1)\n delta_kt_prime = _delta_kt_prime_dirint(kt_prime, use_delta_kt_prime,\n times)\n w = _temp_dew_dirint(temp_dew, times)\n\n dirint_coeffs = _dirint_coeffs(times, kt_prime, solar_zenith, w,\n delta_kt_prime)\n\n # Perez eqn 5\n dni = disc_out['dni'] * dirint_coeffs\n\n return dni\n\n\ndef _dirint_from_dni_ktprime(dni, kt_prime, solar_zenith, use_delta_kt_prime,\n temp_dew):\n \"\"\"\n Calculate DIRINT DNI from supplied DISC DNI and Kt'.\n\n Supports :py:func:`gti_dirint`\n \"\"\"\n times = dni.index\n delta_kt_prime = _delta_kt_prime_dirint(kt_prime, use_delta_kt_prime,\n times)\n w = _temp_dew_dirint(temp_dew, times)\n dirint_coeffs = _dirint_coeffs(times, kt_prime, solar_zenith, w,\n delta_kt_prime)\n dni_dirint = dni * dirint_coeffs\n return dni_dirint\n\n\ndef _delta_kt_prime_dirint(kt_prime, use_delta_kt_prime, times):\n \"\"\"\n Calculate delta_kt_prime (Perez eqn 2 and eqn 3), or return a default value\n for use with :py:func:`_dirint_bins`.\n \"\"\"\n if use_delta_kt_prime:\n # Perez eqn 2\n kt_next = kt_prime.shift(-1)\n kt_previous = kt_prime.shift(1)\n # replace nan with values that implement Perez Eq 3 for first and last\n # positions. 
Use kt_previous and kt_next to handle series of length 1\n kt_next.iloc[-1] = kt_previous.iloc[-1]\n kt_previous.iloc[0] = kt_next.iloc[0]\n delta_kt_prime = 0.5 * ((kt_prime - kt_next).abs().add(\n (kt_prime - kt_previous).abs(),\n fill_value=0))\n else:\n # do not change unless also modifying _dirint_bins\n delta_kt_prime = pd.Series(-1, index=times)\n return delta_kt_prime\n\n\ndef _temp_dew_dirint(temp_dew, times):\n \"\"\"\n Calculate precipitable water from surface dew point temp (Perez eqn 4),\n or return a default value for use with :py:func:`_dirint_bins`.\n \"\"\"\n if temp_dew is not None:\n # Perez eqn 4\n w = pd.Series(np.exp(0.07 * temp_dew - 0.075), index=times)\n else:\n # do not change unless also modifying _dirint_bins\n w = pd.Series(-1, index=times)\n return w\n\n\ndef _dirint_coeffs(times, kt_prime, solar_zenith, w, delta_kt_prime):\n \"\"\"\n Determine the DISC to DIRINT multiplier `dirint_coeffs`.\n\n dni = disc_out['dni'] * dirint_coeffs\n\n Parameters\n ----------\n times : pd.DatetimeIndex\n kt_prime : Zenith-independent clearness index\n solar_zenith : Solar zenith angle\n w : precipitable water estimated from surface dew-point temperature\n delta_kt_prime : stability index\n\n Returns\n -------\n dirint_coeffs : array-like\n \"\"\"\n kt_prime_bin, zenith_bin, w_bin, delta_kt_prime_bin = \\\n _dirint_bins(times, kt_prime, solar_zenith, w, delta_kt_prime)\n\n # get the coefficients\n coeffs = _get_dirint_coeffs()\n\n # subtract 1 to account for difference between MATLAB-style bin\n # assignment and Python-style array lookup.\n dirint_coeffs = coeffs[kt_prime_bin-1, zenith_bin-1,\n delta_kt_prime_bin-1, w_bin-1]\n\n # convert unassigned bins to nan\n dirint_coeffs = np.where((kt_prime_bin == 0) | (zenith_bin == 0) |\n (w_bin == 0) | (delta_kt_prime_bin == 0),\n np.nan, dirint_coeffs)\n return dirint_coeffs\n\n\ndef _dirint_bins(times, kt_prime, zenith, w, delta_kt_prime):\n \"\"\"\n Determine the bins for the DIRINT coefficients.\n\n Parameters\n ----------\n times : pd.DatetimeIndex\n kt_prime : Zenith-independent clearness index\n zenith : Solar zenith angle\n w : precipitable water estimated from surface dew-point temperature\n delta_kt_prime : stability index\n\n Returns\n -------\n tuple of kt_prime_bin, zenith_bin, w_bin, delta_kt_prime_bin\n \"\"\"\n # @wholmgren: the following bin assignments use MATLAB's 1-indexing.\n # Later, we'll subtract 1 to conform to Python's 0-indexing.\n\n # Create kt_prime bins\n kt_prime_bin = pd.Series(0, index=times, dtype=np.int64)\n kt_prime_bin[(kt_prime >= 0) & (kt_prime < 0.24)] = 1\n kt_prime_bin[(kt_prime >= 0.24) & (kt_prime < 0.4)] = 2\n kt_prime_bin[(kt_prime >= 0.4) & (kt_prime < 0.56)] = 3\n kt_prime_bin[(kt_prime >= 0.56) & (kt_prime < 0.7)] = 4\n kt_prime_bin[(kt_prime >= 0.7) & (kt_prime < 0.8)] = 5\n kt_prime_bin[(kt_prime >= 0.8) & (kt_prime <= 1)] = 6\n\n # Create zenith angle bins\n zenith_bin = pd.Series(0, index=times, dtype=np.int64)\n zenith_bin[(zenith >= 0) & (zenith < 25)] = 1\n zenith_bin[(zenith >= 25) & (zenith < 40)] = 2\n zenith_bin[(zenith >= 40) & (zenith < 55)] = 3\n zenith_bin[(zenith >= 55) & (zenith < 70)] = 4\n zenith_bin[(zenith >= 70) & (zenith < 80)] = 5\n zenith_bin[(zenith >= 80)] = 6\n\n # Create the bins for w based on dew point temperature\n w_bin = pd.Series(0, index=times, dtype=np.int64)\n w_bin[(w >= 0) & (w < 1)] = 1\n w_bin[(w >= 1) & (w < 2)] = 2\n w_bin[(w >= 2) & (w < 3)] = 3\n w_bin[(w >= 3)] = 4\n w_bin[(w == -1)] = 5\n\n # Create delta_kt_prime binning.\n 
delta_kt_prime_bin = pd.Series(0, index=times, dtype=np.int64)\n delta_kt_prime_bin[(delta_kt_prime >= 0) & (delta_kt_prime < 0.015)] = 1\n delta_kt_prime_bin[(delta_kt_prime >= 0.015) &\n (delta_kt_prime < 0.035)] = 2\n delta_kt_prime_bin[(delta_kt_prime >= 0.035) & (delta_kt_prime < 0.07)] = 3\n delta_kt_prime_bin[(delta_kt_prime >= 0.07) & (delta_kt_prime < 0.15)] = 4\n delta_kt_prime_bin[(delta_kt_prime >= 0.15) & (delta_kt_prime < 0.3)] = 5\n delta_kt_prime_bin[(delta_kt_prime >= 0.3) & (delta_kt_prime <= 1)] = 6\n delta_kt_prime_bin[delta_kt_prime == -1] = 7\n\n return kt_prime_bin, zenith_bin, w_bin, delta_kt_prime_bin\n\n\ndef dirindex(ghi, ghi_clearsky, dni_clearsky, zenith, times, pressure=101325.,\n use_delta_kt_prime=True, temp_dew=None, min_cos_zenith=0.065,\n max_zenith=87):\n \"\"\"\n Determine DNI from GHI using the DIRINDEX model.\n\n The DIRINDEX model [1]_ modifies the DIRINT model implemented in\n :py:func:`pvlib.irradiance.dirint` by taking into account information\n from a clear sky model. It is recommended that ``ghi_clearsky`` be\n calculated using the Ineichen clear sky model\n :py:func:`pvlib.clearsky.ineichen` with ``perez_enhancement=True``.\n\n The pvlib implementation limits the clearness index to 1.\n\n Parameters\n ----------\n ghi : array-like\n Global horizontal irradiance in W/m^2.\n\n ghi_clearsky : array-like\n Global horizontal irradiance from clear sky model, in W/m^2.\n\n dni_clearsky : array-like\n Direct normal irradiance from clear sky model, in W/m^2.\n\n zenith : array-like\n True (not refraction-corrected) zenith angles in decimal\n degrees. If Z is a vector it must be of the same size as all\n other vector inputs. Z must be >=0 and <=180.\n\n times : DatetimeIndex\n\n pressure : float or array-like, default 101325.0\n The site pressure in Pascal. Pressure may be measured or an\n average pressure may be calculated from site altitude.\n\n use_delta_kt_prime : bool, default True\n If True, indicates that the stability index delta_kt_prime is\n included in the model. The stability index adjusts the estimated\n DNI in response to dynamics in the time series of GHI. It is\n recommended that delta_kt_prime is not used if the time between\n GHI points is 1.5 hours or greater. If use_delta_kt_prime=True,\n input data must be Series.\n\n temp_dew : None, float, or array-like, default None\n Surface dew point temperatures, in degrees C. Values of temp_dew\n may be numeric or NaN. Any single time period point with a\n temp_dew=NaN does not have dew point improvements applied. If\n temp_dew is not provided, then dew point improvements are not\n applied.\n\n min_cos_zenith : numeric, default 0.065\n Minimum value of cos(zenith) to allow when calculating global\n clearness index `kt`. Equivalent to zenith = 86.273 degrees.\n\n max_zenith : numeric, default 87\n Maximum value of zenith to allow in DNI calculation. DNI will be\n set to 0 for times with zenith values greater than `max_zenith`.\n\n Returns\n -------\n dni : array-like\n The modeled direct normal irradiance in W/m^2.\n\n Notes\n -----\n DIRINDEX model requires time series data (ie. one of the inputs must\n be a vector of length > 2).\n\n References\n ----------\n .. [1] Perez, R., Ineichen, P., Moore, K., Kmiecik, M., Chain, C., George,\n R., & Vignola, F. (2002). A new operational model for satellite-derived\n irradiances: description and validation. 
Solar Energy, 73(5), 307-317.\n \"\"\"\n\n dni_dirint = dirint(ghi, zenith, times, pressure=pressure,\n use_delta_kt_prime=use_delta_kt_prime,\n temp_dew=temp_dew, min_cos_zenith=min_cos_zenith,\n max_zenith=max_zenith)\n\n dni_dirint_clearsky = dirint(ghi_clearsky, zenith, times,\n pressure=pressure,\n use_delta_kt_prime=use_delta_kt_prime,\n temp_dew=temp_dew,\n min_cos_zenith=min_cos_zenith,\n max_zenith=max_zenith)\n\n dni_dirindex = dni_clearsky * dni_dirint / dni_dirint_clearsky\n\n dni_dirindex[dni_dirindex < 0] = 0.\n\n return dni_dirindex\n\n\ndef gti_dirint(poa_global, aoi, solar_zenith, solar_azimuth, times,\n surface_tilt, surface_azimuth, pressure=101325.,\n use_delta_kt_prime=True, temp_dew=None, albedo=.25,\n model='perez', model_perez='allsitescomposite1990',\n calculate_gt_90=True, max_iterations=30):\n \"\"\"\n Determine GHI, DNI, DHI from POA global using the GTI DIRINT model.\n\n The GTI DIRINT model is described in [1]_.\n\n .. warning::\n\n Model performance is poor for AOI greater than approximately\n 80 degrees `and` plane of array irradiance greater than\n approximately 200 W/m^2.\n\n Parameters\n ----------\n poa_global : array-like\n Plane of array global irradiance in W/m^2.\n\n aoi : array-like\n Angle of incidence of solar rays with respect to the module\n surface normal.\n\n solar_zenith : array-like\n True (not refraction-corrected) solar zenith angles in decimal\n degrees.\n\n solar_azimuth : array-like\n Solar azimuth angles in decimal degrees.\n\n times : DatetimeIndex\n Time indices for the input array-like data.\n\n surface_tilt : numeric\n Surface tilt angles in decimal degrees. Tilt must be >=0 and\n <=180. The tilt angle is defined as degrees from horizontal\n (e.g. surface facing up = 0, surface facing horizon = 90).\n\n surface_azimuth : numeric\n Surface azimuth angles in decimal degrees. surface_azimuth must\n be >=0 and <=360. The Azimuth convention is defined as degrees\n east of north (e.g. North = 0, South=180 East = 90, West = 270).\n\n pressure : numeric, default 101325.0\n The site pressure in Pascal. Pressure may be measured or an\n average pressure may be calculated from site altitude.\n\n use_delta_kt_prime : bool, default True\n If True, indicates that the stability index delta_kt_prime is\n included in the model. The stability index adjusts the estimated\n DNI in response to dynamics in the time series of GHI. It is\n recommended that delta_kt_prime is not used if the time between\n GHI points is 1.5 hours or greater. If use_delta_kt_prime=True,\n input data must be Series.\n\n temp_dew : None, float, or array-like, default None\n Surface dew point temperatures, in degrees C. Values of temp_dew\n may be numeric or NaN. Any single time period point with a\n temp_dew=NaN does not have dew point improvements applied. If\n temp_dew is not provided, then dew point improvements are not\n applied.\n\n albedo : numeric, default 0.25\n Ground surface albedo. [unitless]\n\n model : String, default 'perez'\n Irradiance model. See :py:func:`get_sky_diffuse` for allowed values.\n\n model_perez : String, default 'allsitescomposite1990'\n Used only if model='perez'. See :py:func:`perez`.\n\n calculate_gt_90 : bool, default True\n Controls if the algorithm evaluates inputs with AOI >= 90 degrees.\n If False, returns nan for AOI >= 90 degrees. 
Significant speed ups\n can be achieved by setting this parameter to False.\n\n max_iterations : int, default 30\n Maximum number of iterations for the aoi < 90 deg algorithm.\n\n Returns\n -------\n data : DataFrame\n Contains the following keys/columns:\n\n * ``ghi``: the modeled global horizontal irradiance in W/m^2.\n * ``dni``: the modeled direct normal irradiance in W/m^2.\n * ``dhi``: the modeled diffuse horizontal irradiance in\n W/m^2.\n\n References\n ----------\n .. [1] B. Marion, A model for deriving the direct normal and\n diffuse horizontal irradiance from the global tilted\n irradiance, Solar Energy 122, 1037-1046.\n :doi:`10.1016/j.solener.2015.10.024`\n \"\"\"\n\n aoi_lt_90 = aoi < 90\n\n # for AOI less than 90 degrees\n ghi, dni, dhi, kt_prime = _gti_dirint_lt_90(\n poa_global, aoi, aoi_lt_90, solar_zenith, solar_azimuth, times,\n surface_tilt, surface_azimuth, pressure=pressure,\n use_delta_kt_prime=use_delta_kt_prime, temp_dew=temp_dew,\n albedo=albedo, model=model, model_perez=model_perez,\n max_iterations=max_iterations)\n\n # for AOI greater than or equal to 90 degrees\n if calculate_gt_90:\n ghi_gte_90, dni_gte_90, dhi_gte_90 = _gti_dirint_gte_90(\n poa_global, aoi, solar_zenith, solar_azimuth,\n surface_tilt, times, kt_prime,\n pressure=pressure, temp_dew=temp_dew, albedo=albedo)\n else:\n ghi_gte_90, dni_gte_90, dhi_gte_90 = np.nan, np.nan, np.nan\n\n # put the AOI < 90 and AOI >= 90 conditions together\n output = OrderedDict()\n output['ghi'] = ghi.where(aoi_lt_90, ghi_gte_90)\n output['dni'] = dni.where(aoi_lt_90, dni_gte_90)\n output['dhi'] = dhi.where(aoi_lt_90, dhi_gte_90)\n\n output = pd.DataFrame(output, index=times)\n\n return output\n\n\ndef _gti_dirint_lt_90(poa_global, aoi, aoi_lt_90, solar_zenith, solar_azimuth,\n times, surface_tilt, surface_azimuth, pressure=101325.,\n use_delta_kt_prime=True, temp_dew=None, albedo=.25,\n model='perez', model_perez='allsitescomposite1990',\n max_iterations=30):\n \"\"\"\n GTI-DIRINT model for AOI < 90 degrees. 
See Marion 2015 Section 2.1.\n\n See gti_dirint signature for parameter details.\n \"\"\"\n I0 = get_extra_radiation(times, 1370, 'spencer')\n cos_zenith = tools.cosd(solar_zenith)\n # I0h as in Marion 2015 eqns 1, 3\n I0h = I0 * np.maximum(0.065, cos_zenith)\n\n airmass = atmosphere.get_relative_airmass(solar_zenith, model='kasten1966')\n airmass = atmosphere.get_absolute_airmass(airmass, pressure)\n\n # these coeffs and diff variables and the loop below\n # implement figure 1 of Marion 2015\n\n # make coeffs that is at least 30 elements long so that all\n # coeffs can be assigned as specified in Marion 2015.\n # slice below will limit iterations if necessary\n coeffs = np.empty(max(30, max_iterations))\n coeffs[0:3] = 1\n coeffs[3:10] = 0.5\n coeffs[10:20] = 0.25\n coeffs[20:] = 0.125\n coeffs = coeffs[:max_iterations] # covers case where max_iterations < 30\n\n # initialize diff\n diff = pd.Series(9999, index=times)\n best_diff = diff\n\n # initialize poa_global_i\n poa_global_i = poa_global\n\n for iteration, coeff in enumerate(coeffs):\n\n # test if difference between modeled GTI and\n # measured GTI (poa_global) is less than 1 W/m^2\n # only test for aoi less than 90 deg\n best_diff_lte_1 = best_diff <= 1\n best_diff_lte_1_lt_90 = best_diff_lte_1[aoi_lt_90]\n if best_diff_lte_1_lt_90.all():\n # all aoi < 90 points have a difference <= 1, so break loop\n break\n\n # calculate kt and DNI from GTI\n kt = clearness_index(poa_global_i, aoi, I0) # kt from Marion eqn 2\n disc_dni = np.maximum(_disc_kn(kt, airmass)[0] * I0, 0)\n kt_prime = clearness_index_zenith_independent(kt, airmass)\n # dirint DNI in Marion eqn 3\n dni = _dirint_from_dni_ktprime(disc_dni, kt_prime, solar_zenith,\n use_delta_kt_prime, temp_dew)\n\n # calculate DHI using Marion eqn 3 (identify 1st term on RHS as GHI)\n # I0h has a minimum zenith projection, but multiplier of DNI does not\n ghi = kt * I0h # Kt * I0 * max(0.065, cos(zen))\n dhi = ghi - dni * cos_zenith # no cos(zen) restriction here\n\n # following SSC code\n dni = np.maximum(dni, 0)\n ghi = np.maximum(ghi, 0)\n dhi = np.maximum(dhi, 0)\n\n # use DNI and DHI to model GTI\n # GTI-DIRINT uses perez transposition model, but we allow for\n # any model here\n all_irrad = get_total_irradiance(\n surface_tilt, surface_azimuth, solar_zenith, solar_azimuth,\n dni, ghi, dhi, dni_extra=I0, airmass=airmass,\n albedo=albedo, model=model, model_perez=model_perez)\n\n gti_model = all_irrad['poa_global']\n\n # calculate new diff\n diff = gti_model - poa_global\n\n # determine if the new diff is smaller in magnitude\n # than the old diff\n diff_abs = diff.abs()\n smallest_diff = diff_abs < best_diff\n\n # save the best differences\n best_diff = diff_abs.where(smallest_diff, best_diff)\n\n # on first iteration, the best values are the only values\n if iteration == 0:\n best_ghi = ghi\n best_dni = dni\n best_dhi = dhi\n best_kt_prime = kt_prime\n else:\n # save new DNI, DHI, DHI if they provide the best consistency\n # otherwise use the older values.\n best_ghi = ghi.where(smallest_diff, best_ghi)\n best_dni = dni.where(smallest_diff, best_dni)\n best_dhi = dhi.where(smallest_diff, best_dhi)\n best_kt_prime = kt_prime.where(smallest_diff, best_kt_prime)\n\n # calculate adjusted inputs for next iteration. 
Marion eqn 4\n poa_global_i = np.maximum(1.0, poa_global_i - coeff * diff)\n else:\n # we are here because we ran out of coeffs to loop over and\n # therefore we have exceeded max_iterations\n import warnings\n failed_points = best_diff[aoi_lt_90][~best_diff_lte_1_lt_90]\n warnings.warn(\n ('%s points failed to converge after %s iterations. best_diff:\\n%s'\n % (len(failed_points), max_iterations, failed_points)),\n RuntimeWarning)\n\n # return the best data, whether or not the solution converged\n return best_ghi, best_dni, best_dhi, best_kt_prime\n\n\ndef _gti_dirint_gte_90(poa_global, aoi, solar_zenith, solar_azimuth,\n surface_tilt, times, kt_prime,\n pressure=101325., temp_dew=None, albedo=.25):\n \"\"\"\n GTI-DIRINT model for AOI >= 90 degrees. See Marion 2015 Section 2.2.\n\n See gti_dirint signature for parameter details.\n \"\"\"\n kt_prime_gte_90 = _gti_dirint_gte_90_kt_prime(aoi, solar_zenith,\n solar_azimuth, times,\n kt_prime)\n\n I0 = get_extra_radiation(times, 1370, 'spencer')\n airmass = atmosphere.get_relative_airmass(solar_zenith, model='kasten1966')\n airmass = atmosphere.get_absolute_airmass(airmass, pressure)\n kt = kt_prime_gte_90 * _kt_kt_prime_factor(airmass)\n disc_dni = np.maximum(_disc_kn(kt, airmass)[0] * I0, 0)\n\n dni_gte_90 = _dirint_from_dni_ktprime(disc_dni, kt_prime, solar_zenith,\n False, temp_dew)\n\n dni_gte_90_proj = dni_gte_90 * tools.cosd(solar_zenith)\n cos_surface_tilt = tools.cosd(surface_tilt)\n\n # isotropic sky plus ground diffuse\n dhi_gte_90 = (\n (2 * poa_global - dni_gte_90_proj * albedo * (1 - cos_surface_tilt)) /\n (1 + cos_surface_tilt + albedo * (1 - cos_surface_tilt)))\n\n ghi_gte_90 = dni_gte_90_proj + dhi_gte_90\n\n return ghi_gte_90, dni_gte_90, dhi_gte_90\n\n\ndef _gti_dirint_gte_90_kt_prime(aoi, solar_zenith, solar_azimuth, times,\n kt_prime):\n \"\"\"\n Determine kt' values to be used in GTI-DIRINT AOI >= 90 deg case.\n See Marion 2015 Section 2.2.\n\n For AOI >= 90 deg: average of the kt_prime values for 65 < AOI < 80\n in each day's morning and afternoon. Morning and afternoon are treated\n separately.\n\n For AOI < 90 deg: NaN.\n\n See gti_dirint signature for parameter details.\n\n Returns\n -------\n kt_prime_gte_90 : Series\n Index is `times`.\n \"\"\"\n # kt_prime values from DIRINT calculation for AOI < 90 case\n # set the kt_prime from sunrise to AOI=90 to be equal to\n # the kt_prime for 65 < AOI < 80 during the morning.\n # similar for the afternoon. 
repeat for every day.\n aoi_gte_90 = aoi >= 90\n aoi_65_80 = (aoi > 65) & (aoi < 80)\n zenith_lt_90 = solar_zenith < 90\n morning = solar_azimuth < 180\n afternoon = solar_azimuth > 180\n aoi_65_80_morning = aoi_65_80 & morning\n aoi_65_80_afternoon = aoi_65_80 & afternoon\n zenith_lt_90_aoi_gte_90_morning = zenith_lt_90 & aoi_gte_90 & morning\n zenith_lt_90_aoi_gte_90_afternoon = zenith_lt_90 & aoi_gte_90 & afternoon\n\n kt_prime_gte_90 = []\n for date, data in kt_prime.groupby(times.date):\n kt_prime_am_avg = data[aoi_65_80_morning].mean()\n kt_prime_pm_avg = data[aoi_65_80_afternoon].mean()\n\n kt_prime_by_date = pd.Series(np.nan, index=data.index)\n kt_prime_by_date[zenith_lt_90_aoi_gte_90_morning] = kt_prime_am_avg\n kt_prime_by_date[zenith_lt_90_aoi_gte_90_afternoon] = kt_prime_pm_avg\n kt_prime_gte_90.append(kt_prime_by_date)\n kt_prime_gte_90 = pd.concat(kt_prime_gte_90)\n\n return kt_prime_gte_90\n\n\ndef erbs(ghi, zenith, datetime_or_doy, min_cos_zenith=0.065, max_zenith=87):\n r\"\"\"\n Estimate DNI and DHI from GHI using the Erbs model.\n\n The Erbs model [1]_ estimates the diffuse fraction DF from global\n horizontal irradiance through an empirical relationship between DF\n and the ratio of GHI to extraterrestrial irradiance, Kt. The\n function uses the diffuse fraction to compute DHI as\n\n .. math::\n\n DHI = DF \\times GHI\n\n DNI is then estimated as\n\n .. math::\n\n DNI = (GHI - DHI)/\\cos(Z)\n\n where Z is the zenith angle.\n\n Parameters\n ----------\n ghi: numeric\n Global horizontal irradiance in W/m^2.\n zenith: numeric\n True (not refraction-corrected) zenith angles in decimal degrees.\n datetime_or_doy : int, float, array, pd.DatetimeIndex\n Day of year or array of days of year e.g.\n pd.DatetimeIndex.dayofyear, or pd.DatetimeIndex.\n min_cos_zenith : numeric, default 0.065\n Minimum value of cos(zenith) to allow when calculating global\n clearness index `kt`. Equivalent to zenith = 86.273 degrees.\n max_zenith : numeric, default 87\n Maximum value of zenith to allow in DNI calculation. DNI will be\n set to 0 for times with zenith values greater than `max_zenith`.\n\n Returns\n -------\n data : OrderedDict or DataFrame\n Contains the following keys/columns:\n\n * ``dni``: the modeled direct normal irradiance in W/m^2.\n * ``dhi``: the modeled diffuse horizontal irradiance in\n W/m^2.\n * ``kt``: Ratio of global to extraterrestrial irradiance\n on a horizontal plane.\n\n References\n ----------\n .. [1] D. G. Erbs, S. A. Klein and J. A. Duffie, Estimation of the\n diffuse radiation fraction for hourly, daily and monthly-average\n global radiation, Solar Energy 28(4), pp 293-302, 1982. Eq. 
1\n\n See also\n --------\n dirint\n disc\n \"\"\"\n\n dni_extra = get_extra_radiation(datetime_or_doy)\n\n kt = clearness_index(ghi, zenith, dni_extra, min_cos_zenith=min_cos_zenith,\n max_clearness_index=1)\n\n # For Kt <= 0.22, set the diffuse fraction\n df = 1 - 0.09*kt\n\n # For Kt > 0.22 and Kt <= 0.8, set the diffuse fraction\n df = np.where((kt > 0.22) & (kt <= 0.8),\n 0.9511 - 0.1604*kt + 4.388*kt**2 -\n 16.638*kt**3 + 12.336*kt**4,\n df)\n\n # For Kt > 0.8, set the diffuse fraction\n df = np.where(kt > 0.8, 0.165, df)\n\n dhi = df * ghi\n\n dni = (ghi - dhi) / tools.cosd(zenith)\n bad_values = (zenith > max_zenith) | (ghi < 0) | (dni < 0)\n dni = np.where(bad_values, 0, dni)\n # ensure that closure relationship remains valid\n dhi = np.where(bad_values, ghi, dhi)\n\n data = OrderedDict()\n data['dni'] = dni\n data['dhi'] = dhi\n data['kt'] = kt\n\n if isinstance(datetime_or_doy, pd.DatetimeIndex):\n data = pd.DataFrame(data, index=datetime_or_doy)\n\n return data\n\n\ndef campbell_norman(zenith, transmittance, pressure=101325.0,\n dni_extra=1367.0):\n '''\n Determine DNI, DHI, GHI from extraterrestrial flux, transmittance,\n and atmospheric pressure.\n\n Parameters\n ----------\n zenith: pd.Series\n True (not refraction-corrected) zenith angles in decimal\n degrees. If Z is a vector it must be of the same size as all\n other vector inputs. Z must be >=0 and <=180.\n\n transmittance: float\n Atmospheric transmittance between 0 and 1.\n\n pressure: float, default 101325.0\n Air pressure\n\n dni_extra: float, default 1367.0\n Direct irradiance incident at the top of the atmosphere.\n\n Returns\n -------\n irradiance: DataFrame\n Modeled direct normal irradiance, direct horizontal irradiance,\n and global horizontal irradiance in W/m^2\n\n References\n ----------\n .. [1] Campbell, G. S., J. M. Norman (1998) An Introduction to\n Environmental Biophysics. 2nd Ed. New York: Springer.\n '''\n\n tau = transmittance\n\n airmass = atmosphere.get_relative_airmass(zenith, model='simple')\n airmass = atmosphere.get_absolute_airmass(airmass, pressure=pressure)\n dni = dni_extra*tau**airmass\n cos_zen = tools.cosd(zenith)\n dhi = 0.3 * (1.0 - tau**airmass) * dni_extra * cos_zen\n ghi = dhi + dni * cos_zen\n\n irrads = OrderedDict()\n irrads['ghi'] = ghi\n irrads['dni'] = dni\n irrads['dhi'] = dhi\n\n if isinstance(ghi, pd.Series):\n irrads = pd.DataFrame(irrads)\n\n return irrads\n\n\ndef _liujordan(zenith, transmittance, airmass, dni_extra=1367.0):\n '''\n Determine DNI, DHI, GHI from extraterrestrial flux, transmittance,\n and optical air mass number.\n\n Liu and Jordan, 1960, developed a simplified direct radiation model.\n DHI is from an empirical equation for diffuse radiation from Liu and\n Jordan, 1960.\n\n Parameters\n ----------\n zenith: pd.Series\n True (not refraction-corrected) zenith angles in decimal\n degrees. If Z is a vector it must be of the same size as all\n other vector inputs. Z must be >=0 and <=180.\n\n transmittance: float\n Atmospheric transmittance between 0 and 1.\n\n pressure: float, default 101325.0\n Air pressure\n\n dni_extra: float, default 1367.0\n Direct irradiance incident at the top of the atmosphere.\n\n Returns\n -------\n irradiance: DataFrame\n Modeled direct normal irradiance, direct horizontal irradiance,\n and global horizontal irradiance in W/m^2\n\n References\n ----------\n .. [1] Campbell, G. S., J. M. Norman (1998) An Introduction to\n Environmental Biophysics. 2nd Ed. New York: Springer.\n\n .. [2] Liu, B. Y., R. C. Jordan, (1960). 
\"The interrelationship and\n characteristic distribution of direct, diffuse, and total solar\n radiation\". Solar Energy 4:1-19\n '''\n\n tau = transmittance\n\n dni = dni_extra*tau**airmass\n dhi = 0.3 * (1.0 - tau**airmass) * dni_extra * np.cos(np.radians(zenith))\n ghi = dhi + dni * np.cos(np.radians(zenith))\n\n irrads = OrderedDict()\n irrads['ghi'] = ghi\n irrads['dni'] = dni\n irrads['dhi'] = dhi\n\n if isinstance(ghi, pd.Series):\n irrads = pd.DataFrame(irrads)\n\n return irrads\n\n\ndef _get_perez_coefficients(perezmodel):\n '''\n Find coefficients for the Perez model\n\n Parameters\n ----------\n\n perezmodel : string (optional, default='allsitescomposite1990')\n\n a character string which selects the desired set of Perez\n coefficients. If model is not provided as an input, the default,\n '1990' will be used.\n\n All possible model selections are:\n\n * '1990'\n * 'allsitescomposite1990' (same as '1990')\n * 'allsitescomposite1988'\n * 'sandiacomposite1988'\n * 'usacomposite1988'\n * 'france1988'\n * 'phoenix1988'\n * 'elmonte1988'\n * 'osage1988'\n * 'albuquerque1988'\n * 'capecanaveral1988'\n * 'albany1988'\n\n Returns\n --------\n F1coeffs, F2coeffs : (array, array)\n F1 and F2 coefficients for the Perez model\n\n References\n ----------\n .. [1] Loutzenhiser P.G. et. al. \"Empirical validation of models to\n compute solar irradiance on inclined surfaces for building energy\n simulation\" 2007, Solar Energy vol. 81. pp. 254-267\n\n .. [2] Perez, R., Seals, R., Ineichen, P., Stewart, R., Menicucci, D.,\n 1987. A new simplified version of the Perez diffuse irradiance model\n for tilted surfaces. Solar Energy 39(3), 221-232.\n\n .. [3] Perez, R., Ineichen, P., Seals, R., Michalsky, J., Stewart, R.,\n 1990. Modeling daylight availability and irradiance components from\n direct and global irradiance. Solar Energy 44 (5), 271-289.\n\n .. [4] Perez, R. et. al 1988. \"The Development and Verification of the\n Perez Diffuse Radiation Model\". 
SAND88-7030\n\n '''\n coeffdict = {\n 'allsitescomposite1990': [\n [-0.0080, 0.5880, -0.0620, -0.0600, 0.0720, -0.0220],\n [0.1300, 0.6830, -0.1510, -0.0190, 0.0660, -0.0290],\n [0.3300, 0.4870, -0.2210, 0.0550, -0.0640, -0.0260],\n [0.5680, 0.1870, -0.2950, 0.1090, -0.1520, -0.0140],\n [0.8730, -0.3920, -0.3620, 0.2260, -0.4620, 0.0010],\n [1.1320, -1.2370, -0.4120, 0.2880, -0.8230, 0.0560],\n [1.0600, -1.6000, -0.3590, 0.2640, -1.1270, 0.1310],\n [0.6780, -0.3270, -0.2500, 0.1560, -1.3770, 0.2510]],\n 'allsitescomposite1988': [\n [-0.0180, 0.7050, -0.071, -0.0580, 0.1020, -0.0260],\n [0.1910, 0.6450, -0.1710, 0.0120, 0.0090, -0.0270],\n [0.4400, 0.3780, -0.2560, 0.0870, -0.1040, -0.0250],\n [0.7560, -0.1210, -0.3460, 0.1790, -0.3210, -0.0080],\n [0.9960, -0.6450, -0.4050, 0.2600, -0.5900, 0.0170],\n [1.0980, -1.2900, -0.3930, 0.2690, -0.8320, 0.0750],\n [0.9730, -1.1350, -0.3780, 0.1240, -0.2580, 0.1490],\n [0.6890, -0.4120, -0.2730, 0.1990, -1.6750, 0.2370]],\n 'sandiacomposite1988': [\n [-0.1960, 1.0840, -0.0060, -0.1140, 0.1800, -0.0190],\n [0.2360, 0.5190, -0.1800, -0.0110, 0.0200, -0.0380],\n [0.4540, 0.3210, -0.2550, 0.0720, -0.0980, -0.0460],\n [0.8660, -0.3810, -0.3750, 0.2030, -0.4030, -0.0490],\n [1.0260, -0.7110, -0.4260, 0.2730, -0.6020, -0.0610],\n [0.9780, -0.9860, -0.3500, 0.2800, -0.9150, -0.0240],\n [0.7480, -0.9130, -0.2360, 0.1730, -1.0450, 0.0650],\n [0.3180, -0.7570, 0.1030, 0.0620, -1.6980, 0.2360]],\n 'usacomposite1988': [\n [-0.0340, 0.6710, -0.0590, -0.0590, 0.0860, -0.0280],\n [0.2550, 0.4740, -0.1910, 0.0180, -0.0140, -0.0330],\n [0.4270, 0.3490, -0.2450, 0.0930, -0.1210, -0.0390],\n [0.7560, -0.2130, -0.3280, 0.1750, -0.3040, -0.0270],\n [1.0200, -0.8570, -0.3850, 0.2800, -0.6380, -0.0190],\n [1.0500, -1.3440, -0.3480, 0.2800, -0.8930, 0.0370],\n [0.9740, -1.5070, -0.3700, 0.1540, -0.5680, 0.1090],\n [0.7440, -1.8170, -0.2560, 0.2460, -2.6180, 0.2300]],\n 'france1988': [\n [0.0130, 0.7640, -0.1000, -0.0580, 0.1270, -0.0230],\n [0.0950, 0.9200, -0.1520, 0, 0.0510, -0.0200],\n [0.4640, 0.4210, -0.2800, 0.0640, -0.0510, -0.0020],\n [0.7590, -0.0090, -0.3730, 0.2010, -0.3820, 0.0100],\n [0.9760, -0.4000, -0.4360, 0.2710, -0.6380, 0.0510],\n [1.1760, -1.2540, -0.4620, 0.2950, -0.9750, 0.1290],\n [1.1060, -1.5630, -0.3980, 0.3010, -1.4420, 0.2120],\n [0.9340, -1.5010, -0.2710, 0.4200, -2.9170, 0.2490]],\n 'phoenix1988': [\n [-0.0030, 0.7280, -0.0970, -0.0750, 0.1420, -0.0430],\n [0.2790, 0.3540, -0.1760, 0.0300, -0.0550, -0.0540],\n [0.4690, 0.1680, -0.2460, 0.0480, -0.0420, -0.0570],\n [0.8560, -0.5190, -0.3400, 0.1760, -0.3800, -0.0310],\n [0.9410, -0.6250, -0.3910, 0.1880, -0.3600, -0.0490],\n [1.0560, -1.1340, -0.4100, 0.2810, -0.7940, -0.0650],\n [0.9010, -2.1390, -0.2690, 0.1180, -0.6650, 0.0460],\n [0.1070, 0.4810, 0.1430, -0.1110, -0.1370, 0.2340]],\n 'elmonte1988': [\n [0.0270, 0.7010, -0.1190, -0.0580, 0.1070, -0.0600],\n [0.1810, 0.6710, -0.1780, -0.0790, 0.1940, -0.0350],\n [0.4760, 0.4070, -0.2880, 0.0540, -0.0320, -0.0550],\n [0.8750, -0.2180, -0.4030, 0.1870, -0.3090, -0.0610],\n [1.1660, -1.0140, -0.4540, 0.2110, -0.4100, -0.0440],\n [1.1430, -2.0640, -0.2910, 0.0970, -0.3190, 0.0530],\n [1.0940, -2.6320, -0.2590, 0.0290, -0.4220, 0.1470],\n [0.1550, 1.7230, 0.1630, -0.1310, -0.0190, 0.2770]],\n 'osage1988': [\n [-0.3530, 1.4740, 0.0570, -0.1750, 0.3120, 0.0090],\n [0.3630, 0.2180, -0.2120, 0.0190, -0.0340, -0.0590],\n [-0.0310, 1.2620, -0.0840, -0.0820, 0.2310, -0.0170],\n [0.6910, 0.0390, -0.2950, 0.0910, -0.1310, -0.0350],\n [1.1820, -1.3500, 
-0.3210, 0.4080, -0.9850, -0.0880],\n [0.7640, 0.0190, -0.2030, 0.2170, -0.2940, -0.1030],\n [0.2190, 1.4120, 0.2440, 0.4710, -2.9880, 0.0340],\n [3.5780, 22.2310, -10.7450, 2.4260, 4.8920, -5.6870]],\n 'albuquerque1988': [\n [0.0340, 0.5010, -0.0940, -0.0630, 0.1060, -0.0440],\n [0.2290, 0.4670, -0.1560, -0.0050, -0.0190, -0.0230],\n [0.4860, 0.2410, -0.2530, 0.0530, -0.0640, -0.0220],\n [0.8740, -0.3930, -0.3970, 0.1810, -0.3270, -0.0370],\n [1.1930, -1.2960, -0.5010, 0.2810, -0.6560, -0.0450],\n [1.0560, -1.7580, -0.3740, 0.2260, -0.7590, 0.0340],\n [0.9010, -4.7830, -0.1090, 0.0630, -0.9700, 0.1960],\n [0.8510, -7.0550, -0.0530, 0.0600, -2.8330, 0.3300]],\n 'capecanaveral1988': [\n [0.0750, 0.5330, -0.1240, -0.0670, 0.0420, -0.0200],\n [0.2950, 0.4970, -0.2180, -0.0080, 0.0030, -0.0290],\n [0.5140, 0.0810, -0.2610, 0.0750, -0.1600, -0.0290],\n [0.7470, -0.3290, -0.3250, 0.1810, -0.4160, -0.0300],\n [0.9010, -0.8830, -0.2970, 0.1780, -0.4890, 0.0080],\n [0.5910, -0.0440, -0.1160, 0.2350, -0.9990, 0.0980],\n [0.5370, -2.4020, 0.3200, 0.1690, -1.9710, 0.3100],\n [-0.8050, 4.5460, 1.0720, -0.2580, -0.9500, 0.7530]],\n 'albany1988': [\n [0.0120, 0.5540, -0.0760, -0.0520, 0.0840, -0.0290],\n [0.2670, 0.4370, -0.1940, 0.0160, 0.0220, -0.0360],\n [0.4200, 0.3360, -0.2370, 0.0740, -0.0520, -0.0320],\n [0.6380, -0.0010, -0.2810, 0.1380, -0.1890, -0.0120],\n [1.0190, -1.0270, -0.3420, 0.2710, -0.6280, 0.0140],\n [1.1490, -1.9400, -0.3310, 0.3220, -1.0970, 0.0800],\n [1.4340, -3.9940, -0.4920, 0.4530, -2.3760, 0.1170],\n [1.0070, -2.2920, -0.4820, 0.3900, -3.3680, 0.2290]], }\n\n array = np.array(coeffdict[perezmodel])\n\n F1coeffs = array[:, 0:3]\n F2coeffs = array[:, 3:7]\n\n return F1coeffs, F2coeffs\n\n\ndef _get_dirint_coeffs():\n \"\"\"\n A place to stash the dirint coefficients.\n\n Returns\n -------\n np.array with shape ``(6, 6, 7, 5)``.\n Ordering is ``[kt_prime_bin, zenith_bin, delta_kt_prime_bin, w_bin]``\n \"\"\"\n\n # To allow for maximum copy/paste from the MATLAB 1-indexed code,\n # we create and assign values to an oversized array.\n # Then, we return the [1:, 1:, :, :] slice.\n\n coeffs = np.zeros((7, 7, 7, 5))\n\n coeffs[1, 1, :, :] = [\n [0.385230, 0.385230, 0.385230, 0.462880, 0.317440],\n [0.338390, 0.338390, 0.221270, 0.316730, 0.503650],\n [0.235680, 0.235680, 0.241280, 0.157830, 0.269440],\n [0.830130, 0.830130, 0.171970, 0.841070, 0.457370],\n [0.548010, 0.548010, 0.478000, 0.966880, 1.036370],\n [0.548010, 0.548010, 1.000000, 3.012370, 1.976540],\n [0.582690, 0.582690, 0.229720, 0.892710, 0.569950]]\n\n coeffs[1, 2, :, :] = [\n [0.131280, 0.131280, 0.385460, 0.511070, 0.127940],\n [0.223710, 0.223710, 0.193560, 0.304560, 0.193940],\n [0.229970, 0.229970, 0.275020, 0.312730, 0.244610],\n [0.090100, 0.184580, 0.260500, 0.687480, 0.579440],\n [0.131530, 0.131530, 0.370190, 1.380350, 1.052270],\n [1.116250, 1.116250, 0.928030, 3.525490, 2.316920],\n [0.090100, 0.237000, 0.300040, 0.812470, 0.664970]]\n\n coeffs[1, 3, :, :] = [\n [0.587510, 0.130000, 0.400000, 0.537210, 0.832490],\n [0.306210, 0.129830, 0.204460, 0.500000, 0.681640],\n [0.224020, 0.260620, 0.334080, 0.501040, 0.350470],\n [0.421540, 0.753970, 0.750660, 3.706840, 0.983790],\n [0.706680, 0.373530, 1.245670, 0.864860, 1.992630],\n [4.864400, 0.117390, 0.265180, 0.359180, 3.310820],\n [0.392080, 0.493290, 0.651560, 1.932780, 0.898730]]\n\n coeffs[1, 4, :, :] = [\n [0.126970, 0.126970, 0.126970, 0.126970, 0.126970],\n [0.810820, 0.810820, 0.810820, 0.810820, 0.810820],\n [3.241680, 2.500000, 2.291440, 2.291440, 
2.291440],\n [4.000000, 3.000000, 2.000000, 0.975430, 1.965570],\n [12.494170, 12.494170, 8.000000, 5.083520, 8.792390],\n [21.744240, 21.744240, 21.744240, 21.744240, 21.744240],\n [3.241680, 12.494170, 1.620760, 1.375250, 2.331620]]\n\n coeffs[1, 5, :, :] = [\n [0.126970, 0.126970, 0.126970, 0.126970, 0.126970],\n [0.810820, 0.810820, 0.810820, 0.810820, 0.810820],\n [3.241680, 2.500000, 2.291440, 2.291440, 2.291440],\n [4.000000, 3.000000, 2.000000, 0.975430, 1.965570],\n [12.494170, 12.494170, 8.000000, 5.083520, 8.792390],\n [21.744240, 21.744240, 21.744240, 21.744240, 21.744240],\n [3.241680, 12.494170, 1.620760, 1.375250, 2.331620]]\n\n coeffs[1, 6, :, :] = [\n [0.126970, 0.126970, 0.126970, 0.126970, 0.126970],\n [0.810820, 0.810820, 0.810820, 0.810820, 0.810820],\n [3.241680, 2.500000, 2.291440, 2.291440, 2.291440],\n [4.000000, 3.000000, 2.000000, 0.975430, 1.965570],\n [12.494170, 12.494170, 8.000000, 5.083520, 8.792390],\n [21.744240, 21.744240, 21.744240, 21.744240, 21.744240],\n [3.241680, 12.494170, 1.620760, 1.375250, 2.331620]]\n\n coeffs[2, 1, :, :] = [\n [0.337440, 0.337440, 0.969110, 1.097190, 1.116080],\n [0.337440, 0.337440, 0.969110, 1.116030, 0.623900],\n [0.337440, 0.337440, 1.530590, 1.024420, 0.908480],\n [0.584040, 0.584040, 0.847250, 0.914940, 1.289300],\n [0.337440, 0.337440, 0.310240, 1.435020, 1.852830],\n [0.337440, 0.337440, 1.015010, 1.097190, 2.117230],\n [0.337440, 0.337440, 0.969110, 1.145730, 1.476400]]\n\n coeffs[2, 2, :, :] = [\n [0.300000, 0.300000, 0.700000, 1.100000, 0.796940],\n [0.219870, 0.219870, 0.526530, 0.809610, 0.649300],\n [0.386650, 0.386650, 0.119320, 0.576120, 0.685460],\n [0.746730, 0.399830, 0.470970, 0.986530, 0.785370],\n [0.575420, 0.936700, 1.649200, 1.495840, 1.335590],\n [1.319670, 4.002570, 1.276390, 2.644550, 2.518670],\n [0.665190, 0.678910, 1.012360, 1.199940, 0.986580]]\n\n coeffs[2, 3, :, :] = [\n [0.378870, 0.974060, 0.500000, 0.491880, 0.665290],\n [0.105210, 0.263470, 0.407040, 0.553460, 0.582590],\n [0.312900, 0.345240, 1.144180, 0.854790, 0.612280],\n [0.119070, 0.365120, 0.560520, 0.793720, 0.802600],\n [0.781610, 0.837390, 1.270420, 1.537980, 1.292950],\n [1.152290, 1.152290, 1.492080, 1.245370, 2.177100],\n [0.424660, 0.529550, 0.966910, 1.033460, 0.958730]]\n\n coeffs[2, 4, :, :] = [\n [0.310590, 0.714410, 0.252450, 0.500000, 0.607600],\n [0.975190, 0.363420, 0.500000, 0.400000, 0.502800],\n [0.175580, 0.196250, 0.476360, 1.072470, 0.490510],\n [0.719280, 0.698620, 0.657770, 1.190840, 0.681110],\n [0.426240, 1.464840, 0.678550, 1.157730, 0.978430],\n [2.501120, 1.789130, 1.387090, 2.394180, 2.394180],\n [0.491640, 0.677610, 0.685610, 1.082400, 0.735410]]\n\n coeffs[2, 5, :, :] = [\n [0.597000, 0.500000, 0.300000, 0.310050, 0.413510],\n [0.314790, 0.336310, 0.400000, 0.400000, 0.442460],\n [0.166510, 0.460440, 0.552570, 1.000000, 0.461610],\n [0.401020, 0.559110, 0.403630, 1.016710, 0.671490],\n [0.400360, 0.750830, 0.842640, 1.802600, 1.023830],\n [3.315300, 1.510380, 2.443650, 1.638820, 2.133990],\n [0.530790, 0.745850, 0.693050, 1.458040, 0.804500]]\n\n coeffs[2, 6, :, :] = [\n [0.597000, 0.500000, 0.300000, 0.310050, 0.800920],\n [0.314790, 0.336310, 0.400000, 0.400000, 0.237040],\n [0.166510, 0.460440, 0.552570, 1.000000, 0.581990],\n [0.401020, 0.559110, 0.403630, 1.016710, 0.898570],\n [0.400360, 0.750830, 0.842640, 1.802600, 3.400390],\n [3.315300, 1.510380, 2.443650, 1.638820, 2.508780],\n [0.204340, 1.157740, 2.003080, 2.622080, 1.409380]]\n\n coeffs[3, 1, :, :] = [\n [1.242210, 1.242210, 1.242210, 
1.242210, 1.242210],\n [0.056980, 0.056980, 0.656990, 0.656990, 0.925160],\n [0.089090, 0.089090, 1.040430, 1.232480, 1.205300],\n [1.053850, 1.053850, 1.399690, 1.084640, 1.233340],\n [1.151540, 1.151540, 1.118290, 1.531640, 1.411840],\n [1.494980, 1.494980, 1.700000, 1.800810, 1.671600],\n [1.018450, 1.018450, 1.153600, 1.321890, 1.294670]]\n\n coeffs[3, 2, :, :] = [\n [0.700000, 0.700000, 1.023460, 0.700000, 0.945830],\n [0.886300, 0.886300, 1.333620, 0.800000, 1.066620],\n [0.902180, 0.902180, 0.954330, 1.126690, 1.097310],\n [1.095300, 1.075060, 1.176490, 1.139470, 1.096110],\n [1.201660, 1.201660, 1.438200, 1.256280, 1.198060],\n [1.525850, 1.525850, 1.869160, 1.985410, 1.911590],\n [1.288220, 1.082810, 1.286370, 1.166170, 1.119330]]\n\n coeffs[3, 3, :, :] = [\n [0.600000, 1.029910, 0.859890, 0.550000, 0.813600],\n [0.604450, 1.029910, 0.859890, 0.656700, 0.928840],\n [0.455850, 0.750580, 0.804930, 0.823000, 0.911000],\n [0.526580, 0.932310, 0.908620, 0.983520, 0.988090],\n [1.036110, 1.100690, 0.848380, 1.035270, 1.042380],\n [1.048440, 1.652720, 0.900000, 2.350410, 1.082950],\n [0.817410, 0.976160, 0.861300, 0.974780, 1.004580]]\n\n coeffs[3, 4, :, :] = [\n [0.782110, 0.564280, 0.600000, 0.600000, 0.665740],\n [0.894480, 0.680730, 0.541990, 0.800000, 0.669140],\n [0.487460, 0.818950, 0.841830, 0.872540, 0.709040],\n [0.709310, 0.872780, 0.908480, 0.953290, 0.844350],\n [0.863920, 0.947770, 0.876220, 1.078750, 0.936910],\n [1.280350, 0.866720, 0.769790, 1.078750, 0.975130],\n [0.725420, 0.869970, 0.868810, 0.951190, 0.829220]]\n\n coeffs[3, 5, :, :] = [\n [0.791750, 0.654040, 0.483170, 0.409000, 0.597180],\n [0.566140, 0.948990, 0.971820, 0.653570, 0.718550],\n [0.648710, 0.637730, 0.870510, 0.860600, 0.694300],\n [0.637630, 0.767610, 0.925670, 0.990310, 0.847670],\n [0.736380, 0.946060, 1.117590, 1.029340, 0.947020],\n [1.180970, 0.850000, 1.050000, 0.950000, 0.888580],\n [0.700560, 0.801440, 0.961970, 0.906140, 0.823880]]\n\n coeffs[3, 6, :, :] = [\n [0.500000, 0.500000, 0.586770, 0.470550, 0.629790],\n [0.500000, 0.500000, 1.056220, 1.260140, 0.658140],\n [0.500000, 0.500000, 0.631830, 0.842620, 0.582780],\n [0.554710, 0.734730, 0.985820, 0.915640, 0.898260],\n [0.712510, 1.205990, 0.909510, 1.078260, 0.885610],\n [1.899260, 1.559710, 1.000000, 1.150000, 1.120390],\n [0.653880, 0.793120, 0.903320, 0.944070, 0.796130]]\n\n coeffs[4, 1, :, :] = [\n [1.000000, 1.000000, 1.050000, 1.170380, 1.178090],\n [0.960580, 0.960580, 1.059530, 1.179030, 1.131690],\n [0.871470, 0.871470, 0.995860, 1.141910, 1.114600],\n [1.201590, 1.201590, 0.993610, 1.109380, 1.126320],\n [1.065010, 1.065010, 0.828660, 0.939970, 1.017930],\n [1.065010, 1.065010, 0.623690, 1.119620, 1.132260],\n [1.071570, 1.071570, 0.958070, 1.114130, 1.127110]]\n\n coeffs[4, 2, :, :] = [\n [0.950000, 0.973390, 0.852520, 1.092200, 1.096590],\n [0.804120, 0.913870, 0.980990, 1.094580, 1.042420],\n [0.737540, 0.935970, 0.999940, 1.056490, 1.050060],\n [1.032980, 1.034540, 0.968460, 1.032080, 1.015780],\n [0.900000, 0.977210, 0.945960, 1.008840, 0.969960],\n [0.600000, 0.750000, 0.750000, 0.844710, 0.899100],\n [0.926800, 0.965030, 0.968520, 1.044910, 1.032310]]\n\n coeffs[4, 3, :, :] = [\n [0.850000, 1.029710, 0.961100, 1.055670, 1.009700],\n [0.818530, 0.960010, 0.996450, 1.081970, 1.036470],\n [0.765380, 0.953500, 0.948260, 1.052110, 1.000140],\n [0.775610, 0.909610, 0.927800, 0.987800, 0.952100],\n [1.000990, 0.881880, 0.875950, 0.949100, 0.893690],\n [0.902370, 0.875960, 0.807990, 0.942410, 0.917920],\n [0.856580, 0.928270, 
0.946820, 1.032260, 0.972990]]\n\n coeffs[4, 4, :, :] = [\n [0.750000, 0.857930, 0.983800, 1.056540, 0.980240],\n [0.750000, 0.987010, 1.013730, 1.133780, 1.038250],\n [0.800000, 0.947380, 1.012380, 1.091270, 0.999840],\n [0.800000, 0.914550, 0.908570, 0.999190, 0.915230],\n [0.778540, 0.800590, 0.799070, 0.902180, 0.851560],\n [0.680190, 0.317410, 0.507680, 0.388910, 0.646710],\n [0.794920, 0.912780, 0.960830, 1.057110, 0.947950]]\n\n coeffs[4, 5, :, :] = [\n [0.750000, 0.833890, 0.867530, 1.059890, 0.932840],\n [0.979700, 0.971470, 0.995510, 1.068490, 1.030150],\n [0.858850, 0.987920, 1.043220, 1.108700, 1.044900],\n [0.802400, 0.955110, 0.911660, 1.045070, 0.944470],\n [0.884890, 0.766210, 0.885390, 0.859070, 0.818190],\n [0.615680, 0.700000, 0.850000, 0.624620, 0.669300],\n [0.835570, 0.946150, 0.977090, 1.049350, 0.979970]]\n\n coeffs[4, 6, :, :] = [\n [0.689220, 0.809600, 0.900000, 0.789500, 0.853990],\n [0.854660, 0.852840, 0.938200, 0.923110, 0.955010],\n [0.938600, 0.932980, 1.010390, 1.043950, 1.041640],\n [0.843620, 0.981300, 0.951590, 0.946100, 0.966330],\n [0.694740, 0.814690, 0.572650, 0.400000, 0.726830],\n [0.211370, 0.671780, 0.416340, 0.297290, 0.498050],\n [0.843540, 0.882330, 0.911760, 0.898420, 0.960210]]\n\n coeffs[5, 1, :, :] = [\n [1.054880, 1.075210, 1.068460, 1.153370, 1.069220],\n [1.000000, 1.062220, 1.013470, 1.088170, 1.046200],\n [0.885090, 0.993530, 0.942590, 1.054990, 1.012740],\n [0.920000, 0.950000, 0.978720, 1.020280, 0.984440],\n [0.850000, 0.908500, 0.839940, 0.985570, 0.962180],\n [0.800000, 0.800000, 0.810080, 0.950000, 0.961550],\n [1.038590, 1.063200, 1.034440, 1.112780, 1.037800]]\n\n coeffs[5, 2, :, :] = [\n [1.017610, 1.028360, 1.058960, 1.133180, 1.045620],\n [0.920000, 0.998970, 1.033590, 1.089030, 1.022060],\n [0.912370, 0.949930, 0.979770, 1.020420, 0.981770],\n [0.847160, 0.935300, 0.930540, 0.955050, 0.946560],\n [0.880260, 0.867110, 0.874130, 0.972650, 0.883420],\n [0.627150, 0.627150, 0.700000, 0.774070, 0.845130],\n [0.973700, 1.006240, 1.026190, 1.071960, 1.017240]]\n\n coeffs[5, 3, :, :] = [\n [1.028710, 1.017570, 1.025900, 1.081790, 1.024240],\n [0.924980, 0.985500, 1.014100, 1.092210, 0.999610],\n [0.828570, 0.934920, 0.994950, 1.024590, 0.949710],\n [0.900810, 0.901330, 0.928830, 0.979570, 0.913100],\n [0.761030, 0.845150, 0.805360, 0.936790, 0.853460],\n [0.626400, 0.546750, 0.730500, 0.850000, 0.689050],\n [0.957630, 0.985480, 0.991790, 1.050220, 0.987900]]\n\n coeffs[5, 4, :, :] = [\n [0.992730, 0.993880, 1.017150, 1.059120, 1.017450],\n [0.975610, 0.987160, 1.026820, 1.075440, 1.007250],\n [0.871090, 0.933190, 0.974690, 0.979840, 0.952730],\n [0.828750, 0.868090, 0.834920, 0.905510, 0.871530],\n [0.781540, 0.782470, 0.767910, 0.764140, 0.795890],\n [0.743460, 0.693390, 0.514870, 0.630150, 0.715660],\n [0.934760, 0.957870, 0.959640, 0.972510, 0.981640]]\n\n coeffs[5, 5, :, :] = [\n [0.965840, 0.941240, 0.987100, 1.022540, 1.011160],\n [0.988630, 0.994770, 0.976590, 0.950000, 1.034840],\n [0.958200, 1.018080, 0.974480, 0.920000, 0.989870],\n [0.811720, 0.869090, 0.812020, 0.850000, 0.821050],\n [0.682030, 0.679480, 0.632450, 0.746580, 0.738550],\n [0.668290, 0.445860, 0.500000, 0.678920, 0.696510],\n [0.926940, 0.953350, 0.959050, 0.876210, 0.991490]]\n\n coeffs[5, 6, :, :] = [\n [0.948940, 0.997760, 0.850000, 0.826520, 0.998470],\n [1.017860, 0.970000, 0.850000, 0.700000, 0.988560],\n [1.000000, 0.950000, 0.850000, 0.606240, 0.947260],\n [1.000000, 0.746140, 0.751740, 0.598390, 0.725230],\n [0.922210, 0.500000, 0.376800, 
0.517110, 0.548630],\n [0.500000, 0.450000, 0.429970, 0.404490, 0.539940],\n [0.960430, 0.881630, 0.775640, 0.596350, 0.937680]]\n\n coeffs[6, 1, :, :] = [\n [1.030000, 1.040000, 1.000000, 1.000000, 1.049510],\n [1.050000, 0.990000, 0.990000, 0.950000, 0.996530],\n [1.050000, 0.990000, 0.990000, 0.820000, 0.971940],\n [1.050000, 0.790000, 0.880000, 0.820000, 0.951840],\n [1.000000, 0.530000, 0.440000, 0.710000, 0.928730],\n [0.540000, 0.470000, 0.500000, 0.550000, 0.773950],\n [1.038270, 0.920180, 0.910930, 0.821140, 1.034560]]\n\n coeffs[6, 2, :, :] = [\n [1.041020, 0.997520, 0.961600, 1.000000, 1.035780],\n [0.948030, 0.980000, 0.900000, 0.950360, 0.977460],\n [0.950000, 0.977250, 0.869270, 0.800000, 0.951680],\n [0.951870, 0.850000, 0.748770, 0.700000, 0.883850],\n [0.900000, 0.823190, 0.727450, 0.600000, 0.839870],\n [0.850000, 0.805020, 0.692310, 0.500000, 0.788410],\n [1.010090, 0.895270, 0.773030, 0.816280, 1.011680]]\n\n coeffs[6, 3, :, :] = [\n [1.022450, 1.004600, 0.983650, 1.000000, 1.032940],\n [0.943960, 0.999240, 0.983920, 0.905990, 0.978150],\n [0.936240, 0.946480, 0.850000, 0.850000, 0.930320],\n [0.816420, 0.885000, 0.644950, 0.817650, 0.865310],\n [0.742960, 0.765690, 0.561520, 0.700000, 0.827140],\n [0.643870, 0.596710, 0.474460, 0.600000, 0.651200],\n [0.971740, 0.940560, 0.714880, 0.864380, 1.001650]]\n\n coeffs[6, 4, :, :] = [\n [0.995260, 0.977010, 1.000000, 1.000000, 1.035250],\n [0.939810, 0.975250, 0.939980, 0.950000, 0.982550],\n [0.876870, 0.879440, 0.850000, 0.900000, 0.917810],\n [0.873480, 0.873450, 0.751470, 0.850000, 0.863040],\n [0.761470, 0.702360, 0.638770, 0.750000, 0.783120],\n [0.734080, 0.650000, 0.600000, 0.650000, 0.715660],\n [0.942160, 0.919100, 0.770340, 0.731170, 0.995180]]\n\n coeffs[6, 5, :, :] = [\n [0.952560, 0.916780, 0.920000, 0.900000, 1.005880],\n [0.928620, 0.994420, 0.900000, 0.900000, 0.983720],\n [0.913070, 0.850000, 0.850000, 0.800000, 0.924280],\n [0.868090, 0.807170, 0.823550, 0.600000, 0.844520],\n [0.769570, 0.719870, 0.650000, 0.550000, 0.733500],\n [0.580250, 0.650000, 0.600000, 0.500000, 0.628850],\n [0.904770, 0.852650, 0.708370, 0.493730, 0.949030]]\n\n coeffs[6, 6, :, :] = [\n [0.911970, 0.800000, 0.800000, 0.800000, 0.956320],\n [0.912620, 0.682610, 0.750000, 0.700000, 0.950110],\n [0.653450, 0.659330, 0.700000, 0.600000, 0.856110],\n [0.648440, 0.600000, 0.641120, 0.500000, 0.695780],\n [0.570000, 0.550000, 0.598800, 0.400000, 0.560150],\n [0.475230, 0.500000, 0.518640, 0.339970, 0.520230],\n [0.743440, 0.592190, 0.603060, 0.316930, 0.794390]]\n\n return coeffs[1:, 1:, :, :]\n\n\ndef dni(ghi, dhi, zenith, clearsky_dni=None, clearsky_tolerance=1.1,\n zenith_threshold_for_zero_dni=88.0,\n zenith_threshold_for_clearsky_limit=80.0):\n \"\"\"\n Determine DNI from GHI and DHI.\n\n When calculating the DNI from GHI and DHI the calculated DNI may be\n unreasonably high or negative for zenith angles close to 90 degrees\n (sunrise/sunset transitions). This function identifies unreasonable DNI\n values and sets them to NaN. If the clearsky DNI is given unreasonably high\n values are cut off.\n\n Parameters\n ----------\n ghi : Series\n Global horizontal irradiance.\n\n dhi : Series\n Diffuse horizontal irradiance.\n\n zenith : Series\n True (not refraction-corrected) zenith angles in decimal\n degrees. 
Angles must be >=0 and <=180.\n\n clearsky_dni : None or Series, default None\n Clearsky direct normal irradiance.\n\n clearsky_tolerance : float, default 1.1\n If 'clearsky_dni' is given this parameter can be used to allow a\n tolerance by how much the calculated DNI value can be greater than\n the clearsky value before it is identified as an unreasonable value.\n\n zenith_threshold_for_zero_dni : float, default 88.0\n Non-zero DNI values for zenith angles greater than or equal to\n 'zenith_threshold_for_zero_dni' will be set to NaN.\n\n zenith_threshold_for_clearsky_limit : float, default 80.0\n DNI values for zenith angles greater than or equal to\n 'zenith_threshold_for_clearsky_limit' and smaller the\n 'zenith_threshold_for_zero_dni' that are greater than the clearsky DNI\n (times allowed tolerance) will be corrected. Only applies if\n 'clearsky_dni' is not None.\n\n Returns\n -------\n dni : Series\n The modeled direct normal irradiance.\n \"\"\"\n\n # calculate DNI\n dni = (ghi - dhi) / tools.cosd(zenith)\n\n # cutoff negative values\n dni[dni < 0] = float('nan')\n\n # set non-zero DNI values for zenith angles >=\n # zenith_threshold_for_zero_dni to NaN\n dni[(zenith >= zenith_threshold_for_zero_dni) & (dni != 0)] = float('nan')\n\n # correct DNI values for zenith angles greater or equal to the\n # zenith_threshold_for_clearsky_limit and smaller than the\n # upper_cutoff_zenith that are greater than the clearsky DNI (times\n # clearsky_tolerance)\n if clearsky_dni is not None:\n max_dni = clearsky_dni * clearsky_tolerance\n dni[(zenith >= zenith_threshold_for_clearsky_limit) &\n (zenith < zenith_threshold_for_zero_dni) &\n (dni > max_dni)] = max_dni\n return dni\n\n\ndef complete_irradiance(solar_zenith,\n ghi=None,\n dhi=None,\n dni=None,\n dni_clear=None):\n r\"\"\"\n Use the component sum equations to calculate the missing series, using\n the other available time series. One of the three parameters (ghi, dhi,\n dni) is passed as None, and the other associated series passed are used to\n calculate the missing series value.\n\n The \"component sum\" or \"closure\" equation relates the three\n primary irradiance components as follows:\n\n .. math::\n\n GHI = DHI + DNI \\cos(\\theta_z)\n\n Parameters\n ----------\n solar_zenith : Series\n Zenith angles in decimal degrees, with datetime index.\n Angles must be >=0 and <=180. Must have the same datetime index\n as ghi, dhi, and dni series, when available.\n ghi : Series, optional\n Pandas series of dni data, with datetime index. Must have the same\n datetime index as dni, dhi, and zenith series, when available.\n dhi : Series, optional\n Pandas series of dni data, with datetime index. Must have the same\n datetime index as ghi, dni, and zenith series, when available.\n dni : Series, optional\n Pandas series of dni data, with datetime index. Must have the same\n datetime index as ghi, dhi, and zenith series, when available.\n dni_clear : Series, optional\n Pandas series of clearsky dni data. Must have the same datetime index\n as ghi, dhi, dni, and zenith series, when available. 
See\n :py:func:`dni` for details.\n\n Returns\n -------\n component_sum_df : Dataframe\n Pandas series of 'ghi', 'dhi', and 'dni' columns with datetime index\n \"\"\"\n if ghi is not None and dhi is not None and dni is None:\n dni = pvlib.irradiance.dni(ghi, dhi, solar_zenith,\n clearsky_dni=dni_clear,\n clearsky_tolerance=1.1)\n elif dni is not None and dhi is not None and ghi is None:\n ghi = (dhi + dni * tools.cosd(solar_zenith))\n elif dni is not None and ghi is not None and dhi is None:\n dhi = (ghi - dni * tools.cosd(solar_zenith))\n else:\n raise ValueError(\n \"Please check that exactly one of ghi, dhi and dni parameters \"\n \"is set to None\"\n )\n # Merge the outputs into a master dataframe containing 'ghi', 'dhi',\n # and 'dni' columns\n component_sum_df = pd.DataFrame({'ghi': ghi,\n 'dhi': dhi,\n 'dni': dni})\n return component_sum_df\n", "readthedocs.yml": "python:\n version: 3\n # only use the packages specified in setup.py\n use_system_site_packages: false\n pip_install: true\n extra_requirements:\n - doc"}
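The `dni` and `complete_irradiance` docstrings above both rest on the closure (component-sum) relation GHI = DHI + DNI·cos(θz). A minimal numpy/pandas sketch of that relation, independent of pvlib; the timestamps and irradiance values are illustrative only, and `cosd` is a local stand-in for `pvlib.tools.cosd`:

```python
import numpy as np
import pandas as pd

def cosd(angle_deg):
    """Cosine with the angle given in degrees (mirrors pvlib.tools.cosd)."""
    return np.cos(np.radians(angle_deg))

index = pd.to_datetime(["2021-06-01 11:00", "2021-06-01 12:00", "2021-06-01 13:00"])
zenith = pd.Series([30.0, 45.0, 85.0], index=index)   # degrees
dni = pd.Series([800.0, 700.0, 100.0], index=index)   # W/m^2
dhi = pd.Series([120.0, 150.0, 80.0], index=index)    # W/m^2

# closure (component-sum) equation: GHI = DHI + DNI * cos(zenith)
ghi = dhi + dni * cosd(zenith)

# and the inverse, which is what irradiance.dni() computes before applying
# its negative-value and high-zenith cutoffs
dni_back = (ghi - dhi) / cosd(zenith)
assert np.allclose(dni, dni_back)
```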
|
diff --git a/docs/examples/irradiance-decomposition/README.rst b/docs/examples/irradiance-decomposition/README.rst
new file mode 100644
index 0000000000..95f18c9330
--- /dev/null
+++ b/docs/examples/irradiance-decomposition/README.rst
@@ -0,0 +1,3 @@
+Irradiance Decomposition
+------------------------
+
diff --git a/docs/sphinx/source/reference/irradiance/decomposition.rst b/docs/sphinx/source/reference/irradiance/decomposition.rst
index 2b89a272d7..f0d1495889 100644
--- a/docs/sphinx/source/reference/irradiance/decomposition.rst
+++ b/docs/sphinx/source/reference/irradiance/decomposition.rst
@@ -12,5 +12,6 @@ DNI estimation models
irradiance.dirint
irradiance.dirindex
irradiance.erbs
+ irradiance.boland
irradiance.campbell_norman
irradiance.gti_dirint
diff --git a/docs/sphinx/source/whatsnew/v0.9.5.rst b/docs/sphinx/source/whatsnew/v0.9.5.rst
index fc2afa3321..c5d246310f 100644
--- a/docs/sphinx/source/whatsnew/v0.9.5.rst
+++ b/docs/sphinx/source/whatsnew/v0.9.5.rst
@@ -20,6 +20,10 @@ Enhancements
:py:func:`pvlib.snow.loss_townsend` (:issue:`1636`, :pull:`1653`)
* Added optional ``n_ar`` parameter to :py:func:`pvlib.iam.physical` to
support an anti-reflective coating. (:issue:`1501`, :pull:`1616`)
+* :py:func:`~pvlib.irradiance.boland` is another diffuse fraction, DF,
+ estimation method similar to Erbs but uses a single logistic exponential
+ correlation between DF and clearness index, kt, that is continuously
+ differentiable and bounded between zero and one. (:pull:`1179`)
* Add ``model='gueymard2003'``, the airmass model used for REST and REST2,
to :py:func:`~pvlib.atmosphere.get_relative_airmass`. (:pull:`1655`)
diff --git a/readthedocs.yml b/readthedocs.yml
index fb2d1374bb..dde255335c 100644
--- a/readthedocs.yml
+++ b/readthedocs.yml
@@ -1,7 +1,26 @@
+# .readthedocs.yaml
+# Read the Docs configuration file
+# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
+
+# Required
+version: 2
+
+build:
+ os: ubuntu-20.04
+ tools:
+ python: "3.7"
+ jobs:
+ # fetch the full history so that setuptools_scm can determine the
+ # correct version string for long PRs with many commits
+ post_checkout:
+ - git fetch --unshallow
+
python:
- version: 3
- # only use the packages specified in setup.py
- use_system_site_packages: false
- pip_install: true
- extra_requirements:
- - doc
\ No newline at end of file
+ # only use the packages specified in setup.py
+ system_packages: false
+
+ install:
+ - method: pip
+ path: .
+ extra_requirements:
+ - doc
|
{"pvlib/irradiance.py": [{"type": "function", "name": "boland", "lines": [2268, 2369], "signature": "def boland(ghi, solar_zenith, datetime_or_doy, a_coeff=8.645, b_coeff=0.613, min_cos_zenith=0.065, max_zenith=87):", "doc": "Estimate DNI and DHI from GHI using the Boland clearness index model.\n\nThe Boland model [1]_, [2]_ estimates the diffuse fraction, DF, from global\nhorizontal irradiance, GHI, through an empirical relationship between DF\nand the clearness index, :math:`k_t`, the ratio of GHI to horizontal\nextraterrestrial irradiance.\n\n.. math::\n\n \\mathit{DF} = \\frac{1}{1 + \\exp\\left(a \\left(k_t - b\\right)\\right)}\n\n\nParameters\n----------\nghi: numeric\n Global horizontal irradiance. [W/m^2]\nsolar_zenith: numeric\n True (not refraction-corrected) zenith angles in decimal degrees.\ndatetime_or_doy : numeric, pandas.DatetimeIndex\n Day of year or array of days of year e.g.\n pd.DatetimeIndex.dayofyear, or pd.DatetimeIndex.\na_coeff : float, default 8.645\n Logistic curve fit coefficient.\nb_coeff : float, default 0.613\n Logistic curve fit coefficient.\nmin_cos_zenith : numeric, default 0.065\n Minimum value of cos(zenith) to allow when calculating global\n clearness index :math:`k_t`. Equivalent to zenith = 86.273 degrees.\nmax_zenith : numeric, default 87\n Maximum value of zenith to allow in DNI calculation. DNI will be\n set to 0 for times with zenith values greater than `max_zenith`.\n\nReturns\n-------\ndata : OrderedDict or DataFrame\n Contains the following keys/columns:\n\n * ``dni``: the modeled direct normal irradiance in W/m^2.\n * ``dhi``: the modeled diffuse horizontal irradiance in\n W/m^2.\n * ``kt``: Ratio of global to extraterrestrial irradiance\n on a horizontal plane.\n\nReferences\n----------\n.. [1] J. Boland, B. Ridley (2008) Models of Diffuse Solar Fraction. In:\n Badescu V. (eds) Modeling Solar Radiation at the Earth’s Surface.\n Springer, Berlin, Heidelberg. :doi:`10.1007/978-3-540-77455-6_8`\n.. [2] John Boland, Lynne Scott, and Mark Luther, Modelling the diffuse\n fraction of global solar radiation on a horizontal surface,\n Environmetrics 12(2), pp 103-116, 2001,\n :doi:`10.1002/1099-095X(200103)12:2%3C103::AID-ENV447%3E3.0.CO;2-2`\n\nSee also\n--------\ndirint\ndisc\nerbs\n\nNotes\n-----\nBoland diffuse fraction differs from other decomposition algorithms by use\nof a logistic function to fit the entire range of clearness index,\n:math:`k_t`. Parameters ``a_coeff`` and ``b_coeff`` are reported in [2]_\nfor different time intervals:\n\n* 15-minute: ``a = 8.645`` and ``b = 0.613``\n* 1-hour: ``a = 7.997`` and ``b = 0.586``"}]}
|
0.8
|
["pvlib/tests/test_irradiance.py::test_boland"]
|
["pvlib/tests/test_irradiance.py::test_get_extra_radiation[asce-300-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[asce-300.0-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[asce-testval2-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[asce-testval3-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[asce-testval4-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[asce-testval5-expected5]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[asce-testval6-expected6]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[asce-testval7-expected7]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[asce-testval8-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[spencer-300-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[spencer-300.0-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[spencer-testval2-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[spencer-testval3-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[spencer-testval4-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[spencer-testval5-expected5]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[spencer-testval6-expected6]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[spencer-testval7-expected7]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[spencer-testval8-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[nrel-300-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[nrel-300.0-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[nrel-testval2-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[nrel-testval3-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[nrel-testval4-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[nrel-testval5-expected5]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[nrel-testval6-expected6]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[nrel-testval7-expected7]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[nrel-testval8-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[pyephem-300-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[pyephem-300.0-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[pyephem-testval2-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[pyephem-testval3-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[pyephem-testval4-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[pyephem-testval5-expected5]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[pyephem-testval6-expected6]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[pyephem-testval7-expected7]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation[pyephem-testval8-1383.636203]", "pvlib/tests/test_irradiance.py::test_get_extra_radiation_epoch_year", "pvlib/tests/test_irradiance.py::test_get_extra_radiation_nrel_numba", "pvlib/tests/test_irradiance.py::test_get_extra_radiation_invalid", "pvlib/tests/test_irradiance.py::test_get_ground_diffuse_simple_float", "pvlib/tests/test_irradiance.py::test_get_ground_diffuse_simple_series", "pvlib/tests/test_irradiance.py::test_get_ground_diffuse_albedo_0", 
"pvlib/tests/test_irradiance.py::test_get_ground_diffuse_albedo_series", "pvlib/tests/test_irradiance.py::test_grounddiffuse_albedo_invalid_surface", "pvlib/tests/test_irradiance.py::test_get_ground_diffuse_albedo_surface", "pvlib/tests/test_irradiance.py::test_isotropic_float", "pvlib/tests/test_irradiance.py::test_isotropic_series", "pvlib/tests/test_irradiance.py::test_klucher_series_float", "pvlib/tests/test_irradiance.py::test_klucher_series", "pvlib/tests/test_irradiance.py::test_haydavies", "pvlib/tests/test_irradiance.py::test_haydavies_components", "pvlib/tests/test_irradiance.py::test_reindl", "pvlib/tests/test_irradiance.py::test_king", "pvlib/tests/test_irradiance.py::test_perez", "pvlib/tests/test_irradiance.py::test_perez_components", "pvlib/tests/test_irradiance.py::test_perez_negative_horizon", "pvlib/tests/test_irradiance.py::test_perez_arrays", "pvlib/tests/test_irradiance.py::test_perez_scalar", "pvlib/tests/test_irradiance.py::test_sky_diffuse_zenith_close_to_90[isotropic]", "pvlib/tests/test_irradiance.py::test_sky_diffuse_zenith_close_to_90[klucher]", "pvlib/tests/test_irradiance.py::test_sky_diffuse_zenith_close_to_90[haydavies]", "pvlib/tests/test_irradiance.py::test_sky_diffuse_zenith_close_to_90[reindl]", "pvlib/tests/test_irradiance.py::test_sky_diffuse_zenith_close_to_90[king]", "pvlib/tests/test_irradiance.py::test_sky_diffuse_zenith_close_to_90[perez]", "pvlib/tests/test_irradiance.py::test_get_sky_diffuse_model_invalid", "pvlib/tests/test_irradiance.py::test_get_sky_diffuse_missing_dni_extra", "pvlib/tests/test_irradiance.py::test_get_sky_diffuse_missing_airmass", "pvlib/tests/test_irradiance.py::test_campbell_norman", "pvlib/tests/test_irradiance.py::test_get_total_irradiance", "pvlib/tests/test_irradiance.py::test_get_total_irradiance_albedo[isotropic]", "pvlib/tests/test_irradiance.py::test_get_total_irradiance_albedo[klucher]", "pvlib/tests/test_irradiance.py::test_get_total_irradiance_albedo[haydavies]", "pvlib/tests/test_irradiance.py::test_get_total_irradiance_albedo[reindl]", "pvlib/tests/test_irradiance.py::test_get_total_irradiance_albedo[king]", "pvlib/tests/test_irradiance.py::test_get_total_irradiance_albedo[perez]", "pvlib/tests/test_irradiance.py::test_get_total_irradiance_scalars[isotropic]", "pvlib/tests/test_irradiance.py::test_get_total_irradiance_scalars[klucher]", "pvlib/tests/test_irradiance.py::test_get_total_irradiance_scalars[haydavies]", "pvlib/tests/test_irradiance.py::test_get_total_irradiance_scalars[reindl]", "pvlib/tests/test_irradiance.py::test_get_total_irradiance_scalars[king]", "pvlib/tests/test_irradiance.py::test_get_total_irradiance_scalars[perez]", "pvlib/tests/test_irradiance.py::test_get_total_irradiance_missing_dni_extra", "pvlib/tests/test_irradiance.py::test_get_total_irradiance_missing_airmass", "pvlib/tests/test_irradiance.py::test_poa_components", "pvlib/tests/test_irradiance.py::test_disc_value[93193-expected0]", "pvlib/tests/test_irradiance.py::test_disc_value[None-expected1]", "pvlib/tests/test_irradiance.py::test_disc_value[101325-expected2]", "pvlib/tests/test_irradiance.py::test_disc_overirradiance", "pvlib/tests/test_irradiance.py::test_disc_min_cos_zenith_max_zenith", "pvlib/tests/test_irradiance.py::test_dirint_value", "pvlib/tests/test_irradiance.py::test_dirint_nans", "pvlib/tests/test_irradiance.py::test_dirint_tdew", "pvlib/tests/test_irradiance.py::test_dirint_no_delta_kt", "pvlib/tests/test_irradiance.py::test_dirint_coeffs", "pvlib/tests/test_irradiance.py::test_dirint_min_cos_zenith_max_zenith", 
"pvlib/tests/test_irradiance.py::test_gti_dirint", "pvlib/tests/test_irradiance.py::test_erbs", "pvlib/tests/test_irradiance.py::test_erbs_min_cos_zenith_max_zenith", "pvlib/tests/test_irradiance.py::test_erbs_all_scalar", "pvlib/tests/test_irradiance.py::test_dirindex", "pvlib/tests/test_irradiance.py::test_dirindex_min_cos_zenith_max_zenith", "pvlib/tests/test_irradiance.py::test_dni", "pvlib/tests/test_irradiance.py::test_aoi_and_aoi_projection[0-0-0-0-0-1]", "pvlib/tests/test_irradiance.py::test_aoi_and_aoi_projection[30-180-30-180-0-1]", "pvlib/tests/test_irradiance.py::test_aoi_and_aoi_projection[30-180-150-0-180--1]", "pvlib/tests/test_irradiance.py::test_aoi_and_aoi_projection[90-0-30-60-75.5224878-0.25]", "pvlib/tests/test_irradiance.py::test_aoi_and_aoi_projection[90-0-30-170-119.4987042--0.4924038]", "pvlib/tests/test_irradiance.py::test_aoi_projection_precision", "pvlib/tests/test_irradiance.py::test_kt_kt_prime_factor", "pvlib/tests/test_irradiance.py::test_clearsky_index", "pvlib/tests/test_irradiance.py::test_clearness_index", "pvlib/tests/test_irradiance.py::test_clearness_index_zenith_independent", "pvlib/tests/test_irradiance.py::test_complete_irradiance"]
|
311781d2380997044da0e484dc90aa146a74ca44
|
{"first_commit_time": 1614314068.0, "pr_title": "add Boland DF estimation", "pr_body": "<!-- Thank you for your contribution! The following items must be addressed before the code can be merged. Please don't hesitate to ask for help if you're unsure of how to accomplish any of the items. Feel free to remove checklist items that are not relevant to your change. -->\r\n\r\n - [x] referenced in Google group: https://groups.google.com/g/pvlib-python/c/TsDIiU19AUU/m/Bwi9595PAwAJ\r\n - [x] I am familiar with the [contributing guidelines](https://pvlib-python.readthedocs.io/en/latest/contributing.html)\r\n - [x] Tests added\r\n - [x] Updates entries to [`docs/sphinx/source/api.rst`](https://github.com/pvlib/pvlib-python/blob/master/docs/sphinx/source/api.rst) for API changes.\r\n - [x] Adds description and name entries in the appropriate \"what's new\" file in [`docs/sphinx/source/whatsnew`](https://github.com/pvlib/pvlib-python/tree/master/docs/sphinx/source/whatsnew) for all changes. Includes link to the GitHub Issue with `` :issue:`num` `` or this Pull Request with `` :pull:`num` ``. Includes contributor name and/or GitHub username (link with `` :ghuser:`user` ``).\r\n - [x] New code is fully documented. Includes [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) compliant docstrings, examples, and comments where necessary.\r\n - [x] Pull request is nearly complete and ready for detailed review.\r\n - [x] Maintainer: Appropriate GitHub Labels and Milestone are assigned to the Pull Request and linked Issue.\r\n\r\n<!-- Brief description of the problem and proposed solution (if not already fully described in the issue linked to above): -->\r\n`pvlib.irradiance.boland` is another diffuse fraction, DF, estimation method similar to Erbs but uses a single logistic exponential correlation between DF and clearness index, kt, that is continuously differentiable and bounded between zero and one.", "pr_timeline": [{"time": 1614331405.0, "comment": "Hi @cwhanse, interested in reviewing the gallery example and added Boland DNI estimation function? Thanks"}, {"time": 1677474331.0, "comment": "Hi @cwhanse sorry for the very long delay in responding. I have accepted your changes, added discussions for your suggestions, and responded to your comments. Let me know if you have any additional feedback. Thanks!"}, {"time": 1677490573.0, "comment": "BTW: I think the readthedocs failure is unrelated, I'm trying to figure it out - the bifacial gallery examples are failing with this error:\r\n>ImportError: cannot import name 'geos_geometrycollection_from_py' from 'shapely.geometry.collection' (/home/docs/checkouts/readthedocs.org/user_builds/pvlib-python/envs/1179/lib/python3.7/site-packages/shapely/geometry/collection.py)\r\n\r\nIDK why but readthedocs is pulling multiple versions of pvfactors, finally settling on v1.4.1 instead of the latest v1.5.2 which limits shapely<v2. It must have something to do with readthedocs b/c it was building the docs a few commits ago, and nothing else has changed. Truly bizarre:\r\n\r\n\r\n[UPDATE]: RTD is getting the pvlib version wrong, that's why pvfactors downgrades causing the wrong shapely to install. See below"}, {"time": 1677490498.0, "comment": "~looks like we do not have reproducible readthedocs builds - see this: https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html~ - nope, that wasn't the problem, altho we should update rtd to v2"}, {"time": 1677487511.0, "comment": "@kanderso-nrel I've figured out why readthedocs is going bonkers. 
For some reason the version is being read as pvlib-0.9.0a5.dev58+g<commit-SHA> EG: the last commit (9715084):\r\n\r\n\r\nThis only started happening after I merged with main 9d9421e, but oddly, that was the last commit that _did_ have the correct version: pvlib-0.9.5.dev16+g9d9421e\r\n\r\n\r\nAnd readthedocs _did_ build. This has something to do with the pep517/518 changes, `pyproject.toml` and `importlib_metadata` but none of these files are affected as far as I can tell.\r\n\r\nWhen the version is misread as v0.9.0, that causes pvfactors to downgrade, and that causes shapely>v2 to be installed leading to the build failure.\r\n\r\nI don't understand how the new build system works, all I can hope is that it all works out when/if this gets merged."}, {"time": 1677490228.0, "comment": "You can view a rendering of the docs here: https://pvlib-python--1179.org.readthedocs.build/en/1179/\r\n\r\nI guess the readthedocs failures has something to do with [`setuptools_scm`](https://github.com/pypa/setuptools_scm). I had to delete the `pvlib.egg-info` file in my local working copy to get it to return the correct version:\r\n```python\r\nfrom importlib_metadata import version\r\nimport pvlib\r\nversion('pvlib')\r\n# '0.9.5.dev34+g1ae35cd'\r\n```\r\n\r\nBefore I purged the egg-info file, it was returning 0.9.0!"}, {"time": 1677508377.0, "comment": "@mikofski I think the problem was that RTD does a shallow clone by default and didn't give `setuptools_scm` enough info to determine the correct version string for this PR (which has a long/strange pedigree now? not sure...)\r\n\r\nAnyway to get around this problem we can do \"unshallow\" clones instead (see [docs](https://docs.readthedocs.io/en/stable/build-customization.html#unshallow-git-clone)). I hope you don't mind that I went ahead and pushed a fix directly to this branch!"}, {"time": 1677646113.0, "comment": "@pvlib/pvlib-maintainer any more comments on this? Someone care to do the honors?"}, {"time": 1677656096.0, "comment": "I'm out back country skiing but I'd like to review on Monday if it isn't merged by then \ud83d\ude0e"}, {"time": 1678084007.0, "comment": "@kanderso-nrel thanks for tips. I think I've addressed everything. @AdamRJensen looking forward to your comments."}, {"time": 1678305424.0, "comment": "For the examples, you could consider (1) a df vs. kt plot and (2) some predicted vs. measured scatter plots. "}, {"time": 1678665783.0, "comment": "@pvlib/pvlib-maintainer ready to merge"}], "issues": {}}
|
pvlib/pvlib-python
| 1190

|
https://github.com/pvlib/pvlib-python/pull/1190
|
pvlib__pvlib-python-1190
|
[]
|
2a1ed55acdd22bf813cbaa9dab2e7379306cf645
|
diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index 373c879a49..633a9d8cc0 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -268,6 +268,7 @@ Functions relevant for single diode models.
pvsystem.singlediode
pvsystem.v_from_i
pvsystem.max_power_point
+ ivtools.sdm.pvsyst_temperature_coeff
Low-level functions for solving the single diode equation.
@@ -334,6 +335,7 @@ Pvsyst model
temperature.pvsyst_cell
pvsystem.calcparams_pvsyst
pvsystem.singlediode
+ ivtools.sdm.pvsyst_temperature_coeff
pvsystem.dc_ohms_from_percent
pvsystem.dc_ohmic_losses
diff --git a/docs/sphinx/source/whatsnew/v0.9.0.rst b/docs/sphinx/source/whatsnew/v0.9.0.rst
index 2c8e178eba..43dc7b0fe7 100644
--- a/docs/sphinx/source/whatsnew/v0.9.0.rst
+++ b/docs/sphinx/source/whatsnew/v0.9.0.rst
@@ -102,7 +102,10 @@ Enhancements
from DC power. Use parameter ``model`` to specify which inverter model to use.
(:pull:`1147`, :issue:`998`, :pull:`1150`)
* Added :py:func:`~pvlib.temperature.noct_sam`, a cell temperature model
- implemented in SAM (:pull:`1177`, :pull:`1195`)
+ implemented in SAM. (:pull:`1177`, :pull:`1195`)
+* Added :py:func:`~pvlib.ivtools.sdm.pvsyst_temperature_coeff` to calculate
+ the temperature coefficient of power for the pvsyst module model.
+ (:pull:`1190`)
Bug fixes
~~~~~~~~~
diff --git a/pvlib/ivtools/sdm.py b/pvlib/ivtools/sdm.py
index 29f5860d40..04263c34c5 100644
--- a/pvlib/ivtools/sdm.py
+++ b/pvlib/ivtools/sdm.py
@@ -11,8 +11,10 @@
import scipy.constants
from scipy import optimize
from scipy.special import lambertw
+from scipy.misc import derivative
-from pvlib.pvsystem import singlediode, v_from_i
+from pvlib.pvsystem import calcparams_pvsyst, singlediode, v_from_i
+from pvlib.singlediode import bishop88_mpp
from pvlib.ivtools.utils import rectify_iv_curve, _numdiff
from pvlib.ivtools.sde import _fit_sandia_cocontent
@@ -1252,3 +1254,91 @@ def _calc_theta_phi_exact(vmp, imp, iph, io, rs, rsh, nnsvth):
theta = np.transpose(theta)
return theta, phi
+
+
+def pvsyst_temperature_coeff(alpha_sc, gamma_ref, mu_gamma, I_L_ref, I_o_ref,
+ R_sh_ref, R_sh_0, R_s, cells_in_series,
+ R_sh_exp=5.5, EgRef=1.121, irrad_ref=1000,
+ temp_ref=25):
+ r"""
+ Calculates the temperature coefficient of power for a pvsyst single
+ diode model.
+
+ The temperature coefficient is determined as the numerical derivative
+ :math:`\frac{dP}{dT}` at the maximum power point at reference conditions
+ [1]_.
+
+ Parameters
+ ----------
+ alpha_sc : float
+ The short-circuit current temperature coefficient of the module. [A/C]
+
+ gamma_ref : float
+ The diode ideality factor. [unitless]
+
+ mu_gamma : float
+ The temperature coefficient for the diode ideality factor. [1/K]
+
+ I_L_ref : float
+ The light-generated current (or photocurrent) at reference conditions.
+ [A]
+
+ I_o_ref : float
+ The dark or diode reverse saturation current at reference conditions.
+ [A]
+
+ R_sh_ref : float
+ The shunt resistance at reference conditions. [ohm]
+
+ R_sh_0 : float
+ The shunt resistance at zero irradiance conditions. [ohm]
+
+ R_s : float
+ The series resistance at reference conditions. [ohm]
+
+ cells_in_series : int
+ The number of cells connected in series.
+
+ R_sh_exp : float, default 5.5
+ The exponent in the equation for shunt resistance. [unitless]
+
+ EgRef : float, default 1.121
+ The energy bandgap of the module's cells at reference temperature.
+ Default of 1.121 eV is for crystalline silicon. Must be positive. [eV]
+
+ irrad_ref : float, default 1000
+ Reference irradiance. [W/m^2].
+
+ temp_ref : float, default 25
+ Reference cell temperature. [C]
+
+
+ Returns
+ -------
+ gamma_pdc : float
+ Temperature coefficient of power at maximum power point at reference
+ conditions. [1/C]
+
+ References
+ ----------
+ .. [1] K. Sauer, T. Roessler, C. W. Hansen, Modeling the Irradiance and
+ Temperature Dependence of Photovoltaic Modules in PVsyst, IEEE Journal
+ of Photovoltaics v5(1), January 2015.
+ """
+
+ def maxp(temp_cell, irrad_ref, alpha_sc, gamma_ref, mu_gamma, I_L_ref,
+ I_o_ref, R_sh_ref, R_sh_0, R_s, cells_in_series, R_sh_exp, EgRef,
+ temp_ref):
+ params = calcparams_pvsyst(
+ irrad_ref, temp_cell, alpha_sc, gamma_ref, mu_gamma, I_L_ref,
+ I_o_ref, R_sh_ref, R_sh_0, R_s, cells_in_series, R_sh_exp, EgRef,
+ irrad_ref, temp_ref)
+ res = bishop88_mpp(*params)
+ return res[2]
+
+ args = (irrad_ref, alpha_sc, gamma_ref, mu_gamma, I_L_ref,
+ I_o_ref, R_sh_ref, R_sh_0, R_s, cells_in_series, R_sh_exp, EgRef,
+ temp_ref)
+ pmp = maxp(temp_ref, *args)
+ gamma_pdc = derivative(maxp, temp_ref, args=args)
+ return gamma_pdc / pmp
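The patch above adds `ivtools.sdm.pvsyst_temperature_coeff`, which estimates the power temperature coefficient as the numerical derivative dP/dT at the maximum power point. A usage sketch, assuming a pvlib installation that already includes this addition; the module parameters are the notional Pvsyst values used by the accompanying test:

```python
from pvlib.ivtools import sdm

# notional Pvsyst single-diode parameters (same values the new test uses)
params = {'alpha_sc': 0., 'gamma_ref': 1.1, 'mu_gamma': 0.,
          'I_L_ref': 6., 'I_o_ref': 5.e-9, 'R_sh_ref': 200.,
          'R_sh_0': 2000., 'R_s': 0.5, 'cells_in_series': 60}

gamma_pdc = sdm.pvsyst_temperature_coeff(
    params['alpha_sc'], params['gamma_ref'], params['mu_gamma'],
    params['I_L_ref'], params['I_o_ref'], params['R_sh_ref'],
    params['R_sh_0'], params['R_s'], params['cells_in_series'])

print(f"temperature coefficient of power: {gamma_pdc:.5f} 1/C")
# the test below expects roughly -0.00489 1/C for these parameters
```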
|
diff --git a/pvlib/tests/ivtools/test_sdm.py b/pvlib/tests/ivtools/test_sdm.py
index 67bae225bb..c1df6b1e50 100644
--- a/pvlib/tests/ivtools/test_sdm.py
+++ b/pvlib/tests/ivtools/test_sdm.py
@@ -2,6 +2,7 @@
import pandas as pd
import pytest
+from numpy.testing import assert_allclose
from pvlib.ivtools import sdm
from pvlib import pvsystem
@@ -306,7 +307,7 @@ def test__update_rsh_fixed_pt_nans(vmp, imp, iph, io, rs, rsh, nnsvth,
def test__update_rsh_fixed_pt_vmp0():
outrsh = sdm._update_rsh_fixed_pt(vmp=0., imp=2., iph=2., io=2., rs=2.,
rsh=2., nnsvth=2.)
- np.testing.assert_allclose(outrsh, np.array([502.]), atol=.0001)
+ assert_allclose(outrsh, np.array([502.]), atol=.0001)
def test__update_rsh_fixed_pt_vector():
@@ -318,7 +319,7 @@ def test__update_rsh_fixed_pt_vector():
imp=np.array([.2, .2, -1., 2.]),
vmp=np.array([0., -1, 0., 0.]))
assert np.all(np.isnan(outrsh[0:3]))
- np.testing.assert_allclose(outrsh[3], np.array([502.]), atol=.0001)
+ assert_allclose(outrsh[3], np.array([502.]), atol=.0001)
@pytest.mark.parametrize('voc, iph, io, rs, rsh, nnsvth, expected', [
@@ -329,7 +330,7 @@ def test__update_rsh_fixed_pt_vector():
(0., 2., 2., 2., 2., 2., 17.9436)])
def test__update_io(voc, iph, io, rs, rsh, nnsvth, expected):
outio = sdm._update_io(voc, iph, io, rs, rsh, nnsvth)
- np.testing.assert_allclose(outio, expected, atol=.0001)
+ assert_allclose(outio, expected, atol=.0001)
@pytest.mark.parametrize('voc, iph, io, rs, rsh, nnsvth', [
@@ -347,8 +348,8 @@ def test__update_io_nan(voc, iph, io, rs, rsh, nnsvth):
(0., 2., 2., 2., 2., 2., 2., (1.5571, 2.))])
def test__calc_theta_phi_exact(vmp, imp, iph, io, rs, rsh, nnsvth, expected):
theta, phi = sdm._calc_theta_phi_exact(vmp, imp, iph, io, rs, rsh, nnsvth)
- np.testing.assert_allclose(theta, expected[0], atol=.0001)
- np.testing.assert_allclose(phi, expected[1], atol=.0001)
+ assert_allclose(theta, expected[0], atol=.0001)
+ assert_allclose(phi, expected[1], atol=.0001)
@pytest.mark.parametrize('vmp, imp, iph, io, rs, rsh, nnsvth', [
@@ -365,7 +366,7 @@ def test__calc_theta_phi_exact_one_nan():
theta, phi = sdm._calc_theta_phi_exact(imp=2., iph=2., vmp=2., io=2.,
nnsvth=2., rs=0., rsh=2.)
assert np.isnan(theta)
- np.testing.assert_allclose(phi, 2., atol=.0001)
+ assert_allclose(phi, 2., atol=.0001)
def test__calc_theta_phi_exact_vector():
@@ -379,4 +380,23 @@ def test__calc_theta_phi_exact_vector():
assert np.isnan(theta[0])
assert np.isnan(theta[1])
assert np.isnan(phi[0])
- np.testing.assert_allclose(phi[1], 2.2079, atol=.0001)
+ assert_allclose(phi[1], 2.2079, atol=.0001)
+
+
+def test_pvsyst_temperature_coeff():
+ # test for consistency with dP/dT estimated with secant rule
+ params = {'alpha_sc': 0., 'gamma_ref': 1.1, 'mu_gamma': 0.,
+ 'I_L_ref': 6., 'I_o_ref': 5.e-9, 'R_sh_ref': 200.,
+ 'R_sh_0': 2000., 'R_s': 0.5, 'cells_in_series': 60}
+ expected = -0.004886706494879083
+ # params defines a Pvsyst model for a notional module.
+ # expected value is created by calculating power at 1000 W/m2, and cell
+ # temperature of 24 and 26C, using pvsystem.calcparams_pvsyst and
+ # pvsystem.singlediode. The derivative (value for expected) is estimated
+ # as the slope (p_mp at 26C - p_mp at 24C) / 2
+ # using the secant rule for derivatives.
+ gamma_pdc = sdm.pvsyst_temperature_coeff(
+ params['alpha_sc'], params['gamma_ref'], params['mu_gamma'],
+ params['I_L_ref'], params['I_o_ref'], params['R_sh_ref'],
+ params['R_sh_0'], params['R_s'], params['cells_in_series'])
+ assert_allclose(gamma_pdc, expected, rtol=0.0005)
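The `expected` value in the test above is derived with a secant-rule estimate of dP/dT, as the test comments describe. A sketch of that independent cross-check, assuming an installed pvlib that provides `calcparams_pvsyst` and `singlediode`; it mirrors the procedure in the test comments rather than the new function itself:

```python
from pvlib import pvsystem

params = {'alpha_sc': 0., 'gamma_ref': 1.1, 'mu_gamma': 0.,
          'I_L_ref': 6., 'I_o_ref': 5.e-9, 'R_sh_ref': 200.,
          'R_sh_0': 2000., 'R_s': 0.5, 'cells_in_series': 60}

def p_mp_at(temp_cell):
    # Pvsyst single-diode maximum-power-point power at 1000 W/m^2
    sde_args = pvsystem.calcparams_pvsyst(1000., temp_cell, **params)
    return pvsystem.singlediode(*sde_args)['p_mp']

# secant-rule derivative around the 25 C reference, normalized by P_mp(25 C)
dp_dt = (p_mp_at(26.) - p_mp_at(24.)) / 2.
gamma_pdc_check = dp_dt / p_mp_at(25.)
print(gamma_pdc_check)  # about -0.00489 1/C, matching the test's expected value
```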
| 2021-03-09T22:52:52
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"docs/sphinx/source/api.rst": ".. currentmodule:: pvlib\n\n#############\nAPI reference\n#############\n\n\nClasses\n=======\n\npvlib-python provides a collection of classes for users that prefer\nobject-oriented programming. These classes can help users keep track of\ndata in a more organized way, and can help to simplify the modeling\nprocess. The classes do not add any functionality beyond the procedural\ncode. Most of the object methods are simple wrappers around the\ncorresponding procedural code.\n\n.. autosummary::\n :toctree: generated/\n\n location.Location\n pvsystem.PVSystem\n pvsystem.Array\n tracking.SingleAxisTracker\n modelchain.ModelChain\n modelchain.ModelChainResult\n\nSolar Position\n==============\n\nFunctions and methods for calculating solar position.\n\nThe :py:meth:`location.Location.get_solarposition` method and the\n:py:func:`solarposition.get_solarposition` function with default\nparameters are fast and accurate. We recommend using these functions\nunless you know that you need a different function.\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.get_solarposition\n solarposition.get_solarposition\n solarposition.spa_python\n solarposition.ephemeris\n solarposition.pyephem\n solarposition.spa_c\n\n\nAdditional functions for quantities closely related to solar position.\n\n.. autosummary::\n :toctree: generated/\n\n solarposition.calc_time\n solarposition.pyephem_earthsun_distance\n solarposition.nrel_earthsun_distance\n spa.calculate_deltat\n\n\nFunctions for calculating sunrise, sunset and transit times.\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.get_sun_rise_set_transit\n solarposition.sun_rise_set_transit_ephem\n solarposition.sun_rise_set_transit_spa\n solarposition.sun_rise_set_transit_geometric\n\n\nThe spa module contains the implementation of the built-in NREL SPA\nalgorithm.\n\n.. autosummary::\n :toctree: generated/\n\n spa\n\nCorrelations and analytical expressions for low precision solar position\ncalculations.\n\n.. autosummary::\n :toctree: generated/\n\n solarposition.solar_zenith_analytical\n solarposition.solar_azimuth_analytical\n solarposition.declination_spencer71\n solarposition.declination_cooper69\n solarposition.equation_of_time_spencer71\n solarposition.equation_of_time_pvcdrom\n solarposition.hour_angle\n solarposition.sun_rise_set_transit_geometric\n\n\nClear sky\n=========\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.get_clearsky\n clearsky.ineichen\n clearsky.lookup_linke_turbidity\n clearsky.simplified_solis\n clearsky.haurwitz\n clearsky.detect_clearsky\n clearsky.bird\n\n\nAirmass and atmospheric models\n==============================\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.get_airmass\n atmosphere.get_absolute_airmass\n atmosphere.get_relative_airmass\n atmosphere.pres2alt\n atmosphere.alt2pres\n atmosphere.gueymard94_pw\n atmosphere.first_solar_spectral_correction\n atmosphere.bird_hulstrom80_aod_bb\n atmosphere.kasten96_lt\n atmosphere.angstrom_aod_at_lambda\n atmosphere.angstrom_alpha\n\n\nIrradiance\n==========\n\nMethods for irradiance calculations\n-----------------------------------\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.PVSystem.get_irradiance\n pvsystem.PVSystem.get_aoi\n pvsystem.PVSystem.get_iam\n tracking.SingleAxisTracker.get_irradiance\n\nDecomposing and combining irradiance\n------------------------------------\n\n.. 
autosummary::\n :toctree: generated/\n\n irradiance.get_extra_radiation\n irradiance.aoi\n irradiance.aoi_projection\n irradiance.poa_horizontal_ratio\n irradiance.beam_component\n irradiance.poa_components\n irradiance.get_ground_diffuse\n irradiance.dni\n\nTransposition models\n--------------------\n\n.. autosummary::\n :toctree: generated/\n\n irradiance.get_total_irradiance\n irradiance.get_sky_diffuse\n irradiance.isotropic\n irradiance.perez\n irradiance.haydavies\n irradiance.klucher\n irradiance.reindl\n irradiance.king\n\n.. _dniestmodels:\n\nDNI estimation models\n---------------------\n\n.. autosummary::\n :toctree: generated/\n\n irradiance.disc\n irradiance.dirint\n irradiance.dirindex\n irradiance.erbs\n irradiance.campbell_norman\n irradiance.gti_dirint\n\nClearness index models\n----------------------\n\n.. autosummary::\n :toctree: generated/\n\n irradiance.clearness_index\n irradiance.clearness_index_zenith_independent\n irradiance.clearsky_index\n\n\nPV Modeling\n===========\n\nClasses\n-------\n\nThe :py:class:`~pvsystem.PVSystem` class provides many methods that\nwrap the functions listed below. See its documentation for details.\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.PVSystem\n\nIncident angle modifiers\n------------------------\n\n.. autosummary::\n :toctree: generated/\n\n iam.physical\n iam.ashrae\n iam.martin_ruiz\n iam.martin_ruiz_diffuse\n iam.sapm\n iam.interp\n iam.marion_diffuse\n iam.marion_integrate\n\nPV temperature models\n---------------------\n\n.. autosummary::\n :toctree: generated/\n\n temperature.sapm_cell\n temperature.sapm_module\n temperature.sapm_cell_from_module\n temperature.pvsyst_cell\n temperature.faiman\n temperature.fuentes\n temperature.ross\n temperature.noct_sam\n pvsystem.PVSystem.sapm_celltemp\n pvsystem.PVSystem.pvsyst_celltemp\n pvsystem.PVSystem.faiman_celltemp\n pvsystem.PVSystem.fuentes_celltemp\n pvsystem.PVSystem.noct_sam_celltemp\n\nTemperature Model Parameters\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n.. currentmodule:: pvlib.temperature\n.. autodata:: TEMPERATURE_MODEL_PARAMETERS\n :annotation:\n\n.. currentmodule:: pvlib\n\nSingle diode models\n-------------------\n\nFunctions relevant for single diode models.\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.calcparams_cec\n pvsystem.calcparams_desoto\n pvsystem.calcparams_pvsyst\n pvsystem.i_from_v\n pvsystem.singlediode\n pvsystem.v_from_i\n pvsystem.max_power_point\n\nLow-level functions for solving the single diode equation.\n\n.. autosummary::\n :toctree: generated/\n\n singlediode.estimate_voc\n singlediode.bishop88\n singlediode.bishop88_i_from_v\n singlediode.bishop88_v_from_i\n singlediode.bishop88_mpp\n\nFunctions for fitting diode models\n\n.. autosummary::\n :toctree: generated/\n\n ivtools.sde.fit_sandia_simple\n ivtools.sdm.fit_cec_sam\n ivtools.sdm.fit_desoto\n\nInverter models (DC to AC conversion)\n-------------------------------------\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.PVSystem.get_ac\n inverter.sandia\n inverter.sandia_multi\n inverter.adr\n inverter.pvwatts\n inverter.pvwatts_multi\n\nFunctions for fitting inverter models\n\n.. autosummary::\n :toctree: generated/\n\n inverter.fit_sandia\n\n\nPV System Models\n----------------\n\nSandia array performance model (SAPM)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.sapm\n pvsystem.sapm_effective_irradiance\n pvsystem.sapm_spectral_loss\n inverter.sandia\n temperature.sapm_cell\n\nPvsyst model\n^^^^^^^^^^^^\n\n.. 
autosummary::\n :toctree: generated/\n\n temperature.pvsyst_cell\n pvsystem.calcparams_pvsyst\n pvsystem.singlediode\n pvsystem.dc_ohms_from_percent\n pvsystem.dc_ohmic_losses\n\nPVWatts model\n^^^^^^^^^^^^^\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.pvwatts_dc\n inverter.pvwatts\n pvsystem.pvwatts_losses\n\nEstimating PV model parameters\n------------------------------\n\nFunctions for fitting single diode models\n\n.. autosummary::\n :toctree: generated/\n\n ivtools.sdm.fit_cec_sam\n ivtools.sdm.fit_desoto\n ivtools.sdm.fit_pvsyst_sandia\n ivtools.sdm.fit_desoto_sandia\n\nFunctions for fitting the single diode equation\n\n.. autosummary::\n :toctree: generated/\n\n ivtools.sde.fit_sandia_simple\n\nUtilities for working with IV curve data\n\n.. autosummary::\n :toctree: generated/\n\n ivtools.utils.rectify_iv_curve\n\nOther\n-----\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.retrieve_sam\n pvsystem.scale_voltage_current_power\n\n\nEffects on PV System Output\n===========================\n\nLoss models\n-----------\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.combine_loss_factors\n pvsystem.dc_ohms_from_percent\n\nSnow\n----\n\n.. autosummary::\n :toctree: generated/\n\n snow.coverage_nrel\n snow.fully_covered_nrel\n snow.dc_loss_nrel\n\nSoiling\n-------\n\n.. autosummary::\n :toctree: generated/\n\n soiling.hsu\n soiling.kimber\n\nShading\n-------\n\n.. autosummary::\n :toctree: generated/\n\n shading.masking_angle\n shading.masking_angle_passias\n shading.sky_diffuse_passias\n\nSpectrum\n--------\n\n.. autosummary::\n :toctree: generated/\n\n spectrum.spectrl2\n\nTracking\n========\n\nSingleAxisTracker\n-----------------\n\nThe :py:class:`~tracking.SingleAxisTracker` inherits from\n:py:class:`~pvsystem.PVSystem`.\n\n.. autosummary::\n :toctree: generated/\n\n tracking.SingleAxisTracker\n tracking.SingleAxisTracker.singleaxis\n tracking.SingleAxisTracker.get_irradiance\n\nFunctions\n---------\n\n.. autosummary::\n :toctree: generated/\n\n tracking.singleaxis\n tracking.calc_axis_tilt\n tracking.calc_cross_axis_tilt\n\n\n.. _iotools:\n\nIO Tools\n========\n\nFunctions for reading and writing data from a variety of file formats\nrelevant to solar energy modeling.\n\n.. autosummary::\n :toctree: generated/\n\n iotools.read_tmy2\n iotools.read_tmy3\n iotools.read_epw\n iotools.parse_epw\n iotools.read_srml\n iotools.read_srml_month_from_solardat\n iotools.read_surfrad\n iotools.read_midc\n iotools.read_midc_raw_data_from_nrel\n iotools.read_ecmwf_macc\n iotools.get_ecmwf_macc\n iotools.read_crn\n iotools.read_solrad\n iotools.get_psm3\n iotools.read_psm3\n iotools.parse_psm3\n iotools.get_pvgis_tmy\n iotools.read_pvgis_tmy\n iotools.read_bsrn\n\nA :py:class:`~pvlib.location.Location` object may be created from metadata\nin some files.\n\n.. autosummary::\n :toctree: generated/\n\n location.Location.from_tmy\n location.Location.from_epw\n\n\nForecasting\n===========\n\nForecast models\n---------------\n\n.. autosummary::\n :toctree: generated/\n\n forecast.GFS\n forecast.NAM\n forecast.RAP\n forecast.HRRR\n forecast.HRRR_ESRL\n forecast.NDFD\n\nGetting data\n------------\n\n.. autosummary::\n :toctree: generated/\n\n forecast.ForecastModel.get_data\n forecast.ForecastModel.get_processed_data\n\nProcessing data\n---------------\n\n.. 
autosummary::\n :toctree: generated/\n\n forecast.ForecastModel.process_data\n forecast.ForecastModel.rename\n forecast.ForecastModel.cloud_cover_to_ghi_linear\n forecast.ForecastModel.cloud_cover_to_irradiance_clearsky_scaling\n forecast.ForecastModel.cloud_cover_to_transmittance_linear\n forecast.ForecastModel.cloud_cover_to_irradiance_campbell_norman\n forecast.ForecastModel.cloud_cover_to_irradiance\n forecast.ForecastModel.kelvin_to_celsius\n forecast.ForecastModel.isobaric_to_ambient_temperature\n forecast.ForecastModel.uv_to_speed\n forecast.ForecastModel.gust_to_speed\n\nIO support\n----------\n\nThese are public for now, but use at your own risk.\n\n.. autosummary::\n :toctree: generated/\n\n forecast.ForecastModel.set_dataset\n forecast.ForecastModel.set_query_latlon\n forecast.ForecastModel.set_location\n forecast.ForecastModel.set_time\n\n\nModelChain\n==========\n\nCreating a ModelChain object.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain\n modelchain.ModelChain.with_pvwatts\n modelchain.ModelChain.with_sapm\n\nRunning\n-------\n\nA ModelChain can be run from a number of starting points, depending on the\ninput data available.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.run_model\n modelchain.ModelChain.run_model_from_poa\n modelchain.ModelChain.run_model_from_effective_irradiance\n\nFunctions to assist with setting up ModelChains to run\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.complete_irradiance\n modelchain.ModelChain.prepare_inputs\n modelchain.ModelChain.prepare_inputs_from_poa\n\nResults\n-------\n\nOutput from the running the ModelChain is stored in the\n:py:attr:`modelchain.ModelChain.results` attribute. For more\ninformation see :py:class:`modelchain.ModelChainResult`.\n\nAttributes\n----------\n\nSimple ModelChain attributes:\n\n``system, location, clearsky_model, transposition_model,\nsolar_position_method, airmass_model``\n\nProperties\n----------\n\nModelChain properties that are aliases for your specific modeling functions.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.dc_model\n modelchain.ModelChain.ac_model\n modelchain.ModelChain.aoi_model\n modelchain.ModelChain.spectral_model\n modelchain.ModelChain.temperature_model\n modelchain.ModelChain.dc_ohmic_model\n modelchain.ModelChain.losses_model\n modelchain.ModelChain.effective_irradiance_model\n\nModel definitions\n-----------------\n\nModelChain model definitions.\n\n.. 
autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.sapm\n modelchain.ModelChain.cec\n modelchain.ModelChain.desoto\n modelchain.ModelChain.pvsyst\n modelchain.ModelChain.pvwatts_dc\n modelchain.ModelChain.sandia_inverter\n modelchain.ModelChain.adr_inverter\n modelchain.ModelChain.pvwatts_inverter\n modelchain.ModelChain.ashrae_aoi_loss\n modelchain.ModelChain.physical_aoi_loss\n modelchain.ModelChain.sapm_aoi_loss\n modelchain.ModelChain.no_aoi_loss\n modelchain.ModelChain.first_solar_spectral_loss\n modelchain.ModelChain.sapm_spectral_loss\n modelchain.ModelChain.no_spectral_loss\n modelchain.ModelChain.sapm_temp\n modelchain.ModelChain.pvsyst_temp\n modelchain.ModelChain.faiman_temp\n modelchain.ModelChain.fuentes_temp\n modelchain.ModelChain.dc_ohmic_model\n modelchain.ModelChain.no_dc_ohmic_loss\n modelchain.ModelChain.pvwatts_losses\n modelchain.ModelChain.no_extra_losses\n\nInference methods\n-----------------\n\nMethods that automatically determine which models should be used based\non the information in the associated :py:class:`~pvsystem.PVSystem` object.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.ModelChain.infer_dc_model\n modelchain.ModelChain.infer_ac_model\n modelchain.ModelChain.infer_aoi_model\n modelchain.ModelChain.infer_spectral_model\n modelchain.ModelChain.infer_temperature_model\n modelchain.ModelChain.infer_losses_model\n\nFunctions\n---------\n\nFunctions for power modeling.\n\n.. autosummary::\n :toctree: generated/\n\n modelchain.basic_chain\n modelchain.get_orientation\n\n\nBifacial\n========\n\nMethods for calculating back surface irradiance\n\n.. autosummary::\n :toctree: generated/\n\n bifacial.pvfactors_timeseries\n\n\nScaling\n=======\n\nMethods for manipulating irradiance for temporal or spatial considerations\n\n.. autosummary::\n :toctree: generated/\n\n scaling.wvm\n", "docs/sphinx/source/whatsnew/v0.9.0.rst": ".. _whatsnew_0900:\n\nv0.9.0 (MONTH DAY YEAR)\n-----------------------\n\nBreaking changes\n~~~~~~~~~~~~~~~~\n* Moved functions related to inverters from ``pvsystem.py`` to ``inverter.py``.\n Functions are renamed to follow a more consistent pattern, as follows (:pull:`886`, :pull:`1136`):\n\n - ``pvlib.pvsystem.snlinverter`` is now :py:func:`pvlib.inverter.sandia`\n - ``pvlib.pvsystem.pvwatts_ac`` is now :py:func:`pvlib.inverter.pvwatts`\n - ``pvlib.pvsystem.adrinverter`` is now :py:func:`pvlib.inverter.adr`\n\n* Argument ``ac_model`` for :py:class:`pvlib.modelchain.ModelChain` now accepts\n ``'sandia'``, ``'pvwatts'`` and ``'adr'`` for the inverter models. (:pull:`886`, :pull:`1136`)\n\n* Calling :py:meth:`pvlib.pvsystem.PVSystem.sapm_celltemp` without setting\n ``PVSystem.temperature_model_parameters``,\n or a valid combination of ``PVsystem.module_type`` and ``PVsystem.racking_model``, will\n now raise an exception. (:issue:`1030`, :pull:`1033`, :pull:`1136`)\n\n* Deprecated arbitrary keyword arguments for\n :py:class:`pvlib.location.Location`, :py:class:`pvlib.pvsystem.PVSystem`,\n :py:class:`pvlib.tracking.SingleAxisTracker`, and\n :py:class:`pvlib.modelchain.ModelChain`. Supplying arbitrary keyword\n to these objects result in TypeErrors in v0.9. (:issue:`1029`, :pull:`1053`, :pull:`1136`)\n\n* ``pvlib.pvsystem.LocalizedPVSystem`` and ``pvlib.pvsystem.LocalizedSingleAxisTracker``\n have been removed. 
Use\n :py:class:`pvlib.location.Location`, :py:class:`pvlib.pvsystem.PVSystem`,\n :py:class:`pvlib.tracking.SingleAxisTracker`, and\n :py:class:`pvlib.modelchain.ModelChain` instead.\n (:issue:`1029`, :pull:`1034`, :pull:`1053`, :pull:`1136`)\n\n* ``irradiance.liujordan`` and ``ForecastModel.cloud_cover_to_irradiance_liujordan``\n have been removed. (:pull:`1136`)\n\n* ``ModelChain.snlinverter`` changed to ``ModelChain.sandia_inverter``.\n ``ModelChain.adrinverter`` changed to ``ModelChain.adr_inverter``.\n (:pull:`1150`)\n\n* The ``orientation_strategy`` parameter has been removed from the various\n :py:class:`pvlib.modelchain.ModelChain` constructors and ``surface_tilt``,\n ``surface_azimuth`` are now required parameters for\n :py:func:`pvlib.modelchain.basic_chain` (:issue:`1028`, :pull:`1181`)\n\n\nDeprecations\n~~~~~~~~~~~~\n* The following ``ModelChain`` attributes are deprecated. They have been moved\n to the :py:class:`~pvlib.modelchain.ModelChainResult` class that is\n accessible via ``ModelChain.results``:\n\n * ``ModelChain.ac``\n * ``ModelChain.airmass``\n * ``ModelChain.aoi``\n * ``ModelChain.aoi_modifier``\n * ``ModelChain.cell_temperature``\n * ``ModelChain.dc``\n * ``ModelChain.diode_params``\n * ``ModelChain.effective_irradiance``\n * ``ModelChain.solar_position``\n * ``ModelChain.spectral_modifier``\n * ``ModelChain.total_irrad``\n * ``ModelChain.tracking``\n\nEnhancements\n~~~~~~~~~~~~\n* Add :func:`~pvlib.iotools.read_bsrn` for reading BSRN solar radiation data\n files. (:pull:`1145`, :issue:`1015`)\n* In :py:class:`~pvlib.modelchain.ModelChain`, attributes which contain\n output of models are now collected into ``ModelChain.results``.\n (:pull:`1076`, :issue:`1067`)\n* Added :py:class:`~pvlib.pvsystem.Array` class to represent an array of\n modules separately from a :py:class:`~pvlib.pvsystem.PVSystem`.\n (:pull:`1076`, :issue:`1067`)\n* Added capability for modeling a PV system with multiple arrays in\n :py:class:`~pvlib.pvsystem.PVSystem`. Updates the ``PVSystem`` API\n to operate on and return tuples where each element of the tuple corresponds\n to the input or output for a specific ``Array``. (:pull:`1076`,\n :issue:`1067`)\n* Support for systems with multiple ``Arrays`` added to\n :py:class:`~pvlib.modelchain.ModelChain`. This includes substantial API\n enhancements for accepting different weather input for each ``Array`` in the\n system. (:pull:`1076`, :issue:`1067`)\n* Support for :py:func:`~pvlib.inverter.sandia_multi` and\n :py:func:`~pvlib.inverter.pvwatts_multi` added to\n :py:class:`~pvlib.pvsystem.PVSystem` and\n :py:class:`~pvlib.modelchain.ModelChain` (as ``ac_model='sandia'``\n and ``ac_model='pvwatts'``).\n (:pull:`1076`, :issue:`1067`, :pull:`1132`, :issue:`1117`, :pull:`1150`)\n* :py:class:`~pvlib.modelchain.ModelChain` 'run_model' methods now\n automatically switch to using ``'effective_irradiance'`` (if available) for\n cell temperature models, when ``'poa_global'`` is not provided in input\n weather or calculated from input weather data.\n* :py:meth:`~pvlib.modelchain.ModelChain.pvwatts_dc` now scales the DC power\n by ``pvsystem.PVSystem.modules_per_strings`` and\n ``pvsystem.PVSystem.strings_per_inverter``. Note that both attributes still\n default to 1. (:pull:`1138`)\n* :py:meth:`~pvlib.pvsystem.PVSystem.get_ac` is added to calculate AC power\n from DC power. 
Use parameter ``model`` to specify which inverter model to use.\n (:pull:`1147`, :issue:`998`, :pull:`1150`)\n* Added :py:func:`~pvlib.temperature.noct_sam`, a cell temperature model\n implemented in SAM (:pull:`1177`, :pull:`1195`)\n\nBug fixes\n~~~~~~~~~\n* Pass weather data to solar position calculations in\n :py:meth:`~pvlib.modelchain.ModelChain.prepare_inputs_from_poa`.\n (:issue:`1065`, :pull:`1140`)\n* Reindl model fixed to generate sky_diffuse=0 when GHI=0.\n (:issue:`1153`, :pull:`1154`)\n\nTesting\n~~~~~~~\n\nDocumentation\n~~~~~~~~~~~~~\n\n* Update intro tutorial to highlight the use of historical meteorological data\n and to make the procedural and object oriented results match exactly.\n* Update documentation links in :py:func:`pvlib.iotools.get_psm3`\n\nRequirements\n~~~~~~~~~~~~\n* ``dataclasses`` is required for python 3.6\n\nContributors\n~~~~~~~~~~~~\n* Will Holmgren (:ghuser:`wholmgren`)\n* Cliff Hansen (:ghuser:`cwhanse`)\n* Will Vining (:ghuser:`wfvining`)\n* Anton Driesse (:ghuser:`adriesse`)\n* Mark Mikofski (:ghuser:`mikofski`)\n* Nate Croft (:ghuser:`ncroft-b4`)\n* Kevin Anderson (:ghuser:`kanderso-nrel`)\n* Adam R. Jensen (:ghuser:`AdamRJensen`)\n* Joshua Stein (:ghuser:`jsstein`)\n* Tony Lorenzo (:ghuser:`alorenzo175`)\n", "pvlib/ivtools/sdm.py": "\"\"\"\nThe ``sdm`` module contains functions to fit single diode models.\n\nFunction names should follow the pattern \"fit_\" + name of model + \"_\" +\n fitting method.\n\n\"\"\"\n\nimport numpy as np\n\nimport scipy.constants\nfrom scipy import optimize\nfrom scipy.special import lambertw\n\nfrom pvlib.pvsystem import singlediode, v_from_i\n\nfrom pvlib.ivtools.utils import rectify_iv_curve, _numdiff\nfrom pvlib.ivtools.sde import _fit_sandia_cocontent\n\n\ndef fit_cec_sam(celltype, v_mp, i_mp, v_oc, i_sc, alpha_sc, beta_voc,\n gamma_pmp, cells_in_series, temp_ref=25):\n \"\"\"\n Estimates parameters for the CEC single diode model (SDM) using the SAM\n SDK.\n\n Parameters\n ----------\n celltype : str\n Value is one of 'monoSi', 'multiSi', 'polySi', 'cis', 'cigs', 'cdte',\n 'amorphous'\n v_mp : float\n Voltage at maximum power point [V]\n i_mp : float\n Current at maximum power point [A]\n v_oc : float\n Open circuit voltage [V]\n i_sc : float\n Short circuit current [A]\n alpha_sc : float\n Temperature coefficient of short circuit current [A/C]\n beta_voc : float\n Temperature coefficient of open circuit voltage [V/C]\n gamma_pmp : float\n Temperature coefficient of power at maximum point point [%/C]\n cells_in_series : int\n Number of cells in series\n temp_ref : float, default 25\n Reference temperature condition [C]\n\n Returns\n -------\n I_L_ref : float\n The light-generated current (or photocurrent) at reference\n conditions [A]\n I_o_ref : float\n The dark or diode reverse saturation current at reference\n conditions [A]\n R_s : float\n The series resistance at reference conditions, in ohms.\n R_sh_ref : float\n The shunt resistance at reference conditions, in ohms.\n a_ref : float\n The product of the usual diode ideality factor ``n`` (unitless),\n number of cells in series ``Ns``, and cell thermal voltage at\n reference conditions [V]\n Adjust : float\n The adjustment to the temperature coefficient for short circuit\n current, in percent.\n\n Raises\n ------\n ImportError if NREL-PySAM is not installed.\n\n RuntimeError if parameter extraction is not successful.\n\n Notes\n -----\n The CEC model and estimation method are described in [1]_.\n Inputs ``v_mp``, ``i_mp``, ``v_oc`` and ``i_sc`` are assumed to 
be from a\n single IV curve at constant irradiance and cell temperature. Irradiance is\n not explicitly used by the fitting procedure. The irradiance level at which\n the input IV curve is determined and the specified cell temperature\n ``temp_ref`` are the reference conditions for the output parameters\n ``I_L_ref``, ``I_o_ref``, ``R_s``, ``R_sh_ref``, ``a_ref`` and ``Adjust``.\n\n References\n ----------\n .. [1] A. Dobos, \"An Improved Coefficient Calculator for the California\n Energy Commission 6 Parameter Photovoltaic Module Model\", Journal of\n Solar Energy Engineering, vol 134, 2012.\n \"\"\"\n\n try:\n from PySAM import PySSC\n except ImportError:\n raise ImportError(\"Requires NREL's PySAM package at \"\n \"https://pypi.org/project/NREL-PySAM/.\")\n\n datadict = {'tech_model': '6parsolve', 'financial_model': None,\n 'celltype': celltype, 'Vmp': v_mp,\n 'Imp': i_mp, 'Voc': v_oc, 'Isc': i_sc, 'alpha_isc': alpha_sc,\n 'beta_voc': beta_voc, 'gamma_pmp': gamma_pmp,\n 'Nser': cells_in_series, 'Tref': temp_ref}\n\n result = PySSC.ssc_sim_from_dict(datadict)\n if result['cmod_success'] == 1:\n return tuple([result[k] for k in ['Il', 'Io', 'Rs', 'Rsh', 'a',\n 'Adj']])\n else:\n raise RuntimeError('Parameter estimation failed')\n\n\ndef fit_desoto(v_mp, i_mp, v_oc, i_sc, alpha_sc, beta_voc, cells_in_series,\n EgRef=1.121, dEgdT=-0.0002677, temp_ref=25, irrad_ref=1000,\n root_kwargs={}):\n \"\"\"\n Calculates the parameters for the De Soto single diode model.\n\n This procedure (described in [1]_) has the advantage of\n using common specifications given by manufacturers in the\n datasheets of PV modules.\n\n The solution is found using the scipy.optimize.root() function,\n with the corresponding default solver method 'hybr'.\n No restriction is put on the fit variables, i.e. series\n or shunt resistance could go negative. 
Nevertheless, if it happens,\n check carefully the inputs and their units; alpha_sc and beta_voc are\n often given in %/K in manufacturers datasheets and should be given\n in A/K and V/K here.\n\n The parameters returned by this function can be used by\n :py:func:`pvlib.pvsystem.calcparams_desoto` to calculate the values at\n different irradiance and cell temperature.\n\n Parameters\n ----------\n v_mp: float\n Module voltage at the maximum-power point at reference conditions [V].\n i_mp: float\n Module current at the maximum-power point at reference conditions [A].\n v_oc: float\n Open-circuit voltage at reference conditions [V].\n i_sc: float\n Short-circuit current at reference conditions [A].\n alpha_sc: float\n The short-circuit current (i_sc) temperature coefficient of the\n module [A/K].\n beta_voc: float\n The open-circuit voltage (v_oc) temperature coefficient of the\n module [V/K].\n cells_in_series: integer\n Number of cell in the module.\n EgRef: float, default 1.121 eV - value for silicon\n Energy of bandgap of semi-conductor used [eV]\n dEgdT: float, default -0.0002677 - value for silicon\n Variation of bandgap according to temperature [eV/K]\n temp_ref: float, default 25\n Reference temperature condition [C]\n irrad_ref: float, default 1000\n Reference irradiance condition [W/m2]\n root_kwargs: dictionary, default None\n Dictionary of arguments to pass onto scipy.optimize.root()\n\n Returns\n -------\n dict with the following elements:\n I_L_ref: float\n Light-generated current at reference conditions [A]\n I_o_ref: float\n Diode saturation current at reference conditions [A]\n R_s: float\n Series resistance [ohm]\n R_sh_ref: float\n Shunt resistance at reference conditions [ohm].\n a_ref: float\n Modified ideality factor at reference conditions.\n The product of the usual diode ideality factor (n, unitless),\n number of cells in series (Ns), and cell thermal voltage at\n specified effective irradiance and cell temperature.\n alpha_sc: float\n The short-circuit current (i_sc) temperature coefficient of the\n module [A/K].\n EgRef: float\n Energy of bandgap of semi-conductor used [eV]\n dEgdT: float\n Variation of bandgap according to temperature [eV/K]\n irrad_ref: float\n Reference irradiance condition [W/m2]\n temp_ref: float\n Reference temperature condition [C]\n\n scipy.optimize.OptimizeResult\n Optimization result of scipy.optimize.root().\n See scipy.optimize.OptimizeResult for more details.\n\n References\n ----------\n .. [1] W. De Soto et al., \"Improvement and validation of a model for\n photovoltaic array performance\", Solar Energy, vol 80, pp. 
78-88,\n 2006.\n \"\"\"\n\n # Constants\n k = scipy.constants.value('Boltzmann constant in eV/K')\n Tref = temp_ref + 273.15 # [K]\n\n # initial guesses of variables for computing convergence:\n # Values are taken from [2], p753\n Rsh_0 = 100.0\n a_0 = 1.5*k*Tref*cells_in_series\n IL_0 = i_sc\n Io_0 = i_sc * np.exp(-v_oc/a_0)\n Rs_0 = (a_0*np.log1p((IL_0-i_mp)/Io_0) - v_mp)/i_mp\n # params_i : initial values vector\n params_i = np.array([IL_0, Io_0, Rs_0, Rsh_0, a_0])\n\n # specs of module\n specs = (i_sc, v_oc, i_mp, v_mp, beta_voc, alpha_sc, EgRef, dEgdT,\n Tref, k)\n\n # computing with system of equations described in [1]\n optimize_result = optimize.root(_system_of_equations_desoto, x0=params_i,\n args=(specs,), **root_kwargs)\n\n if optimize_result.success:\n sdm_params = optimize_result.x\n else:\n raise RuntimeError(\n 'Parameter estimation failed:\\n' + optimize_result.message)\n\n # results\n return ({'I_L_ref': sdm_params[0],\n 'I_o_ref': sdm_params[1],\n 'R_s': sdm_params[2],\n 'R_sh_ref': sdm_params[3],\n 'a_ref': sdm_params[4],\n 'alpha_sc': alpha_sc,\n 'EgRef': EgRef,\n 'dEgdT': dEgdT,\n 'irrad_ref': irrad_ref,\n 'temp_ref': temp_ref},\n optimize_result)\n\n\ndef _system_of_equations_desoto(params, specs):\n \"\"\"Evaluates the systems of equations used to solve for the single\n diode equation parameters. Function designed to be used by\n scipy.optimize.root in fit_desoto.\n\n Parameters\n ----------\n params: ndarray\n Array with parameters of the De Soto single diode model. Must be\n given in the following order: IL, Io, a, Rs, Rsh\n specs: tuple\n Specifications of pv module given by manufacturer. Must be given\n in the following order: Isc, Voc, Imp, Vmp, beta_oc, alpha_sc\n\n Returns\n -------\n value of the system of equations to solve with scipy.optimize.root().\n \"\"\"\n\n # six input known variables\n Isc, Voc, Imp, Vmp, beta_oc, alpha_sc, EgRef, dEgdT, Tref, k = specs\n\n # five parameters vector to find\n IL, Io, Rs, Rsh, a = params\n\n # five equation vector\n y = [0, 0, 0, 0, 0]\n\n # 1st equation - short-circuit - eq(3) in [1]\n y[0] = Isc - IL + Io * np.expm1(Isc * Rs / a) + Isc * Rs / Rsh\n\n # 2nd equation - open-circuit Tref - eq(4) in [1]\n y[1] = -IL + Io * np.expm1(Voc / a) + Voc / Rsh\n\n # 3rd equation - Imp & Vmp - eq(5) in [1]\n y[2] = Imp - IL + Io * np.expm1((Vmp + Imp * Rs) / a) \\\n + (Vmp + Imp * Rs) / Rsh\n\n # 4th equation - Pmp derivated=0 - eq23.2.6 in [2]\n # caution: eq(6) in [1] has a sign error\n y[3] = Imp \\\n - Vmp * ((Io / a) * np.exp((Vmp + Imp * Rs) / a) + 1.0 / Rsh) \\\n / (1.0 + (Io * Rs / a) * np.exp((Vmp + Imp * Rs) / a) + Rs / Rsh)\n\n # 5th equation - open-circuit T2 - eq (4) at temperature T2 in [1]\n T2 = Tref + 2\n Voc2 = (T2 - Tref) * beta_oc + Voc # eq (7) in [1]\n a2 = a * T2 / Tref # eq (8) in [1]\n IL2 = IL + alpha_sc * (T2 - Tref) # eq (11) in [1]\n Eg2 = EgRef * (1 + dEgdT * (T2 - Tref)) # eq (10) in [1]\n Io2 = Io * (T2 / Tref)**3 * np.exp(1 / k * (EgRef/Tref - Eg2/T2)) # eq (9)\n y[4] = -IL2 + Io2 * np.expm1(Voc2 / a2) + Voc2 / Rsh # eq (4) at T2\n\n return y\n\n\ndef fit_pvsyst_sandia(ivcurves, specs, const=None, maxiter=5, eps1=1.e-3):\n \"\"\"\n Estimate parameters for the PVsyst module performance model.\n\n Parameters\n ----------\n ivcurves : dict\n i : array\n One array element for each IV curve. The jth element is itself an\n array of current for jth IV curve (same length as v[j]) [A]\n v : array\n One array element for each IV curve. 
The jth element is itself an\n array of voltage for jth IV curve (same length as i[j]) [V]\n ee : array\n effective irradiance for each IV curve, i.e., POA broadband\n irradiance adjusted by solar spectrum modifier [W / m^2]\n tc : array\n cell temperature for each IV curve [C]\n i_sc : array\n short circuit current for each IV curve [A]\n v_oc : array\n open circuit voltage for each IV curve [V]\n i_mp : array\n current at max power point for each IV curve [A]\n v_mp : array\n voltage at max power point for each IV curve [V]\n\n specs : dict\n cells_in_series : int\n number of cells in series\n alpha_sc : float\n temperature coefficient of isc [A/C]\n\n const : dict\n E0 : float\n effective irradiance at STC, default 1000 [W/m^2]\n T0 : float\n cell temperature at STC, default 25 [C]\n k : float\n 1.38066E-23 J/K (Boltzmann's constant)\n q : float\n 1.60218E-19 Coulomb (elementary charge)\n\n maxiter : int, default 5\n input that sets the maximum number of iterations for the parameter\n updating part of the algorithm.\n\n eps1: float, default 1e-3\n Tolerance for the IV curve fitting. The parameter updating stops when\n absolute values of the percent change in mean, max and standard\n deviation of Imp, Vmp and Pmp between iterations are all less than\n eps1, or when the number of iterations exceeds maxiter.\n\n Returns\n -------\n dict\n I_L_ref : float\n light current at STC [A]\n I_o_ref : float\n dark current at STC [A]\n EgRef : float\n effective band gap at STC [eV]\n R_s : float\n series resistance at STC [ohm]\n R_sh_ref : float\n shunt resistance at STC [ohm]\n R_sh_0 : float\n shunt resistance at zero irradiance [ohm]\n R_sh_exp : float\n exponential factor defining decrease in shunt resistance with\n increasing effective irradiance\n gamma_ref : float\n diode (ideality) factor at STC [unitless]\n mu_gamma : float\n temperature coefficient for diode (ideality) factor [1/K]\n cells_in_series : int\n number of cells in series\n iph : array\n light current for each IV curve [A]\n io : array\n dark current for each IV curve [A]\n rs : array\n series resistance for each IV curve [ohm]\n rsh : array\n shunt resistance for each IV curve [ohm]\n u : array\n boolean for each IV curve indicating that the parameter values\n are deemed reasonable by the private function ``_filter_params``\n\n Notes\n -----\n The PVsyst module performance model is described in [1]_, [2]_, and [3]_.\n The fitting method is documented in [4]_, [5]_, and [6]_.\n Ported from PVLib Matlab [7]_.\n\n References\n ----------\n .. [1] K. Sauer, T. Roessler, C. W. Hansen, Modeling the Irradiance and\n Temperature Dependence of Photovoltaic Modules in PVsyst, IEEE Journal\n of Photovoltaics v5(1), January 2015.\n .. [2] A. Mermoud, PV Modules modeling, Presentation at the 2nd PV\n Performance Modeling Workshop, Santa Clara, CA, May 2013\n .. [3] A. Mermoud, T. Lejeuene, Performance Assessment of a Simulation\n Model for PV modules of any available technology, 25th European\n Photovoltaic Solar Energy Conference, Valencia, Spain, Sept. 2010\n .. [4] C. Hansen, Estimating Parameters for the PVsyst Version 6\n Photovoltaic Module Performance Model, Sandia National Laboratories\n Report SAND2015-8598\n .. [5] C. Hansen, Parameter Estimation for Single Diode Models of\n Photovoltaic Modules, Sandia National Laboratories Report SAND2015-2065\n .. [6] C. Hansen, Estimation of Parameters for Single Diode Models using\n Measured IV Curves, Proc. of the 39th IEEE PVSC, June 2013.\n .. 
[7] PVLib MATLAB https://github.com/sandialabs/MATLAB_PV_LIB\n \"\"\"\n\n if const is None:\n const = {'E0': 1000.0, 'T0': 25.0, 'k': 1.38066e-23, 'q': 1.60218e-19}\n\n ee = ivcurves['ee']\n tc = ivcurves['tc']\n tck = tc + 273.15\n isc = ivcurves['i_sc']\n voc = ivcurves['v_oc']\n imp = ivcurves['i_mp']\n vmp = ivcurves['v_mp']\n\n # Cell Thermal Voltage\n vth = const['k'] / const['q'] * tck\n\n n = len(ivcurves['v_oc'])\n\n # Initial estimate of Rsh used to obtain the diode factor gamma0 and diode\n # temperature coefficient mu_gamma. Rsh is estimated using the co-content\n # integral method.\n\n rsh = np.ones(n)\n for j in range(n):\n voltage, current = rectify_iv_curve(ivcurves['v'][j], ivcurves['i'][j])\n # initial estimate of Rsh, from integral over voltage regression\n # [5] Step 3a; [6] Step 3a\n _, _, _, rsh[j], _ = _fit_sandia_cocontent(\n voltage, current, vth[j] * specs['cells_in_series'])\n\n gamma_ref, mu_gamma = _fit_pvsyst_sandia_gamma(voc, isc, rsh, vth, tck,\n specs, const)\n\n badgamma = np.isnan(gamma_ref) or np.isnan(mu_gamma) \\\n or not np.isreal(gamma_ref) or not np.isreal(mu_gamma)\n\n if badgamma:\n raise RuntimeError(\n \"Failed to estimate the diode (ideality) factor parameter;\"\n \" aborting parameter estimation.\")\n\n gamma = gamma_ref + mu_gamma * (tc - const['T0'])\n nnsvth = gamma * (vth * specs['cells_in_series'])\n\n # For each IV curve, sequentially determine initial values for Io, Rs,\n # and Iph [5] Step 3a; [6] Step 3\n iph, io, rs, u = _initial_iv_params(ivcurves, ee, voc, isc, rsh,\n nnsvth)\n\n # Update values for each IV curve to converge at vmp, imp, voc and isc\n iph, io, rs, rsh, u = _update_iv_params(voc, isc, vmp, imp, ee,\n iph, io, rs, rsh, nnsvth, u,\n maxiter, eps1)\n\n # get single diode models from converged values for each IV curve\n pvsyst = _extract_sdm_params(ee, tc, iph, io, rs, rsh, gamma, u,\n specs, const, model='pvsyst')\n # Add parameters estimated in this function\n pvsyst['gamma_ref'] = gamma_ref\n pvsyst['mu_gamma'] = mu_gamma\n pvsyst['cells_in_series'] = specs['cells_in_series']\n\n return pvsyst\n\n\ndef fit_desoto_sandia(ivcurves, specs, const=None, maxiter=5, eps1=1.e-3):\n \"\"\"\n Estimate parameters for the De Soto module performance model.\n\n Parameters\n ----------\n ivcurves : dict\n i : array\n One array element for each IV curve. The jth element is itself an\n array of current for jth IV curve (same length as v[j]) [A]\n v : array\n One array element for each IV curve. 
The jth element is itself an\n array of voltage for jth IV curve (same length as i[j]) [V]\n ee : array\n effective irradiance for each IV curve, i.e., POA broadband\n irradiance adjusted by solar spectrum modifier [W / m^2]\n tc : array\n cell temperature for each IV curve [C]\n i_sc : array\n short circuit current for each IV curve [A]\n v_oc : array\n open circuit voltage for each IV curve [V]\n i_mp : array\n current at max power point for each IV curve [A]\n v_mp : array\n voltage at max power point for each IV curve [V]\n\n specs : dict\n cells_in_series : int\n number of cells in series\n alpha_sc : float\n temperature coefficient of Isc [A/C]\n beta_voc : float\n temperature coefficient of Voc [V/C]\n\n const : dict\n E0 : float\n effective irradiance at STC, default 1000 [W/m^2]\n T0 : float\n cell temperature at STC, default 25 [C]\n k : float\n 1.38066E-23 J/K (Boltzmann's constant)\n q : float\n 1.60218E-19 Coulomb (elementary charge)\n\n maxiter : int, default 5\n input that sets the maximum number of iterations for the parameter\n updating part of the algorithm.\n\n eps1: float, default 1e-3\n Tolerance for the IV curve fitting. The parameter updating stops when\n absolute values of the percent change in mean, max and standard\n deviation of Imp, Vmp and Pmp between iterations are all less than\n eps1, or when the number of iterations exceeds maxiter.\n\n Returns\n -------\n dict\n I_L_ref : float\n light current at STC [A]\n I_o_ref : float\n dark current at STC [A]\n EgRef : float\n effective band gap at STC [eV]\n R_s : float\n series resistance at STC [ohm]\n R_sh_ref : float\n shunt resistance at STC [ohm]\n cells_in_series : int\n number of cells in series\n iph : array\n light current for each IV curve [A]\n io : array\n dark current for each IV curve [A]\n rs : array\n series resistance for each IV curve [ohm]\n rsh : array\n shunt resistance for each IV curve [ohm]\n u : array\n boolean for each IV curve indicating that the parameter values\n are deemed reasonable by the private function ``_filter_params``\n\n Notes\n -----\n The De Soto module performance model is described in [1]_. The fitting\n method is documented in [2]_, [3]_. Ported from PVLib Matlab [4]_.\n\n References\n ----------\n .. [1] W. De Soto et al., \"Improvement and validation of a model for\n photovoltaic array performance\", Solar Energy, vol 80, pp. 78-88,\n 2006.\n .. [2] C. Hansen, Parameter Estimation for Single Diode Models of\n Photovoltaic Modules, Sandia National Laboratories Report SAND2015-2065\n .. [3] C. Hansen, Estimation of Parameters for Single Diode Models using\n Measured IV Curves, Proc. of the 39th IEEE PVSC, June 2013.\n .. [4] PVLib MATLAB https://github.com/sandialabs/MATLAB_PV_LIB\n \"\"\"\n\n if const is None:\n const = {'E0': 1000.0, 'T0': 25.0, 'k': 1.38066e-23, 'q': 1.60218e-19}\n\n ee = ivcurves['ee']\n tc = ivcurves['tc']\n tck = tc + 273.15\n isc = ivcurves['i_sc']\n voc = ivcurves['v_oc']\n imp = ivcurves['i_mp']\n vmp = ivcurves['v_mp']\n\n # Cell Thermal Voltage\n vth = const['k'] / const['q'] * tck\n\n n = len(voc)\n\n # Initial estimate of Rsh used to obtain the diode factor gamma0 and diode\n # temperature coefficient mu_gamma. 
Rsh is estimated using the co-content\n # integral method.\n\n rsh = np.ones(n)\n for j in range(n):\n voltage, current = rectify_iv_curve(ivcurves['v'][j], ivcurves['i'][j])\n # initial estimate of Rsh, from integral over voltage regression\n # [5] Step 3a; [6] Step 3a\n _, _, _, rsh[j], _ = _fit_sandia_cocontent(\n voltage, current, vth[j] * specs['cells_in_series'])\n\n n0 = _fit_desoto_sandia_diode(ee, voc, vth, tc, specs, const)\n\n bad_n = np.isnan(n0) or not np.isreal(n0)\n\n if bad_n:\n raise RuntimeError(\n \"Failed to estimate the diode (ideality) factor parameter;\"\n \" aborting parameter estimation.\")\n\n nnsvth = n0 * specs['cells_in_series'] * vth\n\n # For each IV curve, sequentially determine initial values for Io, Rs,\n # and Iph [5] Step 3a; [6] Step 3\n iph, io, rs, u = _initial_iv_params(ivcurves, ee, voc, isc, rsh,\n nnsvth)\n\n # Update values for each IV curve to converge at vmp, imp, voc and isc\n iph, io, rs, rsh, u = _update_iv_params(voc, isc, vmp, imp, ee,\n iph, io, rs, rsh, nnsvth, u,\n maxiter, eps1)\n\n # get single diode models from converged values for each IV curve\n desoto = _extract_sdm_params(ee, tc, iph, io, rs, rsh, n0, u,\n specs, const, model='desoto')\n # Add parameters estimated in this function\n desoto['a_ref'] = n0 * specs['cells_in_series'] * const['k'] / \\\n const['q'] * (const['T0'] + 273.15)\n desoto['cells_in_series'] = specs['cells_in_series']\n\n return desoto\n\n\ndef _fit_pvsyst_sandia_gamma(voc, isc, rsh, vth, tck, specs, const):\n # Estimate the diode factor gamma from Isc-Voc data. Method incorporates\n # temperature dependence by means of the equation for Io\n\n y = np.log(isc - voc / rsh) - 3. * np.log(tck / (const['T0'] + 273.15))\n x1 = const['q'] / const['k'] * (1. / (const['T0'] + 273.15) - 1. / tck)\n x2 = voc / (vth * specs['cells_in_series'])\n uu = np.logical_or(np.isnan(y), np.isnan(x1), np.isnan(x2))\n\n x = np.vstack((np.ones(len(x1[~uu])), x1[~uu], -x1[~uu] *\n (tck[~uu] - (const['T0'] + 273.15)), x2[~uu],\n -x2[~uu] * (tck[~uu] - (const['T0'] + 273.15)))).T\n alpha = np.linalg.lstsq(x, y[~uu], rcond=None)[0]\n\n gamma_ref = 1. 
/ alpha[3]\n mu_gamma = alpha[4] / alpha[3] ** 2\n return gamma_ref, mu_gamma\n\n\ndef _fit_desoto_sandia_diode(ee, voc, vth, tc, specs, const):\n # estimates the diode factor for the De Soto model.\n # Helper function for fit_desoto_sandia\n try:\n import statsmodels.api as sm\n except ImportError:\n raise ImportError(\n 'Parameter extraction using Sandia method requires statsmodels')\n\n x = specs['cells_in_series'] * vth * np.log(ee / const['E0'])\n y = voc - specs['beta_voc'] * (tc - const['T0'])\n new_x = sm.add_constant(x)\n res = sm.RLM(y, new_x).fit()\n return res.params[1]\n\n\ndef _initial_iv_params(ivcurves, ee, voc, isc, rsh, nnsvth):\n # sets initial values for iph, io, rs and quality filter u.\n # Helper function for fit_<model>_sandia.\n n = len(ivcurves['v_oc'])\n io = np.ones(n)\n iph = np.ones(n)\n rs = np.ones(n)\n\n for j in range(n):\n\n if rsh[j] > 0:\n volt, curr = rectify_iv_curve(ivcurves['v'][j],\n ivcurves['i'][j])\n # Initial estimate of Io, evaluate the single diode model at\n # voc and approximate Iph + Io = Isc [5] Step 3a; [6] Step 3b\n io[j] = (isc[j] - voc[j] / rsh[j]) * np.exp(-voc[j] /\n nnsvth[j])\n\n # initial estimate of rs from dI/dV near Voc\n # [5] Step 3a; [6] Step 3c\n [didv, d2id2v] = _numdiff(volt, curr)\n t3 = volt > .5 * voc[j]\n t4 = volt < .9 * voc[j]\n tmp = -rsh[j] * didv - 1.\n with np.errstate(invalid=\"ignore\"): # expect nan in didv\n v = np.logical_and.reduce(np.array([t3, t4, ~np.isnan(tmp),\n np.greater(tmp, 0)]))\n if np.any(v):\n vtrs = (nnsvth[j] / isc[j] * (\n np.log(tmp[v] * nnsvth[j] / (rsh[j] * io[j]))\n - volt[v] / nnsvth[j]))\n rs[j] = np.mean(vtrs[vtrs > 0], axis=0)\n else:\n rs[j] = 0.\n\n # Initial estimate of Iph, evaluate the single diode model at\n # Isc [5] Step 3a; [6] Step 3d\n iph[j] = isc[j] + io[j] * np.expm1(isc[j] / nnsvth[j]) \\\n + isc[j] * rs[j] / rsh[j]\n\n else:\n io[j] = np.nan\n rs[j] = np.nan\n iph[j] = np.nan\n\n # Filter IV curves for good initial values\n # [5] Step 3b\n u = _filter_params(ee, isc, io, rs, rsh)\n\n # [5] Step 3c\n # Refine Io to match Voc\n io[u] = _update_io(voc[u], iph[u], io[u], rs[u], rsh[u], nnsvth[u])\n\n # parameters [6], Step 3c\n # Calculate Iph to be consistent with Isc and current values of other\n iph = isc + io * np.expm1(rs * isc / nnsvth) + isc * rs / rsh\n\n return iph, io, rs, u\n\n\ndef _update_iv_params(voc, isc, vmp, imp, ee, iph, io, rs, rsh, nnsvth, u,\n maxiter, eps1):\n # Refine Rsh, Rs, Io and Iph in that order.\n # Helper function for fit_<model>_sandia.\n counter = 1. 
# counter variable for parameter updating while loop,\n # counts iterations\n prevconvergeparams = {}\n prevconvergeparams['state'] = 0.0\n\n not_converged = np.array([True])\n\n while not_converged.any() and counter <= maxiter:\n # update rsh to match max power point using a fixed point method.\n rsh[u] = _update_rsh_fixed_pt(vmp[u], imp[u], iph[u], io[u], rs[u],\n rsh[u], nnsvth[u])\n\n # Calculate Rs to be consistent with Rsh and maximum power point\n _, phi = _calc_theta_phi_exact(vmp[u], imp[u], iph[u], io[u],\n rs[u], rsh[u], nnsvth[u])\n rs[u] = (iph[u] + io[u] - imp[u]) * rsh[u] / imp[u] - \\\n nnsvth[u] * phi / imp[u] - vmp[u] / imp[u]\n\n # Update filter for good parameters\n u = _filter_params(ee, isc, io, rs, rsh)\n\n # Update value for io to match voc\n io[u] = _update_io(voc[u], iph[u], io[u], rs[u], rsh[u], nnsvth[u])\n\n # Calculate Iph to be consistent with Isc and other parameters\n iph = isc + io * np.expm1(rs * isc / nnsvth) + isc * rs / rsh\n\n # update filter for good parameters\n u = _filter_params(ee, isc, io, rs, rsh)\n\n # compute the IV curve from the current parameter values\n result = singlediode(iph[u], io[u], rs[u], rsh[u], nnsvth[u])\n\n # check convergence criteria\n # [5] Step 3d\n convergeparams = _check_converge(\n prevconvergeparams, result, vmp[u], imp[u], counter)\n\n prevconvergeparams = convergeparams\n counter += 1.\n t5 = prevconvergeparams['vmperrmeanchange'] >= eps1\n t6 = prevconvergeparams['imperrmeanchange'] >= eps1\n t7 = prevconvergeparams['pmperrmeanchange'] >= eps1\n t8 = prevconvergeparams['vmperrstdchange'] >= eps1\n t9 = prevconvergeparams['imperrstdchange'] >= eps1\n t10 = prevconvergeparams['pmperrstdchange'] >= eps1\n t11 = prevconvergeparams['vmperrabsmaxchange'] >= eps1\n t12 = prevconvergeparams['imperrabsmaxchange'] >= eps1\n t13 = prevconvergeparams['pmperrabsmaxchange'] >= eps1\n not_converged = np.logical_or.reduce(np.array([t5, t6, t7, t8, t9,\n t10, t11, t12, t13]))\n\n return iph, io, rs, rsh, u\n\n\ndef _extract_sdm_params(ee, tc, iph, io, rs, rsh, n, u, specs, const,\n model):\n # Get single diode model parameters from five parameters iph, io, rs, rsh\n # and n vs. effective irradiance and temperature\n try:\n import statsmodels.api as sm\n except ImportError:\n raise ImportError(\n 'Parameter extraction using Sandia method requires statsmodels')\n\n tck = tc + 273.15\n tok = const['T0'] + 273.15 # convert to to K\n\n params = {}\n\n if model == 'pvsyst':\n # Estimate I_o_ref and EgRef\n x_for_io = const['q'] / const['k'] * (1. / tok - 1. / tck[u]) / n[u]\n\n # Estimate R_sh_0, R_sh_ref and R_sh_exp\n # Initial guesses. 
R_sh_0 is value at ee=0.\n nans = np.isnan(rsh)\n if any(ee < 400):\n grsh0 = np.mean(rsh[np.logical_and(~nans, ee < 400)])\n else:\n grsh0 = np.max(rsh)\n # Rsh_ref is value at Ee = 1000\n if any(ee > 400):\n grshref = np.mean(rsh[np.logical_and(~nans, ee > 400)])\n else:\n grshref = np.min(rsh)\n # PVsyst default for Rshexp is 5.5\n R_sh_exp = 5.5\n\n # Find parameters for Rsh equation\n\n def fun_rsh(x, rshexp, ee, e0, rsh):\n tf = np.log10(_rsh_pvsyst(x, R_sh_exp, ee, e0)) - np.log10(rsh)\n return tf\n\n x0 = np.array([grsh0, grshref])\n beta = optimize.least_squares(\n fun_rsh, x0, args=(R_sh_exp, ee[u], const['E0'], rsh[u]),\n bounds=np.array([[1., 1.], [1.e7, 1.e6]]), verbose=2)\n # Extract PVsyst parameter values\n R_sh_0 = beta.x[0]\n R_sh_ref = beta.x[1]\n\n # parameters unique to PVsyst\n params['R_sh_0'] = R_sh_0\n params['R_sh_exp'] = R_sh_exp\n\n elif model == 'desoto':\n dEgdT = 0.0002677\n x_for_io = const['q'] / const['k'] * (\n 1. / tok - 1. / tck[u] + dEgdT * (tc[u] - const['T0']) / tck[u])\n\n # Estimate R_sh_ref\n nans = np.isnan(rsh)\n x = const['E0'] / ee[np.logical_and(u, ee > 400, ~nans)]\n y = rsh[np.logical_and(u, ee > 400, ~nans)]\n new_x = sm.add_constant(x)\n beta = sm.RLM(y, new_x).fit()\n R_sh_ref = beta.params[1]\n\n params['dEgdT'] = dEgdT\n\n # Estimate I_o_ref and EgRef\n y = np.log(io[u]) - 3. * np.log(tck[u] / tok)\n new_x = sm.add_constant(x_for_io)\n res = sm.RLM(y, new_x).fit()\n beta = res.params\n I_o_ref = np.exp(beta[0])\n EgRef = beta[1]\n\n # Estimate I_L_ref\n x = tc[u] - const['T0']\n y = iph[u] * (const['E0'] / ee[u])\n # average over non-NaN values of Y and X\n nans = np.isnan(y - specs['alpha_sc'] * x)\n I_L_ref = np.mean(y[~nans] - specs['alpha_sc'] * x[~nans])\n\n # Estimate R_s\n nans = np.isnan(rs)\n R_s = np.mean(rs[np.logical_and(u, ee > 400, ~nans)])\n\n params['I_L_ref'] = I_L_ref\n params['I_o_ref'] = I_o_ref\n params['EgRef'] = EgRef\n params['R_sh_ref'] = R_sh_ref\n params['R_s'] = R_s\n # save values for each IV curve\n params['iph'] = iph\n params['io'] = io\n params['rsh'] = rsh\n params['rs'] = rs\n params['u'] = u\n\n return params\n\n\ndef _update_io(voc, iph, io, rs, rsh, nnsvth):\n \"\"\"\n Adjusts Io to match Voc using other parameter values.\n\n Helper function for fit_pvsyst_sandia, fit_desoto_sandia\n\n Description\n -----------\n Io is updated iteratively 10 times or until successive\n values are less than 0.000001 % different. The updating is similar to\n Newton's method.\n\n Parameters\n ----------\n voc: a numpy array of length N of values for Voc (V)\n iph: a numpy array of length N of values for lighbt current IL (A)\n io: a numpy array of length N of initial values for Io (A)\n rs: a numpy array of length N of values for the series resistance (ohm)\n rsh: a numpy array of length N of values for the shunt resistance (ohm)\n nnsvth: a numpy array of length N of values for the diode factor x thermal\n voltage for the module, equal to Ns (number of cells in series) x\n Vth (thermal voltage per cell).\n\n Returns\n -------\n new_io - a numpy array of length N of updated values for io\n\n References\n ----------\n .. [1] PVLib MATLAB https://github.com/sandialabs/MATLAB_PV_LIB\n .. [2] C. Hansen, Parameter Estimation for Single Diode Models of\n Photovoltaic Modules, Sandia National Laboratories Report SAND2015-2065\n .. [3] C. Hansen, Estimation of Parameteres for Single Diode Models using\n Measured IV Curves, Proc. 
of the 39th IEEE PVSC, June 2013.\n \"\"\"\n\n eps = 1e-6\n niter = 10\n k = 1\n maxerr = 1\n\n tio = io # Current Estimate of Io\n\n while maxerr > eps and k < niter:\n # Predict Voc\n pvoc = v_from_i(rsh, rs, nnsvth, 0., tio, iph)\n\n # Difference in Voc\n dvoc = pvoc - voc\n\n # Update Io\n with np.errstate(invalid=\"ignore\", divide=\"ignore\"):\n new_io = tio * (1. + (2. * dvoc) / (2. * nnsvth - dvoc))\n # Calculate Maximum Percent Difference\n maxerr = np.max(np.abs(new_io - tio) / tio) * 100.\n\n tio = new_io\n k += 1.\n\n return new_io\n\n\ndef _rsh_pvsyst(x, rshexp, g, go):\n # computes rsh for PVsyst model where the parameters are in vector xL\n # x[0] = Rsh0\n # x[1] = Rshref\n\n rsho = x[0]\n rshref = x[1]\n\n rshb = np.maximum(\n (rshref - rsho * np.exp(-rshexp)) / (1. - np.exp(-rshexp)), 0.)\n rsh = rshb + (rsho - rshb) * np.exp(-rshexp * g / go)\n return rsh\n\n\ndef _filter_params(ee, isc, io, rs, rsh):\n # Function _filter_params identifies bad parameter sets. A bad set contains\n # Nan, non-positive or imaginary values for parameters; Rs > Rsh; or data\n # where effective irradiance Ee differs by more than 5% from a linear fit\n # to Isc vs. Ee\n\n badrsh = np.logical_or(rsh < 0., np.isnan(rsh))\n negrs = rs < 0.\n badrs = np.logical_or(rs > rsh, np.isnan(rs))\n imagrs = ~(np.isreal(rs))\n badio = np.logical_or(~(np.isreal(rs)), io <= 0)\n goodr = np.logical_and(~badrsh, ~imagrs)\n goodr = np.logical_and(goodr, ~negrs)\n goodr = np.logical_and(goodr, ~badrs)\n goodr = np.logical_and(goodr, ~badio)\n\n matrix = np.vstack((ee / 1000., np.zeros(len(ee)))).T\n eff = np.linalg.lstsq(matrix, isc, rcond=None)[0][0]\n pisc = eff * ee / 1000\n pisc_error = np.abs(pisc - isc) / isc\n # check for departure from linear relation between Isc and Ee\n badiph = pisc_error > .05\n\n u = np.logical_and(goodr, ~badiph)\n return u\n\n\ndef _check_converge(prevparams, result, vmp, imp, i):\n \"\"\"\n Function _check_converge computes convergence metrics for all IV curves.\n\n Helper function for fit_pvsyst_sandia, fit_desoto_sandia\n\n Parameters\n ----------\n prevparams: Convergence Parameters from the previous Iteration (used to\n determine Percent Change in values between iterations)\n result: performacne paramters of the (predicted) single diode fitting,\n which includes Voc, Vmp, Imp, Pmp and Isc\n vmp: measured values for each IV curve\n imp: measured values for each IV curve\n i: Index of current iteration in cec_parameter_estimation\n\n Returns\n -------\n convergeparam: dict containing the following for Imp, Vmp and Pmp:\n - maximum percent difference between measured and modeled values\n - minimum percent difference between measured and modeled values\n - maximum absolute percent difference between measured and modeled\n values\n - mean percent difference between measured and modeled values\n - standard deviation of percent difference between measured and modeled\n values\n - absolute difference for previous and current values of maximum\n absolute percent difference (measured vs. modeled)\n - absolute difference for previous and current values of mean percent\n difference (measured vs. modeled)\n - absolute difference for previous and current values of standard\n deviation of percent difference (measured vs. 
modeled)\n \"\"\"\n\n convergeparam = {}\n\n imperror = (result['i_mp'] - imp) / imp * 100.\n vmperror = (result['v_mp'] - vmp) / vmp * 100.\n pmperror = (result['p_mp'] - (imp * vmp)) / (imp * vmp) * 100.\n\n convergeparam['imperrmax'] = max(imperror) # max of the error in Imp\n convergeparam['imperrmin'] = min(imperror) # min of the error in Imp\n # max of the absolute error in Imp\n convergeparam['imperrabsmax'] = max(abs(imperror))\n # mean of the error in Imp\n convergeparam['imperrmean'] = np.mean(imperror, axis=0)\n # std of the error in Imp\n convergeparam['imperrstd'] = np.std(imperror, axis=0, ddof=1)\n\n convergeparam['vmperrmax'] = max(vmperror) # max of the error in Vmp\n convergeparam['vmperrmin'] = min(vmperror) # min of the error in Vmp\n # max of the absolute error in Vmp\n convergeparam['vmperrabsmax'] = max(abs(vmperror))\n # mean of the error in Vmp\n convergeparam['vmperrmean'] = np.mean(vmperror, axis=0)\n # std of the error in Vmp\n convergeparam['vmperrstd'] = np.std(vmperror, axis=0, ddof=1)\n\n convergeparam['pmperrmax'] = max(pmperror) # max of the error in Pmp\n convergeparam['pmperrmin'] = min(pmperror) # min of the error in Pmp\n # max of the abs err. in Pmp\n convergeparam['pmperrabsmax'] = max(abs(pmperror))\n # mean error in Pmp\n convergeparam['pmperrmean'] = np.mean(pmperror, axis=0)\n # std error Pmp\n convergeparam['pmperrstd'] = np.std(pmperror, axis=0, ddof=1)\n\n if prevparams['state'] != 0.0:\n convergeparam['imperrstdchange'] = np.abs(\n convergeparam['imperrstd'] / prevparams['imperrstd'] - 1.)\n convergeparam['vmperrstdchange'] = np.abs(\n convergeparam['vmperrstd'] / prevparams['vmperrstd'] - 1.)\n convergeparam['pmperrstdchange'] = np.abs(\n convergeparam['pmperrstd'] / prevparams['pmperrstd'] - 1.)\n convergeparam['imperrmeanchange'] = np.abs(\n convergeparam['imperrmean'] / prevparams['imperrmean'] - 1.)\n convergeparam['vmperrmeanchange'] = np.abs(\n convergeparam['vmperrmean'] / prevparams['vmperrmean'] - 1.)\n convergeparam['pmperrmeanchange'] = np.abs(\n convergeparam['pmperrmean'] / prevparams['pmperrmean'] - 1.)\n convergeparam['imperrabsmaxchange'] = np.abs(\n convergeparam['imperrabsmax'] / prevparams['imperrabsmax'] - 1.)\n convergeparam['vmperrabsmaxchange'] = np.abs(\n convergeparam['vmperrabsmax'] / prevparams['vmperrabsmax'] - 1.)\n convergeparam['pmperrabsmaxchange'] = np.abs(\n convergeparam['pmperrabsmax'] / prevparams['pmperrabsmax'] - 1.)\n convergeparam['state'] = 1.0\n else:\n convergeparam['imperrstdchange'] = float(\"Inf\")\n convergeparam['vmperrstdchange'] = float(\"Inf\")\n convergeparam['pmperrstdchange'] = float(\"Inf\")\n convergeparam['imperrmeanchange'] = float(\"Inf\")\n convergeparam['vmperrmeanchange'] = float(\"Inf\")\n convergeparam['pmperrmeanchange'] = float(\"Inf\")\n convergeparam['imperrabsmaxchange'] = float(\"Inf\")\n convergeparam['vmperrabsmaxchange'] = float(\"Inf\")\n convergeparam['pmperrabsmaxchange'] = float(\"Inf\")\n convergeparam['state'] = 1.\n return convergeparam\n\n\ndef _update_rsh_fixed_pt(vmp, imp, iph, io, rs, rsh, nnsvth):\n \"\"\"\n Adjust Rsh to match Vmp using other parameter values\n\n Helper function for fit_pvsyst_sandia, fit_desoto_sandia\n\n Description\n -----------\n Rsh is updated iteratively using a fixed point expression\n obtained from combining Vmp = Vmp(Imp) (using the analytic solution to the\n single diode equation) and dP / dI = 0 at Imp. 
500 iterations are performed\n because convergence can be very slow.\n\n Parameters\n ----------\n vmp: a numpy array of length N of values for Vmp (V)\n imp: a numpy array of length N of values for Imp (A)\n iph: a numpy array of length N of values for light current IL (A)\n io: a numpy array of length N of values for Io (A)\n rs: a numpy array of length N of values for series resistance (ohm)\n rsh: a numpy array of length N of initial values for shunt resistance (ohm)\n nnsvth: a numpy array length N of values for the diode factor x thermal\n voltage for the module, equal to Ns (number of cells in series) x\n Vth (thermal voltage per cell).\n\n Returns\n -------\n numpy array of length N of updated values for Rsh\n\n References\n ----------\n .. [1] PVLib for MATLAB https://github.com/sandialabs/MATLAB_PV_LIB\n .. [2] C. Hansen, Parameter Estimation for Single Diode Models of\n Photovoltaic Modules, Sandia National Laboratories Report SAND2015-2065\n \"\"\"\n niter = 500\n x1 = rsh\n\n for i in range(niter):\n _, z = _calc_theta_phi_exact(vmp, imp, iph, io, rs, x1, nnsvth)\n with np.errstate(divide=\"ignore\"):\n next_x1 = (1 + z) / z * ((iph + io) * x1 / imp - nnsvth * z / imp\n - 2 * vmp / imp)\n x1 = next_x1\n\n return x1\n\n\ndef _calc_theta_phi_exact(vmp, imp, iph, io, rs, rsh, nnsvth):\n \"\"\"\n _calc_theta_phi_exact computes Lambert W values appearing in the analytic\n solutions to the single diode equation for the max power point.\n\n Helper function for fit_pvsyst_sandia\n\n Parameters\n ----------\n vmp: a numpy array of length N of values for Vmp (V)\n imp: a numpy array of length N of values for Imp (A)\n iph: a numpy array of length N of values for the light current IL (A)\n io: a numpy array of length N of values for Io (A)\n rs: a numpy array of length N of values for the series resistance (ohm)\n rsh: a numpy array of length N of values for the shunt resistance (ohm)\n nnsvth: a numpy array of length N of values for the diode factor x\n thermal voltage for the module, equal to Ns\n (number of cells in series) x Vth\n (thermal voltage per cell).\n\n Returns\n -------\n theta: a numpy array of values for the Lamber W function for solving\n I = I(V)\n phi: a numpy array of values for the Lambert W function for solving\n V = V(I)\n\n Notes\n -----\n _calc_theta_phi_exact calculates values for the Lambert W function which\n are used in the analytic solutions for the single diode equation at the\n maximum power point. For V=V(I),\n phi = W(Io*Rsh/n*Vth * exp((IL + Io - Imp)*Rsh/n*Vth)). For I=I(V),\n theta = W(Rs*Io/n*Vth *\n Rsh/ (Rsh+Rs) * exp(Rsh/ (Rsh+Rs)*((Rs(IL+Io) + V)/n*Vth))\n\n References\n ----------\n .. [1] PVL MATLAB 2065 https://github.com/sandialabs/MATLAB_PV_LIB\n .. [2] C. Hansen, Parameter Estimation for Single Diode Models of\n Photovoltaic Modules, Sandia National Laboratories Report SAND2015-2065\n .. [3] A. Jain, A. Kapoor, \"Exact analytical solutions of the parameters of\n real solar cells using Lambert W-function\", Solar Energy Materials and\n Solar Cells, 81 (2004) 269-277.\n \"\"\"\n # handle singleton inputs\n vmp = np.asarray(vmp)\n imp = np.asarray(imp)\n iph = np.asarray(iph)\n io = np.asarray(io)\n rs = np.asarray(rs)\n rsh = np.asarray(rsh)\n nnsvth = np.asarray(nnsvth)\n\n # Argument for Lambert W function involved in V = V(I) [2] Eq. 12; [3]\n # Eq. 
3\n with np.errstate(over=\"ignore\", divide=\"ignore\", invalid=\"ignore\"):\n argw = np.where(\n nnsvth == 0,\n np.nan,\n rsh * io / nnsvth * np.exp(rsh * (iph + io - imp) / nnsvth))\n phi = np.where(argw > 0, lambertw(argw).real, np.nan)\n\n # NaN where argw overflows. Switch to log space to evaluate\n u = np.isinf(argw)\n if np.any(u):\n logargw = (\n np.log(rsh[u]) + np.log(io[u]) - np.log(nnsvth[u])\n + rsh[u] * (iph[u] + io[u] - imp[u]) / nnsvth[u])\n # Three iterations of Newton-Raphson method to solve w+log(w)=logargW.\n # The initial guess is w=logargW. Where direct evaluation (above)\n # results in NaN from overflow, 3 iterations of Newton's method gives\n # approximately 8 digits of precision.\n x = logargw\n for i in range(3):\n x *= ((1. - np.log(x) + logargw) / (1. + x))\n phi[u] = x\n phi = np.transpose(phi)\n\n # Argument for Lambert W function involved in I = I(V) [2] Eq. 11; [3]\n # E1. 2\n with np.errstate(over=\"ignore\", divide=\"ignore\", invalid=\"ignore\"):\n argw = np.where(\n nnsvth == 0,\n np.nan,\n rsh / (rsh + rs) * rs * io / nnsvth * np.exp(\n rsh / (rsh + rs) * (rs * (iph + io) + vmp) / nnsvth))\n theta = np.where(argw > 0, lambertw(argw).real, np.nan)\n\n # NaN where argw overflows. Switch to log space to evaluate\n u = np.isinf(argw)\n if np.any(u):\n with np.errstate(divide=\"ignore\"):\n logargw = (\n np.log(rsh[u]) - np.log(rsh[u] + rs[u]) + np.log(rs[u])\n + np.log(io[u]) - np.log(nnsvth[u])\n + (rsh[u] / (rsh[u] + rs[u]))\n * (rs[u] * (iph[u] + io[u]) + vmp[u]) / nnsvth[u])\n # Three iterations of Newton-Raphson method to solve w+log(w)=logargW.\n # The initial guess is w=logargW. Where direct evaluation (above)\n # results in NaN from overflow, 3 iterations of Newton's method gives\n # approximately 8 digits of precision.\n x = logargw\n for i in range(3):\n x *= ((1. - np.log(x) + logargw) / (1. + x))\n theta[u] = x\n theta = np.transpose(theta)\n\n return theta, phi\n"}
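The `fit_desoto` routine included in the module source above solves the De Soto single-diode equations from ordinary datasheet quantities via `scipy.optimize.root`. A minimal usage sketch follows; the module values are illustrative placeholders (close to a typical 60-cell crystalline-silicon datasheet), not values taken from this dataset record.

```python
# Illustrative only: placeholder datasheet values for a 60-cell module,
# not taken from this dataset record.
from pvlib.ivtools.sdm import fit_desoto

params, solver_result = fit_desoto(
    v_mp=31.0, i_mp=8.71,        # maximum power point [V], [A]
    v_oc=38.3, i_sc=9.43,        # open-circuit voltage [V], short-circuit current [A]
    alpha_sc=0.005658,           # Isc temperature coefficient [A/K], not %/K
    beta_voc=-0.13788,           # Voc temperature coefficient [V/K]
    cells_in_series=60,
)

# Reference-condition parameters, usable later with pvsystem.calcparams_desoto
print(params["I_L_ref"], params["I_o_ref"], params["R_s"],
      params["R_sh_ref"], params["a_ref"])
```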
|
diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index 373c879a49..633a9d8cc0 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -268,6 +268,7 @@ Functions relevant for single diode models.
pvsystem.singlediode
pvsystem.v_from_i
pvsystem.max_power_point
+ ivtools.sdm.pvsyst_temperature_coeff
Low-level functions for solving the single diode equation.
@@ -334,6 +335,7 @@ Pvsyst model
temperature.pvsyst_cell
pvsystem.calcparams_pvsyst
pvsystem.singlediode
+ ivtools.sdm.pvsyst_temperature_coeff
pvsystem.dc_ohms_from_percent
pvsystem.dc_ohmic_losses
diff --git a/docs/sphinx/source/whatsnew/v0.9.0.rst b/docs/sphinx/source/whatsnew/v0.9.0.rst
index 2c8e178eba..43dc7b0fe7 100644
--- a/docs/sphinx/source/whatsnew/v0.9.0.rst
+++ b/docs/sphinx/source/whatsnew/v0.9.0.rst
@@ -102,7 +102,10 @@ Enhancements
from DC power. Use parameter ``model`` to specify which inverter model to use.
(:pull:`1147`, :issue:`998`, :pull:`1150`)
* Added :py:func:`~pvlib.temperature.noct_sam`, a cell temperature model
- implemented in SAM (:pull:`1177`, :pull:`1195`)
+ implemented in SAM. (:pull:`1177`, :pull:`1195`)
+* Added :py:func:`~pvlib.ivtools.sdm.pvsyst_temperature_coeff` to calculate
+ the temperature coefficient of power for the pvsyst module model.
+ (:pull:`1190`)
Bug fixes
~~~~~~~~~
|
{"pvlib/ivtools/sdm.py": [{"type": "function", "name": "pvsyst_temperature_coeff", "lines": [1259, 1344], "signature": "def pvsyst_temperature_coeff(alpha_sc, gamma_ref, mu_gamma, I_L_ref, I_o_ref, R_sh_ref, R_sh_0, R_s, cells_in_series, R_sh_exp=5.5, EgRef=1.121, irrad_ref=1000, temp_ref=25):", "doc": "Calculates the temperature coefficient of power for a pvsyst single\ndiode model.\n\nThe temperature coefficient is determined as the numerical derivative\n:math:`\\frac{dP}{dT}` at the maximum power point at reference conditions\n[1]_.\n\nParameters\n----------\nalpha_sc : float\n The short-circuit current temperature coefficient of the module. [A/C]\n\ngamma_ref : float\n The diode ideality factor. [unitless]\n\nmu_gamma : float\n The temperature coefficient for the diode ideality factor. [1/K]\n\nI_L_ref : float\n The light-generated current (or photocurrent) at reference conditions.\n [A]\n\nI_o_ref : float\n The dark or diode reverse saturation current at reference conditions.\n [A]\n\nR_sh_ref : float\n The shunt resistance at reference conditions. [ohm]\n\nR_sh_0 : float\n The shunt resistance at zero irradiance conditions. [ohm]\n\nR_s : float\n The series resistance at reference conditions. [ohm]\n\ncells_in_series : int\n The number of cells connected in series.\n\nR_sh_exp : float, default 5.5\n The exponent in the equation for shunt resistance. [unitless]\n\nEgRef : float, default 1.121\n The energy bandgap of the module's cells at reference temperature.\n Default of 1.121 eV is for crystalline silicon. Must be positive. [eV]\n\nirrad_ref : float, default 1000\n Reference irradiance. [W/m^2].\n\ntemp_ref : float, default 25\n Reference cell temperature. [C]\n\n\nReturns\n-------\ngamma_pdc : float\n Temperature coefficient of power at maximum power point at reference\n conditions. [1/C]\n\nReferences\n----------\n.. [1] K. Sauer, T. Roessler, C. W. Hansen, Modeling the Irradiance and\n Temperature Dependence of Photovoltaic Modules in PVsyst, IEEE Journal\n of Photovoltaics v5(1), January 2015."}, {"type": "function", "name": "pvsyst_temperature_coeff.maxp", "lines": [1329, 1337], "signature": "def maxp(temp_cell, irrad_ref, alpha_sc, gamma_ref, mu_gamma, I_L_ref, I_o_ref, R_sh_ref, R_sh_0, R_s, cells_in_series, R_sh_exp, EgRef, temp_ref):", "doc": ""}]}
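The `pvsyst_temperature_coeff` component described above defines the coefficient as the numerical derivative dP/dT at the maximum power point at reference conditions. The snippet below is only a sketch of that idea using public pvlib functions (`pvsystem.calcparams_pvsyst` and `pvsystem.max_power_point`); it is not the code added by this pull request, and the helper name and finite-difference step size are assumptions made for illustration.

```python
# Sketch only: approximate gamma_pdc = (1/P_mp) * dP_mp/dT for the PVsyst model
# by a central finite difference around the reference cell temperature.
from pvlib import pvsystem

def pvsyst_gamma_pdc_sketch(alpha_sc, gamma_ref, mu_gamma, I_L_ref, I_o_ref,
                            R_sh_ref, R_sh_0, R_s, cells_in_series,
                            R_sh_exp=5.5, EgRef=1.121,
                            irrad_ref=1000., temp_ref=25., delta_t=0.5):
    def p_mp(temp_cell):
        # Five single-diode parameters at reference irradiance and temp_cell
        IL, I0, Rs, Rsh, nNsVth = pvsystem.calcparams_pvsyst(
            irrad_ref, temp_cell, alpha_sc, gamma_ref, mu_gamma,
            I_L_ref, I_o_ref, R_sh_ref, R_sh_0, R_s, cells_in_series,
            R_sh_exp=R_sh_exp, EgRef=EgRef,
            irrad_ref=irrad_ref, temp_ref=temp_ref)
        return pvsystem.max_power_point(IL, I0, Rs, Rsh, nNsVth)['p_mp']

    dp_dt = (p_mp(temp_ref + delta_t) - p_mp(temp_ref - delta_t)) / (2 * delta_t)
    return dp_dt / p_mp(temp_ref)   # normalized coefficient [1/C]
```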
|
0.8
|
["pvlib/tests/ivtools/test_sdm.py::test_pvsyst_temperature_coeff"]
|
["pvlib/tests/ivtools/test_sdm.py::test_fit_cec_sam", "pvlib/tests/ivtools/test_sdm.py::test_fit_cec_sam_estimation_failure", "pvlib/tests/ivtools/test_sdm.py::test_fit_desoto", "pvlib/tests/ivtools/test_sdm.py::test_fit_desoto_failure", "pvlib/tests/ivtools/test_sdm.py::test_fit_desoto_sandia", "pvlib/tests/ivtools/test_sdm.py::test_fit_pvsyst_sandia", "pvlib/tests/ivtools/test_sdm.py::test__update_rsh_fixed_pt_nans[2.0-2.0-2.0-2.0-2.0-2.0-2.0-nan]", "pvlib/tests/ivtools/test_sdm.py::test__update_rsh_fixed_pt_nans[2.0-2.0-0.0-2.0-2.0-2.0-2.0-nan]", "pvlib/tests/ivtools/test_sdm.py::test__update_rsh_fixed_pt_nans[2.0-2.0-2.0-0.0-2.0-2.0-2.0-nan]", "pvlib/tests/ivtools/test_sdm.py::test__update_rsh_fixed_pt_nans[2.0-2.0-2.0-2.0-0.0-2.0-2.0-nan]", "pvlib/tests/ivtools/test_sdm.py::test__update_rsh_fixed_pt_nans[2.0-2.0-2.0-2.0-2.0-0.0-2.0-nan]", "pvlib/tests/ivtools/test_sdm.py::test__update_rsh_fixed_pt_nans[2.0-2.0-2.0-2.0-2.0-2.0-0.0-nan]", "pvlib/tests/ivtools/test_sdm.py::test__update_rsh_fixed_pt_vmp0", "pvlib/tests/ivtools/test_sdm.py::test__update_rsh_fixed_pt_vector", "pvlib/tests/ivtools/test_sdm.py::test__update_io[2.0-2.0-2.0-2.0-2.0-2.0-0.5911]", "pvlib/tests/ivtools/test_sdm.py::test__update_io[2.0-2.0-2.0-0.0-2.0-2.0-0.5911]", "pvlib/tests/ivtools/test_sdm.py::test__update_io[2.0-2.0-0.0-2.0-2.0-2.0-0.0]", "pvlib/tests/ivtools/test_sdm.py::test__update_io[2.0-0.0-2.0-2.0-2.0-2.0-0.00010161]", "pvlib/tests/ivtools/test_sdm.py::test__update_io[0.0-2.0-2.0-2.0-2.0-2.0-17.9436]", "pvlib/tests/ivtools/test_sdm.py::test__update_io_nan[2.0-2.0-2.0-2.0-2.0-0.0]", "pvlib/tests/ivtools/test_sdm.py::test__update_io_nan[-1.0--1.0--1.0--1.0--1.0--1.0]", "pvlib/tests/ivtools/test_sdm.py::test__calc_theta_phi_exact[2.0-2.0-2.0-2.0-2.0-2.0-2.0-expected0]", "pvlib/tests/ivtools/test_sdm.py::test__calc_theta_phi_exact[2.0-0.0-2.0-2.0-2.0-2.0-2.0-expected1]", "pvlib/tests/ivtools/test_sdm.py::test__calc_theta_phi_exact[2.0-2.0-0.0-2.0-2.0-2.0-2.0-expected2]", "pvlib/tests/ivtools/test_sdm.py::test__calc_theta_phi_exact[0.0-2.0-2.0-2.0-2.0-2.0-2.0-expected3]", "pvlib/tests/ivtools/test_sdm.py::test__calc_theta_phi_exact_both_nan[2.0-2.0-2.0-0.0-2.0-2.0-2.0]", "pvlib/tests/ivtools/test_sdm.py::test__calc_theta_phi_exact_both_nan[2.0-2.0-2.0-2.0-2.0-2.0-0.0]", "pvlib/tests/ivtools/test_sdm.py::test__calc_theta_phi_exact_both_nan[2.0-0.0-2.0-2.0-2.0-0.0-2.0]", "pvlib/tests/ivtools/test_sdm.py::test__calc_theta_phi_exact_one_nan", "pvlib/tests/ivtools/test_sdm.py::test__calc_theta_phi_exact_vector"]
|
311781d2380997044da0e484dc90aa146a74ca44
|
{"first_commit_time": 1615330167.0, "pr_title": "Function to calculate temperature coefficient of power for pvsyst", "pr_body": " - ~~[ ] Closes #xxxx~~\r\n - [x] I am familiar with the [contributing guidelines](https://pvlib-python.readthedocs.io/en/latest/contributing.html)\r\n - [x] Tests added\r\n - [x] Updates entries to [`docs/sphinx/source/api.rst`](https://github.com/pvlib/pvlib-python/blob/master/docs/sphinx/source/api.rst) for API changes.\r\n - [x] Adds description and name entries in the appropriate \"what's new\" file in [`docs/sphinx/source/whatsnew`](https://github.com/pvlib/pvlib-python/tree/master/docs/sphinx/source/whatsnew) for all changes. Includes link to the GitHub Issue with `` :issue:`num` `` or this Pull Request with `` :pull:`num` ``. Includes contributor name and/or GitHub username (link with `` :ghuser:`user` ``).\r\n - [x] New code is fully documented. Includes [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) compliant docstrings, examples, and comments where necessary.\r\n - [x] Pull request is nearly complete and ready for detailed review.\r\n - [x] Maintainer: Appropriate GitHub Labels and Milestone are assigned to the Pull Request and linked Issue.\r\n\r\nThe temperature coefficient of power is not an input to the pvsyst single diode model. Rather, it is a result of the model, the derivative of power with respect to cell temperature, at the maximum power point, at reference conditions.", "pr_timeline": [{"time": 1615778455.0, "comment": "Ready for review"}, {"time": 1616090383.0, "comment": "thanks @cwhanse and @adriesse "}], "issues": {}}
|
pvlib/pvlib-python
| 1562
|
https://github.com/pvlib/pvlib-python/pull/1562
|
pvlib__pvlib-python-1562
|
["1564"]
|
eefc35ccecfa92b4eebd19f96f5044f79e6b0bcf
|
diff --git a/docs/sphinx/source/reference/pv_modeling.rst b/docs/sphinx/source/reference/pv_modeling.rst
index 31c380c1bb..0f33cf8c70 100644
--- a/docs/sphinx/source/reference/pv_modeling.rst
+++ b/docs/sphinx/source/reference/pv_modeling.rst
@@ -28,6 +28,8 @@ Incident angle modifiers
iam.interp
iam.marion_diffuse
iam.marion_integrate
+ iam.schlick
+ iam.schlick_diffuse
PV temperature models
---------------------
diff --git a/docs/sphinx/source/whatsnew.rst b/docs/sphinx/source/whatsnew.rst
index 4830371985..464e59f121 100644
--- a/docs/sphinx/source/whatsnew.rst
+++ b/docs/sphinx/source/whatsnew.rst
@@ -6,6 +6,7 @@ What's New
These are new features and improvements of note in each release.
+.. include:: whatsnew/v0.9.4.rst
.. include:: whatsnew/v0.9.3.rst
.. include:: whatsnew/v0.9.2.rst
.. include:: whatsnew/v0.9.1.rst
diff --git a/docs/sphinx/source/whatsnew/v0.9.4.rst b/docs/sphinx/source/whatsnew/v0.9.4.rst
index 8a67c201f0..93d056e5a7 100644
--- a/docs/sphinx/source/whatsnew/v0.9.4.rst
+++ b/docs/sphinx/source/whatsnew/v0.9.4.rst
@@ -1,7 +1,7 @@
.. _whatsnew_0940:
-v0.9.4 (TBD)
-------------------------
+v0.9.4 (anticipated December 2022)
+----------------------------------
Deprecations
~~~~~~~~~~~~
@@ -10,6 +10,9 @@ Deprecations
Enhancements
~~~~~~~~~~~~
* Multiple code style issues fixed that were reported by LGTM analysis. (:issue:`1275`, :pull:`1559`)
+* Added a direct IAM model :py:func:`pvlib.iam.schlick` which can be used with
+ :py:func:`~pvlib.iam.marion_diffuse`, and a diffuse IAM model
+ :py:func:`pvlib.iam.schlick_diffuse` (:pull:`1562`, :issue:`1564`)
* Added a function to calculate one of GHI, DHI, and DNI from values of the other two.
:py:func:`~pvlib.irradiance.complete_irradiance`
(:issue:`1565`, :pull:`1567`)
@@ -46,5 +49,9 @@ Contributors
* Christian Orner (:ghuser:`chrisorner`)
* Saurabh Aneja (:ghuser:`spaneja`)
* Marcus Boumans (:ghuser:`bowie2211`)
+* Yu Xie (:ghuser:`xieyupku`)
+* Anton Driesse (:ghuser:`adriesse`)
+* Cliff Hansen (:ghuser:`cwhanse`)
+* Kevin Anderson (:ghuser:`kanderso-nrel`)
* Karel De Brabandere (:ghuser:`kdebrab`)
* Naman Priyadarshi (:ghuser:`Naman-Priyadarshi`)
diff --git a/pvlib/iam.py b/pvlib/iam.py
index a8592f4036..6e60b6f97d 100644
--- a/pvlib/iam.py
+++ b/pvlib/iam.py
@@ -541,7 +541,7 @@ def marion_diffuse(model, surface_tilt, **kwargs):
----------
model : str
The IAM function to evaluate across solid angle. Must be one of
- `'ashrae', 'physical', 'martin_ruiz', 'sapm'`.
+ `'ashrae', 'physical', 'martin_ruiz', 'sapm', 'schlick'`.
surface_tilt : numeric
Surface tilt angles in decimal degrees.
@@ -592,6 +592,7 @@ def marion_diffuse(model, surface_tilt, **kwargs):
'ashrae': ashrae,
'sapm': sapm,
'martin_ruiz': martin_ruiz,
+ 'schlick': schlick,
}
try:
@@ -748,3 +749,123 @@ def marion_integrate(function, surface_tilt, region, num=None):
Fd = pd.Series(Fd, surface_tilt.index)
return Fd
+
+
+def schlick(aoi):
+ """
+ Determine incidence angle modifier (IAM) for direct irradiance using the
+ Schlick approximation to the Fresnel equations.
+
+ The Schlick approximation was proposed in [1]_ as a computationally
+ efficient alternative to computing the Fresnel factor in computer
+ graphics contexts. This implementation is a normalized form of the
+ equation in [1]_ so that it can be used as a PV IAM model.
+ Unlike other IAM models, this model has no ability to describe
+ different reflection profiles.
+
+ In PV contexts, the Schlick approximation has been used as an analytically
+ integrable alternative to the Fresnel equations for estimating IAM
+ for diffuse irradiance [2]_.
+
+ Parameters
+ ----------
+ aoi : numeric
+ The angle of incidence (AOI) between the module normal vector and the
+ sun-beam vector. Angles of nan will result in nan. [degrees]
+
+ Returns
+ -------
+ iam : numeric
+ The incident angle modifier.
+
+ References
+ ----------
+ .. [1] Schlick, C. An inexpensive BRDF model for physically-based
+ rendering. Computer graphics forum 13 (1994).
+
+ .. [2] Xie, Y., M. Sengupta, A. Habte, A. Andreas, "The 'Fresnel Equations'
+ for Diffuse radiation on Inclined photovoltaic Surfaces (FEDIS)",
+ Renewable and Sustainable Energy Reviews, vol. 161, 112362. June 2022.
+ :doi:`10.1016/j.rser.2022.112362`
+
+ See Also
+ --------
+ pvlib.iam.schlick_diffuse
+ """
+ iam = 1 - (1 - cosd(aoi)) ** 5
+ iam = np.where(np.abs(aoi) >= 90.0, 0.0, iam)
+
+ # preserve input type
+ if np.isscalar(aoi):
+ iam = iam.item()
+ elif isinstance(aoi, pd.Series):
+ iam = pd.Series(iam, aoi.index)
+
+ return iam
+
+
+def schlick_diffuse(surface_tilt):
+ """
+ Determine the incidence angle modifiers (IAM) for diffuse sky and
+ ground-reflected irradiance on a tilted surface using the Schlick
+ incident angle model.
+
+ The diffuse iam values are calculated using an analytical integration
+ of the Schlick equation [1]_ over the portion of an isotropic sky and
+ isotropic foreground that is visible from the tilted surface [2]_.
+
+ Parameters
+ ----------
+ surface_tilt : numeric
+ Surface tilt angle measured from horizontal (e.g. surface facing
+ up = 0, surface facing horizon = 90). [degrees]
+
+ Returns
+ -------
+ iam_sky : numeric
+ The incident angle modifier for sky diffuse.
+
+ iam_ground : numeric
+ The incident angle modifier for ground-reflected diffuse.
+
+ References
+ ----------
+ .. [1] Schlick, C. An inexpensive BRDF model for physically-based
+ rendering. Computer graphics forum 13 (1994).
+
+ .. [2] Xie, Y., M. Sengupta, A. Habte, A. Andreas, "The 'Fresnel Equations'
+ for Diffuse radiation on Inclined photovoltaic Surfaces (FEDIS)",
+ Renewable and Sustainable Energy Reviews, vol. 161, 112362. June 2022.
+ :doi:`10.1016/j.rser.2022.112362`
+
+ See Also
+ --------
+ pvlib.iam.schlick
+ """
+ # these calculations are as in [2]_, but with the refractive index
+ # weighting coefficient w set to 1.0 (so it is omitted)
+
+ # relative transmittance of sky diffuse radiation by PV cover:
+ cosB = cosd(surface_tilt)
+ sinB = sind(surface_tilt)
+ cuk = (2 / (np.pi * (1 + cosB))) * (
+ (30/7)*np.pi - (160/21)*np.radians(surface_tilt) - (10/3)*np.pi*cosB
+ + (160/21)*cosB*sinB - (5/3)*np.pi*cosB*sinB**2 + (20/7)*cosB*sinB**3
+ - (5/16)*np.pi*cosB*sinB**4 + (16/105)*cosB*sinB**5
+ ) # Eq 4 in [2]
+
+ # relative transmittance of ground-reflected radiation by PV cover:
+ with np.errstate(divide='ignore', invalid='ignore'): # Eq 6 in [2]
+ cug = 40 / (21 * (1 - cosB)) - (1 + cosB) / (1 - cosB) * cuk
+
+ cug = np.where(surface_tilt < 1e-6, 0, cug)
+
+ # respect input types:
+ if np.isscalar(surface_tilt):
+ cuk = cuk.item()
+ cug = cug.item()
+ elif isinstance(surface_tilt, pd.Series):
+ cuk = pd.Series(cuk, surface_tilt.index)
+ cug = pd.Series(cug, surface_tilt.index)
+
+ return cuk, cug
|
diff --git a/pvlib/tests/test_iam.py b/pvlib/tests/test_iam.py
index 4310ee837a..df4d9ee877 100644
--- a/pvlib/tests/test_iam.py
+++ b/pvlib/tests/test_iam.py
@@ -322,3 +322,48 @@ def test_marion_integrate_invalid():
with pytest.raises(ValueError):
_iam.marion_integrate(_iam.ashrae, 0, 'bad', 180)
+
+
+def test_schlick():
+ idx = pd.date_range('2019-01-01', freq='h', periods=9)
+ aoi = pd.Series([-180, -135, -90, -45, 0, 45, 90, 135, 180], idx)
+ expected = pd.Series([0, 0, 0, 0.99784451, 1.0, 0.99784451, 0, 0, 0], idx)
+
+ # scalars
+ for aoi_scalar, expected_scalar in zip(aoi, expected):
+ actual = _iam.schlick(aoi_scalar)
+ assert_allclose(expected_scalar, actual)
+
+ # numpy arrays
+ actual = _iam.schlick(aoi.values)
+ assert_allclose(expected.values, actual)
+
+ # pandas Series
+ actual = _iam.schlick(aoi)
+ assert_series_equal(expected, actual)
+
+
+def test_schlick_diffuse():
+ surface_tilt = np.array([0, 20, 70, 90])
+ # expected values calculated with marion_integrate and schlick
+ expected_sky = np.array([0.95238092, 0.96249934, 0.96228167, 0.95238094])
+ expected_ground = np.array([0, 0.62693858, 0.93218737, 0.95238094])
+
+ # numpy arrays
+ actual_sky, actual_ground = _iam.schlick_diffuse(surface_tilt)
+ assert_allclose(expected_sky, actual_sky)
+ assert_allclose(expected_ground, actual_ground, rtol=1e-6)
+
+ # scalars
+ for i in range(len(surface_tilt)):
+ actual_sky, actual_ground = _iam.schlick_diffuse(surface_tilt[i])
+ assert_allclose(expected_sky[i], actual_sky)
+ assert_allclose(expected_ground[i], actual_ground, rtol=1e-6)
+
+ # pandas Series
+ idx = pd.date_range('2019-01-01', freq='h', periods=len(surface_tilt))
+ actual_sky, actual_ground = _iam.schlick_diffuse(pd.Series(surface_tilt,
+ idx))
+ assert_series_equal(pd.Series(expected_sky, idx), actual_sky)
+ assert_series_equal(pd.Series(expected_ground, idx), actual_ground,
+ rtol=1e-6)
| 2022-09-30T00:13:00
|
{"README.md": "<p align=\"center\">\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img src=\"assets/FEA-Bench-full.png\" style=\"height: 10em\" alt=\"fea-bench\" />\n </a>\n</p>\n\n<p align=\"center\">\n <em>A benchmark that aims to evaluate the capability of implementing new features in the code repositories.</em>\n</p>\n\n<p align=\"center\">\n <a href=\"https://arxiv.org/abs/2503.06680\">\n <img alt=\"paper\" src=\"https://img.shields.io/badge/ArXiv-%23B31B1B?style=for-the-badge&logo=arXiv\">\n </a>\n <a href=\"./LICENSE\">\n <img alt=\"License\" src=\"https://img.shields.io/github/license/SWE-bench/SWE-bench?style=for-the-badge\">\n </a>\n <a href=\"https://gmago-leway.github.io/fea-bench.github.io/\">\n <img alt=\"Leaderboard\" src=\"https://img.shields.io/badge/leaderboard-%F0%9F%8F%86-1?style=for-the-badge\">\n </a>\n <a href=\"https://huggingface.co/datasets/microsoft/FEA-Bench\">\n <img alt=\"dataset\" src=\"https://img.shields.io/badge/Dataset-HF-FFD21E.svg?style=for-the-badge&logo=huggingface&logoColor=FFD21E\">\n </a>\n</p>\n\n---\n\n# Evaluation\n\nThis repository is the official implementation of the paper \"FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation.\" It can be used for baseline evaluation using the prompts mentioned in the paper.\n\nThe repository includes several functionalities, primarily for obtaining the full dataset, running model inference aligned with the paper, and evaluating the results. The complete pipeline is as follows:\n\n## 1. Environment Setup\n\nYou can create a new Python environment and install all dependencies using:\n```bash\npip install -e .\n```\nIf you plan to use VLLM inference, ensure that the installed libraries match your hardware.\n\n## 2. Building the Full Evaluation Dataset\n\nDue to licensing and company policies, we cannot release the full dataset. Our published version ([https://huggingface.co/datasets/microsoft/FEA-Bench](https://huggingface.co/datasets/microsoft/FEA-Bench)) only includes essential attributes, and the remaining content needs to be scraped from GitHub.\n\nTo construct the full FEA-Bench dataset and save it in the `feabench-data` folder, run the following command. Note that you need to replace `GITHUB_TOKEN` with your own GitHub token, which should have read-only access to public repositories:\n```bash\nexport GITHUB_TOKEN=\"xxx\"\n\npython -m feabench.get_dataset \\\n --dataset microsoft/FEA-Bench \\\n --testbed feabench-data/testbed \\\n --lite_ids instances_lite.json \\\n --medium_file feabench-data/FEA-Bench-v1.0-medium.jsonl \\\n --standard_dataset_path feabench-data/FEA-Bench-v1.0-Standard \\\n --oracle_dataset_path feabench-data/FEA-Bench-v1.0-Oracle \\\n --lite_standard_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Standard \\\n --lite_oracle_dataset_path feabench-data/FEA-Bench-v1.0-Lite-Oracle\n```\n\n## 3. Running Model Inference\n\nOur repository only provides inference methods consistent with those in the paper. 
Agentless and other agent-based inferences can use the `FEA-Bench-v1.0-Lite-Standard` dataset constructed in the previous step, which is aligned with the format of SWE-Bench.\n\n### Example of VLLM Inference:\n```bash\nexport MAX_SEQ_LEN=128000\nexport MAX_GEN_LEN=4096\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=Qwen/Qwen2.5-Coder-3B-Instruct\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type vllm \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE\n```\n\n### Example of OpenAI API-style Inference:\n(DEEPSEEK_TOKENIZER is only required when using DeepSeek model inference)\n```bash\nexport DEEPSEEK_TOKENIZER_PATH=\"xxx\"\nexport OPENAI_API_KEY=\"xxx\"\nexport OPENAI_BASE_URL=\"https://api.deepseek.com\"\n\nDATASET_PATH=feabench-data/FEA-Bench-v1.0-Oracle\nMODEL_NAME=deepseek-chat\nRESULTS_ROOT_DIR=scripts/experiments/results_full\n\nPROMPT_MODE=natural-detailed\npython -m feabench.run_prediction \\\n --dataset_name_or_path $DATASET_PATH \\\n --model_type openai \\\n --model_name_or_path $MODEL_NAME \\\n --input_text $PROMPT_MODE \\\n --output_dir $RESULTS_ROOT_DIR/$PROMPT_MODE \\\n --num_proc 1\n```\n\nAfter running the inference, you should see the output `.jsonl` result files in the specified `output_dir`.\n\n## 4. Running Model Evaluation\n\nOur evaluation process is based on the code provided by SWE-Bench. We have provided a patch file `swe-bench.diff` to include the environment configurations for the task instances we are involved in.\n\nClone the SWE-Bench repository and apply the patch:\n```bash\nmkdir -p evaluator\ncd evaluator\ngit clone https://github.com/SWE-bench/SWE-bench.git\ncd SWE-bench\ngit checkout a0536ee6f9fd5ff88acf17a36a384bf3da3d93d6\ngit apply ../../swe-bench.diff\nconda create --name fea-eval python=3.11\nconda activate fea-eval\npip install -e .\n```\n\nTo verify that the FEA-Bench task instances can run correctly on your machine, you can build a gold result based on the dataset:\n```bash\npython -m feabench.get_gold_results \\\n --dataset_name_or_path feabench-data/FEA-Bench-v1.0-Standard \\\n --save_dir feabench-data/experiments/gold \\\n --file_name Gold__FEABench_v1.0__test.jsonl\n```\n\nThe command to run the evaluation script is as follows (using the gold result constructed above as an example):\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name ../../feabench-data/FEA-Bench-v1.0-Standard \\\n --predictions_path ../../feabench-data/experiments/gold/Gold__FEABench_v1.0__test.jsonl \\\n --max_workers 10 \\\n --cache_level instance \\\n --timeout 900 \\\n --run_id FEABench_v1_Gold\n```\nThe usage is identical to SWE-Bench. You can set the cache level `cache_level` based on your disk size. You should then obtain a result file similar to the following `.json` format:\n```json\n{\n \"total_instances\": 1401,\n \"submitted_instances\": 1401,\n \"completed_instances\": 1401,\n \"resolved_instances\": 1401,\n \"unresolved_instances\": 0,\n \"empty_patch_instances\": 0,\n \"error_instances\": 0,\n ...\n}\n```\n\nCongratulations! You have completed the usage of FEA-Bench. 
If you have any questions, please raise them in the issues.\n\n---\n\nFor more details, please refer to the [FEA-Bench Paper](https://arxiv.org/abs/2503.06680).\nIf you find our work helpful, we would be grateful if you could cite our work.\n```\n@misc{li2025feabenchbenchmarkevaluatingrepositorylevel,\n title={FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation}, \n author={Wei Li and Xin Zhang and Zhongxin Guo and Shaoguang Mao and Wen Luo and Guangyue Peng and Yangyu Huang and Houfeng Wang and Scarlett Li},\n year={2025},\n eprint={2503.06680},\n archivePrefix={arXiv},\n primaryClass={cs.SE},\n url={https://arxiv.org/abs/2503.06680}, \n}\n```\n\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [[email protected]](mailto:[email protected]) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n"}
|
{"docs/sphinx/source/reference/pv_modeling.rst": ".. currentmodule:: pvlib\n\nPV Modeling\n===========\n\nClasses\n-------\n\nThe :py:class:`~pvsystem.PVSystem` class provides many methods that\nwrap the functions listed below. See its documentation for details.\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.PVSystem\n\nIncident angle modifiers\n------------------------\n\n.. autosummary::\n :toctree: generated/\n\n iam.physical\n iam.ashrae\n iam.martin_ruiz\n iam.martin_ruiz_diffuse\n iam.sapm\n iam.interp\n iam.marion_diffuse\n iam.marion_integrate\n\nPV temperature models\n---------------------\n\n.. autosummary::\n :toctree: generated/\n\n temperature.sapm_cell\n temperature.sapm_module\n temperature.sapm_cell_from_module\n temperature.pvsyst_cell\n temperature.faiman\n temperature.fuentes\n temperature.ross\n temperature.noct_sam\n temperature.prilliman\n pvsystem.PVSystem.get_cell_temperature\n temperature.generic_linear\n temperature.GenericLinearModel\n\nTemperature Model Parameters\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n.. currentmodule:: pvlib.temperature\n.. autodata:: TEMPERATURE_MODEL_PARAMETERS\n :annotation:\n\n.. currentmodule:: pvlib\n\nSingle diode models\n-------------------\n\nFunctions relevant for single diode models.\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.calcparams_cec\n pvsystem.calcparams_desoto\n pvsystem.calcparams_pvsyst\n pvsystem.i_from_v\n pvsystem.singlediode\n pvsystem.v_from_i\n pvsystem.max_power_point\n ivtools.sdm.pvsyst_temperature_coeff\n\nLow-level functions for solving the single diode equation.\n\n.. autosummary::\n :toctree: generated/\n\n singlediode.estimate_voc\n singlediode.bishop88\n singlediode.bishop88_i_from_v\n singlediode.bishop88_v_from_i\n singlediode.bishop88_mpp\n\nFunctions for fitting diode models\n\n.. autosummary::\n :toctree: generated/\n\n ivtools.sde.fit_sandia_simple\n ivtools.sdm.fit_cec_sam\n ivtools.sdm.fit_desoto\n\nInverter models (DC to AC conversion)\n-------------------------------------\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.PVSystem.get_ac\n inverter.sandia\n inverter.sandia_multi\n inverter.adr\n inverter.pvwatts\n inverter.pvwatts_multi\n\nFunctions for fitting inverter models\n\n.. autosummary::\n :toctree: generated/\n\n inverter.fit_sandia\n\n\nPV System Models\n----------------\n\nSandia array performance model (SAPM)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.sapm\n pvsystem.sapm_effective_irradiance\n pvsystem.sapm_spectral_loss\n inverter.sandia\n temperature.sapm_cell\n\nPvsyst model\n^^^^^^^^^^^^\n\n.. autosummary::\n :toctree: generated/\n\n temperature.pvsyst_cell\n pvsystem.calcparams_pvsyst\n pvsystem.singlediode\n ivtools.sdm.pvsyst_temperature_coeff\n pvsystem.dc_ohms_from_percent\n pvsystem.dc_ohmic_losses\n\nPVWatts model\n^^^^^^^^^^^^^\n\n.. autosummary::\n :toctree: generated/\n\n pvsystem.pvwatts_dc\n inverter.pvwatts\n pvsystem.pvwatts_losses\n\nEstimating PV model parameters\n------------------------------\n\nFunctions for fitting single diode models\n\n.. autosummary::\n :toctree: generated/\n\n ivtools.sdm.fit_cec_sam\n ivtools.sdm.fit_desoto\n ivtools.sdm.fit_pvsyst_sandia\n ivtools.sdm.fit_desoto_sandia\n\nFunctions for fitting the single diode equation\n\n.. autosummary::\n :toctree: generated/\n\n ivtools.sde.fit_sandia_simple\n\nUtilities for working with IV curve data\n\n.. autosummary::\n :toctree: generated/\n\n ivtools.utils.rectify_iv_curve\n\nOther\n-----\n\n.. 
autosummary::\n :toctree: generated/\n\n pvsystem.retrieve_sam\n pvsystem.scale_voltage_current_power\n", "docs/sphinx/source/whatsnew.rst": ".. _whatsnew:\n\n**********\nWhat's New\n**********\n\nThese are new features and improvements of note in each release.\n\n.. include:: whatsnew/v0.9.3.rst\n.. include:: whatsnew/v0.9.2.rst\n.. include:: whatsnew/v0.9.1.rst\n.. include:: whatsnew/v0.9.0.rst\n.. include:: whatsnew/v0.8.1.rst\n.. include:: whatsnew/v0.8.0.rst\n.. include:: whatsnew/v0.7.2.rst\n.. include:: whatsnew/v0.7.1.rst\n.. include:: whatsnew/v0.7.0.rst\n.. include:: whatsnew/v0.6.3.rst\n.. include:: whatsnew/v0.6.2.rst\n.. include:: whatsnew/v0.6.1.rst\n.. include:: whatsnew/v0.6.0.rst\n.. include:: whatsnew/v0.5.2.rst\n.. include:: whatsnew/v0.5.1.rst\n.. include:: whatsnew/v0.5.0.rst\n.. include:: whatsnew/v0.4.5.txt\n.. include:: whatsnew/v0.4.4.txt\n.. include:: whatsnew/v0.4.3.txt\n.. include:: whatsnew/v0.4.2.txt\n.. include:: whatsnew/v0.4.1.txt\n.. include:: whatsnew/v0.4.0.txt\n.. include:: whatsnew/v0.3.3.txt\n.. include:: whatsnew/v0.3.2.txt\n.. include:: whatsnew/v0.3.1.txt\n.. include:: whatsnew/v0.3.0.txt\n.. include:: whatsnew/v0.2.2.txt\n.. include:: whatsnew/v0.2.1.txt\n.. include:: whatsnew/v0.2.0.txt\n.. include:: whatsnew/v0.1.0.txt\n", "docs/sphinx/source/whatsnew/v0.9.4.rst": ".. _whatsnew_0940:\n\nv0.9.4 (TBD)\n------------------------\n\nDeprecations\n~~~~~~~~~~~~\n\n\nEnhancements\n~~~~~~~~~~~~\n* Multiple code style issues fixed that were reported by LGTM analysis. (:issue:`1275`, :pull:`1559`)\n* Added a function to calculate one of GHI, DHI, and DNI from values of the other two.\n :py:func:`~pvlib.irradiance.complete_irradiance`\n (:issue:`1565`, :pull:`1567`)\n* Add optional ``return_components`` parameter to :py:func:`pvlib.irradiance.haydavies` to return\n individual diffuse irradiance components (:issue:`1553`, :pull:`1568`)\n\n\nBug fixes\n~~~~~~~~~\n\n* Fixed bug in :py:func:`pvlib.shading.masking_angle` and :py:func:`pvlib.bifacial.infinite_sheds._ground_angle`\n where zero ``gcr`` input caused a ZeroDivisionError (:issue:`1576`, :pull:`1589`)\n\nTesting\n~~~~~~~\n* Corrected a flawed test for :py:func:`~pvlib.irradiance.get_ground_diffuse` (:issue:`1569`, :pull:`1575`)\n\nDocumentation\n~~~~~~~~~~~~~\n\n\nBenchmarking\n~~~~~~~~~~~~~\n* Removed ``time_tracker_singleaxis`` function from tracking.py (:issue:`1508`, :pull:`1535`)\n\n\nRequirements\n~~~~~~~~~~~~\n\n\nContributors\n~~~~~~~~~~~~\n* Kirsten Perry (:ghuser:`kperrynrel`)\n* Christian Orner (:ghuser:`chrisorner`)\n* Saurabh Aneja (:ghuser:`spaneja`)\n* Marcus Boumans (:ghuser:`bowie2211`)\n* Karel De Brabandere (:ghuser:`kdebrab`)\n* Naman Priyadarshi (:ghuser:`Naman-Priyadarshi`)\n", "pvlib/iam.py": "r\"\"\"\nThe ``iam`` module contains functions that implement models for the incidence\nangle modifier (IAM). The IAM quantifies the fraction of direct irradiance on\na module's front surface that is transmitted through the module materials to\nthe cells. 
Stated differently, the quantity 1 - IAM is the fraction of direct\nirradiance that is reflected away or absorbed by the module's front materials.\nIAM is typically a function of the angle of incidence (AOI) of the direct\nirradiance to the module's surface.\n\"\"\"\n\nimport numpy as np\nimport pandas as pd\nimport functools\nfrom pvlib.tools import cosd, sind, tand, asind\n\n# a dict of required parameter names for each IAM model\n# keys are the function names for the IAM models\n_IAM_MODEL_PARAMS = {\n 'ashrae': {'b'},\n 'physical': {'n', 'K', 'L'},\n 'martin_ruiz': {'a_r'},\n 'sapm': {'B0', 'B1', 'B2', 'B3', 'B4', 'B5'},\n 'interp': set()\n}\n\n\ndef ashrae(aoi, b=0.05):\n r\"\"\"\n Determine the incidence angle modifier using the ASHRAE transmission\n model.\n\n The ASHRAE (American Society of Heating, Refrigeration, and Air\n Conditioning Engineers) transmission model is developed in\n [1]_, and in [2]_. The model has been used in software such as PVSyst [3]_.\n\n Parameters\n ----------\n aoi : numeric\n The angle of incidence (AOI) between the module normal vector and the\n sun-beam vector in degrees. Angles of nan will result in nan.\n\n b : float, default 0.05\n A parameter to adjust the incidence angle modifier as a function of\n angle of incidence. Typical values are on the order of 0.05 [3].\n\n Returns\n -------\n iam : numeric\n The incident angle modifier (IAM). Returns zero for all abs(aoi) >= 90\n and for all ``iam`` values that would be less than 0.\n\n Notes\n -----\n The incidence angle modifier is calculated as\n\n .. math::\n\n IAM = 1 - b (\\sec(aoi) - 1)\n\n As AOI approaches 90 degrees, the model yields negative values for IAM;\n negative IAM values are set to zero in this implementation.\n\n References\n ----------\n .. [1] Souka A.F., Safwat H.H., \"Determination of the optimum\n orientations for the double exposure flat-plate collector and its\n reflections\". Solar Energy vol .10, pp 170-174. 1966.\n\n .. [2] ASHRAE standard 93-77\n\n .. [3] PVsyst Contextual Help.\n https://files.pvsyst.com/help/index.html?iam_loss.htm retrieved on\n October 14, 2019\n\n See Also\n --------\n pvlib.iam.physical\n pvlib.iam.martin_ruiz\n pvlib.iam.interp\n \"\"\"\n\n iam = 1 - b * (1 / np.cos(np.radians(aoi)) - 1)\n aoi_gte_90 = np.full_like(aoi, False, dtype='bool')\n np.greater_equal(np.abs(aoi), 90, where=~np.isnan(aoi), out=aoi_gte_90)\n iam = np.where(aoi_gte_90, 0, iam)\n iam = np.maximum(0, iam)\n\n if isinstance(aoi, pd.Series):\n iam = pd.Series(iam, index=aoi.index)\n\n return iam\n\n\ndef physical(aoi, n=1.526, K=4., L=0.002):\n r\"\"\"\n Determine the incidence angle modifier using refractive index ``n``,\n extinction coefficient ``K``, and glazing thickness ``L``.\n\n ``iam.physical`` calculates the incidence angle modifier as described in\n [1]_, Section 3. The calculation is based on a physical model of absorbtion\n and transmission through a transparent cover.\n\n Parameters\n ----------\n aoi : numeric\n The angle of incidence between the module normal vector and the\n sun-beam vector in degrees. Angles of 0 are replaced with 1e-06\n to ensure non-nan results. Angles of nan will result in nan.\n\n n : numeric, default 1.526\n The effective index of refraction (unitless). 
Reference [1]_\n indicates that a value of 1.526 is acceptable for glass.\n\n K : numeric, default 4.0\n The glazing extinction coefficient in units of 1/meters.\n Reference [1] indicates that a value of 4 is reasonable for\n \"water white\" glass.\n\n L : numeric, default 0.002\n The glazing thickness in units of meters. Reference [1]_\n indicates that 0.002 meters (2 mm) is reasonable for most\n glass-covered PV panels.\n\n Returns\n -------\n iam : numeric\n The incident angle modifier\n\n Notes\n -----\n The pvlib python authors believe that Eqn. 14 in [1]_ is\n incorrect, which presents :math:`\\theta_{r} = \\arcsin(n \\sin(AOI))`.\n Here, :math:`\\theta_{r} = \\arcsin(1/n \\times \\sin(AOI))`\n\n References\n ----------\n .. [1] W. De Soto et al., \"Improvement and validation of a model for\n photovoltaic array performance\", Solar Energy, vol 80, pp. 78-88,\n 2006.\n\n .. [2] Duffie, John A. & Beckman, William A.. (2006). Solar Engineering\n of Thermal Processes, third edition. [Books24x7 version] Available\n from http://common.books24x7.com/toc.aspx?bookid=17160.\n\n See Also\n --------\n pvlib.iam.martin_ruiz\n pvlib.iam.ashrae\n pvlib.iam.interp\n pvlib.iam.sapm\n \"\"\"\n zeroang = 1e-06\n\n # hold a new reference to the input aoi object since we're going to\n # overwrite the aoi reference below, but we'll need it for the\n # series check at the end of the function\n aoi_input = aoi\n\n aoi = np.where(aoi == 0, zeroang, aoi)\n\n # angle of reflection\n thetar_deg = asind(1.0 / n * (sind(aoi)))\n\n # reflectance and transmittance for normal incidence light\n rho_zero = ((1-n) / (1+n)) ** 2\n tau_zero = np.exp(-K*L)\n\n # reflectance for parallel and perpendicular polarized light\n rho_para = (tand(thetar_deg - aoi) / tand(thetar_deg + aoi)) ** 2\n rho_perp = (sind(thetar_deg - aoi) / sind(thetar_deg + aoi)) ** 2\n\n # transmittance for non-normal light\n tau = np.exp(-K * L / cosd(thetar_deg))\n\n # iam is ratio of non-normal to normal incidence transmitted light\n # after deducting the reflected portion of each\n iam = ((1 - (rho_para + rho_perp) / 2) / (1 - rho_zero) * tau / tau_zero)\n\n with np.errstate(invalid='ignore'):\n # angles near zero produce nan, but iam is defined as one\n small_angle = 1e-06\n iam = np.where(np.abs(aoi) < small_angle, 1.0, iam)\n\n # angles at 90 degrees can produce tiny negative values,\n # which should be zero. this is a result of calculation precision\n # rather than the physical model\n iam = np.where(iam < 0, 0, iam)\n\n # for light coming from behind the plane, none can enter the module\n iam = np.where(aoi > 90, 0, iam)\n\n if isinstance(aoi_input, pd.Series):\n iam = pd.Series(iam, index=aoi_input.index)\n\n return iam\n\n\ndef martin_ruiz(aoi, a_r=0.16):\n r'''\n Determine the incidence angle modifier (IAM) using the Martin\n and Ruiz incident angle model.\n\n Parameters\n ----------\n aoi : numeric, degrees\n The angle of incidence between the module normal vector and the\n sun-beam vector in degrees.\n\n a_r : numeric\n The angular losses coefficient described in equation 3 of [1]_.\n This is an empirical dimensionless parameter. Values of ``a_r`` are\n generally on the order of 0.08 to 0.25 for flat-plate PV modules.\n\n Returns\n -------\n iam : numeric\n The incident angle modifier(s)\n\n Notes\n -----\n `martin_ruiz` calculates the incidence angle modifier (IAM) as described in\n [1]_. The information required is the incident angle (AOI) and the angular\n losses coefficient (a_r). 
Note that [1]_ has a corrigendum [2]_ which\n clarifies a mix-up of 'alpha's and 'a's in the former.\n\n The incident angle modifier is defined as\n\n .. math::\n\n IAM = \\frac{1 - \\exp(-\\cos(\\frac{aoi}{a_r}))}\n {1 - \\exp(\\frac{-1}{a_r})}\n\n which is presented as :math:`AL(\\alpha) = 1 - IAM` in equation 4 of [1]_,\n with :math:`\\alpha` representing the angle of incidence AOI. Thus IAM = 1\n at AOI = 0, and IAM = 0 at AOI = 90. This equation is only valid for\n -90 <= aoi <= 90, therefore `iam` is constrained to 0.0 outside this\n interval.\n\n References\n ----------\n .. [1] N. Martin and J. M. Ruiz, \"Calculation of the PV modules angular\n losses under field conditions by means of an analytical model\", Solar\n Energy Materials & Solar Cells, vol. 70, pp. 25-38, 2001.\n\n .. [2] N. Martin and J. M. Ruiz, \"Corrigendum to 'Calculation of the PV\n modules angular losses under field conditions by means of an\n analytical model'\", Solar Energy Materials & Solar Cells, vol. 110,\n pp. 154, 2013.\n\n See Also\n --------\n pvlib.iam.martin_ruiz_diffuse\n pvlib.iam.physical\n pvlib.iam.ashrae\n pvlib.iam.interp\n pvlib.iam.sapm\n '''\n # Contributed by Anton Driesse (@adriesse), PV Performance Labs. July, 2019\n\n aoi_input = aoi\n\n aoi = np.asanyarray(aoi)\n a_r = np.asanyarray(a_r)\n\n if np.any(np.less_equal(a_r, 0)):\n raise ValueError(\"The parameter 'a_r' cannot be zero or negative.\")\n\n with np.errstate(invalid='ignore'):\n iam = (1 - np.exp(-cosd(aoi) / a_r)) / (1 - np.exp(-1 / a_r))\n iam = np.where(np.abs(aoi) >= 90.0, 0.0, iam)\n\n if isinstance(aoi_input, pd.Series):\n iam = pd.Series(iam, index=aoi_input.index)\n\n return iam\n\n\ndef martin_ruiz_diffuse(surface_tilt, a_r=0.16, c1=0.4244, c2=None):\n '''\n Determine the incidence angle modifiers (iam) for diffuse sky and\n ground-reflected irradiance using the Martin and Ruiz incident angle model.\n\n Parameters\n ----------\n surface_tilt: float or array-like, default 0\n Surface tilt angles in decimal degrees.\n The tilt angle is defined as degrees from horizontal\n (e.g. surface facing up = 0, surface facing horizon = 90)\n surface_tilt must be in the range [0, 180]\n\n a_r : numeric\n The angular losses coefficient described in equation 3 of [1]_.\n This is an empirical dimensionless parameter. Values of a_r are\n generally on the order of 0.08 to 0.25 for flat-plate PV modules.\n a_r must be greater than zero.\n\n c1 : float\n First fitting parameter for the expressions that approximate the\n integral of diffuse irradiance coming from different directions.\n c1 is given as the constant 4 / 3 / pi (0.4244) in [1]_.\n\n c2 : float\n Second fitting parameter for the expressions that approximate the\n integral of diffuse irradiance coming from different directions.\n If c2 is None, it will be calculated according to the linear\n relationship given in [3]_.\n\n Returns\n -------\n iam_sky : numeric\n The incident angle modifier for sky diffuse\n\n iam_ground : numeric\n The incident angle modifier for ground-reflected diffuse\n\n Notes\n -----\n Sky and ground modifiers are complementary: iam_sky for tilt = 30 is\n equal to iam_ground for tilt = 180 - 30. For vertical surfaces,\n tilt = 90, the two factors are equal.\n\n References\n ----------\n .. [1] N. Martin and J. M. Ruiz, \"Calculation of the PV modules angular\n losses under field conditions by means of an analytical model\", Solar\n Energy Materials & Solar Cells, vol. 70, pp. 25-38, 2001.\n\n .. [2] N. Martin and J. M. 
Ruiz, \"Corrigendum to 'Calculation of the PV\n modules angular losses under field conditions by means of an\n analytical model'\", Solar Energy Materials & Solar Cells, vol. 110,\n pp. 154, 2013.\n\n .. [3] \"IEC 61853-3 Photovoltaic (PV) module performance testing and energy\n rating - Part 3: Energy rating of PV modules\". IEC, Geneva, 2018.\n\n See Also\n --------\n pvlib.iam.martin_ruiz\n pvlib.iam.physical\n pvlib.iam.ashrae\n pvlib.iam.interp\n pvlib.iam.sapm\n '''\n # Contributed by Anton Driesse (@adriesse), PV Performance Labs. Oct. 2019\n\n if isinstance(surface_tilt, pd.Series):\n out_index = surface_tilt.index\n else:\n out_index = None\n\n surface_tilt = np.asanyarray(surface_tilt)\n\n # avoid undefined results for horizontal or upside-down surfaces\n zeroang = 1e-06\n\n\n surface_tilt = np.where(surface_tilt == 0, zeroang, surface_tilt)\n surface_tilt = np.where(surface_tilt == 180, 180 - zeroang, surface_tilt)\n\n if c2 is None:\n # This equation is from [3] Sect. 7.2\n c2 = 0.5 * a_r - 0.154\n\n beta = np.radians(surface_tilt)\n sin = np.sin\n pi = np.pi\n cos = np.cos\n\n # avoid RuntimeWarnings for <, sin, and cos with nan\n with np.errstate(invalid='ignore'):\n # because sin(pi) isn't exactly zero\n sin_beta = np.where(surface_tilt < 90, sin(beta), sin(pi - beta))\n\n trig_term_sky = sin_beta + (pi - beta - sin_beta) / (1 + cos(beta))\n trig_term_gnd = sin_beta + (beta - sin_beta) / (1 - cos(beta)) # noqa: E222 E261 E501\n\n iam_sky = 1 - np.exp(-(c1 + c2 * trig_term_sky) * trig_term_sky / a_r)\n iam_gnd = 1 - np.exp(-(c1 + c2 * trig_term_gnd) * trig_term_gnd / a_r)\n\n if out_index is not None:\n iam_sky = pd.Series(iam_sky, index=out_index, name='iam_sky')\n iam_gnd = pd.Series(iam_gnd, index=out_index, name='iam_ground')\n\n return iam_sky, iam_gnd\n\n\ndef interp(aoi, theta_ref, iam_ref, method='linear', normalize=True):\n r'''\n Determine the incidence angle modifier (IAM) by interpolating a set of\n reference values, which are usually measured values.\n\n Parameters\n ----------\n aoi : numeric\n The angle of incidence between the module normal vector and the\n sun-beam vector [degrees].\n\n theta_ref : numeric\n Vector of angles at which the IAM is known [degrees].\n\n iam_ref : numeric\n IAM values for each angle in ``theta_ref`` [unitless].\n\n method : str, default 'linear'\n Specifies the interpolation method.\n Useful options are: 'linear', 'quadratic', 'cubic'.\n See scipy.interpolate.interp1d for more options.\n\n normalize : boolean, default True\n When true, the interpolated values are divided by the interpolated\n value at zero degrees. This ensures that ``iam=1.0`` at normal\n incidence.\n\n Returns\n -------\n iam : numeric\n The incident angle modifier(s) [unitless]\n\n Notes\n -----\n ``theta_ref`` must have two or more points and may span any range of\n angles. Typically there will be a dozen or more points in the range 0-90\n degrees. Beyond the range of ``theta_ref``, IAM values are extrapolated,\n but constrained to be non-negative.\n\n The sign of ``aoi`` is ignored; only the magnitude is used.\n\n See Also\n --------\n pvlib.iam.physical\n pvlib.iam.ashrae\n pvlib.iam.martin_ruiz\n pvlib.iam.sapm\n '''\n # Contributed by Anton Driesse (@adriesse), PV Performance Labs. 
July, 2019\n\n from scipy.interpolate import interp1d\n\n # Scipy doesn't give the clearest feedback, so check number of points here.\n MIN_REF_VALS = {'linear': 2, 'quadratic': 3, 'cubic': 4, 1: 2, 2: 3, 3: 4}\n\n if len(theta_ref) < MIN_REF_VALS.get(method, 2):\n raise ValueError(\"Too few reference points defined \"\n \"for interpolation method '%s'.\" % method)\n\n if np.any(np.less(iam_ref, 0)):\n raise ValueError(\"Negative value(s) found in 'iam_ref'. \"\n \"This is not physically possible.\")\n\n interpolator = interp1d(theta_ref, iam_ref, kind=method,\n fill_value='extrapolate')\n aoi_input = aoi\n\n aoi = np.asanyarray(aoi)\n aoi = np.abs(aoi)\n iam = interpolator(aoi)\n iam = np.clip(iam, 0, None)\n\n if normalize:\n iam /= interpolator(0)\n\n if isinstance(aoi_input, pd.Series):\n iam = pd.Series(iam, index=aoi_input.index)\n\n return iam\n\n\ndef sapm(aoi, module, upper=None):\n r\"\"\"\n Determine the incidence angle modifier (IAM) using the SAPM model.\n\n Parameters\n ----------\n aoi : numeric\n Angle of incidence in degrees. Negative input angles will return\n zeros.\n\n module : dict-like\n A dict or Series with the SAPM IAM model parameters.\n See the :py:func:`sapm` notes section for more details.\n\n upper : None or float, default None\n Upper limit on the results.\n\n Returns\n -------\n iam : numeric\n The SAPM angle of incidence loss coefficient, termed F2 in [1]_.\n\n Notes\n -----\n The SAPM [1]_ traditionally does not define an upper limit on the AOI\n loss function and values slightly exceeding 1 may exist for moderate\n angles of incidence (15-40 degrees). However, users may consider\n imposing an upper limit of 1.\n\n References\n ----------\n .. [1] King, D. et al, 2004, \"Sandia Photovoltaic Array Performance\n Model\", SAND Report 3535, Sandia National Laboratories, Albuquerque,\n NM.\n\n .. [2] B.H. King et al, \"Procedure to Determine Coefficients for the\n Sandia Array Performance Model (SAPM),\" SAND2016-5284, Sandia\n National Laboratories (2016).\n\n .. [3] B.H. King et al, \"Recent Advancements in Outdoor Measurement\n Techniques for Angle of Incidence Effects,\" 42nd IEEE PVSC (2015).\n DOI: 10.1109/PVSC.2015.7355849\n\n See Also\n --------\n pvlib.iam.physical\n pvlib.iam.ashrae\n pvlib.iam.martin_ruiz\n pvlib.iam.interp\n \"\"\"\n\n aoi_coeff = [module['B5'], module['B4'], module['B3'], module['B2'],\n module['B1'], module['B0']]\n\n iam = np.polyval(aoi_coeff, aoi)\n iam = np.clip(iam, 0, upper)\n # nan tolerant masking\n aoi_lt_0 = np.full_like(aoi, False, dtype='bool')\n np.less(aoi, 0, where=~np.isnan(aoi), out=aoi_lt_0)\n iam = np.where(aoi_lt_0, 0, iam)\n\n if isinstance(aoi, pd.Series):\n iam = pd.Series(iam, aoi.index)\n\n return iam\n\n\ndef marion_diffuse(model, surface_tilt, **kwargs):\n \"\"\"\n Determine diffuse irradiance incidence angle modifiers using Marion's\n method of integrating over solid angle.\n\n Parameters\n ----------\n model : str\n The IAM function to evaluate across solid angle. Must be one of\n `'ashrae', 'physical', 'martin_ruiz', 'sapm'`.\n\n surface_tilt : numeric\n Surface tilt angles in decimal degrees.\n The tilt angle is defined as degrees from horizontal\n (e.g. 
surface facing up = 0, surface facing horizon = 90).\n\n **kwargs\n Extra parameters passed to the IAM function.\n\n Returns\n -------\n iam : dict\n IAM values for each type of diffuse irradiance:\n\n * 'sky': radiation from the sky dome (zenith <= 90)\n * 'horizon': radiation from the region of the sky near the horizon\n (89.5 <= zenith <= 90)\n * 'ground': radiation reflected from the ground (zenith >= 90)\n\n See [1]_ for a detailed description of each class.\n\n See Also\n --------\n pvlib.iam.marion_integrate\n\n References\n ----------\n .. [1] B. Marion \"Numerical method for angle-of-incidence correction\n factors for diffuse radiation incident photovoltaic modules\",\n Solar Energy, Volume 147, Pages 344-348. 2017.\n DOI: 10.1016/j.solener.2017.03.027\n\n Examples\n --------\n >>> marion_diffuse('physical', surface_tilt=20)\n {'sky': 0.9539178294437575,\n 'horizon': 0.7652650139134007,\n 'ground': 0.6387140117795903}\n\n >>> marion_diffuse('ashrae', [20, 30], b=0.04)\n {'sky': array([0.96748999, 0.96938408]),\n 'horizon': array([0.86478428, 0.91825792]),\n 'ground': array([0.77004435, 0.8522436 ])}\n \"\"\"\n\n models = {\n 'physical': physical,\n 'ashrae': ashrae,\n 'sapm': sapm,\n 'martin_ruiz': martin_ruiz,\n }\n\n try:\n iam_model = models[model]\n except KeyError:\n raise ValueError('model must be one of: ' + str(list(models.keys())))\n\n iam_function = functools.partial(iam_model, **kwargs)\n iam = {}\n for region in ['sky', 'horizon', 'ground']:\n iam[region] = marion_integrate(iam_function, surface_tilt, region)\n\n return iam\n\n\ndef marion_integrate(function, surface_tilt, region, num=None):\n \"\"\"\n Integrate an incidence angle modifier (IAM) function over solid angle\n to determine a diffuse irradiance correction factor using Marion's method.\n\n This lower-level function actually performs the IAM integration for the\n specified solid angle region.\n\n Parameters\n ----------\n function : callable(aoi)\n The IAM function to evaluate across solid angle. The function must\n be vectorized and take only one parameter, the angle of incidence in\n degrees.\n\n surface_tilt : numeric\n Surface tilt angles in decimal degrees.\n The tilt angle is defined as degrees from horizontal\n (e.g. surface facing up = 0, surface facing horizon = 90).\n\n region : {'sky', 'horizon', 'ground'}\n The region to integrate over. Must be one of:\n\n * 'sky': radiation from the sky dome (zenith <= 90)\n * 'horizon': radiation from the region of the sky near the horizon\n (89.5 <= zenith <= 90)\n * 'ground': radiation reflected from the ground (zenith >= 90)\n\n See [1]_ for a detailed description of each class.\n\n num : int, optional\n The number of increments in the zenith integration.\n If not specified, N will follow the values used in [1]_:\n\n * 'sky' or 'ground': num = 180\n * 'horizon': num = 1800\n\n Returns\n -------\n iam : numeric\n AOI diffuse correction factor for the specified region.\n\n See Also\n --------\n pvlib.iam.marion_diffuse\n\n References\n ----------\n .. [1] B. Marion \"Numerical method for angle-of-incidence correction\n factors for diffuse radiation incident photovoltaic modules\",\n Solar Energy, Volume 147, Pages 344-348. 
2017.\n DOI: 10.1016/j.solener.2017.03.027\n\n Examples\n --------\n >>> marion_integrate(pvlib.iam.ashrae, 20, 'sky')\n 0.9596085829811408\n\n >>> from functools import partial\n >>> f = partial(pvlib.iam.physical, n=1.3)\n >>> marion_integrate(f, [20, 30], 'sky')\n array([0.96225034, 0.9653219 ])\n \"\"\"\n\n if num is None:\n if region in ['sky', 'ground']:\n num = 180\n elif region == 'horizon':\n num = 1800\n else:\n raise ValueError(f'Invalid region: {region}')\n\n beta = np.radians(surface_tilt)\n if isinstance(beta, pd.Series):\n # convert Series to np array for broadcasting later\n beta = beta.values\n ai = np.pi/num # angular increment\n\n phi_range = np.linspace(0, np.pi, num, endpoint=False)\n psi_range = np.linspace(0, 2*np.pi, 2*num, endpoint=False)\n\n # the pseudocode in [1] do these checks at the end, but it's\n # faster to do this criteria check up front instead of later.\n if region == 'sky':\n mask = phi_range + ai <= np.pi/2\n elif region == 'horizon':\n lo = 89.5 * np.pi/180\n hi = np.pi/2\n mask = (lo <= phi_range) & (phi_range + ai <= hi)\n elif region == 'ground':\n mask = (phi_range >= np.pi/2)\n else:\n raise ValueError(f'Invalid region: {region}')\n phi_range = phi_range[mask]\n\n # fast Cartesian product of phi and psi\n angles = np.array(np.meshgrid(phi_range, psi_range)).T.reshape(-1, 2)\n # index with single-element lists to maintain 2nd dimension so that\n # these angle arrays broadcast across the beta array\n phi_1 = angles[:, [0]]\n psi_1 = angles[:, [1]]\n phi_2 = phi_1 + ai\n # psi_2 = psi_1 + ai # not needed\n phi_avg = phi_1 + 0.5*ai\n psi_avg = psi_1 + 0.5*ai\n term_1 = np.cos(beta) * np.cos(phi_avg)\n # The AOI formula includes a term based on the difference between\n # panel azimuth and the photon azimuth, but because we assume each class\n # of diffuse irradiance is isotropic and we are integrating over all\n # angles, it doesn't matter what panel azimuth we choose (i.e., the\n # system is rotationally invariant). So we choose gamma to be zero so\n # that we can omit it from the cos(psi_avg) term.\n # Marion's paper mentions this in the Section 3 pseudocode:\n # \"set gamma to pi (or any value between 0 and 2pi)\"\n term_2 = np.sin(beta) * np.sin(phi_avg) * np.cos(psi_avg)\n cosaoi = term_1 + term_2\n aoi = np.arccos(cosaoi)\n # simplify Eq 8, (psi_2 - psi_1) is always ai\n dAs = ai * (np.cos(phi_1) - np.cos(phi_2))\n cosaoi_dAs = cosaoi * dAs\n # apply the final AOI check, zeroing out non-passing points\n mask = aoi < np.pi/2\n cosaoi_dAs = np.where(mask, cosaoi_dAs, 0)\n numerator = np.sum(function(np.degrees(aoi)) * cosaoi_dAs, axis=0)\n denominator = np.sum(cosaoi_dAs, axis=0)\n\n with np.errstate(invalid='ignore'):\n # in some cases, no points pass the criteria\n # (e.g. region='ground', surface_tilt=0), so we override the division\n # by zero to set Fd=0. Also, preserve nans in beta.\n Fd = np.where((denominator != 0) | ~np.isfinite(beta),\n numerator / denominator,\n 0)\n\n # preserve input type\n if np.isscalar(surface_tilt):\n Fd = Fd.item()\n elif isinstance(surface_tilt, pd.Series):\n Fd = pd.Series(Fd, surface_tilt.index)\n\n return Fd\n"}
|
diff --git a/docs/sphinx/source/reference/pv_modeling.rst b/docs/sphinx/source/reference/pv_modeling.rst
index 31c380c1bb..0f33cf8c70 100644
--- a/docs/sphinx/source/reference/pv_modeling.rst
+++ b/docs/sphinx/source/reference/pv_modeling.rst
@@ -28,6 +28,8 @@ Incident angle modifiers
iam.interp
iam.marion_diffuse
iam.marion_integrate
+ iam.schlick
+ iam.schlick_diffuse
PV temperature models
---------------------
diff --git a/docs/sphinx/source/whatsnew.rst b/docs/sphinx/source/whatsnew.rst
index 4830371985..464e59f121 100644
--- a/docs/sphinx/source/whatsnew.rst
+++ b/docs/sphinx/source/whatsnew.rst
@@ -6,6 +6,7 @@ What's New
These are new features and improvements of note in each release.
+.. include:: whatsnew/v0.9.4.rst
.. include:: whatsnew/v0.9.3.rst
.. include:: whatsnew/v0.9.2.rst
.. include:: whatsnew/v0.9.1.rst
diff --git a/docs/sphinx/source/whatsnew/v0.9.4.rst b/docs/sphinx/source/whatsnew/v0.9.4.rst
index 8a67c201f0..93d056e5a7 100644
--- a/docs/sphinx/source/whatsnew/v0.9.4.rst
+++ b/docs/sphinx/source/whatsnew/v0.9.4.rst
@@ -1,7 +1,7 @@
.. _whatsnew_0940:
-v0.9.4 (TBD)
-------------------------
+v0.9.4 (anticipated December 2022)
+----------------------------------
Deprecations
~~~~~~~~~~~~
@@ -10,6 +10,9 @@ Deprecations
Enhancements
~~~~~~~~~~~~
* Multiple code style issues fixed that were reported by LGTM analysis. (:issue:`1275`, :pull:`1559`)
+* Added a direct IAM model :py:func:`pvlib.iam.schlick` which can be used with
+ :py:func:`~pvlib.iam.marion_diffuse`, and a diffuse IAM model
+ :py:func:`pvlib.iam.schlick_diffuse` (:pull:`1562`, :issue:`1564`)
* Added a function to calculate one of GHI, DHI, and DNI from values of the other two.
:py:func:`~pvlib.irradiance.complete_irradiance`
(:issue:`1565`, :pull:`1567`)
@@ -46,5 +49,9 @@ Contributors
* Christian Orner (:ghuser:`chrisorner`)
* Saurabh Aneja (:ghuser:`spaneja`)
* Marcus Boumans (:ghuser:`bowie2211`)
+* Yu Xie (:ghuser:`xieyupku`)
+* Anton Driesse (:ghuser:`adriesse`)
+* Cliff Hansen (:ghuser:`cwhanse`)
+* Kevin Anderson (:ghuser:`kanderso-nrel`)
* Karel De Brabandere (:ghuser:`kdebrab`)
* Naman Priyadarshi (:ghuser:`Naman-Priyadarshi`)
|
{"pvlib/iam.py": [{"type": "function", "name": "schlick", "lines": [754, 804], "signature": "def schlick(aoi):", "doc": "Determine incidence angle modifier (IAM) for direct irradiance using the\nSchlick approximation to the Fresnel equations.\n\nThe Schlick approximation was proposed in [1]_ as a computationally\nefficient alternative to computing the Fresnel factor in computer\ngraphics contexts. This implementation is a normalized form of the\nequation in [1]_ so that it can be used as a PV IAM model.\nUnlike other IAM models, this model has no ability to describe\ndifferent reflection profiles.\n\nIn PV contexts, the Schlick approximation has been used as an analytically\nintegrable alternative to the Fresnel equations for estimating IAM\nfor diffuse irradiance [2]_.\n\nParameters\n----------\naoi : numeric\n The angle of incidence (AOI) between the module normal vector and the\n sun-beam vector. Angles of nan will result in nan. [degrees]\n\nReturns\n-------\niam : numeric\n The incident angle modifier.\n\nReferences\n----------\n.. [1] Schlick, C. An inexpensive BRDF model for physically-based\n rendering. Computer graphics forum 13 (1994).\n\n.. [2] Xie, Y., M. Sengupta, A. Habte, A. Andreas, \"The 'Fresnel Equations'\n for Diffuse radiation on Inclined photovoltaic Surfaces (FEDIS)\",\n Renewable and Sustainable Energy Reviews, vol. 161, 112362. June 2022.\n :doi:`10.1016/j.rser.2022.112362`\n\nSee Also\n--------\npvlib.iam.schlick_diffuse"}, {"type": "function", "name": "schlick_diffuse", "lines": [807, 871], "signature": "def schlick_diffuse(surface_tilt):", "doc": "Determine the incidence angle modifiers (IAM) for diffuse sky and\nground-reflected irradiance on a tilted surface using the Schlick\nincident angle model.\n\nThe diffuse iam values are calculated using an analytical integration\nof the Schlick equation [1]_ over the portion of an isotropic sky and\nisotropic foreground that is visible from the tilted surface [2]_.\n\nParameters\n----------\nsurface_tilt : numeric\n Surface tilt angle measured from horizontal (e.g. surface facing\n up = 0, surface facing horizon = 90). [degrees]\n\nReturns\n-------\niam_sky : numeric\n The incident angle modifier for sky diffuse.\n\niam_ground : numeric\n The incident angle modifier for ground-reflected diffuse.\n\nReferences\n----------\n.. [1] Schlick, C. An inexpensive BRDF model for physically-based\n rendering. Computer graphics forum 13 (1994).\n\n.. [2] Xie, Y., M. Sengupta, A. Habte, A. Andreas, \"The 'Fresnel Equations'\n for Diffuse radiation on Inclined photovoltaic Surfaces (FEDIS)\",\n Renewable and Sustainable Energy Reviews, vol. 161, 112362. June 2022.\n :doi:`10.1016/j.rser.2022.112362`\n\nSee Also\n--------\npvlib.iam.schlick"}]}
|
0.8
|
["pvlib/tests/test_iam.py::test_schlick", "pvlib/tests/test_iam.py::test_schlick_diffuse"]
|
["pvlib/tests/test_iam.py::test_ashrae", "pvlib/tests/test_iam.py::test_ashrae_scalar", "pvlib/tests/test_iam.py::test_physical", "pvlib/tests/test_iam.py::test_physical_scalar", "pvlib/tests/test_iam.py::test_martin_ruiz", "pvlib/tests/test_iam.py::test_martin_ruiz_exception", "pvlib/tests/test_iam.py::test_martin_ruiz_diffuse", "pvlib/tests/test_iam.py::test_iam_interp", "pvlib/tests/test_iam.py::test_sapm[45-0.9975036250000002]", "pvlib/tests/test_iam.py::test_sapm[aoi1-expected1]", "pvlib/tests/test_iam.py::test_sapm[aoi2-expected2]", "pvlib/tests/test_iam.py::test_sapm_limits", "pvlib/tests/test_iam.py::test_marion_diffuse_model", "pvlib/tests/test_iam.py::test_marion_diffuse_kwargs", "pvlib/tests/test_iam.py::test_marion_diffuse_invalid", "pvlib/tests/test_iam.py::test_marion_integrate_scalar[sky-180-0.9596085829811408]", "pvlib/tests/test_iam.py::test_marion_integrate_scalar[horizon-1800-0.8329070417832541]", "pvlib/tests/test_iam.py::test_marion_integrate_scalar[ground-180-0.719823559106309]", "pvlib/tests/test_iam.py::test_marion_integrate_list[sky-180-expected0]", "pvlib/tests/test_iam.py::test_marion_integrate_list[horizon-1800-expected1]", "pvlib/tests/test_iam.py::test_marion_integrate_list[ground-180-expected2]", "pvlib/tests/test_iam.py::test_marion_integrate_series[sky-180-expected0]", "pvlib/tests/test_iam.py::test_marion_integrate_series[horizon-1800-expected1]", "pvlib/tests/test_iam.py::test_marion_integrate_series[ground-180-expected2]", "pvlib/tests/test_iam.py::test_marion_integrate_ground_flat", "pvlib/tests/test_iam.py::test_marion_integrate_invalid"]
|
311781d2380997044da0e484dc90aa146a74ca44
|
{"first_commit_time": 1664485363.0, "pr_title": "Implement iam.schlick, iam.schlick_diffuse", "pr_body": " - [x] Closes #1564\r\n - [x] I am familiar with the [contributing guidelines](https://pvlib-python.readthedocs.io/en/latest/contributing.html)\r\n - [x] Tests added\r\n - [x] Updates entries in [`docs/sphinx/source/reference`](https://github.com/pvlib/pvlib-python/blob/master/docs/sphinx/source/reference) for API changes.\r\n - [x] Adds description and name entries in the appropriate \"what's new\" file in [`docs/sphinx/source/whatsnew`](https://github.com/pvlib/pvlib-python/tree/master/docs/sphinx/source/whatsnew) for all changes. Includes link to the GitHub Issue with `` :issue:`num` `` or this Pull Request with `` :pull:`num` ``. Includes contributor name and/or GitHub username (link with `` :ghuser:`user` ``).\r\n - [x] New code is fully documented. Includes [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) compliant docstrings, examples, and comments where necessary.\r\n - [x] Pull request is nearly complete and ready for detailed review.\r\n - [x] Maintainer: Appropriate GitHub Labels (including `remote-data`) and Milestone are assigned to the Pull Request and linked Issue.\r\n\r\nYu Xie from NREL presented a new IAM model at the recent PVPMC meeting ([slides](https://pvpmc.sandia.gov/download/8412/)) and wanted to contribute it to pvlib. This one is a little unusual in that it calculates the IAM factor as a relative quantity, taking the indices of refraction of both the PV surface and the accompanying pyranometer into account. \r\n\r\n", "pr_timeline": [{"time": 1664545975.0, "comment": "I suppose we could move the diffuse calculations to a separate function like was done with martin_ruiz. It's a little strange to return scalar sky & ground IAM but time series direct IAM as the current code does for a fixed tilt simulation. "}, {"time": 1665423280.0, "comment": "Offline discussion with @cwhanse and the model's author determined that the parameter this PR calls `n_ref` may have some niche uses but should be set to the same value as `n` for standard PV modeling workflows (unlike as assumed in the reference). That is now the default behavior. \r\n\r\nI also added `pvlib.iam.schlick` in this PR as suggested in #1564."}, {"time": 1665431501.0, "comment": "Should/does fedis direct be equal to schlick? If not, why not?"}, {"time": 1665432232.0, "comment": "> Should/does fedis direct be equal to schlick? If not, why not?\r\n\r\nClose-ish but not identical. FEDIS is using the true Fresnel equations for the direct component so it cannot match exactly. The Schlick approximation only comes in for the diffuse components: in the absence of an analytical integration of the real Fresnel equations, FEDIS's diffuse components are instead an integration of the Shlick approximation. It's the alternative to Marion's integration method -- whereas Marion's is an approximate integration of an exact integrand, FEDIS is an exact integration of an approximate integrand. "}, {"time": 1665433352.0, "comment": "> Close-ish but not identical. FEDIS is using the true Fresnel equations for the direct component \r\n\r\nI see, but we already have that in iam.physical, right?"}, {"time": 1665433996.0, "comment": "> I see, but we already have that in iam.physical, right?\r\n\r\nYes, with two minor differences: FEDIS does not consider extinction (i.e. 
it assumes `K=0` in `iam.physical`), and FEDIS has the option to normalize the IAM profile with respect to another device with a different index of refraction. The latter is what prevents `iam.physical` from being a superset of FEDIS's direct IAM calculation. A separate index of refraction for the normal incidence normalization coefficient is probably only useful in unusual modeling scenarios, but nevertheless it is part of the model so I include it here.\r\n\r\nFor context, I have the sense that the FEDIS paper to some extent views its diffuse IAM equations as the primary scientific contribution with the direct component being included for completeness. "}, {"time": 1665435811.0, "comment": "> FEDIS has the option to normalize the IAM profile with respect to another device with a different index of refraction.\r\n\r\nNormally IAM is a property or characteristic of one device, so we should try to avoid confusion. If a ratio is needed for some unusual modeling scenarios, this can be calculated with any pair of existing IAM models and parameters, I would think."}, {"time": 1665447256.0, "comment": "The behavior as currently implemented is to default to equal refractive index so that everything is for a single device. I'm open to removing the second refractive index altogether so long as we include sufficient justification in the docstring, but this would be a substantial deviation from the reference which, for better or worse, places a lot of focus on the multiple refractive index aspect."}, {"time": 1665601243.0, "comment": "> > Should/does fedis direct be equal to schlick? If not, why not?\r\n> \r\n> Close-ish but not identical. FEDIS is using the true Fresnel equations for the direct component so it cannot match exactly. The Schlick approximation only comes in for the diffuse components: in the absence of an analytical integration of the real Fresnel equations, FEDIS's diffuse components are instead an integration of the Shlick approximation. It's the alternative to Marion's integration method -- whereas Marion's is an approximate integration of an exact integrand, FEDIS is an exact integration of an approximate integrand.\r\n\r\nThis is a good way to put the distinction. Based on my most recent re-read, FEDIS is an exact integration of an approximate integrand ***plus*** a scaling/correction factor ***w*** to make up for the fact that the integrating the approximate integrand (Schlick) doesn't actually produce the desired result."}, {"time": 1665668757.0, "comment": "Maybe of interest... 
comparison with analogous calculations from `iam.marion_integrate` (compare with Fig 3 in the reference):\r\n\r\n\r\n\r\n<details>\r\n <summary>Source</summary>\r\n\r\n```python\r\nimport pvlib\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\n\r\nn = 1.6\r\naoi = np.linspace(0, 90)\r\nsurface_tilt = np.linspace(0, 90, num=20)\r\n\r\ndef physical_no_extinction(aoi):\r\n return pvlib.iam.physical(aoi, n=n, K=0)\r\n\r\n\r\nfig, axes = plt.subplots(3, 1)\r\n\r\n# %%\r\n\r\nphysical = physical_no_extinction(aoi)\r\nfedis = pvlib.iam.fedis(aoi, 0, n=n)['direct']\r\n\r\naxes[0].plot(aoi, fedis, label='fedis')\r\naxes[0].plot(aoi, physical, label='physical (no extinction)')\r\naxes[0].set_ylabel('IAM (direct)')\r\naxes[0].set_xlabel('aoi [degrees]')\r\naxes[0].legend()\r\n\r\n# %%\r\n\r\nfedis = pvlib.iam.fedis(0, surface_tilt, n=n)['sky']\r\n\r\npolycoeffs = [2.77526e-09, 3.74953, -5.18727, 3.41186, -1.08794, 0.136060]\r\nw = np.polynomial.polynomial.polyval(n, polycoeffs)\r\n\r\nmarion_schlick = pvlib.iam.marion_integrate(pvlib.iam.schlick,\r\n surface_tilt=surface_tilt, region='sky', num=1000)\r\nmarion_physical = pvlib.iam.marion_integrate(physical_no_extinction,\r\n surface_tilt=surface_tilt, region='sky', num=1000)\r\n\r\naxes[1].plot(surface_tilt, fedis, label='fedis')\r\naxes[1].plot(surface_tilt, marion_schlick * w, label='marion + schlick (with $w$)')\r\naxes[1].plot(surface_tilt, marion_physical, label='marion + physical')\r\naxes[1].set_ylabel('IAM (sky)')\r\naxes[1].set_xlabel('surface_tilt [degrees]')\r\naxes[1].legend()\r\n\r\n# %%\r\n\r\nfedis = pvlib.iam.fedis(0, surface_tilt, n=n)['ground']\r\nmarion_schlick = pvlib.iam.marion_integrate(pvlib.iam.schlick,\r\n surface_tilt=surface_tilt, region='ground', num=1000)\r\nmarion_physical = pvlib.iam.marion_integrate(physical_no_extinction,\r\n surface_tilt=surface_tilt, region='ground', num=1000)\r\n\r\naxes[2].plot(surface_tilt, fedis, label='fedis')\r\naxes[2].plot(surface_tilt, marion_schlick * w, label='marion + schlick (with $w$)')\r\naxes[2].plot(surface_tilt, marion_physical, label='marion + physical')\r\naxes[2].set_ylabel('IAM (ground)')\r\naxes[2].set_xlabel('surface_tilt [degrees]')\r\naxes[2].legend()\r\n```\r\n</details>"}, {"time": 1665670289.0, "comment": "@kanderso-nrel that bottom panel is an eyebrow-raiser. I had not seen that result before. Although ground-reflected irradiance onto the front surface is a very minor part of plane-of-array, that figure suggests it should be further reduced by roughly a factor of 2, for many installations."}, {"time": 1665762310.0, "comment": "This PR now implements four separate functions (schlick/fedis, direct/diffuse) as @adriesse suggested. 
For interest here is a plot similar to above but comparing two indices of refraction to show the effect of FEDIS's weight coefficient `w`:\r\n\r\n\r\n\r\n<details>\r\n <summary>Source</summary>\r\n\r\n```python\r\nimport matplotlib.pyplot as plt\r\nimport numpy as np\r\nfrom pvlib.iam import physical, fedis, fedis_diffuse, schlick, schlick_diffuse, marion_integrate\r\n\r\naoi = np.linspace(0, 90)\r\nsurface_tilt = np.linspace(0, 90, num=20)\r\n\r\ndef physical_no_extinction(aoi):\r\n return physical(aoi, n=n, K=0)\r\n\r\n\r\nfig, axes = plt.subplots(3, 2)\r\n\r\nfor i, n in enumerate([1.32, 1.5]):\r\n\r\n iam_physical = physical_no_extinction(aoi)\r\n iam_fedis = fedis(aoi, n=n)\r\n\r\n axes[0, i].plot(aoi, iam_fedis, label='fedis')\r\n axes[0, i].plot(aoi, iam_physical, label='physical (no extinction)')\r\n axes[0, i].set_ylabel('IAM (direct)')\r\n axes[0, i].set_xlabel('aoi [degrees]')\r\n axes[0, i].legend()\r\n\r\n iam_schlick = schlick_diffuse(surface_tilt)[0]\r\n iam_fedis = fedis_diffuse(surface_tilt, n=n)[0]\r\n marion_schlick = marion_integrate(schlick, surface_tilt=surface_tilt, region='sky', num=1000)\r\n marion_physical = marion_integrate(physical_no_extinction, surface_tilt=surface_tilt, region='sky', num=1000)\r\n\r\n axes[1, i].plot(surface_tilt, iam_fedis, label='fedis_diffuse')\r\n axes[1, i].plot(surface_tilt, iam_schlick, label='schlick_diffuse')\r\n axes[1, i].plot(surface_tilt, marion_schlick, label='marion + schlick')\r\n axes[1, i].plot(surface_tilt, marion_physical, label='marion + physical')\r\n axes[1, i].set_ylabel('IAM (sky)')\r\n axes[1, i].set_xlabel('surface_tilt [degrees]')\r\n axes[1, i].legend()\r\n\r\n iam_schlick = schlick_diffuse(surface_tilt)[1]\r\n iam_fedis = fedis_diffuse(surface_tilt, n=n)[1]\r\n marion_schlick = marion_integrate(schlick, surface_tilt=surface_tilt, region='ground', num=1000)\r\n marion_physical = marion_integrate(physical_no_extinction, surface_tilt=surface_tilt, region='ground', num=1000)\r\n\r\n axes[2, i].plot(surface_tilt, iam_fedis, label='fedis_diffuse')\r\n axes[2, i].plot(surface_tilt, iam_schlick, label='schlick_diffuse')\r\n axes[2, i].plot(surface_tilt, marion_schlick, label='marion + schlick')\r\n axes[2, i].plot(surface_tilt, marion_physical, label='marion + physical')\r\n axes[2, i].set_ylabel('IAM (ground)')\r\n axes[2, i].set_xlabel('surface_tilt [degrees]')\r\n axes[2, i].legend()\r\n\r\n axes[0, i].set_title(f'n={n}')\r\n```\r\n</details>"}, {"time": 1665777345.0, "comment": "Nicely done. Let's see what other comments appear here over the coming days...\r\n\r\nOn a related note, I think it would be great if the original author could submit two examples to pvlib-python: one example illustrating an important use case for ```n_ref``` ; and one to replicate the validation comparisons that are presented in the publication."}, {"time": 1666963408.0, "comment": "Perhaps we should invite a additional reviewer on this one?"}, {"time": 1666966191.0, "comment": "It turns out that ```term1``` in ```fedis_diffuse()``` is exactly the same as the weighting function w in ```fedis()```, just in simplified form. I suggest making the code the same in both functions and referring to this ratio/weight/multiplier as ```w0```. (Or the calculation could even be put in a separate helper function.)\r\n\r\nFor clarity, I would suggest using the long version because it shows what's actually going on, and putting the short version in a comment with reference to the paper. 
"}, {"time": 1666987875.0, "comment": "> For clarity, I would suggest using the long version because it shows what's actually going on, and putting the short version in a comment with reference to the paper.\r\n\r\nI have kept the equations as-is so they continue being directly tied to the reference, but the latest commit at least leaves a comment noting the equivalence for any interested readers. Hopefully that achieves the same goal?\r\n\r\n> Perhaps we should invite a additional reviewer on this one?\r\n\r\nI wouldn't mind input from additional reviewers here, but the three of us (@adriesse, @cwhanse, myself) reaching consensus would be good enough for me. "}, {"time": 1668707813.0, "comment": "Following offline discussion, we have decided to move forward here with only the two schlick functions. I'll wait a few days to merge this to leave opportunity to review for anyone who wants to. "}, {"time": 1669658348.0, "comment": "Thanks again for the thoughtful and productive discussion @adriesse and @cwhanse"}, {"time": 1669665254.0, "comment": "No problem. I learned a few things in the process!"}], "issues": {"1564": {"issue_title": "Add iam function based on Schlick 1994", "issue_body": "**Is your feature request related to a problem? Please describe.**\r\n\r\nEquation 15 in Schlick 1994 gives an efficient approximation equation for surface reflectance as a function of incidence angle that could be used as the basis for an *iam* function.\r\n\r\nEquation 4 in Xie 2022 gives an analytical integration of the Schlick equation over the portion of an isotropic sky that is visible from a tilted surface.\r\n\r\nEquation 6 in Xie 2022 gives an analytical integration of the Schlick equation over the portion of an isotropic ground that is visible from a tilted surface.\r\n\r\n\r\n**Describe the solution you'd like**\r\nA new function called ```schlick``` in the ```iam``` module. The equation would be:\r\n\r\n```\r\niam = 1 - (1 - cos(aoi)) ** 5\r\n```\r\nThis equation has no parameters for fitting to iam measurements or to represent different types of PV module glass.\r\n\r\nA second new function called ```schlick_diffuse``` in the ```iam``` module (similar to the function ```martin_ruiz_diffuse```) which implements Equations 4 and 6 from Xie 2022 (excluding the empirical weighting factor *w*, or in other words, with *w=1*).\r\n\r\n**References**\r\n\r\nSchlick, C. An inexpensive BRDF model for physically-based rendering. Computer graphics forum 13 (1994).\r\nXie, Y. et al. The 'Fresnel Equations' for Diffuse radiation on Inclined photovoltaic Surfaces (FEDIS). J.RSER (2022)\r\n", "issue_timeline": [{"time": 1664809889.0, "comment": "If I understand right, the new `schlick` equation quantifies the fraction of beam irradiance that enters the front material at the air/front interface. It would not account for extinction within the front material, which is the role of the exponential factor in the [physical iam model](https://pvpmc.sandia.gov/modeling-steps/1-weather-design-inputs/shading-soiling-and-reflection-losses/incident-angle-reflection-losses/physical-model-of-iam/). Is that the proper understanding @adriesse ?\r\n\r\nThe `schlick` approximation may have been the motivation for Dave King's fifth order polynomial (fitted to IAM measurements) but that's just speculation on my part."}, {"time": 1664820722.0, "comment": "Extinction in the glass is negligible so we can ignore that part. 
The point is that when the original Schlick reflectance function is changed to a normalized iam function, the refractive index (which is only used to determine the normal incidence reflectance) drops out. Thus to see refractive index reappear in the sky-integrated iam is, well, surprising. But if we implement the iam function here, then we can compare numerical and analytical integration and see what's up."}, {"time": 1664998147.0, "comment": "> Thus to see refractive index reappear in the sky-integrated iam is, well, surprising. \r\n\r\nI assume this is in reference to FEDIS (#1562) rather than `schlick`. If I understand correctly, this is because FEDIS is calculating transmittance relative to readings from a pyranometer with its own, presumably different, refractive index. Because of that, FEDIS is perhaps not exactly an IAM model, at least if IAM is defined strictly as $\\tau(\\theta) / \\tau(0)$ with $\\tau$ corresponding to the same device in the numerator and denominator. I suppose it's a sort of \"IAM mismatch\" correction?\r\n\r\nWhat confuses me is that I didn't think accounting for pyranometer reflection was necessary in the first place. Aren't field pyranometers calibrated such that their readings already account for reflection off the dome? \r\n\r\n(Also, github comments [now support mathjax](https://github.blog/2022-05-19-math-support-in-markdown/)!)"}, {"time": 1665004142.0, "comment": "> (Also, github comments [now support mathjax](https://github.blog/2022-05-19-math-support-in-markdown/)!) \r\n\r\nNice!"}, {"time": 1665670903.0, "comment": "Please note that I have expanded this feature request to include a function for the diffuse iam factors. These two functions could in fact be done in a separate PR to form a base layer for FEDIS.\r\n\r\nI actually wrote and ran them and compared to schlick integrated with marion and they converge well."}]}}}
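As context for the discussion above, here is a minimal sketch of the two Schlick functions that were ultimately kept (the direct-beam `schlick` and the analytical `schlick_diffuse`), cross-checked against numerical integration with `marion_integrate`. This assumes a pvlib version that includes the functions added by this PR; the tilt grid, the 1° lower bound (chosen to avoid the degenerate ground view at 0° tilt), and `num=1000` are illustrative choices modeled on the comparison scripts quoted in the thread, not prescribed values.

```python
import numpy as np
from pvlib.iam import schlick, schlick_diffuse, marion_integrate

# Direct-beam Schlick IAM, per the issue body: iam = 1 - (1 - cos(aoi))**5
aoi = np.array([0.0, 30.0, 60.0, 80.0])
print(schlick(aoi))

# Analytical sky/ground diffuse IAM (Xie 2022 Eqs. 4 and 6 with w = 1)
# compared against numerical integration of the direct-beam form.
# Tilt grid starts at 1 degree: an illustrative choice so the ground
# region is non-degenerate for the numerical integration.
surface_tilt = np.linspace(1, 90, num=10)
sky_analytic, ground_analytic = schlick_diffuse(surface_tilt)
sky_numeric = marion_integrate(schlick, surface_tilt=surface_tilt,
                               region='sky', num=1000)
ground_numeric = marion_integrate(schlick, surface_tilt=surface_tilt,
                                  region='ground', num=1000)

# The two approaches should agree closely ("converge well", per the thread).
print(np.max(np.abs(sky_analytic - sky_numeric)))
print(np.max(np.abs(ground_analytic - ground_numeric)))
```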